Journal of Computer Applications
FAN Gengxin1, HAN Huiyan2, KUANG Liqun1, JIN Ziyang1, ZHAO Huafeng1
Abstract: In robotic environmental perception tasks, single-viewpoint point clouds suffer severe geometric information loss because of limited sensor viewpoints. Point cloud reconstruction methods based on CAD model replacement retrieve similar models and deform them, effectively avoiding the structural-instability risks of reconstructing directly from point clouds. The Unsupervised 3D Shape Retrieval and Deformation (U-RED) algorithm achieves topologically consistent CAD model replacement while keeping the reconstruction results editable. However, for objects with complex topology it still faces three challenges: insufficient representation of rotation and translation invariance in point cloud features; difficulty distinguishing neighboring parts because homologous components are geometrically similar; and parameter-update failures caused by scattered attention weights and vanishing or exploding gradients. To address these challenges, this paper proposes the Vector Neuron Enhanced Unsupervised Retrieval and Deformation Framework with Feature Affine Residual (VU-RED-F) algorithm, which extends U-RED with part-level feature enhancement. A Vector Neuron Encoder (VNE) improves the robustness of the feature extraction module in representing the rotation and translation invariance of point clouds. A learnable affine-transformation residual reconstructs the feature-mapping process, adaptively adjusting feature distributions to strengthen the network's ability to discriminate local geometric structures between parts. By integrating soft-threshold gating with residual correction, the algorithm constrains the sparsity of the attention distribution while improving gradient-propagation stability, thereby boosting convergence efficiency and reducing the loss during retrieval and deformation.
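The rotation property that motivates the Vector Neuron Encoder can be illustrated with a minimal NumPy sketch. The key idea of vector-neuron layers is that each feature channel is itself a 3D vector, and linear layers mix channels without touching the 3D axis, so rotations commute with the layer. The shapes and names below are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

def random_rotation(rng):
    """Random proper 3x3 rotation matrix via QR decomposition."""
    q, r = np.linalg.qr(rng.standard_normal((3, 3)))
    q *= np.sign(np.diag(r))      # make the decomposition unique
    if np.linalg.det(q) < 0:      # ensure det = +1 (a proper rotation)
        q[:, 0] *= -1
    return q

def vn_linear(x, w):
    """Vector-neuron linear layer: x is (C, 3) vector-valued features,
    w is (C_out, C); channels are mixed, 3D coordinates are untouched."""
    return w @ x

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 3))   # 8 vector-valued feature channels
w = rng.standard_normal((4, 8))   # channel-mixing weights
r = random_rotation(rng)

# Equivariance: rotating the input and then applying the layer equals
# applying the layer and then rotating the output.
assert np.allclose(vn_linear(x @ r.T, w), vn_linear(x, w) @ r.T)
```

Invariant descriptors (what the retrieval stage ultimately needs) are then obtained from such equivariant features, e.g. via channel-wise inner products, since dot products between co-rotating vectors are unchanged by the rotation.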
Experimental results on the synthetic PartNet and ComplementMe datasets, as well as the real Scan2CAD dataset, show that the chamfer distance loss of the VU-RED-F algorithm decreases by 17.1%, 16.5%, and 16.4% respectively compared to U-RED, improving the fidelity of local geometric details in CAD models.
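The reported metric is chamfer distance between the reconstructed CAD model and the target shape. A minimal NumPy sketch of the symmetric chamfer distance follows; the exact variant used by U-RED (squared vs. unsquared terms, weighting) is not specified here, so this is an assumption:

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric chamfer distance between point sets p (N, 3) and q (M, 3):
    mean nearest-neighbour squared distance, averaged in both directions."""
    d2 = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)  # (N, M)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

rng = np.random.default_rng(1)
cloud = rng.standard_normal((256, 3))
assert chamfer_distance(cloud, cloud) == 0.0  # identical clouds match exactly

# A "17.1% decrease" vs. U-RED means the new loss is 0.829 of the baseline
# (cd_ured = 1.0 is a hypothetical baseline value for illustration).
cd_ured = 1.0
cd_vured = cd_ured * (1 - 0.171)
assert np.isclose((cd_ured - cd_vured) / cd_ured, 0.171)
```

This brute-force pairwise formulation is O(NM) in memory; practical implementations typically use a KD-tree or GPU nearest-neighbour search for large clouds.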
Key words: point cloud reconstruction, model retrieval and deformation, vector neurons, feature affine residuals, residual networks
CLC Number: TP391.41
FAN Gengxin, HAN Huiyan, KUANG Liqun, JIN Ziyang, ZHAO Huafeng. VU-RED-F: improved CAD model replacement for single-view point clouds based on U-RED [J]. Journal of Computer Applications, DOI: 10.11772/j.issn.1001-9081.2025050575.
URL: https://www.joca.cn/EN/10.11772/j.issn.1001-9081.2025050575