Journal of Computer Applications
DONG Yukun1,2, CHENG Long1,2
Abstract: 3D human body reconstruction technology has developed rapidly in recent years. However, when body parts are occluded, the performance of single-view human body reconstruction methods declines significantly; multi-view methods offer higher accuracy, but they face challenges in effectively combining data from different views. To address this issue, a multi-view fusion strategy based on Virtual Marker (VM) was introduced. First, virtual markers were extracted from the images, and the characteristics of VMs were utilized to guarantee the shape accuracy of the human body. Second, point cloud registration was used to align the markers from different views; this process was guided by confidence, ensuring that clearer views had a greater influence on the final result. Finally, weighted interpolation was utilized to complete the reconstruction from the markers to the three-dimensional human body. The proposed method reduces computational cost, avoids common problems such as feature matching and viewpoint alignment, and handles occlusion better than single-view methods. On the public dataset Human3.6M, the Mean Per-Joint Position Error (MPJPE) was reduced by 22% and 12% compared with the POTTER (Pooling Attention Transformer for Efficient Human Mesh Recovery) and FeatER (Feature Map-Based TransformER) methods, respectively. When applied to body size measurement on the collected dataset, the average error of all dimensions remained below 0.85 cm.
Key words: human body reconstruction, Virtual Marker (VM), multi-view fusion, point cloud alignment, human body measurement
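The fusion pipeline summarized in the abstract (confidence-guided alignment of per-view virtual markers, followed by weighted interpolation from markers to mesh vertices) can be illustrated with a minimal Python sketch. This is not the authors' implementation: a Kabsch rigid alignment stands in for the paper's point-cloud registration, and the function names (rigid_align, fuse_views, markers_to_mesh), the interpolation matrix W, the marker count, and all array shapes are illustrative assumptions.

# Minimal sketch (not the paper's code): confidence-weighted fusion of
# virtual markers from several views, then a fixed linear interpolation
# from markers to mesh vertices. Shapes and the matrix W are assumptions.
import numpy as np

def rigid_align(src, dst):
    """Rigidly align src (K,3) onto dst (K,3) with the Kabsch algorithm."""
    src_c, dst_c = src.mean(0), dst.mean(0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return src @ R.T + t

def fuse_views(markers, conf):
    """
    markers: (V, K, 3) virtual-marker estimates from V views
    conf:    (V, K)    per-view, per-marker confidences
    Each view is aligned to the most confident view, then markers are
    averaged with confidence weights so clearer views dominate the result.
    """
    ref = int(np.argmax(conf.sum(axis=1)))        # most confident view as reference
    aligned = np.stack([rigid_align(m, markers[ref]) for m in markers])
    w = conf / conf.sum(axis=0, keepdims=True)    # normalize weights per marker
    return (w[..., None] * aligned).sum(axis=0)   # fused (K, 3) marker set

def markers_to_mesh(fused, W):
    """Recover mesh vertices by weighted interpolation: (N,3) = W (N,K) @ fused (K,3)."""
    return W @ fused

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    V, K, N = 4, 64, 6890                    # views, markers, SMPL-sized vertex count
    markers = rng.normal(size=(V, K, 3))     # placeholder per-view marker estimates
    conf = rng.uniform(0.1, 1.0, size=(V, K))
    W = rng.uniform(size=(N, K)); W /= W.sum(1, keepdims=True)
    mesh = markers_to_mesh(fuse_views(markers, conf), W)
    print(mesh.shape)                        # (6890, 3)

Normalizing the confidences per marker before averaging mirrors the confidence-based weighting described in the abstract: a view in which a marker is clearly visible contributes more to the fused estimate than a view in which it is occluded.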
CLC Number: TP391.4
DONG Yukun, CHENG Long. Multi-view fusion human body reconstruction method based on virtual markers [J]. Journal of Computer Applications, DOI: 10.11772/j.issn.1001-9081.2025101211.
URL: https://www.joca.cn/EN/10.11772/j.issn.1001-9081.2025101211