Unsupervised cross-domain transfer network for 3D/2D registration in surgical navigation
Xiyuan WANG, Zhancheng ZHANG, Shaokang XU, Baocheng ZHANG, Xiaoqing LUO, Fuyuan HU
Journal of Computer Applications    2024, 44 (9): 2911-2918.   DOI: 10.11772/j.issn.1001-9081.2023091332

3D/2D registration is a key technique for intraoperative guidance. Existing deep-learning-based registration methods extract image features with a network and regress the corresponding pose transformation parameters. Such methods rely on real samples with corresponding 3D labels for training, but expert-annotated medical data of this kind is scarce. An alternative is to train the network on Digitally Reconstructed Radiograph (DRR) images, but the resulting model struggles to maintain its accuracy on X-ray images because image features differ across the two domains. To address these problems, an Unsupervised Cross-Domain Transfer Network (UCDTN) based on self-attention was designed. Without relying on X-ray images and their 3D spatial labels as training samples, UCDTN transfers the correspondence between image features and spatial transformations learned in the source domain to the target domain, and uses shared features to reduce the feature disparity between domains, minimizing the negative impact of the domain gap. Experimental results show that the mean Target Registration Error (mTRE) of UCDTN's predictions is 2.66 mm, a 70.61% reduction compared with the model trained without cross-domain transfer, demonstrating the effectiveness of UCDTN on cross-domain registration tasks.
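The mTRE figure reported above is a standard registration metric. As a rough illustration (not the paper's code), it can be computed as the mean Euclidean distance between target landmarks mapped by the ground-truth rigid pose and by the predicted pose; the function and variable names below are illustrative assumptions:

```python
import numpy as np

def apply_pose(points, R, t):
    """Apply a rigid pose to landmarks.

    points: (N, 3) array of 3D landmark coordinates (e.g. in mm)
    R: (3, 3) rotation matrix; t: (3,) translation vector
    """
    return points @ R.T + t

def mtre(points, R_gt, t_gt, R_pred, t_pred):
    """mean Target Registration Error: average distance between
    landmarks transformed by the ground-truth pose and by the
    predicted (registered) pose."""
    p_gt = apply_pose(points, R_gt, t_gt)
    p_pred = apply_pose(points, R_pred, t_pred)
    return float(np.linalg.norm(p_gt - p_pred, axis=1).mean())
```

With identical poses the error is zero; a pure 3 mm translation offset yields an mTRE of exactly 3 mm regardless of the landmark set.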

2D/3D spine medical image real-time registration method based on pose encoder
Shaokang XU, Zhancheng ZHANG, Haonan YAO, Zhiwei ZOU, Baocheng ZHANG
Journal of Computer Applications    2023, 43 (2): 589-594.   DOI: 10.11772/j.issn.1001-9081.2021122147

2D/3D medical image registration is a key technology for 3D real-time navigation in orthopedic surgery. However, traditional optimization-based 2D/3D registration methods require many iterative computations and cannot meet surgeons' requirements for real-time registration during surgery. To solve this problem, a pose regression network based on an autoencoder was proposed. The network captures geometric pose information through latent-space decoding, quickly regressing the 3D pose of the preoperative spine corresponding to the intraoperative X-ray image, and generates the final registration image by reprojection. New loss functions were introduced to constrain the model with a coarse-to-fine combined registration strategy, ensuring the accuracy of the pose regression. On the CTSpine1K spine dataset, 100 CT scan sets were extracted for 10-fold cross-validation. Experimental results show that the registration image generated by the proposed model achieves a Mean Absolute Error (MAE) of 0.04 and a mean Target Registration Error (mTRE) of 1.16 mm against the X-ray image, with a single-frame time of 1.7 s. Compared with traditional optimization-based methods, the proposed model greatly shortens registration time; compared with learning-based methods, it maintains high registration accuracy while registering quickly. The proposed model can therefore meet the requirement of real-time, high-precision intraoperative registration.
