Visualization reconstruction technology aims to transform graphics into data forms that machines can parse and manipulate, providing the basic information needed for large-scale analysis, reuse, and retrieval of visualizations. However, existing reconstruction methods focus primarily on recovering visual information while ignoring the key role of interaction information in data analysis and understanding. To address this problem, a visualization interaction information reconstruction method oriented toward machine understanding was proposed. Firstly, interactions were formally defined to divide visual elements into different visual groups, and automated tools were used to extract the interaction information of visual graphics. Secondly, the associations between interactions and visual elements were decoupled, and interactions were split into independent variables to build an interaction entity library. Thirdly, a standardized declarative language was formulated to support querying of interaction information. Finally, migration rules based on visual element matching and adaptive adjustment mechanisms were designed to enable interactions to be migrated and adapted across different visualizations. The experimental cases focused on downstream machine-understanding tasks such as visual question answering, querying, and migration. The results show that adding interaction information enables machines to understand the semantics of visual interactions, thereby expanding the application scope of these tasks. These experimental results verify that the proposed method achieves structural integrity of reconstructed visual graphics by integrating dynamic interaction information.
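
As a purely illustrative aside, the sketch below shows what a decoupled interaction entity library and a declarative query over it might look like. The TypeScript types, field names, and query shape are assumptions made for illustration and do not reproduce the schema or declarative language proposed in the paper.

```typescript
// Hypothetical sketch: an interaction entity library and a small declarative
// query over it. Field names and the query shape are illustrative only and
// are not the schema or language defined in the paper.

// One decoupled interaction entity: the trigger, the affected visual group,
// and the response are stored as independent fields.
interface InteractionEntity {
  id: string;
  trigger: "click" | "hover" | "brush" | "zoom";
  targetGroup: string;        // visual group the interaction is bound to
  response: "highlight" | "filter" | "tooltip" | "pan";
}

// A declarative query: constrain any subset of the entity's fields.
type InteractionQuery = Partial<Omit<InteractionEntity, "id">>;

// Return all entities in the library that match every specified constraint.
function queryInteractions(
  library: InteractionEntity[],
  query: InteractionQuery
): InteractionEntity[] {
  return library.filter((e) =>
    (Object.entries(query) as [keyof InteractionQuery, string][]).every(
      ([key, value]) => e[key] === value
    )
  );
}

// Example library reconstructed from a chart, and an example query:
// "which interactions highlight something on hover?"
const library: InteractionEntity[] = [
  { id: "i1", trigger: "hover", targetGroup: "bars", response: "tooltip" },
  { id: "i2", trigger: "click", targetGroup: "legend", response: "filter" },
  { id: "i3", trigger: "hover", targetGroup: "points", response: "highlight" },
];

console.log(queryInteractions(library, { trigger: "hover", response: "highlight" }));
// -> [{ id: "i3", trigger: "hover", targetGroup: "points", response: "highlight" }]
```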