With the rapid growth of graph data, Graph Neural Networks (GNNs) face computational and storage challenges when processing large-scale graph-structured data. Traditional stand-alone training is no longer sufficient for increasingly large datasets and complex GNN models, and distributed training, with its parallel computing power and scalability, is an effective way to address these problems. However, existing evaluations of distributed GNN training focus mainly on performance metrics represented by model accuracy and efficiency metrics represented by training time, while paying less attention to data processing efficiency and computational resource utilization; moreover, algorithm efficiency is mostly evaluated on a single machine with one or several GPUs, and existing evaluation methods remain relatively simple in distributed environments. To address these shortcomings, an evaluation method for model training in distributed scenarios was proposed, covering three aspects: evaluation metrics, datasets, and models. Three representative GNN models were selected according to this method, and distributed training experiments were conducted on four large open graph datasets with different data characteristics to collect and analyze the resulting metrics. Experimental results show that model complexity, training time, computing node throughput, and computing Node Average Throughput Ratio (NATR) are all influenced by model architecture and data structure characteristics in distributed training; sampling and data copying account for a large share of training time, and the time a computing node spends waiting for other nodes cannot be ignored either; compared with stand-alone training, distributed training significantly reduces computing node throughput, so the resource utilization of distributed systems needs further optimization. The proposed evaluation method thus provides a reference for optimizing GNN model training performance in a distributed environment and establishes an experimental foundation for further model optimization and algorithm improvement.
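A minimal sketch of how the throughput-oriented metrics above could be computed from per-node training logs is given below. The NodeLog fields and the definition of NATR as the ratio of each node's throughput to the cross-node average are assumptions made for illustration; the paper's exact definitions may differ.

```python
# Sketch only (not the paper's code): per-node throughput and an assumed
# NATR-style metric computed from simple training logs.

from dataclasses import dataclass
from typing import List


@dataclass
class NodeLog:
    node_id: int
    samples_processed: int   # training samples handled by this node
    train_seconds: float     # wall-clock training time on this node


def throughput(log: NodeLog) -> float:
    """Samples processed per second on a single computing node."""
    return log.samples_processed / log.train_seconds


def natr(logs: List[NodeLog]) -> List[float]:
    """Assumed NATR: each node's throughput divided by the cross-node average."""
    tps = [throughput(log) for log in logs]
    avg = sum(tps) / len(tps)
    return [tp / avg for tp in tps]


if __name__ == "__main__":
    logs = [NodeLog(0, 1_200_000, 300.0), NodeLog(1, 1_050_000, 300.0)]
    print([f"{ratio:.3f}" for ratio in natr(logs)])
```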
To address the problems of emotion recognition from physiological signals, a bimodal emotion recognition method based on Graph Neural Network (GNN) and attention was proposed. Firstly, a GNN was used to classify ElectroEncephaloGram (EEG) signals. Secondly, an attention-based Bi-directional Long Short-Term Memory (Bi-LSTM) network was used to classify ElectroCardioGram (ECG) signals. Finally, the EEG and ECG classification results were fused by Dempster-Shafer evidence theory, improving the overall performance of the emotion recognition task. To verify the effectiveness of the proposed method, 20 subjects were invited to take part in an emotion elicitation experiment, and their EEG and ECG signals were collected. Experimental results show that the binary classification accuracies of the proposed method are 91.82% and 88.24% in the valence and arousal dimensions, respectively, which are 2.65% and 0.40% higher than those of the single-modal EEG method, and 19.79% and 24.90% higher than those of the single-modal ECG method. It can be seen that the proposed method effectively improves the accuracy of emotion recognition and can provide decision support for medical diagnosis and other fields.
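As a sketch of the fusion step described above, the following combines two classifier outputs with Dempster's rule of combination over a two-class frame. Treating the softmax probabilities of the EEG and ECG branches directly as basic probability assignments on the singletons is an assumption made here for illustration, not necessarily the authors' exact formulation.

```python
# Sketch only: Dempster-Shafer fusion of EEG and ECG classifier outputs
# over the two-class frame {pos, neg}.

from typing import Dict, FrozenSet

Mass = Dict[FrozenSet[str], float]


def dempster_combine(m1: Mass, m2: Mass) -> Mass:
    """Combine two mass functions with Dempster's rule, renormalising by 1 - K."""
    combined: Mass = {}
    conflict = 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2  # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("Total conflict: Dempster's rule is undefined.")
    return {focal: mass / (1.0 - conflict) for focal, mass in combined.items()}


if __name__ == "__main__":
    POS, NEG = frozenset({"pos"}), frozenset({"neg"})
    eeg = {POS: 0.80, NEG: 0.20}   # EEG branch (GNN) output, illustrative values
    ecg = {POS: 0.65, NEG: 0.35}   # ECG branch (attention Bi-LSTM) output
    for label, mass in dempster_combine(eeg, ecg).items():
        print(sorted(label), round(mass, 3))
```

With the illustrative values above, the fused mass on the positive class rises to about 0.881, reflecting how the rule reinforces agreeing evidence from the two modalities.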
IntraVascular UltraSound (IVUS) imaging provides information about coronary atherosclerotic plaque and allows the doctor to make a comprehensive and accurate evaluation of the diseased vessel. Existing ultrasound data acquisition devices for imaging systems suffer from insufficient data transfer speed, high cost, or inflexibility, so a high-speed data transfer and imaging method for intravascular ultrasound was presented. After being collected and processed, the ultrasound data were transferred to the computer through a USB 3.0 interface; logarithmic compression and digital coordinate conversion were then applied on the computer before imaging. Data transmission experiments show that the transfer speed stays around 2040 Mb/s. Finally, phantom imaging was conducted to demonstrate the performance of the system, showing a clear pipe wall and a smooth luminal border.
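A minimal sketch of the two computer-side processing steps named above, logarithmic compression followed by digital coordinate (scan) conversion, is shown below. The dynamic range, scan-line count, sample depth, and nearest-neighbour interpolation are illustrative assumptions rather than the authors' actual parameters.

```python
# Sketch only: log compression of the ultrasound envelope and a simple
# polar-to-Cartesian digital coordinate conversion before display.

import numpy as np


def log_compress(envelope: np.ndarray, dynamic_range_db: float = 45.0) -> np.ndarray:
    """Map envelope amplitudes to a dB scale and clip to the display range [0, 1]."""
    env = envelope / (envelope.max() + 1e-12)           # normalise to [0, 1]
    db = 20.0 * np.log10(env + 1e-12)                   # convert to decibels
    return np.clip((db + dynamic_range_db) / dynamic_range_db, 0.0, 1.0)


def scan_convert(polar: np.ndarray, out_size: int = 512) -> np.ndarray:
    """Nearest-neighbour coordinate conversion from (scan line, depth) to x-y pixels."""
    n_lines, n_samples = polar.shape
    ys, xs = np.mgrid[0:out_size, 0:out_size]
    cx = cy = (out_size - 1) / 2.0
    r = np.hypot(xs - cx, ys - cy)                      # radius in pixels
    theta = np.arctan2(ys - cy, xs - cx) % (2 * np.pi)  # angle in [0, 2*pi)
    line_idx = np.minimum((theta / (2 * np.pi) * n_lines).astype(int), n_lines - 1)
    depth_idx = (r / r.max() * (n_samples - 1)).astype(int)
    image = polar[line_idx, depth_idx]
    image[r > out_size / 2.0] = 0.0                     # blank pixels outside the sweep
    return image


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    raw = np.abs(rng.normal(size=(256, 1024)))          # fake envelope: 256 lines x 1024 samples
    img = scan_convert(log_compress(raw))
    print(img.shape, img.min(), img.max())
```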