Sparse Matrix-Vector multiplication (SpMV) is an important operation in numerical linear algebra. Existing optimizations for SpMV suffer from issues such as incomplete consideration of preprocessing and communication time, and a lack of generality across storage structures. To address these issues, an adaptive optimization scheme for SpMV on heterogeneous platforms was proposed. In the proposed scheme, Pearson correlation coefficients were used to select highly correlated feature parameters, and two Gradient Boosting Decision Tree (GBDT)-based algorithms, eXtreme Gradient Boosting (XGBoost) and Light Gradient Boosting Machine (LightGBM), were employed to train prediction models that determine the optimal storage format for a given sparse matrix. Grid search was used to identify better hyperparameters for model training, with which both algorithms achieved more than 85% accuracy in selecting a suitable storage format. Furthermore, for sparse matrices stored in the HYBrid (HYB) format, the ELLPACK (ELL) and COOrdinate (COO) parts of these matrices were computed on the GPU and CPU respectively, establishing a CPU+GPU parallel hybrid computing mode. In addition, the appropriate hardware platform was selected for sparse matrices with small data sizes to improve computational speed. Experimental results demonstrate that the adaptive optimization achieves an average speedup of 1.4 compared with the Compressed Sparse Row (CSR) storage format in the cuSPARSE library, and average speedups of 2.1 and 2.6 compared with the HYB and ELL storage formats, respectively.
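The Pearson-based feature screening mentioned above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `select_features` and the correlation threshold are assumptions chosen for the example.

```python
import numpy as np

def select_features(X, y, threshold=0.5):
    """Return indices of feature columns whose absolute Pearson
    correlation with the target exceeds the threshold.
    (The threshold value 0.5 is illustrative, not from the paper.)"""
    keep = []
    for j in range(X.shape[1]):
        r = np.corrcoef(X[:, j], y)[0, 1]  # Pearson correlation coefficient
        if abs(r) > threshold:
            keep.append(j)
    return keep

# Example: column 0 is perfectly correlated with y, column 1 is not.
y = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
X = np.column_stack([2.0 * y, np.array([1.0, -1.0, 1.0, -1.0, 1.0])])
print(select_features(X, y))
```

Only the highly correlated feature parameters retained this way are fed to the GBDT models, which keeps the training data compact.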
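The CPU+GPU hybrid mode rests on the HYB decomposition: rows are padded into a regular ELL block up to `k` nonzeros, and overflow entries go into a COO list. A minimal NumPy sketch of the split and the combined SpMV is shown below; the function names and the fixed width `k` are illustrative, and the ELL part is evaluated here on the CPU as a stand-in for the GPU kernel.

```python
import numpy as np

def hyb_split(dense, k):
    """Split a dense matrix into an ELL part (up to k nonzeros per row,
    zero-padded) and a COO part holding the overflow entries."""
    n_rows, _ = dense.shape
    ell_idx = np.zeros((n_rows, k), dtype=int)   # padded column indices
    ell_val = np.zeros((n_rows, k))              # padded values
    coo = []                                     # (row, col, val) overflow triples
    for i in range(n_rows):
        for j, c in enumerate(np.nonzero(dense[i])[0]):
            if j < k:
                ell_idx[i, j] = c
                ell_val[i, j] = dense[i, c]
            else:
                coo.append((i, c, dense[i, c]))
    return ell_idx, ell_val, coo

def hyb_spmv(ell_idx, ell_val, coo, x):
    # ELL part: regular structure suited to the GPU (computed here on CPU).
    # Zero padding contributes 0 * x[0] and does not affect the result.
    y = (ell_val * x[ell_idx]).sum(axis=1)
    # COO part: irregular overflow entries, handled on the CPU.
    for i, c, v in coo:
        y[i] += v * x[c]
    return y

A = np.array([[1.0, 0.0, 2.0],
              [0.0, 3.0, 0.0],
              [4.0, 5.0, 6.0]])
x = np.array([1.0, 2.0, 3.0])
print(hyb_spmv(*hyb_split(A, 2), x))  # matches A @ x
```

Because the two parts write to disjoint terms of the same output vector, the ELL and COO products can proceed in parallel on the GPU and CPU and be summed afterwards, which is the essence of the hybrid computing mode.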