Task-free online continual learning is a task-agnostic, autonomous machine learning paradigm in which the model updates dynamically by continuously adapting to new data while suppressing forgetting. Existing online continual learning methods typically prioritize model accuracy at the expense of computational efficiency; in high-speed data stream scenarios, the resulting lag in training speed prevents the model from responding promptly to changes in the stream. To address this challenge, an efficiency-performance co-optimized online sparse continual learning framework was proposed, which overcomes the limitations of conventional approaches by constructing a bidirectional sparse adaptive regulation mechanism. First, a dynamic sparse topology optimization framework based on parameter importance measurement was designed, achieving unstructured parameter pruning through parameter sensitivity analysis. Second, a memory-efficiency dual-objective optimization model was established, in which the computational budget is allocated dynamically according to online class distribution estimation, realizing optimal configuration of computational resources. Finally, a gradient decoupling optimization strategy was developed that employs gradient masking to enable bidirectional optimization of old and new knowledge, thereby accelerating model updates while preserving the integrity of the knowledge topology. Benchmark results show that the proposed framework has significant advantages. Compared with the Experience Replay (ER) baseline with a memory buffer of 100 samples, the framework achieves average improvements of 4.86% in Average Online Accuracy (AOA) and 6.25% in Test Accuracy (TA) on the CIFAR-10 dataset; 13.77% in AOA and 3.08% in TA on the CIFAR-100 dataset; and 17.83% in AOA and 25.00% in TA on the Mini-ImageNet dataset.
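As an illustrative sketch only (the abstract does not specify the framework's actual formulas, so the sensitivity score |w·g| and the masking rule below are assumptions), the sensitivity-based unstructured pruning and the gradient-masking update might take roughly this form:

```python
import numpy as np

def importance_prune_mask(weights, grads, sparsity):
    """Hypothetical unstructured pruning: score each parameter with a
    first-order sensitivity proxy |w * grad| and zero out the fraction
    of lowest-scoring parameters given by `sparsity`."""
    scores = np.abs(weights * grads)
    k = int(sparsity * weights.size)
    if k == 0:
        return np.ones_like(weights, dtype=bool)
    # k-th smallest score acts as the pruning threshold
    threshold = np.partition(scores.ravel(), k - 1)[k - 1]
    return scores > threshold

def masked_gradient_step(weights, grad_new, grad_replay, mask, lr=0.1):
    """Hypothetical gradient decoupling: gradients from new data touch
    only the active (unpruned) parameters, while replay gradients update
    all parameters to protect previously learned knowledge."""
    return weights - lr * (grad_new * mask + grad_replay)
```

The decoupling here is one plausible reading of "bidirectional optimization of old and new knowledge": new-data updates are confined to the sparse active subnetwork, while replay-driven updates remain unrestricted.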
Visualization analysis shows that the proposed framework successfully captures the underlying concept drift patterns in data streams while maintaining real-time responsiveness. These results indicate that the proposed framework breaks through the traditional trade-off between computational efficiency and model performance, establishing a new paradigm for online continual learning systems in open environments.