Contrastive Learning (CL) has been widely used for recommendation because of its ability to extract self-supervised signals contained in the data itself. A recent study shows that the success of CL in recommendation stems from the uniformity of the node distribution induced by the contrastive loss, namely the Information Noise Contrastive Estimation (InfoNCE) loss. Another study shows that the Bayesian Personalized Ranking (BPR) loss also benefits alignment and uniformity, which in turn contribute to higher recommendation performance. Since the CL loss yields stronger uniformity than the negative term of BPR, the necessity of this negative term within the CL framework is questionable. This study experimentally demonstrates that the negative term of BPR is indeed unnecessary in the CL framework for recommendation. Based on this observation, a joint optimization loss without negative sampling is proposed, which can be applied to classical CL-based methods and achieves comparable or higher performance. Furthermore, unlike prior studies that focus on improving uniformity, a novel Positive Augmentation Graph Contrastive Learning method (PAGCL) is presented, which perturbs representations with random positive samples to further strengthen alignment. Experimental results on several benchmark datasets show that the proposed method outperforms state-of-the-art (SOTA) methods such as Self-supervised Graph Learning (SGL) and Simple Graph Contrastive Learning (SimGCL) in terms of Recall and Normalized Discounted Cumulative Gain (NDCG). Its improvement over the base model, Light Graph Convolutional Network (LightGCN), reaches up to 17.6% in NDCG@20.
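For context, a minimal LaTeX sketch of the two losses under discussion, written in their commonly used forms (the notation is illustrative and may differ from the paper's own formulation):

\[
\mathcal{L}_{\mathrm{BPR}} = -\sum_{(u,i,j)} \ln \sigma\big(\hat{y}_{ui} - \hat{y}_{uj}\big),
\qquad
\mathcal{L}_{\mathrm{InfoNCE}} = -\sum_{i \in \mathcal{B}} \ln
\frac{\exp\big(s(z_i, z_i')/\tau\big)}{\sum_{j \in \mathcal{B}} \exp\big(s(z_i, z_j')/\tau\big)},
\]

where $\hat{y}_{ui}$ is the predicted score for user $u$ and observed item $i$, $j$ is a sampled negative item, $z_i$ and $z_i'$ are two augmented views of node $i$, $s(\cdot,\cdot)$ is a (cosine) similarity, $\tau$ is a temperature, and $\mathcal{B}$ is the training batch. The $\hat{y}_{uj}$ part of BPR is the negative term whose necessity is questioned here: the InfoNCE denominator already pushes representations toward a uniform distribution, so the extra negative sampling in BPR may be redundant within a CL framework.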