Journal of Computer Applications
GONG Zhenhui1, SHI Xiaoyu1, LU Yun1, LIU Yangcheng2, SHANG Mingsheng1
Abstract: Large language models (LLMs) bring stronger semantic understanding and personalization to recommender systems, yet in practice they face serious user-fairness challenges. Existing fairness methods for LLM-based recommendation mostly rely on explicit sensitive attributes for constraints or reweighting, which makes them hard to apply when such attributes are unavailable because of privacy protection or inaccessibility. To address this problem, a sensitive-attribute-independent fair recommendation framework, Fair-MSA, is proposed. The framework improves recommendation fairness without accessing sensitive attributes by combining dynamic prompt optimization with adversarial reweighting. Specifically, the first stage uses a dynamic prompt optimization strategy that identifies high-loss samples during fine-tuning and reuses them as in-context examples to mitigate stereotypical patterns; the second stage introduces an adversarial reweighting mechanism that dynamically focuses on underperforming regions of the data distribution, thereby amplifying the influence of underrepresented samples on model updates. Experiments on three public datasets, including ML-1M, show that Fair-MSA reduces the gender-group gaps in NDCG@10 and HR@10 by 74.47% and 61.76% on average, respectively, while keeping recommendation accuracy comparable to that of the BIGRec baseline. Even against FACTER, a fair recommendation method with full access to sensitive attributes, Fair-MSA remains competitive on most fairness metrics. These results indicate that Fair-MSA offers an effective and generalizable solution to recommendation fairness in real-world, privacy-preserving scenarios.
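To make the abstract's two-stage description concrete, below is a minimal Python sketch under stated assumptions: the function names (select_hard_examples, adversarial_weights, group_gap), the exponential multiplicative-weights rule, and all toy numbers are hypothetical illustrations, not the paper's actual formulation or code.

# Minimal sketch (hypothetical, not the paper's code) of the two-stage idea
# and the group-gap metric. All names below are invented for illustration.
import math

def select_hard_examples(samples, losses, k=4):
    # Stage 1 (dynamic prompt optimization): take the k highest-loss
    # samples from the latest fine-tuning step and reuse them as
    # in-context examples when building the next round of prompts.
    ranked = sorted(zip(losses, samples), key=lambda p: p[0], reverse=True)
    return [s for _, s in ranked[:k]]

def adversarial_weights(losses, eta=1.0):
    # Stage 2 (adversarial reweighting): one common instantiation uses
    # multiplicative weights that grow exponentially with per-sample
    # loss, so underperforming regions get more influence on updates.
    raw = [math.exp(eta * loss) for loss in losses]
    total = sum(raw)
    return [w / total for w in raw]

def group_gap(metric_by_user, group_by_user):
    # Fairness gap: difference in mean per-user metric (e.g. NDCG@10
    # or HR@10) between the best- and worst-served user groups.
    groups = {}
    for user, value in metric_by_user.items():
        groups.setdefault(group_by_user[user], []).append(value)
    means = [sum(v) / len(v) for v in groups.values()]
    return max(means) - min(means)

# Toy usage with made-up per-sample losses and per-user NDCG@10 values.
samples = [f"user_{i}" for i in range(6)]
losses = [0.2, 1.5, 0.4, 2.1, 0.3, 0.9]
print(select_hard_examples(samples, losses, k=2))  # ['user_3', 'user_1']
print(adversarial_weights(losses))                 # normalized, sums to 1
ndcg = {"u1": 0.30, "u2": 0.18, "u3": 0.27, "u4": 0.21}
gender = {"u1": "F", "u2": "M", "u3": "F", "u4": "M"}
print(group_gap(ndcg, gender))                     # ~0.09

In a full fine-tuning loop, the returned weights would multiply each sample's loss before back-propagation, so samples from underperforming regions contribute proportionally more to the gradient.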
Key words: Recommender Systems, Large Language Models, Group Fairness, Sensitive Attribute Absence, Privacy Preservation
CLC Number: TP391.4
GONG Zhenhui, SHI Xiaoyu, LU Yun, LIU Yangcheng, SHANG Mingsheng. Fair recommendation framework based on large language models for scenarios with missing sensitive attributes [J]. Journal of Computer Applications, DOI: 10.11772/j.issn.1001-9081.2025080969.
URL: https://www.joca.cn/EN/10.11772/j.issn.1001-9081.2025080969