Journal of Computer Applications ›› 2025, Vol. 45 ›› Issue (9): 2727-2736. DOI: 10.11772/j.issn.1001-9081.2024091277
• Artificial intelligence •
Chuang WANG, Lu YU, Jianwei CHEN, Cheng PAN, Wenbo DU
Received: 2024-09-09
Revised: 2025-02-25
Accepted: 2025-03-03
Online: 2025-03-26
Published: 2025-09-10
Contact: Lu YU
About author: WANG Chuang, born in 1995, M. S. candidate. His research interests include transfer learning and pattern recognition.
Chuang WANG, Lu YU, Jianwei CHEN, Cheng PAN, Wenbo DU. Review of open set domain adaptation[J]. Journal of Computer Applications, 2025, 45(9): 2727-2736.
URL: https://www.joca.cn/EN/10.11772/j.issn.1001-9081.2024091277
| Category | Method | Basic principle | Advantages | Limitations |
|---|---|---|---|---|
| Data-augmentation-oriented methods | Simple data augmentation | Searches for the latent relationship between source known-class and target unknown-class data, combines components of different source known classes in a prescribed way to synthesize data representing the unknown class, and uses the synthetic data in subsequent training | Relatively simple to implement and fairly interpretable; usually introduces no extra network structure and adds no model complexity or compute cost | Requires prior knowledge of the features of source known classes and target unknown classes; without it, the latent relationship is hard to find and the synthesized data cannot truly represent the unknown class, causing negative transfer |
| | Neural-network data augmentation | Usually introduces an extra network such as a GAN and, through purpose-built strategies, guides the model to automatically generate data representing the unknown class for subsequent training | The network learns the feature relationship between source known classes and target unknown classes automatically, so little prior knowledge of either is needed | The extra network increases model complexity and compute cost; the generated unknown-class data is highly random and may fail to represent the unknown class, causing negative transfer |
| Feature-extraction-oriented methods | Based on domain distribution discrepancy | Usually first pseudo-labels the target data through model training, then reduces the target generalization error by shrinking the distribution discrepancy between source and target known-class data while enlarging the discrepancy with target unknown-class data | Intuitive and fairly interpretable, with reasonable compatibility across datasets | Strongly dependent on pseudo-label quality: poor pseudo-labels easily cause negative transfer; performs poorly under large domain shift, where unknown classes and shifted samples are easily confused |
| | Based on adversarial learning | Identifies unknown-class data via a fixed threshold or other strategies while using adversarial training to guide the feature extractor to learn features shared by source data and target known-class data, thereby aligning the known classes (a minimal sketch follows Tab. 1) | Effectively reduces the distribution discrepancy between source and target known classes; adversarial training yields more robust feature representations and better generalization to the target domain | High model complexity and compute cost; most adversarial methods use a fixed threshold and are sensitive to its choice, and datasets with different degrees of openness usually need different thresholds, selected through repeated experiments |
| | Based on semantic analysis | Usually applies various clustering algorithms to extract the semantic information carried by the different classes of the source and target domains and uses it to guide model training | Jointly considers the semantic information of source and target data to further guide feature selection, which can improve model performance considerably | Too many parameters, heavy computation, and low efficiency; also data-hungry and sensitive to factors such as illumination and noise |
| Classifier-oriented methods | Based on adversarial learning | Usually first treats all target data as known classes and alternately trains the feature extractor and classifiers with the MCD method to sharpen decision boundaries, then applies a separately designed unknown-class detection strategy to the samples already classified as known (sketched after Tab. 3) | Helps the model further refine the classifier's decision boundaries and reduces the number of boundary samples in the target domain, improving robustness | Model complexity and compute cost rise sharply; moreover, target unknown-class data keeps the classifiers from achieving full alignment |
| | Based on one-vs-all networks | Replaces the multi-class classifier with multiple OVA networks; after training, target known-class data is assigned to its own class, while target unknown-class data is assigned to several classes or to none (sketched after Tab. 4) | Unlike a multi-class classifier, an OVA classifier can recognize unknown classes by construction, which suits the OSDA setting well | The multiple OVA classifiers greatly increase model complexity; performance is hard to guarantee on datasets with large domain shift |
Tab. 1 Basic principles, advantages, and limitations of OSDA methods
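As a concrete illustration of the "based on adversarial learning" row under the feature-extraction-oriented methods in Tab. 1, below is a minimal sketch of the OSBP-style boundary loss (Saito et al., ref [9]): a (K+1)-way classifier reserves its last output for the unknown class, and a gradient-reversal layer lets the feature extractor push each target sample's unknown probability away from a fixed boundary t = 0.5 while the classifier holds it there. Module names such as `feat_net` and `clf` are illustrative placeholders, not the original implementation.

```python
# Minimal sketch of the OSBP-style boundary loss (ref [9]); all module and
# variable names are illustrative placeholders, not the original code.
import torch
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign on backward."""

    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output


def osbp_target_loss(logits_t: torch.Tensor, t: float = 0.5) -> torch.Tensor:
    """Binary cross-entropy pulling p(unknown) toward the boundary t.

    The classifier minimizes this loss, holding target samples at the
    boundary; because the target features pass through GradReverse, the
    feature extractor effectively maximizes it, pushing each target
    sample's p(unknown) away from t -- to the known or the unknown side.
    """
    p_unknown = F.softmax(logits_t, dim=1)[:, -1]  # last index = "unknown"
    return F.binary_cross_entropy(p_unknown, torch.full_like(p_unknown, t))


# One training step with hypothetical feat_net and (K+1)-way clf:
#   loss_cls = F.cross_entropy(clf(feat_net(x_s)), y_s)            # source only
#   loss_adv = osbp_target_loss(clf(GradReverse.apply(feat_net(x_t))))
#   (loss_cls + loss_adv).backward()
```

The fixed boundary t is exactly the parameter the limitations column warns about: datasets with different degrees of openness typically need different values.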
| Method | aeroplane | bicycle | bus | car | horse | knife | UNK | OS* | HOS |
|---|---|---|---|---|---|---|---|---|---|
| DANN | 93.7 | 80.4 | 89.7 | 71.0 | 92.5 | 65.2 | 0.0 | 82.1 | 0.0 |
| AMS | 73.1 | 55.9 | 67.1 | 60.8 | 76.5 | 3.2 | 64.4 | 56.1 | 59.9 |
| DAMC | 48.2 | 38.4 | 15.4 | 28.6 | 46.6 | 9.9 | 72.1 | 31.2 | 43.5 |
| DCC | 75.8 | 57.7 | 83.9 | 68.7 | 76.3 | 83.5 | 32.0 | 74.3 | 44.8 |
| Ref. [ ] | 86.9 | 58.6 | 77.2 | 63.3 | 86.0 | 8.8 | 52.9 | 63.5 | 57.7 |
| Ref. [ ] | 56.0 | 56.1 | 69.2 | 57.8 | 66.6 | 28.8 | 73.9 | 55.7 | 63.6 |
Tab. 2 Results of various OSDA methods on VisDA dataset
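For reference, the three metrics in Tab. 2-4 follow the usual OSDA conventions: OS* is the mean per-class accuracy over the known classes, UNK is the accuracy on the unknown class, and HOS is their harmonic mean. A short sketch under those definitions:

```python
# Sketch of the OS*, UNK, and HOS metrics under their usual OSDA definitions.
def os_star(known_class_accuracies: list[float]) -> float:
    """Mean per-class accuracy over the known classes."""
    return sum(known_class_accuracies) / len(known_class_accuracies)


def hos(os_star_val: float, unk: float) -> float:
    """Harmonic mean of OS* and UNK; zero if either term is zero."""
    if os_star_val == 0 or unk == 0:
        return 0.0
    return 2 * os_star_val * unk / (os_star_val + unk)


# DANN row of Tab. 2: strong known-class accuracy, but UNK = 0, so HOS = 0.
dann_known = [93.7, 80.4, 89.7, 71.0, 92.5, 65.2]
print(round(os_star(dann_known), 1))  # 82.1
print(hos(82.1, 0.0))                 # 0.0
```

This is why the closed-set DANN scores 0.0 HOS despite the best OS* in Tab. 2: HOS rewards only methods that balance known-class accuracy with unknown-class rejection.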
| Method | End-to-end training | A-D OS* | A-D UNK | A-D HOS | A-W OS* | A-W UNK | A-W HOS | D-A OS* | D-A UNK | D-A HOS |
|---|---|---|---|---|---|---|---|---|---|---|
| DANN | √ | 94.0 | — | — | 93.5 | — | — | 84.6 | — | — |
| AMS | √ | 89.3 | 71.8 | 79.6 | 87.3 | 67.3 | 76.0 | 74.9 | 65.4 | 69.8 |
| DAMC | √ | 19.1 | 84.0 | 31.1 | 25.2 | 88.9 | 39.3 | 21.9 | 92.3 | 35.4 |
| DCC | √ | 91.3 | 41.3 | 56.9 | 92.3 | 43.6 | 59.2 | 76.2 | 70.0 | 73.0 |
| Ref. [ ] | × | 89.1 | 52.6 | 66.2 | 85.7 | 43.8 | 57.9 | 76.3 | 52.8 | 62.4 |
| Ref. [ ] | √ | 88.4 | 76.2 | 81.8 | 81.6 | 82.0 | 81.8 | 58.2 | 92.6 | 71.5 |

| Method | D-W OS* | D-W UNK | D-W HOS | W-A OS* | W-A UNK | W-A HOS | W-D OS* | W-D UNK | W-D HOS | Avg OS* | Avg UNK | Avg HOS |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| DANN | 100.0 | — | — | 78.1 | — | — | 100.0 | — | — | 91.7 | — | — |
| AMS | 99.3 | 78.9 | 88.0 | 76.8 | 70.2 | 73.3 | 100.0 | 82.8 | 90.6 | 88.0 | 72.8 | 79.6 |
| DAMC | 65.0 | 94.1 | 76.9 | 25.1 | 88.6 | 39.1 | 75.3 | 91.9 | 82.8 | 38.6 | 90.0 | 50.8 |
| DCC | 100.0 | 66.0 | 79.5 | 72.5 | 60.8 | 66.2 | 100.0 | 66.0 | 79.5 | 88.7 | 59.5 | 70.1 |
| Ref. [ ] | 100.0 | 34.8 | 51.7 | 77.0 | 49.1 | 60.0 | 100.0 | 46.2 | 63.2 | 88.0 | 46.6 | 60.2 |
| Ref. [ ] | 99.2 | 93.4 | 96.2 | 60.6 | 92.5 | 73.2 | 100.0 | 93.0 | 96.4 | 81.3 | 88.3 | 83.5 |
Tab. 3 Results of various OSDA methods on Office-31 dataset
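The classifier-oriented adversarial row of Tab. 1 builds on MCD (ref [48]). A minimal sketch of its core quantity, the discrepancy between two classifier heads, follows; `feat_net`, `clf1`, and `clf2` are hypothetical modules, and the unknown-class detection stage that OSDA methods add on top is omitted.

```python
# Sketch of the MCD classifier discrepancy (ref [48]) underlying the
# classifier-oriented adversarial OSDA methods in Tab. 1.
import torch
import torch.nn.functional as F


def discrepancy(logits1: torch.Tensor, logits2: torch.Tensor) -> torch.Tensor:
    """Mean L1 distance between the softmax outputs of two classifier heads."""
    return (F.softmax(logits1, dim=1) - F.softmax(logits2, dim=1)).abs().mean()


# Alternating MCD steps (hypothetical feat_net, clf1, clf2):
#   1. train feat_net + clf1 + clf2 on labelled source data (cross-entropy);
#   2. freeze feat_net; update clf1/clf2 to MAXIMIZE discrepancy on target data;
#   3. freeze clf1/clf2; update feat_net to MINIMIZE the same discrepancy.
# Target samples on which the heads still disagree lie near the decision
# boundary, which is where the extra unknown-class detection is applied.
```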
| Method | End-to-end training | Rw-Pr OS* | Rw-Pr UNK | Rw-Pr HOS | Rw-Cl OS* | Rw-Cl UNK | Rw-Cl HOS | Pr-Rw OS* | Pr-Rw UNK | Pr-Rw HOS |
|---|---|---|---|---|---|---|---|---|---|---|
| OSBP | √ | 66.45 | 54.89 | 60.12 | 41.89 | 58.49 | 48.82 | 66.19 | 60.80 | 63.38 |
| ROS | × | 66.60 | 39.52 | 49.61 | 43.53 | 44.10 | 43.81 | 63.89 | 47.23 | 54.43 |
| DANCE | √ | 66.32 | 57.18 | 61.41 | 42.40 | 67.63 | 52.12 | 64.07 | 64.43 | 64.25 |
| OMEGA | √ | 59.61 | 77.89 | 67.53 | 42.23 | 69.58 | 52.56 | 62.75 | 73.80 | 67.83 |
| OVA | √ | 67.34 | 60.23 | 63.58 | 35.66 | 77.59 | 48.87 | 63.51 | 74.00 | 68.36 |

| Method | Pr-Cl OS* | Pr-Cl UNK | Pr-Cl HOS | Cl-Rw OS* | Cl-Rw UNK | Cl-Rw HOS | Cl-Pr OS* | Cl-Pr UNK | Cl-Pr HOS | Avg OS* | Avg UNK | Avg HOS |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| OSBP | 38.03 | 59.67 | 46.45 | 59.47 | 53.73 | 56.45 | 55.56 | 60.48 | 57.91 | 54.60 | 58.01 | 55.52 |
| ROS | 41.53 | 45.05 | 43.18 | 58.57 | 34.42 | 43.49 | 51.85 | 26.42 | 35.13 | 54.33 | 39.46 | 44.94 |
| DANCE | 46.70 | 66.27 | 54.79 | 57.09 | 62.14 | 59.51 | 51.07 | 68.50 | 58.51 | 54.61 | 64.36 | 58.43 |
| OMEGA | 45.81 | 69.81 | 55.32 | 57.35 | 68.83 | 62.57 | 58.49 | 64.04 | 61.14 | 54.37 | 70.66 | 61.16 |
| OVA | 32.70 | 83.73 | 47.03 | 51.25 | 78.39 | 61.98 | 46.86 | 72.30 | 56.86 | 49.55 | 74.37 | 57.78 |
Tab. 4 Results of various OSDA methods on Office-Home dataset
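Finally, the one-vs-all rows of Tab. 1 and Tab. 4 (OVANet, ref [51]) admit a compact decision rule: the closed-set classifier proposes a known class, and that class's binary OVA head either accepts the sample or rejects it as unknown. A sketch under that reading, with hypothetical tensors:

```python
# Sketch of an OVANet-style one-vs-all decision rule (ref [51]).
import torch


def ova_predict(closed_logits: torch.Tensor,
                ova_pos_scores: torch.Tensor,
                threshold: float = 0.5) -> torch.Tensor:
    """closed_logits: (N, K) closed-set scores over the K known classes.
    ova_pos_scores: (N, K) positive probabilities from the K one-vs-all heads.
    Returns labels in 0..K, where label K means "unknown".
    """
    K = closed_logits.size(1)
    best = closed_logits.argmax(dim=1)  # candidate known class per sample
    accept = ova_pos_scores.gather(1, best.unsqueeze(1)).squeeze(1) >= threshold
    return torch.where(accept, best, torch.full_like(best, K))
```

Note how this matches the limitation listed in Tab. 1: K binary heads replace a single classifier, and each must be trained, which is where the extra model complexity comes from.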
[1] | PAN S J, YANG Q. A survey on transfer learning [J]. IEEE Transactions on Knowledge and Data Engineering, 2010, 22(10): 1345-1359. |
[2] | DAUMÉ H, III. Frustratingly easy domain adaptation [C]// Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2007: 256-263. |
[3] | ZHANG L, GAO X. Transfer adaptation learning: a decade survey[J]. IEEE Transactions on Neural Networks and Learning Systems, 2024, 35(1): 23-44. |
[4] | GANIN Y, USTINOVA E, AJAKAN H, et al. Domain-adversarial training of neural networks [J]. Journal of Machine Learning Research, 2016, 17: 1-35. |
[5] | WANG M, DENG W. Deep visual domain adaptation: a survey[J]. Neurocomputing, 2018, 312: 135-153. |
[6] | LUO Y, ZHENG L, GUAN T, et al. Taking a closer look at domain shift: category-level adversaries for semantics consistent domain adaptation [C]// Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2019: 2502-2511. |
[7] | BUSTO P P, GALL J. Open set domain adaptation [C]// Proceedings of the 2017 IEEE International Conference on Computer Vision. Piscataway: IEEE, 2017: 754-763. |
[8] | SCHEIRER W J, DE REZENDE ROCHA A, SAPKOTA A, et al. Toward open set recognition [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013, 35(7): 1757-1772. |
[9] | SAITO K, YAMAMOTO S, USHIKU Y, et al. Open set domain adaptation by backpropagation [C]// Proceedings of the 2018 European Conference on Computer Vision, LNCS 11209. Cham: Springer, 2018: 156-171. |
[10] | GENG C, HUANG S J, CHEN S. Recent advances in open set recognition: a survey [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021, 43(10): 3614-3631. |
[11] | PAWSON R, WONG G, OWEN L. Known knowns, known unknowns, unknown unknowns: the predicament of evidence-based policy [J]. American Journal of Evaluation, 2011, 32(4): 518-546. |
[12] | SUGIYAMA M, KRAULEDAT M, MÜLLER K R. Covariate shift adaptation by importance weighted cross validation [J]. Journal of Machine Learning Research, 2007, 8: 985-1005. |
[13] | GHIFARY M, KLEIJN W B, ZHANG M. Domain adaptive neural networks for object recognition [C]// Proceedings of the 2014 Pacific Rim International Conference on Artificial Intelligence, LNCS 8862. Cham: Springer, 2014: 898-904. |
[14] | GOPALAN R, LI R, CHELLAPPA R. Domain adaptation for object recognition: an unsupervised approach [C]// Proceedings of the 2011 IEEE International Conference on Computer Vision. Piscataway: IEEE, 2011: 999-1006. |
[15] | LONG M, CAO Z, WANG J, et al. Conditional adversarial domain adaptation [C]// Proceedings of the 32nd International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2018: 1647-1657. |
[16] | FAN C N, LIU P, XIAO T, et al. A review of deep domain adaptation: general situation and complex situation [J]. Acta Automatica Sinica, 2021, 47(3): 515-548. |
[17] | FANG Z, LU J, LIU F, et al. Open set domain adaptation: theoretical bound and algorithm [J]. IEEE Transactions on Neural Networks and Learning Systems, 2021, 32(10): 4309-4322. |
[18] | ZHONG L, FANG Z, LIU F, et al. Bridging the theoretical bound and deep algorithms for open set domain adaptation [J]. IEEE Transactions on Neural Networks and Learning Systems, 2023, 34(8): 3859-3873. |
[19] | TAN C, SUN F, KONG T, et al. A survey on deep transfer learning [C]// Proceedings of the 2018 International Conference on Artificial Neural Networks, LNCS 11141. Cham: Springer, 2018: 270-279. |
[20] | BUCCI S, LOGHMANI M R, TOMMASI T. On the effectiveness of image rotation for open set domain adaptation [C]// Proceedings of the 2020 European Conference on Computer Vision, LNCS 12361. Cham: Springer, 2020: 422-438. |
[21] | GIDARIS S, SINGH P, KOMODAKIS N. Unsupervised representation learning by predicting image rotations [EB/OL]. [2024-07-11]. |
[22] | KUNDU J N, VENKAT N, REVANUR A, et al. Towards inheritable models for open-set domain adaptation [C]// Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2020: 12373-12382. |
[23] | ZEILER M D, FERGUS R. Visualizing and understanding convolutional networks [C]// Proceedings of the 2014 European Conference on Computer Vision, LNCS 8689. Cham: Springer, 2014: 818-833. |
[24] | CHEN H, WANG Y, XU C, et al. Data-free learning of student networks [C]// Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision. Piscataway: IEEE, 2019: 3513-3521. |
[25] | KUNDU J N, VENKAT N, RAHUL M V, et al. Universal source-free domain adaptation [C]// Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2020: 4543-4552. |
[26] | BAKTASHMOTLAGH M, CHEN T, SALZMANN M. Learning to generate the unknowns as a remedy to the open-set domain shift[C]// Proceedings of the 2022 IEEE/CVF Winter Conference on Applications of Computer Vision. Piscataway: IEEE, 2022: 3737-3746. |
[27] | LIU Y, DENG A, DENG M, et al. Transforming the open set into a pseudo-closed set: a regularized GAN for domain adaptation in open-set fault diagnosis [J]. IEEE Transactions on Instrumentation and Measurement, 2023, 72: No.3531312. |
[28] | XU Q, SHI Y, YUAN X, et al. Universal domain adaptation for remote sensing image scene classification [J]. IEEE Transactions on Geoscience and Remote Sensing, 2023, 61: No.4700515. |
[29] | LIU J, JING M, LI J, et al. Open set domain adaptation via joint alignment and category separation [J]. IEEE Transactions on Neural Networks and Learning Systems, 2023, 34(9): 6186-6199. |
[30] | MENDES JÚNIOR P R, DE SOUZA R M, WERNECK R D O, et al. Nearest neighbors distance ratio open-set classifier [J]. Machine Learning, 2017, 106(3): 359-386. |
[31] | JING M, LI J, ZHU L, et al. Balanced open set domain adaptation via centroid alignment [C]// Proceedings of the 35th AAAI Conference on Artificial Intelligence. Palo Alto: AAAI Press, 2021: 8013-8020. |
[32] | DAVIDSON T R, FALORSI L, DE CAO N, et al. Hyperspherical variational auto-encoders [C]// Proceedings of the 34th Conference on Uncertainty in Artificial Intelligence. Arlington, VA: AUAI Press, 2018: 856-865. |
[33] | LI X, LI J, DU Z, et al. Interpretable open-set domain adaptation via angular margin separation [C]// Proceedings of the 2022 European Conference on Computer Vision, LNCS 13694. Cham: Springer, 2022: 1-18. |
[34] | KRICHEN M. Generative adversarial networks [C]// Proceedings of the 14th International Conference on Computing Communication and Networking Technologies. Piscataway: IEEE, 2023: 1-7. |
[35] | GANIN Y, LEMPITSKY V. Unsupervised domain adaptation by backpropagation [C]// Proceedings of the 32nd International Conference on Machine Learning. New York: JMLR.org, 2015: 1180-1189. |
[36] | FU J, WU X, ZHANG S, et al. Improved open set domain adaptation with backpropagation [C]// Proceedings of the 2019 IEEE International Conference on Image Processing. Piscataway: IEEE, 2019: 2506-2510. |
[37] | SHERMIN T, LU G, TENG S W, et al. Adversarial network with multiple classifiers for open set domain adaptation [J]. IEEE Transactions on Multimedia, 2021, 23: 2732-2744. |
[38] | GAO Y, MA A J, GAO Y, et al. Adversarial open set domain adaptation via progressive selection of transferable target samples[J]. Neurocomputing, 2020, 410: 174-184. |
[39] | ZHANG H J, LI A, GUO J, et al. Improving open set domain adaptation using image-to-image translation and instance-weighted adversarial learning [J]. Journal of Computer Science and Technology, 2023, 38(3): 644-658. |
[40] | FENG Q, KANG G, FAN H, et al. Attract or distract: exploit the margin of open set [C]// Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision. Piscataway: IEEE, 2019: 7989-7998. |
[41] | QI C, SU F. Contrastive-center loss for deep neural networks [C]// Proceedings of the 2017 IEEE International Conference on Image Processing. Piscataway: IEEE, 2017: 2851-2855. |
[42] | JING T, LIU H, DING Z. Towards novel target discovery through open-set domain adaptation [C]// Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision. Piscataway: IEEE, 2021: 9302-9311. |
[43] | PAN Y, YAO T, LI Y, et al. Exploring category-agnostic clusters for open-set domain adaptation [C]// Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2020: 13864-13872. |
[44] | HJELM R D, FEDOROV A, LAVOIE-MARCHILDON S, et al. Learning deep representations by mutual information estimation and maximization [EB/OL]. [2024-07-14]. |
[45] | LI G, KANG G, ZHU Y, et al. Domain consensus clustering for universal domain adaptation [C]// Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2021: 9752-9761. |
[46] | SAITO K, KIM D, SCLAROFF S, et al. Universal domain adaptation through self supervision [C]// Proceedings of the 34th International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2020: 16282-16292. |
[47] | RU J, TIAN J, XIAO C, et al. Imbalanced open set domain adaptation via moving-threshold estimation and gradual alignment[J]. IEEE Transactions on Multimedia, 2024, 26: 2504-2514. |
[48] | SAITO K, WATANABE K, USHIKU Y, et al. Maximum classifier discrepancy for unsupervised domain adaptation [C]// Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2018: 3723-3732. |
[49] | XU Y, CHEN L, DUAN L, et al. Open set domain adaptation with soft unknown-class rejection [J]. IEEE Transactions on Neural Networks and Learning Systems, 2023, 34(3): 1601-1612. |
[50] | GAO F, PI D, CHEN J. Balanced and robust unsupervised open set domain adaptation via joint adversarial alignment and unknown class isolation [J]. Expert Systems with Applications, 2024, 238(Pt E): No.122127. |
[51] | SAITO K, SAENKO K. OVANet: one-vs-all network for universal domain adaptation [C]// Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision. Piscataway: IEEE, 2021: 8980-8989. |
[52] | SILVA L F A E, SEBE N, ALMEIDA J. Tightening classification boundaries in open set domain adaptation through unknown exploitation [C]// Proceedings of the 36th SIBGRAPI Conference on Graphics, Patterns and Images. Piscataway: IEEE, 2023: 157-162. |
[53] | PENG X, USMAN B, KAUSHIK N, et al. VisDA: the visual domain adaptation challenge [EB/OL]. [2025-01-14]. |
[54] | SAENKO K, KULIS B, FRITZ M, et al. Adapting visual category models to new domains [C]// Proceedings of the 2010 European Conference on Computer Vision, LNCS 6314. Berlin: Springer, 2010: 213-226. |
[55] | VENKATESWARA H, EUSEBIO J, CHAKRABORTY S, et al. Deep hashing network for unsupervised domain adaptation[C]// Proceedings of the 2017 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2017: 5385-5394. |
[56] | YOU K, LONG M, CAO Z, et al. Universal domain adaptation[C]// Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2019: 2715-2724. |