Journal of Computer Applications ›› 2025, Vol. 45 ›› Issue (4): 1139-1147. DOI: 10.11772/j.issn.1001-9081.2024040536
• Artificial Intelligence •

Unsupervised text style transfer based on semantic perception of proximity

Junxiu AN, Linwang YANG, Yuan LIU
Received: 2024-04-30
Revised: 2024-07-29
Accepted: 2024-08-01
Online: 2025-04-08
Published: 2025-04-10
Contact: Linwang YANG
About author: AN Junxiu, born in 1970 in Linfen, Shanxi, M. S., professor, CCF member. Her research interests include data mining and intelligent computing.
Junxiu AN, Linwang YANG, Yuan LIU. Unsupervised text style transfer based on semantic perception of proximity[J]. Journal of Computer Applications, 2025, 45(4): 1139-1147.
URL: https://www.joca.cn/EN/10.11772/j.issn.1001-9081.2024040536
Model | Fluency | TST Acc | BLEU-2 | METEOR | ROUGE-L | CIDEr | Embedding Average | Vector Extrema | Greedy Matching |
---|---|---|---|---|---|---|---|---|---|
β-VAE(β=0.15) | 0.829 | 0.856 | 0.060 | 0.075 | 0.206 | 0.299 | 0.768 | 0.445 | 0.596 |
LAAE(λ1=0.05) | 0.819 | 0.838 | 0.055 | 0.066 | 0.172 | 0.224 | 0.741 | 0.422 | 0.571 |
DAAE(p=0.3) | 0.746 | 0.842 | 0.272 | 0.202 | 0.482 | 1.605 | 0.838 | 0.603 | 0.750 |
EPAAE(ζ=2.0,p=0.3) | 0.741 | 0.841 | 0.263 | 0.180 | 0.424 | 1.627 | 0.844 | 0.597 | 0.725 |
SPAAE(p=0) | 0.857 | 0.845 | 0.155 | 0.133 | 0.337 | 0.737 | 0.803 | 0.496 | 0.659 |
SPAAE(p=0.3) | 0.752 | 0.834 | 0.196 | 0.154 | 0.374 | 0.975 | 0.819 | 0.554 | 0.692 |
SPAAE(p=0.1) | 0.795 | 0.838 | 0.295 | 0.214 | 0.499 | 1.647 | 0.858 | 0.615 | 0.750 |
Tab. 1 Quantitative experimental results of TST tasks on Yelp dataset
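BLEU-2, METEOR, ROUGE-L and CIDEr in the table above measure content preservation between the transferred sentence and the reference. As a rough illustration of the kind of n-gram overlap these scores capture, here is a minimal, self-contained sketch of sentence-level BLEU-2 (uniform weights over 1- and 2-grams, with brevity penalty). It is not the paper's evaluation script; `bleu2` and `ngrams` are illustrative helper names.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Return a Counter of the n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu2(candidate, reference):
    """Sentence-level BLEU-2: geometric mean of clipped 1- and 2-gram
    precisions, multiplied by the brevity penalty."""
    precisions = []
    for n in (1, 2):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        overlap = sum(min(count, ref[g]) for g, count in cand.items())
        precisions.append(overlap / max(sum(cand.values()), 1))
    if min(precisions) == 0:
        return 0.0
    log_avg = sum(math.log(p) for p in precisions) / 2
    bp = 1.0 if len(candidate) > len(reference) else math.exp(1 - len(reference) / max(len(candidate), 1))
    return bp * math.exp(log_avg)

# Example: a transferred Yelp sentence scored against its source sentence.
print(bleu2("the food was great".split(), "the food was terrible".split()))
```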
Dataset | Model | Fluency | TST Acc | BLEU-2 | METEOR | ROUGE-L | CIDEr | Embedding Average | Vector Extrema | Greedy Matching |
---|---|---|---|---|---|---|---|---|---|---|
SNLI | β-VAE(β=0.15) | 0.979 | 0.538 | 0.441 | 0.265 | 0.559 | 2.629 | 0.945 | 0.612 | 0.813 |
SNLI | LAAE(λ1=0.05) | 0.972 | 0.533 | 0.436 | 0.263 | 0.552 | 2.621 | 0.947 | 0.615 | 0.813 |
SNLI | DAAE(p=0.3) | 0.974 | 0.513 | 0.436 | 0.266 | 0.553 | 2.339 | 0.947 | 0.657 | 0.841 |
SNLI | EPAAE(ζ=2.5,p=0.3) | 0.979 | 0.509 | 0.490 | 0.300 | 0.604 | 2.924 | 0.959 | 0.696 | 0.865 |
SNLI | SPAAE(p=0) | 0.971 | 0.489 | 0.658 | 0.403 | 0.765 | 5.076 | 0.957 | 0.705 | 0.878 |
SNLI | SPAAE(p=0.3) | 0.980 | 0.503 | 0.516 | 0.315 | 0.629 | 3.201 | 0.963 | 0.718 | 0.875 |
SNLI | SPAAE(p=0.1) | 0.981 | 0.512 | 0.530 | 0.323 | 0.644 | 3.459 | 0.965 | 0.717 | 0.873 |
DNLI | β-VAE(β=0.15) | 0.959 | 0.610 | 0.229 | 0.138 | 0.346 | 0.558 | 0.900 | 0.489 | 0.713 |
DNLI | LAAE(λ1=0.05) | 0.960 | 0.629 | 0.232 | 0.140 | 0.346 | 0.577 | 0.902 | 0.495 | 0.715 |
DNLI | DAAE(p=0.3) | 0.962 | 0.618 | 0.545 | 0.318 | 0.640 | 3.990 | 0.954 | 0.709 | 0.858 |
DNLI | EPAAE(ζ=2.5,p=0.3) | 0.960 | 0.612 | 0.495 | 0.283 | 0.593 | 3.373 | 0.956 | 0.674 | 0.868 |
DNLI | SPAAE(p=0) | 0.967 | 0.608 | 0.682 | 0.409 | 0.766 | 5.750 | 0.972 | 0.801 | 0.909 |
DNLI | SPAAE(p=0.3) | 0.959 | 0.601 | 0.544 | 0.316 | 0.643 | 4.010 | 0.955 | 0.713 | 0.861 |
DNLI | SPAAE(p=0.1) | 0.966 | 0.610 | 0.653 | 0.391 | 0.738 | 5.461 | 0.968 | 0.781 | 0.900 |
Scitail | β-VAE(β=0.15) | 0.795 | 0.595 | 0.155 | 0.099 | 0.242 | 0.545 | 0.841 | 0.428 | 0.672 |
Scitail | LAAE(λ1=0.05) | 0.805 | 0.470 | 0.176 | 0.112 | 0.263 | 0.646 | 0.856 | 0.466 | 0.693 |
Scitail | DAAE(p=0.3) | 0.826 | 0.488 | 0.308 | 0.208 | 0.441 | 1.893 | 0.910 | 0.629 | 0.810 |
Scitail | EPAAE(ζ=2.5,p=0.3) | 0.830 | 0.512 | 0.290 | 0.177 | 0.449 | 1.968 | 0.906 | 0.603 | 0.776 |
Scitail | SPAAE(p=0) | 0.794 | 0.545 | 0.375 | 0.221 | 0.463 | 2.001 | 0.896 | 0.635 | 0.818 |
Scitail | SPAAE(p=0.3) | 0.797 | 0.503 | 0.336 | 0.203 | 0.423 | 1.731 | 0.914 | 0.618 | 0.807 |
Scitail | SPAAE(p=0.1) | 0.833 | 0.491 | 0.342 | 0.207 | 0.442 | 1.879 | 0.918 | 0.621 | 0.807 |
Tab. 2 Quantitative experimental results of TST tasks on NLI datasets
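The last three columns of these tables (embedding average, vector extrema, greedy matching) are embedding-based semantic similarity scores. Below is a minimal numpy sketch of the three metrics, assuming `emb` is a dict mapping tokens to word vectors (e.g., GloVe) and that out-of-vocabulary words are simply skipped; the paper's exact evaluation setup may differ.

```python
import numpy as np

def _vectors(tokens, emb):
    """Stack the word vectors of in-vocabulary tokens (zeros if none are known)."""
    vs = [emb[t] for t in tokens if t in emb]
    return np.array(vs) if vs else np.zeros((1, next(iter(emb.values())).shape[0]))

def _cos(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def embedding_average(cand, ref, emb):
    """Cosine similarity between the mean word vectors of the two sentences."""
    return _cos(_vectors(cand, emb).mean(axis=0), _vectors(ref, emb).mean(axis=0))

def vector_extrema(cand, ref, emb):
    """Cosine similarity between the per-dimension extrema (largest absolute value)
    of the two sentences' word vectors."""
    def extrema(m):
        mx, mn = m.max(axis=0), m.min(axis=0)
        return np.where(np.abs(mx) >= np.abs(mn), mx, mn)
    return _cos(extrema(_vectors(cand, emb)), extrema(_vectors(ref, emb)))

def greedy_matching(cand, ref, emb):
    """Average, over both directions, of each word's best cosine match in the other sentence."""
    a, b = _vectors(cand, emb), _vectors(ref, emb)
    sim = np.array([[_cos(x, y) for y in b] for x in a])
    return 0.5 * (sim.max(axis=1).mean() + sim.max(axis=0).mean())
```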
Dataset | Model | Fluency | TST Acc | BLEU-2 | METEOR | ROUGE-L | CIDEr | Embedding Average | Vector Extrema | Greedy Matching |
---|---|---|---|---|---|---|---|---|---|---|
Voices | β-VAE(β=0.15) | 0.774 | 0.980 | 0.137 | 0.109 | 0.198 | 0.708 | 0.759 | 0.427 | 0.589 |
Voices | LAAE(λ1=0.05) | 0.783 | 0.982 | 0.091 | 0.078 | 0.159 | 0.418 | 0.736 | 0.382 | 0.554 |
Voices | DAAE(p=0.3) | 0.789 | 0.981 | 0.255 | 0.176 | 0.307 | 1.593 | 0.810 | 0.532 | 0.673 |
Voices | EPAAE(ζ=2.5,p=0.3) | 0.790 | 0.980 | 0.154 | 0.177 | 0.222 | 0.860 | 0.769 | 0.539 | 0.603 |
Voices | SPAAE(p=0) | 0.795 | 0.987 | 0.251 | 0.186 | 0.337 | 1.418 | 0.823 | 0.536 | 0.689 |
Voices | SPAAE(p=0.3) | 0.783 | 0.975 | 0.213 | 0.158 | 0.270 | 1.268 | 0.792 | 0.490 | 0.641 |
Voices | SPAAE(p=0.1) | 0.781 | 0.981 | 0.275 | 0.200 | 0.347 | 1.680 | 0.830 | 0.554 | 0.700 |
PPR | β-VAE(β=0.15) | 0.742 | 0.956 | 0.221 | 0.225 | 0.419 | 1.492 | 0.817 | 0.554 | 0.721 |
PPR | LAAE(λ1=0.05) | 0.746 | 0.945 | 0.189 | 0.184 | 0.364 | 1.212 | 0.794 | 0.510 | 0.682 |
PPR | DAAE(p=0.3) | 0.730 | 0.948 | 0.276 | 0.292 | 0.503 | 2.007 | 0.848 | 0.628 | 0.784 |
PPR | EPAAE(ζ=2.5,p=0.3) | 0.733 | 0.960 | 0.285 | 0.275 | 0.479 | 1.951 | 0.856 | 0.636 | 0.794 |
PPR | SPAAE(p=0) | 0.756 | 0.942 | 0.303 | 0.300 | 0.539 | 2.180 | 0.867 | 0.652 | 0.807 |
PPR | SPAAE(p=0.3) | 0.732 | 0.965 | 0.279 | 0.292 | 0.505 | 2.083 | 0.849 | 0.631 | 0.785 |
PPR | SPAAE(p=0.1) | 0.746 | 0.948 | 0.320 | 0.325 | 0.558 | 2.416 | 0.874 | 0.673 | 0.822 |
Tenses | β-VAE(β=0.15) | 0.792 | 0.998 | 0.153 | 0.116 | 0.241 | 0.788 | 0.774 | 0.436 | 0.607 |
Tenses | LAAE(λ1=0.05) | 0.795 | 1.000 | 0.119 | 0.094 | 0.201 | 0.604 | 0.763 | 0.406 | 0.586 |
Tenses | DAAE(p=0.3) | 0.779 | 0.999 | 0.316 | 0.241 | 0.423 | 2.143 | 0.841 | 0.588 | 0.720 |
Tenses | EPAAE(ζ=2.5,p=0.3) | 0.777 | 0.999 | 0.321 | 0.245 | 0.431 | 2.220 | 0.835 | 0.575 | 0.722 |
Tenses | SPAAE(p=0) | 0.801 | 1.000 | 0.338 | 0.247 | 0.460 | 2.106 | 0.853 | 0.607 | 0.741 |
Tenses | SPAAE(p=0.3) | 0.770 | 0.998 | 0.398 | 0.303 | 0.507 | 2.805 | 0.868 | 0.653 | 0.770 |
Tenses | SPAAE(p=0.1) | 0.788 | 1.000 | 0.405 | 0.307 | 0.528 | 2.887 | 0.877 | 0.665 | 0.783 |
Tab. 3 Quantitative experimental results of TST tasks on fine-grained style datasets
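The TST Acc column in these tables is the share of transferred sentences that an external style classifier assigns to the target style; the exact classifier used by the authors is not shown in this excerpt. The sketch below illustrates the idea with a simple bag-of-words logistic-regression classifier (scikit-learn), where `src_texts`, `tgt_texts`, and `transferred` are hypothetical variables holding the two style corpora and the model outputs.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical data: sentences of the two styles, plus the transferred outputs.
src_texts = ["the food was terrible", "service was awful"]     # style 0
tgt_texts = ["the food was great", "service was wonderful"]    # style 1
transferred = ["the food was great"]                            # outputs that should be style 1

# Train a simple style classifier on the two corpora.
clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
clf.fit(src_texts + tgt_texts, [0] * len(src_texts) + [1] * len(tgt_texts))

# TST accuracy: fraction of transferred sentences classified as the target style.
pred = clf.predict(transferred)
print((pred == 1).mean())
```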
Model | Future → Past tense (sample 1) | Future → Past tense (sample 2) | Past → Future tense (sample 1) | Past → Future tense (sample 2) |
---|---|---|---|---|
Baseline | when trading will be halted by them all market liquidity will be gone | simply put there will not be enough business for every store to grow | it suggested that households accumulated wealth across a broad spectrum of assets | its other chemical operations bed continued by the Henderson plant the company said |
EPAAE(ζ=2.5,p=0.3) | but they were all the most prolific market value | i thought i thought it was more like it without much | it will suggest that households will accumulate wealth across a broad spectrum of assets | its |
SPAAE(p=0.1) | when markets were halted by them all that was volatility | simply put there was not enough business for every store to grow | it will suggest that wealth will be accumulated by households across a broad spectrum of assets | its other chemical operations will be continued by the Henderson plant the company will say |
Tab. 4 Style transfer (temporal transfer) output samples on Tenses dataset
k | SPAAE(p=0.1) | EPAAE(ζ=2.5,p=0.3) |
---|---|---|
1.0 | it | it has |
1.5 | it | it |
2.0 | it | it has |
2.5 | it | it has |
3.0 | it | it has |
Tab. 5 Emotional style transfer output samples with different intensities on Yelp dataset
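The intensity k in Tab. 5 scales the style edit applied in latent space. In the DAAE/EPAAE line of work, transfer is commonly performed by vector arithmetic on sentence encodings: the difference between the mean latent codes of the target- and source-style sentences is added to a sentence's code, scaled by k, before decoding. A minimal sketch of that arithmetic follows; `style_transfer_code` is an illustrative name and the encoder/decoder calls are placeholders, not the paper's actual API.

```python
import numpy as np

def style_transfer_code(z, src_codes, tgt_codes, k=1.0):
    """Shift a latent code z along the source-to-target style direction.

    z         : latent code of the sentence to transfer, shape (d,)
    src_codes : latent codes of source-style sentences, shape (n_src, d)
    tgt_codes : latent codes of target-style sentences, shape (n_tgt, d)
    k         : transfer intensity (k=1.0 applies one full style step)
    """
    direction = tgt_codes.mean(axis=0) - src_codes.mean(axis=0)
    return z + k * direction

# Illustration with random vectors standing in for encoder outputs.
rng = np.random.default_rng(0)
src = rng.normal(size=(100, 128))          # e.g. negative-sentiment encodings
tgt = rng.normal(size=(100, 128)) + 0.5    # e.g. positive-sentiment encodings
z_new = style_transfer_code(src[0], src, tgt, k=2.0)
# z_new would then be passed to the decoder to generate the transferred sentence.
```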
Dataset | K | Model | Fluency | TST Acc | BLEU-2 | METEOR | ROUGE-L | CIDEr | Embedding Average | Vector Extrema | Greedy Matching |
---|---|---|---|---|---|---|---|---|---|---|---|
Yelp | 1 | SPAAE(p=0) | 0.842 | 0.835 | 0.147 | 0.126 | 0.325 | 0.702 | 0.799 | 0.493 | 0.651 |
Yelp | 1 | SPAAE(p=0.3) | 0.744 | 0.829 | 0.259 | 0.195 | 0.458 | 1.454 | 0.808 | 0.607 | 0.736 |
Yelp | 1 | SPAAE(p=0.1) | 0.783 | 0.822 | 0.290 | 0.211 | 0.488 | 1.596 | 0.801 | 0.609 | 0.743 |
Yelp | 2 | SPAAE(p=0) | 0.857 | 0.845 | 0.155 | 0.133 | 0.337 | 0.737 | 0.810 | 0.496 | 0.659 |
Yelp | 2 | SPAAE(p=0.3) | 0.752 | 0.834 | 0.196 | 0.154 | 0.374 | 0.975 | 0.819 | 0.554 | 0.692 |
Yelp | 2 | SPAAE(p=0.1) | 0.795 | 0.838 | 0.295 | 0.214 | 0.499 | 1.647 | 0.858 | 0.615 | 0.750 |
DNLI | 1 | SPAAE(p=0) | 0.966 | 0.599 | 0.537 | 0.339 | 0.674 | 4.381 | 0.959 | 0.730 | 0.871 |
DNLI | 1 | SPAAE(p=0.3) | 0.959 | 0.604 | 0.559 | 0.326 | 0.645 | 4.244 | 0.954 | 0.715 | 0.863 |
DNLI | 1 | SPAAE(p=0.1) | 0.964 | 0.597 | 0.544 | 0.317 | 0.640 | 4.056 | 0.953 | 0.701 | 0.858 |
DNLI | 2 | SPAAE(p=0) | 0.967 | 0.608 | 0.682 | 0.409 | 0.766 | 5.750 | 0.972 | 0.801 | 0.909 |
DNLI | 2 | SPAAE(p=0.3) | 0.959 | 0.601 | 0.544 | 0.316 | 0.643 | 4.010 | 0.955 | 0.713 | 0.861 |
DNLI | 2 | SPAAE(p=0.1) | 0.966 | 0.610 | 0.653 | 0.391 | 0.738 | 5.461 | 0.968 | 0.781 | 0.900 |
Voices | 1 | SPAAE(p=0) | 0.794 | 0.983 | 0.240 | 0.170 | 0.321 | 1.291 | 0.815 | 0.516 | 0.671 |
Voices | 1 | SPAAE(p=0.3) | 0.780 | 0.979 | 0.221 | 0.164 | 0.276 | 1.335 | 0.799 | 0.500 | 0.650 |
Voices | 1 | SPAAE(p=0.1) | 0.783 | 0.968 | 0.179 | 0.137 | 0.256 | 0.997 | 0.792 | 0.472 | 0.633 |
Voices | 2 | SPAAE(p=0) | 0.795 | 0.987 | 0.251 | 0.186 | 0.337 | 1.418 | 0.823 | 0.536 | 0.689 |
Voices | 2 | SPAAE(p=0.3) | 0.783 | 0.975 | 0.213 | 0.158 | 0.270 | 1.268 | 0.792 | 0.490 | 0.641 |
Voices | 2 | SPAAE(p=0.1) | 0.781 | 0.981 | 0.275 | 0.200 | 0.347 | 1.680 | 0.830 | 0.554 | 0.700 |
Tab. 6 Quantitative experimental results of SPAAE under different K values in three style transfer tasks
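Across the tables, p appears to be the input-perturbation probability of the denoising-autoencoder setup (as in DAAE): each word of the input sentence is dropped independently with probability p before encoding, and the model is trained to reconstruct the clean sentence. The sketch below illustrates that noising step, assuming simple random word deletion; `perturb` is an illustrative name and the paper's actual perturbation may differ in detail.

```python
import random

def perturb(tokens, p=0.1, seed=None):
    """Drop each token independently with probability p (DAAE-style input noise).
    Keeps at least one token so the encoder never sees an empty sentence."""
    rng = random.Random(seed)
    kept = [t for t in tokens if rng.random() >= p]
    return kept if kept else [rng.choice(tokens)]

print(perturb("the food was great and the service was friendly".split(), p=0.3, seed=0))
```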