[1] EMERY N J. The eyes have it: the neuroethology, function and evolution of social gaze [J]. Neuroscience & Biobehavioral Reviews, 2000, 24(6): 581-604.
[2] TERZIOĞLU Y, MUTLU B, ŞAHIN E. Designing social cues for collaborative robots: the role of gaze and breathing in human-robot collaboration [C]// Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction. New York: ACM, 2020: 343-357.
[3] TOPAL C, GUNAL S, KOÇDEVIREN O, et al. A low-computational approach on gaze estimation with eye touch system [J]. IEEE Transactions on Cybernetics, 2014, 44(2): 228-239.
[4] 胡文婷,周献中,盛寅,等.基于视线跟踪的智能界面实现机制研究[J].计算机应用与软件, 2016, 33(1): 134-137.
HU W T, ZHOU X Z, SHENG Y, et al. On implementation mechanism of intelligent interface based on gaze tracking [J]. Computer Applications and Software, 2016, 33(1): 134-137.
[5] CHONG E, CLARK-WHITNEY E, SOUTHERLAND A, et al. Detection of eye contact with deep neural networks is as accurate as human experts [J]. Nature Communications, 2020, 11(1): No. 6386.
[6] LI J, CHEN Z, ZHONG Y, et al. Appearance-based gaze estimation for ASD diagnosis [J]. IEEE Transactions on Cybernetics, 2022, 52(7): 6504-6517.
[7] 郭爱华,潘小平.阿尔茨海默病的眼动跟踪研究[J].广东医学, 2021, 42(9): 1132-1135.
GUO A H, PAN X P. Eye tracking research on Alzheimer's disease [J]. Guangdong Medical Journal, 2021, 42(9): 1132-1135.
[8] VINNIKOV M, ALLISON R S, FERNANDES S. Gaze-contingent auditory displays for improved spatial attention in virtual reality [J]. ACM Transactions on Computer-Human Interaction, 2017, 24(3): No. 19.
[9] PATNEY A, SALVI M, KIM J, et al. Towards foveated rendering for gaze-tracked virtual reality [J]. ACM Transactions on Graphics, 2016, 35(6): No. 179.
[10] 侯守明,贾超兰,张明敏.用于虚拟现实系统的眼动交互技术综述[J].计算机应用, 2022, 42(11): 3534-3543.
HOU S M, JIA C L, ZHANG M M. Review of eye movement-based interaction techniques for virtual reality systems [J]. Journal of Computer Applications, 2022, 42(11): 3534-3543.
[11] LIU Y, ZHOU L, BAI X, et al. Goal-oriented gaze estimation for zero-shot learning [C]// Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2021: 3793-3802.
[12] 张闯,迟健男,张朝晖,等.一种新的基于瞳孔-角膜反射技术的视线追踪方法[J].计算机学报, 2010, 33(7): 1272-1285.
ZHANG C, CHI J N, ZHANG Z H, et al. A novel eye gaze tracking technique based on pupil center cornea reflection technique [J]. Chinese Journal of Computers, 2010, 33(7): 1272-1285.
[13] 熊春水,黄磊,刘昌平.一种新的单点标定视线估计方法[J].自动化学报, 2014, 40(3): 459-470.
XIONG C S, HUANG L, LIU C P. A novel gaze estimation method with one-point calibration [J]. Acta Automatica Sinica, 2014, 40(3): 459-470.
[14] 苟超,卓莹,王康,等.眼动跟踪研究进展与展望[J].自动化学报, 2022, 48(5): 1173-1192.
GOU C, ZHUO Y, WANG K, et al. Research advances and prospects of eye tracking [J]. Acta Automatica Sinica, 2022, 48(5): 1173-1192.
[15] ZHANG X, SUGANO Y, FRITZ M, et al. Appearance-based gaze estimation in the wild [C]// Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2015: 4511-4520.
[16] WANG K, ZHAO R, JI Q. A hierarchical generative model for eye image synthesis and eye gaze estimation [C]// Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2018: 440-448.
[17] REED S, AKATA Z, YAN X, et al. Generative adversarial text to image synthesis [C]// Proceedings of the 33rd International Conference on Machine Learning. New York: ACM, 2016: 1060-1069.
[18] LIU G, YU Y, MORA K A F, et al. A differential approach for gaze estimation [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021, 43(3): 1092-1099.
[19] SUN Y, ZENG J, SHAN S, et al. Cross-encoder for unsupervised gaze representation learning [C]// Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision. Piscataway: IEEE, 2021: 3682-3691.
[20] ZHANG X, SUGANO Y, FRITZ M, et al. It's written all over your face: full-face appearance-based gaze estimation [C]// Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops. Piscataway: IEEE, 2017: 2299-2308.
[21] CHENG Y, HUANG S, WANG F, et al. A coarse-to-fine adaptive network for appearance-based gaze estimation [C]// Proceedings of the 34th AAAI Conference on Artificial Intelligence. Menlo Park: AAAI, 2020: 10623-10630.
[22] ZHANG X, SUGANO Y, BULLING A, et al. Learning-based region selection for end-to-end gaze estimation [C]// Proceedings of the 31st British Machine Vision Conference. Nottingham, UK: BMVA Press, 2020: No. 86.
[23] KELLNHOFER P, RECASENS A, STENT S, et al. Gaze360: physically unconstrained gaze estimation in the wild [C]// Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision. Piscataway: IEEE, 2019: 6911-6920.
[24] KOTHARI R, DE MELLO S, IQBAL U, et al. Weakly-supervised physically unconstrained gaze estimation [C]// Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2021: 9975-9984.
[25] NONAKA S, NOBUHARA S, NISHINO K. Dynamic 3D gaze from afar: deep gaze estimation from temporal eye-head-body coordination [C]// Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2022: 2182-2191.
[26] WU Y, LI G, LIU Z, et al. Gaze estimation via modulation-based adaptive network with auxiliary self-learning [J]. IEEE Transactions on Circuits and Systems for Video Technology, 2022, 32(8): 5510-5520.
[27] CHEN Z, SHI B E. Towards high performance low complexity calibration in appearance based gaze estimation [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45(1): 1174-1188.
[28] CHENG Y, LU F. Gaze estimation using transformer [C]// Proceedings of the 2022 26th International Conference on Pattern Recognition. Piscataway: IEEE, 2022: 3341-3347.
[29] OH J O, CHANG H J, CHOI S I. Self-attention with convolution and deconvolution for efficient eye gaze estimation from a full face image [C]// Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. Piscataway: IEEE, 2022: 4988-4996.
[30] NAGPURE V, OKUMA K. Searching efficient neural architecture with multi-resolution fusion transformer for appearance-based gaze estimation [C]// Proceedings of the 2023 IEEE/CVF Winter Conference on Applications of Computer Vision. Piscataway: IEEE, 2023: 890-899.
[31] REN S, ZHOU D, HE S, et al. Shunted self-attention via multi-scale token aggregation [C]// Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2022: 10843-10852.
[32] DOSOVITSKIY A, BEYER L, KOLESNIKOV A, et al. An image is worth 16x16 words: Transformers for image recognition at scale [EB/OL]. (2021-06-03) [2022-10-14]. https://arxiv.org/abs/2010.11929.
[33] CHENG Y, WANG H, BAO Y, et al. Appearance-based gaze estimation with deep learning: a review and benchmark [EB/OL]. (2021-04-26) [2023-08-22]. https://arxiv.org/abs/2104.12668.