Journal articles on the topic 'Audio-visual attention'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 journal articles for your research on the topic 'Audio-visual attention.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.
Chen, Yanxiang, Tam V. Nguyen, Mohan Kankanhalli, Jun Yuan, Shuicheng Yan, and Meng Wang. "Audio Matters in Visual Attention." IEEE Transactions on Circuits and Systems for Video Technology 24, no. 11 (November 2014): 1992–2003. http://dx.doi.org/10.1109/tcsvt.2014.2329380.
Lee, Yong-Hyeok, Dong-Won Jang, Jae-Bin Kim, Rae-Hong Park, and Hyung-Min Park. "Audio–Visual Speech Recognition Based on Dual Cross-Modality Attentions with the Transformer Model." Applied Sciences 10, no. 20 (October 17, 2020): 7263. http://dx.doi.org/10.3390/app10207263.
Iwaki, Sunao, Mitsuo Tonoike, Masahiko Yamaguchi, and Takashi Hamada. "Modulation of extrastriate visual processing by audio-visual intermodal selective attention." NeuroImage 11, no. 5 (May 2000): S21. http://dx.doi.org/10.1016/s1053-8119(00)90956-x.
Nagasaki, Yoshiki, Masaki Hayashi, Naoshi Kaneko, and Yoshimitsu Aoki. "Temporal Cross-Modal Attention for Audio-Visual Event Localization." Journal of the Japan Society for Precision Engineering 88, no. 3 (March 5, 2022): 263–68. http://dx.doi.org/10.2493/jjspe.88.263.
Xuan, Hanyu, Zhenyu Zhang, Shuo Chen, Jian Yang, and Yan Yan. "Cross-Modal Attention Network for Temporal Inconsistent Audio-Visual Event Localization." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 01 (April 3, 2020): 279–86. http://dx.doi.org/10.1609/aaai.v34i01.5361.
Iwaki, Sunao. "Audio-visual intermodal orientation of attention modulates task-specific extrastriate visual processing." Neuroscience Research 68 (January 2010): e269. http://dx.doi.org/10.1016/j.neures.2010.07.1195.
Keitel, Christian, and Matthias M. Müller. "Audio-visual synchrony and feature-selective attention co-amplify early visual processing." Experimental Brain Research 234, no. 5 (August 1, 2015): 1221–31. http://dx.doi.org/10.1007/s00221-015-4392-8.
Zhu, Hao, Man-Di Luo, Rui Wang, Ai-Hua Zheng, and Ran He. "Deep Audio-visual Learning: A Survey." International Journal of Automation and Computing 18, no. 3 (April 15, 2021): 351–76. http://dx.doi.org/10.1007/s11633-021-1293-0.
Ran, Yue, Hongying Tang, Baoqing Li, and Guohui Wang. "Self-Supervised Video Representation and Temporally Adaptive Attention for Audio-Visual Event Localization." Applied Sciences 12, no. 24 (December 9, 2022): 12622. http://dx.doi.org/10.3390/app122412622.
Zhao, Sicheng, Yunsheng Ma, Yang Gu, Jufeng Yang, Tengfei Xing, Pengfei Xu, Runbo Hu, Hua Chai, and Kurt Keutzer. "An End-to-End Visual-Audio Attention Network for Emotion Recognition in User-Generated Videos." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 01 (April 3, 2020): 303–11. http://dx.doi.org/10.1609/aaai.v34i01.5364.
Weijkamp, Janne, and Makiko Sadakata. "Attention to affective audio-visual information: Comparison between musicians and non-musicians." Psychology of Music 45, no. 2 (July 7, 2016): 204–15. http://dx.doi.org/10.1177/0305735616654216.
Bouchara, Tifanie, and Brian F. G. Katz. "Redundancy gains in audio–visual search." Seeing and Perceiving 25 (2012): 181. http://dx.doi.org/10.1163/187847612x648116.
Lee, Byoung-Gi, Jong-Suk Choi, Sang-Suk Yoon, Mun-Taek Choi, Mun-Sang Kim, and Dai-Jin Kim. "Audio-Visual Fusion for Sound Source Localization and Improved Attention." Transactions of the Korean Society of Mechanical Engineers A 35, no. 7 (July 1, 2011): 737–43. http://dx.doi.org/10.3795/ksme-a.2011.35.7.737.
Ikumi, Nara, and Salvador Soto-Faraco. "Selective Attention Modulates the Direction of Audio-Visual Temporal Recalibration." PLoS ONE 9, no. 7 (July 8, 2014): e99311. http://dx.doi.org/10.1371/journal.pone.0099311.
Lee, Jong-Seok, Francesca De Simone, and Touradj Ebrahimi. "Efficient video coding based on audio-visual focus of attention." Journal of Visual Communication and Image Representation 22, no. 8 (November 2011): 704–11. http://dx.doi.org/10.1016/j.jvcir.2010.11.002.
Fagioli, S., A. Couyoumdjian, and F. Ferlazzo. "Audio-visual dynamic remapping in an endogenous spatial attention task." Behavioural Brain Research 173, no. 1 (October 2, 2006): 30–38. http://dx.doi.org/10.1016/j.bbr.2006.05.030.
Chen, Minran, Song Zhao, Jiaqi Yu, Xuechen Leng, Mengdie Zhai, Chengzhi Feng, and Wenfeng Feng. "Audiovisual Emotional Congruency Modulates the Stimulus-Driven Cross-Modal Spread of Attention." Brain Sciences 12, no. 9 (September 10, 2022): 1229. http://dx.doi.org/10.3390/brainsci12091229.
Glumm, Monica M., Kathy L. Kehring, and Timothy L. White. "Effects of Visual and Auditory Cues About Threat Location on Target Acquisition and Attention to Auditory Communications." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 49, no. 3 (September 2005): 347–51. http://dx.doi.org/10.1177/154193120504900328.
Li, Yidi, Hong Liu, and Hao Tang. "Multi-Modal Perception Attention Network with Self-Supervised Learning for Audio-Visual Speaker Tracking." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 2 (June 28, 2022): 1456–63. http://dx.doi.org/10.1609/aaai.v36i2.20035.
Wang, Chunxiao, Jingjing Zhang, Wei Jiang, and Shuang Wang. "A Deep Multimodal Model for Predicting Affective Responses Evoked by Movies Based on Shot Segmentation." Security and Communication Networks 2021 (September 28, 2021): 1–12. http://dx.doi.org/10.1155/2021/7650483.
Patten, Elena, Linda R. Watson, and Grace T. Baranek. "Temporal Synchrony Detection and Associations with Language in Young Children with ASD." Autism Research and Treatment 2014 (2014): 1–8. http://dx.doi.org/10.1155/2014/678346.
Zhang, Jingran, Xing Xu, Fumin Shen, Huimin Lu, Xin Liu, and Heng Tao Shen. "Enhancing Audio-Visual Association with Self-Supervised Curriculum Learning." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 4 (May 18, 2021): 3351–59. http://dx.doi.org/10.1609/aaai.v35i4.16447.
Kováčová, Michaela, and Martina Martausová. "Audio-Visual Culture in Textbooks of German as a Foreign Language: A Crossroads Between Media Competence and Subject-Specific Objectives." Studia Universitatis Babeș-Bolyai Philologia 67, no. 2 (June 30, 2022): 311–28. http://dx.doi.org/10.24193/subbphilo.2022.2.18.
Purcell, Kevin P., and Anthony D. Andre. "Effects of Visual and Audio Callouts on Pilot Visual Attention during Electronic Moving Map Use." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 44, no. 13 (July 2000): 108. http://dx.doi.org/10.1177/154193120004401339.
Li, Ning, and Linda Ng Boyle. "Allocation of Driver Attention for Varying In-Vehicle System Modalities." Human Factors: The Journal of the Human Factors and Ergonomics Society 62, no. 8 (December 30, 2019): 1349–64. http://dx.doi.org/10.1177/0018720819879585.
Mashannudin, Mashannudin. "Penerapan metode demontrasi berbantuan media audio visual untuk meningkatkan perhatian dan prestasi belajar [Application of the demonstration method assisted by audio-visual media to improve attention and learning achievement]." Diadik: Jurnal Ilmiah Teknologi Pendidikan 10, no. 1 (September 30, 2021): 93–100. http://dx.doi.org/10.33369/diadik.v10i1.18113.
Ramezanzade, Hesam. "Adding Acoustical to Visual Movement Patterns to Retest Whether Imitation Is Goal- or Pattern-Directed." Perceptual and Motor Skills 127, no. 1 (August 29, 2019): 225–47. http://dx.doi.org/10.1177/0031512519870418.
Seo, Minji, and Myungho Kim. "Fusing Visual Attention CNN and Bag of Visual Words for Cross-Corpus Speech Emotion Recognition." Sensors 20, no. 19 (September 28, 2020): 5559. http://dx.doi.org/10.3390/s20195559.
Park, So-Hyun, and Young-Ho Park. "Audio-Visual Tensor Fusion Network for Piano Player Posture Classification." Applied Sciences 10, no. 19 (September 29, 2020): 6857. http://dx.doi.org/10.3390/app10196857.
Mulauzi, Felesia, Phiri Bwalya, Chishimba Soko, Vincent Njobvu, Jane Katema, and Felix Silungwe. "Preservation of audio-visual archives in Zambia." ESARBICA Journal: Journal of the Eastern and Southern Africa Regional Branch of the International Council on Archives 40 (November 6, 2021): 42–59. http://dx.doi.org/10.4314/esarjo.v40i1.4.
Motlicek, Petr, Stefan Duffner, Danil Korchagin, Hervé Bourlard, Carl Scheffler, Jean-Marc Odobez, Giovanni Del Galdo, Markus Kallinger, and Oliver Thiergart. "Real-Time Audio-Visual Analysis for Multiperson Videoconferencing." Advances in Multimedia 2013 (2013): 1–21. http://dx.doi.org/10.1155/2013/175745.
King, Hannah, and Ioana Chitoran. "Difficult to hear but easy to see: Audio-visual perception of the /r/-/w/ contrast in Anglo-English." Journal of the Acoustical Society of America 152, no. 1 (July 2022): 368–79. http://dx.doi.org/10.1121/10.0012660.
Knobel, Samuel Elia Johannes, Brigitte Charlotte Kaufmann, Nora Geiser, Stephan Moreno Gerber, René M. Müri, Tobias Nef, Thomas Nyffeler, and Dario Cazzoli. "Effects of Virtual Reality–Based Multimodal Audio-Tactile Cueing in Patients With Spatial Attention Deficits: Pilot Usability Study." JMIR Serious Games 10, no. 2 (May 25, 2022): e34884. http://dx.doi.org/10.2196/34884.
Zhang, Weiyu, Se-Hoon Jeong, and Martin Fishbein†. "Situational Factors Competing for Attention." Journal of Media Psychology 22, no. 1 (January 2010): 2–13. http://dx.doi.org/10.1027/1864-1105/a000002.
Sugiyanti, Endang. "Penerapan media audio visual dalam peningkatan pemahaman haji dan umrah [Application of audio-visual media in improving understanding of Hajj and Umrah]." Wawasan: Jurnal Kediklatan Balai Diklat Keagamaan Jakarta 1, no. 1 (November 23, 2020): 79–90. http://dx.doi.org/10.53800/wawasan.v1i1.38.
Wright, Thomas D., Jamie Ward, Sarah Simonon, and Aaron Margolis. "Where's Wally? Audio–visual mismatch directs ocular saccades in sensory substitution." Seeing and Perceiving 25 (2012): 61. http://dx.doi.org/10.1163/187847612x646820.
Yin, Yifang, Harsh Shrivastava, Ying Zhang, Zhenguang Liu, Rajiv Ratn Shah, and Roger Zimmermann. "Enhanced Audio Tagging via Multi- to Single-Modal Teacher-Student Mutual Learning." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 12 (May 18, 2021): 10709–17. http://dx.doi.org/10.1609/aaai.v35i12.17280.
Oktariana, Riza, and Wiwik Yeni Herlina. "Analisis kemampuan menyimak anak kelompok B TK Bungong Seuleupok Syiah Kuala Banda Aceh berbantukan media audio visual [Analysis of the listening skills of Group B children at TK Bungong Seuleupok Syiah Kuala Banda Aceh assisted by audio-visual media]." Jurnal Buah Hati 7, no. 2 (September 22, 2020): 224–36. http://dx.doi.org/10.46244/buahhati.v7i2.1169.
Kozlova, Elena I. "Ways of Electronic Publications' Classification in the Legal Deposit System." Bibliotekovedenie [Russian Journal of Library Science], no. 2 (April 27, 2012): 28–32. http://dx.doi.org/10.25281/0869-608x-2012-0-2-28-32.
Singh, Charanjit Kaur Swaran, et al. "Review of Research on the Use of Audio-Visual Aids among Learners' English Language." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 3 (April 11, 2021): 895–904. http://dx.doi.org/10.17762/turcomat.v12i3.800.
Morís Fernández, Luis, Maya Visser, and Salvador Soto-Faraco. "Influence of selective attention to sound in multisensory integration." Seeing and Perceiving 25 (2012): 154. http://dx.doi.org/10.1163/187847612x647856.
Fleming, Justin T., Ross K. Maddox, and Barbara G. Shinn-Cunningham. "Spatial alignment between faces and voices improves selective attention to audio-visual speech." Journal of the Acoustical Society of America 150, no. 4 (October 2021): 3085–100. http://dx.doi.org/10.1121/10.0006415.
Loria, Tristan, Joëlle Hajj, Kanji Tanaka, Katsumi Watanabe, and Luc Tremblay. "The deployment of spatial attention during goal-directed action alters audio-visual integration." Journal of Vision 19, no. 10 (September 6, 2019): 111c. http://dx.doi.org/10.1167/19.10.111c.
Chen, Yanxiang, Minglong Song, Lixia Xue, Xiaoxue Chen, and Meng Wang. "An audio–visual human attention analysis approach to abrupt change detection in videos." Signal Processing 110 (May 2015): 143–54. http://dx.doi.org/10.1016/j.sigpro.2014.08.006.
Lee, Jong-Seok, Francesca De Simone, and Touradj Ebrahimi. "Subjective Quality Evaluation of Foveated Video Coding Using Audio-Visual Focus of Attention." IEEE Journal of Selected Topics in Signal Processing 5, no. 7 (November 2011): 1322–31. http://dx.doi.org/10.1109/jstsp.2011.2165199.
Brungart, Douglas S., Alexander J. Kordik, and Brian D. Simpson. "Audio and Visual Cues in a Two-Talker Divided Attention Speech-Monitoring Task." Human Factors: The Journal of the Human Factors and Ergonomics Society 47, no. 3 (September 2005): 562–73. http://dx.doi.org/10.1518/001872005774860023.
Wang, Weixing, Qianqian Li, Jingwen Xie, Ningfeng Hu, Ziao Wang, and Ning Zhang. "Research on emotional semantic retrieval of attention mechanism oriented to audio-visual synesthesia." Neurocomputing 519 (January 2023): 194–204. http://dx.doi.org/10.1016/j.neucom.2022.11.036.
Khomsatun, Khomsatun. "Meningkatkan keterampilan berwudu melalui metode demonstrasi yang dikombinasikan dengan media audio visual pada peserta didik di kelas VII SMP Negeri 21 Pontianak [Improving ablution skills through the demonstration method combined with audio-visual media for Grade VII students at SMP Negeri 21 Pontianak]." EDUCATOR: Jurnal Inovasi Tenaga Pendidik dan Kependidikan 2, no. 3 (November 8, 2022): 322–29. http://dx.doi.org/10.51878/educator.v2i3.1636.
Korzeniowska, A. T., H. Root-Gutteridge, J. Simner, and D. Reby. "Audio–visual crossmodal correspondences in domestic dogs (Canis familiaris)." Biology Letters 15, no. 11 (November 2019): 20190564. http://dx.doi.org/10.1098/rsbl.2019.0564.