Academic literature on the topic 'Video and language'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Video and language.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Journal articles on the topic "Video and language"
Joshi, Indira. "Video Summarization for Marathi Language." International Journal of Scientific Research in Engineering and Management 08, no. 05 (May 3, 2024): 1–5. http://dx.doi.org/10.55041/ijsrem32024.
Walther, Joseph B., German Neubaum, Leonie Rösner, Stephan Winter, and Nicole C. Krämer. "The Effect of Bilingual Congruence on the Persuasive Influence of Videos and Comments on YouTube." Journal of Language and Social Psychology 37, no. 3 (August 11, 2017): 310–29. http://dx.doi.org/10.1177/0261927x17724552.
Shipman, Frank M., Ricardo Gutierrez-Osuna, and Caio D. D. Monteiro. "Identifying Sign Language Videos in Video Sharing Sites." ACM Transactions on Accessible Computing 5, no. 4 (March 2014): 1–14. http://dx.doi.org/10.1145/2579698.
Sanders, D. C., L. M. Reyes, D. J. Osborne, D. R. Ward, and D. E. Blackwelder. "USING A BILINGUAL GAPS AND HAND-WASHING DVD TO TRAIN FRESH PRODUCE FIELD AND PACKINGHOUSE WORKERS." HortScience 41, no. 3 (June 2006): 498D—498. http://dx.doi.org/10.21273/hortsci.41.3.498d.
Hiremath, Rashmi B., and Ramesh M. Kagalkar. "Sign Language Video Processing for Text Detection in Hindi Language." International Journal of Recent Contributions from Engineering, Science & IT (iJES) 4, no. 3 (October 26, 2016): 21. http://dx.doi.org/10.3991/ijes.v4i3.5973.
Anugerah, Rezza, Yohanes Gatot Sutapa Yuliana, and Dwi Riyanti. "THE POTENTIAL OF ENGLISH LEARNING VIDEOS IN FORM OF VLOG ON YOUTUBE FOR ELT MATERIAL WRITERS." Proceedings International Conference on Teaching and Education (ICoTE) 2, no. 2 (December 24, 2019): 224. http://dx.doi.org/10.26418/icote.v2i2.38232.
Liu, Yuqi, Luhui Xu, Pengfei Xiong, and Qin Jin. "Token Mixing: Parameter-Efficient Transfer Learning from Image-Language to Video-Language." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 2 (June 26, 2023): 1781–89. http://dx.doi.org/10.1609/aaai.v37i2.25267.
Moon, Nazmun Nessa, Imrus Salehin, Masuma Parvin, Md Mehedi Hasan, Iftakhar Mohammad Talha, Susanta Chandra Debnath, Fernaz Narin Nur, and Mohd Saifuzzaman. "Natural language processing based advanced method of unnecessary video detection." International Journal of Electrical and Computer Engineering (IJECE) 11, no. 6 (December 1, 2021): 5411. http://dx.doi.org/10.11591/ijece.v11i6.pp5411-5419.
Gernsbacher, Morton Ann. "Video Captions Benefit Everyone." Policy Insights from the Behavioral and Brain Sciences 2, no. 1 (October 2015): 195–202. http://dx.doi.org/10.1177/2372732215602130.
Dilawari, Aniqa, Muhammad Usman Ghani Khan, Yasser D. Al-Otaibi, Zahoor-ur Rehman, Atta-ur Rahman, and Yunyoung Nam. "Natural Language Description of Videos for Smart Surveillance." Applied Sciences 11, no. 9 (April 21, 2021): 3730. http://dx.doi.org/10.3390/app11093730.
Dissertations / Theses on the topic "Video and language"
Khan, Muhammad Usman Ghani. "Natural language descriptions for video streams." Thesis, University of Sheffield, 2012. http://etheses.whiterose.ac.uk/2789/.
Miech, Antoine. "Large-scale learning from video and natural language." Electronic Thesis or Diss., Université Paris sciences et lettres, 2020. http://www.theses.fr/2020UPSLE059.
Full textThe goal of this thesis is to build and train machine learning models capable of understanding the content of videos. Current video understanding approaches mainly rely on large-scale manually annotated video datasets for training. However, collecting and annotating such dataset is cumbersome, expensive and time-consuming. To address this issue, this thesis focuses on leveraging large amounts of readily-available, but noisy annotations in the form of natural language. In particular, we exploit a diverse corpus of textual metadata such as movie scripts, web video titles and descriptions or automatically transcribed speech obtained from narrated videos. Training video models on such readily-available textual data is challenging as such annotation is often imprecise or wrong. In this thesis, we introduce learning approaches to deal with weak annotation and design specialized training objectives and neural network architectures
Zhou, Mingjie. "Deep networks for sign language video caption." HKBU Institutional Repository, 2020. https://repository.hkbu.edu.hk/etd_oa/848.
Erozel, Guzen. "Natural Language Interface On A Video Data Model." Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/12606251/index.pdf.
Adam, Jameel. "Video annotation wiki for South African sign language." Thesis, University of the Western Cape, 2011. http://etd.uwc.ac.za/index.php?module=etd&action=viewtitle&id=gen8Srv25Nme4_1540_1304499135.
Full textThe SASL project at the University of the Western Cape aims at developing a fully automated translation system between English and South African Sign Language (SASL). Three important aspects of this system require SASL documentation and knowledge. These are: recognition of SASL from a video sequence, linguistic translation between SASL and English and the rendering of SASL. Unfortunately, SASL documentation is a scarce resource and no official or complete documentation exists. This research focuses on creating an online collaborative video annotation knowledge management system for SASL where various members of the community can upload SASL videos to and annotate them in any of the sign language notation systems, SignWriting, HamNoSys and/or Stokoe. As such, knowledge about SASL structure is pooled into a central and freely accessible knowledge base that can be used as required. The usability and performance of the system were evaluated. The usability of the system was graded by users on a rating scale from one to five for a specific set of tasks. The system was found to have an overall usability of 3.1, slightly better than average. The performance evaluation included load and stress tests which measured the system response time for a number of users for a specific set of tasks. It was found that the system is stable and can scale up to cater for an increasing user base by improving the underlying hardware.
Ou, Yingzhe (区颖哲). "Teaching Chinese as a second language through video." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2011. http://hub.hku.hk/bib/B48368714.
Addis, Pietro <1991>. "The Age of Video Games: Language and Narrative." Master's Degree Thesis, Università Ca' Foscari Venezia, 2017. http://hdl.handle.net/10579/10634.
Muir, Laura J. "Content-prioritised video coding for British Sign Language communication." Thesis, Robert Gordon University, 2007. http://hdl.handle.net/10059/177.
Full textLaveborn, Joel. "Video Game Vocabulary : The effect of video games on Swedish learners‟ word comprehension." Thesis, Karlstad University, Karlstad University, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-5487.
Video games are very popular among children in the Western world. This study investigated whether video games had an effect on 49 Swedish students' (grades 7-8) comprehension of English words. The investigation was based on questionnaire and word test data. The questionnaire measured how frequently students played video games, and the word test measured their word comprehension in general. In addition, data from the word test were used to investigate how students explained the words; depending on their explanations, students were categorized as using either a “video game approach” or a “dictionary approach”.
The results showed a gender difference, both in the frequency of playing and in the types of games played. Playing video games seemed to increase the students' comprehension of English words, though there was no clear connection between how frequently students played video games and their choice of a dictionary or video game approach as an explanation.
Lopes, Solange Aparecida. "A descriptive study of the interaction behaviors in a language video program and in live elementary language classes using that video program." Diss., Virginia Polytechnic Institute and State University, 1996. http://scholar.lib.vt.edu/theses/available/etd-10052007-143033/.
Books on the topic "Video and language"
Lonergan, Jack. Video in language learning. London: Linguaphone Institute, 1987.
Altman, Rick. The video connection: Integrating video into language teaching. Boston: Houghton Mifflin, 1989.
The video connection: Integrating video into language teaching. Boston: Houghton Mifflin, 1989.
Rhodes, Nancy C. Language by video: An overview of foreign language instructional videos for children. Washington, DC: Center for Applied Linguistics/Delta Systems, 2004.
Tomalin, Barry, 1942-, ed. Video in action: Recipes for using video in language teaching. New York: Prentice Hall International, 1990.
Tomalin, Barry, 1942-, ed. Video in action: Recipes for using video in language teaching. New York, N.Y: Prentice Hall, 1990.
Greenall, Simon. Reward video. Oxford: Heinemann, 1998.
Hutchinson, Tom. Project video. Oxford: Oxford University Press, 1991.
Hutchinson, Tom. Project video. Oxford: Oxford University Press, 1992.
Cooper, Richard. Video. Oxford, [England]: Oxford University Press, 1993.
Book chapters on the topic "Video and language"
Austin, Erin E. H. "Video Options." In Going Global in the World Language Classroom, 76–83. New York: Routledge, 2023. http://dx.doi.org/10.4324/9781003384267-12.
Klimas, Janina. "Video and Drama Activities." In Building Proficiency for World Language Learners, 190–210. New York: Eye on Education, 2024. http://dx.doi.org/10.4324/9781032622507-16.
Zhang, Shilin, and Mei Gu. "Research on Hand Language Video Retrieval." In Lecture Notes in Computer Science, 648–55. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-13498-2_85.
Rylander, John, Phillip Clark, and Richard Derrah. "A video-based method of assessing pragmatic awareness." In Assessing Second Language Pragmatics, 65–97. London: Palgrave Macmillan UK, 2013. http://dx.doi.org/10.1057/9781137003522_3.
Ma, Minuk, Sunjae Yoon, Junyeong Kim, Youngjoon Lee, Sunghun Kang, and Chang D. Yoo. "VLANet: Video-Language Alignment Network for Weakly-Supervised Video Moment Retrieval." In Computer Vision – ECCV 2020, 156–71. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58604-1_10.
Stopel, Bartosz. "On Botched Cinematic Transformations of Video Games." In Second Language Learning and Teaching, 173–90. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-25189-5_12.
Heyerick, Isabelle. "Chapter 5. The importance of video recordings in signed language interpreting research." In Linking up with Video, 127–49. Amsterdam: John Benjamins Publishing Company, 2020. http://dx.doi.org/10.1075/btl.149.06hey.
Khoreva, Anna, Anna Rohrbach, and Bernt Schiele. "Video Object Segmentation with Language Referring Expressions." In Computer Vision – ACCV 2018, 123–41. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-20870-7_8.
Zhang, Junchao, and Yuxin Peng. "Hierarchical Vision-Language Alignment for Video Captioning." In MultiMedia Modeling, 42–54. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-05710-7_4.
Kacetl, Jaroslav, and Madgalena Fiserova. "Online Video Clips in Foreign Language Teaching." In Business Challenges in the Changing Economic Landscape - Vol. 2, 355–64. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-22593-7_26.
Conference papers on the topic "Video and language"
Bosy, Karen, and Cristina Portugal. "Media Language: Video practices." In Proceedings of EVA London 2020. BCS Learning and Development Ltd, 2020. http://dx.doi.org/10.14236/ewic/eva2020.53.
Buch, Shyamal, Cristobal Eyzaguirre, Adrien Gaidon, Jiajun Wu, Li Fei-Fei, and Juan Carlos Niebles. "Revisiting the “Video” in Video-Language Understanding." In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2022. http://dx.doi.org/10.1109/cvpr52688.2022.00293.
Liu, Runze, Yaqun Fang, Fan Yu, Ruiqi Tian, Tongwei Ren, and Gangshan Wu. "Deep Video Understanding with Video-Language Model." In MM '23: The 31st ACM International Conference on Multimedia. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3581783.3612863.
Nam, Yoonsoo, Adam Lehavi, Daniel Yang, Digbalay Bose, Swabha Swayamdipta, and Shrikanth Narayanan. "Does Video Summarization Require Videos? Quantifying the Effectiveness of Language in Video Summarization." In ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2024. http://dx.doi.org/10.1109/icassp48485.2024.10445931.
Swartz, Jonathan, and Brian C. Smith. "A resolution independent video language." In the third ACM international conference. New York, New York, USA: ACM Press, 1995. http://dx.doi.org/10.1145/217279.215265.
Kountchev, R., Vl Todorov, and R. Kountcheva. "Efficient sign language video representation." In 2008 International Conference on Systems, Signals and Image Processing (IWSSIP). IEEE, 2008. http://dx.doi.org/10.1109/iwssip.2008.4604396.
Li, Linjie, Zhe Gan, Kevin Lin, Chung-Ching Lin, Zicheng Liu, Ce Liu, and Lijuan Wang. "LAVENDER: Unifying Video-Language Understanding as Masked Language Modeling." In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2023. http://dx.doi.org/10.1109/cvpr52729.2023.02214.
Tellex, Stefanie, Thomas Kollar, George Shaw, Nicholas Roy, and Deb Roy. "Grounding spatial language for video search." In International Conference on Multimodal Interfaces and the Workshop on Machine Learning for Multimodal Interaction. New York, New York, USA: ACM Press, 2010. http://dx.doi.org/10.1145/1891903.1891944.
Zhang, Shilin, and Hai Wang. "HMM based hand language video retrieval." In 2010 International Conference on Intelligent Control and Information Processing (ICICIP). IEEE, 2010. http://dx.doi.org/10.1109/icicip.2010.5564284.
Gupta, Vaidik, Rohan Punjani, Mayur Vaswani, and Jyoti Kundale. "Video Conferencing with Sign language Detection." In 2022 2nd Asian Conference on Innovation in Technology (ASIANCON). IEEE, 2022. http://dx.doi.org/10.1109/asiancon55314.2022.9908973.
Reports on the topic "Video and language"
Liang, Yiqing. Video Retrieval Based on Language and Image Analysis. Fort Belvoir, VA: Defense Technical Information Center, May 1999. http://dx.doi.org/10.21236/ada364129.
Chorna, Olha V., Vita A. Hamaniuk, and Aleksandr D. Uchitel. Use of YouTube on lessons of practical course of German language as the first and second language at the pedagogical university. [б. в.], September 2019. http://dx.doi.org/10.31812/123456789/3253.
Smith, Michael A., and Takeo Kanade. Video Skimming and Characterization through the Combination of Image and Language Understanding Techniques. Fort Belvoir, VA: Defense Technical Information Center, February 1997. http://dx.doi.org/10.21236/ada333857.
Decleir, Cyril, Mohand-Saïd Hacid, and Jacques Kouloumdjian. A Database Approach for Modeling and Querying Video Data. Aachen University of Technology, 1999. http://dx.doi.org/10.25368/2022.90.
Trullinger, Richard. Differential measurement of a language concept presented via video tape playback to first grade students. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.2420.
Symonenko, Svitlana V., Nataliia V. Zaitseva, Viacheslav V. Osadchyi, Kateryna P. Osadcha, and Ekaterina O. Shmeltser. Virtual reality in foreign language training at higher educational institutions. [б. в.], February 2020. http://dx.doi.org/10.31812/123456789/3759.
Sandeep, Bhushan, Huang Xin, and Xiao Zongwei. A comparison of regional anesthesia techniques in patients undergoing of video-assisted thoracic surgery: A network meta-analysis. INPLASY - International Platform of Registered Systematic Review and Meta-analysis Protocols, February 2022. http://dx.doi.org/10.37766/inplasy2022.2.0003.
Pikilnyak, Andrey V., Nadia M. Stetsenko, Volodymyr P. Stetsenko, Tetiana V. Bondarenko, and Halyna V. Tkachuk. Comparative analysis of online dictionaries in the context of the digital transformation of education. [б. в.], June 2021. http://dx.doi.org/10.31812/123456789/4431.
Brenzel, Jeffrey, and Burr Settles. The Duolingo English Test: Design, Validity, and Value. Duolingo, September 2017. http://dx.doi.org/10.46999/lyqs3238.
Petrovych, Olha B., Alla P. Vinnichuk, Viktor P. Krupka, Iryna A. Zelenenka, and Andrei V. Voznyak. The usage of augmented reality technologies in professional training of future teachers of Ukrainian language and literature. CEUR Workshop Proceedings, July 2021. http://dx.doi.org/10.31812/123456789/4635.