Academic literature on the topic "Video and language"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the thematic lists of articles, books, theses, conference proceedings, and other scholarly sources on the topic "Video and language".
Next to every source in the reference list there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the scholarly publication in PDF format and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Video and language"
Joshi, Prof Indira. "Video Summarization for Marathi Language". International Journal of Scientific Research in Engineering and Management 08, no. 05 (May 3, 2024): 1–5. http://dx.doi.org/10.55041/ijsrem32024.
Walther, Joseph B., German Neubaum, Leonie Rösner, Stephan Winter, and Nicole C. Krämer. "The Effect of Bilingual Congruence on the Persuasive Influence of Videos and Comments on YouTube". Journal of Language and Social Psychology 37, no. 3 (August 11, 2017): 310–29. http://dx.doi.org/10.1177/0261927x17724552.
Shipman, Frank M., Ricardo Gutierrez-Osuna, and Caio D. D. Monteiro. "Identifying Sign Language Videos in Video Sharing Sites". ACM Transactions on Accessible Computing 5, no. 4 (March 2014): 1–14. http://dx.doi.org/10.1145/2579698.
Sanders, D. C., L. M. Reyes, D. J. Osborne, D. R. Ward, and D. E. Blackwelder. "Using a Bilingual GAPs and Hand-Washing DVD to Train Fresh Produce Field and Packinghouse Workers". HortScience 41, no. 3 (June 2006): 498D–498. http://dx.doi.org/10.21273/hortsci.41.3.498d.
Hiremath, Rashmi B., and Ramesh M. Kagalkar. "Sign Language Video Processing for Text Detection in Hindi Language". International Journal of Recent Contributions from Engineering, Science & IT (iJES) 4, no. 3 (October 26, 2016): 21. http://dx.doi.org/10.3991/ijes.v4i3.5973.
Anugerah, Rezza, Yohanes Gatot Sutapa Yuliana, and Dwi Riyanti. "The Potential of English Learning Videos in Form of Vlog on YouTube for ELT Material Writers". Proceedings International Conference on Teaching and Education (ICoTE) 2, no. 2 (December 24, 2019): 224. http://dx.doi.org/10.26418/icote.v2i2.38232.
Liu, Yuqi, Luhui Xu, Pengfei Xiong, and Qin Jin. "Token Mixing: Parameter-Efficient Transfer Learning from Image-Language to Video-Language". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 2 (June 26, 2023): 1781–89. http://dx.doi.org/10.1609/aaai.v37i2.25267.
Moon, Nazmun Nessa, Imrus Salehin, Masuma Parvin, Md Mehedi Hasan, Iftakhar Mohammad Talha, Susanta Chandra Debnath, Fernaz Narin Nur, and Mohd Saifuzzaman. "Natural language processing based advanced method of unnecessary video detection". International Journal of Electrical and Computer Engineering (IJECE) 11, no. 6 (December 1, 2021): 5411. http://dx.doi.org/10.11591/ijece.v11i6.pp5411-5419.
Gernsbacher, Morton Ann. "Video Captions Benefit Everyone". Policy Insights from the Behavioral and Brain Sciences 2, no. 1 (October 2015): 195–202. http://dx.doi.org/10.1177/2372732215602130.
Dilawari, Aniqa, Muhammad Usman Ghani Khan, Yasser D. Al-Otaibi, Zahoor-ur Rehman, Atta-ur Rahman, and Yunyoung Nam. "Natural Language Description of Videos for Smart Surveillance". Applied Sciences 11, no. 9 (April 21, 2021): 3730. http://dx.doi.org/10.3390/app11093730.
Theses on the topic "Video and language"
Khan, Muhammad Usman Ghani. "Natural language descriptions for video streams". Thesis, University of Sheffield, 2012. http://etheses.whiterose.ac.uk/2789/.
Miech, Antoine. "Large-scale learning from video and natural language". Electronic Thesis or Diss., Université Paris sciences et lettres, 2020. http://www.theses.fr/2020UPSLE059.
The goal of this thesis is to build and train machine learning models capable of understanding the content of videos. Current video understanding approaches mainly rely on large-scale manually annotated video datasets for training. However, collecting and annotating such a dataset is cumbersome, expensive and time-consuming. To address this issue, this thesis focuses on leveraging large amounts of readily-available, but noisy annotations in the form of natural language. In particular, we exploit a diverse corpus of textual metadata such as movie scripts, web video titles and descriptions, or automatically transcribed speech obtained from narrated videos. Training video models on such readily-available textual data is challenging, as this annotation is often imprecise or wrong. In this thesis, we introduce learning approaches to deal with weak annotation and design specialized training objectives and neural network architectures.
Zhou, Mingjie. "Deep networks for sign language video caption". HKBU Institutional Repository, 2020. https://repository.hkbu.edu.hk/etd_oa/848.
Erozel, Guzen. "Natural Language Interface On A Video Data Model". Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/12606251/index.pdf.
Adam, Jameel. "Video annotation wiki for South African sign language". Thesis, University of the Western Cape, 2011. http://etd.uwc.ac.za/index.php?module=etd&action=viewtitle&id=gen8Srv25Nme4_1540_1304499135.
The SASL project at the University of the Western Cape aims at developing a fully automated translation system between English and South African Sign Language (SASL). Three important aspects of this system require SASL documentation and knowledge: recognition of SASL from a video sequence, linguistic translation between SASL and English, and the rendering of SASL. Unfortunately, SASL documentation is a scarce resource and no official or complete documentation exists. This research focuses on creating an online collaborative video annotation knowledge management system for SASL, to which members of the community can upload SASL videos and annotate them in any of the sign language notation systems SignWriting, HamNoSys and/or Stokoe. As such, knowledge about SASL structure is pooled into a central and freely accessible knowledge base that can be used as required. The usability and performance of the system were evaluated. Usability was graded by users on a rating scale from one to five for a specific set of tasks; the system achieved an overall usability of 3.1, slightly better than average. The performance evaluation included load and stress tests which measured the system response time for a number of users on a specific set of tasks. The system was found to be stable and able to scale up to cater for an increasing user base by improving the underlying hardware.
Ou, Yingzhe, and 区颖哲. "Teaching Chinese as a second language through video". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2011. http://hub.hku.hk/bib/B48368714.
Addis, Pietro <1991>. "The Age of Video Games: Language and Narrative". Master's Degree Thesis, Università Ca' Foscari Venezia, 2017. http://hdl.handle.net/10579/10634.
Muir, Laura J. "Content-prioritised video coding for British Sign Language communication". Thesis, Robert Gordon University, 2007. http://hdl.handle.net/10059/177.
Laveborn, Joel. "Video Game Vocabulary: The effect of video games on Swedish learners' word comprehension". Thesis, Karlstad University, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-5487.
Video games are very popular among children in the Western world. This study investigated whether video games had an effect on 49 Swedish students' (grades 7–8) comprehension of English words. The investigation was based on questionnaire and word-test data. The questionnaire measured how frequently students played video games, and the word test measured their word comprehension in general. In addition, data from the word test were used to investigate how students explained the words; depending on their explanations, students were categorized as using either a "video game approach" or a "dictionary approach".
The results showed a gender difference, both with regard to the frequency of playing and the types of games played. Playing video games seemed to increase the students' comprehension of English words, though there was no clear connection between how frequently students played video games and their choice of a dictionary or video game approach as an explanation.
Lopes, Solange Aparecida. "A descriptive study of the interaction behaviors in a language video program and in live elementary language classes using that video program". Diss., Virginia Polytechnic Institute and State University, 1996. http://scholar.lib.vt.edu/theses/available/etd-10052007-143033/.
Books on the topic "Video and language"
Lonergan, Jack. Video in language learning. London: Linguaphone Institute, 1987.
Altman, Rick. The video connection: Integrating video into language teaching. Boston: Houghton Mifflin, 1989.
Rhodes, Nancy C. Language by video: An overview of foreign language instructional videos for children. Washington, DC: Center for Applied Linguistics/Delta Systems, 2004.
Tomalin, Barry, ed. Video in action: Recipes for using video in language teaching. New York: Prentice Hall International, 1990.
Tomalin, Barry, ed. Video in action: Recipes for using video in language teaching. New York, N.Y.: Prentice Hall, 1990.
Greenall, Simon. Reward video. Oxford: Heinemann, 1998.
Hutchinson, Tom. Project video. Oxford: Oxford University Press, 1991.
Hutchinson, Tom. Project video. Oxford: Oxford University Press, 1992.
Cooper, Richard. Video. Oxford, [England]: Oxford University Press, 1993.
Book chapters on the topic "Video and language"
Austin, Erin E. H. "Video Options". In Going Global in the World Language Classroom, 76–83. New York: Routledge, 2023. http://dx.doi.org/10.4324/9781003384267-12.
Klimas, Janina. "Video and Drama Activities". In Building Proficiency for World Language Learners, 190–210. New York: Eye on Education, 2024. http://dx.doi.org/10.4324/9781032622507-16.
Zhang, Shilin, and Mei Gu. "Research on Hand Language Video Retrieval". In Lecture Notes in Computer Science, 648–55. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-13498-2_85.
Rylander, John, Phillip Clark, and Richard Derrah. "A video-based method of assessing pragmatic awareness". In Assessing Second Language Pragmatics, 65–97. London: Palgrave Macmillan UK, 2013. http://dx.doi.org/10.1057/9781137003522_3.
Ma, Minuk, Sunjae Yoon, Junyeong Kim, Youngjoon Lee, Sunghun Kang, and Chang D. Yoo. "VLANet: Video-Language Alignment Network for Weakly-Supervised Video Moment Retrieval". In Computer Vision – ECCV 2020, 156–71. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58604-1_10.
Stopel, Bartosz. "On Botched Cinematic Transformations of Video Games". In Second Language Learning and Teaching, 173–90. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-25189-5_12.
Heyerick, Isabelle. "Chapter 5. The importance of video recordings in signed language interpreting research". In Linking up with Video, 127–49. Amsterdam: John Benjamins Publishing Company, 2020. http://dx.doi.org/10.1075/btl.149.06hey.
Khoreva, Anna, Anna Rohrbach, and Bernt Schiele. "Video Object Segmentation with Language Referring Expressions". In Computer Vision – ACCV 2018, 123–41. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-20870-7_8.
Zhang, Junchao, and Yuxin Peng. "Hierarchical Vision-Language Alignment for Video Captioning". In MultiMedia Modeling, 42–54. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-05710-7_4.
Kacetl, Jaroslav, and Magdalena Fiserova. "Online Video Clips in Foreign Language Teaching". In Business Challenges in the Changing Economic Landscape – Vol. 2, 355–64. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-22593-7_26.
Conference papers on the topic "Video and language"
Bosy, Karen, and Cristina Portugal. "Media Language: Video practices". In Proceedings of EVA London 2020. BCS Learning and Development Ltd, 2020. http://dx.doi.org/10.14236/ewic/eva2020.53.
Buch, Shyamal, Cristobal Eyzaguirre, Adrien Gaidon, Jiajun Wu, Li Fei-Fei, and Juan Carlos Niebles. "Revisiting the "Video" in Video-Language Understanding". In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2022. http://dx.doi.org/10.1109/cvpr52688.2022.00293.
Liu, Runze, Yaqun Fang, Fan Yu, Ruiqi Tian, Tongwei Ren, and Gangshan Wu. "Deep Video Understanding with Video-Language Model". In MM '23: The 31st ACM International Conference on Multimedia. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3581783.3612863.
Nam, Yoonsoo, Adam Lehavi, Daniel Yang, Digbalay Bose, Swabha Swayamdipta, and Shrikanth Narayanan. "Does Video Summarization Require Videos? Quantifying the Effectiveness of Language in Video Summarization". In ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2024. http://dx.doi.org/10.1109/icassp48485.2024.10445931.
Swartz, Jonathan, and Brian C. Smith. "A resolution independent video language". In Proceedings of the third ACM international conference on Multimedia. New York, New York, USA: ACM Press, 1995. http://dx.doi.org/10.1145/217279.215265.
Kountchev, R., Vl. Todorov, and R. Kountcheva. "Efficient sign language video representation". In 2008 International Conference on Systems, Signals and Image Processing (IWSSIP). IEEE, 2008. http://dx.doi.org/10.1109/iwssip.2008.4604396.
Li, Linjie, Zhe Gan, Kevin Lin, Chung-Ching Lin, Zicheng Liu, Ce Liu, and Lijuan Wang. "LAVENDER: Unifying Video-Language Understanding as Masked Language Modeling". In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2023. http://dx.doi.org/10.1109/cvpr52729.2023.02214.
Tellex, Stefanie, Thomas Kollar, George Shaw, Nicholas Roy, and Deb Roy. "Grounding spatial language for video search". In International Conference on Multimodal Interfaces and the Workshop on Machine Learning for Multimodal Interaction. New York, New York, USA: ACM Press, 2010. http://dx.doi.org/10.1145/1891903.1891944.
Zhang, Shilin, and Hai Wang. "HMM based hand language video retrieval". In 2010 International Conference on Intelligent Control and Information Processing (ICICIP). IEEE, 2010. http://dx.doi.org/10.1109/icicip.2010.5564284.
Gupta, Vaidik, Rohan Punjani, Mayur Vaswani, and Jyoti Kundale. "Video Conferencing with Sign language Detection". In 2022 2nd Asian Conference on Innovation in Technology (ASIANCON). IEEE, 2022. http://dx.doi.org/10.1109/asiancon55314.2022.9908973.
Reports on the topic "Video and language"
Liang, Yiqing. Video Retrieval Based on Language and Image Analysis. Fort Belvoir, VA: Defense Technical Information Center, May 1999. http://dx.doi.org/10.21236/ada364129.
Chorna, Olha V., Vita A. Hamaniuk, and Aleksandr D. Uchitel. Use of YouTube on lessons of practical course of German language as the first and second language at the pedagogical university. [б. в.], September 2019. http://dx.doi.org/10.31812/123456789/3253.
Smith, Michael A., and Takeo Kanade. Video Skimming and Characterization through the Combination of Image and Language Understanding Techniques. Fort Belvoir, VA: Defense Technical Information Center, February 1997. http://dx.doi.org/10.21236/ada333857.
Decleir, Cyril, Mohand-Saïd Hacid, and Jacques Kouloumdjian. A Database Approach for Modeling and Querying Video Data. Aachen University of Technology, 1999. http://dx.doi.org/10.25368/2022.90.
Trullinger, Richard. Differential measurement of a language concept presented via video tape playback to first grade students. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.2420.
Symonenko, Svitlana V., Nataliia V. Zaitseva, Viacheslav V. Osadchyi, Kateryna P. Osadcha, and Ekaterina O. Shmeltser. Virtual reality in foreign language training at higher educational institutions. [б. в.], February 2020. http://dx.doi.org/10.31812/123456789/3759.
Sandeep, Bhushan, Huang Xin, and Xiao Zongwei. A comparison of regional anesthesia techniques in patients undergoing video-assisted thoracic surgery: A network meta-analysis. INPLASY - International Platform of Registered Systematic Review and Meta-analysis Protocols, February 2022. http://dx.doi.org/10.37766/inplasy2022.2.0003.
Pikilnyak, Andrey V., Nadia M. Stetsenko, Volodymyr P. Stetsenko, Tetiana V. Bondarenko, and Halyna V. Tkachuk. Comparative analysis of online dictionaries in the context of the digital transformation of education. [б. в.], June 2021. http://dx.doi.org/10.31812/123456789/4431.
Brenzel, Jeffrey, and Burr Settles. The Duolingo English Test: Design, Validity, and Value. Duolingo, September 2017. http://dx.doi.org/10.46999/lyqs3238.
Petrovych, Olha B., Alla P. Vinnichuk, Viktor P. Krupka, Iryna A. Zelenenka, and Andrei V. Voznyak. The usage of augmented reality technologies in professional training of future teachers of Ukrainian language and literature. CEUR Workshop Proceedings, July 2021. http://dx.doi.org/10.31812/123456789/4635.