Selected scientific literature on the topic "Video and language"
Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles
Contents
Consult the list of current articles, books, theses, conference proceedings, and other scientific sources relevant to the topic "Video and language".
Next to each source in the list of references there is an "Add to bibliography" button. Click it and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online if it is available in the metadata.
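The citation-generation step described above can be sketched in a few lines. The following is a minimal, hypothetical Python illustration (the function names and record schema are assumptions for this sketch, not the site's actual code), rendering one real entry from this list in two simplified styles:

```python
# Hypothetical sketch of what an "Add to bibliography" button might do:
# render one stored bibliographic record in the reader's chosen style.
# The record below is a real entry from this list; the schema and function
# names are illustrative assumptions.

def format_apa(ref):
    """Journal-article reference, simplified APA style."""
    return (f"{ref['authors']} ({ref['year']}). {ref['title']}. "
            f"{ref['journal']}, {ref['volume']}({ref['issue']}), "
            f"{ref['pages']}. https://doi.org/{ref['doi']}")

def format_mla(ref):
    """Journal-article reference, simplified MLA style."""
    return (f"{ref['authors']} \"{ref['title']}.\" {ref['journal']}, "
            f"vol. {ref['volume']}, no. {ref['issue']}, {ref['year']}, "
            f"pp. {ref['pages']}.")

record = {
    "authors": "Gernsbacher, Morton Ann.",
    "year": 2015,
    "title": "Video Captions Benefit Everyone",
    "journal": "Policy Insights from the Behavioral and Brain Sciences",
    "volume": 2,
    "issue": 1,
    "pages": "195-202",
    "doi": "10.1177/2372732215602130",
}

print(format_apa(record))
print(format_mla(record))
```

A real generator would also normalize author-name order, capitalization, and punctuation per style guide; the sketch only shows the field-to-template mapping.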
Journal articles on the topic "Video and language"
Joshi, Prof Indira. "Video Summarization for Marathi Language". INTERANTIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 05 (May 3, 2024): 1–5. http://dx.doi.org/10.55041/ijsrem32024.
Walther, Joseph B., German Neubaum, Leonie Rösner, Stephan Winter, and Nicole C. Krämer. "The Effect of Bilingual Congruence on the Persuasive Influence of Videos and Comments on YouTube". Journal of Language and Social Psychology 37, no. 3 (August 11, 2017): 310–29. http://dx.doi.org/10.1177/0261927x17724552.
Shipman, Frank M., Ricardo Gutierrez-Osuna, and Caio D. D. Monteiro. "Identifying Sign Language Videos in Video Sharing Sites". ACM Transactions on Accessible Computing 5, no. 4 (March 2014): 1–14. http://dx.doi.org/10.1145/2579698.
Sanders, D. C., L. M. Reyes, D. J. Osborne, D. R. Ward, and D. E. Blackwelder. "USING A BILINGUAL GAPS AND HAND-WASHING DVD TO TRAIN FRESH PRODUCE FIELD AND PACKINGHOUSE WORKERS". HortScience 41, no. 3 (June 2006): 498D—498. http://dx.doi.org/10.21273/hortsci.41.3.498d.
Hiremath, Rashmi B., and Ramesh M. Kagalkar. "Sign Language Video Processing for Text Detection in Hindi Language". International Journal of Recent Contributions from Engineering, Science & IT (iJES) 4, no. 3 (October 26, 2016): 21. http://dx.doi.org/10.3991/ijes.v4i3.5973.
Anugerah, Rezza, Yohanes Gatot Sutapa Yuliana, and Dwi Riyanti. "THE POTENTIAL OF ENGLISH LEARNING VIDEOS IN FORM OF VLOG ON YOUTUBE FOR ELT MATERIAL WRITERS". Proceedings International Conference on Teaching and Education (ICoTE) 2, no. 2 (December 24, 2019): 224. http://dx.doi.org/10.26418/icote.v2i2.38232.
Liu, Yuqi, Luhui Xu, Pengfei Xiong, and Qin Jin. "Token Mixing: Parameter-Efficient Transfer Learning from Image-Language to Video-Language". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 2 (June 26, 2023): 1781–89. http://dx.doi.org/10.1609/aaai.v37i2.25267.
Moon, Nazmun Nessa, Imrus Salehin, Masuma Parvin, Md Mehedi Hasan, Iftakhar Mohammad Talha, Susanta Chandra Debnath, Fernaz Narin Nur, and Mohd Saifuzzaman. "Natural language processing based advanced method of unnecessary video detection". International Journal of Electrical and Computer Engineering (IJECE) 11, no. 6 (December 1, 2021): 5411. http://dx.doi.org/10.11591/ijece.v11i6.pp5411-5419.
Gernsbacher, Morton Ann. "Video Captions Benefit Everyone". Policy Insights from the Behavioral and Brain Sciences 2, no. 1 (October 2015): 195–202. http://dx.doi.org/10.1177/2372732215602130.
Dilawari, Aniqa, Muhammad Usman Ghani Khan, Yasser D. Al-Otaibi, Zahoor-ur Rehman, Atta-ur Rahman, and Yunyoung Nam. "Natural Language Description of Videos for Smart Surveillance". Applied Sciences 11, no. 9 (April 21, 2021): 3730. http://dx.doi.org/10.3390/app11093730.
Theses / dissertations on the topic "Video and language"
Khan, Muhammad Usman Ghani. "Natural language descriptions for video streams". Thesis, University of Sheffield, 2012. http://etheses.whiterose.ac.uk/2789/.
Texto completo da fonteMiech, Antoine. "Large-scale learning from video and natural language". Electronic Thesis or Diss., Université Paris sciences et lettres, 2020. http://www.theses.fr/2020UPSLE059.
Texto completo da fonteThe goal of this thesis is to build and train machine learning models capable of understanding the content of videos. Current video understanding approaches mainly rely on large-scale manually annotated video datasets for training. However, collecting and annotating such dataset is cumbersome, expensive and time-consuming. To address this issue, this thesis focuses on leveraging large amounts of readily-available, but noisy annotations in the form of natural language. In particular, we exploit a diverse corpus of textual metadata such as movie scripts, web video titles and descriptions or automatically transcribed speech obtained from narrated videos. Training video models on such readily-available textual data is challenging as such annotation is often imprecise or wrong. In this thesis, we introduce learning approaches to deal with weak annotation and design specialized training objectives and neural network architectures
Zhou, Mingjie. "Deep networks for sign language video caption". HKBU Institutional Repository, 2020. https://repository.hkbu.edu.hk/etd_oa/848.
Erozel, Guzen. "Natural Language Interface On A Video Data Model". Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/12606251/index.pdf.
Adam, Jameel. "Video annotation wiki for South African sign language". Thesis, University of the Western Cape, 2011. http://etd.uwc.ac.za/index.php?module=etd&action=viewtitle&id=gen8Srv25Nme4_1540_1304499135.
The SASL project at the University of the Western Cape aims at developing a fully automated translation system between English and South African Sign Language (SASL). Three important aspects of this system require SASL documentation and knowledge. These are: recognition of SASL from a video sequence, linguistic translation between SASL and English and the rendering of SASL. Unfortunately, SASL documentation is a scarce resource and no official or complete documentation exists. This research focuses on creating an online collaborative video annotation knowledge management system for SASL where various members of the community can upload SASL videos to and annotate them in any of the sign language notation systems, SignWriting, HamNoSys and/or Stokoe. As such, knowledge about SASL structure is pooled into a central and freely accessible knowledge base that can be used as required. The usability and performance of the system were evaluated. The usability of the system was graded by users on a rating scale from one to five for a specific set of tasks. The system was found to have an overall usability of 3.1, slightly better than average. The performance evaluation included load and stress tests which measured the system response time for a number of users for a specific set of tasks. It was found that the system is stable and can scale up to cater for an increasing user base by improving the underlying hardware.
Ou, Yingzhe, and 区颖哲. "Teaching Chinese as a second language through video". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2011. http://hub.hku.hk/bib/B48368714.
Addis, Pietro <1991>. "The Age of Video Games: Language and Narrative". Master's Degree Thesis, Università Ca' Foscari Venezia, 2017. http://hdl.handle.net/10579/10634.
Muir, Laura J. "Content-prioritised video coding for British Sign Language communication". Thesis, Robert Gordon University, 2007. http://hdl.handle.net/10059/177.
Texto completo da fonteLaveborn, Joel. "Video Game Vocabulary : The effect of video games on Swedish learners‟ word comprehension". Thesis, Karlstad University, Karlstad University, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-5487.
Video games are very popular among children in the Western world. This study was done in order to investigate whether video games had an effect on 49 Swedish students' comprehension of English words (grades 7-8). The investigation was based on questionnaire and word test data. The questionnaire aimed to measure the frequency with which students played video games, and the word test aimed to measure their word comprehension in general. In addition, data from the word test were used to investigate how students explained the words. Depending on their explanations, students were categorized as using either a "video game approach" or a "dictionary approach" in their explanations.
The results showed a gender difference, both with regard to the frequency of playing and the types of games that were played. Playing video games seemed to increase the students' comprehension of English words, though there was no clear connection between the frequency with which students played video games and the choice of a dictionary or video game approach as an explanation.
Lopes, Solange Aparecida. "A descriptive study of the interaction behaviors in a language video program and in live elementary language classes using that video program". Diss., This resource online, 1996. http://scholar.lib.vt.edu/theses/available/etd-10052007-143033/.
Books on the topic "Video and language"
Lonergan, Jack. Video in language learning. London: Linguaphone Institute, 1987.
Altman, Rick. The video connection: Integrating video into language teaching. Boston: Houghton Mifflin, 1989.
The video connection: Integrating video into language teaching. Boston: Houghton Mifflin, 1989.
Rhodes, Nancy C. Language by video: An overview of foreign language instructional videos for children. Washington, DC: Center for Applied Linguistics/Delta Systems, 2004.
Tomalin, Barry, ed. Video in action: Recipes for using video in language teaching. New York: Prentice Hall International, 1990.
Tomalin, Barry, ed. Video in action: Recipes for using video in language teaching. New York, N.Y.: Prentice Hall, 1990.
Greenall, Simon. Reward video. Oxford: Heinemann, 1998.
Hutchinson, Tom. Project video. Oxford: Oxford University Press, 1991.
Hutchinson, Tom. Project video. Oxford: Oxford University Press, 1992.
Cooper, Richard. Video. Oxford, [England]: Oxford University Press, 1993.
Encontre o texto completo da fonteCapítulos de livros sobre o assunto "Video and language"
Austin, Erin E. H. "Video Options". In Going Global in the World Language Classroom, 76–83. New York: Routledge, 2023. http://dx.doi.org/10.4324/9781003384267-12.
Klimas, Janina. "Video and Drama Activities". In Building Proficiency for World Language Learners, 190–210. New York: Eye on Education, 2024. http://dx.doi.org/10.4324/9781032622507-16.
Zhang, Shilin, and Mei Gu. "Research on Hand Language Video Retrieval". In Lecture Notes in Computer Science, 648–55. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-13498-2_85.
Rylander, John, Phillip Clark, and Richard Derrah. "A video-based method of assessing pragmatic awareness". In Assessing Second Language Pragmatics, 65–97. London: Palgrave Macmillan UK, 2013. http://dx.doi.org/10.1057/9781137003522_3.
Ma, Minuk, Sunjae Yoon, Junyeong Kim, Youngjoon Lee, Sunghun Kang, and Chang D. Yoo. "VLANet: Video-Language Alignment Network for Weakly-Supervised Video Moment Retrieval". In Computer Vision – ECCV 2020, 156–71. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58604-1_10.
Stopel, Bartosz. "On Botched Cinematic Transformations of Video Games". In Second Language Learning and Teaching, 173–90. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-25189-5_12.
Heyerick, Isabelle. "Chapter 5. The importance of video recordings in signed language interpreting research". In Linking up with Video, 127–49. Amsterdam: John Benjamins Publishing Company, 2020. http://dx.doi.org/10.1075/btl.149.06hey.
Khoreva, Anna, Anna Rohrbach, and Bernt Schiele. "Video Object Segmentation with Language Referring Expressions". In Computer Vision – ACCV 2018, 123–41. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-20870-7_8.
Zhang, Junchao, and Yuxin Peng. "Hierarchical Vision-Language Alignment for Video Captioning". In MultiMedia Modeling, 42–54. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-05710-7_4.
Kacetl, Jaroslav, and Madgalena Fiserova. "Online Video Clips in Foreign Language Teaching". In Business Challenges in the Changing Economic Landscape - Vol. 2, 355–64. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-22593-7_26.
Texto completo da fonteTrabalhos de conferências sobre o assunto "Video and language"
Bosy, Karen, and Cristina Portugal. "Media Language: Video practices". In Proceedings of EVA London 2020. BCS Learning and Development Ltd, 2020. http://dx.doi.org/10.14236/ewic/eva2020.53.
Buch, Shyamal, Cristobal Eyzaguirre, Adrien Gaidon, Jiajun Wu, Li Fei-Fei, and Juan Carlos Niebles. "Revisiting the “Video” in Video-Language Understanding". In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2022. http://dx.doi.org/10.1109/cvpr52688.2022.00293.
Liu, Runze, Yaqun Fang, Fan Yu, Ruiqi Tian, Tongwei Ren, and Gangshan Wu. "Deep Video Understanding with Video-Language Model". In MM '23: The 31st ACM International Conference on Multimedia. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3581783.3612863.
Nam, Yoonsoo, Adam Lehavi, Daniel Yang, Digbalay Bose, Swabha Swayamdipta, and Shrikanth Narayanan. "Does Video Summarization Require Videos? Quantifying the Effectiveness of Language in Video Summarization". In ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2024. http://dx.doi.org/10.1109/icassp48485.2024.10445931.
Swartz, Jonathan, and Brian C. Smith. "A resolution independent video language". In the third ACM international conference. New York, New York, USA: ACM Press, 1995. http://dx.doi.org/10.1145/217279.215265.
Kountchev, R., Vl Todorov, and R. Kountcheva. "Efficient sign language video representation". In 2008 International Conference on Systems, Signals and Image Processing (IWSSIP). IEEE, 2008. http://dx.doi.org/10.1109/iwssip.2008.4604396.
Li, Linjie, Zhe Gan, Kevin Lin, Chung-Ching Lin, Zicheng Liu, Ce Liu, and Lijuan Wang. "LAVENDER: Unifying Video-Language Understanding as Masked Language Modeling". In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2023. http://dx.doi.org/10.1109/cvpr52729.2023.02214.
Tellex, Stefanie, Thomas Kollar, George Shaw, Nicholas Roy, and Deb Roy. "Grounding spatial language for video search". In International Conference on Multimodal Interfaces and the Workshop on Machine Learning for Multimodal Interaction. New York, New York, USA: ACM Press, 2010. http://dx.doi.org/10.1145/1891903.1891944.
Zhang, Shilin, and Hai Wang. "HMM based hand language video retrieval". In 2010 International Conference on Intelligent Control and Information Processing (ICICIP). IEEE, 2010. http://dx.doi.org/10.1109/icicip.2010.5564284.
Gupta, Vaidik, Rohan Punjani, Mayur Vaswani, and Jyoti Kundale. "Video Conferencing with Sign language Detection". In 2022 2nd Asian Conference on Innovation in Technology (ASIANCON). IEEE, 2022. http://dx.doi.org/10.1109/asiancon55314.2022.9908973.
Texto completo da fonteRelatórios de organizações sobre o assunto "Video and language"
Liang, Yiqing. Video Retrieval Based on Language and Image Analysis. Fort Belvoir, VA: Defense Technical Information Center, May 1999. http://dx.doi.org/10.21236/ada364129.
Chorna, Olha V., Vita A. Hamaniuk, and Aleksandr D. Uchitel. Use of YouTube on lessons of practical course of German language as the first and second language at the pedagogical university. [б. в.], September 2019. http://dx.doi.org/10.31812/123456789/3253.
Smith, Michael A., and Takeo Kanade. Video Skimming and Characterization through the Combination of Image and Language Understanding Techniques. Fort Belvoir, VA: Defense Technical Information Center, February 1997. http://dx.doi.org/10.21236/ada333857.
Decleir, Cyril, Mohand-Saïd Hacid, and Jacques Kouloumdjian. A Database Approach for Modeling and Querying Video Data. Aachen University of Technology, 1999. http://dx.doi.org/10.25368/2022.90.
Trullinger, Richard. Differential measurement of a language concept presented via video tape playback to first grade students. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.2420.
Symonenko, Svitlana V., Nataliia V. Zaitseva, Viacheslav V. Osadchyi, Kateryna P. Osadcha, and Ekaterina O. Shmeltser. Virtual reality in foreign language training at higher educational institutions. [б. в.], February 2020. http://dx.doi.org/10.31812/123456789/3759.
Sandeep, Bhushan, Huang Xin, and Xiao Zongwei. A comparison of regional anesthesia techniques in patients undergoing of video-assisted thoracic surgery: A network meta-analysis. INPLASY - International Platform of Registered Systematic Review and Meta-analysis Protocols, February 2022. http://dx.doi.org/10.37766/inplasy2022.2.0003.
Pikilnyak, Andrey V., Nadia M. Stetsenko, Volodymyr P. Stetsenko, Tetiana V. Bondarenko, and Halyna V. Tkachuk. Comparative analysis of online dictionaries in the context of the digital transformation of education. [б. в.], June 2021. http://dx.doi.org/10.31812/123456789/4431.
Brenzel, Jeffrey, and Burr Settles. The Duolingo English Test: Design, Validity, and Value. Duolingo, September 2017. http://dx.doi.org/10.46999/lyqs3238.
Petrovych, Olha B., Alla P. Vinnichuk, Viktor P. Krupka, Iryna A. Zelenenka, and Andrei V. Voznyak. The usage of augmented reality technologies in professional training of future teachers of Ukrainian language and literature. CEUR Workshop Proceedings, July 2021. http://dx.doi.org/10.31812/123456789/4635.