Table of contents
A selection of scientific literature on the topic "Video and language"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Video and language".
Next to every work in the bibliography you will find an "Add to bibliography" option. Use it, and the bibliographic reference for the selected work is generated automatically in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a PDF and read its online abstract, provided the relevant parameters are available in the metadata.
Journal articles on the topic "Video and language"
Joshi, Prof Indira. "Video Summarization for Marathi Language". INTERANTIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 05 (3 May 2024): 1–5. http://dx.doi.org/10.55041/ijsrem32024.
Walther, Joseph B., German Neubaum, Leonie Rösner, Stephan Winter, and Nicole C. Krämer. "The Effect of Bilingual Congruence on the Persuasive Influence of Videos and Comments on YouTube". Journal of Language and Social Psychology 37, no. 3 (11 August 2017): 310–29. http://dx.doi.org/10.1177/0261927x17724552.
Shipman, Frank M., Ricardo Gutierrez-Osuna, and Caio D. D. Monteiro. "Identifying Sign Language Videos in Video Sharing Sites". ACM Transactions on Accessible Computing 5, no. 4 (March 2014): 1–14. http://dx.doi.org/10.1145/2579698.
Sanders, D. C., L. M. Reyes, D. J. Osborne, D. R. Ward, and D. E. Blackwelder. "USING A BILINGUAL GAPS AND HAND-WASHING DVD TO TRAIN FRESH PRODUCE FIELD AND PACKINGHOUSE WORKERS". HortScience 41, no. 3 (June 2006): 498D–498. http://dx.doi.org/10.21273/hortsci.41.3.498d.
Hiremath, Rashmi B., and Ramesh M. Kagalkar. "Sign Language Video Processing for Text Detection in Hindi Language". International Journal of Recent Contributions from Engineering, Science & IT (iJES) 4, no. 3 (26 October 2016): 21. http://dx.doi.org/10.3991/ijes.v4i3.5973.
Anugerah, Rezza, Yohanes Gatot Sutapa Yuliana, and Dwi Riyanti. "THE POTENTIAL OF ENGLISH LEARNING VIDEOS IN FORM OF VLOG ON YOUTUBE FOR ELT MATERIAL WRITERS". Proceedings International Conference on Teaching and Education (ICoTE) 2, no. 2 (24 December 2019): 224. http://dx.doi.org/10.26418/icote.v2i2.38232.
Liu, Yuqi, Luhui Xu, Pengfei Xiong, and Qin Jin. "Token Mixing: Parameter-Efficient Transfer Learning from Image-Language to Video-Language". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 2 (26 June 2023): 1781–89. http://dx.doi.org/10.1609/aaai.v37i2.25267.
Moon, Nazmun Nessa, Imrus Salehin, Masuma Parvin, Md Mehedi Hasan, Iftakhar Mohammad Talha, Susanta Chandra Debnath, Fernaz Narin Nur, and Mohd Saifuzzaman. "Natural language processing based advanced method of unnecessary video detection". International Journal of Electrical and Computer Engineering (IJECE) 11, no. 6 (1 December 2021): 5411. http://dx.doi.org/10.11591/ijece.v11i6.pp5411-5419.
Gernsbacher, Morton Ann. "Video Captions Benefit Everyone". Policy Insights from the Behavioral and Brain Sciences 2, no. 1 (October 2015): 195–202. http://dx.doi.org/10.1177/2372732215602130.
Dilawari, Aniqa, Muhammad Usman Ghani Khan, Yasser D. Al-Otaibi, Zahoor-ur Rehman, Atta-ur Rahman, and Yunyoung Nam. "Natural Language Description of Videos for Smart Surveillance". Applied Sciences 11, no. 9 (21 April 2021): 3730. http://dx.doi.org/10.3390/app11093730.
Dissertations on the topic "Video and language"
Khan, Muhammad Usman Ghani. "Natural language descriptions for video streams". Thesis, University of Sheffield, 2012. http://etheses.whiterose.ac.uk/2789/.
Miech, Antoine. "Large-scale learning from video and natural language". Electronic Thesis or Diss., Université Paris sciences et lettres, 2020. http://www.theses.fr/2020UPSLE059.
Der volle Inhalt der QuelleThe goal of this thesis is to build and train machine learning models capable of understanding the content of videos. Current video understanding approaches mainly rely on large-scale manually annotated video datasets for training. However, collecting and annotating such dataset is cumbersome, expensive and time-consuming. To address this issue, this thesis focuses on leveraging large amounts of readily-available, but noisy annotations in the form of natural language. In particular, we exploit a diverse corpus of textual metadata such as movie scripts, web video titles and descriptions or automatically transcribed speech obtained from narrated videos. Training video models on such readily-available textual data is challenging as such annotation is often imprecise or wrong. In this thesis, we introduce learning approaches to deal with weak annotation and design specialized training objectives and neural network architectures
Zhou, Mingjie. "Deep networks for sign language video caption". HKBU Institutional Repository, 2020. https://repository.hkbu.edu.hk/etd_oa/848.
Erozel, Guzen. "Natural Language Interface On A Video Data Model". Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/12606251/index.pdf.
Adam, Jameel. "Video annotation wiki for South African sign language". Thesis, University of the Western Cape, 2011. http://etd.uwc.ac.za/index.php?module=etd&action=viewtitle&id=gen8Srv25Nme4_1540_1304499135.
The SASL project at the University of the Western Cape aims at developing a fully automated translation system between English and South African Sign Language (SASL). Three important aspects of this system require SASL documentation and knowledge: recognition of SASL from a video sequence, linguistic translation between SASL and English, and the rendering of SASL. Unfortunately, SASL documentation is a scarce resource, and no official or complete documentation exists. This research focuses on creating an online collaborative video annotation knowledge management system for SASL, to which members of the community can upload SASL videos and annotate them in any of the sign language notation systems SignWriting, HamNoSys and/or Stokoe. As such, knowledge about SASL structure is pooled into a central and freely accessible knowledge base that can be used as required. The usability and performance of the system were evaluated. The usability of the system was graded by users on a rating scale from one to five for a specific set of tasks; the system achieved an overall usability of 3.1, slightly better than average. The performance evaluation included load and stress tests, which measured the system's response time for a number of users performing a specific set of tasks. The system was found to be stable and able to scale up to cater for a growing user base by improving the underlying hardware.
Ou, Yingzhe (区颖哲). "Teaching Chinese as a second language through video". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2011. http://hub.hku.hk/bib/B48368714.
Master of Education
Addis, Pietro <1991>. "The Age of Video Games: Language and Narrative". Master's Degree Thesis, Università Ca' Foscari Venezia, 2017. http://hdl.handle.net/10579/10634.
Muir, Laura J. "Content-prioritised video coding for British Sign Language communication". Thesis, Robert Gordon University, 2007. http://hdl.handle.net/10059/177.
Laveborn, Joel. "Video Game Vocabulary: The effect of video games on Swedish learners' word comprehension". Thesis, Karlstad University, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-5487.
Video games are very popular among children in the Western world. This study was done in order to investigate whether video games had an effect on 49 Swedish students' comprehension of English words (grades 7–8). The investigation was based on questionnaire and word test data. The questionnaire aimed to measure how frequently students played video games, and the word test aimed to measure their word comprehension in general. In addition, data from the word test were used to investigate how students explained the words. Depending on their explanations, students were categorized as using either a "video game approach" or a "dictionary approach".
The results showed a gender difference, both in how frequently video games were played and in which types of games were played. Playing video games seemed to increase the students' comprehension of English words, though there was no clear connection between the frequency with which students played video games and the choice of a dictionary or video game approach as an explanation.
Lopes, Solange Aparecida. "A descriptive study of the interaction behaviors in a language video program and in live elementary language classes using that video program". Diss., 1996. http://scholar.lib.vt.edu/theses/available/etd-10052007-143033/.
Books on the topic "Video and language"
Lonergan, Jack. Video in language learning. London: Linguaphone Institute, 1987.
Altman, Rick. The video connection: Integrating video into language teaching. Boston: Houghton Mifflin, 1989.
The video connection: Integrating video into language teaching. Boston: Houghton Mifflin, 1989.
Rhodes, Nancy C. Language by video: An overview of foreign language instructional videos for children. Washington, DC: Center for Applied Linguistics/Delta Systems, 2004.
Tomalin, Barry (1942–), ed. Video in action: Recipes for using video in language teaching. New York: Prentice Hall International, 1990.
Tomalin, Barry (1942–), ed. Video in action: Recipes for using video in language teaching. New York, N.Y.: Prentice Hall, 1990.
Greenall, Simon. Reward video. Oxford: Heinemann, 1998.
Hutchinson, Tom. Project video. Oxford: Oxford University Press, 1991.
Hutchinson, Tom. Project video. Oxford: Oxford University Press, 1992.
Cooper, Richard. Video. Oxford, [England]: Oxford University Press, 1993.
Book chapters on the topic "Video and language"
Austin, Erin E. H. "Video Options". In Going Global in the World Language Classroom, 76–83. New York: Routledge, 2023. http://dx.doi.org/10.4324/9781003384267-12.
Klimas, Janina. "Video and Drama Activities". In Building Proficiency for World Language Learners, 190–210. New York: Eye on Education, 2024. http://dx.doi.org/10.4324/9781032622507-16.
Zhang, Shilin, and Mei Gu. "Research on Hand Language Video Retrieval". In Lecture Notes in Computer Science, 648–55. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-13498-2_85.
Rylander, John, Phillip Clark, and Richard Derrah. "A video-based method of assessing pragmatic awareness". In Assessing Second Language Pragmatics, 65–97. London: Palgrave Macmillan UK, 2013. http://dx.doi.org/10.1057/9781137003522_3.
Ma, Minuk, Sunjae Yoon, Junyeong Kim, Youngjoon Lee, Sunghun Kang, and Chang D. Yoo. "VLANet: Video-Language Alignment Network for Weakly-Supervised Video Moment Retrieval". In Computer Vision – ECCV 2020, 156–71. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58604-1_10.
Stopel, Bartosz. "On Botched Cinematic Transformations of Video Games". In Second Language Learning and Teaching, 173–90. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-25189-5_12.
Heyerick, Isabelle. "Chapter 5. The importance of video recordings in signed language interpreting research". In Linking up with Video, 127–49. Amsterdam: John Benjamins Publishing Company, 2020. http://dx.doi.org/10.1075/btl.149.06hey.
Khoreva, Anna, Anna Rohrbach, and Bernt Schiele. "Video Object Segmentation with Language Referring Expressions". In Computer Vision – ACCV 2018, 123–41. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-20870-7_8.
Zhang, Junchao, and Yuxin Peng. "Hierarchical Vision-Language Alignment for Video Captioning". In MultiMedia Modeling, 42–54. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-05710-7_4.
Kacetl, Jaroslav, and Madgalena Fiserova. "Online Video Clips in Foreign Language Teaching". In Business Challenges in the Changing Economic Landscape - Vol. 2, 355–64. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-22593-7_26.
Conference papers on the topic "Video and language"
Bosy, Karen, and Cristina Portugal. "Media Language: Video practices". In Proceedings of EVA London 2020. BCS Learning and Development Ltd, 2020. http://dx.doi.org/10.14236/ewic/eva2020.53.
Buch, Shyamal, Cristobal Eyzaguirre, Adrien Gaidon, Jiajun Wu, Li Fei-Fei, and Juan Carlos Niebles. "Revisiting the 'Video' in Video-Language Understanding". In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2022. http://dx.doi.org/10.1109/cvpr52688.2022.00293.
Liu, Runze, Yaqun Fang, Fan Yu, Ruiqi Tian, Tongwei Ren, and Gangshan Wu. "Deep Video Understanding with Video-Language Model". In MM '23: The 31st ACM International Conference on Multimedia. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3581783.3612863.
Nam, Yoonsoo, Adam Lehavi, Daniel Yang, Digbalay Bose, Swabha Swayamdipta, and Shrikanth Narayanan. "Does Video Summarization Require Videos? Quantifying the Effectiveness of Language in Video Summarization". In ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2024. http://dx.doi.org/10.1109/icassp48485.2024.10445931.
Swartz, Jonathan, and Brian C. Smith. "A resolution independent video language". In Proceedings of the third ACM international conference. New York, New York, USA: ACM Press, 1995. http://dx.doi.org/10.1145/217279.215265.
Kountchev, R., Vl. Todorov, and R. Kountcheva. "Efficient sign language video representation". In 2008 International Conference on Systems, Signals and Image Processing (IWSSIP). IEEE, 2008. http://dx.doi.org/10.1109/iwssip.2008.4604396.
Li, Linjie, Zhe Gan, Kevin Lin, Chung-Ching Lin, Zicheng Liu, Ce Liu, and Lijuan Wang. "LAVENDER: Unifying Video-Language Understanding as Masked Language Modeling". In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2023. http://dx.doi.org/10.1109/cvpr52729.2023.02214.
Tellex, Stefanie, Thomas Kollar, George Shaw, Nicholas Roy, and Deb Roy. "Grounding spatial language for video search". In International Conference on Multimodal Interfaces and the Workshop on Machine Learning for Multimodal Interaction. New York, New York, USA: ACM Press, 2010. http://dx.doi.org/10.1145/1891903.1891944.
Zhang, Shilin, and Hai Wang. "HMM based hand language video retrieval". In 2010 International Conference on Intelligent Control and Information Processing (ICICIP). IEEE, 2010. http://dx.doi.org/10.1109/icicip.2010.5564284.
Gupta, Vaidik, Rohan Punjani, Mayur Vaswani, and Jyoti Kundale. "Video Conferencing with Sign language Detection". In 2022 2nd Asian Conference on Innovation in Technology (ASIANCON). IEEE, 2022. http://dx.doi.org/10.1109/asiancon55314.2022.9908973.
Reports of organizations on the topic "Video and language"
Liang, Yiqing. Video Retrieval Based on Language and Image Analysis. Fort Belvoir, VA: Defense Technical Information Center, May 1999. http://dx.doi.org/10.21236/ada364129.
Chorna, Olha V., Vita A. Hamaniuk, and Aleksandr D. Uchitel. Use of YouTube on lessons of practical course of German language as the first and second language at the pedagogical university. [б. в.], September 2019. http://dx.doi.org/10.31812/123456789/3253.
Smith, Michael A., and Takeo Kanade. Video Skimming and Characterization through the Combination of Image and Language Understanding Techniques. Fort Belvoir, VA: Defense Technical Information Center, February 1997. http://dx.doi.org/10.21236/ada333857.
Decleir, Cyril, Mohand-Saïd Hacid, and Jacques Kouloumdjian. A Database Approach for Modeling and Querying Video Data. Aachen University of Technology, 1999. http://dx.doi.org/10.25368/2022.90.
Trullinger, Richard. Differential measurement of a language concept presented via video tape playback to first grade students. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.2420.
Symonenko, Svitlana V., Nataliia V. Zaitseva, Viacheslav V. Osadchyi, Kateryna P. Osadcha, and Ekaterina O. Shmeltser. Virtual reality in foreign language training at higher educational institutions. [б. в.], February 2020. http://dx.doi.org/10.31812/123456789/3759.
Sandeep, Bhushan, Huang Xin, and Xiao Zongwei. A comparison of regional anesthesia techniques in patients undergoing video-assisted thoracic surgery: A network meta-analysis. INPLASY - International Platform of Registered Systematic Review and Meta-analysis Protocols, February 2022. http://dx.doi.org/10.37766/inplasy2022.2.0003.
Pikilnyak, Andrey V., Nadia M. Stetsenko, Volodymyr P. Stetsenko, Tetiana V. Bondarenko, and Halyna V. Tkachuk. Comparative analysis of online dictionaries in the context of the digital transformation of education. [б. в.], June 2021. http://dx.doi.org/10.31812/123456789/4431.
Brenzel, Jeffrey, and Burr Settles. The Duolingo English Test: Design, Validity, and Value. Duolingo, September 2017. http://dx.doi.org/10.46999/lyqs3238.
Petrovych, Olha B., Alla P. Vinnichuk, Viktor P. Krupka, Iryna A. Zelenenka, and Andrei V. Voznyak. The usage of augmented reality technologies in professional training of future teachers of Ukrainian language and literature. CEUR Workshop Proceedings, July 2021. http://dx.doi.org/10.31812/123456789/4635.