Theses on the topic "Video and language"
Create an accurate citation in APA, MLA, Chicago, Harvard and other styles
Consult the top 50 theses for your research on the topic "Video and language".
Next to each source in the list of references there is an "Add to bibliography" button. Click this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Explore theses on a wide variety of disciplines and organize your bibliography correctly.
Khan, Muhammad Usman Ghani. "Natural language descriptions for video streams". Thesis, University of Sheffield, 2012. http://etheses.whiterose.ac.uk/2789/.
Miech, Antoine. "Large-scale learning from video and natural language". Electronic Thesis or Diss., Université Paris sciences et lettres, 2020. http://www.theses.fr/2020UPSLE059.
The goal of this thesis is to build and train machine learning models capable of understanding the content of videos. Current video understanding approaches mainly rely on large-scale manually annotated video datasets for training. However, collecting and annotating such datasets is cumbersome, expensive and time-consuming. To address this issue, this thesis focuses on leveraging large amounts of readily available but noisy annotations in the form of natural language. In particular, we exploit a diverse corpus of textual metadata such as movie scripts, web video titles and descriptions, or automatically transcribed speech obtained from narrated videos. Training video models on such readily available textual data is challenging, as this annotation is often imprecise or wrong. In this thesis, we introduce learning approaches to deal with weak annotation and design specialized training objectives and neural network architectures.
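For readers who want a concrete picture of what a weakly supervised video-text objective can look like, here is a minimal sketch (not the architecture or objective proposed in the thesis, and with placeholder feature dimensions): a shared embedding space trained with a symmetric InfoNCE-style contrastive loss over batches of paired video and text features.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VideoTextContrastive(nn.Module):
    """Minimal joint video-text embedding trained with a symmetric InfoNCE loss."""

    def __init__(self, video_dim=1024, text_dim=768, embed_dim=256, temperature=0.07):
        super().__init__()
        # Placeholder projections; any video backbone / text encoder could feed them.
        self.video_proj = nn.Linear(video_dim, embed_dim)
        self.text_proj = nn.Linear(text_dim, embed_dim)
        self.temperature = temperature

    def forward(self, video_feats, text_feats):
        # Project both modalities into the shared space and L2-normalise.
        v = F.normalize(self.video_proj(video_feats), dim=-1)   # (B, D)
        t = F.normalize(self.text_proj(text_feats), dim=-1)     # (B, D)

        # Cosine similarities between every video/text pair in the batch.
        logits = v @ t.T / self.temperature                     # (B, B)
        targets = torch.arange(logits.size(0), device=logits.device)

        # Matching (video_i, text_i) pairs are positives; the rest of the
        # batch acts as negatives.
        loss_v2t = F.cross_entropy(logits, targets)
        loss_t2v = F.cross_entropy(logits.T, targets)
        return 0.5 * (loss_v2t + loss_t2v)

# Random features stand in for real encoder outputs in this sketch.
model = VideoTextContrastive()
loss = model(torch.randn(8, 1024), torch.randn(8, 768))
loss.backward()
```

Approaches in this family differ mainly in how they cope with misaligned or noisy narration, for example by pooling several candidate clips or captions per positive pair.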
Zhou, Mingjie. "Deep networks for sign language video caption". HKBU Institutional Repository, 2020. https://repository.hkbu.edu.hk/etd_oa/848.
Erozel, Guzen. "Natural Language Interface On A Video Data Model". Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/12606251/index.pdf.
Adam, Jameel. "Video annotation wiki for South African sign language". Thesis, University of the Western Cape, 2011. http://etd.uwc.ac.za/index.php?module=etd&action=viewtitle&id=gen8Srv25Nme4_1540_1304499135.
The SASL project at the University of the Western Cape aims at developing a fully automated translation system between English and South African Sign Language (SASL). Three important aspects of this system require SASL documentation and knowledge. These are: recognition of SASL from a video sequence, linguistic translation between SASL and English, and the rendering of SASL. Unfortunately, SASL documentation is a scarce resource and no official or complete documentation exists. This research focuses on creating an online collaborative video annotation knowledge management system for SASL to which various members of the community can upload SASL videos and annotate them in any of the sign language notation systems SignWriting, HamNoSys and/or Stokoe. As such, knowledge about SASL structure is pooled into a central and freely accessible knowledge base that can be used as required. The usability and performance of the system were evaluated. The usability of the system was graded by users on a rating scale from one to five for a specific set of tasks. The system was found to have an overall usability of 3.1, slightly better than average. The performance evaluation included load and stress tests which measured the system response time for a number of users performing a specific set of tasks. It was found that the system is stable and can scale up to cater for an increasing user base by improving the underlying hardware.
Ou, Yingzhe and 区颖哲. "Teaching Chinese as a second language through video". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2011. http://hub.hku.hk/bib/B48368714.
Master of Education.
Addis, Pietro <1991>. "The Age of Video Games: Language and Narrative". Master's Degree Thesis, Università Ca' Foscari Venezia, 2017. http://hdl.handle.net/10579/10634.
Muir, Laura J. "Content-prioritised video coding for British Sign Language communication". Thesis, Robert Gordon University, 2007. http://hdl.handle.net/10059/177.
Texto completoLaveborn, Joel. "Video Game Vocabulary : The effect of video games on Swedish learners‟ word comprehension". Thesis, Karlstad University, Karlstad University, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-5487.
Video games are very popular among children in the Western world. This study was done in order to investigate whether video games had an effect on 49 Swedish students' (grades 7-8) comprehension of English words. The investigation was based on questionnaire and word test data. The questionnaire aimed to measure how frequently students played video games, and the word test aimed to measure their word comprehension in general. In addition, data from the word test were used to investigate how students explained the words. Depending on their explanations, students were categorized as using either a "video game approach" or a "dictionary approach".
The results showed a gender difference, both with regard to the frequency of playing and the types of games played. Playing video games seemed to increase the students' comprehension of English words, though there was no clear connection between the frequency with which students played video games and their choice of a dictionary or video game approach as an explanation.
Lopes, Solange Aparecida. "A descriptive study of the interaction behaviors in a language video program and in live elementary language classes using that video program". Diss., 1996. http://scholar.lib.vt.edu/theses/available/etd-10052007-143033/.
Mertzani, Maria. "Video-Based Computer Mediated Communication for Sign Language Learning". Thesis, University of Bristol, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.499929.
Gruba, Paul Andrew. "The role of digital video media in second language listening comprehension". Online version, 1999. http://repository.unimelb.edu.au/10187/1520.
Sawarng, Pupatwibul Rhodes Dent. "A prototype for teaching ecology in Thai language through interactive video". Normal, Ill.: Illinois State University, 1992. http://wwwlib.umi.com/cr/ilstu/fullcit?p9227173.
Texto completoTitle from title page screen, viewed January 18, 2006. Dissertation Committee: Dent M. Rhodes (chair), Robert L. Fisher, Dale E. Birkenholz, Larry D. Kennedy, Deborah B. Gentry. Includes bibliographical references (leaves 63-67) and abstract. Also available in print.
McCoy, Dacia M. "Video Self-modeling with English Language Learners in the Preschool Setting". University of Cincinnati / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1439294548.
Ornelas, Claudia. "Development of the video suggestibility scale for children: Spanish-language version". ProQuest Dissertations and Theses @ UTEP, 2009. http://0-proquest.umi.com.lib.utep.edu/login?COPT=REJTPTU0YmImSU5UPTAmVkVSPTI=&clientId=2515.
Keeler, Farrah Dawn. "Developing an Electronic Film Review for October Sky". Diss., 2005. http://contentdm.lib.byu.edu/ETD/image/etd800.pdf.
Texto completoRoh, Jaemin. "The effects of cultural video resources on teaching and learning Korean language". Thesis, Boston University, 2011. https://hdl.handle.net/2144/33544.
This dissertation sought to evaluate the potential of a customized, video-based instructional method, the Cultural Video Project (CVP), which was designed to meet the needs of both heritage and non-heritage students learning Korean as a second language in a university setting. The goal of this study was to design and create the CVP, document its implementation, and then assess the effects the CVP had on areas that speakers of English tend to have difficulty with, such as the acquisition of honorific systems in Korean. The CVP was a series of short authentic Korean video clips and matching worksheets that the researcher created. The videos were adapted from contemporary Korean broadcasting programs and Korean films. The CVP videos were used during face-to-face classroom sessions as lessons, and after the classroom lesson was over, the videos were available on the school's Internet courseware for students to use for individual practice and review. Each of the CVP video segments displayed linguistic structures, vocabulary, idiomatic expressions and cultural conventions that were partly addressed in the course's Elementary Korean materials. The participating professor, Professor Q, helped in selecting the video segments and co-authored the matching worksheets in cooperation with the researcher throughout the preparation and implementation period. During the interviews, Professor Q reported changes in her teaching philosophy while creating and implementing the CVP method in her teaching. She reported that the video technology, combined with the university's courseware, had a positive impact on her students' Korean learning experiences, such as heightened interest and intense attention, which helped make classroom meetings dynamic and interactive. Students reported their responses to the CVP in various forms: interviews, written self-reports, in-class observation reports, exam results and two forms of standard school course evaluations. The findings reveal that through the CVP practice, students increased their cultural understanding, improved their listening skills, and improved their understanding of language use in a variety of culturally specific social situations.
Purushotma, Ravi. "Communicative 2.0 : video games and digital culture in the foreign language classroom". Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/39145.
I explore two core concepts in today's youth entertainment culture that will increasingly become central in future attempts to design affordable foreign language learning materials that hope to bridge the chasm between education and foreign popular culture. In the process, I outline a series of example applications that apply these concepts to developing rich foreign language materials -- starting with more experimental, long-term approaches such as using video game modding techniques to make language-learning-friendly video games and ending with more concrete, ready-to-go applications like extending open source content management applications. The first concept I look at is that of "Remix culture." In short, Remix culture describes the way in which youth culture today more visibly orients itself around creating media by extracting component pieces from other people's media creations, then connecting them together to form something new. In the video game world this phenomenon is more specifically termed 'modding.' In this process, amateur fans take a professional commercial game title and then modify it in creative ways that the original designers may not have considered.
Outside of video games, we see terms like "web 2.0" used to describe technologies that allow website viewers to play a role in authoring additions to the sites they are reading, or "mashups" where users employ programming interfaces to rapidly create web content by mashing together pieces from different sources. The second emerging concept critical for curricular designers to follow is that of transmedia storytelling. Traditionally, one might assume a model in which distinct media forms are used to serve distinct cultural practices: television or novels tell stories, video games are for play, blogs for socializing and textbooks for learning. While initially this may have been the case, as each of the media forms above has evolved, it has expanded to cover multiple other cultural practices, often by extending across other media forms. By following the evolution of the interactions between these various media forms and activities within entertainment industries, we can find valuable insight when forecasting their possible interactions in the education industry.
S.M.
Erasmus, Daniel. "Video quality requirements for South African Sign Language communications over mobile phones". Master's thesis, University of Cape Town, 2012. http://hdl.handle.net/11427/6395.
Includes bibliographical references.
This project aims to find the minimum video resolution and frame rate that supports intelligible cell phone based video communications in South African Sign Language.
Zhang, Yunxin. "Constructing Memories: A Case for Using Video in the Chinese Language Classroom". The Ohio State University, 2003. http://rave.ohiolink.edu/etdc/view?acc_num=osu1392044490.
Bado, Niamboue. "Video Games and English as a Foreign Language Education in Burkina Faso". Ohio University / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1395498334.
Laws, Dannielle Kaye. "Gaming in Conversation: The Impact of Video Games in Second Language Communication". University of Toledo / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1461800075.
Chakravarthy, Gitu. "The preparation of English language teachers in Malaysia: a video-based approach". Thesis, Bangor University, 1993. https://research.bangor.ac.uk/portal/en/theses/the-preparation-of-english-language-teachers-in-malaysia--a-videobased-approach(7a3dc1c6-696c-4f5d-af35-b7059df803d5).html.
Javetz, Esther. "Effects of using guided (computer-controlled videotapes) and unguided (videotapes) listening practices on listening comprehension of novice second language learners". The Ohio State University, 1988. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487332636473769.
Murnane, Owen D. and Kristal M. Riska. "The Video Head Impulse Test". Digital Commons @ East Tennessee State University, 2018. https://dc.etsu.edu/etsu-works/1978.
Neyra-Gutierrez, Andre and Pedro Shiguihara-Juarez. "Feature Extraction with Video Summarization of Dynamic Gestures for Peruvian Sign Language Recognition". Institute of Electrical and Electronics Engineers Inc, 2020. http://hdl.handle.net/10757/656630.
In Peruvian Sign Language (PSL), recognition of static gestures has been proposed earlier. However, to hold a conversation in sign language, it is also necessary to employ dynamic gestures. We propose a method to extract a feature vector for dynamic gestures of PSL. We collect a dataset with 288 video sequences of words related to dynamic gestures and define a workflow to process the keypoints of the hands, obtaining a feature vector for each video sequence with the support of a video summarization technique. We employ 9 neural networks to test the method, achieving an average accuracy between 80% and 90% using 10-fold cross-validation.
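As a rough, hypothetical illustration of how a variable-length sequence of hand keypoints might be summarized into a single fixed-length feature vector (the paper's exact keypoint extractor and summarization technique are not reproduced here), the sketch below picks a few representative keyframes with k-means and concatenates them:

```python
import numpy as np
from sklearn.cluster import KMeans

def summarize_keypoints(keypoints, n_keyframes=8, seed=0):
    """Turn a variable-length keypoint sequence into a fixed-size feature vector.

    keypoints: array of shape (n_frames, n_points * 2) holding (x, y) hand
    landmarks per frame, e.g. from any off-the-shelf hand-pose estimator.
    A simple k-means "summary" selects n_keyframes representative frames,
    which are concatenated in temporal order.
    """
    km = KMeans(n_clusters=n_keyframes, n_init=10, random_state=seed).fit(keypoints)
    reps = []
    for c in range(n_keyframes):
        # Keep the frame closest to each cluster centroid.
        idx = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(keypoints[idx] - km.cluster_centers_[c], axis=1)
        reps.append(idx[np.argmin(dists)])
    reps = sorted(reps)                  # preserve temporal order
    return keypoints[reps].ravel()       # fixed-length vector for a classifier

# A 90-frame gesture with 21 hand landmarks (x, y) per frame.
video = np.random.rand(90, 21 * 2)
features = summarize_keypoints(video)
print(features.shape)                    # (8 * 42,) = (336,)
```

The resulting vectors all have the same length regardless of gesture duration, which is what allows a standard classifier to be trained on them.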
Kim, Joong-Won Education Faculty of Arts & Social Sciences UNSW. "Second language English listening comprehension using different presentations of pictures and video cues". Awarded by: University of New South Wales. School of Education, 2003. http://handle.unsw.edu.au/1959.4/19065.
Gill, Saran Kaur. "The appropriateness of video materials for teaching of English as an international language". Thesis, University College London (University of London), 1990. http://discovery.ucl.ac.uk/10006558/.
Johnson, Marie A. F. "Video Modeling: Building Language and Social Skills in Individuals with Autism Spectrum Disorders". Digital Commons @ East Tennessee State University, 2015. https://dc.etsu.edu/etsu-works/1545.
Bengtsson, Andreas. "Watching video or studying? An investigation of the extramural activities and Japanese language proficiency of foreign language learners of Japanese". Thesis, Stockholms universitet, Centrum för tvåspråkighetsforskning, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-104769.
Ramos, Ascencio Lucía Ivanette. "Adaptación del lenguaje escrito al lenguaje audiovisual: Flamenca en la propuesta visual de "Di mi nombre" de Rosalía". Bachelor's thesis, Universidad Peruana de Ciencias Aplicadas (UPC), 2019. http://hdl.handle.net/10757/652429.
Through the years, cinematography has evolved radically, adapting to different proposals and aesthetics that change year by year. One extension of it is the music video, a format through which major artists in the music industry have been able to distribute their work and generate interest among viewers. Over time, this format has also evolved, becoming one of the most important formats used by the music industry. While proposals in both cinema and music videos draw inspiration from third parties, one technique that is not that common, but still present, is the adaptation of a text to a visual format, a practice much more common in the film industry. This raises the question of why this technique is rarely seen in music video proposals, given that they offer a way to express, through the music and lyrics of songs, a much more enriching story. The present work seeks to expose and describe, through a process of observation and analysis, the adaptation of a text to a visual proposal, taking as research objects Flamenca, an Occitan book of the thirteenth century, and the music video "Di mi nombre" by the Spanish artist Rosalía, and demonstrating how an ancient text can be represented in a contemporary visual proposal.
Research project.
Hauck, Mark Anthony. "A study into the form-language of art and its application to single camera video production". Kutztown University, 1990. http://www.kutztown.edu/library/services/remote_access.asp.
Vidlund, Anna. "English in video and online computer games: Potential enhancement of players’ vocabulary". Thesis, Linnéuniversitetet, Institutionen för språk (SPR), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-28402.
Bull, Hannah. "Learning sign language from subtitles". Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG013.
Sign languages are an essential means of communication for deaf communities. Sign languages are visuo-gestural languages using the modalities of hand gestures, facial expressions, gaze and body movements. They possess rich grammar structures and lexicons that differ considerably from those found among spoken languages. The uniqueness of the transmission medium, structure and grammar of sign languages requires distinct methodologies. The performance of automatic translation systems between high-resource written languages or spoken languages is currently sufficient for many daily use cases, such as translating videos, websites, emails and documents. On the other hand, automatic translation systems for sign languages do not exist outside of very specific use cases with limited vocabulary. Automatic sign language translation is challenging for two main reasons. Firstly, sign languages are low-resource languages with little available training data. Secondly, sign languages are visual-spatial languages with no written form, naturally represented as video rather than audio or text. To tackle the first challenge, we contribute large datasets for training and evaluating automatic sign language translation systems with both interpreted and original sign language video content, as well as written text subtitles. Whilst interpreted data allows us to collect large numbers of hours of videos, original sign language video is more representative of sign language usage within deaf communities. Written subtitles can be used as weak supervision for various sign language understanding tasks. To address the second challenge, we develop methods to better understand visual cues from sign language video. Whilst sentence segmentation is mostly trivial for written languages, segmenting sign language video into sentence-like units relies on detecting subtle semantic and prosodic cues. We use prosodic cues to learn to automatically segment sign language video into sentence-like units, determined by subtitle boundaries. Expanding upon this segmentation method, we then learn to align text subtitles to sign language video segments using both semantic and prosodic cues, in order to create sentence-level pairs between sign language video and text. This task is particularly important for interpreted TV data, where subtitles are generally aligned to the audio and not to the signing. Using these automatically aligned video-text pairs, we develop and improve multiple different methods to densely annotate lexical signs by querying words in the subtitle text and searching for visual cues in the sign language video for the corresponding signs.
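A toy sketch of the segment-then-align idea, under simplified assumptions: per-frame boundary probabilities (standing in for a model trained on prosodic cues) are thresholded into sentence-like units, and each subtitle is attached to the unit with the greatest temporal overlap. The threshold, minimum segment length and greedy overlap rule are illustrative choices only, not the method developed in the thesis.

```python
import numpy as np

def frames_to_segments(boundary_probs, threshold=0.5, min_len=12):
    """Turn per-frame boundary probabilities into sentence-like units.

    boundary_probs: array of shape (n_frames,) with values in [0, 1].
    Frames whose probability exceeds the threshold start a new segment;
    segments shorter than min_len frames are merged into their predecessor.
    """
    cuts = [0] + [i for i, p in enumerate(boundary_probs) if p > threshold]
    cuts.append(len(boundary_probs))
    segments = []
    for start, end in zip(cuts, cuts[1:]):
        if segments and end - start < min_len:
            segments[-1] = (segments[-1][0], end)   # merge a too-short segment
        elif end > start:
            segments.append((start, end))
    return segments

def align_subtitles(segments, subtitles, fps=25):
    """Greedy alignment: each (start_s, end_s, text) subtitle is attached to
    the segment with the largest temporal overlap."""
    pairs = []
    for s_start, s_end, text in subtitles:
        f_start, f_end = s_start * fps, s_end * fps
        overlaps = [min(f_end, b) - max(f_start, a) for a, b in segments]
        pairs.append((segments[int(np.argmax(overlaps))], text))
    return pairs

probs = np.random.rand(500) ** 8        # mostly low scores with a few peaks
segs = frames_to_segments(probs)
print(align_subtitles(segs, [(2.0, 4.5, "hello"), (10.0, 13.0, "world")]))
```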
Buco, Stefani. "The video essay as a persuasive genre: A qualitative genre analysis with a focus on evaluative and persuasive linguistic features". Thesis, Stockholms universitet, Engelska institutionen, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-159814.
Alaei, Bahareh B. "Producing as a listener: A choric approach to video as a medium of invention". Thesis, California State University, Long Beach, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=1526888.
For over two decades, scholars in rhetoric and composition studies have been invested in helping to shape and adapt writing studies as institutions of higher learning negotiate conceptualizations of subjects and knowledge production in digital culture. The canon of invention, in particular, has propelled forth theories and practices that resist hermeneutic modes of knowledge production and instead advocate invention as performance. Inspired by the aforementioned scholarship, Victor Vitanza's call for knowledge production that relies on the language games of paralogy, Gregory Ulmer's heuretics, and Sarah Arroyo and Geoffrey Carter's participatory pedagogy, this thesis puts forth a method of invention entitled "producing as a listener." This methodology harnesses the potential of video editing software and video sharing ecologies as choric sites of invention, relies on the reconceptualization of subjects as whatever singularities, and invites electrate and proairetic lines of reasoning wherein video composers invent and write as listeners.
Skalban, Yvonne. "Automatic generation of factual questions from video documentaries". Thesis, University of Wolverhampton, 2013. http://hdl.handle.net/2436/314607.
Holtmeier, Matthew. "Combining Critical and Creative Modalities through the Video Essay". Digital Commons @ East Tennessee State University, 2020. https://dc.etsu.edu/etsu-works/7819.
Powers, Jennifer Ann. ""Designing" in the 21st century English language arts classroom: processes and influences in creating multimodal video narratives". [Kent, Ohio]: Kent State University, 2007. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=kent1194639677.
Texto completoTitle from PDF t.p. (viewed Mar. 31, 2008). Advisor: David Bruce. Keywords: multiliteracies, multi-modal literacies, language arts education, secondary education, video composition. Includes survey instrument. Includes bibliographical references (p. 169-179).
Murnane, Owen D., Stephanie M. Byrd, C. Kidd y Faith W. Akin. "The Video Head Impulse Test". Digital Commons @ East Tennessee State University, 2013. https://dc.etsu.edu/etsu-works/1883.
Murnane, Owen D., H. Mabrey, A. Pearson, Stephanie M. Byrd and Faith W. Akin. "The Video Head Impulse Test". Digital Commons @ East Tennessee State University, 2012. https://dc.etsu.edu/etsu-works/1888.
Gardner, David. "Evaluating user interaction with interactive video: users' perceptions of self access language learning with MultiMedia Movies". Thesis, Open University, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.251394.
Tecedor, Cabrero Marta. "Developing Interactional Competence Through Video-Based Computer-Mediated Conversations: Beginning Learners of Spanish". Diss., University of Iowa, 2013. https://ir.uiowa.edu/etd/4918.
Texto completoMurnane, Owen D. "The Video Head Impulse Test". Digital Commons @ East Tennessee State University, 2013. https://dc.etsu.edu/etsu-works/1931.
Young, Eric H. "Promoting Second Language Learning Through Oral Asynchronous Computer-Mediated Communication". BYU ScholarsArchive, 2018. https://scholarsarchive.byu.edu/etd/7051.
Murray, Garold Linwood. "Bodies in cyberspace: language learning in a simulated environment". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp02/NQ27209.pdf.
Zewary, Sayed Mustafa. "Visuals in foreign language teaching". Thesis, Kansas State University, 2011. http://hdl.handle.net/2097/8778.
Texto completoDepartment of Modern Languages
Mary T. Copple
This study investigates the effectiveness of visuals in the language classroom. Two types of visual aids commonly used in the language classroom, video and still pictures, are used to elicit narratives from L2 English speakers, and these narratives are subsequently compared. The data come from eleven international students from a university English Language Program, who voluntarily participated in two separate 15-minute interviews. In each interview session, they were shown either a series of pictures or a video, both depicting a story. Upon completion of the presentation of each visual, participants were asked a prompt question and their narration of the events portrayed in the visuals was recorded. The narratives were transcribed and analyzed in order to test (1) whether still pictures and video are equally effective in eliciting elaboration in the narratives, defined in this case as the number of new referents introduced and the number of adjective and verb types produced, and (2) whether exposure to still pictures and video elicits narrations of similar length. Both kinds of visuals stimulated learners to create narratives and elaborate on what had been shown in them. The video task elicited narratives roughly 10% longer than the picture task in terms of the raw number of words. When linguistic factors were compared, participants introduced new referents at comparable rates in both tasks, while they employed 10% more verb types in the video task. Additionally, the series of still pictures prompted participants to employ a much higher number of adjective types. These observations suggest that a series of still pictures is an effective alternative to video for eliciting narratives. This study provides support for the use of still pictures as an equivalent to videos in situations where videos are less accessible in language classrooms (due to lack of technological access).
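The lexical measures compared in the study (narrative length in words, verb types, adjective types) are simple type counts; the sketch below, which assumes Penn Treebank tags from any off-the-shelf POS tagger, only makes that counting explicit and is not the study's actual analysis script.

```python
from collections import Counter

def lexical_counts(tagged_tokens):
    """Count verb and adjective *types* and total tokens in one narrative.

    tagged_tokens: list of (word, pos) pairs using Penn Treebank tags,
    e.g. the output of any off-the-shelf POS tagger.
    """
    verbs = {w.lower() for w, t in tagged_tokens if t.startswith("VB")}
    adjectives = {w.lower() for w, t in tagged_tokens if t.startswith("JJ")}
    return {
        "tokens": len(tagged_tokens),        # narrative length in words
        "verb_types": len(verbs),            # distinct verbs used
        "adjective_types": len(adjectives),  # distinct adjectives used
    }

# A tiny hand-tagged example narrative.
narrative = [("the", "DT"), ("boy", "NN"), ("ran", "VBD"),
             ("and", "CC"), ("ran", "VBD"), ("fast", "JJ")]
print(lexical_counts(narrative))  # {'tokens': 6, 'verb_types': 1, 'adjective_types': 1}
```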
Silvestre, Cerdà Joan Albert. "Different Contributions to Cost-Effective Transcription and Translation of Video Lectures". Doctoral thesis, Universitat Politècnica de València, 2016. http://hdl.handle.net/10251/62194.
In recent years, online multimedia repositories have grown rapidly and have established themselves as fundamental sources of knowledge, especially in the field of education, where large repositories of educational video lectures have been created to complement or even replace traditional teaching methods. However, most of these lectures are neither transcribed nor translated due to the lack of low-cost solutions capable of doing so while guaranteeing an acceptable minimum level of quality. Such solutions are clearly needed to make video lectures more accessible to speakers of other languages and to people with hearing impairments. In addition, such solutions would facilitate search and analysis functions such as classification, recommendation or plagiarism detection, as well as the development of advanced educational features, for example the automatic generation of content summaries to help students take notes. For this reason, the main objective of this thesis is to develop a low-cost solution capable of transcribing and translating video lectures with a reasonable level of quality. More specifically, we address the integration of state-of-the-art Automatic Speech Recognition and Machine Translation techniques into large repositories of educational video lectures to generate high-quality multilingual subtitles without requiring human intervention and at a low computational cost. We also explore the potential benefits of exploiting the information available a priori about these repositories, that is, specific knowledge about the lectures such as the speaker, the topic or the slides, to build specialized transcription and translation systems through massive adaptation techniques. The solutions proposed in this thesis have been tested in real-life scenarios through numerous objective and subjective evaluations, with very good results. The main legacy of this thesis, The transLectures-UPV Platform, has been publicly released as open-source software and, at the time of writing, is serving automatic transcriptions and translations for several thousand educational video lectures at numerous Spanish and European universities and institutions.
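To make the subtitle-generation step concrete, here is a minimal sketch that turns timed transcription segments into SRT subtitle files for several target languages. The `translate` callable, the language codes and the dummy segments are placeholders for whatever ASR and MT systems are plugged in; this is not the transLectures-UPV Platform implementation.

```python
def to_srt_timestamp(seconds):
    """Format a time in seconds as an SRT timestamp (HH:MM:SS,mmm)."""
    total_ms = int(round(seconds * 1000))
    h, rem = divmod(total_ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1_000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

def make_multilingual_subtitles(segments, translate, languages=("en", "es", "ca")):
    """Build SRT subtitle strings from timed transcription segments.

    segments: list of (start_s, end_s, text) tuples produced by any ASR system.
    translate: callable(text, target_lang) -> str, standing in for any MT system.
    Returns a dict mapping language code -> SRT-formatted string.
    """
    subtitles = {}
    for lang in languages:
        lines = []
        for i, (start, end, text) in enumerate(segments, 1):
            out = text if lang == "en" else translate(text, lang)
            lines.append(f"{i}\n{to_srt_timestamp(start)} --> {to_srt_timestamp(end)}\n{out}\n")
        subtitles[lang] = "\n".join(lines)
    return subtitles

# Dummy ASR output and an identity "translator" used only for illustration.
asr_segments = [(0.0, 3.2, "Welcome to the lecture."), (3.2, 7.5, "Today we cover decoding.")]
print(make_multilingual_subtitles(asr_segments, lambda t, lang: f"[{lang}] {t}")["es"])
```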
Silvestre Cerdà, JA. (2016). Different Contributions to Cost-Effective Transcription and Translation of Video Lectures [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/62194
Thompson, Scott Alan. "A Comparison of the Effects of Different Video Imagery Upon Adult ESL Students' Comprehension of a Video Narrative". PDXScholar, 1994. https://pdxscholar.library.pdx.edu/open_access_etds/4845.
Curry, Ryan H. "CHILDREN’S THEORY OF MIND, JOINT ATTENTION, AND VIDEO CHAT". Case Western Reserve University School of Graduate Studies / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=case1616663322967054.