To view the other types of publications on this topic, follow this link: Video and language.

Dissertations on the topic "Video and language"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles


Consult the top 50 dissertations for your research on the topic "Video and language".

Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and your bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication in PDF format and read an online annotation of the work, provided the relevant parameters are available in its metadata.

Browse dissertations from a wide range of disciplines and compile your bibliography correctly.

1

Khan, Muhammad Usman Ghani. „Natural language descriptions for video streams“. Thesis, University of Sheffield, 2012. http://etheses.whiterose.ac.uk/2789/.

Annotation:
This thesis is concerned with the automatic generation of natural language descriptions that can be used for video indexing, retrieval and summarization applications. It is a step ahead of keyword-based tagging, as it captures relations between the keywords associated with videos, thus clarifying the context between them. Initially, we prepare hand annotations consisting of descriptions for video segments crafted from a TREC Video dataset. Analysis of this data provides insights into human interests in video content. For machine-generated descriptions, conventional image processing techniques are applied to extract high-level features (HLFs) from individual video frames. A natural language description is then produced based on these HLFs. Although feature extraction processes are error-prone at various levels, approaches are explored to combine their outputs into coherent descriptions. For scalability, application of the framework to several different video genres is also discussed. For complete video sequences, a scheme is presented to generate coherent and compact descriptions for video streams, which makes use of spatial relations between HLFs and temporal relations between individual frames. Measuring the overlap between machine-generated and human-annotated descriptions shows that the machine-generated descriptions capture context information and accord with how humans watch videos. Further, a task-based evaluation shows improvement in a video identification task compared to keywords alone. Finally, the application of the generated natural language descriptions to video scene classification is discussed.
2

Miech, Antoine. „Large-scale learning from video and natural language“. Electronic Thesis or Diss., Université Paris sciences et lettres, 2020. http://www.theses.fr/2020UPSLE059.

Annotation:
The goal of this thesis is to build and train machine learning models capable of understanding the content of videos. Current video understanding approaches mainly rely on large-scale, manually annotated video datasets for training. However, collecting and annotating such datasets is cumbersome, expensive and time-consuming. To address this issue, this thesis focuses on leveraging large amounts of readily available but noisy annotations in the form of natural language. In particular, we exploit a diverse corpus of textual metadata such as movie scripts, web video titles and descriptions, and automatically transcribed speech obtained from narrated videos. Training video models on such readily available textual data is challenging, as the annotation is often imprecise or wrong. In this thesis, we introduce learning approaches to deal with weak annotation and design specialized training objectives and neural network architectures.
3

Zhou, Mingjie. „Deep networks for sign language video caption“. HKBU Institutional Repository, 2020. https://repository.hkbu.edu.hk/etd_oa/848.

Annotation:
In the deaf and hard-of-hearing community, sign language is a primary tool for communication, yet a communication gap exists between deaf and hearing people. Sign language is different from spoken language: it has its own vocabulary and grammar. Recent work concentrates on sign language video captioning, which consists of sign language recognition and sign language translation. Continuous sign language recognition, which can help bridge the communication gap, is a challenging task because of the weakly supervised ordered annotations, where no frame-level label is provided. To overcome this problem, connectionist temporal classification (CTC) is the most widely used method. However, CTC learning can perform badly if the extracted features are not good. For better feature extraction, this thesis presents novel self-attention-based fully-inception (SAFI) networks for vision-based end-to-end continuous sign language recognition. Since the lengths of sign words differ from one another, we introduce the fully inception network with different receptive fields to extract dynamic clip-level features. To further boost performance, the fully inception network with an auxiliary classifier is trained with aggregation cross entropy (ACE) loss. The encoder of a self-attention network is then used as the global sequential feature extractor to model the clip-level features with CTC. The proposed model is optimized by jointly training with ACE on clip-level feature learning and CTC on global sequential feature learning in an end-to-end fashion. The best baseline method achieves 35.6% WER on the validation set and 34.5% WER on the test set; it employs a better decoding algorithm for generating pseudo-labels to perform EM-like optimization and fine-tune the CNN module. In contrast, our approach focuses on better feature extraction for end-to-end learning.
To alleviate overfitting on the limited dataset, we employ temporal elastic deformation to triple the real-world dataset RWTH-PHOENIX-Weather 2014. Experimental results on this dataset demonstrate the effectiveness of our approach, which achieves 31.7% WER on the validation set and 31.2% WER on the test set. Even though sign language recognition can, to some extent, help bridge the communication gap, its output is still organized in sign language grammar, which differs from spoken language. Unlike sign language recognition, which recognizes sign gestures, sign language translation (SLT) converts sign language into target spoken-language text of the kind hearing people commonly use in daily life. To achieve this goal, this thesis provides an effective sign language translation approach that attains state-of-the-art performance on the largest real-life German sign language translation database, RWTH-PHOENIX-Weather 2014T. In addition, a direct end-to-end sign language translation approach yields promising results (an impressive gain from 9.94 to 13.75 BLEU on the validation set and from 9.58 to 14.07 BLEU on the test set) without intermediate recognition annotations. The comparative and promising experimental results show the feasibility of direct end-to-end SLT.
4

Erozel, Guzen. „Natural Language Interface On A Video Data Model“. Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/12606251/index.pdf.

Annotation:
Video databases and the retrieval of data from them have become popular in various business areas with improvements in technology. As a kind of video database, video archive systems need user-friendly interfaces to retrieve video frames. In this thesis, an NLP-based user interface to a video database system is developed using a content-based spatio-temporal video data model. The data model is focused on semantic content, which includes objects, activities, and spatial properties of objects. Spatio-temporal relationships between video objects and trajectories of moving objects can also be queried with this data model. In this video database system, the NL interface enables flexible querying. The queries, which are given as English sentences, are parsed using Link Parser. Not only exact matches but also similar objects and activities are returned from the database with the help of the conceptual ontology module, so that all related frames are returned to the user. This module is implemented using a distance-based method of semantic similarity search on the semantic, domain-independent ontology WordNet. The semantic representations of the given queries are extracted from their syntactic structures using information extraction techniques. The extracted semantic representations are used to call the related parts of the underlying spatio-temporal video data model to calculate the results of the queries.
5

Adam, Jameel. „Video annotation wiki for South African sign language“. Thesis, University of the Western Cape, 2011. http://etd.uwc.ac.za/index.php?module=etd&action=viewtitle&id=gen8Srv25Nme4_1540_1304499135.

Annotation:

The SASL project at the University of the Western Cape aims at developing a fully automated translation system between English and South African Sign Language (SASL). Three important aspects of this system require SASL documentation and knowledge: recognition of SASL from a video sequence, linguistic translation between SASL and English, and the rendering of SASL. Unfortunately, SASL documentation is a scarce resource, and no official or complete documentation exists. This research focuses on creating an online collaborative video annotation knowledge management system for SASL, where various members of the community can upload SASL videos and annotate them in any of the sign language notation systems SignWriting, HamNoSys and/or Stokoe. As such, knowledge about SASL structure is pooled into a central and freely accessible knowledge base that can be used as required. The usability and performance of the system were evaluated. The usability of the system was graded by users on a rating scale from one to five for a specific set of tasks. The system was found to have an overall usability of 3.1, slightly better than average. The performance evaluation included load and stress tests, which measured the system response time for a number of users performing a specific set of tasks. It was found that the system is stable and can scale up to cater for a growing user base by improving the underlying hardware.

6

Ou, Yingzhe, and 区颖哲. „Teaching Chinese as a second language through video“. Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2011. http://hub.hku.hk/bib/B48368714.

Annotation:
Under the guidance of scientific development, many schools and educational institutions currently encourage the introduction of multimedia into the classroom, with video teaching as one of the methods of multimedia teaching. There has been a great deal of research on the application of multimedia in the classroom, but most of it has focused on subjects that require graphic analysis rather than on language teaching, and within second language teaching most research has concerned English as a second language. This study aims to explore whether video teaching can help improve students' learning outcomes in teaching Chinese as a second language, while also enlivening the classroom and improving students' learning motivation. Drawing on the existing literature on the application of multimedia to language teaching and acquisition, the author designed an experiment suited to the teaching-practicum school and adopted methods such as classroom data collection, comparative analysis, questionnaires, interviews and class observation to analyze the question, reaching the following conclusions: 1. Students' learning outcomes differed under different capability requirements; they performed better on memory-oriented items than on capability-oriented items, and neither showed a significant correlation with prior knowledge. 2. The effect of memory-oriented items in video teaching was worse than in non-video teaching, while the difference on capability-oriented items was not significant. 3. Video teaching can effectively improve students' learning motivation and concentration in class.
Master of Education
7

Addis, Pietro <1991>. „The Age of Video Games: Language and Narrative“. Master's Degree Thesis, Università Ca' Foscari Venezia, 2017. http://hdl.handle.net/10579/10634.

Annotation:
The language of video games has acquired importance throughout the years, from its birth to its development, and has changed and modified its vocabulary, following the trends and the will of the gaming community, namely the creators and users of this language. Furthermore, the narrative medium of video games has broadened the definition of 'narrative', giving the term new significance and providing its users with a new type of experience, that of participatory narration. However, the participatory narrative found in video games has not revolutionized or created something entirely new. Following previous studies on the two subjects, the aim of the dissertation is to analyse the main characteristics of Internet language, with a particular focus on the language of video games, and to discuss the elements which construct the narrative of a video game (i.e. how the narrative is constructed and what narrative elements are used). Moreover, the thesis will also provide new examples of video-game term coinages and will discuss, in connection with the narrative, the issue of morality in video games.
8

Muir, Laura J. „Content-prioritised video coding for British Sign Language communication“. Thesis, Robert Gordon University, 2007. http://hdl.handle.net/10059/177.

Annotation:
Video communication of British Sign Language (BSL) is important for remote interpersonal communication and for the equal provision of services for deaf people. However, the use of video telephony and video conferencing applications for BSL communication is limited by inadequate video quality. BSL is a highly structured, linguistically complete, natural language system that expresses vocabulary and grammar visually and spatially using a complex combination of facial expressions (such as eyebrow movements, eye blinks and mouth/lip shapes), hand gestures, body movements and finger-spelling that change in space and time. Accurate natural BSL communication places specific demands on visual media applications which must compress video image data for efficient transmission. Current video compression schemes apply methods to reduce statistical redundancy and perceptual irrelevance in video image data based on a general model of Human Visual System (HVS) sensitivities. This thesis presents novel video image coding methods developed to achieve the conflicting requirements for high image quality and efficient coding. Novel methods of prioritising visually important video image content for optimised video coding are developed to exploit the HVS spatial and temporal response mechanisms of BSL users (determined by Eye Movement Tracking) and the characteristics of BSL video image content. The methods implement an accurate model of HVS foveation, applied in the spatial and temporal domains, at the pre-processing stage of a current standard-based system (H.264). Comparison of the performance of the developed and standard coding systems, using methods of video quality evaluation developed for this thesis, demonstrates improved perceived quality at low bit rates. BSL users, broadcasters and service providers benefit from the perception of high quality video over a range of available transmission bandwidths. 
The research community benefits from a new approach to video coding optimisation and better understanding of the communication needs of deaf people.
9

Laveborn, Joel. „Video Game Vocabulary: The effect of video games on Swedish learners' word comprehension“. Thesis, Karlstad University, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-5487.

Annotation:

Video games are very popular among children in the Western world. This study investigated whether video games had an effect on 49 Swedish students' comprehension of English words (grades 7-8). The investigation was based on questionnaire and word test data. The questionnaire aimed to measure how frequently students played video games, and the word test aimed to measure their word comprehension in general. In addition, data from the word test were used to investigate how students explained the words. Depending on their explanations, students were categorized as using either a "video game approach" or a "dictionary approach".

The results showed a gender difference, both with regard to the frequency of playing and the types of games that were played. Playing video games seemed to increase the students' comprehension of English words, though there was no clear connection between the frequency with which students played video games and the choice of a dictionary or video game approach as an explanation.

10

Lopes, Solange Aparecida. „A descriptive study of the interaction behaviors in a language video program and in live elementary language classes using that video program“. Diss., This resource online, 1996. http://scholar.lib.vt.edu/theses/available/etd-10052007-143033/.

11

Mertzani, Maria. „Video-Based Computer Mediated Communication for Sign Language Learning“. Thesis, University of Bristol, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.499929.

12

Gruba, Paul Andrew. „The role of digital video media in second language listening comprehension“. Online version, 1999. http://repository.unimelb.edu.au/10187/1520.

Annotation:
The aim of this investigation was to examine the role of visual elements in second language listening comprehension when digital video was used as a mode of presentation. Despite the widespread use of video in listening instruction, little is known at present about how learners attend to dual-coded media and, in particular, how visual elements may influence comprehension processes. (For complete abstract open document)
13

Sawarng, Pupatwibul Rhodes Dent. „A prototype for teaching ecology in Thai language through interactive video“. Normal, Ill. Illinois State University, 1992. http://wwwlib.umi.com/cr/ilstu/fullcit?p9227173.

Annotation:
Thesis (Ed. D.)--Illinois State University, 1992.
Title from title page screen, viewed January 18, 2006. Dissertation Committee: Dent M. Rhodes (chair), Robert L. Fisher, Dale E. Birkenholz, Larry D. Kennedy, Deborah B. Gentry. Includes bibliographical references (leaves 63-67) and abstract. Also available in print.
14

McCoy, Dacia M. „Video Self-modeling with English Language Learners in the Preschool Setting“. University of Cincinnati / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1439294548.

15

Ornelas, Claudia. „Development of the video suggestibility scale for children spanish-language version /“. To access this resource online via ProQuest Dissertations and Theses @ UTEP, 2009. http://0-proquest.umi.com.lib.utep.edu/login?COPT=REJTPTU0YmImSU5UPTAmVkVSPTI=&clientId=2515.

16

Keeler, Farrah Dawn. „Developing an Electronic Film Review for October Sky“. Diss., 2005. http://contentdm.lib.byu.edu/ETD/image/etd800.pdf.

17

Roh, Jaemin. „The effects of cultural video resources on teaching and learning Korean language“. Thesis, Boston University, 2011. https://hdl.handle.net/2144/33544.

Annotation:
Thesis (Ed.D.)--Boston University
This dissertation sought to evaluate the potential of a customized, video-based instructional method, the Cultural Video Project (CVP), which was designed to meet the needs of both heritage and non-heritage students learning Korean as a second language in a university setting. The goal of this study was to design and create the CVP, document its implementation, and then assess the effects the CVP had on areas that speakers of English tend to find difficult, such as the acquisition of the Korean honorific system. The CVP was a series of short authentic Korean video clips and matching worksheets created by the researcher. The videos were adapted from contemporary Korean broadcasting programs and Korean films. The CVP videos were used as lessons during face-to-face classroom sessions, and after each lesson the videos were available on the school's Internet courseware for students to use for individual practice and review. Each of the CVP video segments displayed linguistic structures, vocabulary, idiomatic expressions and cultural conventions that were partly addressed in the course's Elementary Korean materials. The participating professor, Professor Q, helped select the video segments and co-authored the matching worksheets in cooperation with the researcher throughout the preparation and implementation period. During interviews, Professor Q reported changes in her teaching philosophy while creating and implementing the CVP method. She reported that the video technology, combined with the use of the university's courseware, had a positive impact on her students' Korean learning experience, such as heightened interest and intense attention, which helped make the classroom sessions dynamic and interactive.
Students reported their responses to the CVP in various forms: interviews, written self-reports, in-class observation reports, exam results and two forms of standard school course evaluations. The findings reveal that through the CVP practice, students increased their cultural understanding, improved their listening skills, and improved their understanding of language use in a variety of culturally specific social situations.
18

Purushotma, Ravi. „Communicative 2.0 : video games and digital culture in the foreign language classroom“. Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/39145.

Annotation:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Comparative Media Studies, 2006.
I explore two core concepts in today's youth entertainment culture that will become increasingly central in future attempts to design affordable foreign language learning materials that hope to bridge the chasm between education and foreign popular culture. In the process, I outline a series of example applications that apply these concepts to developing rich foreign language materials, starting with more experimental, long-term approaches such as using video game modding techniques to make language-learning-friendly video games, and ending with more concrete, ready-to-go applications like extending open source content management applications. The first concept I look at is that of "remix culture." In short, remix culture describes the way in which youth culture today more visibly orients itself around creating media by extracting component pieces from other people's media creations, then connecting them together to form something new. In the video game world this phenomenon is more specifically termed 'modding.' In this process, amateur fans take a professional commercial game title and then modify it in creative ways that the original designers may not have considered. Outside of video games, we see terms like "web 2.0" used to describe technologies that allow website viewers to play a role in authoring additions to the sites they are reading, or "mashups," where users use programming interfaces to rapidly create web content by mashing together pieces from different sources. The second emerging concept critical for curricular designers to follow is that of transmedia storytelling. Traditionally, one might assume a model in which distinct media forms serve distinct cultural practices: television or novels tell stories, video games are for play, blogs are for socializing and textbooks are for learning. While initially this may have been the case, as each of the media forms above has evolved, it has expanded to cover multiple other cultural practices, often by extending across other media forms. By following the evolution of the interactions between these various media forms and activities within entertainment industries, we can find valuable insight when forecasting their possible interactions in the education industry.
19

Erasmus, Daniel. „Video quality requirements for South African Sign Language communications over mobile phones“. Master's thesis, University of Cape Town, 2012. http://hdl.handle.net/11427/6395.

Annotation:
Includes abstract.
Includes bibliographical references.
This project aims to find the minimum video resolution and frame rate that supports intelligible cell phone based video communications in South African Sign Language.
20

Zhang, Yunxin. „Constructing Memories: A Case for Using Video in the Chinese Language Classroom“. The Ohio State University, 2003. http://rave.ohiolink.edu/etdc/view?acc_num=osu1392044490.

21

Bado, Niamboue. „Video Games and English as a Foreign Language Education in Burkina Faso“. Ohio University / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1395498334.

22

Laws, Dannielle Kaye. „Gaming in Conversation: The Impact of Video Games in Second Language Communication“. University of Toledo / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1461800075.

23

Chakravarthy, Gitu. „The preparation of English language teachers in Malaysia : a video-based approach“. Thesis, Bangor University, 1993. https://research.bangor.ac.uk/portal/en/theses/the-preparation-of-english-language-teachers-in-malaysia--a-videobased-approach(7a3dc1c6-696c-4f5d-af35-b7059df803d5).html.

24

Javetz, Esther. „Effects of using guided (computer-controlled videotapes) and unguided (videotapes) listening practices on listening comprehension of novice second language learners /“. The Ohio State University, 1988. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487332636473769.

25

Murnane, Owen D., and Kristal M. Riska. „The Video Head Impulse Test“. Digital Commons @ East Tennessee State University, 2018. https://dc.etsu.edu/etsu-works/1978.

Annotation:
Book Summary: Dizziness comes in many forms in each age group – some specific to an age group (e.g. benign paroxysmal vertigo of childhood) while others span the age spectrum (e.g., migraine-associated vertigo). This content organizes evaluation and management of the dizzy patient by age to bring a fresh perspective to seeing these often difficult patients.
26

Neyra-Gutierrez, Andre, and Pedro Shiguihara-Juarez. „Feature Extraction with Video Summarization of Dynamic Gestures for Peruvian Sign Language Recognition“. Institute of Electrical and Electronics Engineers Inc, 2020. http://hdl.handle.net/10757/656630.

Annotation:
The full text of this work is not available in the UPC Academic Repository due to restrictions imposed by the publisher.
In Peruvian Sign Language (PSL), recognition of static gestures has been proposed before. However, to hold a conversation in sign language, it is also necessary to employ dynamic gestures. We propose a method to extract a feature vector for dynamic gestures of PSL. We collect a dataset with 288 video sequences of words related to dynamic gestures and define a workflow to process the keypoints of the hands, obtaining a feature vector for each video sequence with the support of a video summarization technique. We employ 9 neural networks to test the method, achieving an average accuracy ranging from 80% to 90% using 10-fold cross-validation.
27

Kim, Joong-Won Education Faculty of Arts & Social Sciences UNSW. „Second language English listening comprehension using different presentations of pictures and video cues“. Awarded by: University of New South Wales. School of Education, 2003. http://handle.unsw.edu.au/1959.4/19065.

Full text of the source
Annotation:
The study tested the effects of different presentations using pictures and video cues for improving listening comprehension of English news programs. Four experiments are reported, studying listening comprehension of English as a second/foreign language with 687 Korean secondary students. Comparisons on listening comprehension showed better performance with visual cues than with no visual cues. Listening comprehension with video cues was more successful than that with pictures. The advantage of the combination of verbal and visual information over the presentation of verbal information alone was in accord with dual coding theory. When contextual information presented using priming techniques was compared to using feedback and simultaneous presentations, listening comprehension was better using priming. In the comparison of feedback with simultaneous presentations, listening comprehension was improved more when pictures with headlines were presented using feedback than using simultaneous presentations. In contrast, no differences were found between feedback and simultaneous presentations when video cues with headlines were presented. Visual cues with headlines presented using priming might enable learners to activate prior knowledge or schemata to improve listening comprehension. Headlines presented at the beginning stage of listening were effective for listening comprehension. In addition, the effects of presentations were enlarged by adding headlines to visuals. Applying the priming presentation along with the enrichment of contextual cues resulted in improved listening comprehension. Less proficient students benefited relatively more from the contextual cues with headlines and pictorial cues for comprehending the news than more proficient students. In particular, for less proficient students, video cues with headlines were more helpful in listening comprehension than pictures with headlines. 
This was because more abundant visual cues such as paralinguistic cues were more likely to be provided in video than in picture formats. The best listening comprehension occurred when presenting pictorial cues with headlines using priming presentation. The present study concluded that more abundant pictorial cues were useful for improving listening comprehension. Headlines added to the pictorial cues improved performance, especially for less proficient students, who benefited relatively more. The pictorial cues with headlines presented using a 'priming' technique were most effective in improving listening comprehension, probably because they activated prior knowledge or schemata.
APA, Harvard, Vancouver, ISO and other citation styles
28

Gill, Saran Kaur. „The appropriateness of video materials for teaching of English as an international language“. Thesis, University College London (University of London), 1990. http://discovery.ucl.ac.uk/10006558/.

Full text of the source
Annotation:
Researching the appropriateness of video materials for learners of EIL has required in-depth discussion of the role of the medium of video in the field of cross-cultural communication in an EIL context: the ASEAN countries generally and Malaysia specifically. This has drawn into the picture two perspectives. The first is sociocultural: the consideration of the role of English as an international language in Malaysia and the other ASEAN countries, the recommendation of a suitable pedagogical model of speech for audio-visual materials in Malaysia, and the components of cross-cultural communication that are essential for any language learner who aspires to communicate in English with persons who come from varying sociocultural backgrounds. The second perspective is that of the role of video in intercultural language teaching. What is it in the medium that enables it to play a pivotal role in delivering the message, namely aspects of cross-cultural communication? These perspectives provide the background to the main research question at hand, which is: how appropriate in sociocultural content and design are ELT video materials for language learners in Malaysia? ELT video materials have been commercially produced since the mid-1970s. The majority of these materials are based in Western sociocultural settings, portraying native speakers interacting with each other. Despite the dominant role of English as an international language, linking countries communicatively that would otherwise have great difficulty doing so, there has been minimal change in the sociocultural nature of the materials. Therefore, this research aims to investigate, via critical analysis and questionnaires, the appropriateness of the sociocultural and design features of existing and potential ELT video materials for EIL language learners.
The information from the two sources will, it is hoped, provide useful recommendations for the future production of appropriate ELT video materials for EIL language learners in Malaysia specifically and the ASEAN countries generally.
APA, Harvard, Vancouver, ISO and other citation styles
29

Johnson, Marie A. F. „Video Modeling: Building Language and Social Skills in Individuals with Autism Spectrum Disorders“. Digital Commons @ East Tennessee State University, 2015. https://dc.etsu.edu/etsu-works/1545.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
30

Bengtsson, Andreas. „Watching video or studying? : An investigation of the extramural activities and Japanese language proficiency of foreign language learners of Japanese“. Thesis, Stockholms universitet, Centrum för tvåspråkighetsforskning, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-104769.

Full text of the source
Annotation:
This study examined the extramural activities in Japanese (that is, what a language learner does with the target language outside of class time) of adult beginner-level foreign language learners of Japanese studying at Stockholm University, Sweden, and how these activities relate to Japanese language proficiency. The study looked at both extramural activities and foreign language proficiency from a holistic and quantitative perspective. The participants' extramural activities were measured through self-reported data in a questionnaire, and several measures (a cloze test, earlier grades, and self-evaluations) were triangulated and used to provide an adequate measure of general Japanese language proficiency. The results indicate that extramural activities which provide a foreign language learner with enough time for thorough processing of input, and which offer support through the usage of several cooperating modalities, seem to have a positive effect on general foreign language acquisition.
APA, Harvard, Vancouver, ISO and other citation styles
31

Ramos, Ascencio Lucía Ivanette. „Adaptación del lenguaje escrito al lenguaje audiovisual: Flamenca en la propuesta visual de “Di mi nombre” de Rosalía“. Bachelor's thesis, Universidad Peruana de Ciencias Aplicadas (UPC), 2019. http://hdl.handle.net/10757/652429.

Full text of the source
Annotation:
Through the years, cinematography has evolved radically, adapting to different proposals and aesthetics that change year by year. One extension of it is the video clip, a format through which great artists in the music industry have had the opportunity to distribute their work and generate interest among viewers. Over time, it too has evolved, becoming one of the most important formats used by the music industry. While different proposals, both in cinema and in music videos, draw inspiration from third parties, one technique that is less common but still present is the adaptation of a text to a visual format, which is much more frequent in the film industry. This raises the question of why this technique is rarely seen in proposals for music videos, given that they are a way to express, through the music and lyrics of songs, a much richer story. The present work seeks to expose and describe, through a thorough process of observation and analysis, the adaptation of a text to a visual proposal, taking Flamenca, a thirteenth-century Occitan book, and the music video "Di mi nombre" by the Spanish artist Rosalía as our research objects, and demonstrating how an old text can be represented in a contemporary visual proposal.
Research paper
APA, Harvard, Vancouver, ISO and other citation styles
32

Hauck, Mark Anthony. „A study into the form-language of art and its application to single camera video production“. Kutztown University (electronic resource; access available to Kutztown University faculty, staff, and students only), 1990. http://www.kutztown.edu/library/services/remote_access.asp.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
33

Vidlund, Anna. „English in video and online computer games : Potential enhancement of players’ vocabulary“. Thesis, Linnéuniversitetet, Institutionen för språk (SPR), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-28402.

Full text of the source
Annotation:
The aim of this essay is to determine whether playing video and online computer games as a leisure activity can be considered a learning situation. With a focus on vocabulary, this study investigates the possibility that gamers could improve their language proficiency while playing video and online computer games. The methodology is based on qualitative observations (Patel & Davidson 2011) and on interviews with seven players of five different games. The observations mainly considered the vocabulary used in the games and how the players used the English language while playing. The interviews were constructed following the methodology described by Kylén (2004); the interview questions aimed to establish whether the players had noticed an improvement in their vocabulary. The data are mainly acquired from the observations and interviews. The background sections build on studies relating to computer-based language learning and on previous research on ELF, primarily from Barbara Seidlhofer (2011). Even though the data acquired from the observations and interviews are limited, it is apparent that video and online computer games have a noticeable impact on language development regarding vocabulary. The main conclusion of this study is that the games themselves do not influence the players’ language proficiency as considerably as engagement with the functions surrounding the games.
APA, Harvard, Vancouver, ISO and other citation styles
34

Bull, Hannah. „Learning sign language from subtitles“. Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG013.

Full text of the source
Annotation:
Sign languages are an essential means of communication for deaf communities. Sign languages are visuo-gestural languages using the modalities of hand gestures, facial expressions, gaze and body movements. They possess rich grammar structures and lexicons that differ considerably from those found among spoken languages. The uniqueness of the transmission medium, structure and grammar of sign languages requires distinct methodologies. The performance of automatic translation systems between high-resource written languages or spoken languages is currently sufficient for many daily use cases, such as translating videos, websites, emails and documents. On the other hand, automatic translation systems for sign languages do not exist outside of very specific use cases with limited vocabulary. Automatic sign language translation is challenging for two main reasons. Firstly, sign languages are low-resource languages with little available training data. Secondly, sign languages are visual-spatial languages with no written form, naturally represented as video rather than audio or text. To tackle the first challenge, we contribute large datasets for training and evaluating automatic sign language translation systems with both interpreted and original sign language video content, as well as written text subtitles. Whilst interpreted data allows us to collect large numbers of hours of videos, original sign language video is more representative of sign language usage within deaf communities. Written subtitles can be used as weak supervision for various sign language understanding tasks. To address the second challenge, we develop methods to better understand visual cues from sign language video. Whilst sentence segmentation is mostly trivial for written languages, segmenting sign language video into sentence-like units relies on detecting subtle semantic and prosodic cues from sign language video.
We use prosodic cues to learn to automatically segment sign language video into sentence-like units, determined by subtitle boundaries. Expanding upon this segmentation method, we then learn to align text subtitles to sign language video segments using both semantic and prosodic cues, in order to create sentence-level pairs between sign language video and text. This task is particularly important for interpreted TV data, where subtitles are generally aligned to the audio and not to the signing. Using these automatically aligned video-text pairs, we develop and improve multiple different methods to densely annotate lexical signs by querying words in the subtitle text and searching for visual cues in the sign language video for the corresponding signs.
APA, Harvard, Vancouver, ISO and other citation styles
35

Buco, Stefani. „The video essay as a persuasive genre: A qualitative genre analysis with a focus on evaluative and persuasive linguistic features“. Thesis, Stockholms universitet, Engelska institutionen, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-159814.

Full text of the source
Annotation:
So-called ‘video essays’ on films and cinema have gained substantial popularity on the video-sharing site YouTube in recent years. This essay explores this relatively recent type of video production from the perspective of genre analysis in order to investigate whether a pattern of form, content and style can be identified, which would suggest the emergence of a new genre. Previous research has investigated a similar genre, the film review, by identifying its pervasive or obligatory moves or stages (Taboada, 2011; de Jong & Burgers, 2013). However, video essays seem to be a rather subjective form of communication with a clear persuasive purpose. For this reason, linguistic elements expressing evaluation, assessment, feelings and opinions are analyzed under the umbrella term for evaluative language use, Appraisal (White, 2015). Five video essays from different creators were chosen for the present analysis, which is focused on situational, structural, and Appraisal elements. The analysis shows that there are indeed similarities between the video essays, pertaining both to their situational context and structure and to their use of evaluative language. Several overall pervasive moves were found, which suggests that the essays follow a specific structural pattern. The evaluative language indicates an intention of persuading the viewer.
APA, Harvard, Vancouver, ISO and other citation styles
36

Alaei, Bahareh B. „Producing as a listener: A choric approach to video as a medium of invention“. Thesis, California State University, Long Beach, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=1526888.

Full text of the source
Annotation:

For over two decades, scholars in rhetoric and composition studies have been invested in helping to shape and adapt writing studies as institutions of higher learning negotiate conceptualizations of subjects and knowledge production in digital culture. The canon of invention, in particular, has propelled forth theories and practices that resist hermeneutic modes of knowledge production and instead advocate invention as performance. Inspired by the aforementioned scholarship, Victor Vitanza's call for knowledge production that relies on the language games of paralogy, Gregory Ulmer's heuretics, and Sarah Arroyo and Geoffrey Carter's participatory pedagogy, this thesis puts forth a method of invention entitled "producing as a listener." This methodology harnesses the potential of video editing software and video sharing ecologies as choric sites of invention, relies on the reconceptualization of subjects as whatever singularities, and invites electrate and proairetic lines of reasoning wherein video composers invent and write as listeners.

APA, Harvard, Vancouver, ISO and other citation styles
37

Skalban, Yvonne. „Automatic generation of factual questions from video documentaries“. Thesis, University of Wolverhampton, 2013. http://hdl.handle.net/2436/314607.

Full text of the source
Annotation:
Questioning sessions are an essential part of teachers’ daily instructional activities. Questions are used to assess students’ knowledge and comprehension and to promote learning. The manual creation of such learning material is a laborious and time-consuming task. Research in Natural Language Processing (NLP) has shown that Question Generation (QG) systems can be used to efficiently create high-quality learning materials to support teachers in their work and students in their learning process. A number of successful QG applications for education and training have been developed, but these focus mainly on supporting reading materials. However, digital technology is always evolving; there is an ever-growing amount of multimedia content available, and more and more delivery methods for audio-visual content are emerging and easily accessible. At the same time, research provides empirical evidence that multimedia use in the classroom has beneficial effects on student learning. Thus, there is a need to investigate whether QG systems can be used to assist teachers in creating assessment materials from these different types of media that are being employed in classrooms. This thesis serves to explore how NLP tools and techniques can be harnessed to generate questions from non-traditional learning materials, in particular videos. A QG framework which allows the generation of factual questions from video documentaries has been developed and a number of evaluations to analyse the quality of the produced questions have been performed. The developed framework uses several readily available NLP tools to generate questions from the subtitles accompanying a video documentary. The reason for choosing video documentaries is two-fold: firstly, they are frequently used by teachers and secondly, their factual nature lends itself well to question generation, as will be explained within the thesis.
The questions generated by the framework can be used as a quick way of testing students’ comprehension of what they have learned from the documentary. As part of this research project, the characteristics of documentary videos and their subtitles were analysed and the methodology has been adapted to exploit these characteristics. An evaluation of the system output by domain experts showed promising results but also revealed that generating even shallow questions is a task which is far from trivial. To this end, the evaluation and subsequent error analysis contribute to the literature by highlighting the challenges QG from documentary videos can face. In a user study, it was investigated whether questions generated automatically by the system developed as part of this thesis and by a state-of-the-art system can successfully be used to assist multimedia-based learning. Using a novel evaluation methodology, the feasibility of using a QG system’s output as ‘pre-questions’, with different types of pre-questions used (text-based and with images), was examined. The psychometric parameters of the questions generated automatically by the two systems and of those generated manually were compared. The results indicate that the presence of pre-questions (preferably with images) improves the performance of test-takers, and they highlight that the psychometric parameters of the questions generated by the system are comparable if not better than those of the state-of-the-art system. In another experiment, the productivity of questions in terms of time taken to generate questions manually vs. time taken to post-edit system-generated questions was analysed. A post-editing tool which allows for the tracking of several statistics, such as edit distance measures and editing time, was used. The quality of questions before and after post-editing was also analysed.
Not only did the experiments provide quantitative data about automatically and manually generated questions, but qualitative data in the form of user feedback, which provides an insight into how users perceived the quality of questions, was also gathered.
APA, Harvard, Vancouver, ISO and other citation styles
38

Holtmeier, Matthew. „Combining Critical and Creative Modalities through the Video Essay“. Digital Commons @ East Tennessee State University, 2020. https://dc.etsu.edu/etsu-works/7819.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
39

Powers, Jennifer Ann. „"Designing" in the 21st century English language arts classroom: processes and influences in creating multimodal video narratives“. [Kent, Ohio]: Kent State University, 2007. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=kent1194639677.

Full text of the source
Annotation:
Thesis (Ph.D.)--Kent State University, 2007.
Title from PDF t.p. (viewed Mar. 31, 2008). Advisor: David Bruce. Keywords: multiliteracies, multi-modal literacies, language arts education, secondary education, video composition. Includes survey instrument. Includes bibliographical references (p. 169-179).
APA, Harvard, Vancouver, ISO and other citation styles
40

Murnane, Owen D., Stephanie M. Byrd, C. Kidd and Faith W. Akin. „The Video Head Impulse Test“. Digital Commons @ East Tennessee State University, 2013. https://dc.etsu.edu/etsu-works/1883.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
41

Murnane, Owen D., H. Mabrey, A. Pearson, Stephanie M. Byrd and Faith W. Akin. „The Video Head Impulse Test“. Digital Commons @ East Tennessee State University, 2012. https://dc.etsu.edu/etsu-works/1888.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
42

Gardner, David. „Evaluating user interaction with interactive video : users' perceptions of self access language learning with MultiMedia Movies“. Thesis, Open University, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.251394.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
43

Tecedor, Cabrero Marta. „Developing Interactional Competence Through Video-Based Computer-Mediated Conversations: Beginning Learners of Spanish“. Diss., University of Iowa, 2013. https://ir.uiowa.edu/etd/4918.

Full text of the source
Annotation:
This dissertation examines the discourse produced by beginning learners of Spanish using social media. Specifically, it looks at the use and development of interactional resources during two video-mediated conversations. Through a combination of Conversation Analysis tools and quantitative data analysis, the use of turn-taking strategies, repair trajectories, and alignment moves was examined to discover how beginning language learners manage videoconferencing exchanges and develop their interactional capabilities in this new interactional setting. The goal of this investigation was twofold: 1) to describe and explain how students construct, manage and maintain conversations via videoconferencing, and 2) to gain a better understanding of the links between technology-based social media and language learning. The results of this study indicate that instructional videoconferencing conversations display their own clearly delimited and idiosyncratic organization of interactional features. In terms of turn-taking, the results of the analyses demonstrate that beginning learners are fully capable of participating competently in speaker selection to manage a conversation with a peer of similar proficiency level. In the area of repair, the analyses show that, during instructional videoconferencing exchanges, beginning learners orient to both the communication of personal meaning and the accuracy of their discourse. They enact this orientation through the use of self-initiated self-repair. Finally, with regard to the use of alignment moves, the analyses reveal that, in tune with their nascent linguistic and interactional abilities, beginning learners use primarily acknowledgement moves.
APA, Harvard, Vancouver, ISO and other citation styles
44

Murnane, Owen D. „The Video Head Impulse Test“. Digital Commons @ East Tennessee State University, 2013. https://dc.etsu.edu/etsu-works/1931.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
45

Young, Eric H. „Promoting Second Language Learning Through Oral Asynchronous Computer-Mediated Communication“. BYU ScholarsArchive, 2018. https://scholarsarchive.byu.edu/etd/7051.

Full text of the source
Annotation:
Learning to speak a foreign language (L2) can be a challenging feat, made all the more challenging when done only in 50-minute daily increments in class. Oral asynchronous computer-mediated communication (ACMC) provides learners with opportunities to practice spoken communication and evaluate their practice outside the classroom. In this dissertation, I explore methods for classroom integration of oral ACMC, linguistic traits developed in previous oral ACMC studies, methods for determining the effectiveness of oral ACMC, learner beliefs about the effectiveness of oral ACMC activities, and the effects of learners' deliberate practice in a series of oral ACMC activities on 3 measures of L2 fluency. In my first article, a literature review, I found that most studies on this topic focus on the linguistic traits of accuracy, fluency, and pronunciation, and determine L2 growth from oral ACMC activities through learner perceptions rather than objective measures. In my second article, I analyzed the fluency change of learners who participated in a series of video recording and feedback activities. I found that, although there were few significant results, the activities may be of some benefit to learners in improving their spoken fluency. I also found that structural equation modelling may be of more value for researching classroom-based activities than t tests and regression models. In my third article, I investigated the experiences of several learners who participated in the video recording activities described in article two. Based on these learner experiences, I provided key considerations for designing asynchronous video recording assignments. The three articles included in this dissertation will be valuable in highlighting key factors related to the design, development, research, and effective use of oral ACMC activities in foreign language classrooms.
APA, Harvard, Vancouver, ISO and other citation styles
46

Murray, Garold Linwood. „Bodies in cyberspace : language learning in a simulated environment“. Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp02/NQ27209.pdf.

The full content of the source
APA, Harvard, Vancouver, ISO and other citation styles
47

Zewary, Sayed Mustafa. „Visuals in foreign language teaching“. Thesis, Kansas State University, 2011. http://hdl.handle.net/2097/8778.

The full content of the source
Annotation:
Master of Arts
Department of Modern Languages
Mary T. Copple
This study investigates the effectiveness of visuals in the language classroom. Two types of visual aids commonly used in the language classroom, video and still pictures, are used to elicit narratives from L2 English speakers, and these narratives are subsequently compared. The data come from eleven international students in a university English Language Program who voluntarily participated in two separate 15-minute interviews. In each interview session, they were shown either a series of pictures or a video, both depicting a story. Upon completion of the presentation of each visual, participants were asked a prompt question and their narration of the events portrayed in the visuals was recorded. The narratives were transcribed and analyzed in order to test (1) whether still pictures and video are equally effective in eliciting elaboration in the narratives, defined in this case as the number of new referents introduced and the number of adjective and verb types produced; and (2) whether exposure to still pictures and video elicits narrations of similar length. Both kinds of visuals stimulated learners to create narratives and elaborate on what had been shown in them. The video task elicited narratives roughly 10% longer than the picture task in terms of raw word count. When linguistic factors were compared, participants introduced new referents at comparable rates in both tasks, while they employed 10% more verb types in the video task. Additionally, the series of still pictures prompted participants to employ a much higher number of adjective types. These observations suggest that a series of still pictures is an effective alternative to video for eliciting narratives. This study provides support for the use of still pictures as an equivalent to videos in situations where videos are less accessible in language classrooms (due to lack of technological access).
APA, Harvard, Vancouver, ISO and other citation styles
48

Silvestre, Cerdà Joan Albert. „Different Contributions to Cost-Effective Transcription and Translation of Video Lectures“. Doctoral thesis, Universitat Politècnica de València, 2016. http://hdl.handle.net/10251/62194.

The full content of the source
Annotation:
[EN] In recent years, on-line multimedia repositories have experienced strong growth that has consolidated them as essential knowledge assets, especially in the area of education, where large repositories of video lectures have been built in order to complement or even replace traditional teaching methods. However, most of these video lectures are neither transcribed nor translated due to a lack of cost-effective solutions that can do so with sufficient accuracy. Solutions of this kind are clearly necessary in order to make these lectures accessible to speakers of different languages and to people with hearing disabilities. They would also facilitate lecture searchability and analysis functions, such as classification, recommendation or plagiarism detection, as well as the development of advanced educational functionalities like content summarisation to assist student note-taking. For this reason, the main aim of this thesis is to develop a cost-effective solution capable of transcribing and translating video lectures to a reasonable degree of accuracy. More specifically, we address the integration of state-of-the-art techniques in Automatic Speech Recognition and Machine Translation into large video lecture repositories to generate high-quality multilingual video subtitles without human intervention and at a reduced computational cost. We also explore the potential benefits of exploiting the information that we know a priori about these repositories, that is, lecture-specific knowledge such as speaker, topic or slides, to create specialised, in-domain transcription and translation systems by means of massive adaptation techniques. The proposed solutions have been tested in real-life scenarios by carrying out several objective and subjective evaluations, obtaining very positive results.
The main outcome of this thesis, The transLectures-UPV Platform, has been publicly released as open-source software and, at the time of writing, is serving automatic transcriptions and translations for several thousand video lectures at many Spanish and European universities and institutions.
Silvestre Cerdà, JA. (2016). Different Contributions to Cost-Effective Transcription and Translation of Video Lectures [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/62194
APA, Harvard, Vancouver, ISO and other citation styles
49

Thompson, Scott Alan. „A Comparison of the Effects of Different Video Imagery Upon Adult ESL Students' Comprehension of a Video Narrative“. PDXScholar, 1994. https://pdxscholar.library.pdx.edu/open_access_etds/4845.

The full content of the source
Annotation:
This study was meant to provide empirical evidence to support or challenge the assumption that a nonfiction video narrative will be better comprehended by students of ESL if it includes a variety of relevant visual information rather than only a single speaker or "talking head" reciting a narration. The overarching goal of this study was to give teachers of ESL greater knowledge and confidence in using video materials to develop the listening skills of their students. It compared two video tapes which contained the identical soundtrack but different visual information. The first tape (also called the "lecture tape") showed a single speaker, standing behind a lectern, giving a speech about Costa Rica. The second video (also called the "documentary tape") contained the identical soundtrack of tape one, but included documentary video footage actually filmed in Costa Rica which complemented the narration. A questionnaire of 45 true/false questions was created based on facts given in the narration. Thirty-nine advanced and fifty-five intermediate university ESL students took part in the study. Approximately half of each group viewed the lecture tape while the other half watched the documentary tape. All students answered the 45-item questionnaire while viewing their respective video tapes. A thorough item analysis was then conducted with the initial raw scores of all 94 students, resulting in fifteen questions being omitted from the final analysis. Based on a revised 30-item questionnaire, the scores of the video and documentary groups were compared within each proficiency level. The hypothesis of the study was that the documentary tape would significantly improve listening comprehension at the intermediate level but that no significant difference would be found between the advanced lecture and documentary groups. In other words, it was predicted that the documentary video would have an interaction effect depending upon proficiency level.
However, the results of a two-way ANOVA did not support the hypothesis. In addition to the ANOVA, a series of t-tests also found no significant difference between the mean scores of the documentary and lecture groups at either the intermediate or the advanced level. This study was intended to be a beginning for research which may eventually reveal a "taxonomy" of video images, from those which enhance listening comprehension the most to those that aid it the least. It contained limitations in the testing procedures which caused the results to be inconclusive. A variety of testing methods was suggested in order to continue research which may reveal such a "video" taxonomy. Given the plethora of video materials that ESL teachers can purchase, record, or create themselves, empirical research is needed to help guide the choices that educators make in choosing video material for their students which will provide meaningful linguistic input.
APA, Harvard, Vancouver, ISO and other citation styles
50

Curry, Ryan H. „CHILDREN’S THEORY OF MIND, JOINT ATTENTION, AND VIDEO CHAT“. Case Western Reserve University School of Graduate Studies / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=case1616663322967054.

The full content of the source
APA, Harvard, Vancouver, ISO and other citation styles