Journal articles on the topic 'Video and language'

Consult the top 50 journal articles for your research on the topic 'Video and language.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Joshi, Prof Indira. "Video Summarization for Marathi Language." International Journal of Scientific Research in Engineering and Management 08, no. 05 (May 3, 2024): 1–5. http://dx.doi.org/10.55041/ijsrem32024.

Full text
Abstract:
The Video Summarization Platform using Python Flask is a comprehensive tool designed to summarize Marathi and English videos while providing summaries in Hindi, Marathi, and English. Leveraging machine learning and natural language processing (NLP) techniques, the platform offers a sophisticated solution for efficiently extracting key information from videos. The platform begins by transcribing the audio content of the video into text using automatic speech recognition (ASR) technology. This transcription process ensures that the platform can accurately analyze and summarize the video's content. Next, the text is translated into the target languages, namely Hindi, Marathi, and English, enabling users from diverse linguistic backgrounds to access the summarized content. To generate concise and informative summaries, an advanced NLP algorithm is applied. This algorithm analyzes the transcribed text to identify the most significant phrases, sentences, and concepts. By considering factors such as keyword frequency, semantic relevance, and context, the platform effectively distils the video's content into digestible summaries. Additionally, machine learning models are employed to classify the type of video content. These models are trained on diverse datasets encompassing various video genres and topics. By recognizing patterns and features within the video content, the platform can accurately categorize videos into distinct types, such as news, interviews, tutorials, or entertainment. The platform's user interface, powered by Python Flask, offers a seamless experience for users to upload videos, select their preferred language for summarization, and receive concise summaries in their chosen languages. The intuitive design ensures accessibility and ease of use, catering to both novice and advanced users. Overall, the Video Summarization Platform serves as a valuable resource for individuals seeking efficient ways to consume multimedia content.
Whether for educational, informational, or entertainment purposes, this platform empowers users to access summarized video content in multiple languages, facilitated by cutting-edge machine learning and NLP technologies. Key Words: Transcription, Marathi-speaking users, Marathi YouTube videos, video content, summary, translation, Natural Language Toolkit (NLTK), content comprehension, user interaction data, past summaries, recommendation
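The extractive step this abstract describes (scoring sentences by keyword frequency) can be sketched as a small frequency-based summarizer. This is an illustrative baseline, not the platform's implementation; the stop-word list and the scoring rule are simplified stand-ins.

```python
import re
from collections import Counter

# A tiny stop-word list for illustration; real systems use NLTK's full list.
STOPWORDS = {"the", "a", "an", "is", "are", "of", "to", "and", "in", "that", "it"}

def summarize(text, k=2):
    """Return the k highest-scoring sentences, in their original order."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]
    freq = Counter(words)

    def score(sent):
        # Average corpus frequency of the sentence's content words.
        toks = [w for w in re.findall(r"[a-z']+", sent.lower()) if w not in STOPWORDS]
        return sum(freq[t] for t in toks) / max(len(toks), 1)

    top = sorted(sentences, key=score, reverse=True)[:k]
    return " ".join(s for s in sentences if s in top)
```

A summarizer like this picks the sentences whose words recur most across the transcript, which is the "keyword frequency" signal the abstract mentions; semantic relevance and context would require heavier models.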
APA, Harvard, Vancouver, ISO, and other styles
2

Walther, Joseph B., German Neubaum, Leonie Rösner, Stephan Winter, and Nicole C. Krämer. "The Effect of Bilingual Congruence on the Persuasive Influence of Videos and Comments on YouTube." Journal of Language and Social Psychology 37, no. 3 (August 11, 2017): 310–29. http://dx.doi.org/10.1177/0261927x17724552.

Full text
Abstract:
Social media offer a global village in which user-generated comments from different groups and many languages appear. Extending the notion of prototypes in social identification theory, research examined the persuasive influence of comments supporting or deriding a public service announcement video on YouTube, where comments’ language either matched or differed from the videos’. Bilingual participants watched videos in English or Mandarin Chinese promoting water conservation, accompanied by comments in English or Mandarin that supported or derided the videos’ message. Results replicated previous findings about the valence of comments on public service announcement evaluations, overridden by an interaction between valence and language congruity: Comments in the same language as the videos’ affected readers’ evaluations of the video more than did comments in the language other than the videos’.
APA, Harvard, Vancouver, ISO, and other styles
3

Shipman, Frank M., Ricardo Gutierrez-Osuna, and Caio D. D. Monteiro. "Identifying Sign Language Videos in Video Sharing Sites." ACM Transactions on Accessible Computing 5, no. 4 (March 2014): 1–14. http://dx.doi.org/10.1145/2579698.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Sanders, D. C., L. M. Reyes, D. J. Osborne, D. R. Ward, and D. E. Blackwelder. "USING A BILINGUAL GAPS AND HAND-WASHING DVD TO TRAIN FRESH PRODUCE FIELD AND PACKINGHOUSE WORKERS." HortScience 41, no. 3 (June 2006): 498D—498. http://dx.doi.org/10.21273/hortsci.41.3.498d.

Full text
Abstract:
The Southeastern Fresh Produce Food Safety Training Program has been training extension agents across the southeastern U.S. since 2000. This program has utilized a variety of methods including group case study to enhance learning and promote team work. Multistate trainings have fostered collaboration between states and institutions. One goal of the program was to produce a method for agents to provide training that was repeatable and easy to implement. As a result, two videos were produced for use in training field and packinghouse workers. These videos were an English language good agricultural practices (GAPs) video entitled Bridging the GAPs: From the Farm to the Table and a Spanish language hand-washing video entitled ¡Lave sus Manos: Por Los Niños! This program has been very effective, but has faced challenges due to language barriers. Many field and packinghouse crews were mixed in terms of language with some crew members speaking only English while others spoke only Spanish. As a result, Spanish speakers were unable to access the information in the good agricultural practices video while English speakers were unable to access information in the hand-washing video. The solution was to produce a bilingual training aid that included both sets of information and has been compiled into a DVD containing the footage of both of the original videos in both languages. For the Spanish version of the GAPs video and the English of the hand-washing video, the audio of the video's original language was left at a low sound level and the audio of the alternate language was added. These DVDs are currently being distributed to extension programs in all of the cooperating states with the aim of reaching growers who want to start a food safety plan.
APA, Harvard, Vancouver, ISO, and other styles
5

Hiremath, Rashmi B., and Ramesh M. Kagalkar. "Sign Language Video Processing for Text Detection in Hindi Language." International Journal of Recent Contributions from Engineering, Science & IT (iJES) 4, no. 3 (October 26, 2016): 21. http://dx.doi.org/10.3991/ijes.v4i3.5973.

Full text
Abstract:
Sign language is a way of expressing oneself through the body, where expressions, intentions, and sentiments are conveyed by physical behaviours such as facial expressions, body posture, gestures, eye movements, touch, and the use of space. Non-verbal communication exists in both animals and humans, but this article concentrates on the interpretation of human non-verbal or sign language into Hindi textual expression. The proposed implementation uses image processing methods and artificial intelligence strategies to achieve sign video recognition. To carry out the proposed task, it applies image processing methods such as frame-analysis-based tracking, edge detection, wavelet transform, erosion, dilation, blur elimination, and noise elimination to training videos. It also uses elliptical Fourier descriptors (SIFT) for shape feature extraction and principal component analysis for feature set optimization and reduction. For result analysis, the paper uses videos from different categories, such as signs for weeks, months, and relations. A database of extracted outcomes is compared against the signer's input video by a trained fuzzy inference system.
APA, Harvard, Vancouver, ISO, and other styles
6

Anugerah, Rezza, Yohanes Gatot Sutapa Yuliana, and Dwi Riyanti. "THE POTENTIAL OF ENGLISH LEARNING VIDEOS IN FORM OF VLOG ON YOUTUBE FOR ELT MATERIAL WRITERS." Proceedings International Conference on Teaching and Education (ICoTE) 2, no. 2 (December 24, 2019): 224. http://dx.doi.org/10.26418/icote.v2i2.38232.

Full text
Abstract:
YouTube is the most popular video-sharing website, where millions of videos can be found online, including many for learning a language. Students can use YouTube to find language learning videos to help them study the language. Besides finding videos to use as teaching materials in the classroom, teachers can also create their own English videos for their students or even for a wider audience. Designing an English learning video is an opportunity for teachers to make videos suited to their teaching context. One type of content with great potential for development is the English learning video in the form of a vlog, which reaches a huge audience, especially among Indonesian learners. This research is aimed at analyzing the potential of English learning videos in the form of vlogs for ELT material writers.
APA, Harvard, Vancouver, ISO, and other styles
7

Liu, Yuqi, Luhui Xu, Pengfei Xiong, and Qin Jin. "Token Mixing: Parameter-Efficient Transfer Learning from Image-Language to Video-Language." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 2 (June 26, 2023): 1781–89. http://dx.doi.org/10.1609/aaai.v37i2.25267.

Full text
Abstract:
Applying large-scale pre-trained image-language models to video-language tasks has recently become a trend, which brings two challenges. One is how to effectively transfer knowledge from static images to dynamic videos, and the other is how to deal with the prohibitive cost of full fine-tuning due to growing model size. Existing works that attempt to realize parameter-efficient image-language to video-language transfer learning can be categorized into two types: 1) appending a sequence of temporal transformer blocks after the 2D Vision Transformer (ViT), and 2) inserting a temporal block into the ViT architecture. While these two types of methods only require fine-tuning the newly added components, there are still many parameters to update, and they are only validated on a single video-language task. In this work, based on our analysis of the core ideas of different temporal modeling components in existing approaches, we propose a token mixing strategy to enable cross-frame interactions, which enables transferring from the pre-trained image-language model to video-language tasks through selecting and mixing a key set and a value set from the input video samples. As token mixing does not require the addition of any components or modules, we can directly and partially fine-tune the pre-trained image-language model to achieve parameter efficiency. We carry out extensive experiments to compare our proposed token mixing method with other parameter-efficient transfer learning methods. Our token mixing method outperforms other methods on both understanding tasks and generation tasks. Besides, our method achieves new records on multiple video-language tasks. The code is available at https://github.com/yuqi657/video_language_model.
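The token mixing idea, letting frames interact by borrowing attention keys and values from other frames rather than adding temporal modules, can be illustrated with a toy NumPy sketch. The shapes and the "borrow from the next frame" rule below are assumptions for demonstration only, not the paper's actual selection strategy (see the linked repository for that).

```python
import numpy as np

def attention(q, k, v):
    """Plain scaled dot-product attention over one frame's tokens."""
    w = q @ k.T / np.sqrt(q.shape[-1])
    w = np.exp(w - w.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

def token_mixed_attention(frames, n_mix=2):
    """frames: (T, N, D) per-frame token embeddings.

    For each frame, replace its first n_mix keys/values with tokens from
    the next frame, so attention sees cross-frame context without adding
    any new parameters (illustrative mixing rule, not the paper's).
    """
    T, N, D = frames.shape
    out = np.empty_like(frames)
    for t in range(T):
        k = frames[t].copy()
        v = frames[t].copy()
        nxt = (t + 1) % T
        k[:n_mix], v[:n_mix] = frames[nxt, :n_mix], frames[nxt, :n_mix]
        out[t] = attention(frames[t], k, v)
    return out
```

With `n_mix=0` this reduces to ordinary per-frame self-attention, which is why the transfer is "parameter-efficient": only the mixing rule changes, not the model.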
APA, Harvard, Vancouver, ISO, and other styles
8

Moon, Nazmun Nessa, Imrus Salehin, Masuma Parvin, Md Mehedi Hasan, Iftakhar Mohammad Talha, Susanta Chandra Debnath, Fernaz Narin Nur, and Mohd Saifuzzaman. "Natural language processing based advanced method of unnecessary video detection." International Journal of Electrical and Computer Engineering (IJECE) 11, no. 6 (December 1, 2021): 5411. http://dx.doi.org/10.11591/ijece.v11i6.pp5411-5419.

Full text
Abstract:
In this study we describe the process of identifying unnecessary videos using an advanced combined method of natural language processing and machine learning. The system also includes a framework containing analytics databases, which helps to establish statistical accuracy and can detect and then accept or reject unnecessary and unethical video content. In our video detection system, we extract text data from video content in two steps: first from video to MPEG-1 audio layer 3 (MP3), and then from MP3 to WAV format. We use the text part of natural language processing to analyze and prepare the data set. We use both Naive Bayes and logistic regression classification algorithms in this detection system to determine the best accuracy for our system. In our research, the video MP4 data was converted to plain text using advanced Python library functions. This brief study discusses the identification of unauthorized, unsocial, unnecessary, unfinished, and malicious videos from spoken video record data. By analyzing our data sets through this advanced model, we can decide which videos should be accepted or rejected for further action.
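The accept/reject classification step described here can be illustrated with a minimal hand-rolled multinomial Naive Bayes over word counts. This is a toy sketch, not the authors' system: the training snippets and labels below are invented for demonstration, and a real pipeline would first extract text from the audio and would typically use a library such as scikit-learn.

```python
import math
from collections import Counter

def train_nb(docs):
    """docs: list of (text, label) pairs. Returns counts, totals, priors, vocab."""
    counts, totals, priors = {}, Counter(), Counter()
    for text, label in docs:
        priors[label] += 1
        words = text.lower().split()
        counts.setdefault(label, Counter()).update(words)
        totals[label] += len(words)
    vocab = {w for c in counts.values() for w in c}
    return counts, totals, priors, vocab

def classify(text, model):
    """Pick the label with the highest log-posterior (Laplace-smoothed)."""
    counts, totals, priors, vocab = model
    n = sum(priors.values())
    best, best_lp = None, -math.inf
    for label in priors:
        lp = math.log(priors[label] / n)
        for w in text.lower().split():
            lp += math.log((counts[label][w] + 1) / (totals[label] + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Invented toy training data for the accept/reject decision.
model = train_nb([
    ("buy now free prize click", "reject"),
    ("free money click here", "reject"),
    ("lecture on signal processing", "accept"),
    ("tutorial about machine learning", "accept"),
])
```

The same bag-of-words features could equally be fed to logistic regression, the other classifier the abstract compares.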
APA, Harvard, Vancouver, ISO, and other styles
9

Gernsbacher, Morton Ann. "Video Captions Benefit Everyone." Policy Insights from the Behavioral and Brain Sciences 2, no. 1 (October 2015): 195–202. http://dx.doi.org/10.1177/2372732215602130.

Full text
Abstract:
Video captions, also known as same-language subtitles, benefit everyone who watches videos (children, adolescents, college students, and adults). More than 100 empirical studies document that captioning a video improves comprehension of, attention to, and memory for the video. Captions are particularly beneficial for persons watching videos in their non-native language, for children and adults learning to read, and for persons who are D/deaf or hard of hearing. However, despite U.S. laws, which require captioning in most workplace and educational contexts, many video audiences and video creators are naïve about the legal mandate to caption, much less the empirical benefit of captions.
APA, Harvard, Vancouver, ISO, and other styles
10

Dilawari, Aniqa, Muhammad Usman Ghani Khan, Yasser D. Al-Otaibi, Zahoor-ur Rehman, Atta-ur Rahman, and Yunyoung Nam. "Natural Language Description of Videos for Smart Surveillance." Applied Sciences 11, no. 9 (April 21, 2021): 3730. http://dx.doi.org/10.3390/app11093730.

Full text
Abstract:
After the September 11 attacks, security and surveillance measures have changed across the globe. Now, surveillance cameras are installed almost everywhere to monitor video footage. Though quite handy, these cameras produce videos in a massive size and volume. The major challenge faced by security agencies is the effort of analyzing the surveillance video data collected and generated daily. Problems related to these videos are twofold: (1) understanding the contents of video streams, and (2) conversion of the video contents to condensed formats, such as textual interpretations and summaries, to save storage space. In this paper, we have proposed a video description framework on a surveillance dataset. This framework is based on the multitask learning of high-level features (HLFs) using a convolutional neural network (CNN) and natural language generation (NLG) through bidirectional recurrent networks. For each specific task, a parallel pipeline is derived from the base visual geometry group (VGG)-16 model. Tasks include scene recognition, action recognition, object recognition and human face specific feature recognition. Experimental results on the TRECViD, UET Video Surveillance (UETVS) and AGRIINTRUSION datasets depict that the model outperforms state-of-the-art methods by a METEOR (Metric for Evaluation of Translation with Explicit ORdering) score of 33.9%, 34.3%, and 31.2%, respectively. Our results show that our framework has distinct advantages over traditional rule-based models for the recognition and generation of natural language descriptions.
APA, Harvard, Vancouver, ISO, and other styles
11

Perez, Maribel Montero, and Michael P. H. Rodgers. "Video and language learning." Language Learning Journal 47, no. 4 (July 29, 2019): 403–6. http://dx.doi.org/10.1080/09571736.2019.1629099.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Zettersten, Arne. "Video in language teaching." System 14, no. 1 (January 1986): 96–97. http://dx.doi.org/10.1016/0346-251x(86)90056-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Gillespie, Junetta, and Jack Lonergan. "Video in Language Teaching." Modern Language Journal 70, no. 2 (1986): 167. http://dx.doi.org/10.2307/327326.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

MacWilliam, I. "Video and language comprehension." ELT Journal 40, no. 2 (April 1, 1986): 131–35. http://dx.doi.org/10.1093/elt/40.2.131.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Dolores, Maria, and Jorge Mañana-Rodriguez. "Exploring Engagement in Online Videos for Language Learning through YouTube’s Learning Analytics." EDEN Conference Proceedings, no. 1 (September 21, 2021): 49–58. http://dx.doi.org/10.38069/edenconf-2021-ac0005.

Full text
Abstract:
Until a few years ago, video analytics were not accessible to learning stakeholders, mainly because online video platforms did not share the users’ interactions on the system with stakeholders. However, this scenario has changed, and currently YouTube, the world’s largest media sharing site, offers these data. YouTube is also the main tool for transmitting audio-visual content in Language MOOCs (massive open online courses), and its video engagement data can be monitored through the YouTube Studio channel, which provides free and open access to video analytics. In this paper we present our research based on the analysis of viewers’ engagement with 35 videos of the Language MOOC entitled Alemán para hispanohablantes: basic principles (German for Spanish-speakers). The data provided by the YouTube Studio Learning Analytics platform has enabled new insights related to participants’ watching of these videos in Language MOOCs (LMOOCs). The results of our study provide pedagogical implications for Foreign Language instructors concerning the use of videos in language learning.
APA, Harvard, Vancouver, ISO, and other styles
16

Nauvalia, Nurin, and Ikwan Setiawan. "Peran media “Tik Tok” dalam memperkenalkan budaya Bahasa Indonesia." Satwika : Kajian Ilmu Budaya dan Perubahan Sosial 6, no. 1 (April 28, 2022): 126–38. http://dx.doi.org/10.22219/satwika.v6i1.20409.

Full text
Abstract:
Efforts to preserve culture must start from regional languages, through which cultural values can be shared and communicated among members of a community. This paper aims to identify the varieties of regional language that refer to Indonesian terms in "Tik Tok" videos, and the role of that medium in introducing regional language varieties as a form of national cultural expression. The study uses the perspective of new media to discuss this issue and employs a qualitative research method. The data source is primary data in the form of a TikTok video entitled "Compilation of regional languages". Data were collected through documentation techniques, namely identifying videos of various regional languages on TikTok, and analyzed using reduction techniques. The results show that the TikTok videos collected by the researchers contain language varieties from the Javanese, Sundanese, Solo, Minang, Banyumas, and Manado regions. TikTok videos featuring regional language varieties play a role in introducing Indonesian language culture, helping Indonesia's diverse languages become popular, passed on or imitated, and accepted across all layers of society. TikTok videos have made Indonesian culture, namely the Indonesian language, popular and accepted by people both at home and abroad.
APA, Harvard, Vancouver, ISO, and other styles
17

Holla, Meghana, and Ismini Lourentzou. "Commonsense for Zero-Shot Natural Language Video Localization." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 2166–74. http://dx.doi.org/10.1609/aaai.v38i3.27989.

Full text
Abstract:
Zero-shot Natural Language-Video Localization (NLVL) methods have exhibited promising results in training NLVL models exclusively with raw video data by dynamically generating video segments and pseudo-query annotations. However, existing pseudo-queries often lack grounding in the source video, resulting in unstructured and disjointed content. In this paper, we investigate the effectiveness of commonsense reasoning in zero-shot NLVL. Specifically, we present CORONET, a zero-shot NLVL framework that leverages commonsense to bridge the gap between videos and generated pseudo-queries via a commonsense enhancement module. CORONET employs Graph Convolution Networks (GCN) to encode commonsense information extracted from a knowledge graph, conditioned on the video, and cross-attention mechanisms to enhance the encoded video and pseudo-query representations prior to localization. Through empirical evaluations on two benchmark datasets, we demonstrate that CORONET surpasses both zero-shot and weakly supervised baselines, achieving improvements up to 32.13% across various recall thresholds and up to 6.33% in mIoU. These results underscore the significance of leveraging commonsense reasoning for zero-shot NLVL.
APA, Harvard, Vancouver, ISO, and other styles
18

Li, Huiwen. "The Language of New Media Journalism on Short Video Sharing Websites." Academic Journal of Management and Social Sciences 3, no. 1 (June 6, 2023): 33–41. http://dx.doi.org/10.54097/ajmss.v3i1.9527.

Full text
Abstract:
In recent years, with the rapid expansion of integrated media and the wide-scale popularization of the mobile network, short video technology and its applications have attracted great attention. To keep pace with the development of online news, mainstream media outlets have started joining many short video platforms. To a certain extent, mainstream media short video news is an important product of the internet and digital media era. The short video has become an emerging communication mode shared in real time on social platforms, which can integrate text, dubbing, video, and other elements to achieve information transmission while meeting the perceptual needs of users through more intuitive and three-dimensional communication. Short videos cater to the fragmented reading habits of the public, achieving comprehensive transmission of content within a short time. This paper analyzes the characteristics of short video news in the era of rapid new media development, taking the short video news released by the official Tik Tok account of Xinhua News Agency as an example. The language of short video news differs from that of traditional mainstream media news, and the influence of combining official media news with currently popular short video transmission is discussed. The transmission power of official media news could be optimized and improved through the use of short news videos.
APA, Harvard, Vancouver, ISO, and other styles
19

Díaz-Arias, Rafael. "Video in Cyberspace: Usage and Language." Comunicar 17, no. 33 (October 1, 2009): 63–71. http://dx.doi.org/10.3916/c33-2009-02-006.

Full text
Abstract:
Video is growing faster than any other content in cyberspace. Video clips jumping between screens represent a narrative return to the origins of cinema. In cyberspace, videos can be cybermovies, cybertelevision, television on demand and cybervideo. The social uses of video in cyberspace overlap with information in a process of horizontal communication of immense potential, but with the risk of fragmenting the public sphere. Video is a factor of globalization and a vector of infotainment that colonizes both the media and cyberspace. Hypermedia language fragments the expressive elements of classical audiovisual language. It is necessary to explore new ways of interactive audiovisual language and investigate how video links the media to cyberspace in a new audiovisual ecosystem.
APA, Harvard, Vancouver, ISO, and other styles
20

Sumon, Md Shaheenur Islam, Muttakee Bin Ali, Samiul Bari, Ipshita Rahman Ohi, Mayisha Islam, and Syed Mahfuzur Rahman. "Sign Language Word Detection Using LRCN." IOP Conference Series: Materials Science and Engineering 1305, no. 1 (April 1, 2024): 012023. http://dx.doi.org/10.1088/1757-899x/1305/1/012023.

Full text
Abstract:
Sign language is the most effective form of communication for deaf or hard-of-hearing people. Specialized training is required to understand sign language, so hearing people around them often cannot communicate with them effectively. The main objective of this study is to develop a streamlined deep learning model for sign language recognition using the 30 most prevalent words in everyday life. The dataset covers 30 ASL (American Sign Language) words and consists of custom-processed video sequences from 5 subjects, with 50 sample videos for each class. A CNN model is applied to video frames to extract spatial properties. Using the features acquired by the CNN, an LSTM model then predicts the action being performed in the video. We present and evaluate the results on two separate datasets, the Pose dataset and the Raw video dataset, trained with the Long-term Recurrent Convolutional Network (LRCN) approach. Finally, a test accuracy of 92.66% was reached for the raw dataset and 93.66% for the pose dataset.
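The LRCN recipe outlined here, per-frame spatial features fed into a recurrent classifier, can be sketched with toy stand-ins: mean pooling in place of a real CNN and a vanilla RNN in place of the LSTM. All shapes and weights below are illustrative assumptions, not the study's model.

```python
import numpy as np

rng = np.random.default_rng(42)

def frame_features(frame):
    """Toy stand-in for CNN spatial features: per-channel mean pooling."""
    return frame.mean(axis=(0, 1))  # (H, W, C) -> (C,)

def rnn_classify(video, W_x, W_h, W_out):
    """Vanilla RNN over per-frame features (stand-in for the LSTM)."""
    h = np.zeros(W_h.shape[0])
    for frame in video:
        h = np.tanh(W_x @ frame_features(frame) + W_h @ h)
    logits = W_out @ h
    e = np.exp(logits - logits.max())
    return e / e.sum()  # probabilities over the sign-word classes

# Illustrative shapes: 10 frames of 32x32 RGB, hidden size 16, 30 word classes.
video = rng.normal(size=(10, 32, 32, 3))
W_x = rng.normal(size=(16, 3)) * 0.1
W_h = rng.normal(size=(16, 16)) * 0.1
W_out = rng.normal(size=(30, 16)) * 0.1
probs = rnn_classify(video, W_x, W_h, W_out)
```

The point of the structure is the split of labour: the convolutional stage sees one frame at a time, while the recurrent stage accumulates evidence across the 10-frame sequence before the final class prediction.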
APA, Harvard, Vancouver, ISO, and other styles
21

Campana, Phillip J., and Rick Altman. "The Video Connection: Integrating Video into Language Teaching." Modern Language Journal 74, no. 1 (1990): 139. http://dx.doi.org/10.2307/327999.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Purwanti, Indah Tri, Eliwarti Eliwarti, Atni Prawati, and Rumiri Rotua Aruan. "Developing Educational Videos Containing Authentic Interaction Video Clips for Teaching Language Functions." EDUKATIF : JURNAL ILMU PENDIDIKAN 4, no. 4 (June 21, 2022): 5582–94. http://dx.doi.org/10.31004/edukatif.v4i4.3307.

Full text
Abstract:
Videos containing authentic interaction-based video clips play an important role in helping EFL learners study the target language. This study aimed to develop video lectures containing authentic interaction-based video clips to help students learn request and apology speech acts. The study employed Research and Development using the ADDIE model. The analysis phase showed that developing the videos was necessary. Experts' evaluations showed that the videos were acceptable, relevant, usable, and appropriate to a very high extent, and the students' evaluation revealed that the videos were valid and applicable to a high extent. Implementing the videos produced a significant increase in the students' ability, indicating that the content of the videos was effective. The products of this research can therefore be applied as lecture videos or instructional devices for teaching, learning, and independent study.
APA, Harvard, Vancouver, ISO, and other styles
23

Hadijah, Sitti, Shalawati Shalawati, Missi Tri Astuti, and Arini Nurul Hidayati. "Designing and Developing Video as an Instructional Media in English language Teaching Setting." Lectura : Jurnal Pendidikan 13, no. 2 (August 1, 2022): 192–205. http://dx.doi.org/10.31849/lectura.v13i2.10185.

Full text
Abstract:
Using video in educational contexts has been established by many scholars as an effective instructional medium to support the teaching and learning process. Numerous videos are available on social media platforms, allowing both teachers and students to access them easily as learning resources. However, teachers are also challenged to provide their own learning resources, such as videos aimed at delivering lessons in a more contextual and authentic way. In this regard, this study aims to describe the design and development process of creating a video as an instructional medium in an English language teaching context. The ADDIE (Analysis, Design, Development, Implementation, and Evaluation) framework was employed in designing and developing the video, with documentation, observation, and testing administered as data collection techniques. Furthermore, this paper depicts the researchers' experiences in designing and developing a video lesson as well as the survey findings. Implications of the findings for instructional design and recommendations for future research are also discussed.
APA, Harvard, Vancouver, ISO, and other styles
24

Bashmal, Laila, Yakoub Bazi, Mohamad Mahmoud Al Rahhal, Mansour Zuair, and Farid Melgani. "CapERA: Captioning Events in Aerial Videos." Remote Sensing 15, no. 8 (April 18, 2023): 2139. http://dx.doi.org/10.3390/rs15082139.

Full text
Abstract:
In this paper, we introduce the CapERA dataset, which upgrades the Event Recognition in Aerial Videos (ERA) dataset to aerial video captioning. The newly proposed dataset aims to advance visual–language-understanding tasks for UAV videos by providing each video with diverse textual descriptions. To build the dataset, 2864 aerial videos are manually annotated with a caption that includes information such as the main event, object, place, action, numbers, and time. More captions are automatically generated from the manual annotation to take into account as much as possible the variation in describing the same video. Furthermore, we propose a captioning model for the CapERA dataset to provide benchmark results for UAV video captioning. The proposed model is based on the encoder–decoder paradigm with two configurations to encode the video. The first configuration encodes the video frames independently by an image encoder. Then, a temporal attention module is added on top to consider the temporal dynamics between features derived from the video frames. In the second configuration, we directly encode the input video using a video encoder that employs factorized space–time attention to capture the dependencies within and between the frames. For generating captions, a language decoder is utilized to autoregressively produce the captions from the visual tokens. The experimental results under different evaluation criteria show the challenges of generating captions from aerial videos. We expect that the introduction of CapERA will open interesting new research avenues for integrating natural language processing (NLP) with UAV video understanding.
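The first configuration described in this abstract (encode frames independently, then weigh them with temporal attention) can be sketched in miniature. This is an illustrative NumPy sketch only: the dimensions, the random "frame features", and the single query vector are assumptions, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for per-frame outputs of an image encoder:
# 8 frames, each mapped to a 16-dimensional feature vector.
T, d = 8, 16
frame_feats = rng.normal(size=(T, d))

def temporal_attention(feats, query):
    """Scaled dot-product attention over the time axis."""
    scores = feats @ query / np.sqrt(feats.shape[1])
    w = np.exp(scores - scores.max())
    w = w / w.sum()              # softmax over the T frames
    return w @ feats             # attention-weighted video summary

query = rng.normal(size=d)       # e.g. a learned query or decoder state
video_vec = temporal_attention(frame_feats, query)
print(video_vec.shape)           # one vector summarizing the whole clip
```

A language decoder would then attend to such summaries (or to the full weighted frame sequence) to produce caption tokens autoregressively.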
APA, Harvard, Vancouver, ISO, and other styles
25

Dewi, Hestiana, Rahmat Permana, and Budi Hendrawan. "Content Analysis of Learning Videos on Indonesian Language Material in Elementary Schools on the Rumah Belajar Portal." Jurnal Pendidikan Amartha 3, no. 1 (May 1, 2024): 59–71. http://dx.doi.org/10.57235/jpa.v3i1.2183.

Full text
Abstract:
Learning videos are now widely available and have helped both students and teachers. The content of a learning video must match the needs of the teaching and learning process. This research was motivated by the researchers' concern about whether learning video content conforms to basic competencies, learning objectives, and integrated learning themes, including the presentation of Indonesian language material in elementary schools. The aim of this research is to determine the feasibility of the material composition, presentation structure, and language aspects of Indonesian language materials for elementary schools in the learning videos on the Rumah Belajar Portal platform. The researchers tested feasibility on these three aspects: material composition, presentation structure, and language. The content of these videos needs to be examined to determine whether their presentation of elementary school Indonesian language material is appropriate, given that the platform has contributed many learning videos to Indonesian education. This research uses a content analysis method with a qualitative approach.
APA, Harvard, Vancouver, ISO, and other styles
26

Urazbay, Karina Rakhimzhankyzy, and Aigul Yesengeldievna Niyazova. "Authentic video materials in the practical lessons of English." Bulletin of Toraighyrov University. Pedagogics series, no. 2,2021 (July 12, 2021): 30–36. http://dx.doi.org/10.48081/fqux3848.

Full text
Abstract:
The fast development of technology has brought numerous innovations to education, especially the teaching of languages. In addition to textbooks and other activities, foreign language teachers use a variety of audiovisual tools to create successful classrooms. This article explores the purpose of using video in the English as a foreign language classroom and discusses the benefits of using authentic video materials when teaching a foreign language. The features of working with authentic video materials are also considered. According to the author, authentic video materials allow students to improve their listening comprehension skills and enter into discussion. Special attention is paid to the selection of video materials that are interesting, understandable, and relevant to the modern reality of the foreign language society. The article emphasizes the broad possibilities of Internet resources. Authentic video materials stimulate interest and expand students' knowledge of the linguistic characteristics of the target language. Their use in teaching a foreign language opens up great opportunities for teachers and students to master the language.
APA, Harvard, Vancouver, ISO, and other styles
27

Sukardiyono, Totok, Muhammad Irfan Luthfi, and Nisa Dwi Septiyanti. "Breaking Down Computer Networking Instructional Videos: Automatic Summarization with Video Attributes and Language Models." Elinvo (Electronics, Informatics, and Vocational Education) 8, no. 1 (June 6, 2023): 26–37. http://dx.doi.org/10.21831/elinvo.v8i1.60741.

Full text
Abstract:
Instructional videos have become a popular tool for teaching complex topics in computer networking. However, these videos can often be lengthy and time-consuming, making it difficult for learners to obtain the key information they need. In this study, we propose an approach that leverages automatic summarization and language models to generate concise and informative summaries of instructional videos. To enhance the performance of the summarization algorithm, we also incorporate video attributes that provide contextual information about the video content. Using a dataset of computer networking tutorials, we evaluate the effectiveness of the proposed method and show that it significantly improves the quality of the video summaries generated. Our study highlights the potential of using language models in automatic summarization and suggests that incorporating video attributes can further enhance the performance of these models. These findings have important implications for the development of effective instructional videos in computer networking and can be extended to other domains as well.
APA, Harvard, Vancouver, ISO, and other styles
28

KLYMENKO, ANATOLII, NATALIIA ZAKORDONETS, and INNA OBIKHOD. "THE USE OF VIDEO MATERIALS IN FLT: A CASE STUDY AT FOREIGN LANGUAGES DEPARTMENT OF TNPU." Scientific Issues of Ternopil Volodymyr Hnatiuk National Pedagogical University. Series: pedagogy 1, no. 1 (July 11, 2023): 133–40. http://dx.doi.org/10.25128/2415-3605.23.1.17.

Full text
Abstract:
With the increasing use of innovative approaches to foreign language teaching (FLT), video materials have become more accessible and easier to incorporate into the process of language acquisition. Additionally, the COVID-19 pandemic has accelerated the use of online learning, and video materials have become an important tool for distance language teaching. The article deals with theoretical and practical aspects of the use of video materials in language teaching and learning. In the course of the research, surveys were conducted to determine the range of purposes and the most common uses of video resources. Examples of how technology has been used to support the use of video materials in language teaching and learning are provided, and the potential benefits and challenges are outlined. It is argued that video materials promote language learners' motivation and vocabulary growth, providing an efficient and effective way to learn real-life language. The most common ways of applying video materials in individual work are considered, such as listening and comprehension practice, vocabulary building, grammar practice, pronunciation practice, cultural immersion, independent research, and collaborative projects.
APA, Harvard, Vancouver, ISO, and other styles
29

Tri Darma, Fanji Nugraha. "Motion Graphics About Successful English Presentation For Vocational Students." JOURNAL OF APPLIED MULTIMEDIA AND NETWORKING 6, no. 1 (July 7, 2022): 44–55. http://dx.doi.org/10.30871/jamn.v6i1.4168.

Full text
Abstract:
Foreign languages matter in this day and age because language is a key instrument of communication. To overcome the language differences among countries, English is used as an international language: it is needed for communication and plays a very important role in establishing relations with other nations in this era. Vocational students should know how to give presentations in English, but in reality many students are still unable to use English properly. This study aims to produce a Motion Graphics video about successful English presentations as an alternative learning medium for vocational students. The resulting video was produced in three stages: pre-production, production, and post-production. The software used for making the Motion Graphics video was Adobe Illustrator, Adobe After Effects, and Adobe Premiere. The final product is a video in .mp4 format with a duration of 00:07:30.
APA, Harvard, Vancouver, ISO, and other styles
30

Krishnamoorthy, Niveda, Girish Malkarnenkar, Raymond Mooney, Kate Saenko, and Sergio Guadarrama. "Generating Natural-Language Video Descriptions Using Text-Mined Knowledge." Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 1 (June 30, 2013): 541–47. http://dx.doi.org/10.1609/aaai.v27i1.8679.

Full text
Abstract:
We present a holistic data-driven technique that generates natural-language descriptions for videos. We combine the output of state-of-the-art object and activity detectors with "real-world" knowledge to select the most probable subject-verb-object triplet for describing a video. We show that this knowledge, automatically mined from web-scale text corpora, enhances the triplet selection algorithm by providing it contextual information and leads to a four-fold increase in activity identification. Unlike previous methods, our approach can annotate arbitrary videos without requiring the expensive collection and annotation of a similar training video corpus. We evaluate our technique against a baseline that does not use text-mined knowledge and show that humans prefer our descriptions 61% of the time.
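The core idea of this abstract, combining detector confidences with text-mined plausibility to rank subject-verb-object triplets, can be sketched as follows. All labels, scores, and corpus counts below are invented for illustration, and the blending weight `alpha` is an assumption rather than the paper's actual scoring function.

```python
from itertools import product

# Hypothetical detector outputs: candidate labels with confidence scores.
subjects = {"person": 0.9, "dog": 0.4}
verbs = {"ride": 0.7, "walk": 0.5}
objects_ = {"bicycle": 0.8, "street": 0.3}

# Co-occurrence counts as they might be mined from a web-scale text corpus.
corpus_counts = {
    ("person", "ride", "bicycle"): 1200,
    ("person", "walk", "street"): 900,
    ("dog", "walk", "street"): 350,
}
total = sum(corpus_counts.values())

def score(triplet, alpha=0.5):
    """Blend visual detector confidence with text-mined plausibility."""
    s, v, o = triplet
    vision = subjects[s] * verbs[v] * objects_[o]
    language = corpus_counts.get(triplet, 0) / total
    return alpha * vision + (1 - alpha) * language

best = max(product(subjects, verbs, objects_), key=score)
print(best)  # → ('person', 'ride', 'bicycle')
```

The language term keeps the system from emitting visually plausible but nonsensical triplets (e.g. "dog rides street") that a detector alone might score highly.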
APA, Harvard, Vancouver, ISO, and other styles
31

Romero, Lupe, Olga Torres-Hostench, and Stavroula Sokoli. "La subtitulación al servicio del aprendizaje de lenguas: el entorno LvS." Babel. Revue internationale de la traduction / International Journal of Translation 57, no. 3 (November 10, 2011): 305–23. http://dx.doi.org/10.1075/babel.57.3.04rom.

Full text
Abstract:
LvS (Learning via Subtitling) is a subtitling tool for language teaching. The main objective of LvS is to provide educational material for the active learning of foreign languages through subtitled video clips. Using this tool and the specific activities created by the teacher, students can add subtitles to video clips, thus engaging in active writing and listening comprehension activities. LvS is a subtitle simulator for specific purposes. This paper presents the possibilities offered by an environment such as LvS for learning languages through subtitling and the effectiveness of using news videos in L2 teaching. It describes the different stages of an LvS activity for teaching Italian to translators.
APA, Harvard, Vancouver, ISO, and other styles
32

Nuryogawati, Firly. "THE IMPLEMENTATION OF TRANSLANGUAGING IN ENGLISH LANGUAGE LEARNING VIDEO." Journal of English Language Teaching and Literature (JELTL) 6, no. 2 (August 10, 2023): 176–89. http://dx.doi.org/10.47080/jeltl.v6i2.2797.

Full text
Abstract:
English is a foreign language that helps people connect and share information across media, including in the education sector. Many Indonesians try to learn the language through various media, predominantly domestic ones. In several of these media, translanguaging occurs in bilingual English language learning, particularly with Indonesian. In this study, the researcher captured the implementation of translanguaging in the instructor's explanations in English learning videos on a YouTube channel and classified the instances according to Iversen's (2019) theory. The researcher used a qualitative approach with content analysis and thematic data analysis to observe and describe the implementation of translanguaging in three English learning videos on the GIA Academy YouTube channel. The findings show that the teacher utilises translanguaging approaches that vary according to need. In the three videos, translanguaging is used to explain each brief visual of the PowerPoint material when the subchapter contains highly complex learning content. The researcher also provides feedback on English language teaching on Indonesia's social platforms.
APA, Harvard, Vancouver, ISO, and other styles
33

Ahmad Hafidh Ayatullah and Nanik Suciati. "TOPIC GROUPING BASED ON DESCRIPTION TEXT IN MICROSOFT RESEARCH VIDEO DESCRIPTION CORPUS DATA USING FASTTEXT, PCA AND K-MEANS CLUSTERING." Jurnal Informatika Polinema 9, no. 2 (February 27, 2023): 223–28. http://dx.doi.org/10.33795/jip.v9i2.1271.

Full text
Abstract:
This research groups the topics of the Microsoft Research Video Description Corpus (MRVDC) based on the text descriptions in its Indonesian language dataset. MRVDC is a video dataset developed by Microsoft Research that contains paraphrased event expressions in English and other languages. The resulting topic groups reveal patterns of similarity and interrelationship between the text descriptions of different videos, which is useful for topic-based video retrieval. The topic grouping process uses fastText for word embedding, PCA for feature reduction, and K-means for clustering. An experiment on 1959 videos with 43,753 text descriptions, varying the number of clusters k and running with and without PCA, shows that the optimal number of clusters is 180, with a silhouette coefficient of 0.123115.
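The embedding → PCA → K-means pipeline named in this abstract can be sketched in miniature. Here synthetic random vectors stand in for fastText sentence embeddings, and PCA and K-means are implemented directly with NumPy, so every dimension and parameter below is an illustrative assumption, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for fastText embeddings of video descriptions:
# two synthetic groups of 100-dim vectors.
X = np.vstack([
    rng.normal(0.0, 0.1, size=(50, 100)),
    rng.normal(1.0, 0.1, size=(50, 100)),
])

def pca(X, k):
    """Project centered data onto the top-k principal components via SVD."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def kmeans(X, k, iters=50):
    """Minimal Lloyd's algorithm; empty clusters keep their old center."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        centers = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
    return labels

Z = pca(X, k=10)          # reduce 100 dims to 10, as PCA does in the paper
labels = kmeans(Z, k=2)   # the paper searches k and picks it by silhouette
```

In the study itself, k is swept over a range and the silhouette coefficient of each clustering is compared to pick the optimum (k = 180 there).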
APA, Harvard, Vancouver, ISO, and other styles
34

Ho, Thanh Vy, and Thanh Thao Le. "The Effects of Video Materials on English-Major Students’ Learning: A Test of Hypotheses in the Vietnamese Context." FOSTER: Journal of English Language Teaching 3, no. 4 (January 24, 2023): 185–96. http://dx.doi.org/10.24256/foster-jelt.v3i4.114.

Full text
Abstract:
Video materials are used increasingly in English as a foreign language (EFL) classes since they are perceived to be effective. However, using video materials is challenging because it requires EFL teachers to judge the videos' suitability for their learners. As a result, this study was conducted qualitatively to listen to language learners' voices about using video materials in their English classes. Six hypotheses related to the use of video materials in the Vietnamese context were examined. Five focus-group interviews were organized to collect data from 25 English students at a tertiary institution in the Mekong Delta of Vietnam. The data revealed that Vietnamese students perceived using video materials to be effective: animated videos with appealing sound helped improve their learning motivation. Because video materials exist on various online platforms, the students could also learn outside the classroom and increase their learner autonomy. However, to integrate video materials into classroom activities, English teachers need sufficient ability to choose videos appropriate for their learners. The study therefore proposes several practical implications for educators who would like to use video materials in their classrooms.
APA, Harvard, Vancouver, ISO, and other styles
35

Mathews, Meera Treesa, Joyal Raphel, Joseph Shaju C, Steve Soney Varghese, and Paul J. Puthusserry. "Sign Language Recognition and Video Generation Using Deep Learning." Journal of Applied Science, Engineering, Technology and Management 1, no. 02 (December 2, 2023): 13–16. http://dx.doi.org/10.61779/jasetm.v1i2.4.

Full text
Abstract:
The proposed system aims to help hearing people understand the communication of speech-impaired individuals through hand gesture recognition and the generation of animated gestures. The system focuses on recognizing different hand gestures and converting them into information that is understandable by others. The YOLOv8 model, a state-of-the-art object detection algorithm, is employed to detect and classify sign language gestures. Sign language video generation can act as a guide for anyone learning sign language by providing expressive sign language videos, using avatars that translate user input into sign language videos. The CWASA package and SiGML files are used for this process. The project contributes to the advancement of assistive technologies for the hearing-impaired community, offering innovative solutions for sign language recognition and video generation.
APA, Harvard, Vancouver, ISO, and other styles
36

Nalibow, Kenneth L. "Contact: Russian Language Video Magazine." Slavic and East European Journal 42, no. 2 (1998): 363. http://dx.doi.org/10.2307/310046.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Little, D. G. "Video in english language teaching." System 14, no. 1 (January 1986): 93. http://dx.doi.org/10.1016/0346-251x(86)90054-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Nalibow, Kenneth L. "Contact: Russian Language Video Magazine." Slavic and East European Journal 35, no. 4 (1991): 596. http://dx.doi.org/10.2307/309270.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Kajornboon, Bhamani. "Video in the Language Class." PASAA 19, no. 1 (January 1989): 41–52. http://dx.doi.org/10.58837/chula.pasaa.19.1.4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Dira, Benito, and Paulus Kuswandono. "PERCEPTIONS OF ANIMATED EDUCATIONAL VIDEOS ON EFL LEARNERS’ FOREIGN LANGUAGE ANXIETY IN ELESP SANATA DHARMA UNIVERSITY." UC Journal: ELT, Linguistics and Literature Journal 5, no. 1 (May 20, 2024): 30–44. http://dx.doi.org/10.24071/uc.v5i1.8677.

Full text
Abstract:
Anxiety related to learning foreign languages is known as foreign language anxiety. Animated educational videos can serve as excellent instruments for facilitating courses visually. Although recent studies have discussed the benefits of using video in educational activities and similar subjects, few findings have focused on the impact on EFL learners' foreign language anxiety. To provide a thorough description of the phenomenon from the perspective of the research participants, the researcher used a qualitative approach, with convenience sampling (specifically comprehensive sampling) as the technique for collecting the sample. The participants were 12 students of the English Language Education Study Program (ELESP). Data were collected through an online survey using an open-ended questionnaire and an online interview. The findings revealed two results. First, animated educational videos could both increase and reduce anxiety, although for some learners they neither boosted nor decreased foreign language anxiety. Second, topic and video editing were the aspects that influenced the EFL learners.
APA, Harvard, Vancouver, ISO, and other styles
41

Thakur, Nirmalya, Shuqi Cui, Victoria Knieling, Karam Khanna, and Mingchen Shao. "Investigation of the Misinformation about COVID-19 on YouTube Using Topic Modeling, Sentiment Analysis, and Language Analysis." Computation 12, no. 2 (February 6, 2024): 28. http://dx.doi.org/10.3390/computation12020028.

Full text
Abstract:
The work presented in this paper makes multiple scientific contributions with a specific focus on the analysis of misinformation about COVID-19 on YouTube. First, the results of topic modeling performed on the video descriptions of YouTube videos containing misinformation about COVID-19 revealed four distinct themes or focus areas—Promotion and Outreach Efforts, Treatment for COVID-19, Conspiracy Theories Regarding COVID-19, and COVID-19 and Politics. Second, the results of topic-specific sentiment analysis revealed the sentiment associated with each of these themes. For the videos belonging to the theme of Promotion and Outreach Efforts, 45.8% were neutral, 39.8% were positive, and 14.4% were negative. For the videos belonging to the theme of Treatment for COVID-19, 38.113% were positive, 31.343% were neutral, and 30.544% were negative. For the videos belonging to the theme of Conspiracy Theories Regarding COVID-19, 46.9% were positive, 31.0% were neutral, and 22.1% were negative. For the videos belonging to the theme of COVID-19 and Politics, 35.70% were positive, 32.86% were negative, and 31.44% were neutral. Third, topic-specific language analysis was performed to detect the various languages in which the video descriptions for each topic were published on YouTube. This analysis revealed multiple novel insights. For instance, for all the themes, English and Spanish were the most widely used and second most widely used languages, respectively. Fourth, the patterns of sharing these videos on other social media channels, such as Facebook and Twitter, were also investigated. The results revealed that videos containing video descriptions in English were shared the highest number of times on Facebook and Twitter. Finally, correlation analysis was performed by taking into account multiple characteristics of these videos. 
The results revealed that the correlation between the length of the video title and the number of tweets and the correlation between the length of the video title and the number of Facebook posts were statistically significant.
APA, Harvard, Vancouver, ISO, and other styles
42

Reddy, Mr G. Sekhar, A. Sahithi, P. Harsha Vardhan, and P. Ushasri. "Conversion of Sign Language Video to Text and Speech." International Journal for Research in Applied Science and Engineering Technology 10, no. 5 (May 31, 2022): 159–64. http://dx.doi.org/10.22214/ijraset.2022.42078.

Full text
Abstract:
Sign Language Recognition (SLR) is a significant and promising technique to facilitate communication for hearing-impaired people. Here, we are dedicated to finding an efficient solution to the gesture recognition problem. This work develops a sign language (SL) recognition framework with deep neural networks that directly transcribes videos of SL signs to words. We propose a novel approach using video sequences, which contain both temporal and spatial features, and train two different models on these features. To learn the spatial features of the video sequences, we use a CNN (Convolutional Neural Network) model, trained on frames extracted from the training videos. We use an RNN (Recurrent Neural Network) to learn the temporal features: the trained CNN produces predictions or pool-layer outputs for the individual frames of each video, and this sequence of outputs is given to the RNN. Thus, we perform sign language translation: given an input video, the sign shown in it is recognized using the CNN and RNN and converted to text and speech. Keywords: CNN (Convolutional Neural Network), RNN (Recurrent Neural Network), SLR (Sign Language Recognition), SL (Sign Language).
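The two-stage pipeline described in this abstract (a CNN for per-frame spatial features, an RNN over the resulting sequence) can be sketched with NumPy. Everything here is an untrained, illustrative stand-in: the "CNN" is a fixed projection, the RNN weights are random, and all dimensions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def cnn_features(frame):
    """Stand-in for a trained CNN: map one frame to a 32-dim feature vector
    (in the paper, this would be per-frame predictions or pool-layer outputs)."""
    W = np.linspace(-1, 1, frame.size * 32).reshape(frame.size, 32)
    return np.tanh(frame.ravel() @ W)

class TinyRNN:
    """Minimal vanilla RNN consuming the frame-feature sequence and emitting
    class probabilities over the sign vocabulary (weights are untrained)."""
    def __init__(self, in_dim, hidden, n_classes):
        self.Wx = rng.normal(0, 0.1, (in_dim, hidden))
        self.Wh = rng.normal(0, 0.1, (hidden, hidden))
        self.Wo = rng.normal(0, 0.1, (hidden, n_classes))

    def forward(self, seq):
        h = np.zeros(self.Wh.shape[0])
        for x in seq:                       # step through the temporal features
            h = np.tanh(x @ self.Wx + h @ self.Wh)
        logits = h @ self.Wo
        e = np.exp(logits - logits.max())
        return e / e.sum()                  # softmax over sign classes

video = rng.random((16, 8, 8))              # 16 tiny 8x8 frames
features = [cnn_features(f) for f in video]   # spatial stage (CNN)
probs = TinyRNN(32, 64, n_classes=10).forward(features)  # temporal stage (RNN)
print(probs.argmax())                       # index of the most probable sign
```

In the actual system, the predicted class would then be mapped to a word and passed to a text-to-speech component.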
APA, Harvard, Vancouver, ISO, and other styles
43

Sooryah, N., and Dr K. R. Soundarya. "Live Captioning for Live Lectures – An Initiative to Enhance Language Acquisition in Second Language Learners, through Mobile Learning." Webology 17, no. 2 (December 21, 2020): 238–43. http://dx.doi.org/10.14704/web/v17i2/web17027.

Full text
Abstract:
The world is networked through the Internet today, and various mobile applications help people in different ways depending on their purpose. Due to the pandemic lockdown, classes are now conducted online through modes such as video lectures and video conferencing, and beyond the imposed school environment, students can create their own environment for studying through online classes. While established online courses already provide study material and subtitles, live classes are hardest for undergraduate students who are just beginning to explore online education. Many students find it difficult to follow lectures through video conferencing because classes cluster students with different proficiency levels. In a classroom, the blackboard or a PowerPoint presentation gives students visual aids that help them grasp the subject even when listening to the teacher is difficult. The purpose of this study is to propose the development of a mobile application that reduces the difficulties of listening to lectures online by providing captions during live video lectures, helping students comprehend the subject better. The study is based on a survey of 100 undergraduate students from an institution in India about the hardships and hurdles of learning through online lectures. The analysis revealed a preference for captioned video lectures, highlighting the difficulties of non-native speakers of English attending video lectures and supporting the proposed mobile application that provides captions during live lectures.
APA, Harvard, Vancouver, ISO, and other styles
44

Li, Jialu, Aishwarya Padmakumar, Gaurav Sukhatme, and Mohit Bansal. "VLN-Video: Utilizing Driving Videos for Outdoor Vision-and-Language Navigation." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 17 (March 24, 2024): 18517–26. http://dx.doi.org/10.1609/aaai.v38i17.29813.

Full text
Abstract:
Outdoor Vision-and-Language Navigation (VLN) requires an agent to navigate through realistic 3D outdoor environments based on natural language instructions. The performance of existing VLN methods is limited by insufficient diversity in navigation environments and limited training data. To address these issues, we propose VLN-Video, which utilizes the diverse outdoor environments present in driving videos in multiple cities in the U.S. augmented with automatically generated navigation instructions and actions to improve outdoor VLN performance. VLN-Video combines the best of intuitive classical approaches and modern deep learning techniques, using template infilling to generate grounded non-repetitive navigation instructions, combined with an image rotation similarity based navigation action predictor to obtain VLN style data from driving videos for pretraining deep learning VLN models. We pre-train the model on the Touchdown dataset and our video-augmented dataset created from driving videos with three proxy tasks: Masked Language Modeling, Instruction and Trajectory Matching, and Next Action Prediction, so as to learn temporally-aware and visually-aligned instruction representations. The learned instruction representation is adapted to the state-of-the-art navigation agent when fine-tuning on the Touchdown dataset. Empirical results demonstrate that VLN-Video significantly outperforms previous state-of-the-art models by 2.1% in task completion rate, achieving a new state-of-the-art on the Touchdown dataset.
APA, Harvard, Vancouver, ISO, and other styles
45

Rooney, Kevin. "The Impact of Keyword Caption Ratio on Foreign Language Listening Comprehension." International Journal of Computer-Assisted Language Learning and Teaching 4, no. 2 (April 2014): 11–28. http://dx.doi.org/10.4018/ijcallt.2014040102.

Full text
Abstract:
The purpose of this study was to investigate the impact of three keyword caption modes on the listening comprehension of Arab learners of English as a foreign language (N = 90) while viewing authentic video clips. The keyword caption modes contained approximately 10%, 30% or 50% of the words in the video scripts. The participants watched three different video clips from three science videos, each of which contained one of the three keyword caption modes. Each participant experienced all three modes and the order in which they were viewed was counterbalanced. Their understanding of the content of the video clips was measured using comprehension tests consisting of gap fill and multiple-choice questions. The analysis of the listening comprehension test scores found evidence of an effect for the 50% keyword caption condition.
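The counterbalancing described in this abstract (every participant sees all three keyword-caption modes, with viewing order rotated across participants) can be sketched as follows. Rotating all six possible orders evenly is an assumption about how the counterbalancing was implemented, not a detail stated in the abstract.

```python
from collections import Counter
from itertools import permutations

# The three keyword-caption modes from the study
# (~10%, ~30%, or ~50% of the words in the video scripts).
modes = ["10%", "30%", "50%"]

# All six possible viewing orders of the three modes.
orders = list(permutations(modes))

def assign(participants):
    """Rotate the six orders evenly across participants."""
    return {p: orders[p % len(orders)] for p in range(participants)}

assignment = assign(90)              # N = 90 in the study
counts = Counter(assignment.values())
print(len(counts), counts[orders[0]])  # 6 orders, each used 15 times
```

Balancing orders this way prevents practice or fatigue effects from being confounded with any single caption mode.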
APA, Harvard, Vancouver, ISO, and other styles
46

Tunbridge, D. J. "Languages in Contact: Issues for Sign Language (Video) Bible translation." Bible Translator 51, no. 2 (April 2000): 220–24. http://dx.doi.org/10.1177/026009430005100204.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Dlanske, Kati. "Way better than the original!! Music video covers and language revitalisation." Apples - Journal of Applied Language Studies 10, no. 2 (September 6, 2016): 83–103. http://dx.doi.org/10.17011/apples/urn.201610254444.

Full text
Abstract:
The development of social media has opened up new spaces and genres for minoritised languages. As argued in previous research, access to new media spaces can contribute to the revitalisation of minoritised languages by generating new functions and values for them. Combining sociolinguistic and sociosemiotic approaches and bringing together data from four minority language contexts, Irish, Welsh, Sámi, and Corsican, this study addresses the potential of music video covers on YouTube to contribute to language revitalisation. The investigation suggests that music video covers in minority languages can have significance in language revitalisation in both language-ideological and practical terms. However, these effects are not just a matter of access to a new media space (YouTube) or a new genre (music video cover) but, in a much more complex manner, a question of practices of relocalisation and the semiotic resources used. As semiotic aggregates, music video covers can not only endow minority languages and their speakers with a new glamour, but also recirculate and reinforce old, stereotypical notions. While ‘new glamour’ may be desirable, the study points, on the other hand, to the need for critical interrogation of the terms on which minority languages are commodified in the context of contemporary media culture.
APA, Harvard, Vancouver, ISO, and other styles
48

Fridberg, Elisabeth, Edward Khokhlovich, and Andrey Vyshedskiy. "Watching Videos and Television Is Related to a Lower Development of Complex Language Comprehension in Young Children with Autism." Healthcare 9, no. 4 (April 6, 2021): 423. http://dx.doi.org/10.3390/healthcare9040423.

Full text
Abstract:
The effect of passive video and television watching duration on 2- to 5-year-old children with autism was investigated in the largest and the longest observational study to date. Parents assessed the development of 3227 children quarterly for three years. Longer video and television watching were associated with better development of expressive language but significantly impeded development of complex language comprehension. On an annualized basis, low TV users (low quartile: 40 min or less of videos and television per day) improved their language comprehension 1.4 times faster than high TV users (high quartile: 2 h or more of videos and television per day). This difference was statistically significant. At the same time, high TV users improved their expressive language 1.3 times faster than low TV users. This difference was not statistically significant. No effect of video and television watching duration on sociability, cognition, or health was detected.
APA, Harvard, Vancouver, ISO, and other styles
49

Adji, Waluyo Satrio, Muhammad Iqbal Ansari, Abdul Bashith, and Melani Albar. "ANALISIS KELAYAKAN VIDEO PEMBELAJARAN IPS JENJANG MI/SD DI PLATFORM YOUTUBE PADA MATERI KERAGAMAN AGAMA DI INDONESIA." Muallimuna : Jurnal Madrasah Ibtidaiyah 6, no. 2 (April 30, 2021): 57. http://dx.doi.org/10.31602/muallimuna.v6i2.4362.

Full text
Abstract:
The purpose of this study is to analyze and describe the feasibility of learning videos on the YouTube platform covering religious diversity in Indonesia. A qualitative approach with a documentary study was used; secondary data were collected from the YouTube platform using the keyword "IPS keragaman Agama di Indonesia" ("Social Studies on Religious Diversity in Indonesia"), and the videos were then analyzed against the Feasibility Standards of the National Education Standards Agency (BSNP), namely the aspects of content, presentation, language, and graphics. The researchers analyzed twelve social studies (IPS) learning videos from different channels. On average, the content aspect scored 67.5%, rated feasible; language 71.4%, rated feasible; presentation 73.3%, rated feasible; and graphics 71.5%, rated feasible. All four feasibility aspects indicate that the social studies learning videos are feasible to use as references in learning.
APA, Harvard, Vancouver, ISO, and other styles
50

Nabila, Jaroatin, Maulida Zahra Qutratu'ain, Chaerunnissa Chaerunnissa, Muhamad Diky Yulianto, and Asep Purwo Yudi Utomo. "ANALISIS TINDAK TUTUR DIREKTIF PADA DAFTAR PUTAR VIDEO PEMBELAJARAN BAHASA INDONESIA QUIPPER VIDEO." PRASASTI: Journal of Linguistics 8, no. 2 (November 23, 2023): 178. http://dx.doi.org/10.20961/prasasti.v8i2.67574.

Full text
Abstract:
<p>The study of directive speech acts underlies this research. Speech acts used to direct other people or to state what the speaker wants are known as directive speech acts. Directive speech acts serve a number of functions, including commands, requests, invitations, advice, criticism, and prohibitions. The objective of this study was to identify and describe the types of directive speech acts in the Quipper Video Indonesian Language Learning videos according to these functions. The study used a qualitative descriptive method, pointing out where there is evidence of directive speech acts and explaining them on the basis of the existing facts so as to present them as they are. The data were obtained from five video sources on YouTube. The results show 25 command functions, 6 request functions, 36 invitation functions, 15 advice functions, 0 criticism functions, and 1 prohibition function. This study aims to advance the theory of language development by examining the forms of directive acts in the learning materials studied. It is expected that the study will broaden readers' perspectives on directive speech acts as well as classify the functions of directive speech acts in the Quipper Video Indonesian Language Learning playlist.</p>
APA, Harvard, Vancouver, ISO, and other styles
