Ready-made bibliography on the topic "Video and language"

Create accurate references in APA, MLA, Chicago, Harvard, and many other styles

Select a source type:

See the lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Video and language".

Next to every work in the bibliography there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a publication as a ".pdf" file and read its abstract online, whenever such details are available in the record's metadata.

Journal articles on the topic "Video and language"

1

Joshi, Prof Indira. "Video Summarization for Marathi Language". INTERANTIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 05 (May 3, 2024): 1–5. http://dx.doi.org/10.55041/ijsrem32024.

Abstract:
The Video Summarization Platform using Python Flask is a comprehensive tool designed to summarize Marathi and English videos while providing summaries in Hindi, Marathi, and English. Leveraging machine learning and natural language processing (NLP) techniques, this platform offers a sophisticated solution for efficiently extracting key information from videos. The platform begins by transcribing the audio content of the video into text using automatic speech recognition (ASR) technology. This transcription process ensures that the platform can accurately analyze and summarize the video's content. Next, the text is translated into the target languages, namely Hindi, Marathi, and English, enabling users from diverse linguistic backgrounds to access the summarized content. To generate concise and informative summaries, an advanced NLP algorithm is applied. This algorithm analyzes the transcribed text to identify the most significant phrases, sentences, and concepts. By considering factors such as keyword frequency, semantic relevance, and context, the platform effectively distils the video's content into digestible summaries. Additionally, machine learning models are employed to classify the type of video content. These models are trained on diverse datasets encompassing various video genres and topics. By recognizing patterns and features within the video content, the platform can accurately categorize videos into distinct types, such as news, interviews, tutorials, or entertainment. The platform's user interface, powered by Python Flask, offers a seamless experience for users to upload videos, select their preferred language for summarization, and receive concise summaries in their chosen languages. The intuitive design ensures accessibility and ease of use, catering to both novice and advanced users. Overall, the Video Summarization Platform serves as a valuable resource for individuals seeking efficient ways to consume multimedia content. Whether for educational, informational, or entertainment purposes, this platform empowers users to access summarized video content in multiple languages, facilitated by cutting-edge machine learning and NLP technologies. Key words: transcription, Marathi-speaking users, Marathi YouTube videos, video content, summary, translation, Natural Language Toolkit (NLTK), content comprehension, user interaction data, past summaries, recommendation
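The summarization step is described above only in prose; as a rough sketch of frequency-based extractive summarization over an ASR transcript, the Python fragment below uses NLTK (which the abstract itself names). The function, thresholds and sample transcript are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of keyword-frequency extractive summarization, the kind of
# scoring the abstract describes; all names and values here are illustrative.
from collections import Counter

import nltk

nltk.download("punkt", quiet=True)
nltk.download("punkt_tab", quiet=True)   # needed by newer NLTK releases
nltk.download("stopwords", quiet=True)
from nltk.corpus import stopwords
from nltk.tokenize import sent_tokenize, word_tokenize


def summarize(text: str, max_sentences: int = 2) -> str:
    stop = set(stopwords.words("english"))
    words = [w.lower() for w in word_tokenize(text)
             if w.isalpha() and w.lower() not in stop]
    freq = Counter(words)
    # Score each sentence by the total frequency of its content words,
    # then keep the highest-scoring sentences (ordered by score, not position).
    scored = [(sum(freq[w.lower()] for w in word_tokenize(s)), s)
              for s in sent_tokenize(text)]
    return " ".join(s for _, s in sorted(scored, reverse=True)[:max_sentences])


transcript = ("The video explains water filtration. Filtration removes dirt "
              "from water. The video then shows a home filtration demo.")
print(summarize(transcript))
```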
2

Walther, Joseph B., German Neubaum, Leonie Rösner, Stephan Winter, and Nicole C. Krämer. "The Effect of Bilingual Congruence on the Persuasive Influence of Videos and Comments on YouTube". Journal of Language and Social Psychology 37, no. 3 (August 11, 2017): 310–29. http://dx.doi.org/10.1177/0261927x17724552.

Abstract:
Social media offer a global village in which user-generated comments from different groups and many languages appear. Extending the notion of prototypes in social identification theory, research examined the persuasive influence of comments supporting or deriding a public service announcement video on YouTube, where comments’ language either matched or differed from the videos’. Bilingual participants watched videos in English or Mandarin Chinese promoting water conservation, accompanied by comments in English or Mandarin that supported or derided the videos’ message. Results replicated previous findings about the valence of comments on public service announcement evaluations, overridden by an interaction between valence and language congruity: Comments in the same language as the videos’ affected readers’ evaluations of the video more than did comments in the language other than the videos’.
3

Shipman, Frank M., Ricardo Gutierrez-Osuna, and Caio D. D. Monteiro. "Identifying Sign Language Videos in Video Sharing Sites". ACM Transactions on Accessible Computing 5, no. 4 (March 2014): 1–14. http://dx.doi.org/10.1145/2579698.

4

Sanders, D. C., L. M. Reyes, D. J. Osborne, D. R. Ward, and D. E. Blackwelder. "USING A BILINGUAL GAPS AND HAND-WASHING DVD TO TRAIN FRESH PRODUCE FIELD AND PACKINGHOUSE WORKERS". HortScience 41, no. 3 (June 2006): 498D–498. http://dx.doi.org/10.21273/hortsci.41.3.498d.

Abstract:
The Southeastern Fresh Produce Food Safety Training Program has been training extension agents across the southeastern U.S. since 2000. This program has utilized a variety of methods including group case study to enhance learning and promote team work. Multistate trainings have fostered collaboration between states and institutions. One goal of the program was to produce a method for agents to provide training that was repeatable and easy to implement. As a result, two videos were produced for use in training field and packinghouse workers. These videos were an English language good agricultural practices (GAPs) video entitled Bridging the GAPs: From the Farm to the Table and a Spanish language hand-washing video entitled ¡Lave sus Manos: Por Los Niños! This program has been very effective, but has faced challenges due to language barriers. Many field and packinghouse crews were mixed in terms of language with some crew members speaking only English while others spoke only Spanish. As a result, Spanish speakers were unable to access the information in the good agricultural practices video while English speakers were unable to access information in the hand-washing video. The solution was to produce a bilingual training aid that included both sets of information and has been compiled into a DVD containing the footage of both of the original videos in both languages. For the Spanish version of the GAPs video and the English of the hand-washing video, the audio of the video's original language was left at a low sound level and the audio of the alternate language was added. These DVDs are currently being distributed to extension programs in all of the cooperating states with the aim of reaching growers who want to start a food safety plan.
5

Hiremath, Rashmi B., and Ramesh M. Kagalkar. "Sign Language Video Processing for Text Detection in Hindi Language". International Journal of Recent Contributions from Engineering, Science & IT (iJES) 4, no. 3 (October 26, 2016): 21. http://dx.doi.org/10.3991/ijes.v4i3.5973.

Abstract:
Sign language is a way of expressing yourself with your body language, where every bit of one's expressions, goals, or sentiments is conveyed by physical practices, for example, outward appearances, body stance, motions, eye movements, touch and the utilization of space. Non-verbal communication exists in both creatures and people, yet this article concentrates on elucidations of human non-verbal or sign language interpretation into Hindi textual expression. The proposed method of implementation utilizes image processing methods and synthetic intelligence strategies to achieve the goal of sign video recognition. To carry out the proposed task, the implementation uses image processing methods such as frame-analysis-based tracking, edge detection, wavelet transform, erosion, dilation, blur elimination and noise elimination on training videos. It also uses elliptical Fourier descriptors called SIFT for shape feature extraction and most important part analysis for feature set optimization and reduction. For result analysis, this paper uses different category videos such as signs for weeks, months, relations etc. The database of extracted outcomes is compared with the video fed to the system as an input of the signer by a trained unclear inference system.
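The abstract names SIFT features extracted from video frames but gives no code; the sketch below illustrates just that extraction step. OpenCV is my choice of library and the video path is a placeholder, so this is an assumption-laden illustration rather than the paper's implementation.

```python
# Sketch: SIFT keypoint/descriptor extraction per video frame with OpenCV.
# "sign_video.mp4" is a hypothetical input file.
import cv2

cap = cv2.VideoCapture("sign_video.mp4")
sift = cv2.SIFT_create()
frame_features = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    frame_features.append(descriptors)  # may be None for featureless frames
cap.release()
print(f"extracted descriptors for {len(frame_features)} frames")
```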
6

Anugerah, Rezza, Yohanes Gatot Sutapa Yuliana, and Dwi Riyanti. "THE POTENTIAL OF ENGLISH LEARNING VIDEOS IN FORM OF VLOG ON YOUTUBE FOR ELT MATERIAL WRITERS". Proceedings International Conference on Teaching and Education (ICoTE) 2, no. 2 (December 24, 2019): 224. http://dx.doi.org/10.26418/icote.v2i2.38232.

Abstract:
YouTube is the most popular video-sharing website, where millions of videos can be found online, especially videos for learning a language. Students can use YouTube to get many language learning videos to help them study the language. Not only can teachers find many videos to use as teaching materials in the classroom, they can also create their own English videos for their students or even for everyone. Designing an English learning video is an opportunity for teachers to make videos based on their teaching context. One type of content with a big opportunity to develop is the English learning video in the form of a vlog. Vlogs have a huge audience, especially among Indonesian learners. This research is aimed at analyzing the potential of English learning videos in the form of vlogs for ELT material writers.
7

Liu, Yuqi, Luhui Xu, Pengfei Xiong, and Qin Jin. "Token Mixing: Parameter-Efficient Transfer Learning from Image-Language to Video-Language". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 2 (June 26, 2023): 1781–89. http://dx.doi.org/10.1609/aaai.v37i2.25267.

Abstract:
Applying large scale pre-trained image-language model to video-language tasks has recently become a trend, which brings two challenges. One is how to effectively transfer knowledge from static images to dynamic videos, and the other is how to deal with the prohibitive cost of fully fine-tuning due to growing model size. Existing works that attempt to realize parameter-efficient image-language to video-language transfer learning can be categorized into two types: 1) appending a sequence of temporal transformer blocks after the 2D Vision Transformer (ViT), and 2) inserting a temporal block into the ViT architecture. While these two types of methods only require fine-tuning the newly added components, there are still many parameters to update, and they are only validated on a single video-language task. In this work, based on our analysis of the core ideas of different temporal modeling components in existing approaches, we propose a token mixing strategy to enable cross-frame interactions, which enables transferring from the pre-trained image-language model to video-language tasks through selecting and mixing a key set and a value set from the input video samples. As token mixing does not require the addition of any components or modules, we can directly partially fine-tune the pre-trained image-language model to achieve parameter-efficiency. We carry out extensive experiments to compare our proposed token mixing method with other parameter-efficient transfer learning methods. Our token mixing method outperforms other methods on both understanding tasks and generation tasks. Besides, our method achieves new records on multiple video-language tasks. The code is available at https://github.com/yuqi657/video_language_model.
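The authors link their actual code above; the PyTorch fragment below is only a schematic of the general idea, exchanging a subset of token positions between neighbouring frames so that a frozen image encoder sees cross-frame information. The shapes and the random selection rule are invented for illustration and do not reproduce the paper's key/value selection.

```python
# Schematic of cross-frame token mixing: swap some token positions of each
# frame with the corresponding tokens of the previous frame, with no new
# modules added. Shapes and the selection rule are toy values.
import torch

B, T, N, D = 2, 8, 197, 768        # batch, frames, tokens per frame, dim
tokens = torch.randn(B, T, N, D)
num_mix = 16                       # how many token positions to exchange

src = torch.roll(tokens, shifts=1, dims=1)   # tokens of the previous frame
idx = torch.randperm(N)[:num_mix]            # positions chosen for mixing
mixed = tokens.clone()
mixed[:, :, idx, :] = src[:, :, idx, :]

# `mixed` would then be fed frame by frame to the pre-trained image-language
# encoder, which is only partially fine-tuned.
print(mixed.shape)  # torch.Size([2, 8, 197, 768])
```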
8

Moon, Nazmun Nessa, Imrus Salehin, Masuma Parvin, Md Mehedi Hasan, Iftakhar Mohammad Talha, Susanta Chandra Debnath, Fernaz Narin Nur, and Mohd Saifuzzaman. "Natural language processing based advanced method of unnecessary video detection". International Journal of Electrical and Computer Engineering (IJECE) 11, no. 6 (December 1, 2021): 5411. http://dx.doi.org/10.11591/ijece.v11i6.pp5411-5419.

Abstract:
In this study we have described the process of identifying unnecessary videos using an advanced combined method of natural language processing and machine learning. The system also includes a framework that contains analytics databases, which helps to find statistical accuracy and can detect, accept or reject unnecessary and unethical video content. In our video detection system, we extract text data from video content in two steps, first from video to MPEG-1 audio layer 3 (MP3) and then from MP3 to WAV format. We have used the text part of natural language processing to analyze and prepare the data set. We use both Naive Bayes and logistic regression classification algorithms in this detection system to determine the best accuracy for our system. In our research, our MP4 video data was converted to plain text data using advanced Python library functions. This brief study discusses the identification of unauthorized, unsocial, unnecessary, unfinished, and malicious videos when using oral video record data. By analyzing our data sets through this advanced model, we can decide which videos should be accepted or rejected for further action.
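The classification step (Naive Bayes versus logistic regression over text extracted from videos) can be sketched in a few lines of scikit-learn. The tiny inline dataset is invented; this mirrors the abstract's method choice, not the authors' code.

```python
# Hedged sketch: comparing the two classifiers the abstract names on toy
# transcript snippets (label 1 = unnecessary/unwanted, 0 = acceptable).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["buy now cheap pills", "lecture on signal processing",
         "win money fast", "tutorial about linear algebra"]
labels = [1, 0, 1, 0]

for clf in (MultinomialNB(), LogisticRegression()):
    model = make_pipeline(TfidfVectorizer(), clf)
    model.fit(texts, labels)
    print(type(clf).__name__, model.predict(["free money tutorial"]))
```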
9

Gernsbacher, Morton Ann. "Video Captions Benefit Everyone". Policy Insights from the Behavioral and Brain Sciences 2, no. 1 (October 2015): 195–202. http://dx.doi.org/10.1177/2372732215602130.

Abstract:
Video captions, also known as same-language subtitles, benefit everyone who watches videos (children, adolescents, college students, and adults). More than 100 empirical studies document that captioning a video improves comprehension of, attention to, and memory for the video. Captions are particularly beneficial for persons watching videos in their non-native language, for children and adults learning to read, and for persons who are D/deaf or hard of hearing. However, despite U.S. laws, which require captioning in most workplace and educational contexts, many video audiences and video creators are naïve about the legal mandate to caption, much less the empirical benefit of captions.
10

Dilawari, Aniqa, Muhammad Usman Ghani Khan, Yasser D. Al-Otaibi, Zahoor-ur Rehman, Atta-ur Rahman, and Yunyoung Nam. "Natural Language Description of Videos for Smart Surveillance". Applied Sciences 11, no. 9 (April 21, 2021): 3730. http://dx.doi.org/10.3390/app11093730.

Abstract:
After the September 11 attacks, security and surveillance measures have changed across the globe. Now, surveillance cameras are installed almost everywhere to monitor video footage. Though quite handy, these cameras produce videos in a massive size and volume. The major challenge faced by security agencies is the effort of analyzing the surveillance video data collected and generated daily. Problems related to these videos are twofold: (1) understanding the contents of video streams, and (2) conversion of the video contents to condensed formats, such as textual interpretations and summaries, to save storage space. In this paper, we have proposed a video description framework on a surveillance dataset. This framework is based on the multitask learning of high-level features (HLFs) using a convolutional neural network (CNN) and natural language generation (NLG) through bidirectional recurrent networks. For each specific task, a parallel pipeline is derived from the base visual geometry group (VGG)-16 model. Tasks include scene recognition, action recognition, object recognition and human face specific feature recognition. Experimental results on the TRECViD, UET Video Surveillance (UETVS) and AGRIINTRUSION datasets show that the model outperforms state-of-the-art methods by a METEOR (Metric for Evaluation of Translation with Explicit ORdering) score of 33.9%, 34.3%, and 31.2%, respectively. Our results show that our framework has distinct advantages over traditional rule-based models for the recognition and generation of natural language descriptions.
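The paper reports METEOR scores; for readers who want to reproduce the metric itself, it is available in NLTK as shown below. The sentence pair is illustrative, and the paper does not state which METEOR implementation it used.

```python
# Computing METEOR for one reference/hypothesis pair with NLTK.
import nltk
from nltk.translate.meteor_score import meteor_score

nltk.download("wordnet", quiet=True)  # METEOR's synonym matching uses WordNet

reference = "a man enters the restricted area at night".split()
hypothesis = "a person walks into the restricted zone at night".split()
print(round(meteor_score([reference], hypothesis), 3))
```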

Doctoral dissertations on the topic "Video and language"

1

Khan, Muhammad Usman Ghani. "Natural language descriptions for video streams". Thesis, University of Sheffield, 2012. http://etheses.whiterose.ac.uk/2789/.

Abstract:
This thesis is concerned with the automatic generation of natural language descriptions that can be used for video indexing, retrieval and summarization applications. It is a step ahead of keyword-based tagging as it captures relations between keywords associated with videos, thus clarifying the context between them. Initially, we prepare hand annotations consisting of descriptions for video segments crafted from a TREC Video dataset. Analysis of this data presents insights into human interests in video content. For machine-generated descriptions, conventional image processing techniques are applied to extract high-level features (HLFs) from individual video frames. A natural language description is then produced based on these HLFs. Although feature extraction processes are erroneous at various levels, approaches are explored to put them together for producing coherent descriptions. For scalability purposes, application of the framework to several different video genres is also discussed. For complete video sequences, a scheme to generate coherent and compact descriptions for video streams is presented which makes use of spatial and temporal relations between HLFs and individual frames respectively. Calculating the overlap between machine-generated and human-annotated descriptions shows that machine-generated descriptions capture context information and are in accordance with humans' viewing capabilities. Further, a task-based evaluation shows improvement in a video identification task as compared to keywords alone. Finally, the application of generated natural language descriptions for video scene classification is discussed.
2

Miech, Antoine. "Large-scale learning from video and natural language". Electronic Thesis or Diss., Université Paris sciences et lettres, 2020. http://www.theses.fr/2020UPSLE059.

Abstract:
The goal of this thesis is to build and train machine learning models capable of understanding the content of videos. Current video understanding approaches mainly rely on large-scale manually annotated video datasets for training. However, collecting and annotating such datasets is cumbersome, expensive and time-consuming. To address this issue, this thesis focuses on leveraging large amounts of readily available, but noisy annotations in the form of natural language. In particular, we exploit a diverse corpus of textual metadata such as movie scripts, web video titles and descriptions or automatically transcribed speech obtained from narrated videos. Training video models on such readily available textual data is challenging, as such annotation is often imprecise or wrong. In this thesis, we introduce learning approaches to deal with weak annotation and design specialized training objectives and neural network architectures.
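The abstract does not spell out the thesis's specific objectives, so the sketch below only illustrates the general family of contrastive objectives commonly used for learning from noisy video-text pairs: an InfoNCE-style loss over a similarity matrix. The embeddings are random stand-ins and the temperature is a conventional value, not the thesis's.

```python
# Generic InfoNCE-style contrastive loss for paired video/text embeddings;
# a sketch of the objective family, not the thesis's exact loss.
import torch
import torch.nn.functional as F

video = F.normalize(torch.randn(8, 256), dim=1)  # 8 clip embeddings (toy)
text = F.normalize(torch.randn(8, 256), dim=1)   # 8 caption embeddings (toy)

logits = video @ text.t() / 0.07                 # similarities / temperature
targets = torch.arange(8)                        # i-th clip matches i-th text
loss = (F.cross_entropy(logits, targets) +
        F.cross_entropy(logits.t(), targets)) / 2
print(loss.item())
```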
3

Zhou, Mingjie. "Deep networks for sign language video caption". HKBU Institutional Repository, 2020. https://repository.hkbu.edu.hk/etd_oa/848.

Abstract:
In the hearing-loss community, sign language is a primary tool for communicating, while there is a communication gap between hearing-loss people and people with normal hearing. Sign language is different from spoken language. It has its own vocabulary and grammar. Recent works concentrate on sign language video captioning, which consists of sign language recognition and sign language translation. Continuous sign language recognition, which can bridge the communication gap, is a challenging task because of the weakly supervised ordered annotations where no frame-level label is provided. To overcome this problem, connectionist temporal classification (CTC) is the most widely used method. However, CTC learning could perform badly if the extracted features are not good. For better feature extraction, this thesis presents the novel self-attention-based fully-inception (SAFI) networks for vision-based end-to-end continuous sign language recognition. Considering that the length of sign words differs from each other, we introduce the fully inception network with different receptive fields to extract dynamic clip-level features. To further boost the performance, the fully inception network with an auxiliary classifier is trained with aggregation cross entropy (ACE) loss. Then the encoder of self-attention networks as the global sequential feature extractor is used to model the clip-level features with CTC. The proposed model is optimized by jointly training with ACE on clip-level feature learning and CTC on global sequential feature learning in an end-to-end fashion. The best method in the baselines achieves 35.6% WER on the validation set and 34.5% WER on the test set. It employs a better decoding algorithm for generating pseudo labels to do the EM-like optimization to fine-tune the CNN module. In contrast, our approach focuses on better feature extraction for end-to-end learning. To alleviate overfitting on the limited dataset, we employ temporal elastic deformation to triple the real-world dataset RWTH-PHOENIX-Weather 2014. Experimental results on the real-world dataset RWTH-PHOENIX-Weather 2014 demonstrate the effectiveness of our approach, which achieves 31.7% WER on the validation set and 31.2% WER on the test set. Even though sign language recognition can, to some extent, help bridge the communication gap, it is still organized in sign language grammar, which is different from spoken language. Unlike sign language recognition, which recognizes sign gestures, sign language translation (SLT) converts sign language to a target spoken-language text which people with normal hearing commonly use in their daily life. To achieve this goal, this thesis provides an effective sign language translation approach which gains state-of-the-art performance on the largest real-life German sign language translation database, RWTH-PHOENIX-Weather 2014T. Besides, a direct end-to-end sign language translation approach yields promising results (an impressive gain from 9.94 to 13.75 BLEU on the validation set and from 9.58 to 14.07 BLEU on the test set) without intermediate recognition annotations. The comparative and promising experimental results show the feasibility of direct end-to-end SLT.
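For readers unfamiliar with the CTC objective the thesis builds on, here is a minimal, self-contained PyTorch example of the standard CTC loss; the tensor sizes are arbitrary toy values unrelated to the thesis's models.

```python
# Standard CTC loss on random toy data (class 0 is the blank symbol).
import torch
import torch.nn as nn

T, B, C = 50, 2, 20                      # time steps, batch, classes
log_probs = torch.randn(T, B, C).log_softmax(2)
targets = torch.randint(1, C, (B, 10), dtype=torch.long)
input_lengths = torch.full((B,), T, dtype=torch.long)
target_lengths = torch.full((B,), 10, dtype=torch.long)

ctc = nn.CTCLoss(blank=0)
print(ctc(log_probs, targets, input_lengths, target_lengths).item())
```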
4

Erozel, Guzen. "Natural Language Interface On A Video Data Model". Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/12606251/index.pdf.

Abstract:
Video databases and the retrieval of data from these databases have become popular in various business areas with the improvements in technology. As a kind of video database, video archive systems need user-friendly interfaces to retrieve video frames. In this thesis, an NLP-based user interface to a video database system is developed using a content-based spatio-temporal video data model. The data model is focused on the semantic content, which includes objects, activities, and spatial properties of objects. Spatio-temporal relationships between video objects and also trajectories of moving objects can be queried with this data model. In this video database system, the NL interface enables flexible querying. The queries, which are given as English sentences, are parsed using Link Parser. Not only exact matches but also similar objects and activities are returned from the database with the help of the conceptual ontology module, in order to return all related frames to the user. This module is implemented using a distance-based method of semantic similarity search on the semantic domain-independent ontology, WordNet. The semantic representations of the given queries are extracted from their syntactic structures using information extraction techniques. The extracted semantic representations are used to call the related parts of the underlying spatio-temporal video data model to calculate the results of the queries.
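The distance-based WordNet similarity idea mentioned above can be illustrated with NLTK in a few lines; the word pair is illustrative and not taken from the thesis.

```python
# Path-based semantic similarity between two WordNet synsets.
import nltk

nltk.download("wordnet", quiet=True)
from nltk.corpus import wordnet as wn

car = wn.synsets("car")[0]
truck = wn.synsets("truck")[0]
print(car.path_similarity(truck))  # closer to 1.0 = semantically closer
```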
5

Adam, Jameel. "Video annotation wiki for South African sign language". Thesis, University of the Western Cape, 2011. http://etd.uwc.ac.za/index.php?module=etd&action=viewtitle&id=gen8Srv25Nme4_1540_1304499135.

Abstract:

The SASL project at the University of the Western Cape aims at developing a fully automated translation system between English and South African Sign Language (SASL). Three important aspects of this system require SASL documentation and knowledge. These are: recognition of SASL from a video sequence, linguistic translation between SASL and English, and the rendering of SASL. Unfortunately, SASL documentation is a scarce resource and no official or complete documentation exists. This research focuses on creating an online collaborative video annotation knowledge management system for SASL, where various members of the community can upload SASL videos and annotate them in any of the sign language notation systems SignWriting, HamNoSys and/or Stokoe. As such, knowledge about SASL structure is pooled into a central and freely accessible knowledge base that can be used as required. The usability and performance of the system were evaluated. The usability of the system was graded by users on a rating scale from one to five for a specific set of tasks. The system was found to have an overall usability of 3.1, slightly better than average. The performance evaluation included load and stress tests which measured the system response time for a number of users for a specific set of tasks. It was found that the system is stable and can scale up to cater for an increasing user base by improving the underlying hardware.

6

Ou, Yingzhe, and 区颖哲. "Teaching Chinese as a second language through video". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2011. http://hub.hku.hk/bib/B48368714.

Abstract:
Under the guidance of scientific development, many schools and education institutions are currently encouraging the introduction of multimedia into the classroom, with video teaching as one of the methods of multimedia teaching. There have been many studies on the application of multimedia in the classroom, but most of them focus on subjects which need graphic analysis rather than on language teaching, and within second language teaching, research on Teaching English as a Second Language accounts for the majority of work in this field. This study aims to explore whether video teaching can help improve students' learning outcomes in Teaching Chinese as a Second Language, as well as enliven the classroom and improve students' learning motivation. Drawing on the related literature on the application of multimedia to language teaching and acquisition, the author designed an experiment suited to the teaching-practicum school, then used data collection, comparative analysis, questionnaires, interviews and class observation to analyze the question, and finally drew the following conclusions: 1. Students' learning outcomes differ under different capability requirements; they performed better on memory-oriented test items than on capability-oriented items, and neither correlates significantly with prior knowledge; 2. The effect of memory-oriented items in video teaching is worse than in non-video teaching, while the difference for capability-oriented items is not significant; 3. Video teaching can effectively improve students' learning motivation and concentration in class.
7

Addis, Pietro <1991>. "The Age of Video Games: Language and Narrative". Master's Degree Thesis, Università Ca' Foscari Venezia, 2017. http://hdl.handle.net/10579/10634.

Abstract:
The language of video games has acquired importance throughout the years, from its birth to its development, and has changed and modified its vocabulary, following the trends and the will of the gaming community, namely the creators and users of this language. Furthermore, the narrative medium of video games has broadened the definition of 'narrative', giving the term new significance and providing its users with a new type of experience, that of participated narration. However, the participatory narrative found in video games has not revolutionized or created something entirely new. Following previous studies on the two subjects, the scope of the dissertation is to analyse the main characteristics of Internet language, with a particular focus on the language of video games, and to discuss the elements which construct the narrative of a video game (i.e. how the narrative is constructed and what narrative elements are used). Moreover, the thesis will also provide examples of newly coined video-game terms and will discuss, in connection with the narrative, the issue of morality in video games.
8

Muir, Laura J. "Content-prioritised video coding for British Sign Language communication". Thesis, Robert Gordon University, 2007. http://hdl.handle.net/10059/177.

Abstract:
Video communication of British Sign Language (BSL) is important for remote interpersonal communication and for the equal provision of services for deaf people. However, the use of video telephony and video conferencing applications for BSL communication is limited by inadequate video quality. BSL is a highly structured, linguistically complete, natural language system that expresses vocabulary and grammar visually and spatially using a complex combination of facial expressions (such as eyebrow movements, eye blinks and mouth/lip shapes), hand gestures, body movements and finger-spelling that change in space and time. Accurate natural BSL communication places specific demands on visual media applications which must compress video image data for efficient transmission. Current video compression schemes apply methods to reduce statistical redundancy and perceptual irrelevance in video image data based on a general model of Human Visual System (HVS) sensitivities. This thesis presents novel video image coding methods developed to achieve the conflicting requirements for high image quality and efficient coding. Novel methods of prioritising visually important video image content for optimised video coding are developed to exploit the HVS spatial and temporal response mechanisms of BSL users (determined by Eye Movement Tracking) and the characteristics of BSL video image content. The methods implement an accurate model of HVS foveation, applied in the spatial and temporal domains, at the pre-processing stage of a current standard-based system (H.264). Comparison of the performance of the developed and standard coding systems, using methods of video quality evaluation developed for this thesis, demonstrates improved perceived quality at low bit rates. BSL users, broadcasters and service providers benefit from the perception of high quality video over a range of available transmission bandwidths. The research community benefits from a new approach to video coding optimisation and better understanding of the communication needs of deaf people.
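The thesis's eye-movement-derived foveation model is not reproduced here; the toy OpenCV sketch below only illustrates the general pre-processing idea of keeping an assumed fixation region sharp while low-pass filtering the periphery before standard encoding. The gaze point, fovea radius and blur strength are made-up values.

```python
# Toy foveated pre-filtering: blend a sharp frame with a blurred copy using
# a radial mask centred on an assumed gaze point (e.g. the signer's face).
import cv2
import numpy as np

frame = np.full((480, 640, 3), 127, np.uint8)          # stand-in video frame
cv2.putText(frame, "BSL", (260, 250),
            cv2.FONT_HERSHEY_SIMPLEX, 2, (255, 255, 255), 3)

gaze = (320, 240)                                      # assumed fixation point
yy, xx = np.mgrid[0:480, 0:640]
dist = np.sqrt((xx - gaze[0]) ** 2 + (yy - gaze[1]) ** 2)
mask = np.clip(dist / 200.0, 0, 1)[..., None]          # 0 at fovea, 1 outside

blurred = cv2.GaussianBlur(frame, (31, 31), 10)
foveated = (frame * (1 - mask) + blurred * mask).astype(np.uint8)
cv2.imwrite("foveated.png", foveated)
```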
9

Laveborn, Joel. "Video Game Vocabulary: The effect of video games on Swedish learners' word comprehension". Thesis, Karlstad University, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-5487.

Abstract:

Video games are very popular among children in the Western world. This study was done in order to investigate if video games had an effect on 49 Swedish students' comprehension of English words (grades 7-8). The investigation was based on questionnaire and word test data. The questionnaire aimed to measure with which frequency students were playing video games, and the word test aimed to measure their word comprehension in general. In addition, data from the word test were used to investigate how students explained the words. Depending on their explanations, students were categorized as either using a "video game approach" or a "dictionary approach" in their explanations.

The results showed a gender difference, both with regard to the frequency of playing and what types of games that were played. Playing video games seemed to increase the students' comprehension of English words, though there was no clear connection between the frequency with which students were playing video games and the choice of a dictionary or video game approach as an explanation.

10

Lopes, Solange Aparecida. "A descriptive study of the interaction behaviors in a language video program and in live elementary language classes using that video program". Diss., Virginia Tech, 1996. http://scholar.lib.vt.edu/theses/available/etd-10052007-143033/.


Books on the topic "Video and language"

1

Lonergan, Jack. Video in language learning. London: Linguaphone Institute, 1987.

2

Altman, Rick. The video connection: Integrating video into language teaching. Boston: Houghton Mifflin, 1989.

3

The video connection: Integrating video into language teaching. Boston: Houghton Mifflin, 1989.

4

Rhodes, Nancy C. Language by video: An overview of foreign language instructional videos for children. Washington, DC: Center for Applied Linguistics/Delta Systems, 2004.

5

Tomalin, Barry, ed. Video in action: Recipes for using video in language teaching. New York: Prentice Hall International, 1990.

6

Tomalin, Barry, ed. Video in action: Recipes for using video in language teaching. New York, N.Y.: Prentice Hall, 1990.

7

Greenall, Simon. Reward video. Oxford: Heinemann, 1998.

8

Hutchinson, Tom. Project video. Oxford: Oxford University Press, 1991.

9

Hutchinson, Tom. Project video. Oxford: Oxford University Press, 1992.

10

Cooper, Richard. Video. Oxford, [England]: Oxford University Press, 1993.


Book chapters on the topic "Video and language"

1

Austin, Erin E. H. "Video Options". In Going Global in the World Language Classroom, 76–83. New York: Routledge, 2023. http://dx.doi.org/10.4324/9781003384267-12.

2

Klimas, Janina. "Video and Drama Activities". In Building Proficiency for World Language Learners, 190–210. New York: Eye on Education, 2024. http://dx.doi.org/10.4324/9781032622507-16.

3

Zhang, Shilin, and Mei Gu. "Research on Hand Language Video Retrieval". In Lecture Notes in Computer Science, 648–55. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-13498-2_85.

4

Rylander, John, Phillip Clark, and Richard Derrah. "A video-based method of assessing pragmatic awareness". In Assessing Second Language Pragmatics, 65–97. London: Palgrave Macmillan UK, 2013. http://dx.doi.org/10.1057/9781137003522_3.

5

Ma, Minuk, Sunjae Yoon, Junyeong Kim, Youngjoon Lee, Sunghun Kang, and Chang D. Yoo. "VLANet: Video-Language Alignment Network for Weakly-Supervised Video Moment Retrieval". In Computer Vision – ECCV 2020, 156–71. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58604-1_10.

6

Stopel, Bartosz. "On Botched Cinematic Transformations of Video Games". In Second Language Learning and Teaching, 173–90. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-25189-5_12.

7

Heyerick, Isabelle. "Chapter 5. The importance of video recordings in signed language interpreting research". In Linking up with Video, 127–49. Amsterdam: John Benjamins Publishing Company, 2020. http://dx.doi.org/10.1075/btl.149.06hey.

8

Khoreva, Anna, Anna Rohrbach, and Bernt Schiele. "Video Object Segmentation with Language Referring Expressions". In Computer Vision – ACCV 2018, 123–41. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-20870-7_8.

9

Zhang, Junchao, and Yuxin Peng. "Hierarchical Vision-Language Alignment for Video Captioning". In MultiMedia Modeling, 42–54. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-05710-7_4.

10

Kacetl, Jaroslav, and Madgalena Fiserova. "Online Video Clips in Foreign Language Teaching". In Business Challenges in the Changing Economic Landscape - Vol. 2, 355–64. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-22593-7_26.


Conference papers on the topic "Video and language"

1

Bosy, Karen, and Cristina Portugal. "Media Language: Video practices". In Proceedings of EVA London 2020. BCS Learning and Development Ltd, 2020. http://dx.doi.org/10.14236/ewic/eva2020.53.

2

Buch, Shyamal, Cristobal Eyzaguirre, Adrien Gaidon, Jiajun Wu, Li Fei-Fei, and Juan Carlos Niebles. "Revisiting the “Video” in Video-Language Understanding". In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2022. http://dx.doi.org/10.1109/cvpr52688.2022.00293.

3

Liu, Runze, Yaqun Fang, Fan Yu, Ruiqi Tian, Tongwei Ren, and Gangshan Wu. "Deep Video Understanding with Video-Language Model". In MM '23: The 31st ACM International Conference on Multimedia. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3581783.3612863.

4

Nam, Yoonsoo, Adam Lehavi, Daniel Yang, Digbalay Bose, Swabha Swayamdipta, and Shrikanth Narayanan. "Does Video Summarization Require Videos? Quantifying the Effectiveness of Language in Video Summarization". In ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2024. http://dx.doi.org/10.1109/icassp48485.2024.10445931.

5

Swartz, Jonathan, and Brian C. Smith. "A resolution independent video language". In the third ACM international conference. New York, New York, USA: ACM Press, 1995. http://dx.doi.org/10.1145/217279.215265.

6

Kountchev, R., Vl Todorov, and R. Kountcheva. "Efficient sign language video representation". In 2008 International Conference on Systems, Signals and Image Processing (IWSSIP). IEEE, 2008. http://dx.doi.org/10.1109/iwssip.2008.4604396.

7

Li, Linjie, Zhe Gan, Kevin Lin, Chung-Ching Lin, Zicheng Liu, Ce Liu, and Lijuan Wang. "LAVENDER: Unifying Video-Language Understanding as Masked Language Modeling". In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2023. http://dx.doi.org/10.1109/cvpr52729.2023.02214.

8

Tellex, Stefanie, Thomas Kollar, George Shaw, Nicholas Roy, and Deb Roy. "Grounding spatial language for video search". In International Conference on Multimodal Interfaces and the Workshop on Machine Learning for Multimodal Interaction. New York, New York, USA: ACM Press, 2010. http://dx.doi.org/10.1145/1891903.1891944.

9

Zhang, Shilin, and Hai Wang. "HMM based hand language video retrieval". In 2010 International Conference on Intelligent Control and Information Processing (ICICIP). IEEE, 2010. http://dx.doi.org/10.1109/icicip.2010.5564284.

10

Gupta, Vaidik, Rohan Punjani, Mayur Vaswani, and Jyoti Kundale. "Video Conferencing with Sign language Detection". In 2022 2nd Asian Conference on Innovation in Technology (ASIANCON). IEEE, 2022. http://dx.doi.org/10.1109/asiancon55314.2022.9908973.


Organizational reports on the topic "Video and language"

1

Liang, Yiqing. Video Retrieval Based on Language and Image Analysis. Fort Belvoir, VA: Defense Technical Information Center, May 1999. http://dx.doi.org/10.21236/ada364129.

2

Chorna, Olha V., Vita A. Hamaniuk, and Aleksandr D. Uchitel. Use of YouTube on lessons of practical course of German language as the first and second language at the pedagogical university. [б. в.], September 2019. http://dx.doi.org/10.31812/123456789/3253.

Abstract:
Integration of ICT significantly increases the possibilities of the educational process and extends the boundaries of the educational sphere as a whole. Publicly available resources, such as e-mail, blogs, forums, online applications and video hosting sites, can serve as the basis for building open learning and education. Informational educational technologies for learning foreign languages are the focus of this study. The article presents the results of a theoretical analysis of content with respect to its personal and didactic orientation, as well as some aspects of the practical use of commonly used YouTube video materials in the process of teaching German as the first or second foreign language in higher education, namely at the pedagogical university. Taking into account practical experience with the materials of several relevant thematic YouTube channels with a fairly wide constant audience, a concise didactic analysis of their product is presented, and recommendations are offered on converting video content into methodological material within the practical German language course for future teachers. Following the suggested recommendations, the following tasks can be addressed: enrichment of vocabulary; semantization of phraseological units, set figures of speech and clichés; development of pronunciation skills; expansion of linguistic competence; improving listening and speaking skills; increasing motivation to learn, etc.
3

Smith, Michael A., and Takeo Kanade. Video Skimming and Characterization through the Combination of Image and Language Understanding Techniques. Fort Belvoir, VA: Defense Technical Information Center, February 1997. http://dx.doi.org/10.21236/ada333857.

4

Decleir, Cyril, Mohand-Saïd Hacid, and Jacques Kouloumdjian. A Database Approach for Modeling and Querying Video Data. Aachen University of Technology, 1999. http://dx.doi.org/10.25368/2022.90.

Abstract:
Indexing video data is essential for providing content based access. In this paper, we consider how database technology can offer an integrated framework for modeling and querying video data. As many concerns in video (e.g., modeling and querying) are also found in databases, databases provide an interesting angle to attack many of the problems. From a video applications perspective, database systems provide a nice basis for future video systems. More generally, database research will provide solutions to many video issues even if these are partial or fragmented. From a database perspective, video applications provide beautiful challenges. Next generation database systems will need to provide support for multimedia data (e.g., image, video, audio). These data types require new techniques for their management (i.e., storing, modeling, querying, etc.). Hence new solutions are significant. This paper develops a data model and a rule-based query language for video content based indexing and retrieval. The data model is designed around the object and constraint paradigms. A video sequence is split into a set of fragments. Each fragment can be analyzed to extract the information (symbolic descriptions) of interest that can be put into a database. This database can then be searched to find information of interest. Two types of information are considered: (1) the entities (objects) of interest in the domain of a video sequence, (2) video frames which contain these entities. To represent these information, our data model allows facts as well as objects and constraints. We present a declarative, rule-based, constraint query language that can be used to infer relationships about information represented in the model. The language has a clear declarative and operational semantics. This work is a major revision and a consolidation of [12, 13].
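As a toy illustration of the fragment/fact idea described above (not the paper's actual rule-based query language), the sketch below stores object facts per video fragment and answers a simple retrieval query; all names and data are invented.

```python
# Video fragments annotated with object facts, queried by entity.
from dataclasses import dataclass


@dataclass
class Fact:
    fragment_id: int       # which video fragment the fact describes
    frame_range: tuple     # (first_frame, last_frame) of the fragment
    entity: str            # object of interest appearing in the fragment


facts = [
    Fact(1, (0, 120), "car"),
    Fact(2, (121, 300), "pedestrian"),
    Fact(3, (301, 450), "car"),
]


def frames_with(entity: str):
    """Return the frame ranges of all fragments containing the entity."""
    return [f.frame_range for f in facts if f.entity == entity]


print(frames_with("car"))  # [(0, 120), (301, 450)]
```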
5

Trullinger, Richard. Differential measurement of a language concept presented via video tape playback to first grade students. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.2420.

6

Symonenko, Svitlana V., Nataliia V. Zaitseva, Viacheslav V. Osadchyi, Kateryna P. Osadcha, and Ekaterina O. Shmeltser. Virtual reality in foreign language training at higher educational institutions. [б. в.], February 2020. http://dx.doi.org/10.31812/123456789/3759.

Abstract:
The paper deals with the urgent problem of the application of virtual reality in foreign language training. Statistical data confirm that the number of smartphone users and Internet users, including wireless Internet users, has been increasing in Ukraine in recent years and tends to grow. The combination of quick mobile Internet access and supplementary equipment enables learners to train or advance independently, using the possibilities of virtual reality for education in stationary classrooms, at home and on the move. Several important features of virtual reality and its advantages for education are discussed. It is noted that virtual reality remains a relatively new technology in language learning. Benefits of implementing virtual reality in foreign language learning and teaching are given. The aspects of immersion and gamification in foreign language learning are considered. It is emphasized that virtual reality creates the necessary preconditions for increasing motivation. The results of a survey at two higher education institutions on personal experience in using VR applications for learning foreign languages are presented. Most students at both universities indicated quite low usage of virtual reality applications. Six popular virtual reality applications for foreign language learning (Mondly, VRSpeech, VR Learn English, Gold Lotus, AltSpaceVR and VirtualSpeech) are analyzed. It is stated that the most preferred VR application for foreign language learning includes a detailed virtual environment for maximal immersion, high-level visual effects similar to video games, simple avatar control, thorough material selection, full correspondence of every element and aspect to the complexity level, affordability, and helpful, unobtrusive follow-up.
7

Sandeep, Bhushan, Huang Xin, and Xiao Zongwei. A comparison of regional anesthesia techniques in patients undergoing video-assisted thoracic surgery: A network meta-analysis. INPLASY - International Platform of Registered Systematic Review and Meta-analysis Protocols, February 2022. http://dx.doi.org/10.37766/inplasy2022.2.0003.

Abstract:
Review question / Objective: Although video-assisted thoracoscopic surgery is a minimally invasive surgical technique, the pain remains moderate to severe. We comprehensively compared the regional anesthesia methods for postoperative analgesia in patients undergoing video-assisted thoracoscopic surgery. Eligibility criteria: All published full-article RCTs comparing the analgesic efficacy of the investigated regional anesthesia technique or comparative blocks in adult patients undergoing any VATS were eligible for inclusion. There were no language restrictions. Moreover, we excluded case reports, non-RCT studies, incomplete clinical trials, and any trials that used multiple nerve blocks. We also excluded any conference abstracts that could not offer enough information about the study design, even after a data request to the author.
8

Pikilnyak, Andrey V., Nadia M. Stetsenko, Volodymyr P. Stetsenko, Tetiana V. Bondarenko, and Halyna V. Tkachuk. Comparative analysis of online dictionaries in the context of the digital transformation of education. [б. в.], June 2021. http://dx.doi.org/10.31812/123456789/4431.

Abstract:
The article is devoted to a comparative analysis of popular online dictionaries and an overview of the main tools these resources offer for studying a language. The use of dictionaries in learning a foreign language is an important step towards understanding the language. The effectiveness of this process increases with the use of online dictionaries, which have many tools for improving the educational process. Based on the Alexa Internet resource, the most popular online dictionaries were identified: Cambridge Dictionary, Wordreference, Merriam–Webster, Wiktionary, TheFreeDictionary, Dictionary.com, Glosbe, Collins Dictionary, Longman Dictionary, Oxford Dictionary. A deep analysis of these online dictionaries showed that they share standard functions such as word explanations, transcription, audio pronunciation, semantic connections, and examples of use. In the examined dictionaries, we also found additional tools for learning foreign languages (mostly English) that can be effective. In total, we describe sixteen functions of the online platforms that can be useful in learning a foreign language. We compiled a comparison table based on the following functions: machine translation, multilingualism, video of pronunciation, image of a word, discussion, collaborative editing, word rank, hints, learning tools, thesaurus, paid services, content sharing, hyperlinks in a definition, registration, word lists, mobile version, etc. Based on the additional tools of online dictionaries, we created a diagram that shows the functionality of the analyzed platforms.
9

Brenzel, Jeffrey, and Burr Settles. The Duolingo English Test: Design, Validity, and Value. Duolingo, September 2017. http://dx.doi.org/10.46999/lyqs3238.

Abstract:
Colleges, universities, and secondary schools around the world are using the Duolingo English Test (DET) as a new way to assess English language proficiency. The DET offers many advantages to admissions offices and applicants: on-demand accessibility, low cost, remote test proctoring, and rapid score reporting, as well as an integrated video interview and writing sample. The DET also looks and functions differently from other proficiency tests, which raises an obvious question. If DET item types and administration differ from those used on other large scale tests like the TOEFL® or IELTS®, can DET test scores be used in the same way as the scores from those instruments?
10

Petrovych, Olha B., Alla P. Vinnichuk, Viktor P. Krupka, Iryna A. Zelenenka, and Andrei V. Voznyak. The usage of augmented reality technologies in professional training of future teachers of Ukrainian language and literature. CEUR Workshop Proceedings, July 2021. http://dx.doi.org/10.31812/123456789/4635.

Abstract:
The article deals with the peculiarities of the creation and practical application of augmented reality (AR) technologies for organizing students-philologists' individual and group work in studying the discipline "Methodic of teaching literature". The relevance of introducing AR technologies for forming future teachers-philologists' readiness for professional activity is substantiated. Analysis of the scientific sources suggested that the professional training process requires the modernization of teaching methods, and the usage of information and communication technologies (ICT) in education, in particular AR technologies, makes the learning process interesting and exciting. The domestic and foreign experience of applying AR technologies in current educational practices is generalized. A step-by-step algorithm for creating AR in the mobile application Unite and its subsequent content filling for the professional training of future teachers of Ukrainian language and literature is described. The visualization of the educational content of the lepbook "Incredible Lesya Ukrainka", made by students-philologists at the Mykhailo Stelmakh Faculty of Philology and Journalism of Vinnytsia Mykhailo Kotsiubynskyi State Pedagogical University while studying the discipline "Methodic of teaching literature", is detailed. It is specified that the educational process is based on creating AR with the visualization of interactive learning materials with animation, instructions, links, video content, illustrations etc. according to the rubrics of the lepbook. It is emphasized that the implementation of AR technologies increases motivation for the systematic mastering of practical skills, enhances students' concentration and attention, increases their cognitive experience, promotes the development of their creative abilities, creates opportunities for using the visualized content in students' research work, stimulates them to self-expression, motivates them to self-development, and trains them in the skillful use of the Internet, modern gadgets and mobile applications, etc. Prospects for studying the possibilities of using AR technologies in Ukrainian literature lessons at secondary school are determined.