Dissertations / Theses on the topic 'Video text'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Video text.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Sidevåg, Emmilie. "Användarmanual text vs video." Thesis, Linnéuniversitetet, Institutionen för datavetenskap, fysik och matematik, DFM, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-17617.
Full text
Salway, Andrew. "Video annotation : the role of specialist text." Thesis, University of Surrey, 1999. http://epubs.surrey.ac.uk/843350/.
Full text
Smith, Gregory. "VIDEO SCENE DETECTION USING CLOSED CAPTION TEXT." VCU Scholars Compass, 2009. http://scholarscompass.vcu.edu/etd/1932.
Full text
Zhang, Jing. "Extraction of Text Objects in Image and Video Documents." Scholar Commons, 2012. http://scholarcommons.usf.edu/etd/4266.
Full text
Sjölund, Jonathan. "Detection of Frozen Video Subtitles Using Machine Learning." Thesis, Linköpings universitet, Datorseende, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-158239.
Full text
Chen, Datong. "Text detection and recognition in images and video sequences." [S.l.] : [s.n.], 2003. http://library.epfl.ch/theses/?display=detail&nr=2863.
Full text
Štindlová, Marie. "Museli to založit." Master's thesis, Vysoké učení technické v Brně. Fakulta výtvarných umění, 2015. http://www.nusl.cz/ntk/nusl-232451.
Full text
Bird, Paul. "Elementary students' comprehension of computer presented text." Thesis, University of British Columbia, 1990. http://hdl.handle.net/2429/29187.
Full textEducation, Faculty of
Curriculum and Pedagogy (EDCP), Department of
Graduate
Sharma, Nabin. "Multi-lingual Text Processing from Videos." Thesis, Griffith University, 2015. http://hdl.handle.net/10072/367489.
Full text
Thesis (PhD Doctorate), Doctor of Philosophy (PhD), School of Information and Communication Technology, Science, Environment, Engineering and Technology.
Fraz, Muhammad. "Video content analysis for intelligent forensics." Thesis, Loughborough University, 2014. https://dspace.lboro.ac.uk/2134/18065.
Full text
Zheng, Yilin. "Text-Based Speech Video Synthesis from a Single Face Image." The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1572168353691788.
Full text
Gokturk, Ozkan Ziya. "Metadata Extraction From Text In Soccer Domain." Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/12609871/index.pdf.
Full text
…find accompanying text with the video, such as soccer domain, movie domain and news domain. In this thesis, we present an approach of metadata extraction from match reports for soccer domain. The UEFA Cup and UEFA Champions League Match Reports are downloaded from the web site of UEFA by a web-crawler. These match reports are preprocessed by using regular expressions and then important events are extracted by using hand-written rules. In addition to hand-written rules, two different machine learning techniques are applied on the match corpus to learn event patterns and automatically extract match events. Extracted events are saved in an MPEG-7 file. A user interface is implemented to query the events in the MPEG-7 match corpus and view the corresponding video segments.
Hekimoglu, M. Kadri. "Video-text processing by using Motorola 68020 CPU and its environment." Thesis, Monterey, California. Naval Postgraduate School, 1991. http://hdl.handle.net/10945/26833.
Full textTarczyńska, Anna. "Methods of Text Information Extraction in Digital Videos." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2656.
Full text
The huge amount of existing digital video files needs indexing to make it available to users through easier searching. This indexing can be provided by text information extraction. In this thesis we have analysed and compared methods of text information extraction in digital videos. Furthermore, we have evaluated them in the new context proposed by us, namely their usefulness in sports news indexing and information retrieval.
Demirtas, Kezban. "Automatic Video Categorization And Summarization." Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/3/12611113/index.pdf.
Full text
Hay, Richard. "Views and perceptions of the use of text and video in English teaching." Thesis, Högskolan i Gävle, Avdelningen för humaniora, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-25400.
Full text
Schwarz, Katharina [Verfasser], and Hendrik P. A. [Akademischer Betreuer] Lensch. "Text–to–Video : Image Semantics and NLP / Katharina Schwarz ; Betreuer: Hendrik P. A. Lensch." Tübingen : Universitätsbibliothek Tübingen, 2019. http://d-nb.info/1182985963/34.
Full text
Yousfi, Sonia. "Embedded Arabic text detection and recognition in videos." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSEI069/document.
Full text
This thesis focuses on Arabic embedded text detection and recognition in videos. Different approaches robust to Arabic text variability (fonts, scales, sizes, etc.) as well as to environmental and acquisition condition challenges (contrasts, degradation, complex background, etc.) are proposed. We introduce different machine learning-based solutions for robust text detection without relying on any pre-processing. The first method is based on Convolutional Neural Networks (ConvNet) while the others use a specific boosting cascade to select relevant hand-crafted text features. For text recognition, our methodology is segmentation-free. Text images are transformed into sequences of features using a multi-scale scanning scheme. Standing out from the dominant methodology of hand-crafted features, we propose to learn relevant text representations from data using different deep learning methods, namely Deep Auto-Encoders, ConvNets and unsupervised learning models. Each one leads to a specific OCR (Optical Character Recognition) solution. Sequence labeling is performed without any prior segmentation using a recurrent connectionist learning model. The proposed solutions are compared to other methods based on non-connectionist, hand-crafted features. In addition, we propose to enhance the recognition results using Recurrent Neural Network-based language models that are able to capture long-range linguistic dependencies. Both OCR and language model probabilities are incorporated in a joint decoding scheme where additional hyper-parameters are introduced to boost recognition results and reduce the response time. Given the lack of public multimedia Arabic datasets, we propose novel annotated datasets issued from Arabic videos. The OCR dataset, called ALIF, is publicly available for research purposes. To the best of our knowledge, it is the first public dataset dedicated to Arabic video OCR. Our proposed solutions were extensively evaluated. The obtained results highlight the genericity and efficiency of our approaches, reaching a word recognition rate of 88.63% on the ALIF dataset and outperforming a well-known commercial OCR engine by more than 36%.
Uggerud, Nils. "AnnotEasy: A gesture and speech-to-text based video annotation tool for note taking in pre-recorded lectures in higher education." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-105962.
Full text
Stokes, Charlotte Ellenor. "Investigating the Efficacy of Video versus Text Instruction for the Recall of Food Safety Information." Digital Archive @ GSU, 2009. http://digitalarchive.gsu.edu/nutrition_theses/28.
Full text
Tran, Anh Xuan. "Identifying latent attributes from video scenes using knowledge acquired from large collections of text documents." Thesis, The University of Arizona, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=3634275.
Full text
Peter Drucker, a well-known influential writer and philosopher in the field of management theory and practice, once claimed that “the most important thing in communication is hearing what isn't said.” It is not difficult to see that a similar concept also holds in the context of video scene understanding. In almost every non-trivial video scene, most important elements, such as the motives and intentions of the actors, can never be seen or directly observed, yet the identification of these latent attributes is crucial to our full understanding of the scene. That is to say, latent attributes matter.
In this work, we explore the task of identifying latent attributes in video scenes, focusing on the mental states of participant actors. We propose a novel approach to the problem based on the use of large text collections as background knowledge and minimal information about the videos, such as activity and actor types, as query context. We formalize the task and a measure of merit that accounts for the semantic relatedness of mental state terms, as well as their distribution weights. We develop and test several largely unsupervised information extraction models that identify the mental state labels of human participants in video scenes given some contextual information about the scenes. We show that these models produce complementary information and their combination significantly outperforms the individual models, and improves performance over several baseline methods on two different datasets. We present an extensive analysis of our models and close with a discussion of our findings, along with a roadmap for future research.
Ulvbäck, Gustav, and Wingårdh Rickard Eriksson. "Förmedla information med animerad text : Blir textbaserad information på sociala medier mer intressant om det sker i rörlig bild med animerad text?" Thesis, Södertörns högskola, Institutionen för naturvetenskap, miljö och teknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:sh:diva-34509.
Full text
Wells, Emily Jean. "The effects of luminance contrast, raster modulation, and ambient illumination on text readability and subjective image quality." Thesis, Virginia Tech, 1994. http://scholar.lib.vt.edu/theses/available/etd-07102009-040235/.
Full text
Jaroňová, Eva. "Od ideálu k utopii (zítřek, co už byl)." Master's thesis, Vysoké učení technické v Brně. Fakulta výtvarných umění, 2012. http://www.nusl.cz/ntk/nusl-232359.
Full text
Macindoe, Annie C. "Melancholy and the memorial: Representing loss, grief and affect in contemporary visual art." Thesis, Queensland University of Technology, 2018. https://eprints.qut.edu.au/119695/1/Annie_Macindoe_Thesis.pdf.
Full text
Ryrå Landgren, Isabella. "Samspel i det berättartekniska : text, bild och effekter i musikvideor." Thesis, Högskolan Väst, Avd för medier och design, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:hv:diva-8965.
Full text
Music videos have been a form of entertainment in our society for the past 50 years. Some are shaped to reflect emotions while others showcase the artist. There are those based on the lyrics, creating a short film or portraying the content of the song text. With the help of techniques such as visual effects, these dreamlike and impossible worlds and stories can come to life. It is videos with such effects that I have chosen to analyse in this essay, with the aim of finding out how large a role the visual effects play in the storytelling. To this end, I conducted a semiotic study focused on analysis and interpretation of five selected videos created during or after the 2000s. CGI, slow motion and metaphors are the techniques I examined, and they all turn out to contribute to how the narrative unfolds and is perceived. The relationship between image and text in the selected videos ranges from free interpretation to literal translation of one another.
Saracoglu, Ahmet. "Localization And Recognition Of Text In Digital Media." Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/2/12609028/index.pdf.
Full textБикова, О. Д. "Відеовербальний текст німецькомовного вербального дискурсу." Thesis, Сумський державний університет, 2013. http://essuir.sumdu.edu.ua/handle/123456789/30524.
Full text
Tapaswi, Makarand Murari [Verfasser], and R. [Akademischer Betreuer] Stiefelhagen. "Story Understanding through Semantic Analysis and Automatic Alignment of Text and Video / Makarand Murari Tapaswi. Betreuer: R. Stiefelhagen." Karlsruhe : KIT-Bibliothek, 2016. http://d-nb.info/1108450725/34.
Full text
Sartini, Emily C. "EFFECTS OF EXPLICIT INSTRUCTION AND SELF-DIRECTED VIDEO PROMPTING ON TEXT COMPREHENSION OF STUDENTS WITH AUTISM SPECTRUM DISORDER." UKnowledge, 2016. http://uknowledge.uky.edu/edsrc_etds/24.
Full text
Hansen, Simon. "TEXTILE - Augmenting Text in Virtual Space." Thesis, Malmö högskola, Fakulteten för kultur och samhälle (KS), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-23172.
Full text
Escobar, Mayte. "The Body As Border: El Cuerpo Como Frontera." CSUSB ScholarWorks, 2015. https://scholarworks.lib.csusb.edu/etd/247.
Full text
Antonér, Jakob. "Hur svårt ska det vara med lite text? : En självobservationsstudie av textinlärning i sång." Thesis, Karlstads universitet, Institutionen för konstnärliga studier, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-46386.
Full text
In this independent project, three strategies for learning lyrics are described and explored. A new song is used for each strategy. The purpose is to explore and identify different kinds of resources used when learning. The study is based on self-observations in the form of a logbook and video recordings over three weeks, covering a total of 15 practice sessions in the fall of 2015. The starting point is a multimodal and design-theoretical perspective. The results show how I use different resources when learning different lyrics through the three strategies. Finally, I discuss the results in relation to the design-theoretical perspective and earlier research.
Hung, Yu-Wan. "The use of communication strategies by learners of English and learners of Chinese in text-based and video-based synchronous computer-mediated communication (SCMC)." Thesis, Durham University, 2012. http://etheses.dur.ac.uk/4426/.
Full text
Hermanová, Petra. "Falešná vzpomínka." Master's thesis, Vysoké učení technické v Brně. Fakulta výtvarných umění, 2012. http://www.nusl.cz/ntk/nusl-232362.
Full text
Diaz, Leanna Marie. "Usage of Emotes and Emoticons in a Massively Multiplayer Online Role-Playing Game." University of Toledo / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1533228651012048.
Full text
D'Angelo, John J. "A Study of the Relationship Between the Use of Color for Text in Computer Screen Design and the Age of the Computer User." Thesis, University of North Texas, 1991. https://digital.library.unt.edu/ark:/67531/metadc663711/.
Full text
Strömberg, Per. "Kan jag öva utan att sjunga? : en självobservationsstudie av instudering i sång." Thesis, Karlstads universitet, Institutionen för konstnärliga studier (from 2013), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-71363.
Full text
The purpose of this self-observation study is to find new ways to learn songs with a focus on lyrics. The study presents three different methods, both on their own and integrated with each other: listening, the voice, and writing. The study is based on a sociocultural perspective and other research relevant to learning songs. The method used was video recording of my practice together with notes from the practice sessions. Over two weeks in autumn 2017, I gathered material from 14 practice sessions of 20 minutes each, two of them filmed. In the results I present how I used these three methods to learn the two different songs. Finally, the results are discussed in relation to the earlier research.
Ferguson, Ralph. "Multimodal Literacy as a form of Communication : What is the state of the students at Dalarna University multimodal literacy?" Thesis, Högskolan Dalarna, Ljud- och musikproduktion, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:du-16835.
Full textNguyen, Chu Duc. "Localization and quality enhancement for automatic recognition of vehicle license plates in video sequences." Thesis, Ecully, Ecole centrale de Lyon, 2011. http://www.theses.fr/2011ECDL0018.
Full text
Automatic reading of vehicle license plates is considered an approach to mass surveillance. It allows, through detection/localization and optical recognition, the identification of a vehicle in images or video sequences. Many applications such as traffic monitoring, detection of stolen vehicles, tolling and the management of parking entrances/exits use this method. Yet in spite of the important progress made since the appearance of the first prototypes in 1979, with recognition rates that are sometimes impressive thanks to advances in science and sensor technology, the constraints imposed on the operation of such systems remain limiting. Indeed, the optimal use of techniques for localizing and recognizing license plates requires operational scenarios with controlled lighting conditions and limitations on pose, velocity, or simply plate type. Automatic reading of vehicle license plates therefore remains an open research problem. The major contribution of this thesis is threefold. First, a new approach to robust license plate localization in images or image sequences is proposed. Then, improving the quality of the plates is treated with a localized adaptation of a super-resolution technique. Finally, a unified model of localization and super-resolution is proposed to reduce the time complexity of both approaches combined.
Janssen, Michael. "Balansering av ett rundbaserat strategispelsystem : En reflekterande text rörande arbetet att skapa ett verktyg för balansering av precautionstridsystemet i spelet Dreamlords – The Reawakening." Thesis, University of Skövde, School of Humanities and Informatics, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-1114.
Full text
The following work is a reflective text about how I created a tool for balancing the turn-based precaution combat system of the game Dreamlords – The Reawakening.
At the beginning I explain my purpose and goal with the work, followed by a research question connected to it. I also describe various theories on game balancing that I looked into. To introduce the reader to Dreamlords, I explain the general principle of the whole game. After that I cover my working process and how I structured the work. In this text I also explain how the precaution combat system works, with all its mathematical calculations, and then describe my tool as the practical result of the work. At the end of the report I conclude with a discussion that summarizes my work and addresses problems that arose along the way. As a whole, the text describes how one can go about balancing a game system such as the precaution combat system in Dreamlords. The results of my work are the balancing tool itself and this reflective text, which will hopefully inspire the reader and others interested in balancing computer games.
Templeton, Joey. "RECREATIONAL TECHNOLOGY AND ITS IMPACT ON THE LEARNING DEVELOPMENT OF CHILDREN AGES 4-8: A META-ANALYSIS FOR THE 21ST CENTURY CL." Doctoral diss., University of Central Florida, 2007. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4297.
Full text
Ph.D., Department of English, Arts and Humanities, Texts and Technology PhD.
Ma, Zhenyu. "Semi-synchronous video for Deaf Telephony with an adapted synchronous codec." Thesis, University of the Western Cape, 2009. http://etd.uwc.ac.za/index.php?module=etd&action=viewtitle&id=gen8Srv25Nme4_2950_1370593938.
Full text
Communication tools such as text-based instant messaging, voice and video relay services, real-time video chat and mobile SMS and MMS have successfully been used among Deaf people. Several years of field research with a local Deaf community revealed that disadvantaged South African Deaf people preferred to communicate with both Deaf and hearing peers in South African Sign Language as opposed to text. Synchronous video chat and video relay services provided such opportunities. Both types of services are commonly available in developed regions, but not in developing countries like South Africa. This thesis reports on a workaround approach to design and develop an asynchronous video communication tool that adapted synchronous video codecs to store-and-forward video delivery. This novel asynchronous video tool provided high quality South African Sign Language video chat at the expense of some additional latency. Synchronous video codec adaptation consisted of comparing codecs, and choosing one to optimise in order to minimise latency and preserve video quality. Traditional quality of service metrics only addressed real-time video quality and related services. There was no such standard for asynchronous video communication. Therefore, we also enhanced traditional objective video quality metrics with subjective assessment metrics conducted with the local Deaf community.
Bull, Hannah. "Learning sign language from subtitles." Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG013.
Full text
Sign languages are an essential means of communication for deaf communities. Sign languages are visuo-gestural languages using the modalities of hand gestures, facial expressions, gaze and body movements. They possess rich grammar structures and lexicons that differ considerably from those found among spoken languages. The uniqueness of the transmission medium, structure and grammar of sign languages requires distinct methodologies. The performance of automatic translation systems between high-resource written languages or spoken languages is currently sufficient for many daily use cases, such as translating videos, websites, emails and documents. On the other hand, automatic translation systems for sign languages do not exist outside of very specific use cases with limited vocabulary. Automatic sign language translation is challenging for two main reasons. Firstly, sign languages are low-resource languages with little available training data. Secondly, sign languages are visual-spatial languages with no written form, naturally represented as video rather than audio or text. To tackle the first challenge, we contribute large datasets for training and evaluating automatic sign language translation systems with both interpreted and original sign language video content, as well as written text subtitles. Whilst interpreted data allows us to collect large numbers of hours of videos, original sign language video is more representative of sign language usage within deaf communities. Written subtitles can be used as weak supervision for various sign language understanding tasks. To address the second challenge, we develop methods to better understand visual cues from sign language video. Whilst sentence segmentation is mostly trivial for written languages, segmenting sign language video into sentence-like units relies on detecting subtle semantic and prosodic cues from sign language video. We use prosodic cues to learn to automatically segment sign language video into sentence-like units, determined by subtitle boundaries. Expanding upon this segmentation method, we then learn to align text subtitles to sign language video segments using both semantic and prosodic cues, in order to create sentence-level pairs between sign language video and text. This task is particularly important for interpreted TV data, where subtitles are generally aligned to the audio and not to the signing. Using these automatically aligned video-text pairs, we develop and improve multiple different methods to densely annotate lexical signs by querying words in the subtitle text and searching for visual cues in the sign language video for the corresponding signs.
Dias, Laura Lima. "Análise de abordagens automáticas de anotação semântica para textos ruidosos e seus impactos na similaridade entre vídeos." Universidade Federal de Juiz de Fora (UFJF), 2017. https://repositorio.ufjf.br/jspui/handle/ufjf/6473.
Full text
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
With the accumulation of digital information stored over time, some efforts need to be applied to facilitate the search and indexing of content. Resources such as videos and audios, in turn, are more difficult for search engines to handle. Video annotation is a considerable form of video summarization, search and classification. The share of videos that have annotations attributed by the author is most often very small and not very significant, and annotating videos manually is very laborious when dealing with legacy bases. For this reason, automating this process has been desired in the field of Information Retrieval. In video lecture repositories, where most of the information is concentrated in the teacher's speech, this process can be performed through automatic annotation of transcripts generated by Automatic Speech Recognition systems. However, this technique produces noisy texts, making the task of automatic semantic annotation difficult. Among the many Natural Language Processing techniques used for annotation, it is not trivial to choose the most appropriate technique for a given scenario, especially when annotating noisy texts. This research proposes to analyze a set of different techniques used for automatic annotation and verify their impact in the same scenario, the scenario of similarity between videos.
Mengoli, Chiara. "Plagiarism." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20928/.
Full textФіліппова, Є. С. "Мультимедійний текст: структурний, функціональний та перекладацький аспекти." Master's thesis, Сумський державний університет, 2019. http://essuir.sumdu.edu.ua/handle/123456789/75847.
Full text
The relevance of the study is that multimedia texts have been explored mainly through the lens of sociological or marketing issues, while their linguistic content has been largely ignored. In our study, we uncover the linguistic potential of multimedia texts through the example of Internet discourse, advertising texts and video content on the YouTube platform.
Murnane, Owen D., and Kristal M. Riska. "The Video Head Impulse Test." Digital Commons @ East Tennessee State University, 2018. https://dc.etsu.edu/etsu-works/1978.
Full text
Murnane, Owen D., Stephanie M. Byrd, C. Kidd, and Faith W. Akin. "The Video Head Impulse Test." Digital Commons @ East Tennessee State University, 2013. https://dc.etsu.edu/etsu-works/1883.
Full text
Murnane, Owen D., H. Mabrey, A. Pearson, Stephanie M. Byrd, and Faith W. Akin. "The Video Head Impulse Test." Digital Commons @ East Tennessee State University, 2012. https://dc.etsu.edu/etsu-works/1888.
Full text