Dissertations on the topic „Vidéo texte“
Consult the top 50 dissertations for research on the topic "Vidéo texte".
Ayache, Stéphane. „Indexation de documents vidéos par concepts par fusion de caractéristiques audio, vidéo et texte“. Grenoble INPG, 2007. http://www.theses.fr/2007INPG0071.
This work deals with information retrieval and aims at the semantic indexing of multimedia documents. State-of-the-art approaches tackle this problem by bridging the semantic gap between low-level features, extracted from each modality, and high-level features (concepts), which are meaningful to humans. We propose an indexing model based on networks of operators through which the data flow; these units, called numcepts, unify information from the various modalities, extracted at several levels of abstraction. We present an instance of this model in which we describe a topology of the operators and the numcepts we have developed. We conducted experiments on TRECVID corpora to evaluate various organizations of the networks and choices of operators, and we studied their effects on concept-detection performance. We show that a network has to be designed with respect to the concepts in order to optimize indexing performance.
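To make the fusion step concrete, the following is a minimal late-fusion sketch in Python: per-modality concept scores are combined by a weighted average. The modality names, scores and weights are illustrative assumptions, not the thesis's operator networks or numcepts.

    # Weighted late fusion of per-modality concept scores (toy values).
    def fuse(scores, weights):
        """Return the weighted average of scores in [0, 1]."""
        total = sum(weights[m] for m in scores)
        return sum(weights[m] * scores[m] for m in scores) / total

    scores = {"audio": 0.2, "video": 0.7, "text": 0.9}
    weights = {"audio": 1.0, "video": 2.0, "text": 1.0}
    print(fuse(scores, weights))  # fused score for one concept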
Wehbe, Hassan. „Synchronisation automatique d'un contenu audiovisuel avec un texte qui le décrit“. Thesis, Toulouse 3, 2016. http://www.theses.fr/2016TOU30104/document.
We address the problem of automatically synchronizing audiovisual content with a procedural text that describes it. The strategy consists in extracting pieces of information about the structure of both contents and matching them according to their types. We propose two video analysis tools that respectively extract (1) the limits of events of interest, using an approach inspired by dictionary quantization, and (2) segments that enclose a repeated action, based on the YIN frequency analysis method. We then propose a synchronization system that merges the results of these tools in order to establish links between textual instructions and the corresponding video segments. To do so, a "confidence matrix" is built and processed recursively in order to identify these links according to their reliability.
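The confidence-matrix step can be illustrated with a small sketch. The plain-Python fragment below is an assumption about the general shape of such a matcher, not Wehbe's actual algorithm: it repeatedly takes the most reliable remaining (instruction, segment) pair and invalidates its row and column.

    import numpy as np

    def match_links(confidence):
        """Greedily extract (instruction, segment) links by reliability."""
        conf = confidence.astype(float).copy()
        links = []
        while conf.max() > 0:
            i, j = np.unravel_index(np.argmax(conf), conf.shape)
            links.append((int(i), int(j), float(conf[i, j])))
            conf[i, :] = 0.0  # one segment per instruction
            conf[:, j] = 0.0  # one instruction per segment
        return links

    # Toy confidence matrix: 3 instructions x 4 video segments.
    C = np.array([[0.9, 0.1, 0.0, 0.2],
                  [0.2, 0.7, 0.3, 0.1],
                  [0.0, 0.4, 0.1, 0.8]])
    print(match_links(C))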
Yousfi, Sonia. „Embedded Arabic text detection and recognition in videos“. Thesis, Lyon, 2016. http://www.theses.fr/2016LYSEI069/document.
This thesis focuses on the detection and recognition of embedded Arabic text in videos. We propose different approaches robust to the variability of Arabic text (fonts, scales, sizes, etc.) as well as to challenging environmental and acquisition conditions (contrast, degradation, complex backgrounds, etc.). We introduce different machine-learning-based solutions for robust text detection that do not rely on any pre-processing. The first method is based on convolutional neural networks (ConvNets), while the others use a specific boosting cascade to select relevant hand-crafted text features. For text recognition, our methodology is segmentation-free: text images are transformed into sequences of features using a multi-scale scanning scheme. Departing from the dominant methodology of hand-crafted features, we propose to learn relevant text representations from data using different deep learning methods, namely deep auto-encoders, ConvNets and unsupervised learning models; each leads to a specific OCR (optical character recognition) solution. Sequence labeling is performed without any prior segmentation using a recurrent connectionist learning model. The proposed solutions are compared to other methods based on non-connectionist, hand-crafted features. In addition, we enhance the recognition results using recurrent neural network language models that are able to capture long-range linguistic dependencies. Both the OCR and language model probabilities are incorporated in a joint decoding scheme, where additional hyper-parameters are introduced to boost recognition results and reduce response time. Given the lack of public multimedia Arabic datasets, we propose novel annotated datasets built from Arabic videos. The OCR dataset, called ALIF, is publicly available for research purposes; to the best of our knowledge, it is the first public dataset dedicated to Arabic video OCR. Our proposed solutions were extensively evaluated. The results highlight the genericity and efficiency of our approaches, reaching a word recognition rate of 88.63% on the ALIF dataset and outperforming a well-known commercial OCR engine by more than 36%.
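The joint decoding scheme mentioned at the end can be pictured with a scoring rule of the usual shape, where a language-model weight and a word-insertion bonus are the additional hyper-parameters. The values below are illustrative assumptions, not those tuned in the thesis.

    def joint_score(log_p_ocr, log_p_lm, n_words, alpha=0.8, beta=0.1):
        # alpha weights the language model against the optical model;
        # beta is a word-insertion bonus. Both are illustrative.
        return log_p_ocr + alpha * log_p_lm + beta * n_words

    # Rank two candidate transcriptions of the same text box.
    candidates = [(-12.0, -9.5, 3), (-11.4, -14.2, 3)]
    print(max(candidates, key=lambda c: joint_score(*c)))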
Bull, Hannah. „Learning sign language from subtitles“. Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG013.
Sign languages are an essential means of communication for deaf communities. They are visuo-gestural languages that use the modalities of hand gestures, facial expressions, gaze and body movements, and they possess rich grammatical structures and lexicons that differ considerably from those of spoken languages. The unique transmission medium, structure and grammar of sign languages require distinct methodologies. The performance of automatic translation systems between high-resource written or spoken languages is currently sufficient for many daily use cases, such as translating videos, websites, emails and documents. On the other hand, automatic translation systems for sign languages do not exist outside of very specific use cases with limited vocabulary. Automatic sign language translation is challenging for two main reasons. Firstly, sign languages are low-resource languages with little available training data. Secondly, sign languages are visual-spatial languages with no written form, naturally represented as video rather than audio or text. To tackle the first challenge, we contribute large datasets for training and evaluating automatic sign language translation systems, with both interpreted and original sign language video content as well as written text subtitles. Whilst interpreted data allows us to collect large numbers of hours of video, original sign language video is more representative of sign language usage within deaf communities. Written subtitles can be used as weak supervision for various sign language understanding tasks. To address the second challenge, we develop methods to better understand visual cues from sign language video. Whilst sentence segmentation is mostly trivial for written languages, segmenting sign language video into sentence-like units relies on detecting subtle semantic and prosodic cues. We use prosodic cues to learn to automatically segment sign language video into sentence-like units, determined by subtitle boundaries. Expanding upon this segmentation method, we then learn to align text subtitles to sign language video segments using both semantic and prosodic cues, in order to create sentence-level pairs of sign language video and text. This task is particularly important for interpreted TV data, where subtitles are generally aligned to the audio and not to the signing. Using these automatically aligned video-text pairs, we develop and improve multiple methods to densely annotate lexical signs, by querying words in the subtitle text and searching for visual cues in the sign language video for the corresponding signs.
Couture, Matte Robin. „Digital games and negotiated interaction : integrating Club Penguin Island into two ESL grade 6 classes“. Master's thesis, Université Laval, 2019. http://hdl.handle.net/20.500.11794/35458.
The objective of the present study was to explore negotiated interaction involving young children (aged 11-12) who carried out communicative tasks supported by Club Penguin Island, a massively multiplayer online role-playing game (MMORPG). Unlike previous studies involving MMORPGs, the present study assessed the use of Club Penguin Island in the context of face-to-face interaction. More specifically, the research questions were threefold: assess the presence of focus-on-form episodes (FFEs) during tasks carried out with Club Penguin Island and identify their characteristics; evaluate the impact of task type on the presence of FFEs; and survey the attitudes of participants. The research project was carried out with 20 Grade 6 intensive English as a second language (ESL) students in the province of Quebec. The participants carried out one information-gap task and two reasoning-gap tasks, including one with a writing component. The tasks were carried out in dyads, and recordings were transcribed and analyzed to identify the presence of FFEs and their characteristics. A statistical analysis was used to assess the impact of task type on the presence of FFEs, and a questionnaire was administered to assess the attitudes of participants following the completion of all tasks. Findings revealed that carrying out tasks with the MMORPG triggered FFEs, that participants were able to negotiate interaction successfully without the help of the instructor, and that most FFEs focused on the meaning of vocabulary found in the tasks and game. The statistical analysis showed the influence of task type, since more FFEs were produced during the information-gap task than during one of the reasoning-gap tasks. The attitude questionnaire revealed positive attitudes, in line with previous research on digital games for language learning. Pedagogical implications point to the impact of MMORPGs on language learning and add to the scarce literature on negotiated interaction with young learners.
Sidevåg, Emmilie. „Användarmanual text vs video“. Thesis, Linnéuniversitetet, Institutionen för datavetenskap, fysik och matematik, DFM, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-17617.
Salway, Andrew. „Video annotation : the role of specialist text“. Thesis, University of Surrey, 1999. http://epubs.surrey.ac.uk/843350/.
Smith, Gregory. „VIDEO SCENE DETECTION USING CLOSED CAPTION TEXT“. VCU Scholars Compass, 2009. http://scholarscompass.vcu.edu/etd/1932.
Zhang, Jing. „Extraction of Text Objects in Image and Video Documents“. Scholar Commons, 2012. http://scholarcommons.usf.edu/etd/4266.
Zipstein, Marc. „Les Méthodes de compression de textes : algorithmes et performances“. Paris 7, 1990. http://www.theses.fr/1990PA077107.
Štindlová, Marie. „Museli to založit“. Master's thesis, Vysoké učení technické v Brně. Fakulta výtvarných umění, 2015. http://www.nusl.cz/ntk/nusl-232451.
Sjölund, Jonathan. „Detection of Frozen Video Subtitles Using Machine Learning“. Thesis, Linköpings universitet, Datorseende, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-158239.
Wolf, Christian. „Détection de textes dans des images issues d'un flux vidéo pour l'indexation sémantique“. Lyon, INSA, 2003. http://theses.insa-lyon.fr/publication/2003ISAL0074/these.pdf.
This work is situated within the framework of image and video indexing. One way to include semantic knowledge in the indexing process is to use the text contained in images and video sequences: it is rich in information yet easy to use. Existing methods for text detection are simple: most are based on texture estimation or edge detection, followed by an accumulation of these characteristics. We suggest using geometrical features very early in the detection chain: a first coarse detection computes a text "probability" image; then, for each pixel, we calculate geometrical properties of the eventual surrounding text rectangle, which are added to the features of the first step and fed into a support vector machine classifier. For application to video sequences, we propose an algorithm that detects text on a frame-by-frame basis, tracks the found text rectangles across multiple frames, and robustly integrates the frames into a single image. We also tackle the character segmentation problem with two different methods: the first algorithm maximizes a criterion based on local contrast in the image; the second exploits a priori knowledge of the spatial binary distribution of the pixels. This prior knowledge, in the form of a Markov random field model, is integrated into a Bayesian estimation framework in order to obtain an estimate of the original binary image.
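To make the two-stage idea concrete, here is a minimal sketch, with synthetic features standing in for the real ones, of feeding a coarse text-probability value plus geometric properties of a candidate rectangle into a support vector machine:

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    # Each row: [coarse text probability, aspect ratio, fill ratio, contrast]
    X = rng.random((200, 4))
    y = (X[:, 0] + 0.5 * X[:, 2] > 0.9).astype(int)  # toy labels

    clf = SVC(kernel="rbf", probability=True).fit(X, y)
    print(clf.predict_proba(X[:5]))  # text/non-text probabilities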
Bird, Paul. „Elementary students' comprehension of computer presented text“. Thesis, University of British Columbia, 1990. http://hdl.handle.net/2429/29187.
Der volle Inhalt der QuelleEducation, Faculty of
Chen, Datong. „Text detection and recognition in images and video sequences /“. [S.l.] : [s.n.], 2003. http://library.epfl.ch/theses/?display=detail&nr=2863.
Sharma, Nabin. „Multi-lingual Text Processing from Videos“. Thesis, Griffith University, 2015. http://hdl.handle.net/10072/367489.
Der volle Inhalt der QuelleThesis (PhD Doctorate)
Minetto, Rodrigo. „Reconnaissance de zones de texte et suivi d'objets dans les images et les vidéos“. Paris 6, 2012. http://www.theses.fr/2012PA066108.
In this thesis we address three computer vision problems: (1) the detection and recognition of flat text objects in images of real scenes; (2) the tracking of such text objects in a digital video; and (3) the tracking of an arbitrary three-dimensional rigid object with known markings in a digital video. For each problem we developed innovative algorithms that are at least as accurate and robust as other state-of-the-art algorithms. Specifically, for text recognition we developed (and extensively evaluated) a new HOG-based descriptor specialized for Roman script, which we call T-HOG, and showed its value as a post-filter for an existing text detector (SnooperText). We also improved the SnooperText algorithm by using a multi-scale technique to handle widely different letter sizes while limiting the algorithm's sensitivity to various artifacts. For text tracking, we describe four basic ways of combining a text detector and a text tracker, and we developed a specific particle-filter-based tracker that exploits the T-HOG recognizer. For rigid object tracking we developed a new accurate and robust algorithm (AFFTrack) that combines the KLT feature tracker with an improved camera calibration procedure. We extensively tested our algorithms on several benchmarks well known in the literature. We also created publicly available benchmarks for the evaluation of text detection, text tracking and rigid object tracking algorithms.
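A hedged sketch of a HOG-based text/non-text post-filter in the spirit of T-HOG, using scikit-image's stock HOG descriptor rather than the specialized one, and synthetic candidates and labels:

    import numpy as np
    from skimage.feature import hog
    from sklearn.linear_model import LogisticRegression

    def describe(patch):
        # patch: grayscale candidate region resized to 32 x 96 pixels
        return hog(patch, orientations=9, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2))

    rng = np.random.default_rng(1)
    patches = rng.random((50, 32, 96))   # fake detector candidates
    X = np.stack([describe(p) for p in patches])
    y = rng.integers(0, 2, size=50)      # fake text/non-text labels
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    keep = clf.predict(X) == 1           # candidates surviving the filter
    print(keep.sum(), "of", len(keep), "candidates kept")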
Fraz, Muhammad. „Video content analysis for intelligent forensics“. Thesis, Loughborough University, 2014. https://dspace.lboro.ac.uk/2134/18065.
Zheng, Yilin. „Text-Based Speech Video Synthesis from a Single Face Image“. The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1572168353691788.
Gokturk, Ozkan Ziya. „Metadata Extraction From Text In Soccer Domain“. Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/12609871/index.pdf.
In some domains it is possible to find accompanying text with the video, such as the soccer domain, movie domain and news domain. In this thesis, we present an approach to metadata extraction from match reports for the soccer domain. UEFA Cup and UEFA Champions League match reports are downloaded from the UEFA web site by a web crawler. These match reports are preprocessed using regular expressions, and then important events are extracted using hand-written rules. In addition to the hand-written rules, two different machine learning techniques are applied to the match corpus to learn event patterns and automatically extract match events. The extracted events are saved in an MPEG-7 file. A user interface is implemented to query the events in the MPEG-7 match corpus and view the corresponding video segments.
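The hand-written-rules step can be illustrated with a small regular-expression sketch; the pattern and the report line are invented examples, not the thesis's actual rules:

    import re

    report = "23' Goal by Raul (Real Madrid). 45' Yellow card for Puyol."
    pattern = re.compile(r"(?P<minute>\d+)' (?P<event>Goal|Yellow card|Red card)[^.]*\.")
    for m in pattern.finditer(report):
        print(m.group("minute"), m.group("event"))  # 23 Goal / 45 Yellow card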
Gasser, Wolfgang. „„Das Ende (m)einer Kindheit?“: Wissenschaft und Selbstbezüge – Jugendliche analysieren Texte und Video-Interviews zu Kindertransporten“. HATiKVA e.V. – Die Hoffnung Bildungs- und Begegnungsstätte für Jüdische Geschichte und Kultur Sachsen, 2015. https://slub.qucosa.de/id/qucosa%3A34939.
Der volle Inhalt der QuelleMartin, Thomas. „Vers une reconnaissance multimodale du texte et de la parole pour l'analyse de documents vidéos pédagogiques“. La Rochelle, 2009. http://www.theses.fr/2009LAROS264.
This work focuses on methods for the multimodal recognition of text and speech in audiovisual content, in particular lecture recordings, in which text and speech are used extensively. As the production of multimedia data increases massively, access to these data becomes problematic and requires efficient content indexing. It is necessary to take into account the heterogeneous nature of this information, which is the aim of the multimodal analysis paradigm. Because multimodal analysis has emerged only recently, there have been few attempts to define the field, and only a few studies have focused on the interaction between text and speech in multimedia streams and on the use of this interaction for their extraction. Our contribution focuses on two points. First, we address the lack of definition by proposing a model of multimodal analysis. Its goal is to provide a framework for better describing applications that use multimodal analysis, including clearly defining the concepts of modality and multimodality. The second point of our contribution concerns the multimodal recognition of text and speech. We compare text and speech recognition processes, then consider two cases of text-speech collaboration. The first does not involve text recognition: it aims to improve speech recognition by using a thematized language model based on the textual resources of the course. Despite the small size of this corpus, we show a significant improvement in recognition results. We also experiment with a text and speech cross-recognition method based on the complementarity of the phonetic and written representations of language. We show that this approach improves text recognition results and could be used to emphasize the specialized vocabulary of the course.
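The thematized language model can be sketched as an interpolation between a small course-specific model and a general background model. The unigram form and the weight lam below are illustrative simplifications of what is usually done with full n-gram models:

    from collections import Counter

    def unigram(tokens):
        counts, total = Counter(tokens), len(tokens)
        return lambda w: counts[w] / total

    general = unigram("the a of lecture signal the the noise".split())
    course = unigram("fourier transform spectrum fourier signal".split())

    def p(w, lam=0.3):
        # P(w) = lam * P_course(w) + (1 - lam) * P_general(w)
        return lam * course(w) + (1 - lam) * general(w)

    print(p("fourier"), p("the"))  # course terms gain probability mass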
Hekimoglu, M. Kadri. „Video-text processing by using Motorola 68020 CPU and its environment“. Thesis, Monterey, California. Naval Postgraduate School, 1991. http://hdl.handle.net/10945/26833.
Der volle Inhalt der QuelleDemirtas, Kezban. „Automatic Video Categorization And Summarization“. Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/3/12611113/index.pdf.
Der volle Inhalt der QuelleSaidane, Zohra. „Reconnaissance de texte dans les images et les vidéos en utilisant les réseaux de neurones à convolutions“. Phd thesis, Télécom ParisTech, 2008. http://pastel.archives-ouvertes.fr/pastel-00004685.
Der volle Inhalt der QuelleTarczyńska, Anna. „Methods of Text Information Extraction in Digital Videos“. Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2656.
The huge number of existing digital video files calls for indexing to make them available to users through easier searching. Such indexing can be provided by text information extraction. In this thesis we analyse and compare methods of text information extraction in digital videos. Furthermore, we evaluate them in a new context proposed by us, namely their usefulness in sports news indexing and information retrieval.
Bartlett, Melissa Ellis. „High School Students Reading Informational Texts| A Comparison of Written and Video Response Modalities“. Thesis, North Carolina State University, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3690206.
Der volle Inhalt der QuelleHay, Richard. „Views and perceptions of the use of text and video in English teaching“. Thesis, Högskolan i Gävle, Avdelningen för humaniora, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-25400.
Der volle Inhalt der QuelleCastro, Adriana Petito de Almeida Silva. „Desempenho termico de vidros utilizados na construção civil : estudo em celulas-teste“. [s.n.], 2006. http://repositorio.unicamp.br/jspui/handle/REPOSIP/257744.
Der volle Inhalt der QuelleTese (doutorado) - Universidade Estadual de Campinas, Faculdade de Engenharia Civil, Arquitetura e Urbanismo
Thermal performance of buildings depends on several factors, such as siting, orientation, materials and building components, which should be appropriately defined for different climate conditions. The building acts as a controller of the climatic variables through the building envelope (walls, floor, roof and openings) and the nearby elements; building design must provide indoor comfort and energy efficiency. Glazing easily allows the penetration of solar radiation into buildings, owing to its transparency to solar radiation, so glasses must be carefully considered in building design, bearing in mind their potential for internal heating. In this work, the behavior of transparent façades under real conditions was studied through measurements in six test cells with dimensions 2.00 x 2.50 m. Fourteen types of glass were selected: five float glasses, four reflective glasses produced by a pyrolytic process, three reflective glasses obtained by vacuum metal deposition, and two laminated glasses. Their spectral behavior was known from previous spectrophotometric studies. The glasses were installed in 1.00 x 1.20 m openings in two façades facing north and west, analysed separately. The colorless 4 mm float glass was taken as reference. Internal surface temperatures of the glasses, internal dry-bulb temperatures and outdoor temperatures were collected. The solar gain factor was calculated on the basis of absorptance values, obtained from spectrophotometric analysis, and temperature differences inside and outside the cells. Results show a high heat gain through float glasses, with the worst thermal behavior for the colorless one, followed by bronze, gray and green. Furthermore, the reflective glasses obtained by vacuum metal deposition present the best thermal performance for the purpose of heat gain attenuation and for designing buildings with the least energy consumption for cooling.
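For reference, the solar gain factor computed in this abstract is conventionally the transmitted fraction of incident solar radiation plus the inward-flowing part of the absorbed fraction; in the simplified single-glazing form common in glazing practice (generic notation, not necessarily the thesis's):

    g = \tau_e + N\,\alpha_e, \qquad N \approx \frac{h_i}{h_i + h_e}

where \tau_e is the solar transmittance, \alpha_e the solar absorptance, and h_i, h_e the inside and outside surface heat transfer coefficients.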
Schwarz, Katharina [Verfasser], und Hendrik P. A. [Akademischer Betreuer] Lensch. „Text–to–Video : Image Semantics and NLP / Katharina Schwarz ; Betreuer: Hendrik P. A. Lensch“. Tübingen : Universitätsbibliothek Tübingen, 2019. http://d-nb.info/1182985963/34.
Uggerud, Nils. „AnnotEasy: A gesture and speech-to-text based video annotation tool for note taking in pre-recorded lectures in higher education“. Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-105962.
Memmi, Paul Joseph. „Etude sémiolinguistique du sous-titrage pour une écriture concise assistée par ordinateur (ECAO) avec application à l'audiovisuel“. Paris 10, 2005. http://www.theses.fr/2005PA100069.
Intelligentiæ pauca – to intelligence, little (is enough). Through its elliptical form, the pleasure it arouses and the wit it calls for, this phrase praised by Stendhal points to what concise writing is. This thesis aims at designing a word processor, ÉCAO (French for automatically processed concise writing, APCW), which, in its audiovisual application, should also find uses for the Internet, subtitled translations and subtitling for the hearing-impaired. A semiolinguistic study of subtitling, an example of concise writing in a verbal and audiovisual environment, leads to a method for referencing and disambiguating the source information and to a set of phrastic concision operators. Some are programmable; others reveal the automaton's deficiencies when faced with constructions of meaning that are nevertheless of capital importance. There lies the essential purpose of this research: the study of the cognitive integration of complex communications, and of concision as a mode of representation.
Ulvbäck, Gustav, und Wingårdh Rickard Eriksson. „Förmedla information med animerad text : Blir textbaserad information på sociala medier mer intressant om det sker i rörlig bild med animerad text?“ Thesis, Södertörns högskola, Institutionen för naturvetenskap, miljö och teknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:sh:diva-34509.
Der volle Inhalt der QuelleBachman, Kathryn M. „Using Videos versus Traditional Written Texts in the Classroom to Enhance Student Learning“. Ohio Dominican University Honors Theses / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=oduhonors1449441013.
Der volle Inhalt der QuelleJaroňová, Eva. „Od ideálu k utopii (zítřek, co už byl)“. Master's thesis, Vysoké učení technické v Brně. Fakulta výtvarných umění, 2012. http://www.nusl.cz/ntk/nusl-232359.
Der volle Inhalt der QuelleWells, Emily Jean. „The effects of luminance contrast, raster modulation, and ambient illumination on text readability and subjective image quality“. Thesis, This resource online, 1994. http://scholar.lib.vt.edu/theses/available/etd-07102009-040235/.
Der volle Inhalt der QuelleStokes, Charlotte Ellenor. „Investigating the Efficacy of Video versus Text Instruction for the Recall of Food Safety Information“. Digital Archive @ GSU, 2009. http://digitalarchive.gsu.edu/nutrition_theses/28.
Der volle Inhalt der QuelleTran, Anh Xuan. „Identifying latent attributes from video scenes using knowledge acquired from large collections of text documents“. Thesis, The University of Arizona, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=3634275.
Peter Drucker, a well-known and influential writer and philosopher in the field of management theory and practice, once claimed that "the most important thing in communication is hearing what isn't said." It is not difficult to see that a similar concept also holds in the context of video scene understanding. In almost every non-trivial video scene, the most important elements, such as the motives and intentions of the actors, can never be seen or directly observed, yet the identification of these latent attributes is crucial to our full understanding of the scene. That is to say, latent attributes matter.
In this work, we explore the task of identifying latent attributes in video scenes, focusing on the mental states of participant actors. We propose a novel approach to the problem based on the use of large text collections as background knowledge and minimal information about the videos, such as activity and actor types, as query context. We formalize the task and a measure of merit that accounts for the semantic relatedness of mental state terms, as well as their distribution weights. We develop and test several largely unsupervised information extraction models that identify the mental state labels of human participants in video scenes given some contextual information about the scenes. We show that these models produce complementary information and their combination significantly outperforms the individual models, and improves performance over several baseline methods on two different datasets. We present an extensive analysis of our models and close with a discussion of our findings, along with a roadmap for future research.
Macindoe, Annie C. „Melancholy and the memorial: Representing loss, grief and affect in contemporary visual art“. Thesis, Queensland University of Technology, 2018. https://eprints.qut.edu.au/119695/1/Annie_Macindoe_Thesis.pdf.
Der volle Inhalt der QuelleWolf, Christian Jolion Jean-Michel. „Détection de textes dans des images issues d'un flux vidéo pour l'indexation sémantique“. Villeurbanne : Doc'INSA, 2005. http://docinsa.insa-lyon.fr/these/pont.php?id=wolf.
Der volle Inhalt der QuelleThèse rédigée en anglais. Introduction et conclusion générale en français. En 2ème partie, choix d'articles en français avec résumés, mots-clef et réf. bibliogr. Titre provenant de l'écran-titre. Bibliogr. p. 147-154. Publications de l'auteur p. 155-157.
Бикова, О. Д. „Відеовербальний текст німецькомовного вербального дискурсу“. Thesis, Сумський державний університет, 2013. http://essuir.sumdu.edu.ua/handle/123456789/30524.
Der volle Inhalt der QuelleRyrå, Landgren Isabella. „Samspel i det berättartekniska : text, bild och effekter i musikvideor“. Thesis, Högskolan Väst, Avd för medier och design, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:hv:diva-8965.
Der volle Inhalt der QuelleMusikvideor har under de senaste 50 åren varit en form av underhållning för vårt samhälle. Somliga formas för att spegla känslor medan andra visar upp artisten. Det finns de som baserar sig på låttexten för att skapa en kortare film eller gestalta låttextens innehåll. Med hjälp av tekniker som visuella effekter kan dessa drömlika och omöjliga världar och historier komma till liv. Det är videor med sådana effekter jag valt att analysera i denna uppsats med syftet att ta reda påhur stor roll de visuella effekterna spelar i berättandet. För att komma fram till detta har jag gjort en semiotisk studie fokuserad på analys och tolkningar av fem valda videor skapade under eller efter 2000-talet. CGI, slow-motion och metaforer är tekniker jag kollat på och det har visat sig att de alla bidrar till hur berättandet utspelas och uppfattas. Sambandet mellan bild och text i de valda videorna har pendlat mellan tolkning till bokstavligt översatt till varandra.
Bayar, Mujdat. „Event Boundary Detection Using Web-cating Texts And Audio-visual Features“. Master's thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613755/index.pdf.
Saracoglu, Ahmet. „Localization And Recognition Of Text In Digital Media“. Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/2/12609028/index.pdf.
Srsen Kenney, Kristen Laura. „CRITICAL VIDEO PROJECTS: UNDERSTANDING NINE STUDENTS’ EXPERIENCES WITH CRITICAL LITERACY AS THEY RE-IMAGINE CANONICAL TEXTS THROUGH FILMS“. Kent State University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=kent1572546051237628.
Ramírez Díaz, José Fernando. „Formación de imagen completa de una página con texto impreso mediante procesamiento de imágenes obtenidas de un video“. Bachelor's thesis, Pontificia Universidad Católica del Perú, 2020. http://hdl.handle.net/20.500.12404/17644.
Der volle Inhalt der QuelleTesis
Miana, Anna Christina. „Avaliação do desempenho térmico de brises transparentes: ensaio em células-teste“. Universidade de São Paulo, 2005. http://www.teses.usp.br/teses/disponiveis/18/18141/tde-06032006-120003/.
Der volle Inhalt der QuelleThis research intended to evaluate thermal performance of transparent solar protections using measurements in full scale test cells, located in the city of Campinas, São Paulo, Brazil. Surface (window and shading device) and internal air temperatures were compared for six test cells. One of them was unprotected, for reference purposes, another was obstructed with a metallic shading device and the other four had transparent glass horizontal and vertical shades installed. Four different types of glass with different optical properties were selected: float clear glass, mini-boreal printed glass, float blue glass and metallic silver reflective glass. The results of the thermal appraisal showed that silver reflective glass, float blue and printed glass shadings achieved similar performance, not very different to the ones obtained for the metallic shading device. Therefore the float clear glass shading did not attain a satisfactory result. The field measurements procedures were also evaluated and test cells characteristics problems were identified, in order to suggest changes for future research in this area. This research began to evaluate light performance of the same solar protections. For light performance evaluation were measured the daylight in the center of each test cell and outside. It was concluded that printed glass shadings presented very good results and silver reflective glass reduced the daylight inside the test cell
Nguyen, Chu Duc. „Localization and quality enhancement for automatic recognition of vehicle license plates in video sequences“. Thesis, Ecully, Ecole centrale de Lyon, 2011. http://www.theses.fr/2011ECDL0018.
Automatic reading of vehicle license plates is considered an approach to mass surveillance: through detection/localization and optical recognition, it identifies a vehicle in images or video sequences. Many applications, such as traffic monitoring, detection of stolen vehicles, tolling or the management of parking entrances and exits, use this method. Yet in spite of the important progress made since the appearance of the first prototypes in 1979, and recognition rates that are sometimes impressive thanks to advances in science and sensor technology, the constraints imposed on the operation of such systems limit their scope: the optimal use of plate localization and recognition techniques requires operational scenarios with controlled lighting conditions and limits on pose, velocity, or simply plate type. Automatic reading of vehicle license plates therefore remains an open research problem. The major contribution of this thesis is threefold. First, a new approach to robust license plate localization in images or image sequences is proposed. Then, improving the quality of the plates is treated with a localized adaptation of a super-resolution technique. Finally, a unified model of localization and super-resolution is proposed to reduce the time complexity of both approaches combined.
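As a crude stand-in for the super-resolution step described here (the thesis's method is more elaborate), registering several plate crops and averaging them already illustrates how multiple frames improve quality:

    import numpy as np

    def enhance(aligned_crops):
        """Average pre-aligned grayscale plate crops to reduce noise."""
        stack = np.stack([c.astype(np.float64) for c in aligned_crops])
        return stack.mean(axis=0)

    # Toy usage: three noisy observations of the same 8 x 24 plate region.
    rng = np.random.default_rng(42)
    plate = rng.random((8, 24))
    crops = [plate + 0.1 * rng.standard_normal(plate.shape) for _ in range(3)]
    print(np.abs(enhance(crops) - plate).mean())  # below single-frame error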
Escobar, Mayte. „The Body As Border: El Cuerpo Como Frontera“. CSUSB ScholarWorks, 2015. https://scholarworks.lib.csusb.edu/etd/247.
Der volle Inhalt der QuelleHansen, Simon. „TEXTILE - Augmenting Text in Virtual Space“. Thesis, Malmö högskola, Fakulteten för kultur och samhälle (KS), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-23172.