Academic literature on the topic "Vidéo texte"
Create a precise citation in APA, MLA, Chicago, Harvard, and other styles
Consult the thematic lists of articles, books, theses, conference proceedings, and other academic sources on the topic "Vidéo texte".
Next to each source in the list of references there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Vidéo texte"
Guéraud-Pinet, Guylaine. "Quand la vidéo devient silencieuse : analyse sémio-historique du sous-titrage dans les productions audiovisuelles des médias en ligne français (2014–2020)". SHS Web of Conferences 130 (2021): 03002. http://dx.doi.org/10.1051/shsconf/202113003002.
Brunner, Regula. "Autre langue – autre vidéo?" Protée 27, no. 1 (April 12, 2005): 105–10. http://dx.doi.org/10.7202/030550ar.
Baychelier, Guillaume. "Immersion sous contrainte et écologie des territoires hostiles dans la série Metro : enjeux ludiques et affectifs de la pratique vidéoludique en milieu post-apocalyptique". Articles avec comité de lecture, spécial (October 7, 2021): 49–72. http://dx.doi.org/10.7202/1082343ar.
Dausendschön-Gay, Ulrich. "Observation(s)". Cahiers du Centre de Linguistique et des Sciences du Langage, no. 23 (April 9, 2022): 19–26. http://dx.doi.org/10.26034/la.cdclsl.2007.1426.
Thély, Nicolas. "Usages et usagers de la vidéo (réflexions sur les arts et les médias : première partie)". Figures de l'Art. Revue d'études esthétiques 7, no. 1 (2003): 463–73. http://dx.doi.org/10.3406/fdart.2003.1295.
Matteson, Steven and Patrick Bideault. "En route vers Noto". La Lettre GUTenberg, no. 50 (June 14, 2023): 53–73. http://dx.doi.org/10.60028/lettre.vi50.126.
Faguy, Robert. "Pour un récepteur hautement résolu…". Protée 27, no. 1 (April 12, 2005): 117–24. http://dx.doi.org/10.7202/030552ar.
Rozik, Eli. "Intertextualité et déconstruction". Protée 27, no. 1 (April 12, 2005): 111–16. http://dx.doi.org/10.7202/030551ar.
Giroux, Amélie. "L’édition critique d’un texte fondateur : La Sagouine d’Antonine Maillet". Études, no. 20-21 (July 10, 2012): 149–66. http://dx.doi.org/10.7202/1010386ar.
Rokka, Joonas and Joel Hietanen. "Réflexion autour du positionnement de la vidéographie comme outil de théorisation". Recherche et Applications en Marketing (French Edition) 33, no. 3 (March 27, 2018): 128–46. http://dx.doi.org/10.1177/0767370118761749.
Theses on the topic "Vidéo texte"
Ayache, Stéphane. "Indexation de documents vidéos par concepts par fusion de caractéristiques audio, vidéo et texte". Grenoble INPG, 2007. http://www.theses.fr/2007INPG0071.
This work deals with information retrieval and aims at semantic indexing of multimedia documents. The state-of-the-art approach tackles this problem by bridging the semantic gap between low-level features, extracted from each modality, and high-level features (concepts), which are useful for humans. We propose an indexing model based on networks of operators through which data flow; these data, called numcepts, unify the information coming from the various modalities and extracted at several levels of abstraction. We present an instance of this model in which we describe a topology of the operators and the numcepts we have developed. We conducted experiments on TREC VIDEO corpora in order to evaluate various organizations of the networks and choices of operators, and we studied their effects on concept-detection performance. We show that a network has to be designed with respect to the concepts in order to optimize indexing performance.
Wehbe, Hassan. "Synchronisation automatique d'un contenu audiovisuel avec un texte qui le décrit". Thesis, Toulouse 3, 2016. http://www.theses.fr/2016TOU30104/document.
We address the problem of automatically synchronizing an audiovisual content with a procedural text that describes it. The strategy consists in extracting pieces of information about the structure of both contents and in matching them depending on their types. We propose two video analysis tools that respectively extract: (1) the limits of events of interest, using an approach inspired by dictionary quantization; and (2) the segments that enclose a repeated action, based on the YIN frequency analysis method. We then propose a synchronization system that merges the results coming from these tools in order to establish links between textual instructions and the corresponding video segments. To do so, a "Confidence Matrix" is built and processed recursively in order to identify these links with respect to their reliability.
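The "Confidence Matrix" described in this abstract can be illustrated with a minimal, hypothetical sketch (the function name, the greedy selection rule and the confidence threshold are assumptions made for illustration only; the thesis details its own recursive procedure, which may differ):

```python
import numpy as np

def extract_links(confidence: np.ndarray, min_conf: float = 0.5):
    """Pick reliable (instruction, segment) links from a confidence matrix.

    Rows index textual instructions and columns index video segments. This
    sketch reads the abstract loosely: the most confident cell is linked
    first, its row and column are then discarded, and the process repeats
    until no remaining cell reaches the `min_conf` threshold.
    """
    conf = confidence.astype(float)  # work on a copy of the matrix
    links = []
    while conf.size and np.nanmax(conf) >= min_conf:
        i, j = np.unravel_index(np.nanargmax(conf), conf.shape)
        links.append((int(i), int(j), float(conf[i, j])))
        conf[i, :] = -np.inf  # each instruction is linked at most once
        conf[:, j] = -np.inf  # each video segment is linked at most once
    return sorted(links)

# Example: three textual instructions against four detected video segments.
matrix = np.array([[0.9, 0.2, 0.1, 0.0],
                   [0.1, 0.7, 0.3, 0.2],
                   [0.0, 0.1, 0.2, 0.8]])
print(extract_links(matrix))  # [(0, 0, 0.9), (1, 1, 0.7), (2, 3, 0.8)]
```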
Yousfi, Sonia. "Embedded Arabic text detection and recognition in videos". Thesis, Lyon, 2016. http://www.theses.fr/2016LYSEI069/document.
This thesis focuses on the detection and recognition of embedded Arabic text in videos. Different approaches are proposed that are robust to the variability of Arabic text (fonts, scales, sizes, etc.) as well as to environmental and acquisition challenges (contrast, degradation, complex backgrounds, etc.). We introduce several machine-learning-based solutions for robust text detection that do not rely on any pre-processing. The first method is based on Convolutional Neural Networks (ConvNets), while the others use a specific boosting cascade to select relevant hand-crafted text features. For text recognition, our methodology is segmentation-free. Text images are transformed into sequences of features using a multi-scale scanning scheme. Standing out from the dominant methodology of hand-crafted features, we propose to learn relevant text representations from data using different deep learning methods, namely deep auto-encoders, ConvNets and unsupervised learning models. Each one leads to a specific OCR (Optical Character Recognition) solution. Sequence labeling is performed without any prior segmentation using a recurrent connectionist learning model. The proposed solutions are compared to other methods based on non-connectionist, hand-crafted features. In addition, we propose to enhance the recognition results using recurrent-neural-network-based language models that are able to capture long-range linguistic dependencies. Both OCR and language-model probabilities are incorporated in a joint decoding scheme in which additional hyper-parameters are introduced to boost recognition results and reduce the response time. Given the lack of public multimedia Arabic datasets, we propose novel annotated datasets derived from Arabic videos. The OCR dataset, called ALIF, is publicly available for research purposes; to the best of our knowledge, it is the first public dataset dedicated to Arabic video OCR. The proposed solutions were extensively evaluated. The obtained results highlight the genericity and efficiency of our approaches, reaching a word recognition rate of 88.63% on the ALIF dataset and outperforming a well-known commercial OCR engine by more than 36%.
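The joint decoding scheme mentioned at the end of this abstract, which combines OCR and language-model probabilities under extra hyper-parameters, follows the familiar log-linear rescoring pattern. A minimal sketch, in which the hypothesis structure, the weight values and the function name are invented purely for illustration (the thesis decoder operates over full label sequences, not a toy n-best list):

```python
def rescore(hypotheses, lm_weight=0.7, word_penalty=-0.5):
    """Pick the best OCR hypothesis after language-model rescoring.

    Each hypothesis carries a token list plus log-probabilities from the
    OCR model and from the recurrent language model. The interpolation
    weight and the word-insertion penalty stand in for the additional
    hyper-parameters mentioned in the abstract; in practice they would be
    tuned on a development set rather than fixed by hand.
    """
    def joint_score(h):
        return (h["ocr_logprob"]
                + lm_weight * h["lm_logprob"]
                + word_penalty * len(h["tokens"]))
    return max(hypotheses, key=joint_score)

# Toy example: two competing transcriptions of the same detected text box.
nbest = [
    {"tokens": ["مرحبا", "بكم"], "ocr_logprob": -2.1, "lm_logprob": -1.0},
    {"tokens": ["مرحبا", "بكم", "في"], "ocr_logprob": -2.0, "lm_logprob": -3.5},
]
print(" ".join(rescore(nbest)["tokens"]))  # the shorter, more fluent string wins
```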
Bull, Hannah. "Learning sign language from subtitles". Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG013.
Sign languages are an essential means of communication for deaf communities. They are visuo-gestural languages that use the modalities of hand gestures, facial expressions, gaze and body movements, and they possess rich grammatical structures and lexicons that differ considerably from those of spoken languages. The uniqueness of the transmission medium, structure and grammar of sign languages requires distinct methodologies. The performance of automatic translation systems between high-resource written or spoken languages is currently sufficient for many daily use cases, such as translating videos, websites, emails and documents. In contrast, automatic translation systems for sign languages do not exist outside of very specific use cases with limited vocabulary. Automatic sign language translation is challenging for two main reasons. Firstly, sign languages are low-resource languages with little available training data. Secondly, sign languages are visual-spatial languages with no written form, naturally represented as video rather than as audio or text. To tackle the first challenge, we contribute large datasets for training and evaluating automatic sign language translation systems, containing both interpreted and original sign language video content as well as written text subtitles. Whilst interpreted data allows us to collect large numbers of hours of video, original sign language video is more representative of sign language usage within deaf communities. Written subtitles can be used as weak supervision for various sign language understanding tasks. To address the second challenge, we develop methods to better understand visual cues from sign language video. Whilst sentence segmentation is mostly trivial for written languages, segmenting sign language video into sentence-like units relies on detecting subtle semantic and prosodic cues. We use prosodic cues to learn to automatically segment sign language video into sentence-like units, determined by subtitle boundaries. Expanding upon this segmentation method, we then learn to align text subtitles to sign language video segments using both semantic and prosodic cues, in order to create sentence-level pairs of sign language video and text. This task is particularly important for interpreted TV data, where subtitles are generally aligned to the audio and not to the signing. Using these automatically aligned video-text pairs, we develop and improve multiple methods to densely annotate lexical signs by querying words in the subtitle text and searching for visual cues of the corresponding signs in the sign language video.
Couture, Matte Robin. "Digital games and negotiated interaction : integrating Club Penguin Island into two ESL grade 6 classes". Master's thesis, Université Laval, 2019. http://hdl.handle.net/20.500.11794/35458.
The objective of the present study was to explore negotiated interaction involving young children (ages 11-12) who carried out communicative tasks supported by Club Penguin Island, a massively multiplayer online role-playing game (MMORPG). Unlike previous studies involving MMORPGs, the present study assessed the use of Club Penguin Island in the context of face-to-face interaction. More specifically, the research questions were threefold: assess the presence of focus-on-form episodes (FFEs) during tasks carried out with Club Penguin Island and identify their characteristics; evaluate the impact of task type on the presence of FFEs; and survey the attitudes of participants. The research project was carried out with 20 Grade 6 intensive English as a second language (ESL) students in the province of Quebec. The participants carried out one information-gap task and two reasoning-gap tasks, one of which included a writing component. The tasks were carried out in dyads, and recordings were transcribed and analyzed to identify the presence of FFEs and their characteristics. A statistical analysis was used to assess the impact of task type on the presence of FFEs, and a questionnaire was administered to assess the attitudes of participants after the completion of all tasks. Findings revealed that carrying out tasks with the MMORPG triggered FFEs, that participants were able to successfully negotiate interaction without the help of the instructor, and that most FFEs were focused on the meaning of vocabulary found in the tasks and the game. The statistical analysis showed an influence of task type, since more FFEs were produced during the information-gap task than during one of the reasoning-gap tasks. The attitude questionnaire revealed positive attitudes, in line with previous research on digital games for language learning. Pedagogical implications point to the potential of MMORPGs for language learning and add to the scarce literature on negotiated interaction with young learners.
Sidevåg, Emmilie. "Användarmanual text vs video". Thesis, Linnéuniversitetet, Institutionen för datavetenskap, fysik och matematik, DFM, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-17617.
Salway, Andrew. "Video annotation : the role of specialist text". Thesis, University of Surrey, 1999. http://epubs.surrey.ac.uk/843350/.
Smith, Gregory. "VIDEO SCENE DETECTION USING CLOSED CAPTION TEXT". VCU Scholars Compass, 2009. http://scholarscompass.vcu.edu/etd/1932.
Zhang, Jing. "Extraction of Text Objects in Image and Video Documents". Scholar Commons, 2012. http://scholarcommons.usf.edu/etd/4266.
Zipstein, Marc. "Les Méthodes de compression de textes : algorithmes et performances". Paris 7, 1990. http://www.theses.fr/1990PA077107.
Books on the topic "Vidéo texte"
France. Les textes juridiques: Cinéma, télévision, vidéo. Paris: CNC, Centre national de la cinématographie, 2006.
Geukens, Alfons. Homeopathic practice: Texts for the seminar : video presentation of materia medica. Hechtel-Eksel, Belgium: VZW Centrum voor Homeopathie, 1992.
Japanese cinema: Texts and contexts. London: Routledge, 2007.
Romiszowski, A. J. Developing auto-instructional materials: From programmed texts to CAL and interactive video. London: Kogan Page, 1986.
Developing auto-instructional materials: From programmed texts to CAL and interactive video. London: Kogan Page, 1990.
Developing auto-instructional materials: From programmed texts to CAL and interactive video. London: Kogan Page, 1986.
Company-Ramón, Juan Miguel. El trazo de la letra en la imagen: Texto literario y texto fílmico. Madrid: Cátedra, 1987.
Kacunko, Slavko, Yvonne Spielmann and Stephen Reader, eds. Take it or leave it: Marcel Odenbach anthology of texts and videos. Berlin: Logos, 2013.
Mugglestone, Patricia. English in sight: Video materials for students of English. Hemel Hempstead: Prentice-Hall, 1986.
Nauman, Bruce. Bruce Nauman: Image/texte, 1966-1996. Paris: Centre Georges Pompidou, 1997.
Book chapters on the topic "Vidéo texte"
Weik, Martin H. "video text". In Computer Science and Communications Dictionary, 1892. Boston, MA: Springer US, 2000. http://dx.doi.org/10.1007/1-4020-0613-6_20796.
Lu, Tong, Shivakumara Palaiahnakote, Chew Lim Tan and Wenyin Liu. "Video Preprocessing". In Video Text Detection, 19–47. London: Springer London, 2014. http://dx.doi.org/10.1007/978-1-4471-6515-6_2.
Shivakumara, Palaiahnakote and Umapada Pal. "Video Text Recognition". In Cognitive Intelligence and Robotics, 233–71. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-7069-5_9.
Shivakumara, Palaiahnakote and Umapada Pal. "Video Text Detection". In Cognitive Intelligence and Robotics, 61–94. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-7069-5_4.
Lu, Tong, Shivakumara Palaiahnakote, Chew Lim Tan and Wenyin Liu. "Video Text Detection Systems". In Video Text Detection, 169–93. London: Springer London, 2014. http://dx.doi.org/10.1007/978-1-4471-6515-6_7.
Lu, Tong, Shivakumara Palaiahnakote, Chew Lim Tan and Wenyin Liu. "Introduction to Video Text Detection". In Video Text Detection, 1–18. London: Springer London, 2014. http://dx.doi.org/10.1007/978-1-4471-6515-6_1.
Lu, Tong, Shivakumara Palaiahnakote, Chew Lim Tan and Wenyin Liu. "Performance Evaluation". In Video Text Detection, 247–54. London: Springer London, 2014. http://dx.doi.org/10.1007/978-1-4471-6515-6_10.
Lu, Tong, Shivakumara Palaiahnakote, Chew Lim Tan and Wenyin Liu. "Video Caption Detection". In Video Text Detection, 49–80. London: Springer London, 2014. http://dx.doi.org/10.1007/978-1-4471-6515-6_3.
Lu, Tong, Shivakumara Palaiahnakote, Chew Lim Tan and Wenyin Liu. "Text Detection from Video Scenes". In Video Text Detection, 81–126. London: Springer London, 2014. http://dx.doi.org/10.1007/978-1-4471-6515-6_4.
Lu, Tong, Shivakumara Palaiahnakote, Chew Lim Tan and Wenyin Liu. "Post-processing of Video Text Detection". In Video Text Detection, 127–44. London: Springer London, 2014. http://dx.doi.org/10.1007/978-1-4471-6515-6_5.
Conference proceedings on the topic "Vidéo texte"
Zu, Xinyan, Haiyang Yu, Bin Li and Xiangyang Xue. "Towards Accurate Video Text Spotting with Text-wise Semantic Reasoning". In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/206.
Chen, Jiafu, Boyan Ji, Zhanjie Zhang, Tianyi Chu, Zhiwen Zuo, Lei Zhao, Wei Xing and Dongming Lu. "TeSTNeRF: Text-Driven 3D Style Transfer via Cross-Modal Learning". In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/642.
Oliveira, Leandro Massetti Ribeiro, Antonio José G. Busson, Carlos de Salles S. Neto, Gabriel N. P. dos Santos and Sérgio Colcher. "Automatic Generation of Learning Objects Using Text Summarizer Based on Deep Learning Models". In Simpósio Brasileiro de Informática na Educação. Sociedade Brasileira de Computação - SBC, 2021. http://dx.doi.org/10.5753/sbie.2021.217360.
Cardoso, Walcir and Danial Mehdipour-Kolour. "Writing with automatic speech recognition: Examining user’s behaviours and text quality (lexical diversity)". In EuroCALL 2023: CALL for all Languages. Editorial Universitat Politécnica de Valéncia, 2023. http://dx.doi.org/10.4995/eurocall2023.2023.16997.
Ryabinin, Konstantin Valentinovich, Svetlana Vladimirovna Alexeeva and Tatiana Evgenievna Petrova. "Automatic Areas of Interest Detector for Mobile Eye Trackers". In 32nd International Conference on Computer Graphics and Vision. Keldysh Institute of Applied Mathematics, 2022. http://dx.doi.org/10.20948/graphicon-2022-228-239.
Balaji, Yogesh, Martin Renqiang Min, Bing Bai, Rama Chellappa and Hans Peter Graf. "Conditional GAN with Discriminative Filter Generation for Text-to-Video Synthesis". In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/276.
Kim, Jonghee, Youngwan Lee and Jinyoung Moon. "T2V2T: Text-to-Video-to-Text Fusion for Text-to-Video Retrieval". In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2023. http://dx.doi.org/10.1109/cvprw59228.2023.00594.
Pelli, Denis G. "Reading and Contrast Adaptation". In Applied Vision. Washington, D.C.: Optica Publishing Group, 1989. http://dx.doi.org/10.1364/av.1989.thb4.
Denoue, Laurent, Scott Carter and Matthew Cooper. "Video text retouch". In the adjunct publication of the 27th annual ACM symposium. New York, New York, USA: ACM Press, 2014. http://dx.doi.org/10.1145/2658779.2659102.
Alekseev, Alexander Petrovich and Tatyana Vladimirovna Alekseeva. "Video search by image - technology "Video Color"". In 24th Scientific Conference “Scientific Services & Internet – 2022”. Keldysh Institute of Applied Mathematics, 2022. http://dx.doi.org/10.20948/abrau-2022-2.
Reports on the topic "Vidéo texte"
Li, Huiping, David Doermann and Omid Kia. Automatic Text Detection and Tracking in Digital Video. Fort Belvoir, VA: Defense Technical Information Center, December 1998. http://dx.doi.org/10.21236/ada458675.
Sharova, Iryna. WAYS OF PROMOTING UKRANIAN PUBLISHING HOUSES ON FACEBOOK DURING QUARANTINE. Ivan Franko National University of Lviv, February 2021. http://dx.doi.org/10.30970/vjo.2021.49.11076.
Baluk, Nadia, Natalia Basij, Larysa Buk and Olha Vovchanska. VR/AR-TECHNOLOGIES – NEW CONTENT OF THE NEW MEDIA. Ivan Franko National University of Lviv, February 2021. http://dx.doi.org/10.30970/vjo.2021.49.11074.
Kalenych, Volodymyr. IMMERSIVE TECHNOLOGIES OF JOURNALISM IN THE UKRAINIAN AND GLOBAL MEDIA SPACE. Ivan Franko National University of Lviv, March 2024. http://dx.doi.org/10.30970/vjo.2024.54-55.12161.
Kuzmin, Vyacheslav, Alebai Sabitov, Andrei Reutov, Vladimir Amosov, Lidiia Neupokeva and Igor Chernikov. Electronic training manual "Providing first aid to the population". SIB-Expertise, January 2024. http://dx.doi.org/10.12731/er0774.29012024.
Felix, Juri and Laura Webb. Use of artificial intelligence in education delivery and assessment. Parliamentary Office of Science and Technology, January 2024. http://dx.doi.org/10.58248/pn712.
Kaisler, Raphaela and Thomas Palfinger. Patient and Public Involvement and Engagement (PPIE): Funding, facilitating and evaluating participatory research approaches in Austria. Fteval - Austrian Platform for Research and Technology Policy Evaluation, April 2022. http://dx.doi.org/10.22163/fteval.2022.551.
Li, Baisong and Bo Xu. PR-469-19604-Z01 Auto Diagnostic Method Development for Ultrasonic Flow Meter. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), February 2022. http://dx.doi.org/10.55274/r0012204.
Prudkov, Mikhail, Vasily Ermolaev, Elena Shurygina and Eduard Mikaelyan. Electronic educational resource "Hospital Surgery for 5th year students of the Faculty of Pediatrics". SIB-Expertise, January 2024. http://dx.doi.org/10.12731/er0780.29012024.
Initiation au logiciel Nvivo. Instats Inc., 2023. http://dx.doi.org/10.61700/fzb3icwf9pox5469.