Academic literature on the topic 'Vidéo texte'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Vidéo texte.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Vidéo texte"
Guéraud-Pinet, Guylaine. "Quand la vidéo devient silencieuse : analyse sémio-historique du sous-titrage dans les productions audiovisuelles des médias en ligne français (2014–2020)." SHS Web of Conferences 130 (2021): 03002. http://dx.doi.org/10.1051/shsconf/202113003002.
Brunner, Regula. "Autre langue – autre vidéo?" Protée 27, no. 1 (April 12, 2005): 105–10. http://dx.doi.org/10.7202/030550ar.
Baychelier, Guillaume. "Immersion sous contrainte et écologie des territoires hostiles dans la série Metro : enjeux ludiques et affectifs de la pratique vidéoludique en milieu post-apocalyptique." Articles avec comité de lecture, spécial (October 7, 2021): 49–72. http://dx.doi.org/10.7202/1082343ar.
Dausendschön-Gay, Ulrich. "Observation(s)." Cahiers du Centre de Linguistique et des Sciences du Langage, no. 23 (April 9, 2022): 19–26. http://dx.doi.org/10.26034/la.cdclsl.2007.1426.
Thély, Nicolas. "Usages et usagers de la vidéo (réflexions sur les arts et les médias : première partie)." Figures de l'Art. Revue d'études esthétiques 7, no. 1 (2003): 463–73. http://dx.doi.org/10.3406/fdart.2003.1295.
Matteson, Steven, and Patrick Bideault. "En route vers Noto." La Lettre GUTenberg, no. 50 (June 14, 2023): 53–73. http://dx.doi.org/10.60028/lettre.vi50.126.
Faguy, Robert. "Pour un récepteur hautement résolu…" Protée 27, no. 1 (April 12, 2005): 117–24. http://dx.doi.org/10.7202/030552ar.
Rozik, Eli. "Intertextualité et déconstruction." Protée 27, no. 1 (April 12, 2005): 111–16. http://dx.doi.org/10.7202/030551ar.
Giroux, Amélie. "L’édition critique d’un texte fondateur : La Sagouine d’Antonine Maillet." Études, no. 20-21 (July 10, 2012): 149–66. http://dx.doi.org/10.7202/1010386ar.
Rokka, Joonas, and Joel Hietanen. "Réflexion autour du positionnement de la vidéographie comme outil de théorisation." Recherche et Applications en Marketing (French Edition) 33, no. 3 (March 27, 2018): 128–46. http://dx.doi.org/10.1177/0767370118761749.
Full textDissertations / Theses on the topic "Vidéo texte"
Ayache, Stéphane. "Indexation de documents vidéos par concepts par fusion de caractéristiques audio, vidéo et texte." Grenoble INPG, 2007. http://www.theses.fr/2007INPG0071.
This work deals with information retrieval and aims at the semantic indexing of multimedia documents. State-of-the-art approaches tackle this problem by bridging the semantic gap between low-level features, extracted from each modality, and high-level features (concepts) that are meaningful to humans. We propose an indexing model based on networks of operators through which data flow; entities called numcepts unify information from the various modalities, extracted at several levels of abstraction. We present an instance of this model in which we describe a topology of operators and the numcepts we have developed. We have conducted experiments on TREC VIDEO corpora to evaluate various organizations of the networks and choices of operators, and studied their effects on concept-detection performance. We show that a network has to be designed with respect to the concepts in order to optimize indexing performance.
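The fusion strategy summarized in this abstract can be pictured with a small, hedged sketch: per-modality classifiers produce concept scores that a higher-level operator fuses into one indexing score. This is only an illustrative reconstruction in Python; the feature dimensions, the logistic classifiers and the weighted late-fusion step are assumptions, not the thesis's actual numcept operators.

```python
# Illustrative sketch only: a toy "network of operators" fusing per-modality
# concept scores (audio, video, text) for one video shot. All values are invented.
import numpy as np

def modality_classifier(features, weights):
    """Low-level operator: map one modality's features to a concept score in [0, 1]."""
    return float(1.0 / (1.0 + np.exp(-features @ weights)))

def fuse_scores(scores, fusion_weights):
    """High-level operator: weighted late fusion of per-modality concept scores."""
    total = sum(fusion_weights[m] for m in scores)
    return sum(fusion_weights[m] * s for m, s in scores.items()) / total

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    shot = {                          # hypothetical descriptors for one video shot
        "audio": rng.normal(size=16),
        "video": rng.normal(size=32),
        "text": rng.normal(size=8),   # e.g. an ASR-transcript embedding
    }
    weights = {m: rng.normal(size=v.shape) for m, v in shot.items()}
    scores = {m: modality_classifier(v, weights[m]) for m, v in shot.items()}
    print("fused concept score:", fuse_scores(scores, {"audio": 0.2, "video": 0.5, "text": 0.3}))
```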
Wehbe, Hassan. "Synchronisation automatique d'un contenu audiovisuel avec un texte qui le décrit." Thesis, Toulouse 3, 2016. http://www.theses.fr/2016TOU30104/document.
We address the problem of automatically synchronizing audiovisual content with a procedural text that describes it. The strategy consists in extracting structural information from both contents and matching the extracted elements according to their types. We propose two video analysis tools that respectively extract: (1) the boundaries of events of interest, using an approach inspired by dictionary quantization; and (2) segments that enclose a repeated action, based on the YIN frequency analysis method. We then propose a synchronization system that merges the results of these tools to establish links between textual instructions and the corresponding video segments. To do so, a "Confidence Matrix" is built and recursively processed in order to identify these links according to their reliability.
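The "Confidence Matrix" idea lends itself to a brief sketch: score every (instruction, segment) pair, then repeatedly commit the most reliable link and discard its row and column. The scores below are random placeholders; the thesis derives its confidences from the video analysis tools, so this is an assumed, simplified reading of the recursive processing.

```python
# Illustrative sketch only: greedy, confidence-ordered matching of textual
# instructions to video segments. Scores are placeholders, not real confidences.
import numpy as np

def align(confidence):
    """Pick (instruction, segment) links by decreasing confidence, one per row and column."""
    conf = confidence.astype(float).copy()
    links = []
    while np.isfinite(conf).any():
        i, j = np.unravel_index(np.argmax(conf), conf.shape)
        links.append((int(i), int(j)))
        conf[i, :] = -np.inf   # each instruction is linked at most once
        conf[:, j] = -np.inf   # each video segment is linked at most once
    return sorted(links)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    confidence = rng.random((4, 6))   # 4 instructions x 6 candidate video segments
    print(align(confidence))
```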
Yousfi, Sonia. "Embedded Arabic text detection and recognition in videos." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSEI069/document.
This thesis focuses on Arabic embedded text detection and recognition in videos. Different approaches robust to Arabic text variability (fonts, scales, sizes, etc.) as well as to environmental and acquisition conditions (contrast, degradation, complex background, etc.) are proposed. We introduce different machine learning-based solutions for robust text detection that do not rely on any pre-processing. The first method is based on Convolutional Neural Networks (ConvNets), while the others use a specific boosting cascade to select relevant hand-crafted text features. For text recognition, our methodology is segmentation-free: text images are transformed into sequences of features using a multi-scale scanning scheme. Departing from the dominant methodology of hand-crafted features, we propose to learn relevant text representations from data using different deep learning methods, namely deep auto-encoders, ConvNets and unsupervised learning models. Each one leads to a specific OCR (Optical Character Recognition) solution. Sequence labeling is performed without any prior segmentation using a recurrent connectionist learning model. The proposed solutions are compared to other methods based on non-connectionist, hand-crafted features. In addition, we propose to enhance the recognition results using recurrent neural network-based language models that are able to capture long-range linguistic dependencies. Both the OCR and the language model probabilities are incorporated in a joint decoding scheme in which additional hyper-parameters are introduced to boost recognition results and reduce the response time. Given the lack of public multimedia Arabic datasets, we propose novel annotated datasets built from Arabic videos. The OCR dataset, called ALIF, is publicly available for research purposes; to the best of our knowledge, it is the first public dataset dedicated to Arabic video OCR. Our proposed solutions were extensively evaluated. The obtained results highlight the genericity and efficiency of our approaches, reaching a word recognition rate of 88.63% on the ALIF dataset and outperforming a well-known commercial OCR engine by more than 36%.
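The joint decoding step described here, where OCR and language-model probabilities are combined under extra hyper-parameters, can be illustrated by a minimal rescoring sketch. The hypotheses, probabilities and the weights alpha and beta below are invented for illustration and do not reproduce the thesis's actual decoder.

```python
# Illustrative sketch only: rescoring OCR hypotheses with a language model.
# All probabilities and weights are made-up values.
import math

def joint_score(ocr_logprob, lm_logprob, length, alpha=0.7, beta=0.1):
    """Combine OCR and LM log-probabilities; beta compensates for hypothesis length."""
    return ocr_logprob + alpha * lm_logprob + beta * length

# (text, OCR log-prob, LM log-prob) for two competing hypotheses -- hypothetical values
hypotheses = [
    ("مرحبا بكم", math.log(0.40), math.log(0.30)),
    ("مرحبا بكن", math.log(0.45), math.log(0.05)),
]
best = max(hypotheses, key=lambda h: joint_score(h[1], h[2], len(h[0])))
print("selected hypothesis:", best[0])  # the language model overrides the raw OCR preference
```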
Bull, Hannah. "Learning sign language from subtitles." Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG013.
Sign languages are an essential means of communication for deaf communities. They are visuo-gestural languages that use the modalities of hand gestures, facial expressions, gaze and body movements, and they possess rich grammatical structures and lexicons that differ considerably from those of spoken languages. The uniqueness of the transmission medium, structure and grammar of sign languages requires distinct methodologies. The performance of automatic translation systems between high-resource written or spoken languages is currently sufficient for many daily use cases, such as translating videos, websites, emails and documents. By contrast, automatic translation systems for sign languages do not exist outside of very specific use cases with limited vocabulary. Automatic sign language translation is challenging for two main reasons. Firstly, sign languages are low-resource languages with little available training data. Secondly, sign languages are visual-spatial languages with no written form, naturally represented as video rather than audio or text. To tackle the first challenge, we contribute large datasets for training and evaluating automatic sign language translation systems, with both interpreted and original sign language video content as well as written text subtitles. Whilst interpreted data allows us to collect large numbers of hours of video, original sign language video is more representative of sign language usage within deaf communities. Written subtitles can be used as weak supervision for various sign language understanding tasks. To address the second challenge, we develop methods to better understand visual cues from sign language video. Whilst sentence segmentation is mostly trivial for written languages, segmenting sign language video into sentence-like units relies on detecting subtle semantic and prosodic cues. We use prosodic cues to learn to automatically segment sign language video into sentence-like units, determined by subtitle boundaries. Expanding upon this segmentation method, we then learn to align text subtitles to sign language video segments using both semantic and prosodic cues, in order to create sentence-level pairs of sign language video and text. This task is particularly important for interpreted TV data, where subtitles are generally aligned to the audio and not to the signing. Using these automatically aligned video-text pairs, we develop and improve multiple methods to densely annotate lexical signs by querying words in the subtitle text and searching for visual cues of the corresponding signs in the sign language video.
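The subtitle-alignment task described in this abstract can be sketched, under simplifying assumptions, as re-anchoring audio-aligned subtitle intervals onto predicted sentence-like signing segments by maximum temporal overlap. The thesis itself learns this alignment from semantic and prosodic cues; the intervals below are invented, so this is a deliberately naive stand-in.

```python
# Illustrative sketch only: match audio-aligned subtitles to signing segments by overlap.
# Interval values are invented; a learned model would replace the overlap heuristic.
def overlap(a, b):
    """Length of the temporal intersection of two (start, end) intervals, in seconds."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def align_subtitles(subtitles, segments):
    """For each subtitle interval, return the index of the best-overlapping signing segment."""
    return [max(range(len(segments)), key=lambda k: overlap(sub, segments[k]))
            for sub in subtitles]

if __name__ == "__main__":
    subs = [(0.0, 2.5), (2.5, 6.0)]                  # subtitle times aligned to the audio
    signing = [(0.8, 3.4), (3.4, 7.1), (7.1, 9.0)]   # predicted sentence-like signing units
    print(align_subtitles(subs, signing))            # -> [0, 1]
```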
Couture, Matte Robin. "Digital games and negotiated interaction : integrating Club Penguin Island into two ESL grade 6 classes." Master's thesis, Université Laval, 2019. http://hdl.handle.net/20.500.11794/35458.
The objective of the present study was to explore negotiated interaction involving young children (age 11-12) who carried out communicative tasks supported by Club Penguin Island, a massively multiplayer online role-playing game (MMORPG). Unlike previous studies involving MMORPGs, the present study assessed the use of Club Penguin Island in the context of face-to-face interaction. More specifically, the research questions were three-fold: assess the presence of focus-on-form episodes (FFEs) during tasks carried out with Club Penguin Island and identify their characteristics; evaluate the impact of task type on the presence of FFEs; and survey the attitudes of participants. The research project was carried out with 20 Grade 6 intensive English as a second language (ESL) students in the province of Quebec. The participants carried out one information-gap task and two reasoning-gap tasks, including one with a writing component. The tasks were carried out in dyads, and recordings were transcribed and analyzed to identify the presence of FFEs and their characteristics. A statistical analysis was used to assess the impact of task type on the presence of FFEs, and a questionnaire was administered to assess the attitudes of participants following the completion of all tasks. Findings revealed that carrying out tasks with the MMORPG triggered FFEs, that participants were able to successfully negotiate interaction without the help of the instructor, and that most FFEs focused on the meaning of vocabulary found in the tasks and the game. The statistical analysis showed an influence of task type, since more FFEs were produced during the information-gap task than during one of the reasoning-gap tasks. The attitude questionnaire revealed positive attitudes, in line with previous research on digital games for language learning. Pedagogical implications point to the impact of MMORPGs for language learning and add to the scarce literature on negotiated interaction with young learners.
Sidevåg, Emmilie. "Användarmanual text vs video." Thesis, Linnéuniversitetet, Institutionen för datavetenskap, fysik och matematik, DFM, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-17617.
Salway, Andrew. "Video annotation : the role of specialist text." Thesis, University of Surrey, 1999. http://epubs.surrey.ac.uk/843350/.
Smith, Gregory. "VIDEO SCENE DETECTION USING CLOSED CAPTION TEXT." VCU Scholars Compass, 2009. http://scholarscompass.vcu.edu/etd/1932.
Zhang, Jing. "Extraction of Text Objects in Image and Video Documents." Scholar Commons, 2012. http://scholarcommons.usf.edu/etd/4266.
Zipstein, Marc. "Les Méthodes de compression de textes : algorithmes et performances." Paris 7, 1990. http://www.theses.fr/1990PA077107.
Full textBooks on the topic "Vidéo texte"
France. Les textes juridiques: Cinéma, télévision, vidéo. Paris: CNC, Centre national de la cinématographie, 2006.
Geukens, Alfons. Homeopathic practice: Texts for the seminar : video presentation of materia medica. Hechtel-Eksel, Belgium: VZW Centrum voor Homeopathie, 1992.
Japanese cinema: Texts and contexts. London: Routledge, 2007.
Romiszowski, A. J. Developing auto-instructional materials: From programmed texts to CAL and interactive video. London: Kogan Page, 1986.
Developing auto-instructional materials: From programmed texts to CAL and interactive video. London: Kogan Page, 1990.
Developing auto-instructional materials: From programmed texts to CAL and interactive video. London: Kogan Page, 1986.
Company-Ramón, Juan Miguel. El trazo de la letra en la imagen: Texto literario y texto fílmico. Madrid: Cátedra, 1987.
Slavko, Kacunko, Spielmann Yvonne, and Reader Stephen, eds. Take it or leave it: Marcel Odenbach anthology of texts and videos. Berlin: Logos, 2013.
Mugglestone, Patricia. English in sight: Video materials for students of English. Hemel Hempstead: Prentice-Hall, 1986.
Nauman, Bruce. Bruce Nauman: Image/texte, 1966-1996. Paris: Centre Georges Pompidou, 1997.
Find full textBook chapters on the topic "Vidéo texte"
Weik, Martin H. "video text." In Computer Science and Communications Dictionary, 1892. Boston, MA: Springer US, 2000. http://dx.doi.org/10.1007/1-4020-0613-6_20796.
Lu, Tong, Shivakumara Palaiahnakote, Chew Lim Tan, and Wenyin Liu. "Video Preprocessing." In Video Text Detection, 19–47. London: Springer London, 2014. http://dx.doi.org/10.1007/978-1-4471-6515-6_2.
Shivakumara, Palaiahnakote, and Umapada Pal. "Video Text Recognition." In Cognitive Intelligence and Robotics, 233–71. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-7069-5_9.
Shivakumara, Palaiahnakote, and Umapada Pal. "Video Text Detection." In Cognitive Intelligence and Robotics, 61–94. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-7069-5_4.
Lu, Tong, Shivakumara Palaiahnakote, Chew Lim Tan, and Wenyin Liu. "Video Text Detection Systems." In Video Text Detection, 169–93. London: Springer London, 2014. http://dx.doi.org/10.1007/978-1-4471-6515-6_7.
Lu, Tong, Shivakumara Palaiahnakote, Chew Lim Tan, and Wenyin Liu. "Introduction to Video Text Detection." In Video Text Detection, 1–18. London: Springer London, 2014. http://dx.doi.org/10.1007/978-1-4471-6515-6_1.
Lu, Tong, Shivakumara Palaiahnakote, Chew Lim Tan, and Wenyin Liu. "Performance Evaluation." In Video Text Detection, 247–54. London: Springer London, 2014. http://dx.doi.org/10.1007/978-1-4471-6515-6_10.
Lu, Tong, Shivakumara Palaiahnakote, Chew Lim Tan, and Wenyin Liu. "Video Caption Detection." In Video Text Detection, 49–80. London: Springer London, 2014. http://dx.doi.org/10.1007/978-1-4471-6515-6_3.
Lu, Tong, Shivakumara Palaiahnakote, Chew Lim Tan, and Wenyin Liu. "Text Detection from Video Scenes." In Video Text Detection, 81–126. London: Springer London, 2014. http://dx.doi.org/10.1007/978-1-4471-6515-6_4.
Lu, Tong, Shivakumara Palaiahnakote, Chew Lim Tan, and Wenyin Liu. "Post-processing of Video Text Detection." In Video Text Detection, 127–44. London: Springer London, 2014. http://dx.doi.org/10.1007/978-1-4471-6515-6_5.
Full textConference papers on the topic "Vidéo texte"
Zu, Xinyan, Haiyang Yu, Bin Li, and Xiangyang Xue. "Towards Accurate Video Text Spotting with Text-wise Semantic Reasoning." In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/206.
Chen, Jiafu, Boyan Ji, Zhanjie Zhang, Tianyi Chu, Zhiwen Zuo, Lei Zhao, Wei Xing, and Dongming Lu. "TeSTNeRF: Text-Driven 3D Style Transfer via Cross-Modal Learning." In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/642.
Oliveira, Leandro Massetti Ribeiro, Antonio José G. Busson, Carlos de Salles S. Neto, Gabriel N. P. dos Santos, and Sérgio Colcher. "Automatic Generation of Learning Objects Using Text Summarizer Based on Deep Learning Models." In Simpósio Brasileiro de Informática na Educação. Sociedade Brasileira de Computação - SBC, 2021. http://dx.doi.org/10.5753/sbie.2021.217360.
Cardoso, Walcir, and Danial Mehdipour-Kolour. "Writing with automatic speech recognition: Examining user’s behaviours and text quality (lexical diversity)." In EuroCALL 2023: CALL for all Languages. Editorial Universitat Politécnica de Valéncia: Editorial Universitat Politécnica de Valéncia, 2023. http://dx.doi.org/10.4995/eurocall2023.2023.16997.
Ryabinin, Konstantin Valentinovich, Svetlana Vladimirovna Alexeeva, and Tatiana Evgenievna Petrova. "Automatic Areas of Interest Detector for Mobile Eye Trackers." In 32nd International Conference on Computer Graphics and Vision. Keldysh Institute of Applied Mathematics, 2022. http://dx.doi.org/10.20948/graphicon-2022-228-239.
Balaji, Yogesh, Martin Renqiang Min, Bing Bai, Rama Chellappa, and Hans Peter Graf. "Conditional GAN with Discriminative Filter Generation for Text-to-Video Synthesis." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/276.
Kim, Jonghee, Youngwan Lee, and Jinyoung Moon. "T2V2T: Text-to-Video-to-Text Fusion for Text-to-Video Retrieval." In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2023. http://dx.doi.org/10.1109/cvprw59228.2023.00594.
Pelli, Denis G. "Reading and Contrast Adaptation." In Applied Vision. Washington, D.C.: Optica Publishing Group, 1989. http://dx.doi.org/10.1364/av.1989.thb4.
Denoue, Laurent, Scott Carter, and Matthew Cooper. "Video text retouch." In the adjunct publication of the 27th annual ACM symposium. New York, New York, USA: ACM Press, 2014. http://dx.doi.org/10.1145/2658779.2659102.
Alekseev, Alexander Petrovich, and Tatyana Vladimirovna Alekseeva. "Video search by image - technology "Video Color"." In 24th Scientific Conference “Scientific Services & Internet – 2022”. Keldysh Institute of Applied Mathematics, 2022. http://dx.doi.org/10.20948/abrau-2022-2.
Full textReports on the topic "Vidéo texte"
Li, Huiping, David Doermann, and Omid Kia. Automatic Text Detection and Tracking in Digital Video. Fort Belvoir, VA: Defense Technical Information Center, December 1998. http://dx.doi.org/10.21236/ada458675.
Sharova, Iryna. WAYS OF PROMOTING UKRANIAN PUBLISHING HOUSES ON FACEBOOK DURING QUARANTINE. Ivan Franko National University of Lviv, February 2021. http://dx.doi.org/10.30970/vjo.2021.49.11076.
Baluk, Nadia, Natalia Basij, Larysa Buk, and Olha Vovchanska. VR/AR-TECHNOLOGIES – NEW CONTENT OF THE NEW MEDIA. Ivan Franko National University of Lviv, February 2021. http://dx.doi.org/10.30970/vjo.2021.49.11074.
Kalenych, Volodymyr. IMMERSIVE TECHNOLOGIES OF JOURNALISM IN THE UKRAINIAN AND GLOBAL MEDIA SPACE. Ivan Franko National University of Lviv, March 2024. http://dx.doi.org/10.30970/vjo.2024.54-55.12161.
Kuzmin, Vyacheslav, Alebai Sabitov, Andrei Reutov, Vladimir Amosov, Lidiia Neupokeva, and Igor Chernikov. Electronic training manual "Providing first aid to the population". SIB-Expertise, January 2024. http://dx.doi.org/10.12731/er0774.29012024.
Felix, Juri, and Laura Webb. Use of artificial intelligence in education delivery and assessment. Parliamentary Office of Science and Technology, January 2024. http://dx.doi.org/10.58248/pn712.
Kaisler, Raphaela, and Thomas Palfinger. Patient and Public Involvement and Engagement (PPIE): Funding, facilitating and evaluating participatory research approaches in Austria. Fteval - Austrian Platform for Research and Technology Policy Evaluation, April 2022. http://dx.doi.org/10.22163/fteval.2022.551.
Li, Baisong, and Bo Xu. PR-469-19604-Z01 Auto Diagnostic Method Development for Ultrasonic Flow Meter. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), February 2022. http://dx.doi.org/10.55274/r0012204.
Prudkov, Mikhail, Vasily Ermolaev, Elena Shurygina, and Eduard Mikaelyan. Electronic educational resource "Hospital Surgery for 5th year students of the Faculty of Pediatrics". SIB-Expertise, January 2024. http://dx.doi.org/10.12731/er0780.29012024.
Initiation au logiciel Nvivo. Instats Inc., 2023. http://dx.doi.org/10.61700/fzb3icwf9pox5469.