Academic literature on the topic 'Vidéo texte'


Journal articles on the topic "Vidéo texte"

1

Guéraud-Pinet, Guylaine. "Quand la vidéo devient silencieuse : analyse sémio-historique du sous-titrage dans les productions audiovisuelles des médias en ligne français (2014–2020)." SHS Web of Conferences 130 (2021): 03002. http://dx.doi.org/10.1051/shsconf/202113003002.

Abstract:
Since the mid-2010s, online news video (Brut., Konbini, Culture’, Loopsider, etc.) has been expanding in France. One of its distinctive features is the systematic use of subtitling or captioning. As an audiovisual production, video should by definition engage both sight and hearing; this reliance on text therefore raises questions about the place of sound in online video. This article examines the "audio-scripto-visual" construction of such content. Drawing on a semio-historical analysis combined with a content analysis of online videos, the results show that the use of text (subtitling and captioning) in these videos stems from a long-standing media heritage (cinema, radio, television), and that sound appears increasingly dispensable to understanding the content. The online video format under study is proliferating and appears to be becoming standardized; its industrialization is therefore called into question with regard to what is at stake for sound.
2

Brunner, Regula. "Autre langue – autre vidéo?" Protée 27, no. 1 (April 12, 2005): 105–10. http://dx.doi.org/10.7202/030550ar.

Abstract:
This article describes the impact of language on the perception of the video La Mort de Molière by comparing its two versions, French and German. Beyond language and voice, the manner of speaking, which conveys the actor's perception and interpretation of the text, also shapes the viewer's perception. A comparison of the more external factors, such as the translation of the texts, their temporal structuring through pauses and caesuras, and the integration of language with the other elements of the soundtrack, shows that the two versions are similar and in many passages even identical. Given this precise harmonization of the texts in the two versions and their provenance (works by various authors writing in French, German and also English), neither version can be regarded as the reference text. Each version nevertheless emphasizes a specific aspect of the video: the German version a rational aesthetic rigour and a reserved distance, the French version a playful lightness and a more human quality.
3

Baychelier, Guillaume. "Immersion sous contrainte et écologie des territoires hostiles dans la série Metro : enjeux ludiques et affectifs de la pratique vidéoludique en milieu post-apocalyptique." Articles avec comité de lecture, spécial (October 7, 2021): 49–72. http://dx.doi.org/10.7202/1082343ar.

Abstract:
After briefly recalling the conditions necessary for an immersive video-game experience, this text examines how the use of a post-apocalyptic narrative allows the games of the Metro series (Metro 2033, Metro: Last Light) to deploy gameplay mechanics that fit seamlessly into the ecology of their fictional universe. We further show how the constraining spatiality specific to these games makes it possible to observe the effect of the games' visual presentation on the quality of the immersive experiences they afford.
4

Dausendschön-Gay, Ulrich. "Observation(s)." Cahiers du Centre de Linguistique et des Sciences du Langage, no. 23 (April 9, 2022): 19–26. http://dx.doi.org/10.26034/la.cdclsl.2007.1426.

Abstract:
Working with video recordings introduces two kinds of perspective into the analysis: the technical and pragmatic choices of the camera operator, and the categorizations made by the researcher on, and on the basis of, the filmed sequences. The following text, written with a mostly descriptive orientation, is intended as an illustration of this observation.
5

Thély, Nicolas. "Usages et usagers de la vidéo (réflexions sur les arts et les médias : première partie)." Figures de l'Art. Revue d'études esthétiques 7, no. 1 (2003): 463–73. http://dx.doi.org/10.3406/fdart.2003.1295.

Abstract:
Far from a mere pretext, the notion of the artist and the artisan provides an occasion to take stock of how a reflection on the arts and the media should be conducted. Reflecting on the "artist-artisan" pair through the prism of video is a delicate matter for the video specialist. This question raises another: what is an artist who uses video? What is an artist today? One can answer from a sociological standpoint by studying this "artist-artisan" pair, which establishes a new relationship between conception and execution within the division of labour; from a critical and aesthetic standpoint, however, the answer is less obvious. The presence of technology leads us directly to hypothesize a transformation of the condition of the artist. In the field of video, the presence of technology testifies to artists' growing interest in the medium, a taste, a curiosity. What this text proposes is less an analysis of the artist's confrontation with technology than an analysis of the many appropriations of the medium, or how the figure of the "artist-artisan" is shattered by an irreversible process of democratization of video: when artists are no longer anything more than ordinary users of video.
6

Matteson, Steven, and Patrick Bideault. "En route vers Noto." La Lettre GUTenberg, no. 50 (June 14, 2023): 53–73. http://dx.doi.org/10.60028/lettre.vi50.126.

Abstract:
TUGboat editors' note: this text is a transcription of the author's talk at the TUG 2020 conference, with only minor editing. Some of its illustrations are omitted here; all of them, together with a video of the talk, are available online at the conference website: https://tug.org/tug2020. This article is a French translation of that talk.
7

Faguy, Robert. "Pour un récepteur hautement résolu…" Protée 27, no. 1 (April 12, 2005): 117–24. http://dx.doi.org/10.7202/030552ar.

Abstract:
Attending a work by Robert Wilson requires the spectator to renounce, to some degree, his or her habitual way of approaching artistic creation. Through a poetic style that favours formalism and the foregrounding of a "simple surface", the author builds his system out of metacommunicational elements (title, structure, sound/image and text/image relations). While this approach characterizes Wilson's early video works, this text sets out to show that La Mort de Molière marks an apparent change of strategy. Owing in particular to the use of a text spoken in voice-over, added to the other formal elements, a dislocation occurs in our reception: a struggle sets in between the two cerebral hemispheres, whose perceptual functions are complementary. To break the deadlock and achieve the desired high resolution, more than one reading of the work will be necessary.
8

Rozik, Eli. "Intertextualité et déconstruction." Protée 27, no. 1 (April 12, 2005): 111–16. http://dx.doi.org/10.7202/030551ar.

Abstract:
The architecture of this video consists of a series of sequences suggesting the death of Molière. On the surface, these shots create an all-encompassing image of the death of the illustrious playwright, the final chapter of his bio-text, as known from biographies and history books. In fact, these images and quotations are metonymies of an immense corpus of texts which, for centuries, have fulfilled the crucial function of creating the image of a cultural hero, a champion of social criticism. This apparently historical interpretation of his death, however, is undermined by a series of intertexts that go so far as to invert the established image of Molière. These intertexts include his own comedies, among them Le Malade imaginaire, Dom Juan and Le Tartuffe.
9

Giroux, Amélie. "L’édition critique d’un texte fondateur : La Sagouine d’Antonine Maillet." Études, no. 20-21 (July 10, 2012): 149–66. http://dx.doi.org/10.7202/1010386ar.

Abstract:
While the impact of La Sagouine on Acadian literature, culture and society is now recognized by scholarly and popular critics alike, its creative process, traceable through the manuscripts and the successive editions of the work, remains to be uncovered. From manuscripts to typescripts, the work has become, among other things, a radio reading, scripts for theatre and then for video and, of course, printed editions. Through this intergeneric dynamism, the author modified her text, sometimes with substantial additions or, over successive editions, with an Acadianization of spelling and syntax. This aspect of the work is still little known, and establishing a critical edition will make it possible to refine the analysis of its creative process, in particular of this play on writing. This article presents the proposed research project.
10

Rokka, Joonas, and Joel Hietanen. "Réflexion autour du positionnement de la vidéographie comme outil de théorisation." Recherche et Applications en Marketing (French Edition) 33, no. 3 (March 27, 2018): 128–46. http://dx.doi.org/10.1177/0767370118761749.

Abstract:
The aim of this article is to examine videography critically within the repertoire of visual approaches and to define a positioning that is distinctive, inspiring and bold. In particular, we advocate videographic approaches that broaden the field beyond the dominant "mode of representation" by recognizing the evocative power of video. Only in this way can videographic research stand as an autonomous research method, without falling back on other representational orders of analysis such as being interpreted "as a text" or frozen into photographic "snapshots". In doing so, we develop an image of thought for consumer videographies in which the affective forces of video are harnessed for "theorizing" in an emergent rather than descriptive form.

Dissertations / Theses on the topic "Vidéo texte"

1

Ayache, Stéphane. "Indexation de documents vidéos par concepts par fusion de caractéristiques audio, vidéo et texte." Grenoble INPG, 2007. http://www.theses.fr/2007INPG0071.

Abstract:
This work deals with information retrieval and aims at the semantic indexing of multimedia documents. Research in this area faces the "semantic gap" separating the raw, low-level descriptions of the various modalities from the conceptual descriptions meaningful to users. We propose an indexing model based on networks of operators in which the data flows, called numcepts, unify information drawn from the different modalities and extracted at several levels of abstraction. We present an instance of this model in which we describe a typology of the operators and numcepts involved. We conducted experiments on the TREC VIDEO corpora to evaluate how the arrangement and implementation of the operators affect the quality of video-document indexing, and we show that a network has to be tailored to each concept in order to optimize indexing performance.
2

Wehbe, Hassan. "Synchronisation automatique d'un contenu audiovisuel avec un texte qui le décrit." Thesis, Toulouse 3, 2016. http://www.theses.fr/2016TOU30104/document.

Abstract:
We address the problem of automatically synchronizing audiovisual content with a procedural text that describes it. The strategy consists in extracting information about the structure of both contents and then matching them. We propose two video-analysis tools that respectively extract: the boundaries of events of interest, using a dictionary-based quantization method; and the segments in which an action is repeated, using the YIN frequency-analysis method. We then propose a synchronization system that merges the results of these tools to establish links between textual instructions and the corresponding video segments. To this end, a "confidence matrix" is built and processed recursively to establish these links according to their reliability.
3

Yousfi, Sonia. "Embedded Arabic text detection and recognition in videos." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSEI069/document.

Abstract:
This thesis focuses on the detection and recognition of Arabic text embedded in videos. We propose approaches that are robust to the variability of Arabic text (fonts, scales, sizes, etc.) as well as to the challenges of the video environment and acquisition conditions (complex backgrounds, luminosity, contrast, low resolution, etc.). We introduce several machine-learning-based solutions for robust text detection that require no pre-processing: one is based on Convolutional Neural Networks (ConvNets), while the others use boosting cascades to select relevant hand-crafted text features. For text recognition, our methodology is segmentation-free: text images are transformed into sequences of features using a multi-scale scanning scheme. Departing from the dominant methodology of hand-crafted features, we propose to learn relevant text representations from data using several deep learning models, namely deep auto-encoders, ConvNets and an unsupervised learning model, each leading to a specific OCR (Optical Character Recognition) solution. Sequence labeling is performed without any prior segmentation using a recurrent connectionist model, and the proposed solutions are compared with methods based on non-connectionist, hand-crafted features. In addition, we enhance the recognition results with Recurrent Neural Network-based Arabic language models capable of capturing long-range linguistic dependencies; OCR and language-model probabilities are combined in a joint decoding scheme in which additional hyper-parameters are introduced to improve recognition and reduce response time. To overcome the lack of public Arabic multimedia corpora, we built new manually annotated datasets from Arabic TV streams. The OCR dataset, called ALIF, comprises 6,532 annotated text images and has been made publicly available for research purposes; to the best of our knowledge, it is the first public dataset dedicated to Arabic video OCR. Our systems were developed and extensively evaluated on these datasets, validating the genericity and efficiency of our approaches with a detection rate above 97% and a word recognition rate of 88.63% on the ALIF dataset, outperforming a well-known commercial OCR engine by more than 36 points.
4

Bull, Hannah. "Learning sign language from subtitles." Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG013.

Abstract:
Sign languages are an essential means of communication for deaf communities. They are visuo-gestural languages that use the modalities of hand gestures, facial expressions, gaze and body movements, and they possess rich grammatical structures and lexicons that differ considerably from those of spoken languages. The uniqueness of their transmission medium, structure and grammar requires distinct methodologies. The performance of automatic translation systems between high-resource written or spoken languages is currently sufficient for many daily use cases, such as translating videos, websites, emails and documents. By contrast, automatic translation systems for sign languages do not exist outside of very specific use cases with limited vocabulary. Automatic sign language translation is challenging for two main reasons. First, sign languages are low-resource languages with little available training data. Second, sign languages are visual-spatial languages with no written form, naturally represented as video rather than audio or text. To tackle the first challenge, we contribute large datasets for training and evaluating automatic sign language translation systems, containing both interpreted and original sign language video content together with written subtitles. While interpreted data allows us to collect large numbers of hours of video, original sign language video is more representative of sign language use within deaf communities, and written subtitles can serve as weak supervision for various sign language understanding tasks. To address the second challenge, we develop methods to better understand visual cues in sign language video. While sentence segmentation is mostly trivial for written languages, segmenting sign language video into sentence-like units relies on detecting subtle semantic and prosodic cues. We use prosodic cues to learn to automatically segment sign language video into sentence-like units determined by subtitle boundaries. Building on this segmentation method, we then learn to align text subtitles to sign language video segments using both semantic and prosodic cues, in order to create sentence-level pairs of sign language video and text. This task is particularly important for interpreted TV data, where subtitles are generally aligned to the audio rather than to the signing. Using these automatically aligned video-text pairs, we develop and improve several methods to densely annotate lexical signs by querying words in the subtitle text and searching for visual cues of the corresponding signs in the sign language video.
5

Couture, Matte Robin. "Digital games and negotiated interaction : integrating Club Penguin Island into two ESL grade 6 classes." Master's thesis, Université Laval, 2019. http://hdl.handle.net/20.500.11794/35458.

Abstract:
The objective of the present study was to explore negotiated interaction involving young learners (ages 11-12) who carried out communicative tasks supported by Club Penguin Island, a massively multiplayer online role-playing game (MMORPG). Unlike previous studies involving MMORPGs, the present study assessed the use of Club Penguin Island in the context of face-to-face interaction. The research questions were three-fold: assess the presence of focus-on-form episodes (FFEs) during tasks carried out with Club Penguin Island and identify their characteristics; evaluate the impact of task type on the presence of FFEs; and survey the attitudes of participants. The research project was carried out with 20 Grade 6 intensive English as a second language (ESL) students in the province of Quebec. The participants carried out one information-gap task and two reasoning-gap tasks, one of which included a writing component. The tasks were carried out in dyads, and recordings were transcribed and analyzed to identify the presence of FFEs and their characteristics. A statistical analysis was used to assess the impact of task type on the presence of FFEs, and a questionnaire was administered to assess participants' attitudes after completion of all tasks. Findings revealed that carrying out tasks with the MMORPG triggered FFEs, that participants were able to negotiate interaction successfully without the instructor's help, and that most FFEs focused on the meaning of vocabulary found in the tasks and the game. The statistical analysis showed the influence of task type, since more FFEs were produced during the information-gap task than during one of the reasoning-gap tasks. The attitude questionnaire revealed positive attitudes, in line with previous research on digital games for language learning. Pedagogical implications point to the value of MMORPGs for language learning, and the findings add to the scarce literature on negotiated interaction with young learners.
6

Sidevåg, Emmilie. "Användarmanual text vs video." Thesis, Linnéuniversitetet, Institutionen för datavetenskap, fysik och matematik, DFM, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-17617.

7

Salway, Andrew. "Video annotation : the role of specialist text." Thesis, University of Surrey, 1999. http://epubs.surrey.ac.uk/843350/.

Abstract:
Digital video is among the most information-intensive modes of communication. The retrieval of video from digital libraries, along with sound and text, is a major challenge for the computing community in general and for the artificial intelligence community specifically. The advent of digital video has set some old questions in a new light. Questions relating to aesthetics and to the role of surrogates (image for reality and text for image) invariably touch upon the link between vision and language. Dealing with this link computationally is important for the artificial intelligence enterprise. Interesting images to consider both aesthetically and for research in video retrieval include those which are constrained and patterned, and which convey rich meanings; for example, dance. These are specialist images for us and require a special language for description and interpretation. Furthermore, they require specialist knowledge to be understood since there is usually more than meets the untrained eye: this knowledge may also be articulated in the language of the specialism. In order to be retrieved effectively and efficiently, video has to be annotated, particularly so for specialist moving images. Annotation involves attaching keywords from the specialism along with, for us, commentaries produced by experts, including those written and spoken specifically for annotation and those obtained from a corpus of extant texts. A system that processes such collateral text for video annotation should perhaps be grounded in an understanding of the link between vision and language. This thesis attempts to synthesise ideas from artificial intelligence, multimedia systems, linguistics, cognitive psychology and aesthetics. The link between vision and language is explored by focusing on moving images of dance and the special language used to describe and interpret them.
We have developed an object-oriented system, KAB, which helps to annotate a digital video library with a collateral corpus of texts and terminology. User evaluation has been encouraging. The system is now available on the WWW.
APA, Harvard, Vancouver, ISO, and other styles
8

Smith, Gregory. "VIDEO SCENE DETECTION USING CLOSED CAPTION TEXT." VCU Scholars Compass, 2009. http://scholarscompass.vcu.edu/etd/1932.

Full text
Abstract:
Issues in Automatic Video Biography Editing are similar to those in Video Scene Detection and Topic Detection and Tracking (TDT). The techniques of Video Scene Detection and TDT can be applied to interviews to reduce the time necessary to edit a video biography. The system addresses the problems of video text extraction, story segmentation, and correlation. This thesis project was divided into three parts: extraction, scene detection, and correlation. The project successfully detected scene breaks in series television episodes and displayed scenes that had similar content.
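The segmentation-by-caption-text idea can be illustrated with a minimal sketch. The representation (bag-of-words vectors over caption windows) and the fixed similarity threshold below are assumptions for the example, not details taken from the thesis:

```python
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def scene_breaks(captions, threshold=0.2):
    """Mark a scene break wherever consecutive caption windows
    share too little vocabulary (similarity below the threshold)."""
    bags = [Counter(c.lower().split()) for c in captions]
    return [i for i in range(1, len(bags)) if cosine(bags[i - 1], bags[i]) < threshold]
```

For example, two captions about the same scene score high on shared vocabulary, while a topic change drops the similarity to near zero and triggers a break.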
APA, Harvard, Vancouver, ISO, and other styles
9

Zhang, Jing. "Extraction of Text Objects in Image and Video Documents." Scholar Commons, 2012. http://scholarcommons.usf.edu/etd/4266.

Full text
Abstract:
The popularity of digital image and video is increasing rapidly. To help users navigate libraries of image and video, Content Based Information Retrieval (CBIR) systems that can automatically index image and video documents are needed. However, due to the semantic gap between low-level machine descriptors and high-level semantic descriptors, the existing CBIR systems are still far from perfect. Text embedded in multi-media data, as a well-defined model of concepts for humans' communication, contains much semantic information related to the content. This text information can provide a much truer form of content-based access to the image and video documents if it can be extracted and harnessed efficiently. This dissertation solves the problems involved in detecting text objects in image and video and tracking text events in video. For the text detection problem, we propose a new unsupervised text detection algorithm. A new text model is constructed to describe text objects using pictorial structure. Each character is a part in the model and every two neighboring characters are connected by a spring-like link. Two characters and the link connecting them are defined as a text unit. We localize candidate parts by extracting closed boundaries and initialize the links by connecting two neighboring candidate parts based on the spatial relationship of characters. For every candidate part, we compute character energy using three new character features: averaged angle difference of corresponding pairs, fraction of non-noise pairs, and vector of stroke width. They are extracted based on our observation that the edge of a character can be divided into two sets with high similarities in length, curvature, and orientation. For every candidate link, we compute link energy based on our observation that the characters of a text typically align along a certain direction with similar color, size, and stroke width.
For every candidate text unit, we combine character and link energies to compute text unit energy, which indicates the probability that the candidate text model is a real text object. The final text detection results are generated using text unit energy based thresholding. For the text tracking problem, we construct a text event model by using pictorial structure as well. In this model, the detected text object in each video frame is a part and two neighboring text objects of a text event are connected by a spring-like link. Inter-frame link energy is computed for each link based on the character energy, similarity of neighboring text objects, and motion information. After refining the model using inter-frame link energy, the remaining text event models are marked as text events. At the character level, because the proposed method is based on the assumption that the strokes of a character have uniform thickness, it can detect and localize characters from different languages in different styles, such as typewritten or handwritten text, if the characters have approximately uniform stroke thickness. At the text level, however, because the spatial relationship between two neighboring characters is used to localize text objects, the proposed method may fail to detect and localize characters with multiple separate strokes or connected characters. For example, some East Asian language characters, such as Chinese, Japanese, and Korean, have many strokes in a single character. We need to group the strokes first to form single characters and then group characters to form text objects. The characters of some languages, such as Arabic and Hindi, are connected together, so we cannot extract spatial information between neighboring characters since they are detected as a single character. Therefore, at the current stage the proposed method can detect and localize text objects that are composed of separate characters with connected strokes of approximately uniform thickness.
We evaluated our method comprehensively using three English language-based image and video datasets: ICDAR 2003/2005 text locating dataset (258 training images and 251 test images), Microsoft Street View text detection dataset (307 street view images), and VACE video dataset (50 broadcast news videos from CNN and ABC). The experimental results demonstrate that the proposed text detection method can capture the inherent properties of text and discriminate text from other objects efficiently.
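As a toy illustration of the energy-combination step described above, a candidate text unit can be scored and filtered as follows. The linear weighting and the threshold value are invented for the example; the dissertation does not specify this exact formula:

```python
def text_unit_energy(char_energy_a, char_energy_b, link_energy,
                     w_char=0.5, w_link=0.5):
    """Combine the energies of two candidate characters and the link
    between them into one text-unit score (weights are illustrative)."""
    return w_char * 0.5 * (char_energy_a + char_energy_b) + w_link * link_energy

def detect_text_units(units, threshold=0.6):
    """Keep only candidate units whose combined energy exceeds the
    threshold, i.e. units likely to be real text objects."""
    return [u for u in units if text_unit_energy(*u) > threshold]
```

A unit with two strong character candidates joined by a strong link survives the threshold; weak candidates are discarded.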
APA, Harvard, Vancouver, ISO, and other styles
10

Zipstein, Marc. "Les Méthodes de compression de textes : algorithmes et performances." Paris 7, 1990. http://www.theses.fr/1990PA077107.

Full text
Abstract:
Text compression aims to reduce the number of symbols needed to represent a text. The purpose of this thesis is the study, development and comparison of universal compression methods, that is, methods able to process any type of text efficiently. We show that the use of automata increases the efficiency of classical compression methods, and we present a new method based on the factor automaton. We present the two main classes of data compression algorithms: statistical coding algorithms and factor-based coding algorithms. Statistical coding algorithms process texts in fixed-length blocks, a frequent block receiving a short translation. We present static and adaptive Huffman coding as well as arithmetic coding. We propose a representation of arithmetic coding by means of a transducer, which guarantees real-time processing. Factor-based codings translate texts using their own factors. We present the coding algorithms due to Ziv and Lempel, and we describe a new method based on the factor automaton. This work ends with a comparison of the performance of the described algorithms.
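As a small illustration of the statistical coding family the thesis surveys, here is a minimal static Huffman coder: frequent symbols receive short codewords. This is a generic textbook sketch, not the thesis's implementation:

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a static Huffman code from symbol frequencies.
    The index in each tuple breaks ties deterministically."""
    heap = [(freq, i, {sym: ""}) for i, (sym, freq) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    nxt = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)  # two least-frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, nxt, merged))
        nxt += 1
    return heap[0][2]

def encode(text, codes):
    """Concatenate the codewords of the input symbols."""
    return "".join(codes[ch] for ch in text)
```

On "abracadabra" the frequent symbol 'a' gets a one-bit codeword, compressing the 11 symbols to 23 bits instead of 88 with a fixed 8-bit encoding.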
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Vidéo texte"

1

France. Les textes juridiques: Cinéma, télévision, vidéo. Paris: CNC, Centre national de la cinématographie, 2006.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Geukens, Alfons. Homeopathic practice: Texts for the seminar : video presentation of materia medica. Hechtel-Eksel, Belgium: VZW Centrum voor Homeopathie, 1992.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Japanese cinema: Texts and contexts. London: Routledge, 2007.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Romiszowski, A. J. Developing auto-instructional materials: From programmed texts to CAL and interactive video. London: Kogan Page, 1986.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Developing auto-instructional materials: From programmed texts to CAL and interactive video. London: Kogan Page, 1990.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Developing auto-instructional materials: From programmed texts to CAL and interactive video. London: Kogan Page, 1986.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Company-Ramón, Juan Miguel. El trazo de la letra en la imagen: Texto literario y texto fílmico. Madrid: Cátedra, 1987.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Slavko, Kacunko, Spielmann Yvonne, and Reader Stephen, eds. Take it or leave it: Marcel Odenbach anthology of texts and videos. Berlin: Logos, 2013.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Mugglestone, Patricia. English in sight: Video materials for students of English. Hemel Hempstead: Prentice-Hall, 1986.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Nauman, Bruce. Bruce Nauman: Image/texte, 1966-1996. Paris: Centre Georges Pompidou, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Vidéo texte"

1

Weik, Martin H. "video text." In Computer Science and Communications Dictionary, 1892. Boston, MA: Springer US, 2000. http://dx.doi.org/10.1007/1-4020-0613-6_20796.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Lu, Tong, Shivakumara Palaiahnakote, Chew Lim Tan, and Wenyin Liu. "Video Preprocessing." In Video Text Detection, 19–47. London: Springer London, 2014. http://dx.doi.org/10.1007/978-1-4471-6515-6_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Shivakumara, Palaiahnakote, and Umapada Pal. "Video Text Recognition." In Cognitive Intelligence and Robotics, 233–71. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-7069-5_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Shivakumara, Palaiahnakote, and Umapada Pal. "Video Text Detection." In Cognitive Intelligence and Robotics, 61–94. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-7069-5_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Lu, Tong, Shivakumara Palaiahnakote, Chew Lim Tan, and Wenyin Liu. "Video Text Detection Systems." In Video Text Detection, 169–93. London: Springer London, 2014. http://dx.doi.org/10.1007/978-1-4471-6515-6_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Lu, Tong, Shivakumara Palaiahnakote, Chew Lim Tan, and Wenyin Liu. "Introduction to Video Text Detection." In Video Text Detection, 1–18. London: Springer London, 2014. http://dx.doi.org/10.1007/978-1-4471-6515-6_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Lu, Tong, Shivakumara Palaiahnakote, Chew Lim Tan, and Wenyin Liu. "Performance Evaluation." In Video Text Detection, 247–54. London: Springer London, 2014. http://dx.doi.org/10.1007/978-1-4471-6515-6_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Lu, Tong, Shivakumara Palaiahnakote, Chew Lim Tan, and Wenyin Liu. "Video Caption Detection." In Video Text Detection, 49–80. London: Springer London, 2014. http://dx.doi.org/10.1007/978-1-4471-6515-6_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Lu, Tong, Shivakumara Palaiahnakote, Chew Lim Tan, and Wenyin Liu. "Text Detection from Video Scenes." In Video Text Detection, 81–126. London: Springer London, 2014. http://dx.doi.org/10.1007/978-1-4471-6515-6_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Lu, Tong, Shivakumara Palaiahnakote, Chew Lim Tan, and Wenyin Liu. "Post-processing of Video Text Detection." In Video Text Detection, 127–44. London: Springer London, 2014. http://dx.doi.org/10.1007/978-1-4471-6515-6_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Vidéo texte"

1

Zu, Xinyan, Haiyang Yu, Bin Li, and Xiangyang Xue. "Towards Accurate Video Text Spotting with Text-wise Semantic Reasoning." In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/206.

Full text
Abstract:
Video text spotting (VTS) aims at extracting texts from videos, where text detection, tracking and recognition are conducted simultaneously. There have been some works that can tackle VTS; however, they may ignore the underlying semantic relationships among texts within a frame. We observe that the texts within a frame usually share similar semantics, which suggests that, if one text is predicted incorrectly by a text recognizer, it still has a chance to be corrected via semantic reasoning. In this paper, we propose an accurate video text spotter, VLSpotter, that reads texts visually, linguistically, and semantically. For ‘visually’, we propose a plug-and-play text-focused super-resolution module to alleviate motion blur and enhance video quality. For ‘linguistically’, a language model is employed to capture intra-text context to mitigate wrongly spelled text predictions. For ‘semantically’, we propose a text-wise semantic reasoning module to model inter-text semantic relationships and reason for better results. The experimental results on multiple VTS benchmarks demonstrate that the proposed VLSpotter outperforms the existing state-of-the-art methods in end-to-end video text spotting.
APA, Harvard, Vancouver, ISO, and other styles
2

Chen, Jiafu, Boyan Ji, Zhanjie Zhang, Tianyi Chu, Zhiwen Zuo, Lei Zhao, Wei Xing, and Dongming Lu. "TeSTNeRF: Text-Driven 3D Style Transfer via Cross-Modal Learning." In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/642.

Full text
Abstract:
Text-driven 3D style transfer aims at stylizing a scene according to the text and generating arbitrary novel views with consistency. Simply combining image/video style transfer methods and novel view synthesis methods results in flickering when changing viewpoints, while existing 3D style transfer methods learn styles from images instead of texts. To address this problem, we for the first time design an efficient text-driven model for 3D style transfer, named TeSTNeRF, to stylize the scene using texts via cross-modal learning: we leverage an advanced text encoder to embed the texts in order to control 3D style transfer and align the input text and output stylized images in latent space. Furthermore, to obtain better visual results, we introduce style supervision, learning feature statistics from style images and utilizing 2D stylization results to rectify abrupt color spill. Extensive experiments demonstrate that TeSTNeRF significantly outperforms existing methods and provides a new way to guide 3D style transfer.
APA, Harvard, Vancouver, ISO, and other styles
3

Oliveira, Leandro Massetti Ribeiro, Antonio José G. Busson, Carlos de Salles S. Neto, Gabriel N. P. dos Santos, and Sérgio Colcher. "Automatic Generation of Learning Objects Using Text Summarizer Based on Deep Learning Models." In Simpósio Brasileiro de Informática na Educação. Sociedade Brasileira de Computação - SBC, 2021. http://dx.doi.org/10.5753/sbie.2021.217360.

Full text
Abstract:
A learning object (LO) is an entity, digital or not, that can be used, reused or referenced during a technological support process for teaching and learning. Although LOs are mainly multimedia, with audio, video, text and images synchronized with each other, they can help disseminate knowledge even when they consist only of educational texts. However, creating these texts can be costly in time and effort, creating the need to seek new ways to generate this content. This article presents a solution for the generation of text-based LOs through summaries supported by Deep Learning models. The present work was evaluated in a supervised experiment in which volunteers rated computer educational texts generated by three types of summarizers. The results are positive and allow us to compare the performance of summarizers as LO generators in text format. The findings also suggest that post-processing the output of the models can improve the readability of the generated content.
APA, Harvard, Vancouver, ISO, and other styles
4

Cardoso, Walcir, and Danial Mehdipour-Kolour. "Writing with automatic speech recognition: Examining user’s behaviours and text quality (lexical diversity)." In EuroCALL 2023: CALL for all Languages. Editorial Universitat Politécnica de Valéncia: Editorial Universitat Politécnica de Valéncia, 2023. http://dx.doi.org/10.4995/eurocall2023.2023.16997.

Full text
Abstract:
This study explores the potential of Automatic Speech Recognition (ASR) as a writing tool by investigating user behaviours (strategies henceforth) and text quality (lexical diversity) when users engage with the technology. Thirty English second language writers dictated texts into an ASR system (Google Voice Typing) while also using optional additional input devices, such as keyboards and mice. Analysis of video recordings and field observations revealed four strategies employed by users to produce texts: use of ASR exclusively, ASR in tandem with keyboarding, ASR followed by keyboarding, and ASR followed by both keyboarding and ASR. These strategies reflected cognitive differences and text generation challenges. Text quality was operationalized through lexical diversity metrics. Results showed that ASR use in tandem with keyboarding and ASR followed by both keyboarding and ASR yielded greater lexical diversity, whereas the use of ASR exclusively or ASR followed by keyboarding had lower diversity. Findings suggest that the integrated use of ASR and keyboarding activates dual channels, thus dispersing cognitive load and possibly improving text quality (i.e. lexical diversity). This exploratory study demonstrates potential for ASR as a complementary writing tool and lays groundwork for further research on the strategic integration of ASR and keyboarding to improve the quality of written texts.
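The study operationalizes text quality through lexical diversity metrics. As an illustrative sketch (the choice of metrics and the whitespace tokenization are assumptions, not necessarily those of the study), plain type-token ratio and Guiraud's root TTR, which corrects for TTR's sensitivity to text length, can be computed like this:

```python
from math import sqrt

def tokenize(text):
    """Naive whitespace tokenization, keeping alphabetic tokens only."""
    return [t for t in text.lower().split() if t.isalpha()]

def type_token_ratio(text):
    """Unique words (types) divided by total words (tokens)."""
    tokens = tokenize(text)
    return len(set(tokens)) / len(tokens) if tokens else 0.0

def root_ttr(text):
    """Guiraud's index: types over the square root of tokens,
    less sensitive to text length than plain TTR."""
    tokens = tokenize(text)
    return len(set(tokens)) / sqrt(len(tokens)) if tokens else 0.0
```

For "the cat sat on the mat" there are 5 types among 6 tokens, giving a TTR of about 0.83.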
APA, Harvard, Vancouver, ISO, and other styles
5

Ryabinin, Konstantin Valentinovich, Svetlana Vladimirovna Alexeeva, and Tatiana Evgenievna Petrova. "Automatic Areas of Interest Detector for Mobile Eye Trackers." In 32nd International Conference on Computer Graphics and Vision. Keldysh Institute of Applied Mathematics, 2022. http://dx.doi.org/10.20948/graphicon-2022-228-239.

Full text
Abstract:
The paper deals with automatic areas of interest detection in video streams derived from mobile eye trackers. Defining such areas on a visual stimulus viewed by an informant is an important step in setting up any eye-tracking-based experiment. If the informant’s field of view is stationary, areas of interest can be selected manually, but when we use mobile eye trackers, the field of view is usually constantly changing, so automation is badly needed. We propose using computer vision algorithms to automatically locate the given 2D stimulus template in a video stream and construct the homography transform that can map the undistorted stimulus template to the video frame coordinate system. In parallel to this, the segmentation of a stimulus template into the areas of interest is performed, and the areas of interest are mapped to the video frame. The considered stimuli are texts typed in specific fonts and the interest areas are individual words in these texts. Optical character recognition leveraged by the Tesseract engine is used for segmentation. The text location relies on a combination of Scale-Invariant Feature Transform and Fast Library for Approximate Nearest Neighbors. The homography is constructed using Random Sample Consensus. All the algorithms are implemented based on the OpenCV library as microservices within the SciVi ontology-driven platform that provides high-level tools to compose pipelines using a data-flow-based visual programming paradigm. The proposed pipeline was tested on real eye tracking data and proved to be efficient and robust.
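The pipeline above relies on OpenCV's SIFT, FLANN and RANSAC implementations. As a self-contained illustration of the Random Sample Consensus idea alone, the sketch below runs RANSAC over a deliberately simplified model: a 2D translation standing in for the full homography, fitted to point correspondences that include an outlier (all values are invented for the example):

```python
import random

def ransac_translation(src, dst, iters=200, tol=2.0, seed=0):
    """RANSAC consensus: repeatedly fit a model (here a 2D translation,
    a stand-in for the full homography) from a random sample and keep
    the hypothesis that explains the most correspondences."""
    rng = random.Random(seed)
    best, best_inliers = (0, 0), []
    for _ in range(iters):
        i = rng.randrange(len(src))                      # minimal sample: one pair
        dx, dy = dst[i][0] - src[i][0], dst[i][1] - src[i][1]
        inliers = [j for j, ((sx, sy), (tx, ty)) in enumerate(zip(src, dst))
                   if abs(sx + dx - tx) <= tol and abs(sy + dy - ty) <= tol]
        if len(inliers) > len(best_inliers):             # keep largest consensus set
            best, best_inliers = (dx, dy), inliers
    return best, best_inliers
```

A real homography needs four point pairs per sample and a projective fit, but the consensus loop, inlier test, and best-model bookkeeping are the same.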
APA, Harvard, Vancouver, ISO, and other styles
6

Balaji, Yogesh, Martin Renqiang Min, Bing Bai, Rama Chellappa, and Hans Peter Graf. "Conditional GAN with Discriminative Filter Generation for Text-to-Video Synthesis." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/276.

Full text
Abstract:
Developing conditional generative models for text-to-video synthesis is an extremely challenging yet an important topic of research in machine learning. In this work, we address this problem by introducing Text-Filter conditioning Generative Adversarial Network (TFGAN), a conditional GAN model with a novel multi-scale text-conditioning scheme that improves text-video associations. By combining the proposed conditioning scheme with a deep GAN architecture, TFGAN generates high quality videos from text on challenging real-world video datasets. In addition, we construct a synthetic dataset of text-conditioned moving shapes to systematically evaluate our conditioning scheme. Extensive experiments demonstrate that TFGAN significantly outperforms existing approaches, and can also generate videos of novel categories not seen during training.
APA, Harvard, Vancouver, ISO, and other styles
7

Kim, Jonghee, Youngwan Lee, and Jinyoung Moon. "T2V2T: Text-to-Video-to-Text Fusion for Text-to-Video Retrieval." In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2023. http://dx.doi.org/10.1109/cvprw59228.2023.00594.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Pelli, Denis G. "Reading and Contrast Adaptation." In Applied Vision. Washington, D.C.: Optica Publishing Group, 1989. http://dx.doi.org/10.1364/av.1989.thb4.

Full text
Abstract:
Lunn and Banks (1986), studying the reasons for “visual fatigue” among users of video text displays, found that reading from a video display caused a tenfold elevation of the contrast threshold for a sinusoidal grating with the same spatial frequency as the lines of text. They suggested that this may contribute to “visual fatigue,” possibly by affecting accommodation, but they did not explain why printed text would not have the same effect.
APA, Harvard, Vancouver, ISO, and other styles
9

Denoue, Laurent, Scott Carter, and Matthew Cooper. "Video text retouch." In the adjunct publication of the 27th annual ACM symposium. New York, New York, USA: ACM Press, 2014. http://dx.doi.org/10.1145/2658779.2659102.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Alekseev, Alexander Petrovich, and Tatyana Vladimirovna Alekseeva. "Video search by image - technology "Video Color"." In 24th Scientific Conference “Scientific Services & Internet – 2022”. Keldysh Institute of Applied Mathematics, 2022. http://dx.doi.org/10.20948/abrau-2022-2.

Full text
Abstract:
We all face the challenge of finding information every day, whether it is text, images, audio or video. Most often, text is used for the search query; less often, images. There are services like "Shazam" that search for music using a sound recording. We focused on creating a search service that finds videos, using images as query parameters.
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Vidéo texte"

1

Li, Huiping, David Doermann, and Omid Kia. Automatic Text Detection and Tracking in Digital Video. Fort Belvoir, VA: Defense Technical Information Center, December 1998. http://dx.doi.org/10.21236/ada458675.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Sharova, Iryna. WAYS OF PROMOTING UKRANIAN PUBLISHING HOUSES ON FACEBOOK DURING QUARANTINE. Ivan Franko National University of Lviv, February 2021. http://dx.doi.org/10.30970/vjo.2021.49.11076.

Full text
Abstract:
The article reviews and analyzes the promotion of Ukrainian publishing houses on Facebook during quarantine in 2020. The study’s main object is content and the types of content used for representation on Facebook. We found that going live and posting text with a picture were the most popular. The phenomenon of live video is tightly connected to the phenomenon of quarantine, though not every publishing house was able to go live constantly, or even regularly. However, simple text with a picture is the easiest content to post and the most popular. Ukrainian publishers also use UGC (User Generated Content), situational content, and different contexts. The biggest problem for Ukrainian publishers is continual strategic work with social media for promotion. During quarantine, social media became the first channel for communication with customers and subscribers. Therefore promotion on the Internet and in social media should indeed become equivalent to offline promotion.
APA, Harvard, Vancouver, ISO, and other styles
3

Baluk, Nadia, Natalia Basij, Larysa Buk, and Olha Vovchanska. VR/AR-TECHNOLOGIES – NEW CONTENT OF THE NEW MEDIA. Ivan Franko National University of Lviv, February 2021. http://dx.doi.org/10.30970/vjo.2021.49.11074.

Full text
Abstract:
The article analyzes the peculiarities of the shaping and transformation of media content in the convergent dimension of cross-media, taking into account the possibilities of augmented reality. Following the principles of objectivity, complexity and reliability in scientific research, a number of general scientific and special methods are used: analysis, synthesis, generalization, monitoring, observation, and problem-thematic, typological and discursive methods. According to the form of information presentation, such types of media content as visual, audio, verbal and combined are defined and characterized. The most important in journalism is verbal content, which carries the main information load. The dynamic development of converged media leads to the dominance of image and video content; the text is increasingly likely to become secondary content. Given the market situation, an effective information product is combined content that pairs text with images, spreadsheets with video, animation with infographics, etc. An increasing number of new media outlets use applications and website platforms to interact with recipients. Next, the peculiarities of the new content of new media involving augmented reality are determined. Examples of successful interactive communication between recipients, the leading news agencies and commercial structures are provided. The conditions for effective use of VR / AR technologies in the media content of new media, and the involvement of viewers in changing stories with augmented reality, are determined. The so-called immersive effect of VR / AR technologies involves the complete immersion of the interested audience in the essence of the event being relayed. This interaction can be achieved through different types of VR video interactivity.
One of the most important results of using VR content is the spatio-temporal and emotional immersion of viewers in the plot. The recipient turns from an external observer into an internal one, but this constant participation requires that user preferences be taken into account. Factors such as satisfaction, positive reinforcement, empathy, and value influence the choice of VR / AR content by viewers.
APA, Harvard, Vancouver, ISO, and other styles
4

Kalenych, Volodymyr. IMMERSIVE TECHNOLOGIES OF JOURNALISM IN THE UKRAINIAN AND GLOBAL MEDIA SPACE. Ivan Franko National University of Lviv, March 2024. http://dx.doi.org/10.30970/vjo.2024.54-55.12161.

Full text
Abstract:
The article deals with the new technologies of immersive journalism in the Ukrainian and global media space, using specific media as examples. 360° video stands out among the main formats of immersive journalism: the viewer explores the video space, becoming a witness to events. The formats of photogrammetry, virtual reality (VR), 3D panoramas and 3D maps are also immersive. New formats and technologies have revolutionized the media sphere and allowed the creation of more dynamic and interesting stories. Immersive technologies made it possible to transport the audience directly to the center of the news event through the format of 360-degree video and three-dimensional virtual reality, providing the «effect of presence». The format of 3D models and photogrammetry allowed users to interact with stories on a visual level more actively. Immersive technologies have also had a profound impact on the functioning of immersive journalism and fundamentally changed the way audiences interact with news stories. «Radio Svoboda», «Texty», «Ukraїner», «The New York Times», «The Guardian», «Der Tagesspiegel», «WDR» and other media experiment with immersive formats. They give viewers the opportunity to be directly in the center of a news event or to get an interactive, data-rich experience. This immersive approach allowed for increased empathy and understanding on the part of each information consumer, who can feel and see the environments associated with a particular story. Key words: new media, media format, media technology, immersive technologies, immersive journalism.
APA, Harvard, Vancouver, ISO, and other styles
5

Kuzmin, Vyacheslav, Alebai Sabitov, Andrei Reutov, Vladimir Amosov, Lidiia Neupokeva, and Igor Chernikov. Electronic training manual "Providing first aid to the population". SIB-Expertise, January 2024. http://dx.doi.org/10.12731/er0774.29012024.

Full text
Abstract:
First aid represents the simplest urgent measures necessary to save the lives of victims of injuries, accidents and sudden illnesses. Providing first aid greatly increases the chances of survival in case of bleeding, injury, cardiac and respiratory arrest, and prevents complications such as shock, massive blood loss, additional displacement of bone fragments and injury to large nerve trunks and blood vessels. This electronic educational resource consists of four theoretical educational modules: legal aspects of providing first aid to victims and work safety when providing first aid; providing first aid in critical conditions of the body; providing first aid for injuries of various origins; providing first aid in case of extreme exposures, accidents and poisonings. The electronic educational resource materials cover 8 emergency conditions and 11 life-saving measures. The theoretical block of modules is presented through presentations, illustrated lecture texts, a video film and video lectures. Control classes in the form of test control accompany each theoretical module. After studying all modules, the student passes the final test control. Mastering the electronic manual will ensure a high level of readiness to provide first aid among persons without medical education.
6

Felix, Juri, and Laura Webb. Use of artificial intelligence in education delivery and assessment. Parliamentary Office of Science and Technology, January 2024. http://dx.doi.org/10.58248/pn712.

Full text
Abstract:
This POSTnote considers how artificial intelligence (AI) technologies can be used by educators and learners in schools, colleges and universities. Artificial intelligence technologies that can be used in education have developed rapidly in recent years. This has been driven in part by advances in generative AI, which is now capable of performing a wide range of tasks, including the production of realistic content such as text, images, audio and video. Artificial intelligence tools have the potential to provide different ways of learning and to help educators with lesson planning, marking and other tasks. However, adoption of AI in education is still in an early and experimental phase, and there is uncertainty about its benefits and limitations. Some stakeholders have expressed concerns that over-reliance on AI could diminish educator-learner relationships. Concerns also relate to potential negative impacts on learners’ writing and critical thinking skills when work is undertaken by AI. In November 2023, the Department for Education published a report on the use of generative AI in education. The UK Government has also announced an investment of up to £2 million to provide new AI-powered resources for teachers in England.
7

Kaisler, Raphaela, and Thomas Palfinger. Patient and Public Involvement and Engagement (PPIE): Funding, facilitating and evaluating participatory research approaches in Austria. Fteval - Austrian Platform for Research and Technology Policy Evaluation, April 2022. http://dx.doi.org/10.22163/fteval.2022.551.

Full text
Abstract:
The LBG OIS Center established a new Patient and Public Involvement and Engagement (PPIE) Implementation program aimed at actively involving public members in research across different phases of the research cycle, from setting the agenda to disseminating results, and in its governance. The program offers funding and facilitation of these PPIE activities. The first PPIE pilot call was launched in autumn 2020. It supports researchers in Austria with up to EUR 60,000 to implement their PPIE activities. In addition, the program offers support in the form of consultation, training, knowledge exchange and networking opportunities. One important characteristic of the selection process is the composition of the expert panel, which brings together transdisciplinary expertise from different areas (scientific experts, patients, and students). The expert panel recommended 11 out of 25 PPIE projects for funding (a success rate of 44%). 45% of the applicants participated in the support offers prior to the call and 52% in the continuing support offered after the call had closed. Based on our online surveys, participants were overall very satisfied with the support offers. Lessons from the first call concern the eligibility of applicants: in the selection meeting, we found that different understandings of ‘active involvement’ were negotiated among the experts. This proved unproblematic thanks to the open, collaborative atmosphere and the mutual learning opportunity it offered. The panel suggested opening the call to non-research bodies, which entails small changes to the application format, e.g. video- and text-based applications in German and English. Despite these small adaptations in the second PPIE Pilot Call 2021, the funding instrument appears appropriate and offers a low-threshold entry point for researchers introducing public involvement activities into their work.
8

Li, Baisong, and Bo Xu. PR-469-19604-Z01 Auto Diagnostic Method Development for Ultrasonic Flow Meter. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), February 2022. http://dx.doi.org/10.55274/r0012204.

Full text
Abstract:
The objectives of this research are to develop methods for performing ultrasonic flow meter (USM) diagnostic evaluation automatically, together with a software tool and all necessary attachments. USM-based diagnostics have been established, and thirteen categories of knowledge rules derived from existing cases have been learned and integrated. A search engine for relevant standards, specifications, and other documents of the measurement system has been developed, enabling free-text search of document content. With the assistance of modern reasoning techniques, an authorized user only needs to configure an Excel file or scripts to activate the rules of the knowledge base via Drools technology; integrating new knowledge rules is therefore convenient and requires neither professional programming skills nor changes to the software's internal source code. Secondly, a new flow meter diagnostic method is proposed based on multiple information methodologies, drawing on real-time measurement data, operation data, and video data where applicable. The method is intended to identify abnormal states of the measurement system in real time with the assistance of the knowledge rules and to provide a strategy for mitigating meter error in components of the measurement system. Thirdly, the applications of Gaussian quadrature diagnostics to daily acquisition nomination changes and compressor-induced pulsating flow scenarios have been investigated, and the results are presented in the document. They show that the measurement uncertainty caused by compressor-induced pulsating flows is considerable, while that caused by daily acquisition nomination changes is relatively small. The software was then developed based on this knowledge, the multiple-information approach, and the applications of the Gaussian quadrature diagnostics method, with all necessary attachments. The architecture, the algorithm, and a few examples are introduced.
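The Gaussian quadrature principle behind multipath USM diagnostics can be illustrated with a minimal sketch (this is not the report's implementation; the function name and the unit-radius pipe are assumptions for illustration): the meter's acoustic chords sit at Gauss-Legendre node heights, and the mean velocity is a weighted sum of the per-chord velocities.

```python
import numpy as np

def mean_velocity(chord_velocities):
    """Estimate mean axial velocity in a unit-radius circular pipe from
    velocities measured on chords placed at Gauss-Legendre node heights."""
    n = len(chord_velocities)
    # Nodes and weights of the n-point Gauss-Legendre rule on [-1, 1]
    nodes, weights = np.polynomial.legendre.leggauss(n)
    # Volumetric flow: integral of v(x) * 2*sqrt(1 - x^2) over [-1, 1],
    # where 2*sqrt(1 - x^2) is the chord length at height x
    flow = np.sum(weights * np.asarray(chord_velocities)
                  * 2.0 * np.sqrt(1.0 - nodes**2))
    # Divide by the cross-sectional area (pi for a unit-radius pipe)
    return flow / np.pi
```

For a uniform velocity profile of 1.0 on four chords this returns roughly 1.0 (the small residual arises because sqrt(1 - x^2) is not a polynomial); diagnostic schemes compare such quadrature-based estimates against the meter's reported flow to flag abnormal profiles.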
9

Prudkov, Mikhail, Vasily Ermolaev, Elena Shurygina, and Eduard Mikaelyan. Electronic educational resource "Hospital Surgery for 5th year students of the Faculty of Pediatrics". SIB-Expertise, January 2024. http://dx.doi.org/10.12731/er0780.29012024.

Full text
Abstract:
This electronic educational resource was created for the independent work of 5th-year students of the pediatric faculty studying the discipline "Hospital Surgery", with provision for monitoring by the teacher. The EER includes an introductory module, a topic module, and a quality assessment module. Each of the 19 topics consists of the following sections: educational and methodological tasks on the topic, an abstract of the topic, control tests on the topic, clinical situational tasks on the topic, and a list of references. The "Summary of the topic" section may currently be presented as a text file, a presentation, a video lecture, or a monograph by the staff of the department, and is gradually updated with new materials. The "Control tests on the topic" section allows the teacher to monitor students' independent work and contains 15 tests; 10 minutes and two attempts are allowed, with 71% correct answers required to pass. The "Clinical situational tasks" section serves for self-assessment: if the student understands the content of a task, makes a preliminary diagnosis and knows the tactics for managing the patient, the topic has been mastered. There are ten clinical situational tasks for each topic, and students receive different versions of the tasks. In addition, the EER has a "Final test control" section containing test tasks from all topics of the practical classes. The program randomly generates a final test of 30 tasks; 20 minutes are allotted, the student has two attempts, and more than 71% correct answers are required to pass.
10

Initiation au logiciel Nvivo. Instats Inc., 2023. http://dx.doi.org/10.61700/fzb3icwf9pox5469.

Full text
Abstract:
During this four-half-day training on the qualitative analysis software NVivo, participants will explore the fundamentals of the software, with a first day devoted to an in-depth introduction to the interface and basic features. The second day focuses on managing codes and coding various types of documents such as text, PDF and images. The third day deepens these skills with sessions on frequency queries and text searches, as well as creating cases and managing attributes for more advanced analysis. Finally, on the fourth day, participants will explore matrix coding queries and crosstabs, with particular attention to integrating videos into the NVivo environment. For European doctoral students, the seminar offers 1 ECTS credit.