Selected scholarly literature on the topic "Réponse aux questions vidéo"
Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles
Consult the list of current articles, books, theses, conference proceedings, and other scholarly sources relevant to the topic "Réponse aux questions vidéo".
Next to every source in the list of references there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a .pdf file and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Réponse aux questions vidéo"
Bégin, Mathieu. "Quand des adolescents font une vidéo sur la cyberintimidation : une action citoyenne ?". Lien social et Politiques, no. 80 (March 22, 2018): 128–48. http://dx.doi.org/10.7202/1044113ar.
Haffen, E. "Le ralentissement psychomoteur : une dimension à (re)découvrir ?" European Psychiatry 29, S3 (November 2014): 578–79. http://dx.doi.org/10.1016/j.eurpsy.2014.09.277.
LAMORT-BOUCHE, M., T. PIPARD, C. PIGACHE, A. MOREAU, J.-B. FASSIER, and F. ZORZI. "Enseigner la conduite d’entretien semi-directif en recherche qualitative. Développement et évaluation d’un kit d’auto-apprentissage avec vidéo modèle et contre-modèle". EXERCER 31, no. 166 (October 1, 2020): 365–71. http://dx.doi.org/10.56746/exercer.2020.166.365.
Bardol, T., A. Cubisino, and R. Souche. "Réponse aux commentaires sur « jéjunostomie par voie laparoscopique (avec vidéo) »". Journal de Chirurgie Viscérale 158, no. 4 (August 2021): 400–401. http://dx.doi.org/10.1016/j.jchirv.2021.05.002.
Béjà, Vincent. "Réponse aux questions de Patrice Ranjard". Gestalt 41, no. 1 (2012): 183. http://dx.doi.org/10.3917/gest.041.0183.
Lambelet, Daniel. "« Image, dis-moi… » ou la formation à l’aide de la vidéo". Articles 16, no. 3 (November 19, 2009): 405–14. http://dx.doi.org/10.7202/900676ar.
Fioraso, Geneviève, and Jean-Luc Cacciali. "Quelques éléments de réponse aux questions sur la jeunesse". La revue lacanienne 18, no. 1 (2017): 202. http://dx.doi.org/10.3917/lrl.171.0202.
Bovas, Magali, Etienne Chabloz, and Vanessa Lentillon-Kaestner. "Projet « Lü_Move & Learn »". L'Education physique en mouvement, no. 5 (December 18, 2022): 23–26. http://dx.doi.org/10.26034/vd.epm.2021.3522.
Erard, Yves. "De l'énonciation à l'enaction. L'inscription corporelle de la langue". Cahiers du Centre de Linguistique et des Sciences du Langage, no. 11 (April 9, 2022): 91–121. http://dx.doi.org/10.26034/la.cdclsl.1998.1842.
Gordon, Robert J., Ian Dew-Becker, and Gérard Cornilleau. "Questions sans réponse sur l'augmentation des inégalités aux États-Unis". Revue de l'OFCE 102, no. 3 (2007): 417. http://dx.doi.org/10.3917/reof.102.0417.
Texto completo da fonteTeses / dissertações sobre o assunto "Réponse aux questions vidéo"
Engin, Deniz. "Video question answering with limited supervision". Electronic Thesis or Diss., Université de Rennes (2023-....), 2024. http://www.theses.fr/2024URENS016.
Video content has significantly increased in volume and diversity in the digital era, and this expansion has highlighted the necessity for advanced video understanding technologies. Driven by this necessity, this thesis explores semantic understanding of videos, leveraging multiple perceptual modes similar to human cognitive processes, and efficient learning under limited supervision similar to human learning capabilities. The thesis specifically focuses on video question answering as one of the main video understanding tasks. Our first contribution addresses long-range video question answering, which requires an understanding of extended video content: while recent approaches rely on human-generated external sources, we process raw data to generate video summaries. Our second contribution explores zero-shot and few-shot video question answering, aiming to enhance efficient learning from limited data. We leverage the knowledge of existing large-scale models and remove the obstacles that arise in adapting pre-trained models to limited data. We demonstrate that these contributions significantly enhance the capabilities of multimodal video question answering systems, particularly where human-annotated labeled data is limited or unavailable.
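As a rough, hypothetical illustration of the zero-shot setting this abstract describes (not the thesis's own method, which builds video summaries and adapts frozen large-scale models), the sketch below answers a question about a video by uniformly sampling frames and pooling per-frame answers from an off-the-shelf image VQA model; the checkpoint, frame count, and majority-vote pooling are all assumptions made for the example.

```python
# Minimal zero-shot video QA baseline: sample frames, query an image VQA
# model per frame, then majority-vote the answers. Illustrative sketch only;
# this is not the method proposed in the thesis.
from collections import Counter

import cv2
from PIL import Image
from transformers import BlipForQuestionAnswering, BlipProcessor

processor = BlipProcessor.from_pretrained("Salesforce/blip-vqa-base")
model = BlipForQuestionAnswering.from_pretrained("Salesforce/blip-vqa-base")

def answer_video_question(video_path: str, question: str, num_frames: int = 8) -> str:
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    answers = []
    for i in range(num_frames):
        # Seek to uniformly spaced frame indices across the video.
        cap.set(cv2.CAP_PROP_POS_FRAMES, i * total // num_frames)
        ok, frame = cap.read()
        if not ok:
            continue
        image = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        inputs = processor(image, question, return_tensors="pt")
        output = model.generate(**inputs)
        answers.append(processor.decode(output[0], skip_special_tokens=True))
    cap.release()
    # Pool the per-frame answers by majority vote.
    return Counter(answers).most_common(1)[0][0] if answers else ""

print(answer_video_question("example.mp4", "What sport is being played?"))
```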
Lerner, Paul. "Répondre aux questions visuelles à propos d'entités nommées". Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG074.
This thesis is positioned at the intersection of several research fields, Natural Language Processing, Information Retrieval (IR), and Computer Vision, which have unified around representation learning and pre-training methods. In this context, we defined and studied a new multimodal task: Knowledge-based Visual Question Answering about Named Entities (KVQAE). We were particularly interested in cross-modal interactions and in different ways of representing named entities. We also focused on the data used to train and, more importantly, evaluate Question Answering systems through different metrics. More specifically, we proposed a dataset for this purpose, the first in KVQAE comprising various types of entities. We also defined an experimental framework for dealing with KVQAE in two stages through an unstructured knowledge base, and identified IR as the main bottleneck of KVQAE, especially for questions about non-person entities. To improve the IR stage, we studied different multimodal fusion methods, which are pre-trained through an original task: the Multimodal Inverse Cloze Task. We found that these models leverage a cross-modal interaction that we had not originally considered, and which may address the heterogeneity of visual representations of named entities. These results were strengthened by a study of the CLIP model, which allows this cross-modal interaction to be modeled directly. These experiments were carried out while staying aware of biases present in the dataset or evaluation metrics, especially textual biases, which affect any multimodal task.
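The direct cross-modal interaction attributed to CLIP above can be made concrete: CLIP embeds images and text in a shared space, so an image of a named entity can be scored against candidate knowledge-base passages without intermediate captioning. The sketch below is a generic example of such scoring via Hugging Face transformers, not the retrieval pipeline of the thesis; the image file and passages are placeholders.

```python
# Score an image directly against text passages in CLIP's shared embedding
# space -- the kind of direct cross-modal interaction the abstract mentions.
# Generic sketch; not the thesis's retrieval pipeline.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("question_image.jpg")  # placeholder: photo of a named entity
passages = [
    "The Eiffel Tower is a wrought-iron lattice tower in Paris.",
    "Big Ben is the nickname of the Great Bell of the clock in London.",
]

inputs = processor(text=passages, images=image,
                   return_tensors="pt", padding=True, truncation=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image has shape (num_images, num_passages); a higher score means
# a better image-passage match, so argmax picks the most relevant passage.
best = outputs.logits_per_image.argmax(dim=-1).item()
print(passages[best])
```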
Perrin, Olivier. "Des énigmes de la recherche d'information : contribution à l'analyse du couple question-réponse dans le processus d'échec documentaire chez les professionnels de l'information". PhD thesis, Université Paris VIII Vincennes-Saint Denis, 2013. http://tel.archives-ouvertes.fr/tel-01022475.
Texto completo da fonteBahdanau, Dzmitry. "On sample efficiency and systematic generalization of grounded language understanding with deep learning". Thesis, 2020. http://hdl.handle.net/1866/23943.
By using the methodology of deep learning, which advocates relying more on data and flexible neural models than on expert knowledge of the domain, the research community has recently achieved remarkable progress in natural language understanding and generation. Nevertheless, it remains unclear whether simply scaling up existing deep learning methods will be sufficient to achieve the goal of using natural language for human-computer interaction. We focus on two related aspects in which current methods appear to require major improvements. The first is the data inefficiency of deep learning systems: they are known to require extreme amounts of data to perform well. The second is their limited ability to generalize systematically, namely to understand language in situations where the data distribution changes yet the principles of syntax and semantics remain the same. In this thesis, we present four case studies in which we seek to bring more clarity to these data efficiency and systematic generalization aspects of deep learning approaches to language understanding, and to facilitate further work on these topics. In order to separate the problem of representing open-ended real-world knowledge from the problem of core language learning, we conduct all these studies using synthetic languages that are grounded in simple visual environments. In the first article, we study how to train agents to follow compositional instructions in environments with a restricted form of supervision: for every instruction and initial environment configuration, we only provide a goal state instead of a complete trajectory with actions at all steps. We adapt adversarial imitation learning methods to this setting and demonstrate that such a restricted form of data is sufficient to learn the compositional meanings of the instructions. Our second article also focuses on instruction following. We develop the BabyAI platform to facilitate further, more extensive and rigorous studies of this setup. The platform features a compositional Baby language with 10^19 instructions, whose semantics is precisely defined in a partially observable gridworld environment. We report baseline results on how much supervision is required to teach the agent certain subsets of the Baby language with different training methods, such as reinforcement learning and imitation learning. In the third article, we study the systematic generalization of visual question answering (VQA) models. In the VQA setting, the system must answer compositional questions about images. We construct a dataset of spatial questions about object pairs and evaluate how well different models perform on questions about pairs of objects that never occurred in the same question in the training distribution. We show that models in which word meanings are represented by separate modules that perform independent computation generalize much better than models whose design is not explicitly modular. The modular models, however, generalize well only when the modules are connected in an appropriate layout, and our experiments highlight the challenges of learning the layout by end-to-end learning on the training distribution. In our fourth and final article, we again study the generalization of VQA models to questions outside of the training distribution, this time using CLEVR, the popular dataset of complex questions about 3D-rendered scenes, as the platform. We generate novel CLEVR-like questions by using similarity-based references (e.g. "the ball that has the same color as ...") in contexts that occur in CLEVR questions but only with location-based references (e.g. "the ball that is to the left of ..."), yielding the CLOSURE benchmark. We analyze zero- and few-shot generalization to CLOSURE after training on CLEVR for a number of existing models as well as a novel one.
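The modularity result described above can be illustrated with a toy, purely symbolic stand-in for neural module networks: each word-level module is an independent function over attention vectors, and a question is answered by composing modules in a question-specific layout. The scene encoding, module names, and layout below are invented for this example; in the actual models the modules are learned networks operating on image features.

```python
# Toy symbolic stand-in for modular VQA: independent per-word modules
# composed in a question-specific layout. The encoding is invented; real
# neural module networks learn these functions over image features.
import numpy as np

def find(scene, attribute):
    # Attend to the objects carrying the given attribute.
    return np.array([float(attribute in obj["attributes"]) for obj in scene])

def relate(scene, attention, relation):
    # Shift attention to objects standing in `relation` to attended objects.
    out = np.zeros(len(scene))
    for i, obj in enumerate(scene):
        for j in range(len(scene)):
            if attention[j] > 0.5 and relation in obj["relations"].get(j, ()):
                out[i] = 1.0
    return out

def exists(attention):
    # Answer head: is anything still attended to?
    return "yes" if attention.max() > 0.5 else "no"

scene = [
    {"attributes": {"red", "cube"}, "relations": {}},
    {"attributes": {"ball"}, "relations": {0: ("left_of",)}},  # ball is left of the cube
]

# Layout for "Is there a ball to the left of the red cube?"
answer = exists(find(scene, "ball") * relate(scene, find(scene, "red"), "left_of"))
print(answer)  # -> yes
```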
Books on the topic "Réponse aux questions vidéo"
Jäger, Theo. Pierre Bayles Philosophie in der "Réponse aux questions d'un provincial". Marburg: Tectum, 2004.
Calvin, Jean. Réponse aux questions et objections d'un certain juif: Transcendance messianique, l'ouverture et l'impensé. Genève: Labor et fides, 2010.
Saïd Ibn Mohammed Al Jazairi. Réponse des Grands Savants: Aux Questions des Musulmans D'Occident. Independently Published, 2021.
Modernité et avant-gardisme de l'art académique: La réponse de l'art aux questions de notre temps, ou, "l'académisme éclectique" : essai. Paris: Pierre-Guillaume de Roux, 2018.
Habich, Eduardo Juan, and Aoust (Louis Stanislas Xavier Barthélemy). Etudes Géométriques et Cinématiques: Note Sur Quelques Questions de Géométrie et de Cinématique et Réponse Aux Réclamations de M. l'abbé Aoust. Creative Media Partners, LLC, 2023.
Connaissance de l'univers et de la Planète Terre: Réponse d'experts Aux Questions d'un Enfant - développement des Connaissances Pour les Enfants et Les écoliers. Independently Published, 2021.
Avis aux Vrais Catholiques, ou Conduite A Tenir dans les Circonstances Actuelles: En Réponse aux Cinq Questions Suivantes: 1. Que Doivent Faire les ... Pasteur Déplacé? 4. Que Doive (French Edition). Forgotten Books, 2018.
Book chapters on the topic "Réponse aux questions vidéo"
"21. Réponses aux questions". In Voyage au coeur de la relation dose-réponse du médicament, 287–90. EDP Sciences, 2020. http://dx.doi.org/10.1051/978-2-7598-1999-7.c025.
FALL, Papis Comakha. "Les mobilités dans l'espace ouest-africain aux XIXe et XXe siècles". In Les Annales du Sud, No. 1, 113–26. Editions des archives contemporaines, 2023. http://dx.doi.org/10.17184/eac.7246.
ARGAUD, Evelyne. "Le cours de culture française". In Quelles compétences en langues, littératures et cultures étrangères ?, 95–104. Editions des archives contemporaines, 2021. http://dx.doi.org/10.17184/eac.3892.
REDONDO, Cécile, and Caroline LADAGE. "L’hétérogénéité des fondements épistémologiques dans l’éducation au développement durable". In Expériences pédagogiques depuis l'Anthropocène, 15–26. Editions des archives contemporaines, 2021. http://dx.doi.org/10.17184/eac.5383.
Barroche, Julien. "Le moment européen". In Le moment européen, 81–114. Hermann, 2024. http://dx.doi.org/10.3917/herm.crign.2024.01.0081.
LEFORT, H., J. PANNETIER, O. BON, P. GINDRE, H. MARSAA, S. TRAVERS, and F. PEDUZZI. "Expert santé du mécanisme de protection civile de l’Union européenne". In Médecine et Armées Vol. 46, No. 3, 289–98. Editions des archives contemporaines, 2018. http://dx.doi.org/10.17184/eac.7345.
Texto completo da fonteRelatórios de organizações sobre o assunto "Réponse aux questions vidéo"
Dudoit, Alain. Les espaces européens communs de données : une initiative structurante nécessaire et adaptable au Canada. CIRANO, October 2023. http://dx.doi.org/10.54932/ryht5065.
Texto completo da fonteGestion de la pandémie de COVID-19 - Analyse de la dotation en personnel dans les centres d'hébergement de soins de longue durée du Québec au cours de la première vague. CIRANO, junho de 2023. http://dx.doi.org/10.54932/fupo1664.