Ready-made bibliography on the topic "Traitement multimodal"
Create correct references in APA, MLA, Chicago, Harvard, and many other citation styles
Consult lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Traitement multimodal".
An "Add to bibliography" button is available next to every work in the bibliography. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, when these are available in the metadata.
Journal articles on the topic "Traitement multimodal"
Wilkniss, Sandra M., Richard H. Hunter, and Steven M. Silverstein. "Traitement multimodal de l’agressivité et de la violence chez des personnes souffrant de psychose". Santé mentale au Québec 29, no. 2 (October 5, 2005): 143–74. http://dx.doi.org/10.7202/010835ar.
Hodaj, H., J. M. Pellat, A. Dumolard, J. J. Banihachemi, B. Rosnoblet, J. P. Alibeu, and C. Jacquot. "TO30 Traitement multimodal de l’algoneurodystrophie rebelle". Douleurs : Evaluation - Diagnostic - Traitement 8 (October 2007): 80. http://dx.doi.org/10.1016/s1624-5687(07)73172-4.
Jacot, W., X. Quantin, S. Valette, F. Khial, and J. L. Pujol. "185 Traitement multimodal des tumeurs épithéliales thymiques". Revue des Maladies Respiratoires 21 (January 2004): 77. http://dx.doi.org/10.1016/s0761-8425(04)71811-2.
Himmighoffen, Holger, and Heinz Böker. "L’importance de l’électroconvulsivothérapie (ECT) dans le traitement multimodal des troubles dépressifs". Psychotherapie-Wissenschaft 10, no. 2 (October 2020): 74–75. http://dx.doi.org/10.30820/1664-9583-2020-2-74.
Madhi, S., I. Bouassida, A. Abdelkebir, H. Zribi, M. Abdennadher, S. Zairi, A. Ben Mansour, and A. Marghli. "Traitement multimodal des tumeurs de la trachée : résultats chirurgicaux et oncologiques". Revue des Maladies Respiratoires Actualités 15, no. 1 (January 2023): 141. http://dx.doi.org/10.1016/j.rmra.2022.11.202.
Bonnette, P. "Mésothéliome pleural : où en sont la chirurgie radicale et le traitement multimodal ?" Revue de Pneumologie Clinique 67, no. 4 (September 2011): 184–90. http://dx.doi.org/10.1016/j.pneumo.2011.04.002.
Elias, D. "Rationnels de la chirurgie oncologique au sein d’un traitement multimodal des cancers". Journal de Chirurgie 142, no. 5 (September 2005): 284–90. http://dx.doi.org/10.1016/s0021-7697(05)80931-7.
Abrous-Anane, S., A. Savignoni, C. Daveau, J. Y. Pierga, C. Gautier, R. Dendale, F. Campana, Y. Kirova, A. Fourquet, and M. Bollet. "Traitement multimodal du cancer du sein inflammatoire : quelle place pour la chirurgie ?" Cancer/Radiothérapie 13, no. 6-7 (October 2009): 690. http://dx.doi.org/10.1016/j.canrad.2009.08.122.
Simeon, C., C. Pepin-Richard, and M. Fine. "Traitement multimodal d’un carcinome mammaire inflammatoire : chirurgie, chimiothérapie et AINS COX-2 sélectif". Pratique Médicale et Chirurgicale de l'Animal de Compagnie 48, no. 3 (July 2013): 79–86. http://dx.doi.org/10.1016/j.anicom.2013.03.002.
Heuberger, Schneider, and Bodis. "Stellenwert der Radiotherapie beim Nicht-kleinzelligen Bronchuskarzinom". Praxis 91, no. 33 (August 1, 2002): 1307–14. http://dx.doi.org/10.1024/0369-8394.91.33.1307.
Doctoral dissertations on the topic "Traitement multimodal"
Dourlens, Sébastien. "Multimodal interaction semantic architecture for ambient intelligence". Versailles-St Quentin en Yvelines, 2012. http://www.theses.fr/2012VERS0011.
Many avenues remain to be explored to improve human-system interaction. Such systems must be able to take advantage of the environment in order to improve interaction, extending the capability of the system (machine or robot) to better approach the natural language used by human beings. We propose a methodology to solve the multimodal interaction problem, adapted to several contexts, by defining and modelling a distributed architecture relying on W3C standards and web services (semantic agents and input/output services) working in an ambient-intelligence environment. This architecture is embedded in a multi-agent system modelling technique. To achieve this goal, we model the environment using a knowledge representation and communication language (EKRL, ontology). The resulting semantic environment model is used in two main semantic inference processes: fusion and fission of events at different levels of abstraction, considered as two context-aware operations. The fusion operation interprets and understands the environment and detects the scenario taking place. The multimodal fission operation interprets the scenario, divides it into elementary tasks, and executes these tasks, which requires the discovery, selection, and composition of appropriate services in the environment to accomplish various aims. Adaptation to the environmental context is based on a multilevel reinforcement-learning technique. The overall fusion and fission architecture is validated within our framework (agents, services, EKRL concentrator) through performance analyses on use cases such as monitoring and assistance in daily activities at home and in town.
Chlaily, Saloua. "Modèle d'interaction et performances du traitement du signal multimodal". Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAT026/document.
The joint processing of multimodal measurements is supposed to lead to better performance than that obtained using a single modality or several modalities independently. However, the literature contains examples showing that this is not always true. In this thesis, we analyze, in terms of mutual information and estimation error, the different situations of multimodal analysis in order to determine the conditions for achieving optimal performance. In the first part, we consider the simple case of two or three modalities, each associated with a noisy measurement of a signal. These modalities are linked through the correlations between the useful parts of the signals and the correlations between the noises. We show that performance improves if the links between the modalities are exploited. In the second part, we study the impact on performance of wrong assumptions about the links between modalities. We show that these false assumptions degrade performance, which can become lower than that achieved using a single modality. In the general case, we model the multiple modalities as a noisy Gaussian channel. We then extend results from the literature by considering the impact of errors in the signal and noise probability densities on the information transmitted by the channel. We analyze this relationship in the case of a simple two-modality model. Our results show, in particular, the unexpected fact that a double mismatch of the noise and the signal can sometimes compensate for each other and thus lead to very good performance.
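The gain from exploiting the link between two modalities can be illustrated with a minimal numerical sketch. This is a hedged toy example, not taken from the thesis: the signal and noise variances, and the linear MMSE estimators below, are illustrative assumptions for the simplest Gaussian two-modality case.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x = rng.normal(0.0, 1.0, n)          # useful signal, unit variance
y1 = x + rng.normal(0.0, 1.0, n)     # modality 1: independent unit-variance noise
y2 = x + rng.normal(0.0, 1.0, n)     # modality 2: independent unit-variance noise

# Linear MMSE estimators for this Gaussian model
x_single = y1 / 2.0                  # E[x | y1]      -> theoretical MSE 1/2
x_joint = (y1 + y2) / 3.0            # E[x | y1, y2]  -> theoretical MSE 1/3

mse_single = np.mean((x - x_single) ** 2)
mse_joint = np.mean((x - x_joint) ** 2)
print(round(mse_single, 2), round(mse_joint, 2))  # ≈ 0.5 and ≈ 0.33
```

Using both modalities jointly lowers the estimation error; conversely, as the abstract notes, wiring in a wrong link between the modalities can push the error above the single-modality level.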
Caglayan, Ozan. "Multimodal Machine Translation". Thesis, Le Mans, 2019. http://www.theses.fr/2019LEMA1016/document.
Machine translation aims at automatically translating documents from one language to another without human intervention. With the advent of deep neural networks (DNN), neural approaches to machine translation started to dominate the field, reaching state-of-the-art performance in many languages. Neural machine translation (NMT) also revived interest in interlingual machine translation, since it naturally fits the task into an encoder-decoder framework that produces a translation by decoding a latent source representation. Combined with the architectural flexibility of DNNs, this framework paved the way for further research in multimodality, with the objective of augmenting the latent representations with other modalities such as vision or speech. This thesis focuses on a multimodal machine translation (MMT) framework that integrates a secondary visual modality to achieve better and visually grounded language understanding. I specifically worked with a dataset containing images and their translated descriptions, where visual context can be useful for word sense disambiguation, missing-word imputation, or gender marking when translating from a language with gender-neutral nouns to one with a grammatical gender system, as is the case from English to French. I propose two main approaches to integrate the visual modality: (i) a multimodal attention mechanism that learns to take into account both sentence and convolutional visual representations, and (ii) a method that uses global visual feature vectors to prime the sentence encoders and the decoders. Through automatic and human evaluation conducted on multiple language pairs, the proposed approaches were shown to be beneficial.
Finally, I show that by systematically removing certain linguistic information from the input sentences, the true strength of both methods emerges: they successfully impute missing nouns and colors, and can even translate when parts of the source sentences are completely removed.
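The multimodal attention idea in approach (i) can be sketched minimally. This is a hedged illustration, not the thesis's exact architecture: the dimensions, the dot-product scoring, and the additive combination of the two context vectors are all assumptions made for brevity.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def multimodal_attention(h_dec, H_txt, H_img):
    """Attend separately over source-word states and convolutional
    image regions, then combine the two context vectors (simplified)."""
    a_txt = softmax(H_txt @ h_dec)    # attention weights over source tokens
    a_img = softmax(H_img @ h_dec)    # attention weights over image regions
    return a_txt @ H_txt + a_img @ H_img

rng = np.random.default_rng(1)
d = 128
H_txt = rng.normal(size=(12, d))      # encoder states for 12 source tokens
H_img = rng.normal(size=(49, d))      # 7x7 conv feature map, flattened
h_dec = rng.normal(size=d)            # current decoder state
ctx = multimodal_attention(h_dec, H_txt, H_img)
print(ctx.shape)                      # (128,)
```

At each decoding step the context vector mixes textual and visual evidence, which is what lets the visual modality fill in words removed from the source sentence.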
Choumane, Ali. "Traitement générique des références dans le cadre multimodal parole-image-tactile". Rennes: [s.n.], 2008. ftp://ftp.irisa.fr/techreports/theses/2008/choumane.pdf.
Choumane, Ali. "Traitement générique des références dans le cadre multimodal parole-image-tactile". Rennes 1, 2008. ftp://ftp.irisa.fr/techreports/theses/2008/choumane.pdf.
We are interested in multimodal human-computer communication systems that use the following modes: speech, gesture, and vision. The user communicates with the system by oral utterances in natural language and/or by gesture. The user's request contains his or her goal and the designation of the objects (referents) required for its realisation. The system should identify the designated objects precisely and unambiguously. In this context, we aim to improve the understanding process for multimodal requests. Hence, we propose a generic set of processes for the modalities, for fusion, and for reference resolution. The main aspects of the realisation consist in modelling natural language processing in a speech environment, gesture processing, and the visual context (use of visual salience), while taking into account the difficulties of the multimodal context: speech-recognition errors, natural-language ambiguity, gesture imprecision due to the user's performance, and designation ambiguity due to the perception of the displayed objects or to the display topology. To complete the interpretation of the user's request, we propose a method for fusing and verifying the results of the modality processing in order to identify the objects designated by the user.
Sarrut, David. "Recalage multimodal et plate-forme d'imagerie médicale à accès distant". [S.l.]: [s.n.], 2000. http://demeter.univ-lyon2.fr:8080/sdx/theses/lyon2/2000/sarrut_d.
Sarrut, David. "Recalage multimodal et plate-forme d'imagerie médicale à accès distant". Lyon 2, 2000. http://theses.univ-lyon2.fr/documents/lyon2/2000/sarrut_d.
Pełny tekst źródłaCadène, Rémi. "Deep Multimodal Learning for Vision and Language Processing". Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS277.
Digital technologies have become instrumental in transforming our society. Recent statistical methods have been successfully deployed to automate the processing of the growing amount of images, videos, and texts we produce daily. In particular, deep neural networks have been adopted by the computer vision and natural language processing communities for their ability to perform accurate image recognition and text understanding once trained on big sets of data. Advances in both communities built the groundwork for new research problems at the intersection of vision and language. Integrating language into visual recognition could have an important impact on human life through the creation of real-world applications such as next-generation search engines or AI assistants. In the first part of this thesis, we focus on systems for cross-modal text-image retrieval. We propose a learning strategy to efficiently align both modalities while structuring the retrieval space with semantic information. In the second part, we focus on systems able to answer questions about an image. We propose a multimodal architecture that iteratively fuses the visual and textual modalities using a factorized bilinear model while modeling pairwise relationships between each region of the image. In the last part, we address issues related to biases in the modeling. We propose a learning strategy to reduce the language biases commonly present in visual question answering systems.
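The factorized bilinear fusion mentioned in this abstract rests on a low-rank idea that can be sketched in a few lines. This is a hedged simplification, not the thesis's exact model: the dimensions, the random projections, and the single-rank-space interaction below are illustrative assumptions.

```python
import numpy as np

def factorized_bilinear(v, q, Wv, Wq):
    """Low-rank bilinear fusion: project each modality into a shared
    rank-R space and interact them with an elementwise product,
    approximating a full bilinear form v^T W q at far lower cost."""
    return (Wv @ v) * (Wq @ q)

rng = np.random.default_rng(2)
dv, dq, R = 2048, 310, 64             # illustrative dimensions
v = rng.normal(size=dv)               # visual region feature
q = rng.normal(size=dq)               # question embedding
Wv = rng.normal(size=(R, dv)) / np.sqrt(dv)
Wq = rng.normal(size=(R, dq)) / np.sqrt(dq)
fused = factorized_bilinear(v, q, Wv, Wq)
print(fused.shape)                    # (64,)
```

A full bilinear interaction would require a dv x dq x R tensor of parameters; the two projection matrices make the pairwise visual-textual interaction tractable.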
Chen, Jianan. "Deep Learning Based Multimodal Retrieval". Electronic Thesis or Diss., Rennes, INSA, 2023. http://www.theses.fr/2023ISAR0019.
Multimodal tasks play a crucial role in the progression towards general artificial intelligence (AI). The primary goal of multimodal retrieval is to employ machine learning algorithms to extract relevant semantic information, bridging the gap between different modalities such as visual images, linguistic text, and other data sources. It is worth noting that the information entropy associated with heterogeneous data carrying the same high-level semantics varies considerably, which poses a significant challenge for multimodal models. Deep learning-based multimodal network models provide an effective solution to the difficulties arising from these substantial differences in information entropy. They exhibit impressive accuracy and stability in large-scale cross-modal information matching tasks, such as image-text retrieval. Furthermore, they demonstrate strong transfer-learning capabilities, enabling a well-trained model from one multimodal task to be fine-tuned and applied to a new multimodal task, even in few-shot or zero-shot scenarios. In our research, we develop a novel generative multimodal multi-view database specifically designed for the multimodal referential segmentation task. Additionally, we establish a state-of-the-art (SOTA) benchmark and a multi-view metric for referring expression segmentation models in the multimodal domain. The results of our comparative experiments are presented visually, providing clear and comprehensive insights.
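The core mechanics of cross-modal retrieval described here can be sketched minimally: embeddings from both modalities live in a shared space, and retrieval is nearest-neighbor search under cosine similarity. This is a hedged toy example with synthetic embeddings; real systems learn the shared space with deep encoders.

```python
import numpy as np

def retrieve(query_emb, gallery_embs, k=3):
    """Rank gallery items by cosine similarity to a query embedding,
    assuming both modalities were mapped into a shared space."""
    q = query_emb / np.linalg.norm(query_emb)
    g = gallery_embs / np.linalg.norm(gallery_embs, axis=1, keepdims=True)
    scores = g @ q
    return np.argsort(-scores)[:k]

rng = np.random.default_rng(3)
d = 256
image_embs = rng.normal(size=(100, d))                 # gallery of image embeddings
text_emb = image_embs[42] + 0.1 * rng.normal(size=d)   # caption embedded near image 42
top = retrieve(text_emb, image_embs)
print(top[0])                                          # 42: the matching image ranks first
```

The entropy gap the abstract mentions shows up here as the difficulty of learning encoders that place a short caption and a dense image close together in this shared space.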
Delecraz, Sébastien. "Approches jointes texte/image pour la compréhension multimodale de documents". Thesis, Aix-Marseille, 2018. http://www.theses.fr/2018AIXM0634/document.
Human understanding is essentially multimodal: to understand the world around them, human beings fuse the information coming from all of their sensory receptors. Most documents used in automatic information processing contain multimodal information, for example text and image in textual documents, or image and sound in video documents; however, the processing applied to them is most often monomodal. The aim of this thesis is to propose joint processes, applying mainly to text and image, for the handling of multimodal documents, through two studies: one on multimodal fusion for speaker-role recognition in television broadcasts, the other on the complementarity of modalities for a linguistic-analysis task on corpora of captioned images. In the first part, we are interested in the analysis of audiovisual documents from news television channels, and we propose an approach that relies in particular on deep neural networks for the representation and fusion of the modalities. In the second part, we are interested in approaches that use several sources of multimodal information for a monomodal natural language processing task, in order to study their complementarity. We propose a complete system for correcting prepositional attachments using visual information, trained on a multimodal corpus of captioned images.
Books on the topic "Traitement multimodal"
Lopez, Gérard, and Aurore Sabouraud-Séguin, eds. Psychothérapie des victimes: Le traitement multimodal du psychotraumatisme. Paris: Dunod, 1998.
Marques, Ferran, Knovel (Firm), and ScienceDirect (Online service), eds. Multimodal signal processing: Theory and applications for human-computer interaction. Amsterdam: Academic, 2010.
Interactive Multimodal Information Management. Taylor & Francis Group, 2014.
Bourlard, Hervé. Interactive Multimodal Information Management. Presses Polytechniques et Universitaires Romandes, 2021.
Delgado, Ramon Lopez, and Masahiro Araki. Spoken, Multilingual and Multimodal Dialogue Systems. John Wiley & Sons, Incorporated, 2007.
Spoken, Multilingual and Multimodal Dialogue Systems: Development and Assessment. Wiley, 2005.
Book chapters on the topic "Traitement multimodal"
Colón de Carvajal, Isabel. "Chapitre 8. Traitement multimodal des données versus analyse multimodale des interactions : perspective de l’ethnométhodologie et de l’analyse conversationnelle". In Multimodalité du langage dans les interactions et l’acquisition, 211–51. UGA Éditions, 2019. http://dx.doi.org/10.4000/books.ugaeditions.10992.
DAUL, Christian, and Walter BLONDEL. "Imagerie endoscopique multimodale et multispectrale à champ de vue étendu". In Imageries optiques non conventionnelles pour la biologie, 207–45. ISTE Group, 2023. http://dx.doi.org/10.51926/iste.9132.ch7.
Coqueugniot, Hélène. "Paléo-imagerie par rayons X : une méthode d’exploration transdisciplinaire, de l’archéologie à la chirurgie". In Regards croisés: quand les sciences archéologiques rencontrent l'innovation, 139–56. Editions des archives contemporaines, 2017. http://dx.doi.org/10.17184/eac.3794.