Scientific literature on the topic "Multimodal Knowledge Representation"
Create an accurate reference in APA, MLA, Chicago, Harvard, and several other styles.
Browse thematic lists of journal articles, books, dissertations, conference reports, and other academic sources on the topic "Multimodal Knowledge Representation".
You can also download the full text of a scholarly publication as a PDF and read its abstract online when this information is included in the metadata.
Journal articles on the topic "Multimodal Knowledge Representation"
Azañón, Elena, Luigi Tamè, Angelo Maravita, Sally A. Linkenauger, Elisa R. Ferrè, Ana Tajadura-Jiménez, and Matthew R. Longo. "Multimodal Contributions to Body Representation." Multisensory Research 29, no. 6-7 (2016): 635–61. http://dx.doi.org/10.1163/22134808-00002531.
Coelho, Ana, Paulo Marques, Ricardo Magalhães, Nuno Sousa, José Neves, and Victor Alves. "A Knowledge Representation and Reasoning System for Multimodal Neuroimaging Studies." Inteligencia Artificial 20, no. 59 (February 6, 2017): 42. http://dx.doi.org/10.4114/intartif.vol20iss59pp42-52.
Bruni, E., N. K. Tran, and M. Baroni. "Multimodal Distributional Semantics." Journal of Artificial Intelligence Research 49 (January 23, 2014): 1–47. http://dx.doi.org/10.1613/jair.4135.
Toraldo, Maria Laura, Gazi Islam, and Gianluigi Mangia. "Modes of Knowing." Organizational Research Methods 21, no. 2 (July 14, 2016): 438–65. http://dx.doi.org/10.1177/1094428116657394.
Gül, Davut, and Bayram Costu. "To What Extent Do Teachers of Gifted Students Identify Inner and Intermodal Relations in Knowledge Representation?" Mimbar Sekolah Dasar 8, no. 1 (April 30, 2021): 55–80. http://dx.doi.org/10.53400/mimbar-sd.v8i1.31333.
Tomskaya, Maria, and Irina Zaytseva. "MULTIMEDIA REPRESENTATION OF KNOWLEDGE IN ACADEMIC DISCOURSE." Verbum 8, no. 8 (January 19, 2018): 129. http://dx.doi.org/10.15388/verb.2017.8.11357.
Cholewa, Wojciech, Marcin Amarowicz, Paweł Chrzanowski, and Tomasz Rogala. "Development Environment for Diagnostic Multimodal Statement Networks." Key Engineering Materials 588 (October 2013): 74–83. http://dx.doi.org/10.4028/www.scientific.net/kem.588.74.
Prieto-Velasco, Juan Antonio, and Clara I. López Rodríguez. "Managing graphic information in terminological knowledge bases." Terminology 15, no. 2 (November 11, 2009): 179–213. http://dx.doi.org/10.1075/term.15.2.02pri.
Laenen, Katrien, and Marie-Francine Moens. "Learning Explainable Disentangled Representations of E-Commerce Data by Aligning Their Visual and Textual Attributes." Computers 11, no. 12 (December 10, 2022): 182. http://dx.doi.org/10.3390/computers11120182.
Li, Jinghua, Runze Liu, Dehui Kong, Shaofan Wang, Lichun Wang, Baocai Yin, and Ronghua Gao. "Attentive 3D-Ghost Module for Dynamic Hand Gesture Recognition with Positive Knowledge Transfer." Computational Intelligence and Neuroscience 2021 (November 18, 2021): 1–12. http://dx.doi.org/10.1155/2021/5044916.
Texte intégralThèses sur le sujet "Multimodal Knowledge Representation"
Palframan, Shirley Anne. "Multimodal representation and the making of knowledge: a social semiotic excavation of learning sites." Thesis, University College London (University of London), 2006. http://discovery.ucl.ac.uk/10019283/.
Guo, Xuan. "Discovering a Domain Knowledge Representation for Image Grouping: Multimodal Data Modeling, Fusion, and Interactive Learning." Thesis, Rochester Institute of Technology, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10603860.
Texte intégralIn visually-oriented specialized medical domains such as dermatology and radiology, physicians explore interesting image cases from medical image repositories for comparative case studies to aid clinical diagnoses, educate medical trainees, and support medical research. However, general image classification and retrieval approaches fail in grouping medical images from the physicians' viewpoint. This is because fully-automated learning techniques cannot yet bridge the gap between image features and domain-specific content for the absence of expert knowledge. Understanding how experts get information from medical images is therefore an important research topic.
As a prior study, we conducted data elicitation experiments, where physicians were instructed to inspect each medical image towards a diagnosis while describing image content to a student seated nearby. Experts' eye movements and their verbal descriptions of the image content were recorded to capture various aspects of expert image understanding. This dissertation aims at an intuitive approach to extracting expert knowledge, which is to find patterns in expert data elicited from image-based diagnoses. These patterns are useful to understand both the characteristics of the medical images and the experts' cognitive reasoning processes.
The transformation from the viewed raw image features to interpretation as domain-specific concepts requires experts' domain knowledge and cognitive reasoning. This dissertation also approximates this transformation using a matrix factorization-based framework, which helps project multiple expert-derived data modalities to high-level abstractions.
To combine additional expert interventions with computational processing capabilities, an interactive machine learning paradigm is developed to treat experts as an integral part of the learning process. Specifically, experts refine medical image groups presented by the learned model locally, to incrementally re-learn the model globally. This paradigm avoids the onerous expert annotations for model training, while aligning the learned model with experts' sense-making.
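The factorization step described in this abstract can be sketched as a joint alternating-least-squares decomposition in which a shared factor couples two data modalities. This is an illustrative sketch only: the function name, dimensions, and solver choice are assumptions, not the dissertation's actual framework.

```python
import numpy as np

def joint_factorize(X1, X2, k=2, iters=100, seed=0):
    """Jointly factorize two modality matrices with a shared factor W.

    Approximates X1 ~ W @ H1 and X2 ~ W @ H2 by alternating least squares.
    W (n x k) is the shared low-dimensional abstraction coupling the
    modalities; H1, H2 are modality-specific loadings.
    """
    rng = np.random.default_rng(seed)
    W = rng.random((X1.shape[0], k))
    for _ in range(iters):
        # With W fixed, each loading matrix is an ordinary least-squares fit.
        H1 = np.linalg.lstsq(W, X1, rcond=None)[0]
        H2 = np.linalg.lstsq(W, X2, rcond=None)[0]
        # With the loadings fixed, refit the shared factor against the
        # horizontally stacked modalities.
        H = np.hstack([H1, H2])
        X = np.hstack([X1, X2])
        W = np.linalg.lstsq(H.T, X.T, rcond=None)[0].T
    return W, H1, H2
```

On noiseless rank-k data this alternation recovers an exact factorization within a few iterations; real elicited data would of course be noisy and need regularization.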
Florén, Henrika. « Shapes of Knowledge : A multimodal study of six Swedish upper secondary students' meaning making and transduction of knowledge across essays and audiovisual presentations ». Thesis, Stockholms universitet, Institutionen för pedagogik och didaktik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-156907.
Adjali, Omar. "Dynamic architecture for multimodal applications to reinforce robot-environment interaction." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLV100.
Texte intégralKnowledge Representation and Reasoning is at the heart of the great challenge of Artificial Intelligence. More specifically, in the context of robotic applications, knowledge representation and reasoning approaches are necessary to solve decision problems that autonomous robots face when it comes to evolve in uncertain, dynamic and complex environments or to ensure a natural interaction in human environment. In a robotic interaction system, information has to be represented and processed at various levels of abstraction: From sensor up to actions and plans. Thus, knowledge representation provides the means to describe the environment with different abstraction levels which allow performing appropriate decisions. In this thesis we propose a methodology to solve the problem of multimodal interaction by describing a semantic interaction architecture based on a framework that demonstrates an approach for representing and reasoning with environment knowledge representation language (EKRL), to enhance interaction between robots and their environment. This framework is used to manage the interaction process by representing the knowledge involved in the interaction with EKRL and reasoning on it to make inference. The interaction process includes fusion of values from different sensors to interpret and understand what is happening in the environment, and the fission which suggests a detailed set of actions that are for implementation. Before such actions are implemented by actuators, these actions are first evaluated in a virtual environment which mimics the real-world environment to assess the feasibility of the action implementation in the real world. During these processes, reasoning abilities are necessary to guarantee a global execution of a given interaction scenario. 
Thus, we provided EKRL framework with reasoning techniques to draw deterministic inferences thanks to unification algorithms and probabilistic inferences to manage uncertain knowledge by combining statistical relational models using Markov logic Networks(MLN) framework with EKRL. The proposed work is validated through scenarios that demonstrate the usability and the performance of our framework in real world applications
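As a toy illustration of the Markov Logic Network machinery this abstract mentions (not the EKRL integration itself), exact inference in a tiny MLN can be performed by enumerating possible worlds and weighting each by the exponential of the summed weights of the formulas it satisfies. The atoms and the smoking rule below are standard textbook examples, not taken from the thesis.

```python
import math
from itertools import product

def mln_marginal(weighted_formulas, atoms, query, evidence=None):
    """Exact marginal P(query = True) in a tiny Markov Logic Network.

    weighted_formulas: list of (weight, formula) pairs; each formula is a
    predicate over a world, i.e. a dict mapping atom name -> bool.
    evidence: optional dict of observed atom values.
    Enumeration is exponential in len(atoms): toy examples only.
    """
    evidence = evidence or {}
    worlds = []
    for values in product([False, True], repeat=len(atoms)):
        world = dict(zip(atoms, values))
        # Keep only worlds consistent with the observed evidence.
        if all(world[a] == v for a, v in evidence.items()):
            worlds.append(world)
    def weight(world):
        # Unnormalized weight: exp of the total weight of satisfied formulas.
        return math.exp(sum(w for w, f in weighted_formulas if f(world)))
    z = sum(weight(world) for world in worlds)
    return sum(weight(world) for world in worlds if world[query]) / z

# Classic soft rule: smoking makes cancer more likely (weight 1.5).
formulas = [(1.5, lambda world: (not world["Smokes"]) or world["Cancer"])]
p_given_smokes = mln_marginal(formulas, ["Smokes", "Cancer"], "Cancer",
                              evidence={"Smokes": True})
# p_given_smokes = exp(1.5) / (1 + exp(1.5)), roughly 0.82
```

Because the rule is weighted rather than hard, the marginal stays strictly below 1, which is exactly the graceful handling of uncertain knowledge that motivates combining MLN-style inference with a representation language.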
Ben Salem, Yosra. "Fusion d'images multimodales pour l'aide au diagnostic du cancer du sein." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2017. http://www.theses.fr/2017IMTA0062/document.
Breast cancer is the most prevalent cancer among women over 40 years old. Studies have shown that early detection and appropriate treatment of breast cancer significantly increase the chances of survival. Mammography is the most widely used tool in the diagnosis of breast lesions. However, this technique may be insufficient to reveal the structures of the breast and the anomalies present. The doctor can use additional imaging modalities such as MRI (Magnetic Resonance Imaging), and then proceeds to a mental fusion of the information in the two images in order to make an adequate diagnosis. To assist the doctor in this process, we propose a solution for merging the two images. Although the idea of fusion seems simple, its implementation poses many problems, related not only to the fusion paradigm in general but also to the nature of medical images, which are generally poorly contrasted and present heterogeneous, inaccurate, and ambiguous data. Mammography and MRI images carry very different representations of information, since they are acquired under different conditions. This raises the following question: how can we pass from the heterogeneous representation of information in image space to a uniform representation space shared by the two modalities? To address this problem, we adopt a multilevel processing approach: the pixel level, the primitive level, the object level, and the scene level. We model the pathological objects extracted from the different images with local ontologies. The fusion is then performed on these local ontologies and results in a global ontology containing the different pieces of knowledge about the pathological objects of the studied case. This global ontology serves to instantiate a reference ontology modelling knowledge of the medical diagnosis of breast lesions.
Case-based reasoning (CBR) is used to provide the diagnostic reports of the most similar cases, which can help the doctor make the best decision. To model the imperfection of the processed information, we use possibility theory together with the ontologies. The final result is a diagnostic report containing the cases most similar to the studied case, with similarity degrees expressed as possibility measures. A 3D symbolic model completes the diagnostic report with a simplified overview of the studied scene.
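The retrieval step of such a CBR system can be sketched as a weighted similarity ranking whose scores, normalised to [0, 1], are read as possibility degrees that a stored diagnosis applies to the query case. The attribute names, weights, and case base below are hypothetical, not taken from the thesis.

```python
def retrieve_similar_cases(query, case_base, weights):
    """Rank stored cases by weighted similarity to the query case.

    All feature values are assumed normalised to [0, 1], so each score is
    itself in [0, 1] and can be interpreted as a possibility degree.
    """
    total = sum(weights.values())
    def similarity(case):
        # Per-attribute closeness 1 - |a - b|, combined by a weighted mean.
        return sum(w * (1.0 - abs(query[k] - case["features"][k]))
                   for k, w in weights.items()) / total
    return sorted(((c["diagnosis"], round(similarity(c), 3)) for c in case_base),
                  key=lambda pair: pair[1], reverse=True)

# Illustrative case base: two past cases with normalised lesion features.
cases = [
    {"features": {"size": 0.8, "contrast": 0.7}, "diagnosis": "malignant"},
    {"features": {"size": 0.2, "contrast": 0.3}, "diagnosis": "benign"},
]
ranked = retrieve_similar_cases({"size": 0.75, "contrast": 0.6},
                                cases, weights={"size": 2.0, "contrast": 1.0})
# ranked pairs each past diagnosis with its possibility degree, best first
```

A real system would compute these similarities over ontology instances rather than flat feature dictionaries, but the ranking-by-possibility idea is the same.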
Maatouk, Stefan. "Orientalism - A Netflix Unlimited Series: A Multimodal Critical Discourse Analysis of the Orientalist Representations of Arab Identity on Netflix Film and Television." Thesis, Malmö universitet, Malmö högskola, Institutionen för globala politiska studier (GPS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-43793.
Huang, Zhi. "Integrative Analysis of Multimodal Biomedical Data with Machine Learning." Thesis, 2021 (11170170).
Amelio Ravelli, Andrea. "Annotation of Linguistically Derived Action Concepts in Computer Vision Datasets." Doctoral thesis, 2020. http://hdl.handle.net/2158/1200356.
Texte intégralThompson, Robyn Dyan. « Philosophy for children in a foundation phase literacy classroom in South Africa : multimodal representations of knowledge ». Thesis, 2015. http://hdl.handle.net/10539/17833.
Texte intégralLivres sur le sujet "Multimodal Knowledge Representation"
Reilly, Jamie, and Nadine Martin. Semantic Processing in Transcortical Sensory Aphasia. Edited by Anastasia M. Raymer and Leslie J. Gonzalez Rothi. Oxford University Press, 2015. http://dx.doi.org/10.1093/oxfordhb/9780199772391.013.6.
Dove, Guy. Abstract Concepts and the Embodied Mind. Oxford University Press, 2022. http://dx.doi.org/10.1093/oso/9780190061975.001.0001.
Texte intégralChapitres de livres sur le sujet "Multimodal Knowledge Representation"
Latoschik, Marc Erich, Peter Biermann, and Ipke Wachsmuth. "Knowledge in the Loop: Semantics Representation for Multimodal Simulative Environments." In Smart Graphics, 25–39. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11536482_3.
De Silva, Daswin, Damminda Alahakoon, and Shyamali Dharmage. "Extensions to Knowledge Acquisition and Effect of Multimodal Representation in Unsupervised Learning." In Studies in Computational Intelligence, 281–305. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-01082-8_11.
McTear, Michael, Kristiina Jokinen, Mohnish Dubey, Gérard Chollet, Jérôme Boudy, Christophe Lohr, Sonja Dana Roelen, Wanja Mössing, and Rainer Wieching. "Empowering Well-Being Through Conversational Coaching for Active and Healthy Ageing." In Lecture Notes in Computer Science, 257–65. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-09593-1_21.
Danielsson, Kristina, and Staffan Selander. "Semiotic Modes and Representations of Knowledge." In Multimodal Texts in Disciplinary Education, 17–23. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-63960-0_3.
Moschini, Ilaria, and Maria Grazia Sindoni. "The Digital Mediation of Knowledge, Representations and Practices through the Lenses of a Multimodal Theory of Communication." In Mediation and Multimodal Meaning Making in Digital Environments, 1–14. New York: Routledge, 2021. http://dx.doi.org/10.4324/9781003225423-1.
Zhang, Chao, and Jiawei Han. "Data Mining and Knowledge Discovery." In Urban Informatics, 797–814. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-15-8983-6_42.
Scrocca, Mario, Marco Comerio, Alessio Carenini, and Irene Celino. "Modelling Business Agreements in the Multimodal Transportation Domain Through Ontological Smart Contracts." In Towards a Knowledge-Aware AI. IOS Press, 2022. http://dx.doi.org/10.3233/ssw220016.
Farmer, Lesley S. J. "Extensions of Content Analysis in the Creation of Multimodal Knowledge Representations." In Advances in Library and Information Science, 63–81. IGI Global, 2018. http://dx.doi.org/10.4018/978-1-5225-5164-5.ch005.
Khakhalin, Gennady K., Sergey S. Kurbatov, Xenia Naidenova, and Alex P. Lobzin. "Integration of the Image and NL-text Analysis/Synthesis Systems." In Intelligent Data Analysis for Real-Life Applications, 160–85. IGI Global, 2012. http://dx.doi.org/10.4018/978-1-4666-1806-0.ch009.
Castellano Sanz, Margarida. "Challenging Picturebooks and Domestic Geographies." In Advances in Psychology, Mental Health, and Behavioral Studies, 213–35. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-6684-4735-2.ch015.
Texte intégralActes de conférences sur le sujet "Multimodal Knowledge Representation"
"KNOWLEDGE-BASED MULTIMODAL DATA REPRESENTATION AND QUERYING." In International Conference on Knowledge Engineering and Ontology Development. SciTePress - Science and Technology Publications, 2011. http://dx.doi.org/10.5220/0003627901520158.
Wang, Zikang, Linjing Li, Qiudan Li, and Daniel Zeng. "Multimodal Data Enhanced Representation Learning for Knowledge Graphs." In 2019 International Joint Conference on Neural Networks (IJCNN). IEEE, 2019. http://dx.doi.org/10.1109/ijcnn.2019.8852079.
Sun, Chenkai, Weijiang Li, Jinfeng Xiao, Nikolaus Nova Parulian, ChengXiang Zhai, and Heng Ji. "Fine-Grained Chemical Entity Typing with Multimodal Knowledge Representation." In 2021 IEEE International Conference on Bioinformatics and Biomedicine (BIBM). IEEE, 2021. http://dx.doi.org/10.1109/bibm52615.2021.9669360.
Mousselly Sergieh, Hatem, Teresa Botschen, Iryna Gurevych, and Stefan Roth. "A Multimodal Translation-Based Approach for Knowledge Graph Representation Learning." In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics. Stroudsburg, PA, USA: Association for Computational Linguistics, 2018. http://dx.doi.org/10.18653/v1/s18-2027.
Jenkins, Porter, Ahmad Farag, Suhang Wang, and Zhenhui Li. "Unsupervised Representation Learning of Spatial Data via Multimodal Embedding." In CIKM '19: The 28th ACM International Conference on Information and Knowledge Management. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3357384.3358001.
Ćalić, J., N. Campbell, S. Dasiopoulou, and Y. Kompatsiaris. "An overview of multimodal video representation for semantic analysis." In 2nd European Workshop on the Integration of Knowledge, Semantics and Digital Media Technology (EWIMT 2005). IET, 2005. http://dx.doi.org/10.1049/ic.2005.0708.
Liu, Wenxuan, Hao Duan, Zeng Li, Jingdong Liu, Hong Huo, and Tao Fang. "Entity Representation Learning with Multimodal Neighbors for Link Prediction in Knowledge Graph." In 2021 7th International Conference on Computer and Communications (ICCC). IEEE, 2021. http://dx.doi.org/10.1109/iccc54389.2021.9674496.
Gôlo, Marcos P. S., Rafael G. Rossi, and Ricardo M. Marcacini. "Triple-VAE: A Triple Variational Autoencoder to Represent Events in One-Class Event Detection." In Encontro Nacional de Inteligência Artificial e Computacional. Sociedade Brasileira de Computação - SBC, 2021. http://dx.doi.org/10.5753/eniac.2021.18291.
Moctezuma, Daniela, Víctor Muníz, and Jorge García. "Multimodal Data Evaluation for Classification Problems." In 7th International Conference on VLSI and Applications (VLSIA 2021). Academy and Industry Research Collaboration Center (AIRCC), 2021. http://dx.doi.org/10.5121/csit.2021.112105.
Oliveira, Angelo Schranko, and Renato José Sassi. "Hunting Android Malware Using Multimodal Deep Learning and Hybrid Analysis Data." In Congresso Brasileiro de Inteligência Computacional. SBIC, 2021. http://dx.doi.org/10.21528/cbic2021-32.