Academic literature on the topic "Multimodal Knowledge Representation"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles.
Consult the topical lists of articles, books, theses, conference proceedings, and other academic sources on the topic "Multimodal Knowledge Representation".
You can also download the full text of each academic publication in PDF format and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Multimodal Knowledge Representation"
Azañón, Elena, Luigi Tamè, Angelo Maravita, Sally A. Linkenauger, Elisa R. Ferrè, Ana Tajadura-Jiménez, and Matthew R. Longo. "Multimodal Contributions to Body Representation." Multisensory Research 29, no. 6-7 (2016): 635–61. http://dx.doi.org/10.1163/22134808-00002531.
Coelho, Ana, Paulo Marques, Ricardo Magalhães, Nuno Sousa, José Neves, and Victor Alves. "A Knowledge Representation and Reasoning System for Multimodal Neuroimaging Studies." Inteligencia Artificial 20, no. 59 (February 6, 2017): 42. http://dx.doi.org/10.4114/intartif.vol20iss59pp42-52.
Bruni, E., N. K. Tran, and M. Baroni. "Multimodal Distributional Semantics." Journal of Artificial Intelligence Research 49 (January 23, 2014): 1–47. http://dx.doi.org/10.1613/jair.4135.
Toraldo, Maria Laura, Gazi Islam, and Gianluigi Mangia. "Modes of Knowing." Organizational Research Methods 21, no. 2 (July 14, 2016): 438–65. http://dx.doi.org/10.1177/1094428116657394.
Gül, Davut, and Bayram Costu. "To What Extent Do Teachers of Gifted Students Identify Inner and Intermodal Relations in Knowledge Representation?" Mimbar Sekolah Dasar 8, no. 1 (April 30, 2021): 55–80. http://dx.doi.org/10.53400/mimbar-sd.v8i1.31333.
Tomskaya, Maria, and Irina Zaytseva. "MULTIMEDIA REPRESENTATION OF KNOWLEDGE IN ACADEMIC DISCOURSE." Verbum 8, no. 8 (January 19, 2018): 129. http://dx.doi.org/10.15388/verb.2017.8.11357.
Cholewa, Wojciech, Marcin Amarowicz, Paweł Chrzanowski, and Tomasz Rogala. "Development Environment for Diagnostic Multimodal Statement Networks." Key Engineering Materials 588 (October 2013): 74–83. http://dx.doi.org/10.4028/www.scientific.net/kem.588.74.
Prieto-Velasco, Juan Antonio, and Clara I. López Rodríguez. "Managing graphic information in terminological knowledge bases." Terminology 15, no. 2 (November 11, 2009): 179–213. http://dx.doi.org/10.1075/term.15.2.02pri.
Laenen, Katrien, and Marie-Francine Moens. "Learning Explainable Disentangled Representations of E-Commerce Data by Aligning Their Visual and Textual Attributes." Computers 11, no. 12 (December 10, 2022): 182. http://dx.doi.org/10.3390/computers11120182.
Li, Jinghua, Runze Liu, Dehui Kong, Shaofan Wang, Lichun Wang, Baocai Yin, and Ronghua Gao. "Attentive 3D-Ghost Module for Dynamic Hand Gesture Recognition with Positive Knowledge Transfer." Computational Intelligence and Neuroscience 2021 (November 18, 2021): 1–12. http://dx.doi.org/10.1155/2021/5044916.
Theses on the topic "Multimodal Knowledge Representation"
Palframan, Shirley Anne. "Multimodal representation and the making of knowledge: a social semiotic excavation of learning sites." Thesis, University College London (University of London), 2006. http://discovery.ucl.ac.uk/10019283/.
Guo, Xuan. "Discovering a Domain Knowledge Representation for Image Grouping: Multimodal Data Modeling, Fusion, and Interactive Learning." Thesis, Rochester Institute of Technology, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10603860.
Texto completoIn visually-oriented specialized medical domains such as dermatology and radiology, physicians explore interesting image cases from medical image repositories for comparative case studies to aid clinical diagnoses, educate medical trainees, and support medical research. However, general image classification and retrieval approaches fail in grouping medical images from the physicians' viewpoint. This is because fully-automated learning techniques cannot yet bridge the gap between image features and domain-specific content for the absence of expert knowledge. Understanding how experts get information from medical images is therefore an important research topic.
As a prior study, we conducted data elicitation experiments, where physicians were instructed to inspect each medical image towards a diagnosis while describing image content to a student seated nearby. Experts' eye movements and their verbal descriptions of the image content were recorded to capture various aspects of expert image understanding. This dissertation aims at an intuitive approach to extracting expert knowledge, which is to find patterns in expert data elicited from image-based diagnoses. These patterns are useful to understand both the characteristics of the medical images and the experts' cognitive reasoning processes.
The transformation from the viewed raw image features to interpretation as domain-specific concepts requires experts' domain knowledge and cognitive reasoning. This dissertation also approximates this transformation using a matrix factorization-based framework, which helps project multiple expert-derived data modalities to high-level abstractions.
To combine additional expert interventions with computational processing capabilities, an interactive machine learning paradigm is developed to treat experts as an integral part of the learning process. Specifically, experts refine medical image groups presented by the learned model locally, to incrementally re-learn the model globally. This paradigm avoids the onerous expert annotations for model training, while aligning the learned model with experts' sense-making.
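To make the factorization idea in this abstract concrete, the following is a minimal sketch under our own assumptions (it is not the dissertation's code): two hypothetical expert-derived modality matrices, gaze features and verbal-description features per image, are stacked and factorized with non-negative matrix factorization so that each image receives one shared low-dimensional abstraction that can then drive grouping.

# Minimal sketch (illustrative only): project two expert-derived modalities
# into one shared low-dimensional abstraction per image by factorizing their
# stacked feature matrix. Feature dimensions and data are made up.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_images = 50
gaze = rng.random((n_images, 20))      # hypothetical eye-movement features
verbal = rng.random((n_images, 100))   # hypothetical bag-of-words features

X = np.hstack([gaze, verbal])          # images x (gaze + verbal) features
model = NMF(n_components=5, init="nndsvda", max_iter=500, random_state=0)
abstractions = model.fit_transform(X)  # shared high-level representation per image

# Images with similar abstractions can then be grouped, e.g. via pairwise similarity.
sims = abstractions @ abstractions.T
print(abstractions.shape, sims.shape)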
Florén, Henrika. "Shapes of Knowledge : A multimodal study of six Swedish upper secondary students' meaning making and transduction of knowledge across essays and audiovisual presentations". Thesis, Stockholms universitet, Institutionen för pedagogik och didaktik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-156907.
Texto completoAdjali, Omar. "Dynamic architecture for multimodal applications to reinforce robot-environment interaction". Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLV100.
Texto completoKnowledge Representation and Reasoning is at the heart of the great challenge of Artificial Intelligence. More specifically, in the context of robotic applications, knowledge representation and reasoning approaches are necessary to solve decision problems that autonomous robots face when it comes to evolve in uncertain, dynamic and complex environments or to ensure a natural interaction in human environment. In a robotic interaction system, information has to be represented and processed at various levels of abstraction: From sensor up to actions and plans. Thus, knowledge representation provides the means to describe the environment with different abstraction levels which allow performing appropriate decisions. In this thesis we propose a methodology to solve the problem of multimodal interaction by describing a semantic interaction architecture based on a framework that demonstrates an approach for representing and reasoning with environment knowledge representation language (EKRL), to enhance interaction between robots and their environment. This framework is used to manage the interaction process by representing the knowledge involved in the interaction with EKRL and reasoning on it to make inference. The interaction process includes fusion of values from different sensors to interpret and understand what is happening in the environment, and the fission which suggests a detailed set of actions that are for implementation. Before such actions are implemented by actuators, these actions are first evaluated in a virtual environment which mimics the real-world environment to assess the feasibility of the action implementation in the real world. During these processes, reasoning abilities are necessary to guarantee a global execution of a given interaction scenario. Thus, we provided EKRL framework with reasoning techniques to draw deterministic inferences thanks to unification algorithms and probabilistic inferences to manage uncertain knowledge by combining statistical relational models using Markov logic Networks(MLN) framework with EKRL. The proposed work is validated through scenarios that demonstrate the usability and the performance of our framework in real world applications
Ben Salem, Yosra. "Fusion d'images multimodales pour l'aide au diagnostic du cancer du sein." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2017. http://www.theses.fr/2017IMTA0062/document.
Texto completoThe breast cancer is the most prevalent cancer among women over 40 years old. Indeed, studies evinced that an early detection and an appropriate treatment of breast cancer increases significantly the chances of survival. The mammography is the most tool used in the diagnosis of breast lesions. However, this technique may be insufficient to evince the structures of the breast and reveal the anomalies present. The doctor can use additional imaging modalities such as MRI (Magnetic Reasoning Image). Therefore, the doctor proceeds to a mental fusion of the different information on the two images in order to make the adequate diagnosis. To assist the doctor in this process, we propose a solution to merge the two images. Although the idea of the fusion seems simple, its implementation poses many problems not only related to the paradigm of fusion in general but also to the nature of medical images that are generally poorly contrasted images, and presenting heterogeneous, inaccurate and ambiguous data. Mammography images and IRM images present very different information representations, since they are taken under different conditions. Which leads us to pose the following question: How to pass from the heterogeneous representation of information in the image space, to another space of uniform representation from the two modalities? In order to treat this problem, we opt a multilevel processing approach : the pixel level, the primitive level, the object level and the scene level. We model the pathological objects extracted from the different images by local ontologies. The fusion is then performed on these local ontologies and results in a global ontology containing the different knowledge on the pathological objects of the studied case. This global ontology serves to instantiate a reference ontology modeling knowledge of the medical diagnosis of breast lesions. Case-based reasoning (CBR) is used to provide the diagnostic reports of the most similar cases that can help the doctor to make the best decision. In order to model the imperfection of the treated information, we use the possibility theory with the ontologies. The final result is a diagnostic reports containing the most similar cases to the studied case with similarity degrees expressed with possibility measures. A 3D symbolic model complete the diagnostic report with a simplified overview of the studied scene
Maatouk, Stefan. "Orientalism - A Netflix Unlimited Series: A Multimodal Critical Discourse Analysis of the Orientalist Representations of Arab Identity on Netflix Film and Television." Thesis, Malmö universitet, Malmö högskola, Institutionen för globala politiska studier (GPS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-43793.
Texto completo(11170170), Zhi Huang. "Integrative Analysis of Multimodal Biomedical Data with Machine Learning". Thesis, 2021.
Amelio Ravelli, Andrea. "Annotation of Linguistically Derived Action Concepts in Computer Vision Datasets." Doctoral thesis, 2020. http://hdl.handle.net/2158/1200356.
Texto completoThompson, Robyn Dyan. "Philosophy for children in a foundation phase literacy classroom in South Africa : multimodal representations of knowledge". Thesis, 2015. http://hdl.handle.net/10539/17833.
Texto completoLibros sobre el tema "Multimodal Knowledge Representation"
Reilly, Jamie, and Nadine Martin. Semantic Processing in Transcortical Sensory Aphasia. Edited by Anastasia M. Raymer and Leslie J. Gonzalez Rothi. Oxford University Press, 2015. http://dx.doi.org/10.1093/oxfordhb/9780199772391.013.6.
Dove, Guy. Abstract Concepts and the Embodied Mind. Oxford University Press, 2022. http://dx.doi.org/10.1093/oso/9780190061975.001.0001.
Texto completoCapítulos de libros sobre el tema "Multimodal Knowledge Representation"
Latoschik, Marc Erich, Peter Biermann, and Ipke Wachsmuth. "Knowledge in the Loop: Semantics Representation for Multimodal Simulative Environments." In Smart Graphics, 25–39. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11536482_3.
De Silva, Daswin, Damminda Alahakoon, and Shyamali Dharmage. "Extensions to Knowledge Acquisition and Effect of Multimodal Representation in Unsupervised Learning." In Studies in Computational Intelligence, 281–305. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-01082-8_11.
McTear, Michael, Kristiina Jokinen, Mohnish Dubey, Gérard Chollet, Jérôme Boudy, Christophe Lohr, Sonja Dana Roelen, Wanja Mössing, and Rainer Wieching. "Empowering Well-Being Through Conversational Coaching for Active and Healthy Ageing." In Lecture Notes in Computer Science, 257–65. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-09593-1_21.
Danielsson, Kristina, and Staffan Selander. "Semiotic Modes and Representations of Knowledge." In Multimodal Texts in Disciplinary Education, 17–23. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-63960-0_3.
Moschini, Ilaria, and Maria Grazia Sindoni. "The Digital Mediation of Knowledge, Representations and Practices through the Lenses of a Multimodal Theory of Communication." In Mediation and Multimodal Meaning Making in Digital Environments, 1–14. New York: Routledge, 2021. http://dx.doi.org/10.4324/9781003225423-1.
Zhang, Chao, and Jiawei Han. "Data Mining and Knowledge Discovery." In Urban Informatics, 797–814. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-15-8983-6_42.
Scrocca, Mario, Marco Comerio, Alessio Carenini, and Irene Celino. "Modelling Business Agreements in the Multimodal Transportation Domain Through Ontological Smart Contracts." In Towards a Knowledge-Aware AI. IOS Press, 2022. http://dx.doi.org/10.3233/ssw220016.
Farmer, Lesley S. J. "Extensions of Content Analysis in the Creation of Multimodal Knowledge Representations." In Advances in Library and Information Science, 63–81. IGI Global, 2018. http://dx.doi.org/10.4018/978-1-5225-5164-5.ch005.
Khakhalin, Gennady K., Sergey S. Kurbatov, Xenia Naidenova, and Alex P. Lobzin. "Integration of the Image and NL-text Analysis/Synthesis Systems." In Intelligent Data Analysis for Real-Life Applications, 160–85. IGI Global, 2012. http://dx.doi.org/10.4018/978-1-4666-1806-0.ch009.
Castellano Sanz, Margarida. "Challenging Picturebooks and Domestic Geographies." In Advances in Psychology, Mental Health, and Behavioral Studies, 213–35. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-6684-4735-2.ch015.
Conference proceedings on the topic "Multimodal Knowledge Representation"
"KNOWLEDGE-BASED MULTIMODAL DATA REPRESENTATION AND QUERYING". En International Conference on Knowledge Engineering and Ontology Development. SciTePress - Science and and Technology Publications, 2011. http://dx.doi.org/10.5220/0003627901520158.
Texto completoWang, Zikang, Linjing Li, Qiudan Li y Daniel Zeng. "Multimodal Data Enhanced Representation Learning for Knowledge Graphs". En 2019 International Joint Conference on Neural Networks (IJCNN). IEEE, 2019. http://dx.doi.org/10.1109/ijcnn.2019.8852079.
Texto completoSun, Chenkai, Weijiang Li, Jinfeng Xiao, Nikolaus Nova Parulian, ChengXiang Zhai y Heng Ji. "Fine-Grained Chemical Entity Typing with Multimodal Knowledge Representation". En 2021 IEEE International Conference on Bioinformatics and Biomedicine (BIBM). IEEE, 2021. http://dx.doi.org/10.1109/bibm52615.2021.9669360.
Texto completoMousselly Sergieh, Hatem, Teresa Botschen, Iryna Gurevych y Stefan Roth. "A Multimodal Translation-Based Approach for Knowledge Graph Representation Learning". En Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics. Stroudsburg, PA, USA: Association for Computational Linguistics, 2018. http://dx.doi.org/10.18653/v1/s18-2027.
Texto completoJenkins, Porter, Ahmad Farag, Suhang Wang y Zhenhui Li. "Unsupervised Representation Learning of Spatial Data via Multimodal Embedding". En CIKM '19: The 28th ACM International Conference on Information and Knowledge Management. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3357384.3358001.
Texto completoĆalić, J., N. Campbell, S. Dasiopoulou y Y. Kompatsiaris. "An overview of multimodal video representation for semantic analysis". En 2nd European Workshop on the Integration of Knowledge, Semantics and Digital Media Technology (EWIMT 2005). IET, 2005. http://dx.doi.org/10.1049/ic.2005.0708.
Texto completoLiu, Wenxuan, Hao Duan, Zeng Li, Jingdong Liu, Hong Huo y Tao Fang. "Entity Representation Learning with Multimodal Neighbors for Link Prediction in Knowledge Graph". En 2021 7th International Conference on Computer and Communications (ICCC). IEEE, 2021. http://dx.doi.org/10.1109/iccc54389.2021.9674496.
Texto completoGôlo, Marcos P. S., Rafael G. Rossi y Ricardo M. Marcacini. "Triple-VAE: A Triple Variational Autoencoder to Represent Events in One-Class Event Detection". En Encontro Nacional de Inteligência Artificial e Computacional. Sociedade Brasileira de Computação - SBC, 2021. http://dx.doi.org/10.5753/eniac.2021.18291.
Texto completoMoctezuma, Daniela, Víctor Muníz y Jorge García. "Multimodal Data Evaluation for Classification Problems". En 7th International Conference on VLSI and Applications (VLSIA 2021). Academy and Industry Research Collaboration Center (AIRCC), 2021. http://dx.doi.org/10.5121/csit.2021.112105.
Texto completoOliveira, Angelo Schranko y Renato José Sassi. "Hunting Android Malware Using Multimodal Deep Learning and Hybrid Analysis Data". En Congresso Brasileiro de Inteligência Computacional. SBIC, 2021. http://dx.doi.org/10.21528/cbic2021-32.