Academic literature on the topic 'Multimodal Knowledge Representation'
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Multimodal Knowledge Representation.'
Journal articles on the topic "Multimodal Knowledge Representation"
Azañón, Elena, Luigi Tamè, Angelo Maravita, Sally A. Linkenauger, Elisa R. Ferrè, Ana Tajadura-Jiménez, and Matthew R. Longo. "Multimodal Contributions to Body Representation." Multisensory Research 29, no. 6-7 (2016): 635–61. http://dx.doi.org/10.1163/22134808-00002531.
Coelho, Ana, Paulo Marques, Ricardo Magalhães, Nuno Sousa, José Neves, and Victor Alves. "A Knowledge Representation and Reasoning System for Multimodal Neuroimaging Studies." Inteligencia Artificial 20, no. 59 (February 6, 2017): 42. http://dx.doi.org/10.4114/intartif.vol20iss59pp42-52.
Bruni, E., N. K. Tran, and M. Baroni. "Multimodal Distributional Semantics." Journal of Artificial Intelligence Research 49 (January 23, 2014): 1–47. http://dx.doi.org/10.1613/jair.4135.
Toraldo, Maria Laura, Gazi Islam, and Gianluigi Mangia. "Modes of Knowing." Organizational Research Methods 21, no. 2 (July 14, 2016): 438–65. http://dx.doi.org/10.1177/1094428116657394.
Gül, Davut, and Bayram Costu. "To What Extent Do Teachers of Gifted Students Identify Inner and Intermodal Relations in Knowledge Representation?" Mimbar Sekolah Dasar 8, no. 1 (April 30, 2021): 55–80. http://dx.doi.org/10.53400/mimbar-sd.v8i1.31333.
Tomskaya, Maria, and Irina Zaytseva. "MULTIMEDIA REPRESENTATION OF KNOWLEDGE IN ACADEMIC DISCOURSE." Verbum 8, no. 8 (January 19, 2018): 129. http://dx.doi.org/10.15388/verb.2017.8.11357.
Cholewa, Wojciech, Marcin Amarowicz, Paweł Chrzanowski, and Tomasz Rogala. "Development Environment for Diagnostic Multimodal Statement Networks." Key Engineering Materials 588 (October 2013): 74–83. http://dx.doi.org/10.4028/www.scientific.net/kem.588.74.
Prieto-Velasco, Juan Antonio, and Clara I. López Rodríguez. "Managing graphic information in terminological knowledge bases." Terminology 15, no. 2 (November 11, 2009): 179–213. http://dx.doi.org/10.1075/term.15.2.02pri.
Laenen, Katrien, and Marie-Francine Moens. "Learning Explainable Disentangled Representations of E-Commerce Data by Aligning Their Visual and Textual Attributes." Computers 11, no. 12 (December 10, 2022): 182. http://dx.doi.org/10.3390/computers11120182.
Li, Jinghua, Runze Liu, Dehui Kong, Shaofan Wang, Lichun Wang, Baocai Yin, and Ronghua Gao. "Attentive 3D-Ghost Module for Dynamic Hand Gesture Recognition with Positive Knowledge Transfer." Computational Intelligence and Neuroscience 2021 (November 18, 2021): 1–12. http://dx.doi.org/10.1155/2021/5044916.
Dissertations / Theses on the topic "Multimodal Knowledge Representation"
Palframan, Shirley Anne. "Multimodal representation and the making of knowledge: a social semiotic excavation of learning sites." Thesis, University College London (University of London), 2006. http://discovery.ucl.ac.uk/10019283/.
Guo, Xuan. "Discovering a Domain Knowledge Representation for Image Grouping: Multimodal Data Modeling, Fusion, and Interactive Learning." Thesis, Rochester Institute of Technology, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10603860.
In visually-oriented specialized medical domains such as dermatology and radiology, physicians explore interesting image cases from medical image repositories for comparative case studies to aid clinical diagnoses, educate medical trainees, and support medical research. However, general image classification and retrieval approaches fail to group medical images from the physicians' viewpoint, because fully-automated learning techniques cannot yet bridge the gap between image features and domain-specific content in the absence of expert knowledge. Understanding how experts extract information from medical images is therefore an important research topic.
As a prior study, we conducted data elicitation experiments, where physicians were instructed to inspect each medical image towards a diagnosis while describing image content to a student seated nearby. Experts' eye movements and their verbal descriptions of the image content were recorded to capture various aspects of expert image understanding. This dissertation aims at an intuitive approach to extracting expert knowledge, which is to find patterns in expert data elicited from image-based diagnoses. These patterns are useful to understand both the characteristics of the medical images and the experts' cognitive reasoning processes.
The transformation from the viewed raw image features to interpretation as domain-specific concepts requires experts' domain knowledge and cognitive reasoning. This dissertation also approximates this transformation using a matrix factorization-based framework, which helps project multiple expert-derived data modalities to high-level abstractions.
To combine additional expert interventions with computational processing capabilities, an interactive machine learning paradigm is developed to treat experts as an integral part of the learning process. Specifically, experts refine medical image groups presented by the learned model locally, to incrementally re-learn the model globally. This paradigm avoids the onerous expert annotations for model training, while aligning the learned model with experts' sense-making.
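The abstract above describes a matrix factorization-based framework that projects multiple expert-derived data modalities to shared high-level abstractions. As a rough illustration of that general idea only (not the dissertation's actual model: the gaze/description features, the rank, and the multiplicative update rule are all invented stand-ins), a joint non-negative matrix factorization can learn one shared sample representation across modalities:

```python
import numpy as np

def joint_nmf(views, rank=2, iters=200, seed=0):
    """Factor each modality matrix X_v (n_samples x d_v) as W @ H_v,
    sharing one non-negative sample representation W across modalities
    (multiplicative updates in the style of Lee & Seung)."""
    rng = np.random.default_rng(seed)
    n = views[0].shape[0]
    W = rng.random((n, rank)) + 1e-3
    Hs = [rng.random((rank, X.shape[1])) + 1e-3 for X in views]
    for _ in range(iters):
        # update the shared representation using all modalities at once
        num = sum(X @ H.T for X, H in zip(views, Hs))
        den = sum(W @ H @ H.T for H in Hs) + 1e-9
        W *= num / den
        # update each modality-specific basis separately
        Hs = [H * (W.T @ X) / (W.T @ W @ H + 1e-9)
              for X, H in zip(views, Hs)]
    return W, Hs

# toy stand-ins for two elicited modalities (gaze and description features)
gaze = np.random.default_rng(1).random((10, 6))
text = np.random.default_rng(2).random((10, 8))
W, (Hg, Ht) = joint_nmf([gaze, text], rank=2)
print(W.shape)  # (10, 2): one shared low-dimensional abstraction per case
```

Each row of `W` is a low-dimensional abstraction of one image case informed by both modalities, which is the kind of projection the dissertation's framework is said to perform.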
Florén, Henrika. "Shapes of Knowledge: A multimodal study of six Swedish upper secondary students' meaning making and transduction of knowledge across essays and audiovisual presentations." Thesis, Stockholms universitet, Institutionen för pedagogik och didaktik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-156907.
Adjali, Omar. "Dynamic architecture for multimodal applications to reinforce robot-environment interaction." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLV100.
Knowledge representation and reasoning is at the heart of the great challenge of Artificial Intelligence. More specifically, in the context of robotic applications, knowledge representation and reasoning approaches are necessary to solve the decision problems that autonomous robots face when evolving in uncertain, dynamic, and complex environments, or to ensure natural interaction in human environments. In a robotic interaction system, information has to be represented and processed at various levels of abstraction, from sensor data up to actions and plans. Knowledge representation thus provides the means to describe the environment at different abstraction levels, enabling appropriate decisions. In this thesis we propose a methodology for multimodal interaction: a semantic interaction architecture built on an environment knowledge representation language (EKRL) that enhances interaction between robots and their environment. The framework manages the interaction process by representing the knowledge involved with EKRL and reasoning over it to draw inferences. The interaction process includes fusion of values from different sensors, to interpret and understand what is happening in the environment, and fission, which derives the detailed set of actions to carry out. Before such actions are executed by actuators, they are first evaluated in a virtual environment that mimics the real world, to assess whether they are feasible there. Throughout these processes, reasoning abilities are necessary to guarantee the global execution of a given interaction scenario.
Thus, we equipped the EKRL framework with reasoning techniques: deterministic inference via unification algorithms, and probabilistic inference to manage uncertain knowledge by combining statistical relational models, using the Markov Logic Networks (MLN) framework, with EKRL. The proposed work is validated through scenarios that demonstrate the usability and performance of our framework in real-world applications.
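The deterministic side of the reasoning described in this abstract rests on unification. As an illustrative sketch only (textbook first-order unification over tuple terms, with no occurs check; the EKRL-like facts, the `?variable` convention, and the predicate names are invented, not EKRL's actual syntax):

```python
def is_var(t):
    # variables are strings beginning with '?', e.g. '?agent'
    return isinstance(t, str) and t.startswith("?")

def walk(t, s):
    # follow variable bindings in substitution s to their current value
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(a, b, s=None):
    """Textbook first-order unification: terms are atoms, '?x' variables,
    or tuples such as ('near', '?agent', 'door3'). Returns a substitution
    dict, or None if the terms do not unify."""
    s = {} if s is None else s
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if is_var(a):
        return {**s, a: b}
    if is_var(b):
        return {**s, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None

# match a sensed fact against a rule pattern
fact = ("near", "robot1", "door3")
pattern = ("near", "?agent", "?place")
print(unify(pattern, fact))  # {'?agent': 'robot1', '?place': 'door3'}
```

A rule whose premise unifies with a sensed fact can then fire with the resulting bindings, which is the deterministic inference step the thesis pairs with probabilistic MLN reasoning.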
Ben Salem, Yosra. "Fusion d'images multimodales pour l'aide au diagnostic du cancer du sein." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2017. http://www.theses.fr/2017IMTA0062/document.
Breast cancer is the most prevalent cancer among women over 40 years old. Studies have shown that early detection and appropriate treatment of breast cancer significantly increase the chances of survival. Mammography is the most widely used tool in the diagnosis of breast lesions. However, this technique may be insufficient to reveal the structures of the breast and the anomalies present. The doctor can use additional imaging modalities such as MRI (Magnetic Resonance Imaging), and then proceeds to a mental fusion of the information in the two images in order to make an adequate diagnosis. To assist the doctor in this process, we propose a solution for merging the two images. Although the idea of fusion seems simple, its implementation poses many problems, related not only to the fusion paradigm in general but also to the nature of medical images, which are generally poorly contrasted and present heterogeneous, inaccurate, and ambiguous data. Mammography and MRI images carry very different representations of information, since they are acquired under different conditions. This leads to the following question: how can we pass from the heterogeneous representation of information in image space to a uniform representation space shared by the two modalities? To address this problem, we adopt a multilevel processing approach: the pixel level, the primitive level, the object level, and the scene level. We model the pathological objects extracted from the different images with local ontologies. The fusion is then performed on these local ontologies and results in a global ontology containing the various items of knowledge about the pathological objects of the studied case. This global ontology serves to instantiate a reference ontology modelling knowledge of the medical diagnosis of breast lesions.
Case-based reasoning (CBR) is used to provide the diagnostic reports of the most similar cases, which can help the doctor make the best decision. In order to model the imperfection of the information handled, we combine possibility theory with the ontologies. The final result is a diagnostic report containing the cases most similar to the studied case, with similarity degrees expressed as possibility measures. A 3D symbolic model completes the diagnostic report with a simplified overview of the studied scene.
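The abstract above ranks retrieved cases by similarity degrees expressed as possibility measures. A minimal sketch of one common possibility-theoretic matching rule (the lesion attributes, the distributions, and the sup-min / conjunctive-min combination are illustrative assumptions, not the thesis's actual ontology-based model):

```python
def possibility_similarity(query, case):
    """Per attribute, the possibility of a match is the sup over labels of
    min(query possibility, case possibility); the overall case score is the
    min across attributes (conjunctive fusion)."""
    scores = []
    for attr, q_dist in query.items():
        c_dist = case.get(attr, {})
        # sup-min over the union of labels seen in either distribution
        poss = max((min(q_dist.get(label, 0.0), c_dist.get(label, 0.0))
                    for label in set(q_dist) | set(c_dist)), default=0.0)
        scores.append(poss)
    return min(scores) if scores else 0.0

# hypothetical possibility distributions over lesion attributes
query = {"shape": {"round": 1.0, "oval": 0.6},
         "margin": {"spiculated": 1.0}}
case_a = {"shape": {"round": 0.9}, "margin": {"spiculated": 0.7}}
case_b = {"shape": {"irregular": 1.0}, "margin": {"smooth": 0.8}}
print(possibility_similarity(query, case_a))  # 0.7
print(possibility_similarity(query, case_b))  # 0.0
```

Sorting the case base by this score yields the kind of ranked list of similar cases, each with a possibility degree, that the diagnostic report described above would contain.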
Maatouk, Stefan. "Orientalism - A Netflix Unlimited Series: A Multimodal Critical Discourse Analysis of the Orientalist Representations of Arab Identity on Netflix Film and Television." Thesis, Malmö universitet, Malmö högskola, Institutionen för globala politiska studier (GPS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-43793.
Huang, Zhi (11170170). "Integrative Analysis of Multimodal Biomedical Data with Machine Learning." Thesis, 2021.
Amelio Ravelli, Andrea. "Annotation of Linguistically Derived Action Concepts in Computer Vision Datasets." Doctoral thesis, 2020. http://hdl.handle.net/2158/1200356.
Thompson, Robyn Dyan. "Philosophy for children in a foundation phase literacy classroom in South Africa: multimodal representations of knowledge." Thesis, 2015. http://hdl.handle.net/10539/17833.
Books on the topic "Multimodal Knowledge Representation"
Reilly, Jamie, and Nadine Martin. Semantic Processing in Transcortical Sensory Aphasia. Edited by Anastasia M. Raymer and Leslie J. Gonzalez Rothi. Oxford University Press, 2015. http://dx.doi.org/10.1093/oxfordhb/9780199772391.013.6.
Dove, Guy. Abstract Concepts and the Embodied Mind. Oxford University Press, 2022. http://dx.doi.org/10.1093/oso/9780190061975.001.0001.
Book chapters on the topic "Multimodal Knowledge Representation"
Latoschik, Marc Erich, Peter Biermann, and Ipke Wachsmuth. "Knowledge in the Loop: Semantics Representation for Multimodal Simulative Environments." In Smart Graphics, 25–39. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11536482_3.
De Silva, Daswin, Damminda Alahakoon, and Shyamali Dharmage. "Extensions to Knowledge Acquisition and Effect of Multimodal Representation in Unsupervised Learning." In Studies in Computational Intelligence, 281–305. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-01082-8_11.
McTear, Michael, Kristiina Jokinen, Mohnish Dubey, Gérard Chollet, Jérôme Boudy, Christophe Lohr, Sonja Dana Roelen, Wanja Mössing, and Rainer Wieching. "Empowering Well-Being Through Conversational Coaching for Active and Healthy Ageing." In Lecture Notes in Computer Science, 257–65. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-09593-1_21.
Danielsson, Kristina, and Staffan Selander. "Semiotic Modes and Representations of Knowledge." In Multimodal Texts in Disciplinary Education, 17–23. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-63960-0_3.
Moschini, Ilaria, and Maria Grazia Sindoni. "The Digital Mediation of Knowledge, Representations and Practices through the Lenses of a Multimodal Theory of Communication." In Mediation and Multimodal Meaning Making in Digital Environments, 1–14. New York: Routledge, 2021. http://dx.doi.org/10.4324/9781003225423-1.
Zhang, Chao, and Jiawei Han. "Data Mining and Knowledge Discovery." In Urban Informatics, 797–814. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-15-8983-6_42.
Scrocca, Mario, Marco Comerio, Alessio Carenini, and Irene Celino. "Modelling Business Agreements in the Multimodal Transportation Domain Through Ontological Smart Contracts." In Towards a Knowledge-Aware AI. IOS Press, 2022. http://dx.doi.org/10.3233/ssw220016.
Farmer, Lesley S. J. "Extensions of Content Analysis in the Creation of Multimodal Knowledge Representations." In Advances in Library and Information Science, 63–81. IGI Global, 2018. http://dx.doi.org/10.4018/978-1-5225-5164-5.ch005.
Khakhalin, Gennady K., Sergey S. Kurbatov, Xenia Naidenova, and Alex P. Lobzin. "Integration of the Image and NL-text Analysis/Synthesis Systems." In Intelligent Data Analysis for Real-Life Applications, 160–85. IGI Global, 2012. http://dx.doi.org/10.4018/978-1-4666-1806-0.ch009.
Castellano Sanz, Margarida. "Challenging Picturebooks and Domestic Geographies." In Advances in Psychology, Mental Health, and Behavioral Studies, 213–35. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-6684-4735-2.ch015.
Conference papers on the topic "Multimodal Knowledge Representation"
"KNOWLEDGE-BASED MULTIMODAL DATA REPRESENTATION AND QUERYING." In International Conference on Knowledge Engineering and Ontology Development. SciTePress - Science and Technology Publications, 2011. http://dx.doi.org/10.5220/0003627901520158.
Wang, Zikang, Linjing Li, Qiudan Li, and Daniel Zeng. "Multimodal Data Enhanced Representation Learning for Knowledge Graphs." In 2019 International Joint Conference on Neural Networks (IJCNN). IEEE, 2019. http://dx.doi.org/10.1109/ijcnn.2019.8852079.
Sun, Chenkai, Weijiang Li, Jinfeng Xiao, Nikolaus Nova Parulian, ChengXiang Zhai, and Heng Ji. "Fine-Grained Chemical Entity Typing with Multimodal Knowledge Representation." In 2021 IEEE International Conference on Bioinformatics and Biomedicine (BIBM). IEEE, 2021. http://dx.doi.org/10.1109/bibm52615.2021.9669360.
Mousselly Sergieh, Hatem, Teresa Botschen, Iryna Gurevych, and Stefan Roth. "A Multimodal Translation-Based Approach for Knowledge Graph Representation Learning." In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics. Stroudsburg, PA, USA: Association for Computational Linguistics, 2018. http://dx.doi.org/10.18653/v1/s18-2027.
Jenkins, Porter, Ahmad Farag, Suhang Wang, and Zhenhui Li. "Unsupervised Representation Learning of Spatial Data via Multimodal Embedding." In CIKM '19: The 28th ACM International Conference on Information and Knowledge Management. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3357384.3358001.
Ćalić, J., N. Campbell, S. Dasiopoulou, and Y. Kompatsiaris. "An overview of multimodal video representation for semantic analysis." In 2nd European Workshop on the Integration of Knowledge, Semantics and Digital Media Technology (EWIMT 2005). IET, 2005. http://dx.doi.org/10.1049/ic.2005.0708.
Liu, Wenxuan, Hao Duan, Zeng Li, Jingdong Liu, Hong Huo, and Tao Fang. "Entity Representation Learning with Multimodal Neighbors for Link Prediction in Knowledge Graph." In 2021 7th International Conference on Computer and Communications (ICCC). IEEE, 2021. http://dx.doi.org/10.1109/iccc54389.2021.9674496.
Gôlo, Marcos P. S., Rafael G. Rossi, and Ricardo M. Marcacini. "Triple-VAE: A Triple Variational Autoencoder to Represent Events in One-Class Event Detection." In Encontro Nacional de Inteligência Artificial e Computacional. Sociedade Brasileira de Computação - SBC, 2021. http://dx.doi.org/10.5753/eniac.2021.18291.
Moctezuma, Daniela, Víctor Muníz, and Jorge García. "Multimodal Data Evaluation for Classification Problems." In 7th International Conference on VLSI and Applications (VLSIA 2021). Academy and Industry Research Collaboration Center (AIRCC), 2021. http://dx.doi.org/10.5121/csit.2021.112105.
Oliveira, Angelo Schranko, and Renato José Sassi. "Hunting Android Malware Using Multimodal Deep Learning and Hybrid Analysis Data." In Congresso Brasileiro de Inteligência Computacional. SBIC, 2021. http://dx.doi.org/10.21528/cbic2021-32.