
Journal articles on the topic "Multimodal Knowledge Representation"


Consult the top 50 journal articles for research on the topic "Multimodal Knowledge Representation".


1

Azañón, Elena, Luigi Tamè, Angelo Maravita, Sally A. Linkenauger, Elisa R. Ferrè, Ana Tajadura-Jiménez, and Matthew R. Longo. "Multimodal Contributions to Body Representation." Multisensory Research 29, no. 6-7 (2016): 635–61. http://dx.doi.org/10.1163/22134808-00002531.

Abstract:
Our body is a unique entity by which we interact with the external world. Consequently, the way we represent our body has profound implications in the way we process and locate sensations and in turn perform appropriate actions. The body can be the subject, but also the object of our experience, providing information from sensations on the body surface and viscera, but also knowledge of the body as a physical object. However, the extent to which different senses contribute to constructing the rich and unified body representations we all experience remains unclear. In this review, we aim to bring together recent research showing important roles for several different sensory modalities in constructing body representations. At the same time, we hope to generate new ideas of how and at which level the senses contribute to generate the different levels of body representations and how they interact. We will present an overview of some of the most recent neuropsychological evidence about multisensory control of pain, and the way that visual, auditory, vestibular and tactile systems contribute to the creation of coherent representations of the body. We will focus particularly on some of the topics discussed in the symposium on Multimodal Contributions to Body Representation held on the 15th International Multisensory Research Forum (2015, Pisa, Italy).
2

Coelho, Ana, Paulo Marques, Ricardo Magalhães, Nuno Sousa, José Neves, and Victor Alves. "A Knowledge Representation and Reasoning System for Multimodal Neuroimaging Studies." Inteligencia Artificial 20, no. 59 (February 6, 2017): 42. http://dx.doi.org/10.4114/intartif.vol20iss59pp42-52.

Abstract:
Multimodal neuroimaging analyses are of major interest for both research and clinical practice, enabling the combined evaluation of the structure and function of the human brain. These analyses generate large volumes of data and consequently increase the amount of possibly useful information. Indeed, BrainArchive was developed in order to organize, maintain and share this complex array of neuroimaging data. It stores all the information available for each participant/patient, being dynamic by nature. Notably, the application of reasoning systems to this multimodal data has the potential to provide tools for the identification of undiagnosed diseases. As a matter of fact, in this work we explore how Artificial Intelligence techniques for decision support work, namely Case-Based Reasoning (CBR) that may be used to achieve such endeavour. Particularly, it is proposed a reasoning system that uses the information stored in BrainArchive as past knowledge for the identification of individuals that are at risk of contracting some brain disease.
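
The Case-Based Reasoning step sketched below is only an illustration of the retrieve-and-reuse idea mentioned in this abstract: it ranks hypothetical past BrainArchive cases by distance to a new multimodal feature vector and reuses the majority diagnosis. The case structure, feature values, and labels are invented, not the authors' implementation.

    import numpy as np

    # Hypothetical past cases: a multimodal feature summary plus a known outcome.
    cases = [
        {"features": np.array([0.82, 0.10, 0.55]), "diagnosis": "healthy"},
        {"features": np.array([0.30, 0.91, 0.40]), "diagnosis": "at risk"},
        {"features": np.array([0.28, 0.88, 0.45]), "diagnosis": "at risk"},
    ]

    def retrieve(query, case_base, k=2):
        """Return the k past cases closest to the query (Euclidean distance)."""
        ranked = sorted(case_base, key=lambda c: np.linalg.norm(c["features"] - query))
        return ranked[:k]

    # New participant: structural + functional summaries (illustrative numbers only).
    new_case = np.array([0.31, 0.90, 0.42])
    labels = [c["diagnosis"] for c in retrieve(new_case, cases)]
    print(max(set(labels), key=labels.count))   # reuse step: majority diagnosis -> "at risk"
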
3

Bruni, E., N. K. Tran, and M. Baroni. "Multimodal Distributional Semantics." Journal of Artificial Intelligence Research 49 (January 23, 2014): 1–47. http://dx.doi.org/10.1613/jair.4135.

Abstract:
Distributional semantic models derive computational representations of word meaning from the patterns of co-occurrence of words in text. Such models have been a success story of computational linguistics, being able to provide reliable estimates of semantic relatedness for the many semantic tasks requiring them. However, distributional models extract meaning information exclusively from text, which is an extremely impoverished basis compared to the rich perceptual sources that ground human semantic knowledge. We address the lack of perceptual grounding of distributional models by exploiting computer vision techniques that automatically identify discrete “visual words” in images, so that the distributional representation of a word can be extended to also encompass its co-occurrence with the visual words of images it is associated with. We propose a flexible architecture to integrate text- and image-based distributional information, and we show in a set of empirical tests that our integrated model is superior to the purely text-based approach, and it provides somewhat complementary semantic information with respect to the latter.
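
As a minimal sketch of the text-and-vision integration this abstract describes, the snippet below L2-normalizes a textual co-occurrence vector and a visual-word count vector, concatenates them with a mixing weight, and measures multimodal relatedness by cosine similarity. The toy vectors and the weight alpha are assumptions; the paper's actual fusion architecture is more elaborate.

    import numpy as np

    def l2(v):
        n = np.linalg.norm(v)
        return v / n if n > 0 else v

    def fuse(text_vec, visual_vec, alpha=0.5):
        """Weighted concatenation of text-based and visual-word distributional vectors."""
        return np.concatenate([alpha * l2(text_vec), (1 - alpha) * l2(visual_vec)])

    def cosine(a, b):
        return float(np.dot(l2(a), l2(b)))

    # Toy co-occurrence counts for two words (text dimensions and visual words are arbitrary).
    cat_txt, cat_vis = np.array([3.0, 0.0, 5.0]), np.array([7.0, 1.0])
    dog_txt, dog_vis = np.array([2.0, 1.0, 4.0]), np.array([6.0, 2.0])

    print(cosine(fuse(cat_txt, cat_vis), fuse(dog_txt, dog_vis)))   # multimodal relatedness
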
4

Toraldo, Maria Laura, Gazi Islam, and Gianluigi Mangia. "Modes of Knowing." Organizational Research Methods 21, no. 2 (July 14, 2016): 438–65. http://dx.doi.org/10.1177/1094428116657394.

Abstract:
The current article argues that video-based methodologies offer unique potential for multimodal research applications. Multimodal research, further, can respond to the problem of “elusive knowledges,” that is, tacit, aesthetic, and embodied aspects of organizational life that are difficult to articulate in traditional methodological paradigms. We argue that the multimodal qualities of video, including but not limited to its visual properties, provide a scaffold for translating embodied, tacit, and aesthetic knowledge into discursive and textual forms, enabling the representation of organizational knowledge through academic discourse. First, we outline the problem of representation by comparing different forms of elusive knowledge, framing this problem as one of cross-modal translation. Second, we describe how video’s unique affordances place it in an ideal position to address this problem. Third, we demonstrate how video-based solutions can contribute to research, providing examples both from the literature and our own applied case work as models for video-based approaches. Finally, we discuss the implications and limitations of the proposed video approaches as a methodological support.
5

Gül, Davut, and Bayram Costu. "To What Extent Do Teachers of Gifted Students Identify Inner and Intermodal Relations in Knowledge Representation?" Mimbar Sekolah Dasar 8, no. 1 (April 30, 2021): 55–80. http://dx.doi.org/10.53400/mimbar-sd.v8i1.31333.

Abstract:
Gifted students get bored of reading authoritative and descriptive multimodal texts. They need coherent, explanatory, and interactive texts. Moreover, because of the pandemic, gifted students took courses online, and teachers had to conduct their lessons on digital online tools with multimodal representations. They posted supplementary teaching materials as multimodal texts to the students. Hence, teachers of gifted students should pay attention to inner and intermodal relations to meet the needs of gifted students and support their learning experience. The research aims at examining to what extent teachers of gifted students identify inner and intermodal relations because before designing these relations, the teacher should recognize these types of relations. The educational descriptive case study was applied. Six experienced primary school teachers were involved. The data were analyzed via content analysis. The results showed that teachers just identified the primitive level of inner and intermodal relations. The conclusion can be drawn that several educational design research should be increased to construct professional development courses for teachers about this issue. Learning and applying inner and intermodal relations are crucial for teachers of gifted students, in addition to having curiosity, they have a high cognitive level in different areas, thus they demand advanced forms of multimodal texts.
6

Tomskaya, Maria, and Irina Zaytseva. "Multimedia Representation of Knowledge in Academic Discourse." Verbum 8, no. 8 (January 19, 2018): 129. http://dx.doi.org/10.15388/verb.2017.8.11357.

Abstract:
The article focuses on academic presentations created with the help of multimedia programmes. The presentation is regarded as a special form of new academic knowledge representation. An academic presentation is explored as a multimodal phenomenon due to the fact that different channels or modes are activated during its perception. Data perception constitutes a part of the context which in itself is a semiotic event involving various components (an addresser, an addressee, the message itself, the channel of communication and the code). The choice of the code and the channel depends on different factors (type of the audience, the nature of the message, etc). In this way, the information for non-professionals will be most likely presented through visualization with the help of infographics (schemes, figures, charts, etc). Talking about the professional audience the speaker may resort to visualization to a lesser degree or he may not use it at all. His message will be transmitted only with the help of verbal means, which will not prevent the audience from perceiving and understanding new knowledge correctly. The presentation regime of rapid successive slide show may be regarded the heritage of ‘clip thinking’ which is characterized by a non-linear, simultaneous way of information perception. At the present stage of technology development visualization is becoming the most common means of transmitting information in academic discourse, due to peculiarities of data perception by the man of today.
7

Cholewa, Wojciech, Marcin Amarowicz, Paweł Chrzanowski, and Tomasz Rogala. "Development Environment for Diagnostic Multimodal Statement Networks." Key Engineering Materials 588 (October 2013): 74–83. http://dx.doi.org/10.4028/www.scientific.net/kem.588.74.

Abstract:
Development of effective diagnostic systems for the recognition of technical conditions of complex objects or processes requires the use of knowledge from multiple sources. Gathering of diagnostic knowledge acquired from diagnostic experiments as well as independent experts in the form of an information system database is one of the most important stages in the process of designing diagnostic systems. The task can be supported through suitable modeling activities and diagnostic knowledge management. Briefly, this paper presents an example of an application of multimodal diagnostic statement networks for the purpose of knowledge representation. Multimodal statement networks allow for approximate diagnostic reasoning based on a knowledge that is imprecise or even contradictory in part. The authors also describe the software environment REx for the development and testing of multimodal statement networks. The environment is a system for integrating knowledge from various sources and from independent domain experts in particular.
8

Prieto-Velasco, Juan Antonio, and Clara I. López Rodríguez. "Managing graphic information in terminological knowledge bases." Terminology 15, no. 2 (November 11, 2009): 179–213. http://dx.doi.org/10.1075/term.15.2.02pri.

Abstract:
The cognitive shift in Linguistics has affected the way linguists, lexicographers and terminologists understand and describe specialized language, and the way they represent scientific and technical concepts. The representation of terminological knowledge, as part of our encyclopaedic knowledge about the world, is crucial in multimedia terminological knowledge bases, where different media coexist to enhance the multidimensional character of knowledge representations. However, so far little attention has been paid in Terminology and Linguistics to graphic information, including visual resources and pictorial material. Frame-based Terminology (Faber et al. 2005, 2006, 2007, 2008) advocates a multimodal conceptual description in which the structured information in terminographic definitions meshes with visual information for a better understanding of specialized concepts. In this article, we explore the relationship between visual and textual information, and search for a principled way to select images that best represent the linguistic, conceptual and contextual information contained in terminological knowledge bases, in order to contribute to a better transfer of specialized knowledge.
9

Laenen, Katrien, and Marie-Francine Moens. "Learning Explainable Disentangled Representations of E-Commerce Data by Aligning Their Visual and Textual Attributes." Computers 11, no. 12 (December 10, 2022): 182. http://dx.doi.org/10.3390/computers11120182.

Abstract:
Understanding multimedia content remains a challenging problem in e-commerce search and recommendation applications. It is difficult to obtain item representations that capture the relevant product attributes since these product attributes are fine-grained and scattered across product images with huge visual variations and product descriptions that are noisy and incomplete. In addition, the interpretability and explainability of item representations have become more important in order to make e-commerce applications more intelligible to humans. Multimodal disentangled representation learning, where the independent generative factors of multimodal data are identified and encoded in separate subsets of features in the feature space, is an interesting research area to explore in an e-commerce context given the benefits of the resulting disentangled representations such as generalizability, robustness and interpretability. However, the characteristics of real-word e-commerce data, such as the extensive visual variation, noisy and incomplete product descriptions, and complex cross-modal relations of vision and language, together with the lack of an automatic interpretation method to explain the contents of disentangled representations, means that current approaches for multimodal disentangled representation learning do not suffice for e-commerce data. Therefore, in this work, we design an explainable variational autoencoder framework (E-VAE) which leverages visual and textual item data to obtain disentangled item representations by jointly learning to disentangle the visual item data and to infer a two-level alignment of the visual and textual item data in a multimodal disentangled space. As such, E-VAE tackles the main challenges in disentangling multimodal e-commerce data. Firstly, with the weak supervision of the two-level alignment our E-VAE learns to steer the disentanglement process towards discovering the relevant factors of variations in the multimodal data and to ignore irrelevant visual variations which are abundant in e-commerce data. Secondly, to the best of our knowledge our E-VAE is the first VAE-based framework that has an automatic interpretation mechanism that allows to explain the components of the disentangled item representations with text. With our textual explanations we provide insight in the quality of the disentanglement. Furthermore, we demonstrate that with our explainable disentangled item representations we achieve state-of-the-art outfit recommendation results on the Polyvore Outfits dataset and report new state-of-the-art cross-modal search results on the Amazon Dresses dataset.
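
To make the objective behind a framework like E-VAE more tangible, the lines below compute the three loss terms one would typically combine: reconstruction, KL divergence to the prior, and an alignment penalty between a visual factor and a textual attribute embedding. All tensors, dimensions, and weights are fabricated, and no claim is made that this matches the authors' two-level alignment.

    import numpy as np

    rng = np.random.default_rng(0)
    mu, logvar = rng.normal(size=8), rng.normal(size=8)    # encoder outputs (hypothetical)
    x, x_hat = rng.normal(size=32), rng.normal(size=32)    # image features and reconstruction
    z_factor = mu[:4]                                      # factor assumed tied to one attribute
    t_attr = rng.normal(size=4)                            # textual attribute embedding

    recon = np.mean((x - x_hat) ** 2)                                  # reconstruction term
    kl = -0.5 * np.sum(1 + logvar - mu ** 2 - np.exp(logvar))          # KL to the unit Gaussian
    align = 1 - np.dot(z_factor, t_attr) / (np.linalg.norm(z_factor) * np.linalg.norm(t_attr))

    loss = recon + kl + 0.1 * align   # the 0.1 weight is illustrative
    print(round(float(loss), 3))
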
10

Li, Jinghua, Runze Liu, Dehui Kong, Shaofan Wang, Lichun Wang, Baocai Yin, and Ronghua Gao. "Attentive 3D-Ghost Module for Dynamic Hand Gesture Recognition with Positive Knowledge Transfer." Computational Intelligence and Neuroscience 2021 (November 18, 2021): 1–12. http://dx.doi.org/10.1155/2021/5044916.

Abstract:
Hand gesture recognition is a challenging topic in the field of computer vision. Multimodal hand gesture recognition based on RGB-D is with higher accuracy than that of only RGB or depth. It is not difficult to conclude that the gain originates from the complementary information existing in the two modalities. However, in reality, multimodal data are not always easy to acquire simultaneously, while unimodal RGB or depth hand gesture data are more general. Therefore, one hand gesture system is expected, in which only unimodal RGB or depth data is supported for testing, while multimodal RGB-D data is available for training so as to attain the complementary information. Fortunately, a kind of method via multimodal training and unimodal testing has been proposed. However, unimodal feature representation and cross-modality transfer still need to be further improved. To this end, this paper proposes a new 3D-Ghost and Spatial Attention Inflated 3D ConvNet (3DGSAI) to extract high-quality features for each modality. The baseline of 3DGSAI network is Inflated 3D ConvNet (I3D), and two main improvements are proposed. One is 3D-Ghost module, and the other is the spatial attention mechanism. The 3D-Ghost module can extract richer features for hand gesture representation, and the spatial attention mechanism makes the network pay more attention to hand region. This paper also proposes an adaptive parameter for positive knowledge transfer, which ensures that the transfer always occurs from the strong modality network to the weak one. Extensive experiments on SKIG, VIVA, and NVGesture datasets demonstrate that our method is competitive with the state of the art. Especially, the performance of our method reaches 97.87% on the SKIG dataset using only RGB, which is the current best result.
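
The adaptive positive-transfer idea, knowledge flowing only from the stronger modality network to the weaker one, can be gated on validation accuracy as in the sketch below. The KL-based distillation term and the weighting by the accuracy gap are assumptions made for illustration; the paper's exact formulation is not reproduced here.

    import numpy as np

    def kl_div(p, q, eps=1e-8):
        p, q = np.clip(p, eps, 1.0), np.clip(q, eps, 1.0)
        return float(np.sum(p * np.log(p / q)))

    def transfer_loss(probs_rgb, probs_depth, acc_rgb, acc_depth):
        """Distil from the currently stronger modality into the weaker one only."""
        gap = acc_rgb - acc_depth
        if gap > 0:    # RGB is stronger: pull the depth predictions towards RGB
            return gap * kl_div(probs_rgb, probs_depth)
        return -gap * kl_div(probs_depth, probs_rgb)

    p_rgb = np.array([0.7, 0.2, 0.1])     # softmax outputs for one gesture sample (toy)
    p_depth = np.array([0.4, 0.4, 0.2])
    print(transfer_loss(p_rgb, p_depth, acc_rgb=0.93, acc_depth=0.88))
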
11

Hocaoğlu, Cem, and Arthur C. Sanderson. "Multimodal Function Optimization Using Minimal Representation Size Clustering and Its Application to Planning Multipaths." Evolutionary Computation 5, no. 1 (March 1997): 81–104. http://dx.doi.org/10.1162/evco.1997.5.1.81.

Abstract:
A novel genetic algorithm (GA) using minimal representation size cluster (MRSC) analysis is designed and implemented for solving multimodal function optimization problems. The problem of multimodal function optimization is framed within a hypothesize-and-test paradigm using minimal representation size (minimal complexity) for species formation and a GA. A multiple-population GA is developed to identify different species. The number of populations, thus the number of different species, is determined by the minimal representation size criterion. Therefore, the proposed algorithm reveals the unknown structure of the multimodal function when a priori knowledge about the function is unknown. The effectiveness of the algorithm is demonstrated on a number of multimodal test functions. The proposed scheme results in a highly parallel algorithm for finding multiple local minima. In this paper, a path-planning algorithm is also developed based on the MRSC-GA algorithm. The algorithm utilizes MRSC_GA for planning paths for mobile robots, piano-mover problems, and N-link manipulators. The MRSC_GA is used for generating multipaths to provide alternative solutions to the path-planning problem. The generation of alternative solutions is especially important for planning paths in dynamic environments. A novel iterative multiresolution path representation is used as a basis for the GA coding. The effectiveness of the algorithm is demonstrated on a number of two-dimensional path-planning problems.
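
A stripped-down multiple-population GA helps picture how separate species can each settle on a different optimum of a multimodal function. The sketch below fixes the number of subpopulations by hand and seeds them in different regions; the minimal-representation-size criterion that the paper uses to decide how many species to keep is deliberately omitted.

    import numpy as np

    rng = np.random.default_rng(1)

    def f(x):                            # multimodal test function on [0, 10]
        return np.sin(3 * x) + 0.5 * np.sin(7 * x)

    def evolve(pop, generations=60, sigma=0.1):
        for _ in range(generations):
            parents = pop[np.argsort(f(pop))[-len(pop) // 2:]]       # truncation selection
            pop = np.clip(rng.choice(parents, size=len(pop))
                          + rng.normal(0, sigma, len(pop)), 0, 10)   # Gaussian mutation
        return pop

    # Seed each subpopulation in its own region so the species stay separated.
    subpops = [rng.uniform(lo, lo + 2.5, size=30) for lo in (0.0, 2.5, 5.0, 7.5)]
    optima = [float(p[np.argmax(f(p))]) for p in (evolve(p) for p in subpops)]
    print([round(x, 2) for x in optima])   # one local maximiser per subpopulation
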
12

Jaiswal, Mimansa, and Emily Mower Provost. "Privacy Enhanced Multimodal Neural Representations for Emotion Recognition." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 7985–93. http://dx.doi.org/10.1609/aaai.v34i05.6307.

Abstract:
Many mobile applications and virtual conversational agents now aim to recognize and adapt to emotions. To enable this, data are transmitted from users' devices and stored on central servers. Yet, these data contain sensitive information that could be used by mobile applications without user's consent or, maliciously, by an eavesdropping adversary. In this work, we show how multimodal representations trained for a primary task, here emotion recognition, can unintentionally leak demographic information, which could override a selected opt-out option by the user. We analyze how this leakage differs in representations obtained from textual, acoustic, and multimodal data. We use an adversarial learning paradigm to unlearn the private information present in a representation and investigate the effect of varying the strength of the adversarial component on the primary task and on the privacy metric, defined here as the inability of an attacker to predict specific demographic information. We evaluate this paradigm on multiple datasets and show that we can improve the privacy metric while not significantly impacting the performance on the primary task. To the best of our knowledge, this is the first work to analyze how the privacy metric differs across modalities and how multiple privacy concerns can be tackled while still maintaining performance on emotion recognition.
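
The adversarial unlearning scheme boils down to one signed combination of losses: minimise the emotion-recognition loss while maximising the loss of a demographic adversary that reads the same representation. A toy version of that objective is given below; the probability vectors, labels, and strength parameter lam are placeholders, and the gradient-reversal mechanics are only noted in the comment.

    import numpy as np

    def cross_entropy(probs, label, eps=1e-8):
        return -float(np.log(max(float(probs[label]), eps)))

    def combined_loss(emotion_probs, emotion_y, demo_probs, demo_y, lam=0.5):
        """Encoder objective: keep emotion accuracy, destroy demographic predictability.

        The adversary itself is trained to minimise its own cross-entropy; the encoder
        sees the reversed sign below, which is the effect of a gradient-reversal layer.
        """
        return cross_entropy(emotion_probs, emotion_y) - lam * cross_entropy(demo_probs, demo_y)

    emo = np.array([0.1, 0.7, 0.2])   # toy softmax over {neutral, happy, sad}
    dem = np.array([0.6, 0.4])        # adversary's toy softmax over a demographic attribute
    print(round(combined_loss(emo, emotion_y=1, demo_probs=dem, demo_y=0), 3))
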
13

Chu, Hanlu, Haien Zeng, Hanjiang Lai, and Yong Tang. "Efficient modal-aware feature learning with application in multimodal hashing." Intelligent Data Analysis 26, no. 2 (March 14, 2022): 345–60. http://dx.doi.org/10.3233/ida-215780.

Abstract:
Many retrieval applications can benefit from multiple modalities, for which how to represent multimodal data is the critical component. Most deep multimodal learning methods typically involve two steps to construct the joint representations: 1) learning of multiple intermediate features, with each intermediate feature corresponding to a modality, using separate and independent deep models; 2) merging the intermediate features into a joint representation using a fusion strategy. However, in the first step, these intermediate features do not have previous knowledge of each other and cannot fully exploit the information contained in the other modalities. In this paper, we present a modal-aware operation as a generic building block to capture the non-linear dependencies among the heterogeneous intermediate features, which can learn the underlying correlation structures in other multimodal data as soon as possible. The modal-aware operation consists of a kernel network and an attention network. The kernel network is utilized to learn the non-linear relationships with other modalities. The attention network finds the informative regions of these modal-aware features that are favorable for retrieval. We verify the proposed modal-aware feature learning in the multimodal hashing task. The experiments conducted on three public benchmark datasets demonstrate significant improvements in the performance of our method relative to state-of-the-art methods.
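
As a loose illustration of a modal-aware block built from a kernel network and an attention network, the code below lets one modality's intermediate feature absorb information from the other, scaled by an RBF kernel similarity and a learned scalar gate. Every dimension, the kernel choice, and the gate parameters are assumptions made for the sketch.

    import numpy as np

    rng = np.random.default_rng(2)

    def rbf(a, b, gamma=0.5):
        return np.exp(-gamma * np.sum((a - b) ** 2))

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def modal_aware(own, other, w_gate):
        """Mix 'own' features with the other modality, gated by an attention scalar."""
        similarity = rbf(own, other)                             # kernel part: non-linear relation
        gate = sigmoid(w_gate @ np.concatenate([own, other]))    # attention part: scalar gate
        return own + gate * similarity * other                   # modal-aware feature

    img = rng.normal(size=16)        # intermediate image feature (toy)
    txt = rng.normal(size=16)        # intermediate text feature (toy)
    w = rng.normal(size=32) * 0.1    # stand-in for learned gate parameters
    print(np.round(modal_aware(img, txt, w)[:4], 3))
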
14

He, Yang, Ling Tian, Lizong Zhang, and Xi Zeng. "Knowledge Graph Representation Fusion Framework for Fine-Grained Object Recognition in Smart Cities." Complexity 2021 (July 13, 2021): 1–9. http://dx.doi.org/10.1155/2021/8041029.

Abstract:
Autonomous object detection powered by cutting-edge artificial intelligent techniques has been an essential component for sustaining complex smart city systems. Fine-grained image classification focuses on recognizing subcategories of specific levels of images. As a result of the high similarity between images in the same category and the high dissimilarity in the same subcategories, it has always been a challenging problem in computer vision. Traditional approaches usually rely on exploring only the visual information in images. Therefore, this paper proposes a novel Knowledge Graph Representation Fusion (KGRF) framework to introduce prior knowledge into fine-grained image classification task. Specifically, the Graph Attention Network (GAT) is employed to learn the knowledge representation from the constructed knowledge graph modeling the categories-subcategories and subcategories-attributes associations. By introducing the Multimodal Compact Bilinear (MCB) module, the framework can fully integrate the knowledge representation and visual features for learning the high-level image features. Extensive experiments on the Caltech-UCSD Birds-200-2011 dataset verify the superiority of our proposed framework over several existing state-of-the-art methods.
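
The Multimodal Compact Bilinear (MCB) module used for fusion here is a published trick that approximates the outer product of two vectors with count sketches multiplied in the Fourier domain. Below is a plain numpy rendering of that trick with an arbitrary output dimension; the GAT that produces the knowledge embedding is not modelled, and in practice the hash functions would be fixed once rather than redrawn per call.

    import numpy as np

    rng = np.random.default_rng(3)

    def count_sketch(x, h, s, d):
        """Project x into d dimensions using hash indices h and random signs s."""
        y = np.zeros(d)
        np.add.at(y, h, s * x)
        return y

    def mcb(v, k, d=64):
        """Compact bilinear pooling of visual features v and knowledge-graph features k."""
        hv, sv = rng.integers(0, d, size=len(v)), rng.choice([-1.0, 1.0], size=len(v))
        hk, sk = rng.integers(0, d, size=len(k)), rng.choice([-1.0, 1.0], size=len(k))
        # Circular convolution of the two sketches approximates the sketched outer product.
        return np.fft.irfft(np.fft.rfft(count_sketch(v, hv, sv, d)) *
                            np.fft.rfft(count_sketch(k, hk, sk, d)), n=d)

    visual = rng.normal(size=128)       # CNN image feature (toy)
    knowledge = rng.normal(size=32)     # knowledge-graph embedding (toy)
    print(mcb(visual, knowledge).shape)   # (64,)
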
15

Feng, Debing, and Xiangxiang Wu. "Coronavirus, Demons, and War: Visual and Multimodal Metaphor in Chinese Public Service Advertisements." SAGE Open 12, no. 1 (January 2022): 215824402210788. http://dx.doi.org/10.1177/21582440221078855.

Abstract:
Metaphors in public service advertisements, or PSAs, have played an important role in promoting the knowledge of COVID-19 and China’s anti-epidemic activities. Based primarily on Feng and O’Halloran’s visual representation of multimodal metaphor, this article examines visual and multimodal metaphors created in the online PSAs that were produced in early 2020 to publicize China’s epidemic prevention and control activities. It is found that those metaphors fall into three general groups, namely “coronavirus” metaphor, “anti-epidemic worker” metaphor, and “medical instrument” metaphor. Nearly all of them were created to serve an overarching metaphor, namely ANTI-EPIDEMIC WORK IS WAR, of which coronaviruses were depicted as enemies, anti-epidemic workers as warriors, and medical instruments as weapons. Most of the metaphors were constructed through visual or multimodal anomaly realized through strategies such as participant substitution, verbal/visual superimposition, and verbo-visual integration/fusion in the representational structure, while their metaphorical meanings became supplemented or reinforced by the deployment of compositional and interactive resources such as spatial position, color contrast, gaze, and size. Finally, the causes and implications of the findings are discussed from three aspects: social background, genre, and audience.
16

Zhong, Dong, Yi-An Zhu, Lanqing Wang, Junhua Duan, and Jiaxuan He. "A Cognition Knowledge Representation Model Based on Multidimensional Heterogeneous Data." Complexity 2020 (December 28, 2020): 1–17. http://dx.doi.org/10.1155/2020/8812459.

Abstract:
The information in the working environment of industrial Internet is characterized by diversity, semantics, hierarchy, and relevance. However, the existing representation methods of environmental information mostly emphasize the concepts and relationships in the environment and have an insufficient understanding of the items and relationships at the instance level. There are also some problems such as low visualization of knowledge representation, poor human-machine interaction ability, insufficient knowledge reasoning ability, and slow knowledge search speed, which cannot meet the needs of intelligent and personalized service. Based on this, this paper designs a cognitive information representation model based on a knowledge graph, which combines the perceptual information of industrial robot ontology with semantic description information such as functional attributes obtained from the Internet to form a structured and logically reasoned cognitive knowledge graph including perception layer and cognition layer. Aiming at the problem that the data sources of the knowledge base for constructing the cognitive knowledge graph are wide and heterogeneous, and there are entity semantic differences and knowledge system differences among different data sources, a multimodal entity semantic fusion model based on vector features and a system fusion framework based on HowNet are designed, and the environment description information such as object semantics, attributes, relations, spatial location, and context acquired by industrial robots and their own state information are unified and standardized. The automatic representation of robot perceived information is realized, and the universality, systematicness, and intuition of robot cognitive information representation are enhanced, so that the cognition reasoning ability and knowledge retrieval efficiency of robots in the industrial Internet environment can be effectively improved.
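
One concrete step behind the entity semantic fusion described here is deciding, from vector features alone, whether two entities from different sources denote the same thing. The toy alignment below pairs entities whose embeddings exceed a cosine-similarity threshold; the entity names, vectors, and threshold are fabricated, and the HowNet-based system fusion is not modelled.

    import numpy as np

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Hypothetical entity embeddings coming from two heterogeneous knowledge sources.
    source_a = {"robotic arm": np.array([0.90, 0.10, 0.30]), "conveyor": np.array([0.20, 0.80, 0.10])}
    source_b = {"manipulator": np.array([0.88, 0.15, 0.28]), "belt": np.array([0.25, 0.75, 0.20])}

    def align(src_a, src_b, threshold=0.98):
        """Pair entities across sources whose vector features are nearly identical."""
        pairs = []
        for name_a, vec_a in src_a.items():
            name_b, score = max(((n, cosine(vec_a, v)) for n, v in src_b.items()),
                                key=lambda t: t[1])
            if score >= threshold:
                pairs.append((name_a, name_b, round(score, 3)))
        return pairs

    print(align(source_a, source_b))
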
17

Nguyen, Nhu Van, Alain Boucher, and Jean-Marc Ogier. "Keyword Visual Representation for Image Retrieval and Image Annotation." International Journal of Pattern Recognition and Artificial Intelligence 29, no. 06 (August 12, 2015): 1555010. http://dx.doi.org/10.1142/s0218001415550101.

Abstract:
Keyword-based image retrieval is more comfortable for users than content-based image retrieval. Because of the lack of semantic description of images, image annotation is often used a priori by learning the association between the semantic concepts (keywords) and the images (or image regions). This association issue is particularly difficult but interesting because it can be used for annotating images but also for multimodal image retrieval. However, most of the association models are unidirectional, from image to keywords. In addition to that, existing models rely on a fixed image database and prior knowledge. In this paper, we propose an original association model, which provides image-keyword bidirectional transformation. Based on the state-of-the-art Bag of Words model dealing with image representation, including a strategy of interactive incremental learning, our model works well with a zero-or-weak-knowledge image database and evolving from it. Some objective quantitative and qualitative evaluations of the model are proposed, in order to highlight the relevance of the method.
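
The image side of such an association model usually starts from a bag-of-visual-words histogram. The sketch below quantizes synthetic local descriptors against a random codebook, builds one histogram per image, and represents each keyword by the mean histogram of its annotated images, which already supports lookups in both directions. Codebook, descriptors, and annotations are all synthetic stand-ins rather than the authors' model.

    import numpy as np

    rng = np.random.default_rng(4)
    codebook = rng.normal(size=(5, 8))           # 5 visual words, 8-D local descriptors (toy)

    def bovw(descriptors):
        """Normalized histogram of nearest visual words for one image."""
        ids = np.argmin(((descriptors[:, None, :] - codebook[None]) ** 2).sum(-1), axis=1)
        hist = np.bincount(ids, minlength=len(codebook)).astype(float)
        return hist / hist.sum()

    images = {name: bovw(rng.normal(size=(30, 8))) for name in ("img1", "img2", "img3")}
    annotations = {"img1": ["beach"], "img2": ["beach", "boat"], "img3": ["city"]}

    # Keyword model = mean histogram of the images carrying that keyword.
    keyword_hists = {}
    for img, kws in annotations.items():
        for kw in kws:
            keyword_hists.setdefault(kw, []).append(images[img])
    keyword_hists = {kw: np.mean(h, axis=0) for kw, h in keyword_hists.items()}

    # Image -> keyword direction: pick the keyword whose model best matches the query histogram.
    print(max(keyword_hists, key=lambda kw: float(np.dot(images["img1"], keyword_hists[kw]))))
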
18

Höllerer, Markus A., Dennis Jancsary, and Maria Grafström. "‘A Picture is Worth a Thousand Words’: Multimodal Sensemaking of the Global Financial Crisis." Organization Studies 39, no. 5-6 (April 7, 2018): 617–44. http://dx.doi.org/10.1177/0170840618765019.

Abstract:
Through its specific rhetorical potential that is distinct from verbal text, visual material facilitates and plays a pivotal role in linking novel phenomena to established and taken-for-granted social categories and discourses within the social stock of knowledge. Employing data from the worldwide news coverage of the global financial crisis in the Financial Times between 2008 and 2012, we analyse sensemaking and sensegiving efforts in the business media. We identify a set of specific multimodal compositions that construct and shape a limited number of narratives on the global financial crisis through distinct relationships between visual and verbal text. By outlining how multimodal compositions enhance representation, theorization, resonance, and perceived validity of narratives, we contribute to the phenomenological tradition in institutional organization theory and to research on multimodal meaning construction. We argue that elaborate multimodal compositions of verbal text, images, and other visual artifacts constitute a key resource for sensemaking and, consequently, sensegiving.
19

Li, Mimi, and Julie Dell-Jones. "The same topic, different products: Pre-/in-service teachers’ linguistic knowledge representation in a multimodal project." Computers and Composition 67 (March 2023): 102754. http://dx.doi.org/10.1016/j.compcom.2023.102754.

20

Ahmad, Arinal Haqqiyah, Bukhari Daud, and Dohra Fitrisia. "The analysis of best-seller fantasy novel covers in 2019 through multimodal lens." English Education Journal 12, no. 1 (January 31, 2021): 1–18. http://dx.doi.org/10.24815/eej.v12i1.19140.

Abstract:
The purpose of this research was to analyze ten covers of 2019 best-seller fantasy novels through multimodal. The research method used was qualitative research. The objects in this research were ten book covers of 2019 best-seller fantasy novels. The instruments used were documentation that aimed at obtaining data, including relevant books, study, activity reporting, relevant research data. Content analysis was used to obtain the data. This study used five phased cycles in analyzing the data; compiling, disassembling, reassembling, interpreting, and concluding. The result of analyzing the novels is emphasized in two focuses, including representation and interactive function. Several novels have a narrative aspect, while others contain conceptual interpretation, which is part of a representative function. It was very challenging to interpret some implicit meaning of the symbols in some of the novels as it requires mythical knowledge. Therefore, it is expected that understanding the implicit meaning comprehensively will make readers easier to understand the story outline of the novel.
21

Masita, Ella. "The Representation of Indonesian National Identity in English Textbook." Indonesian Research Journal in Education |IRJE| 5, no. 1 (June 17, 2021): 226–44. http://dx.doi.org/10.22437/irje.v5i1.10599.

Abstract:
This article interrogates the conceptualization of Indonesian national identity from the point of view of the Indonesian government. The data were taken from a mandatory English textbook for Grade XI. Through the lens of Representation theory, this research explores the key issues within the textbook. In analyzing the data, a multimodal discourse analysis is utilized, specifically through the verbal analysis and visual analysis of the texts within the textbook. The results of analysis reveal that there are four themes namely: spirituality and morality, personal attribute, nationalism, and knowledge and scientific attitude. However, the research results indicate the inclusion of selected values and norms of personal attributes, the unbalanced portion of the themes within the textbook and minimal representation of the knowledge development, specifically in regards to the development of English skills. Apart from that, it is realized that textbook is only one part of elements in English teaching process. Therefore, further studies with a broader scope of investigation are required to achieve more comprehensive information about how Indonesian national identity in conceptualized in English teaching.
22

Zhang, Jing, Meng Chen, Jie Liu, Dongdong Peng, Zong Dai, Xiaoyong Zou, and Zhanchao Li. "A Knowledge-Graph-Based Multimodal Deep Learning Framework for Identifying Drug–Drug Interactions." Molecules 28, no. 3 (February 3, 2023): 1490. http://dx.doi.org/10.3390/molecules28031490.

Abstract:
The identification of drug–drug interactions (DDIs) plays a crucial role in various areas of drug development. In this study, a deep learning framework (KGCN_NFM) is presented to recognize DDIs using coupling knowledge graph convolutional networks (KGCNs) with neural factorization machines (NFMs). A KGCN is used to learn the embedding representation containing high-order structural information and semantic information in the knowledge graph (KG). The embedding and the Morgan molecular fingerprint of drugs are then used as input of NFMs to predict DDIs. The performance and effectiveness of the current method have been evaluated and confirmed based on the two real-world datasets with different sizes, and the results demonstrate that KGCN_NFM outperforms the state-of-the-art algorithms. Moreover, the identified interactions between topotecan and dantron by KGCN_NFM were validated through MTT assays, apoptosis experiments, cell cycle analysis, and molecular docking. Our study shows that the combination therapy of the two drugs exerts a synergistic anticancer effect, which provides an effective treatment strategy against lung carcinoma. These results reveal that KGCN_NFM is a valuable tool for integrating heterogeneous information to identify potential DDIs.
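
The NFM half of this pipeline is easiest to see through its bi-interaction pooling, which condenses all pairwise feature interactions into one vector. The sketch applies it to a drug pair whose features are a made-up knowledge-graph embedding concatenated with a few fingerprint bits; the KGCN, the learned embeddings, and real Morgan fingerprints are not reproduced.

    import numpy as np

    rng = np.random.default_rng(5)

    def bi_interaction(feature_embeddings):
        """NFM bi-interaction pooling: 0.5 * ((sum_i e_i)^2 - sum_i e_i^2), elementwise."""
        s = feature_embeddings.sum(axis=0)
        return 0.5 * (s ** 2 - (feature_embeddings ** 2).sum(axis=0))

    def ddi_score(drug_a, drug_b, w, b=0.0):
        """Toy drug-drug interaction probability from the pooled pairwise interactions."""
        pooled = bi_interaction(np.vstack([drug_a, drug_b]))
        return float(1.0 / (1.0 + np.exp(-(w @ pooled + b))))     # sigmoid link

    # Each drug: 8-D knowledge-graph embedding + 8 fingerprint bits (all invented).
    drug_a = np.concatenate([rng.normal(size=8), rng.integers(0, 2, size=8)])
    drug_b = np.concatenate([rng.normal(size=8), rng.integers(0, 2, size=8)])
    w = rng.normal(size=16) * 0.1
    print(round(ddi_score(drug_a, drug_b, w), 3))
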
23

Li, Ke, and Sang-Bing Tsai. "An Empirical Study on the Countermeasures of Implementing 5G Multimedia Network Technology in College Education." Mobile Information Systems 2021 (October 13, 2021): 1–14. http://dx.doi.org/10.1155/2021/2547648.

Abstract:
Aiming at the problem of 5G multimedia heterogeneous multimodal network representation learning, this paper proposes a collaborative multimodal heterogeneous network representation learning method based on attention mechanism. This method learns different representations for nodes based on heterogeneous network structure information and multimodal content and designs an attention mechanism to learn weights for different representations to fuse them to obtain robust node representations. Combining the general process of exploring the college physical education model and the characteristics of the multimedia network classroom environment, this article constructs the process of exploring the college physical education teaching model of the multimedia network classroom. Through the research and practice of the inquiry college physical education teaching model in the multimedia network classroom, it is verified that the implementation of the inquiry college physical education teaching in the multimedia network classroom can effectively influence and increase the students’ interest in learning and stimulate the students’ inner learning motivation. Through the guidance and training of teachers, a variety of disciplines can be used to carry out college physical education in multimedia network classrooms, so that the integration between courses can be truly realized, with the aim that all courses can share the excellent results brought by the development of modern education technology. More educators understand, accept, and participate in the practice of college physical education based on multimedia network classrooms and better serve the education of college physical education. The construction of the college physical education evaluation system should be combined with the characteristics of the 5G multimedia network era. The evaluation process includes data collection, data analysis, result output, and result feedback. Each link is an indispensable part of the college physical education evaluation process. Based on the relevant knowledge of the 5G multimedia network, the evaluation indicators determined in this study can basically reflect the various elements of the physical education process in colleges and universities. The distribution of index weight coefficients is more scientific and reasonable. Compared with the current system, the college physical education evaluation system constructed by exploration has a certain degree of objectivity and scientificity. Therefore, it is feasible to apply the 5G multimedia network to the evaluation of college physical education.
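
The representation-learning portion of this abstract, attention weights learned over several per-node representations that are then fused, reduces to a softmax-weighted sum, sketched below. The scoring function, the matrices W and v, and the three toy representations are assumptions standing in for whatever the structural and multimodal encoders would produce.

    import numpy as np

    rng = np.random.default_rng(6)

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    def fuse_node_representations(reps, W, v):
        """Score each candidate representation, softmax the scores, return the weighted sum."""
        scores = np.array([v @ np.tanh(W @ r) for r in reps])
        weights = softmax(scores)
        return weights, sum(w * r for w, r in zip(weights, reps))

    d = 12
    structure_rep = rng.normal(size=d)    # from the heterogeneous network structure (toy)
    text_rep = rng.normal(size=d)         # from textual content (toy)
    image_rep = rng.normal(size=d)        # from visual content (toy)
    W, v = rng.normal(size=(d, d)) * 0.1, rng.normal(size=d) * 0.1

    weights, node_vec = fuse_node_representations([structure_rep, text_rep, image_rep], W, v)
    print(np.round(weights, 3), node_vec.shape)
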
24

Hurdley, Rachel, and Bella Dicks. "In-between practice: working in the ‘thirdspace’ of sensory and multimodal methodology." Qualitative Research 11, no. 3 (June 2011): 277–92. http://dx.doi.org/10.1177/1468794111399837.

Abstract:
This article discusses how emergent sensory and multimodal methodologies can work in interaction to produce innovative social enquiry. A juxtaposition of two research projects — an ethnography of corridors and a mixed methods study of multimodal authoring and ‘reading’ practices — opened up this encounter. Sensory ethnography within social research methods aims to create empathetic, experiential ways of knowing participants’ and researchers’ worlds. The linguistic field of multimodality offers a rather different framework for research attending to the visual, material and acoustic textures of participants’ interactions. While both these approaches address the multidimensional character of social worlds, the ‘sensory turn’ centres the sensuous, bodied person — participant, researcher and audience/reader — as the ‘place’ for intimate, affective forms of knowing. In contrast, multimodal knowledge production is premised on multiple analytic gaps — between modes and media, participants and materials, recording and representation. Eliciting the tensions between sensorial closeness and modal distances offers a new space for reflexive research practice and multiple ways of knowing social worlds.
25

Reyes-Torres, Agustín, Matilde Portalés-Raga, and Clara Torres-Mañá. "The potential of sound picturebooks as multimodal narratives." AILA Review 34, no. 2 (December 31, 2021): 300–324. http://dx.doi.org/10.1075/aila.21006.rey.

Abstract:
Abstract In this article, we study how Sound Picturebooks constitute a multimodal narrative that enables students to develop their literacy, not only in terms of basic reading and writing skills, but also as a multidimensional interaction with other forms of representation such as images, sounds and actions. In line with the aims of the Pedagogy of Multiliteracies (New London Group 1996), we select and analyze fifteen Sound Picturebooks whose features allows us to implement the Learning by Design tenets and the four pedagogical components of the Knowledge Processes Framework: experiencing, conceptualizing, analyzing and applying. The goal is to foster basic multimodal literacies – literary, linguistic, visual and musical – and provide learners with the opportunity to construct meaning as a dynamic process of transformation and creative inquiry. Specifically, we explore the auditory features that these Sound Picturebooks contain and the extent to which the themes conveyed in the stories can be connected with the United Nations Sustainable Development Goals for the further discussion of social concerns. Our analyzes show that such multimodal narratives integrate crucial features to cultivate and broaden students’ multiliteracies in the classroom.
26

Korolainen, Kari Tapio. "The Handwork of Folkloristic-Ethnological Knowledge." Ethnologia Fennica 44 (December 31, 2017): 35–51. http://dx.doi.org/10.23991/ef.v44i0.59693.

Abstract:
Drawing is discussed here, both from the historical and from the contemporary folklore and material culture stance. Folklore collector Samuli Paulaharju’s (1875–1944) drawings serve as a point of departure; again, cultural studies constitute the background, as the notion of representation and the construction of folkloristic-ethnologic knowledge are stressed. Material and visual culture comprises yet other central viewpoints. The research material consists of Paulaharju’s folkloristic descriptions (at the SKS) of the interlacements, as knots and lattices. The materials are discussed in the context of magic and belief, at first, and of folk games and plays further back. The research question is: how Paulaharju constructs the meanings of the interlacements by means of drawings? The method of membership categorization analysis (MCA) is combined with multimodal analysis, since the drawing–texts relations are analysed in detail. Thus, the examination demonstrates, that not only several drawing methods are utilised, but also the contexts, as agrarian life, appear diversified when the drawings are concerned. Then, by applying drawing innovatively and experimenting with it, Paulaharju operated between distinct viewpoints, and also challenged the established folkloristic practises. Accordingly, wide interestedness and learning-by-drawing are emphasised more than drawing as a restricted – or restrictive – orientation.
27

Ellis, Elizabeth Marrkilyi, Jennifer Green, and Inge Kral. "Family in mind." Research on Children and Social Interaction 1, no. 2 (December 18, 2017): 164–98. http://dx.doi.org/10.1558/rcsi.28442.

Abstract:
In the Ngaanyatjarra Lands in remote Western Australia children play a guessing game called mama mama ngunytju ngunytju ‘father father mother mother’. It is mainly girls who play the game, along with other members of their social network, including age-mates, older kin and adults. They offer clues about target referents and establish mutual understandings through multimodal forms of representation that include semi-conventionalized drawings on the sand. In this paper we show how speech, gesture, and graphic schemata are negotiated and identify several recurrent themes, particularly focusing on the domains of kinship and spatial awareness. We discuss the implications this case study has for understanding the changing nature of language socialization in remote Indigenous Australia. Multimodal analyses of games and other indirect teaching routines deepen our understandings of the acquisition of cultural knowledge and the development of communicative competence in this context.
28

Rayón Rumayor, Laura, Ana María De las Heras Cuenca, and José Hernández Ortega. "Didáctica universitaria híbrida: identidad digital creativa y multimodalidad." Altre Modernità, no. 27 (May 30, 2022): 48–64. http://dx.doi.org/10.54103/2035-7680/17876.

Abstract:
The incipient interest in hybrid and collaborative didactics in times of COVID-19 highlights the fundamental role of digital hybrids in the training of university students. New mobile devices allow the ubiquity of teaching-learning processes. But they not only allow the possibility of sharing and communicating at any time and place. In these devices, different tools that integrate different representation systems converge, and offer new forms of construction, representation and communication of knowledge. From the questioning of the hegemonic technological culture, and from a didactic approach that advocates comprehensive and critical teaching, we want to present the creative functions of mobile devices so that students can build their own texts, new multimodal narratives, for the construction of a creative digital identity. An identity that allows them to perceive, analyse, reflect, produce, communicate and participate in their environment in a collaborative way. We present a transgressive didactic proposal based on photographic narration that displays new semiotic ways of building knowledge and making it known. We discuss its value to rethink other functions of academic knowledge and assessment, beyond achievement tests, critical dimensions in the University.
29

Kirova, Anna, Christine Massing, Larry Prochner, and Ailie Cleghorn. "Shaping the “Habits of mind” of diverse learners in early childhood teacher education programs through powerpoint: An illustrative case." Journal of Pedagogy 7, no. 1 (June 1, 2016): 59–78. http://dx.doi.org/10.1515/jped-2016-0004.

Abstract:
Abstract This study examines the use of PowerPoint as a teaching tool in a workplace- embedded program aimed at bridging immigrant/refugee early childhood educators into post-secondary studies, and how, in the process, it shapes students’ “habits of mind” (Turkle, 2004). The premise of the study is that it is not only the bodies of knowledge shaping teacher education programs which must be interrogated, but also the ways in which instructors and programs choose to represent and impart these understandings to students. The use of PowerPoint to advance an authoritative western, linear, rule-governed form of logic is analyzed based on McLuhan and McLuhan’s (1988) and Adams’ (2006) tetrads. The findings demonstrate that Power- Point enhances western authoritative ways of being through its modes of communication and representation, means of organizing information, forms of representing content and pedagogical approaches, thus obsolescing or displacing immigrant/refugee students’ own indigenous ways of knowing. Since learning always involves the development, integration, and reorganization of tools, and the medium is an extension of the self (McLuhan, 2003), the students should have multimodal opportunities to engage with and represent knowledge. When such opportunities are not provided, the life experiences and cultural knowledges of immigrant/refugee students are silenced. Expanding communicative and representative forms in early childhood teacher education programs is necessary to promote a more inclusive environment.
30

Liao, Danlu. "Construction of Knowledge Graph English Online Homework Evaluation System Based on Multimodal Neural Network Feature Extraction." Computational Intelligence and Neuroscience 2022 (May 13, 2022): 1–12. http://dx.doi.org/10.1155/2022/7941414.

Abstract:
This paper defines the data schema of the multimodal knowledge graph, that is, the definition of entity types and relationships between entities. The knowledge point entities are defined as three types of structures, algorithms, and related terms, speech is also defined as one type of entities, and six semantic relationships are defined between entities. This paper adopts a named entity recognition model that combines bidirectional long short-term memory network and convolutional neural network, combines local information and global information of text, uses conditional random field algorithm to label feature sequences, and combines domain dictionary. A knowledge evaluation method based on triplet context information is designed, which combines triplet context information (internal relationship path information in knowledge graph and external text information related to entities in triplet) through knowledge representation learning. The knowledge of triples is evaluated. The knowledge evaluation ability of the English online homework evaluation system was evaluated on the knowledge graph noise detection task, the knowledge graph completion task (entity link prediction task), and the triplet classification task. The experimental results show that the English online homework evaluation system has good noise processing ability and knowledge credibility calculation ability, and has a stronger evaluation ability for low-noise data. Using the online homework platform to implement personalized English homework is conducive to improving students’ homework mood, and students’ “happy” homework mood has been significantly improved. The implementation of English personalized homework based on the online homework platform is conducive to improving students’ homework initiative. With the help of the online homework platform to implement personalized English homework, students’ homework time has been reduced, and the homework has been completed well, achieving the purpose of “reducing burden and increasing efficiency.”
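
Triple credibility scoring of the kind described, an embedding-based plausibility term plus context information, can be sketched with a TransE-style distance and a cosine bonus for agreement with a context vector. The entities, relation, context encoding, and mixing weight alpha below are all invented and only stand in for the paper's learned representations.

    import numpy as np

    rng = np.random.default_rng(7)
    dim = 10
    entities = {e: rng.normal(size=dim) for e in ("quicksort", "divide_and_conquer", "stack")}
    relations = {"uses_strategy": rng.normal(size=dim)}

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def triple_score(h, r, t, context_vec, alpha=0.3):
        """Higher is more credible: TransE-style term plus agreement with the context."""
        transe = -np.linalg.norm(entities[h] + relations[r] - entities[t])
        context = cosine(entities[h] + entities[t], context_vec)   # path / text context embedding
        return float(transe + alpha * context)

    ctx = rng.normal(size=dim)   # stand-in for the encoded relation paths and external text
    print(round(triple_score("quicksort", "uses_strategy", "divide_and_conquer", ctx), 3))
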
31

Dourlens, Sébastien, and Amar Ramdane-Cherif. "Cognitive Memory for Semantic Agents Architecture in Robotic Interaction." International Journal of Cognitive Informatics and Natural Intelligence 5, no. 1 (January 2011): 43–58. http://dx.doi.org/10.4018/jcini.2011010103.

Abstract:
Since 1960, AI researchers have worked on intelligent and reactive architectures capable of managing multiple events and acts in the environment. This issue is part of the Robotics domain. An extraction of meaning at different levels of abstraction and the decision process must be implemented in the robot brain to accomplish the multimodal interaction with humans in a human environment. This paper presents a semantic agents architecture giving the robot the ability to understand what is happening and thus provide more robust responses. Intelligence and knowledge about objects like behaviours in the environment are stored in two ontologies linked to an inference engine. To store and exchange information, an event knowledge representation language is used by semantic agents. This architecture brings other advantages: pervasive, cooperating, redundant, automatically adaptable, and interoperable. It is independent of platforms.
32

Sun, Guofei, Yongkang Wong, Mohan S. Kankanhalli, Xiangdong Li, and Weidong Geng. "Enhanced 3D Shape Reconstruction With Knowledge Graph of Category Concept." ACM Transactions on Multimedia Computing, Communications, and Applications 18, no. 3 (August 31, 2022): 1–20. http://dx.doi.org/10.1145/3491224.

Abstract:
Reconstructing three-dimensional (3D) objects from images has attracted increasing attention due to its wide applications in computer vision and robotic tasks. Despite the promising progress of recent deep learning–based approaches, which directly reconstruct the full 3D shape without considering the conceptual knowledge of the object categories, existing models have limited usage and usually create unrealistic shapes. 3D objects have multiple forms of representation, such as 3D volume, conceptual knowledge, and so on. In this work, we show that the conceptual knowledge for a category of objects, which represents objects as prototype volumes and is structured by graph, can enhance the 3D reconstruction pipeline. We propose a novel multimodal framework that explicitly combines graph-based conceptual knowledge with deep neural networks for 3D shape reconstruction from a single RGB image. Our approach represents conceptual knowledge of a specific category as a structure-based knowledge graph. Specifically, conceptual knowledge acts as visual priors and spatial relationships to assist the 3D reconstruction framework to create realistic 3D shapes with enhanced details. Our 3D reconstruction framework takes an image as input. It first predicts the conceptual knowledge of the object in the image, then generates a 3D object based on the input image and the predicted conceptual knowledge. The generated 3D object satisfies the following requirements: (1) it is consistent with the predicted graph in concept, and (2) consistent with the input image in geometry. Extensive experiments on public datasets (i.e., ShapeNet, Pix3D, and Pascal3D+) with 13 object categories show that (1) our method outperforms the state-of-the-art methods, (2) our prototype volume-based conceptual knowledge representation is more effective, and (3) our pipeline-agnostic approach can enhance the reconstruction quality of various 3D shape reconstruction pipelines.
33

Davis, Brian. "Instrumentalizing the book: Anne Carson’s Nox and books as archives." Frontiers of Narrative Studies 7, no. 1 (July 1, 2021): 84–109. http://dx.doi.org/10.1515/fns-2021-0005.

Abstract:
Abstract This article introduces an experimental mode of contemporary writing and bookmaking that I call multimodal book-archives, an emergent mode of contemporary literature that constructs narratives and textual sequences through the collection and representation of reproduced texts and other artifacts. In multimodal book-archives the book-object is presented as a container designed to preserve and transmit textual artifacts. In this article, I examine Anne Carson’s Nox (2010) as a case study in archival poetics, exemplifying the “archival turn” in contemporary literature. My analysis draws attention to how writing, subjectivity, knowledge, history, and memory in the digital age are increasingly configured through distributed networks of people and artifacts in different social and institutional spaces, demonstrating how Nox functions not only as an instrument of psychological rejuvenation, but as an aesthetic instrument for documenting, ordering, listing, and juxtaposing disparate bits of information and memory into cathartic self-knowledge. Carson’s archival poetics is deeply personal, laden with private symbols and metaphors that readers are asked to collocate, cross-reference, and translate as part of the archival reading process. If grief is a kind of chaos, then Carson’s archival poetics instrumentalizes the book as a tool for ordering that chaos into something manageable, useful, even beautiful.
34

Bourbakis, Nikolaos, and Michael Mills. "Converting Natural Language Text Sentences into SPN Representations for Associating Events." International Journal of Semantic Computing 06, no. 03 (September 2012): 353–70. http://dx.doi.org/10.1142/s1793351x12500067.

Abstract:
A better understanding of events often requires the association and efficient representation of multimodal information. A good approach to this important issue is to develop a common platform that converts different modalities (such as images, text, etc.) into the same medium and associates them for efficient processing and understanding. In a previous paper we presented a Local-Global graph model for converting images into attributed graphs and then into natural language (NL) text sentences [25]. In this paper we propose the conversion of NL text sentences into graphs and then into Stochastic Petri net (SPN) descriptions, in order to offer a model for associating “activities or changes” across multimodal information for event representation and understanding. The SPN graph model was selected for its capability to efficiently represent structural and functional knowledge. Simple illustrative examples are provided to prove the proposed concept.
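As a rough, simplified illustration of the text-to-SPN direction described above (a toy sketch: the subject–verb–object triples stand in for real NL parsing, and the structure omits the stochastic timing of a full SPN), verbs can be mapped to transitions and noun phrases to places:

```python
# Toy sketch: turn SVO triples extracted from text into a Petri-net-like
# structure (places = entities/states, transitions = actions/events).
from dataclasses import dataclass, field

@dataclass
class EventNet:
    places: set = field(default_factory=set)
    transitions: set = field(default_factory=set)
    arcs: list = field(default_factory=list)  # (source, target) pairs

    def add_event(self, subject: str, verb: str, obj: str) -> None:
        """Add one event: subject --verb--> object."""
        t = f"t_{verb}"
        self.places.update({subject, obj})
        self.transitions.add(t)
        self.arcs.append((subject, t))  # input place -> transition
        self.arcs.append((t, obj))      # transition -> output place

# Triples that a parser might produce for two related sentences.
triples = [("driver", "starts", "car"), ("car", "enters", "highway")]

net = EventNet()
for s, v, o in triples:
    net.add_event(s, v, o)

print(net.places)       # {'driver', 'car', 'highway'}
print(net.transitions)  # {'t_starts', 't_enters'}
print(net.arcs)
```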
35

Sugiura, Motoaki, Yuko Sassa, Jobu Watanabe, Yuko Akitsuki, Yasuhiro Maeda, Yoshihiko Matsue, and Ryuta Kawashima. "Anatomical Segregation of Representations of Personally Familiar and Famous People in the Temporal and Parietal Cortices." Journal of Cognitive Neuroscience 21, no. 10 (October 2009): 1855–68. http://dx.doi.org/10.1162/jocn.2008.21150.

Abstract:
Person recognition has been assumed to entail many types of person-specific cognitive responses, including retrieval of knowledge, episodic recollection, and emotional responses. To demonstrate the cortical correlates of this modular structure of multimodal person representation, we investigated neural responses preferential to personally familiar people and responses dependent on familiarity with famous people in the temporal and parietal cortices. During functional magnetic resonance imaging (fMRI) measurements, normal subjects recognized personally familiar names (personal) or famous names with high or low degrees of familiarity (high or low, respectively). Effects of familiarity with famous people (i.e., high–low) were identified in the bilateral angular gyri, the left supramarginal gyrus, the middle part of the bilateral posterior cingulate cortices, and the left precuneus. Activation preferentially relevant to personally familiar people (i.e., personal–high) was identified in the bilateral temporo-parietal junctions, the right anterolateral temporal cortices, posterior middle temporal gyrus, posterior cingulate cortex (with a peak in the posterodorsal part), and the left precuneus; these activation foci exhibited varying degrees of activation for high and low names. An equivalent extent of activation was observed for all familiar names in the bilateral temporal poles, the left orbito-insular junction, the middle temporal gyrus, and the anterior part of the posterior cingulate cortex. The results demonstrated that distinct cortical areas supported different types of cognitive responses, induced to different degrees during recognition of famous and personally familiar people, providing neuroscientific evidence for the modularity of multimodal person representation.
36

Kalantzis, Mary, and Bill Cope. "From Gutenberg to the Internet: How Digitisation Transforms Culture and Knowledge." Logos 21, no. 1-2 (2010): 12–39. http://dx.doi.org/10.1163/095796510x546887.

Abstract:
Abstract In this paper, we explore the changes wrought by digitisation upon the domains of culture and knowledge. Half a century into the process of the digitisation of text, we argue that only now are we on the cusp of a series of paradigm shifts in the processes of writing, and concomitantly, our modes of cultural expression and our social processes of knowing. We describe the transition underway in the fundamental mechanics of rendering, the new navigational order which is associated with this transition, the demise of isolated written text that accompanies the rise of multimodality, the ubiquity of recording and documentation, a shift in the balance of representation agency, and its correlate in the emergence of a new dynamics of difference. The shape of these hugely significant changes is just beginning to become clear in the new, internet-mediated social media. The potential of the new textual regime is to transform our very means of production of meaning. However, when we come to examine the domain of formal knowledge production, historically pivoting on the peer reviewed journal and published monograph, there are as yet few signs of change. This paper points in a tentative way to potentials for knowledge-making which are as yet unrealised: new semantic markup processes which will improve knowledge discovery, data mining and machine translation; a new navigational order in which knowledge is not simply presented in a linear textual exegesis; the multimodal representation of knowledge in which knowledge evaluators and validators gain a broader, deeper and less mediated view of the knowledge they are assessing; navigable databanks in which reviewers and readers alike can make what they will of data and interactions recorded incidental to knowledge making; co-construction of knowledge through recursive dialogue between knowledge creators and knowledge users, to the extent of eliding that distinction; and a polylingual, polysemic knowledge world in which source natural language is arbitrary and narrowly specialised discourses and bodies of knowledge can be valued by their intellectual quality instead of the quantitative mass of their readership and citation.
37

Kryssanov, Victor V., Shizuka Kumokawa, Igor Goncharenko, and Hitoshi Ogawa. "Perceiving the Social." International Journal of Software Science and Computational Intelligence 2, no. 1 (January 2010): 24–37. http://dx.doi.org/10.4018/jssci.2010101902.

Abstract:
This article describes a system developed to help people explore local communities by providing navigation services in social spaces created by the community members via communication and knowledge sharing. The proposed system utilizes data of a community’s social network to reconstruct the social space, which is otherwise not physically perceptible but imaginary, experiential, yet learnable. The social space is modeled with an agent network, where each agent stands for a member of the community and has knowledge about expertise and personal characteristics of some other members. An agent can gather information, using its social “connections,” to find community members most suitable to communicate to in a specific situation defined by the system’s user. The system then deploys its multimodal interface, which “maps” the social space onto a representation of the relevant physical space, to locate the potential interlocutors and advise the user on an efficient communication strategy for the given community.
38

Prado, Jan Alyne Barbosa. "Formal and Multimodal Approach to Hard News as Genre, Structure and Metalanguage in Social and Digital Media Contexts. The Example of Twitter." Bakhtiniana: Revista de Estudos do Discurso 17, no. 4 (October 2022): 163–93. http://dx.doi.org/10.1590/2176-4573e57554.

Abstract:
ABSTRACT The goal of this paper is to improve heuristics for hard news discourse by proposing a cognitive model of abstraction for social media contexts. To this end, hard news is discussed as a genre, structure and metalanguage, under the formal definition of a semiotic mode. Annotation is a successful technology for controlling the effects of genre operations, revealing relations, and inquiring about data and documents. The paper then characterizes Twitter's interface in terms of formal and material regularities employed recursively. It further demonstrates the formalization of a semantics for hard news discourse in so-called logical forms, adapted from analytical tools developed within Segmented Discourse Representation Theory (SDRT), in order to examine coherence at different levels of detail. Finally, the implications of this approach as a discipline are discussed with regard to the production of transversal knowledge aimed at digital literacy, which is urgently needed in present times.
39

Yuan, Hui, Yuanyuan Tang, Wei Xu, and Raymond Yiu Keung Lau. "Exploring the influence of multimodal social media data on stock performance: an empirical perspective and analysis." Internet Research 31, no. 3 (January 12, 2021): 871–91. http://dx.doi.org/10.1108/intr-11-2019-0461.

Abstract:
Purpose: Despite the extensive academic interest in social media sentiment for financial fields, multimodal data in the stock market has been neglected. The purpose of this paper is to explore the influence of multimodal social media data on stock performance, and investigate the underlying mechanism of two forms of social media data, i.e. text and pictures. Design/methodology/approach: This research employs panel vector autoregressive models to quantify the effect of the sentiment derived from two modalities in social media, i.e. text information and picture information. Through the models, the authors examine the short-term and long-term associations between social media sentiment and stock performance, measured by three metrics. Specifically, the authors design an enhanced sentiment analysis method, integrating random walk and word embeddings through Global Vectors for Word Representation (GloVe), to construct a domain-specific lexicon and apply it to textual sentiment analysis. Secondly, the authors exploit a deep learning framework based on convolutional neural networks to analyze the sentiment in picture data. Findings: The empirical results derived from vector autoregressive models reveal that both measures of the sentiment extracted from textual information and pictorial information in social media are significant leading indicators of stock performance. Moreover, pictorial information and textual information have similar relationships with stock performance. Originality/value: To the best of the authors' knowledge, this is the first study that incorporates multimodal social media data for sentiment analysis, which is valuable for understanding the pictorial component of social media data. The study offers significant implications for researchers and practitioners: it draws researchers' attention to multimodal social media data, and its findings provide managerial recommendations, e.g. watching not only words but also pictures in social media.
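To illustrate the lexicon-construction step described under Design/methodology/approach, here is a minimal sketch of expanding seed sentiment words through similarity in a pre-trained GloVe space (a simplification of the paper's random-walk procedure; the file path, seed words, and threshold are assumptions):

```python
# Minimal sketch: expand seed sentiment words via cosine similarity in a
# pre-trained GloVe space, then score documents with the resulting lexicon.
import numpy as np

def load_glove(path: str) -> dict:
    """Load GloVe vectors from the standard 'word v1 v2 ...' text format."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def expand_lexicon(seeds: dict, vectors: dict, threshold: float = 0.55) -> dict:
    """Give every vocabulary word the polarity of a sufficiently similar seed."""
    lexicon = dict(seeds)
    for word, vec in vectors.items():
        for seed, polarity in seeds.items():
            if seed in vectors and cosine(vec, vectors[seed]) >= threshold:
                lexicon.setdefault(word, polarity)
    return lexicon

def score_document(text: str, lexicon: dict) -> float:
    tokens = text.lower().split()
    return sum(lexicon.get(t, 0.0) for t in tokens) / max(len(tokens), 1)

# Hypothetical usage (the embedding file and seed polarities are illustrative):
# vectors = load_glove("glove.6B.100d.txt")
# lexicon = expand_lexicon({"profit": 1.0, "surge": 1.0, "loss": -1.0}, vectors)
# print(score_document("quarterly profit beats forecast shares surge", lexicon))
```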
40

Kotis, Konstantinos, Sotiris Angelis, Efthymia Moraitou, Vasilis Kopsachilis, Ermioni-Eirini Papadopoulou, Nikolaos Soulakellis, and Michail Vaitis. "A KG-Based Integrated UAV Approach for Engineering Semantic Trajectories in the Cultural Heritage Documentation Domain." Remote Sensing 15, no. 3 (January 31, 2023): 821. http://dx.doi.org/10.3390/rs15030821.

Abstract:
Data recordings of the movement of vehicles can be enriched with heterogeneous and multimodal data beyond latitude, longitude, and timestamp and enhanced with complementary segmentations, constituting a semantic trajectory. Semantic Web (SW) technologies have been extensively used for the semantic integration of heterogeneous and multimodal movement-related data, and for the effective modeling of semantic trajectories, in several domains. In this paper, we present an integrated solution for the engineering of cultural heritage semantic trajectories generated from unmanned aerial vehicles (UAVs) and represented as knowledge graphs (KGs). Particularly, this work is motivated by, and evaluated based on, the application domain of UAV missions for documenting regions/points of cultural heritage interest. In this context, this research work extends our previous work on UAV semantic trajectories, contributing (a) an updated methodology for the engineering of semantic trajectories as KGs (STaKG), (b) an implemented toolset for the management of KG-based semantic trajectories, (c) a refined ontology for the representation of knowledge related to UAV semantic trajectories and to cultural heritage documentation, and (d) the application and evaluation of the proposed methodology, the developed toolset, and the ontology within the domain of UAV-based cultural heritage documentation. The evaluation of the integrated UAV solution was achieved by exploiting real datasets collected during three UAV missions to document sites of cultural interest in Lesvos, Greece, i.e., the UNESCO-protected Lesvos Petrified Forest/Geopark, the village of Vrissa, and University Hill.
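As a small illustration of how a single enriched trajectory point could be expressed as a knowledge graph, the sketch below uses rdflib with an example namespace; the class and property names are placeholders, not the actual terms of the STaKG ontology:

```python
# Minimal sketch: represent one enriched UAV trajectory point as RDF triples.
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

EX = Namespace("http://example.org/uav-trajectory#")  # placeholder namespace

g = Graph()
g.bind("ex", EX)

mission = EX["mission_petrified_forest_01"]
point = EX["point_0007"]

g.add((mission, RDF.type, EX.UAVMission))
g.add((point, RDF.type, EX.TrajectoryPoint))
g.add((point, EX.partOfMission, mission))
g.add((point, EX.latitude, Literal("39.2123", datatype=XSD.decimal)))
g.add((point, EX.longitude, Literal("25.9041", datatype=XSD.decimal)))
g.add((point, EX.timestamp, Literal("2022-07-14T09:31:05", datatype=XSD.dateTime)))
# Semantic enrichment beyond lat/lon/time: the documented point of interest.
g.add((point, EX.documents, EX["Lesvos_Petrified_Forest"]))

print(g.serialize(format="turtle"))
```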
41

Arend, Johannes M., Melissa Ramírez, Heinrich R. Liesefeld, and Christoph Pörschmann. "Do near-field cues enhance the plausibility of non-individual binaural rendering in a dynamic multimodal virtual acoustic scene?" Acta Acustica 5 (2021): 55. http://dx.doi.org/10.1051/aacus/2021048.

Abstract:
It is commonly believed that near-field head-related transfer functions (HRTFs) provide perceptual benefits over far-field HRTFs that enhance the plausibility of binaural rendering of nearby sound sources. However, to the best of our knowledge, no study has systematically investigated whether using near-field HRTFs actually provides a perceptually more plausible virtual acoustic environment. To assess this question, we conducted two experiments in a six-degrees-of-freedom multimodal augmented reality experience where participants had to compare non-individual anechoic binaural renderings based on either synthesized near-field HRTFs or intensity-scaled far-field HRTFs and judge which of the two rendering methods led to a more plausible representation. Participants controlled the virtual sound source position by moving a small handheld loudspeaker along a prescribed trajectory laterally and frontally near the head, which provided visual and proprioceptive cues in addition to the auditory cues. The results of both experiments show no evidence that near-field cues enhance the plausibility of non-individual binaural rendering of nearby anechoic sound sources in a dynamic multimodal virtual acoustic scene as examined in this study. These findings suggest that, at least in terms of plausibility, the additional effort of including near-field cues in binaural rendering may not always be worthwhile for virtual or augmented reality applications.
42

Pazzaglia, Mariella, and Marta Zantedeschi. "Plasticity and Awareness of Bodily Distortion." Neural Plasticity 2016 (2016): 1–7. http://dx.doi.org/10.1155/2016/9834340.

Abstract:
Knowledge of the body is filtered by perceptual information, recalibrated through predominantly innate stored information, and neurally mediated by direct sensory motor information. Despite multiple sources, the immediate prediction, construction, and evaluation of one’s body are distorted. The origins of such distortions are unclear. In this review, we consider three possible sources of awareness that inform body distortion. First, the precision in the body metric may be based on the sight and positioning sense of a particular body segment. This view provides information on the dual nature of body representation, the reliability of a conscious body image, and implicit alterations in the metrics and positional correspondence of body parts. Second, body awareness may reflect an innate organizational experience of unity and continuity in the brain, with no strong isomorphism to body morphology. Third, body awareness may be based on efferent/afferent neural signals, suggesting that major body distortions may result from changes in neural sensorimotor experiences. All these views can be supported empirically, suggesting that body awareness is synthesized from multimodal integration and the temporal constancy of multiple body representations. For each of these views, we briefly discuss abnormalities and therapeutic strategies for correcting the bodily distortions in various clinical disorders.
43

Sunkara, Adhira, Gregory C. DeAngelis, and Dora E. Angelaki. "Joint representation of translational and rotational components of optic flow in parietal cortex." Proceedings of the National Academy of Sciences 113, no. 18 (April 19, 2016): 5077–82. http://dx.doi.org/10.1073/pnas.1604818113.

Abstract:
Terrestrial navigation naturally involves translations within the horizontal plane and eye rotations about a vertical (yaw) axis to track and fixate targets of interest. Neurons in the macaque ventral intraparietal (VIP) area are known to represent heading (the direction of self-translation) from optic flow in a manner that is tolerant to rotational visual cues generated during pursuit eye movements. Previous studies have also reported that eye rotations modulate the response gain of heading tuning curves in VIP neurons. We tested the hypothesis that VIP neurons simultaneously represent both heading and horizontal (yaw) eye rotation velocity by measuring heading tuning curves for a range of rotational velocities of either real or simulated eye movements. Three findings support the hypothesis of a joint representation. First, we show that rotation velocity selectivity based on gain modulations of visual heading tuning is similar to that measured during pure rotations. Second, gain modulations of heading tuning are similar for self-generated eye rotations and visually simulated rotations, indicating that the representation of rotation velocity in VIP is multimodal, driven by both visual and extraretinal signals. Third, we show that roughly one-half of VIP neurons jointly represent heading and rotation velocity in a multiplicatively separable manner. These results provide the first evidence, to our knowledge, for a joint representation of translation direction and rotation velocity in parietal cortex and show that rotation velocity can be represented based on visual cues, even in the absence of efference copy signals.
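The phrase "multiplicatively separable" above can be unpacked with a short formula; in our illustrative notation (not the authors'), for heading h and yaw rotation velocity ω:

```latex
% Multiplicative separability of the joint tuning function (illustrative notation):
\[
  R(h, \omega) \approx f(h)\, g(\omega)
\]
% i.e., rotation velocity acts as a gain factor g(\omega) that rescales the
% heading tuning curve f(h) without changing its shape.
```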
44

Hnini, Ghizlane, Jamal Riffi, Mohamed Adnane Mahraz, Ali Yahyaouy, and Hamid Tairi. "MMPC-RF: A Deep Multimodal Feature-Level Fusion Architecture for Hybrid Spam E-mail Detection." Applied Sciences 11, no. 24 (December 16, 2021): 11968. http://dx.doi.org/10.3390/app112411968.

Abstract:
Hybrid spam is an undesirable e-mail (electronic mail) that contains both image and text parts. It is more harmful and complex as compared to image-based and text-based spam e-mail. Thus, an efficient and intelligent approach is required to distinguish between spam and ham. To our knowledge, a small number of studies have been aimed at detecting hybrid spam e-mails. Most of these multimodal architectures adopted the decision-level fusion method, whereby the classification scores of each modality were concatenated and fed to another classification model to make a final decision. Unfortunately, this method not only demands many learning steps, but it also loses correlation in mixed feature space. In this paper, we propose a deep multimodal feature-level fusion architecture that concatenates two embedding vectors to have a strong representation of e-mails and increase the performance of the classification. The paragraph vector distributed bag of words (PV-DBOW) and the convolutional neural network (CNN) were used as feature extraction techniques for text and image parts, respectively, of the same e-mail. The extracted feature vectors were concatenated and fed to the random forest (RF) model to classify a hybrid e-mail as either spam or ham. The experiments were conducted on three hybrid datasets made using three publicly available corpora: Enron, Dredze, and TREC 2007. According to the obtained results, the proposed model provides a higher accuracy of 99.16% compared to recent state-of-the-art methods.
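The feature-level fusion described above can be sketched in a few lines: PV-DBOW document vectors for the text part, an image feature vector for the picture part, concatenation, and a random forest. This is a minimal illustration, not the authors' code; the toy data, the stand-in image features, and the hyperparameters are assumptions:

```python
# Minimal sketch of feature-level fusion for hybrid spam detection.
import numpy as np
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.ensemble import RandomForestClassifier

texts = ["cheap meds click now", "meeting agenda attached", "win a free prize"]
labels = np.array([1, 0, 1])  # 1 = spam, 0 = ham

# --- Text modality: PV-DBOW (dm=0 selects the distributed bag-of-words mode).
docs = [TaggedDocument(t.split(), [i]) for i, t in enumerate(texts)]
pv_dbow = Doc2Vec(docs, vector_size=50, dm=0, min_count=1, epochs=40)
text_feats = np.vstack([pv_dbow.dv[i] for i in range(len(texts))])

# --- Image modality: stand-in for CNN features of each e-mail's image part
#     (in practice these would come from a trained convolutional network).
rng = np.random.default_rng(0)
image_feats = rng.normal(size=(len(texts), 128)).astype(np.float32)

# --- Feature-level fusion: concatenate the two embedding vectors per e-mail,
#     then classify with a random forest.
fused = np.concatenate([text_feats, image_feats], axis=1)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(fused, labels)
print(clf.predict(fused[:1]))
```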
45

Kendrick, Maureen, Elizabeth Namazzi, Ava Becker-Zayas, and Esther Nancy Tibwamulala. "Closing the HIV and AIDS “Information Gap” Between Children and Parents: An Exploration of Makerspaces in a Ugandan Primary School." Education Sciences 10, no. 8 (July 23, 2020): 193. http://dx.doi.org/10.3390/educsci10080193.

Abstract:
In this study, we address the research question: “How might child-created billboards about HIV and AIDS help facilitate more open discussions between parents and children?" The premise of our study is that there may be considerable potential for using multimodal forms of representation in makerspaces with young children to create more open dialogue with parents about culturally sensitive information. Drawing on multimodal literacies and visual methodologies, we designed a makerspace in a grade 5 classroom (with students aged 9–10) in a Ugandan residential primary school. Our makerspace included soliciting students’ knowledge about HIV and AIDS as part of a class discussion focused on billboards in the local community and providing art materials for students to explore their understandings of HIV and AIDS through the creation of billboards as public service announcements. Parents were engaged in the work as audience members during a public exhibition at the school. Data sources include the billboards as artifacts, observations within the makerspace, and interviews with parents and children following the public exhibition. The findings show that, for parents and children, the billboards enhanced communication; new understandings about HIV and AIDS were gained; and real-life concerns about HIV and AIDS were made more visible. Although these more open conversations may depend to some degree on family relationships more broadly, we see great potential for makerspaces to serve as a starting point for closing the HIV and AIDS information gap between children and parents.
46

Miao, Haotian, Yifei Zhang, Daling Wang, and Shi Feng. "Multi-Output Learning Based on Multimodal GCN and Co-Attention for Image Aesthetics and Emotion Analysis." Mathematics 9, no. 12 (June 20, 2021): 1437. http://dx.doi.org/10.3390/math9121437.

Abstract:
With the development of social networks and intelligent terminals, sharing and acquiring images has become increasingly convenient. The massive growth in the number of social images raises the demand for automatic image processing, especially from the aesthetic and emotional perspectives. Both aesthetics assessment and emotion recognition require the computer to simulate high-level visual perception and understanding, which belongs to the field of image processing and pattern recognition. However, existing methods often ignore the prior knowledge of images and the intrinsic relationships between the aesthetic and emotional perspectives. Recently, machine learning and deep learning have become powerful methods for solving mathematical problems in computing, such as image processing and pattern recognition. Both images and abstract concepts can be converted into numerical matrices, and the mapping relations between them can then be established computationally. In this work, we propose an end-to-end multi-output deep learning model based on a multimodal Graph Convolutional Network (GCN) and co-attention for joint aesthetic and emotion analysis. In our model, a stacked multimodal GCN is proposed to encode features under the guidance of the correlation matrix, and a co-attention module is designed to help the aesthetic and emotion feature representations learn from each other interactively. Experimental results indicate that the proposed model achieves competitive performance on the IAE dataset. Progressive results on the AVA and ArtPhoto datasets also demonstrate the generalization ability of the model.
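To make the two building blocks named above more concrete, here is a toy PyTorch sketch of a single graph-convolution step guided by a correlation matrix and a simple bidirectional co-attention exchange; the dimensions, normalisation, and number of layers are illustrative assumptions, not the paper's architecture:

```python
# Toy sketch: one correlation-guided GCN step plus a co-attention exchange
# between two feature branches (e.g., aesthetics and emotion).
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # Row-normalise the correlation matrix, then propagate and transform.
        adj_norm = adj / adj.sum(dim=-1, keepdim=True).clamp(min=1e-6)
        return F.relu(self.linear(adj_norm @ x))

class CoAttention(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, a: torch.Tensor, e: torch.Tensor):
        # Cross-attention in both directions so each branch attends to the other.
        scores = self.proj(a) @ e.transpose(-2, -1) / a.size(-1) ** 0.5
        a_att = F.softmax(scores, dim=-1) @ e                    # a attends to e
        e_att = F.softmax(scores.transpose(-2, -1), dim=-1) @ a  # e attends to a
        return a + a_att, e + e_att

# Hypothetical usage: 8 graph nodes with 64-d features shared by both branches.
x, adj = torch.randn(8, 64), torch.rand(8, 8)
h = GCNLayer(64, 64)(x, adj)
aes, emo = CoAttention(64)(h, h)
print(aes.shape, emo.shape)  # torch.Size([8, 64]) torch.Size([8, 64])
```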
47

De Paepe, Annick, Valéry Legrain, and Geert Crombez. "Visual stimuli within peripersonal space prioritize pain." Seeing and Perceiving 25 (2012): 88. http://dx.doi.org/10.1163/187847612x647072.

Abstract:
Localizing pain not only requires a simple somatotopic representation of the body, but also knowledge about the limb position (i.e., proprioception), and a visual localization of the pain source in external space. Therefore, nociceptive events are remapped into a multimodal representation of the body and the space nearby (i.e., a peripersonal schema of the body). We investigated the influence of visual cues presented either in peripersonal, or in extrapersonal space on the localization of nociceptive stimuli in a temporal order judgement (TOJ) task. 24 psychology students made TOJs concerning which of two nociceptive stimuli (one applied to each hand) had been presented first (or last). A spatially non-predictive visual cue (i.e., lighting of a LED) preceded (80 ms) the nociceptive stimuli. This cue was presented randomly either on the hand of the participant (in peripersonal space), or 70 cm in front of the hand (in extrapersonal space), and either on the left or on the right side of space. Biases in spatial attention are reflected by the point of subjective simultaneity (PSS). The results revealed that TOJs were more biased towards the visual cue in peripersonal space in comparison with the visual cue in extrapersonal space. This study provides evidence for the crossmodal integration of visual and nociceptive stimuli in a peripersonal schema of the body. Future research with this paradigm will explore crossmodal attention deficits in chronic pain populations.
48

Bălan, Oana, Alin Moldoveanu, Florica Moldoveanu, Hunor Nagy, György Wersényi, and Rúnar Unnórsson. "Improving the Audio Game–Playing Performances of People with Visual Impairments through Multimodal Training." Journal of Visual Impairment & Blindness 111, no. 2 (March 2017): 148–64. http://dx.doi.org/10.1177/0145482x1711100206.

Abstract:
Introduction: As the number of people with visual impairments (that is, those who are blind or have low vision) is continuously increasing, rehabilitation and engineering researchers have identified the need to design sensory-substitution devices that would offer assistance and guidance to these people for performing navigational tasks. Auditory and haptic cues have been shown to be an effective approach towards creating a rich spatial representation of the environment, so they are considered for inclusion in the development of assistive tools that would enable people with visual impairments to acquire knowledge of the surrounding space in a way close to the visually based perception of sighted individuals. However, achieving efficiency through a sensory substitution device requires extensive training for visually impaired users to learn how to process the artificial auditory cues and convert them into spatial information. Methods: Considering all the potential advantages game-based learning can provide, we propose a new method for training sound localization and virtual navigational skills of visually impaired people in a 3D audio game with hierarchical levels of difficulty. The training procedure is focused on a multimodal (auditory and haptic) learning approach in which the subjects have been asked to listen to 3D sounds while simultaneously perceiving a series of vibrations on a haptic headband that corresponds to the direction of the sound source in space. Results: The results we obtained in a sound-localization experiment with 10 visually impaired people showed that the proposed training strategy resulted in significant improvements in auditory performance and navigation skills of the subjects, thus ensuring behavioral gains in the spatial perception of the environment.
49

Vasiliu, Laurentiu, Keith Cortis, Ross McDermott, Aphra Kerr, Arne Peters, Marc Hesse, Jens Hagemeyer, et al. "CASIE – Computing affect and social intelligence for healthcare in an ethical and trustworthy manner." Paladyn, Journal of Behavioral Robotics 12, no. 1 (January 1, 2021): 437–53. http://dx.doi.org/10.1515/pjbr-2021-0026.

Abstract:
Abstract This article explores the rapidly advancing effort to endow robots with social intelligence in the form of multilingual and multimodal emotion recognition and emotion-aware decision-making, enabling contextually appropriate robot behaviours and cooperative social human–robot interaction in the healthcare domain. The objective is to create trustworthy and versatile social robots capable of human-friendly, assistive interactions: robots that better serve users' needs by sensing, adapting, and responding appropriately to their requirements while taking into consideration their wider affective and motivational states and behaviour. We propose an innovative approach to the difficult research challenge of endowing robots with social intelligence for human-assistive interactions, going beyond the conventional robotic sense-think-act loop. The proposed architecture addresses a wide range of social cooperation skills and features required for real human–robot social interaction, including language and vision analysis, dynamic emotional analysis (long-term affect and mood), semantic mapping to improve the robot's knowledge of the local context, situational knowledge representation, and emotion-aware decision-making. Fundamental to this architecture is a normative ethical and social framework adapted to the specific challenges of robots engaging with caregivers and care-receivers.
50

Kearney, Matthew, Marta Bornstein, Marieme Fall, Roch Nianogo, Deborah Glik, and Philip Massey. "Cross-sectional study of COVID-19 knowledge, beliefs and prevention behaviours among adults in Senegal." BMJ Open 12, no. 5 (May 2022): e057914. http://dx.doi.org/10.1136/bmjopen-2021-057914.

Abstract:
Objectives: The aim of the study was to explore COVID-19 beliefs and prevention behaviours in a francophone West African nation, Senegal. Design: This was a cross-sectional analysis of survey data collected via a multimodal observational study. Participants: Senegalese adults aged 18 years or older (n=1452). Primary and secondary outcome measures: Primary outcome measures were COVID-19 prevention behaviours. Secondary outcome measures included COVID-19 knowledge and beliefs. Univariate, bivariate and multivariate statistics were generated to describe the sample and explore potential correlations. Setting: Participants from Senegal were recruited online and telephonically between June and August 2020. Results: Mask wearing, hand washing and use of hand sanitiser were most frequently reported. Social distancing and staying at home were also reported although to a lower degree. Knowledge and perceived risk of COVID-19 were very high in general, but risk was a stronger and more influential predictor of COVID-19 prevention behaviours. Men, compared with women, had lower odds (adjusted OR (aOR)=0.59, 95% CI 0.46 to 0.75, p<0.001) of reporting prevention behaviours. Rural residents (vs urban; aOR=1.49, 95% CI 1.12 to 1.98, p=0.001) and participants with at least a high school education (vs less than high school education; aOR=1.33, 95% CI 1.01 to 1.76, p=0.006) were more likely to report COVID-19 prevention behaviours. Conclusions: In Senegal, we observed high compliance with recommended COVID-19 prevention behaviours among our sample of respondents, in particular for masking and personal hygiene practice. We also identified a range of psychosocial and demographic predictors for COVID-19 prevention behaviours such as knowledge and perceived risk. Stakeholders and decision makers in Senegal and across Africa can use place-based evidence like ours to address COVID-19 risk factors and intervene effectively with policies and programming. Use of both phone and online surveys enhances representation and study generalisability and should be considered in future research with hard-to-reach populations.
Стилі APA, Harvard, Vancouver, ISO та ін.
