A selection of scholarly literature on the topic "Multimodal Concepts"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles

Choose a source type:

Browse lists of current articles, books, dissertations, theses, and other scholarly sources on the topic "Multimodal Concepts".

Next to each work in the list there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, and so on.

You can also download the full text of a publication as a .pdf file and read its abstract online, provided the relevant data are available in the metadata.

Journal articles on the topic "Multimodal Concepts"

1

Van der Linden, P. "Multimodal Blood Sparing Concepts." AINS · Anästhesiologie · Intensivmedizin · Notfallmedizin · Schmerztherapie 36, Suppl 2 (November 2001): 101. http://dx.doi.org/10.1055/s-2001-18197.

2

Mangin, Olivier. "Multimodal concepts for social robots." AI Matters 3, no. 1 (May 25, 2017): 19–20. http://dx.doi.org/10.1145/3054837.3054844.

3

Mangin, Olivier. "Multimodal concepts for social robots." AI Matters 3, no. 1 (March 20, 2017): 19–20. http://dx.doi.org/10.1145/3067682.3067688.

4

Gulen, Elvan, Turgay Yilmaz, and Adnan Yazici. "Multimodal Information Fusion for Semantic Video Analysis." International Journal of Multimedia Data Engineering and Management 3, no. 4 (October 2012): 52–74. http://dx.doi.org/10.4018/jmdem.2012100103.

Abstract:
Multimedia data by its very nature contains multimodal information. For a successful analysis of multimedia content, all available multimodal information should be utilized. Additionally, since concepts can carry valuable cues about other concepts, concept interaction is a crucial source of multimedia information and helps to increase fusion performance. The aim of this study is to show that integrating the existing modalities along with concept interactions can yield better performance in detecting semantic concepts. The authors therefore present a multimodal fusion approach that integrates semantic information obtained from various modalities along with additional semantic cues. Experiments conducted on the TRECVID 2007 and CCV datasets validate the superiority of such a combination over the best single modality and over alternative modality combinations. The results show that the proposed fusion approach provides a 16.7% relative performance gain on the TRECVID dataset and a 47.7% relative improvement on the CCV database over the best unimodal approaches.
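The modality-combination step this abstract alludes to can be illustrated with a minimal weighted late-fusion sketch. This is hypothetical code, not the authors' implementation; the modality names, scores, and weights are made up for illustration.

```python
import numpy as np

def late_fusion(scores_by_modality, weights):
    """Combine per-modality concept scores with a normalized weighted sum.

    scores_by_modality: dict mapping modality name -> per-concept scores in [0, 1].
    weights: dict mapping modality name -> non-negative weight.
    """
    names = list(scores_by_modality)
    total = sum(weights[m] for m in names)
    fused = sum(weights[m] * np.asarray(scores_by_modality[m], dtype=float)
                for m in names)
    return fused / total

# Example: visual and audio confidence scores for three concepts.
fused = late_fusion(
    {"visual": [0.9, 0.2, 0.4], "audio": [0.7, 0.6, 0.1]},
    {"visual": 0.7, "audio": 0.3},
)
```

Concept-interaction cues, as studied in the paper, would enter as additional score vectors alongside the raw modalities.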
5

Wust, P., B. Rau, M. Gremmler, P. Schlag, A. Jordan, J. Löffel, H. Riess, and R. Felix. "Radio-Thermotherapy in Multimodal Surgical Treatment Concepts." Oncology Research and Treatment 18, no. 2 (1995): 110–21. http://dx.doi.org/10.1159/000218570.

6

Ledezma, Carlos J., and Max Wintermark. "Multimodal CT in Stroke Imaging: New Concepts." Radiologic Clinics of North America 47, no. 1 (January 2009): 109–16. http://dx.doi.org/10.1016/j.rcl.2008.10.008.

7

Vidosavljević, Milena. "Multimodal and digital literacy as new concepts in education." Zbornik radova Filozofskog fakulteta u Pristini 52, no. 4 (2022): 129–44. http://dx.doi.org/10.5937/zrffp52-37477.

Abstract:
The digital age requires each individual to develop new skills and many kinds of literacy in order to function in modern society. New concepts are therefore emerging in education that describe which skills are necessary to perform certain activities adequately in the online environment. One of these modern concepts is multimodality, which is inevitably accompanied by modes, multimodal literacy, and digital as well as broader literacy. Multimodality is a communication phenomenon that relies on the use of modes, i.e., semiotic resources, whose role is to create meaning. These modes include images, charts, sounds, colours, audio, videos, maps, and so on. Their function is to help digital texts convey a specific message in an online environment. The key role here is played by multimodal literacy, the ability to work successfully with texts that combine different modes, and by digital literacy, which includes the ability to search, locate, select, and analyse digital information. Since the two literacies are interconnected and complementary, the aim of this paper is to draw attention to these concepts and to present their importance for the realization of multimodality in education.
8

Lou, Adrian. "Multimodal simile." English Text Construction 10, no. 1 (June 15, 2017): 106–31. http://dx.doi.org/10.1075/etc.10.1.06lou.

Abstract:
This paper analyzes the “when” meme, a popular internet meme, which prototypically juxtaposes a when clause with an ostensibly unrelated image. Despite the initial incongruity, I contend this image prompts selective mapping between verbal and visual elements to produce a multimodal simile. First, I attempt to define and more clearly distinguish simile from metaphor. Second, I show how this multimodal simile exhibits unique viewpoint mapping by prompting audiences to subsume viewpoints that are both unfamiliar and bizarre. Third, I connect the like construction in simile with the like reported speech marker to show how both concepts are intimately related. Ultimately, the paper seeks to contribute to studies of simile by bolstering its ties with multimodality, blending, metonymy, viewpoint, and embodiment.
9

Shcherbin, Vyacheslav K. "Methods of modelling of socio-economic phenomena – method of constructing multiple spirals and multimodal analysis." Journal of the Belarusian State University. Sociology, no. 4 (December 16, 2021): 15–25. http://dx.doi.org/10.33581/2521-6821-2021-4-15-25.

Abstract:
The article considers methods of semiotic modelling (the method of constructing multiple spirals, multimodal analysis, etc.) and establishes the relevance of these methods for the study of socio-economic phenomena. Features of their use are described with the help of two groups of key notions: a) concept-variables such as code (genetic, iconic, information, cultural, memetic, social, civilisation, language), gene (biological, cultural, social, philosophic, economic), and spiral (Archimedes, double, multiple, plane, spatial, triple, etc.); b) concept-variables such as modalisation, modality, mode, and multimodality. Differences between the methods under consideration are due not only to the different sets of concepts used to describe them, but also to the main objects studied with them. The main objects of modelling with the method of constructing multiple spirals are most often various types of complex signs such as codes and genes, whereas the objects of multimodal analysis are, as a rule, macrolevel semiotic units (videos, comic books, creolised and poly-code texts, posters, and other polymodal texts). The article substantiates the conclusion that the method of constructing multiple spirals and multimodal analysis, together with the previously considered semiotic and chain analyses, form a single methodological system of social semiotics.
10

Daulay, Nahdyah Sari, Siti Isma Sari Lubis, and Widya Wulandari. "Multimodal Metaphor in Advertisement." AICLL: Annual International Conference on Language and Literature 1, no. 1 (April 17, 2018): 170–75. http://dx.doi.org/10.30743/aicll.v1i1.24.

Abstract:
Metaphor, in the cognitive linguistic view, can be defined as a tool that allows us to understand one conceptual domain in terms of another: we typically use a more concrete, physical source domain to comprehend the target domain. This means that human cognition is organized in conceptual schemas. Rodriguez (2015) stated that multimodal metaphor requires a mental comprehension process that differs from processing visual or verbal concepts alone. Metaphor has been used in much advertising, and a metaphor can be interpreted differently by different viewers. This paper presents an analysis of visual metaphors and illustrates the existence of possible multimodal metaphors in advertising. The study focuses on the analysis of multimodal metaphors found in selected advertisements. To analyze the multimodal metaphors in commercial advertising, a corpus of static adverts from TV was selected. All of the pictures presented contain a verbal part.

Dissertations on the topic "Multimodal Concepts"

1

Schmüdderich, Jens M. "Multimodal Learning of Grounded Concepts in Embodied Systems." Aachen: Shaker, 2010. http://d-nb.info/1120864771/34.

2

Nguyen, Nhu Van. "Représentations visuelles de concepts textuels pour la recherche et l'annotation interactives d'images." Phd thesis, Université de La Rochelle, 2011. http://tel.archives-ouvertes.fr/tel-00730707.

Abstract:
In image retrieval today, we often handle large volumes of images, which may vary or even arrive continuously. An image collection thus contains both older images, already indexed and possibly annotated, and newer ones awaiting indexing or annotation. Since the collection is not uniformly annotated, access through textual queries is difficult. In this work we present different techniques for interacting with, browsing, and searching such image collections. First, a short-term interaction model is used to improve the precision of the system. Second, based on a long-term interaction model, we propose to associate textual words with visual features for image retrieval by text, by visual content, or by mixed text/visual queries. This retrieval model makes it possible to iteratively refine the annotation and the knowledge of the images. We identify four contributions in this work. The first is a multimodal image retrieval system that integrates different data sources, such as image content and text, and supports query by image, query by keyword, and hybrid queries. The second is a new relevance-feedback technique combining two classical techniques widely used in information retrieval: query-point movement and query expansion. By exploiting non-relevant images along with the advantages of these two classical techniques, our method gives very good results for effective interactive image retrieval. The third is a model called "Bags of KVR" (Keyword Visual Representation), which creates links between semantic concepts and visual representations, building on the Bag-of-Words model.
Thanks to an incremental learning strategy, this model provides the association between semantic concepts and visual features, which helps improve image annotation precision and retrieval performance. The fourth contribution is a mechanism for incrementally building knowledge from scratch. We do not separate the annotation and retrieval phases, so the user can issue queries as soon as the system starts, while the system learns progressively as it is used. These contributions are complemented by an interface for visualization and mixed textual/visual querying. Even though only two types of information are used for now, text and visual content, the generality of the proposed model allows its extension to other types of information external to the image, such as location (GPS) and time.
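One of the two classical relevance-feedback techniques this abstract mentions, query-point movement, follows Rocchio's formula: move the query vector toward relevant examples and away from non-relevant ones. A minimal sketch follows; the parameter values are common textbook defaults, not those used in the thesis.

```python
import numpy as np

def rocchio(query, relevant, non_relevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Query-point movement: shift the query toward the centroid of
    relevant feature vectors and away from the non-relevant centroid."""
    query = np.asarray(query, dtype=float)
    rel = np.mean(relevant, axis=0) if len(relevant) else 0.0
    non = np.mean(non_relevant, axis=0) if len(non_relevant) else 0.0
    return alpha * query + beta * rel - gamma * non

# Example in a toy 2-dim feature space: one relevant, one non-relevant image.
q1 = rocchio([1.0, 0.0], relevant=[[0.8, 0.6]], non_relevant=[[0.0, 1.0]])
```

Query expansion, the second technique, would additionally enlarge the query with terms (or visual words) drawn from the relevant results.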
3

Feuerstein, Marco. Augmented reality in laparoscopic surgery: new concepts and methods for intraoperative multimodal imaging and hybrid tracking in computer aided surgery. Saarbrücken: VDM Verlag Dr. Müller, 2007. http://d-nb.info/991301250/04.

4

Mangin, Olivier. "Emergence de concepts multimodaux : de la perception de mouvements primitifs à l'ancrage de mots acoustiques." Thesis, Bordeaux, 2014. http://www.theses.fr/2014BORD0002/document.

Abstract:
This thesis focuses on learning recurring patterns in multimodal perception. For that purpose it develops cognitive systems that model the mechanisms providing such capabilities to infants, a methodology that fits into the field of developmental robotics. More precisely, the thesis revolves around two main topics: on the one hand, the ability of infants or robots to imitate and understand human behaviors, and on the other, the acquisition of language. At the crossing of these topics, we study how a developing cognitive agent can discover a dictionary of primitive patterns from its multimodal perceptual flow. We specify this problem and relate it to Quine's indeterminacy of translation and to blind source separation as studied in acoustics. We sequentially study four sub-problems and provide an experimental formulation of each of them. We then describe and test computational models of agents solving these problems, based in particular on bag-of-words techniques, matrix factorization algorithms, and inverse reinforcement learning approaches. We first go in depth into the three separate problems of learning primitive sounds such as phonemes or words, learning primitive dance motions, and learning the primitive objectives that compose complex tasks. Finally we study the problem of learning multimodal primitive patterns, which corresponds to solving several of the aforementioned problems simultaneously, and we explain how this provides a model of the grounding of acoustic words.
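The matrix-factorization idea mentioned in this abstract can be illustrated with a toy nonnegative matrix factorization (NMF): per-modality feature histograms are stacked into one vector per observation, and factorizing the resulting data matrix recovers latent multimodal "primitives". This is a generic sketch under made-up data, not the thesis's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 20 observations, each a 6-bin "sound" histogram and a 4-bin
# "motion" histogram stacked into one 10-dim multimodal vector. Each
# observation is a nonnegative mix of 3 latent multimodal primitives.
true_primitives = rng.random((10, 3))   # columns = primitive patterns
mixing = rng.random((3, 20))            # per-sample mixing coefficients
V = true_primitives @ mixing            # observed data matrix (10 x 20)

def nmf(V, k, iters=1000):
    """Plain multiplicative-update NMF: find W, H >= 0 with V ~ W @ H."""
    m, n = V.shape
    W = rng.random((m, k)) + 1e-3
    H = rng.random((k, n)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

W, H = nmf(V, k=3)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Because the factors are constrained to be nonnegative, the columns of W tend to be additive parts, which is what makes this family of methods attractive for discovering primitive patterns.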
5

Myers, Isaac. "Improved survival of patients with HCC through new therapeutic options and the use of multimodal therapy concepts: data from a large German university hospital." Berlin: Medizinische Fakultät Charité - Universitätsmedizin Berlin, 2014. http://d-nb.info/1061023567/34.

6

Díaz, Silva Luis Eduardo, and Antezana Diego Federico Rioseco. "El transporte multimodal: concepto, problemática y proyección." Tesis, Universidad de Chile, 2001. http://www.repositorio.uchile.cl/handle/2250/114557.

Abstract:
Thesis (Licentiate in Legal and Social Sciences)
The success of national and international commercial transactions depends, among many other factors, on the efficiency of the transport chains that move the traded goods. Transport has had to adapt to the demands of the "new economy," whose push for economic integration led to the creation of the legal figure of multimodal transport of goods. Multimodal transport of goods is transport carried out by two or more different modes of transport under a single contract. This work aims to present and introduce the novel concept of multimodal transport, distinguishing it from traditional unimodal and segmented transport; to set out its current difficulties and points of divergence; and to propose the changes needed to implement it as a new transport system that contributes to the economic growth of developing countries.
7

Bellik, Yacine. "Interfaces multimodales : concepts, modeles et architectures." Paris 11, 1995. http://www.theses.fr/1995PA112178.

Abstract:
This thesis belongs to the field of human-computer communication, and more specifically to the design and implementation of multimodal interfaces. The research presented describes the new problems raised by this type of interface and proposes solutions that are tested through completed implementations. The work is divided into three stages. The first, exploratory stage describes the design and implementation of a multimodal interface for a graphic drawing application (limsi-draw). This first experiment reveals important problems tied to current technological constraints, underlines the importance of the temporal factor, often neglected in classical interfaces, and proposes an effective method for information fusion. The adopted architecture model is built around independent interpreters and a central dialogue controller that uses decision rules to ensure robust fusion. This part of the study ends with an evaluation with human subjects, from which interesting lessons on the use of modalities are drawn. The second stage aims to design a tool for specifying multimodal interactions. This tool, named specimen, is based on a model that combines specification by augmented transition networks with specification by messages, using composition operators that describe sequential and/or parallel actions. In addition, a message-detection method distributed across specialized agents allows general fusion mechanisms to be defined. In the last stage, specimen is applied to build a multimodal interface for blind users (meditor).
The goal is twofold: to validate the tool through a concrete implementation, and to study the contribution of multimodality to the problem of access to computer systems by blind users. Encouraging preliminary results are obtained, and promising perspectives for intelligent human-computer communication combining anthropomorphic and physical interaction models are discussed in conclusion.
8

Halonen, Maria. "Design för lärande och multimodala texter i svenskämnet : En produktorienterad studie av två läromedel i svenska." Thesis, Högskolan i Gävle, Avdelningen för humaniora, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-18844.

Abstract:
This paper presents a study of educational materials used in Swedish language education. The aim of the study is to understand how multimodal resources can be used in texts to benefit the process of meaning making among pupils in the nine-year compulsory school. The theoretical framework used for understanding and analysing these educational materials is the social semiotic multimodal perspective and the design theoretical multimodal perspective. The study is a multimodal text analysis, but it also involves analyses of the syllabi connected to the subject of Swedish. The extended concept of text was introduced in the syllabus in the year 2000, and today multimodal texts are supposed to be part of Swedish language education. In the course of this study the researcher found that multimodal resources can be used in different ways to benefit the process of meaning making. The study shows that the use of resources is connected to the different aims of texts and to the affordances of meaning-making resources. The aims of texts differ among and between the educational materials connected to the different syllabi. The researcher also found that the texts supposed to be included in Swedish language education have increased since the extended concept of text was introduced; pupils, however, are not introduced to strategies for dealing with these new kinds of texts to the same extent.
9

Liu, Yuanting. "Multimodal interaction: developing an interaction concept for a touchscreen incorporating tactile feedback." Diss., LMU, 2012. http://nbn-resolving.de/urn:nbn:de:bvb:19-138991.

10

Liu, Ningning. "Contributions to generic and affective visual concept recognition." Thesis, Ecully, Ecole centrale de Lyon, 2013. http://www.theses.fr/2013ECDL0038.

Abstract:
This Ph.D. thesis is dedicated to visual concept recognition (VCR). Due to many practical difficulties, VCR is still considered one of the most challenging problems in computer vision and pattern recognition. In this context, we propose several contributions, particularly in building multimodal approaches that efficiently combine visual and textual information. First, we investigate the efficiency of different types of low-level visual features for VCR, including color, texture, and shape. We believe that different concepts require different features to characterize them efficiently for recognition. We therefore evaluate various visual representations, not only global features such as color, shape, and texture, but also state-of-the-art local descriptors such as SIFT, Color SIFT, HOG, DAISY, LBP, and Color LBP. To help bridge the semantic gap between low-level visual features and high-level semantic concepts, particularly those related to emotions and feelings, we propose mid-level visual features based on visual harmony and dynamism, using Itten's color theory and psychological interpretations. Moreover, we employ a spatial pyramid strategy to capture spatial information when building our mid-level harmony and dynamism features, and we propose a new representation of color HSV histograms that uses a visual attention model to identify the regions of interest in images. Second, we propose a novel textual feature designed for VCR. Most photos shared online (on Flickr, Facebook, etc.) come with textual descriptions in the form of tags or legends. These descriptions are a rich source of semantic information about the visual data and are therefore interesting to consider in a VCR or multimedia retrieval system.
We propose Histograms of Textual Concepts (HTC) to capture the semantic relatedness of concepts. The general idea behind HTC is to represent a text document as a histogram of textual concepts over a vocabulary or dictionary, where the value for each concept is the accumulated contribution of each word in the document toward that concept, according to a predefined semantic similarity measure. Several variants of HTC are proposed and prove very efficient for VCR. Inspired by cepstral speech analysis, we also develop Cepstral HTC to capture both term-frequency information (as in TF-IDF) and the semantic relatedness of concepts in sparse image tags, overcoming HTC's shortcoming of ignoring term frequency. Third, we propose a fusion scheme, Selective Weighted Later Fusion (SWLF), designed to select the best features and to weight their scores for each concept to be recognized. SWLF proves particularly efficient for fusing visual and textual modalities compared with standard fusion schemes. While late fusion at score level is reputed to be a simple and effective way to fuse features of different natures in machine-learning problems, SWLF builds on two simple insights: first, the score delivered by a feature type should be weighted by its intrinsic quality for the classification problem at hand; second, in a multi-label scenario where several visual concepts may be assigned to an image, different concepts may require different features to recognize them best. In addition to SWLF, we also propose a novel combination approach based on Dempster-Shafer evidence theory, whose interesting properties allow fusing different ambiguous sources of information for visual affective recognition. [...]
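The HTC construction described in this abstract — a histogram over a concept vocabulary, where each word in a document adds its semantic similarity to every concept bin — can be sketched as follows. The embeddings, concept vectors, and cosine similarity used here are toy stand-ins for whatever semantic similarity measure the feature is built on.

```python
import numpy as np

# Toy word embeddings (hypothetical values, for illustration only).
embeddings = {
    "dog":   np.array([0.9, 0.1, 0.0]),
    "puppy": np.array([0.8, 0.2, 0.1]),
    "car":   np.array([0.1, 0.9, 0.2]),
    "road":  np.array([0.0, 0.8, 0.3]),
}

# Concept vocabulary over which the histogram is built.
concepts = ["animal", "vehicle"]
concept_vecs = {
    "animal":  np.array([1.0, 0.0, 0.0]),
    "vehicle": np.array([0.0, 1.0, 0.1]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def htc(tags):
    """Histogram of Textual Concepts: every tag adds its semantic
    similarity to each concept bin (unknown tags are skipped)."""
    hist = np.zeros(len(concepts))
    for word in tags:
        if word not in embeddings:
            continue
        for i, c in enumerate(concepts):
            hist[i] += max(0.0, cosine(embeddings[word], concept_vecs[c]))
    return hist

h = htc(["dog", "puppy", "road"])  # animal-leaning tag set
```

Unlike a TF-IDF vector, the histogram responds to tags that are merely *related* to a concept, which is the property the Cepstral HTC variant then combines with term-frequency information.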

Books on the topic "Multimodal Concepts"

1

Brown, J. Martin, Minesh P. Mehta, and Carsten Nieder, eds. Multimodal Concepts for Integration of Cytotoxic Drugs. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/3-540-35662-2.

2

Pattern Mining and Concept Discovery for Multimodal Content Analysis. [New York, N.Y.?]: [publisher not identified], 2016.

3

Nes, R. van. Design of multimodal transport systems: Setting the scene, review of literature and basic concept. Delft: TRAIL Research School, 2000.

4

Mehta, M. P., L. W. Brady, J. M. Brown, C. Nieder, and H. P. Heilmann. Multimodal Concepts for Integration of Cytotoxic Drugs. Springer Berlin / Heidelberg, 2010.

5

Górska, Elżbieta. Understanding Abstract Concepts across Modes in Multimodal Discourse. Routledge, 2019. http://dx.doi.org/10.4324/9780429282737.

6

Górska, Elżbieta. Understanding Abstract Concepts Across Modes in Multimodal Discourse. Taylor & Francis Group, 2021.

7

Understanding Abstract Concepts Across Modes in Multimodal Discourse. Taylor & Francis Group, 2019.

8

Multimodal Concepts for Integration of Cytotoxic Drugs (Medical Radiology). Springer, 2006.

9

Brady, L. W. (Foreword), H. P. Heilmann (Foreword), M. Molls (Foreword), J. M. Brown (Editor), M. P. Mehta (Editor), and C. Nieder (Editor), eds. Multimodal Concepts for Integration of Cytotoxic Drugs (Medical Radiology / Radiation Oncology). Springer, 2006.

10

Górska, Elżbieta. Understanding Abstract Concepts Across Modes in Multimodal Discourse: A Cognitive Linguistic Approach. Taylor & Francis Group, 2019.


Book chapters on the topic "Multimodal Concepts"

1

Sheppard, Jennifer. "Issues in Digital and Multimodal Writing." In Concepts in Composition, 3rd ed., 379–445. New York: Routledge, 2019. http://dx.doi.org/10.4324/9780203728659-10.

2

Carroll, Michael A., and Ema C. Yamamoto. "Level of Service Concepts in Multimodal Environments." In Traffic Engineering Handbook, 149–76. Hoboken, NJ, USA: John Wiley & Sons, Inc, 2016. http://dx.doi.org/10.1002/9781119174738.ch5.

3

Hürst, Wolfgang, and Casper van Wezel. "Multimodal Interaction Concepts for Mobile Augmented Reality Applications." In Lecture Notes in Computer Science, 157–67. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-17829-0_15.

4

GhasemAghaei, Reza, Ali Arya, and Robert Biddle. "MADE Ratio: Affective Multimodal Software for Mathematical Concepts." In Lecture Notes in Computer Science, 487–98. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-39483-1_44.

5

Stopka, Ulrike. "Multimodal Mobility Packages – Concepts and Methodological Design Approaches." In HCI in Mobility, Transport, and Automotive Systems. Driving Behavior, Urban and Smart Mobility, 318–39. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-50537-0_24.

6

Fyles, Anthony W., Michael Milosevic, and Amit Oza. "Applications to Gynecological Cancers." In Multimodal Concepts for Integration of Cytotoxic Drugs, 303–15. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/3-540-35662-2_20.

7

Signorello, Giovanni, Giovanni Maria Farinella, Giovanni Gallo, Luciano Santo, Antonino Lopes, and Emanuele Scuderi. "Exploring Protected Nature Through Multimodal Navigation of Multimedia Contents." In Advanced Concepts for Intelligent Vision Systems, 841–52. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-25903-1_72.

8

Chetty, Girija, Julian Goodwin, and Monica Singh. "Digital Image Tamper Detection Based on Multimodal Fusion of Residue Features." In Advanced Concepts for Intelligent Vision Systems, 79–87. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-17691-3_8.

9

Linebarger, Deborah L., and Lori Norton-Meier. "Scientific Concepts, Multiple Modalities, and Young Children." In Using Multimodal Representations to Support Learning in the Science Classroom, 97–116. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-16450-2_6.

10

Álvarez-Carmona, Miguel Á., Esaú Villatoro-Tello, Luis Villaseñor-Pineda, and Manuel Montes-y-Gómez. "Classifying the Social Media Author Profile Through a Multimodal Representation." In Intelligent Technologies: Concepts, Applications, and Future Directions, 57–81. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-1021-0_3.


Conference papers on the topic "Multimodal Concepts"

1

Wang, Zhichun, and Minqiang Li. "A Coevolution Approach for Learning Multimodal Concepts." In Third International Conference on Natural Computation (ICNC 2007). IEEE, 2007. http://dx.doi.org/10.1109/icnc.2007.12.

2

Rajaby Faghihi, Hossein, Roshanak Mirzaee, Sudarshan Paliwal, and Parisa Kordjamshidi. "Latent Alignment of Procedural Concepts in Multimodal Recipes." In Proceedings of the First Workshop on Advances in Language and Vision Research. Stroudsburg, PA, USA: Association for Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.alvr-1.5.

3

Li, PengYuan, and YongLi Wang. "A Multimodal Entity Linking Approach Incorporating Topic Concepts." In 2021 International Conference on Computer Information Science and Artificial Intelligence (CISAI). IEEE, 2021. http://dx.doi.org/10.1109/cisai54367.2021.00100.

4

Maier, Kathrin, Jürgen Hellbrück, and Heike Sacher. "A Visuohaptic Collision Warning Approach for High-Priority Braking Scenarios." In Applied Human Factors and Ergonomics Conference. AHFE International, 2022. http://dx.doi.org/10.54941/ahfe100751.

Abstract:
Given the great importance of an intuitive HMI for Advanced Driver Assistance Systems, a driving study was conducted to test innovative warning concepts for high-priority, imminent braking scenarios. Based on previous findings, a peripheral visual illumination-stripe warning was expected to yield important brake-reaction-time benefits compared to an auditory alarm, especially in a multimodal presentation mode together with a haptic brake-pulse warning. Since previous findings recommend multimodal rather than unimodal warnings to minimize brake reaction times, the optimal timing of the multimodal warning components was additionally evaluated. Using the EVITA test system, near rear-end collision scenarios were provoked to test the different warning concepts. The results support a visuohaptic warning approach based on synchronous presentation of the multimodal warning components to communicate imminent braking advice. Further implications for warning-concept design are discussed.
5

Nakamura, Tomoaki, Takayuki Nagai, and Naoto Iwahashi. "Grounding of word meanings in multimodal concepts using LDA." In 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2009). IEEE, 2009. http://dx.doi.org/10.1109/iros.2009.5354736.

6

Zhu, Qiang, Mei-Chen Yeh, and Kwang-Ting Cheng. "Multimodal fusion using learned text concepts for image categorization." In the 14th annual ACM international conference. New York, New York, USA: ACM Press, 2006. http://dx.doi.org/10.1145/1180639.1180698.

7

Calabrese, Agostina, Michele Bevilacqua, and Roberto Navigli. "EViLBERT: Learning Task-Agnostic Multimodal Sense Embeddings." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/67.

Abstract:
The problem of grounding language in vision is increasingly attracting scholarly efforts. As of now, however, most of the approaches have been limited to word embeddings, which are not capable of handling polysemous words. This is mainly due to the limited coverage of the available semantically-annotated datasets, hence forcing research to rely on alternative technologies (i.e., image search engines). To address this issue, we introduce EViLBERT, an approach which is able to perform image classification over an open set of concepts, both concrete and non-concrete. Our approach is based on the recently introduced Vision-Language Pretraining (VLP) model, and builds upon a manually-annotated dataset of concept-image pairs. We use our technique to clean up the image-to-concept mapping that is provided within a multilingual knowledge base, resulting in over 258,000 images associated with 42,500 concepts. We show that our VLP-based model can be used to create multimodal sense embeddings starting from our automatically-created dataset. In turn, we also show that these multimodal embeddings improve the performance of a Word Sense Disambiguation architecture over a strong unimodal baseline. We release code, dataset and embeddings at http://babelpic.org.
8

Chaplot, Devendra Singh, Lisa Lee, Ruslan Salakhutdinov, Devi Parikh, and Dhruv Batra. "Embodied Multimodal Multitask Learning." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/338.

Abstract:
Visually-grounded embodied language learning models have recently shown to be effective at learning multiple multimodal tasks such as following navigational instructions and answering questions. In this paper, we address two key limitations of these models, (a) the inability to transfer the grounded knowledge across different tasks and (b) the inability to transfer to new words and concepts not seen during training using only a few examples. We propose a multitask model which facilitates knowledge transfer across tasks by disentangling the knowledge of words and visual attributes in the intermediate representations. We create scenarios and datasets to quantify cross-task knowledge transfer and show that the proposed model outperforms a range of baselines in simulated 3D environments. We also show that this disentanglement of representations makes our model modular and interpretable which allows for transfer to instructions containing new concepts.
9

Wang, Liming, and Mark A. Hasegawa-Johnson. "Multimodal Word Discovery and Retrieval with Phone Sequence and Image Concepts." In Interspeech 2019. ISCA: ISCA, 2019. http://dx.doi.org/10.21437/interspeech.2019-1487.

10

Bendre, Nihar, Kevin Desai, and Peyman Najafirad. "Generalized Zero-Shot Learning Using Multimodal Variational Auto-Encoder With Semantic Concepts." In 2021 IEEE International Conference on Image Processing (ICIP). IEEE, 2021. http://dx.doi.org/10.1109/icip42928.2021.9506108.
