Selected scholarly literature on the topic "Multimodal Embeddings"

Cite a source in APA, MLA, Chicago, Harvard, and many other citation styles.

Consult the list of current articles, books, theses, conference proceedings, and other scholarly sources on the topic "Multimodal Embeddings".

Next to each source in the reference list there is an "Add to bibliography" button. Click it and we will automatically generate the bibliographic reference for the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read the abstract of the work online, if one is included in the metadata.

Journal articles on the topic "Multimodal Embeddings"

1

Tyshchuk, Kirill, Polina Karpikova, Andrew Spiridonov, Anastasiia Prutianova, Anton Razzhigaev, and Alexander Panchenko. "On Isotropy of Multimodal Embeddings". Information 14, no. 7 (July 10, 2023): 392. http://dx.doi.org/10.3390/info14070392.

Abstract:
Embeddings, i.e., vector representations of objects, such as texts, images, or graphs, play a key role in deep learning methodologies nowadays. Prior research has shown the importance of analyzing the isotropy of textual embeddings for transformer-based text encoders, such as the BERT model. Anisotropic word embeddings do not use the entire space, instead concentrating on a narrow cone in such a pretrained vector space, negatively affecting the performance of applications, such as textual semantic similarity. Transforming a vector space to optimize isotropy has been shown to be beneficial for improving performance in text processing tasks. This paper is the first comprehensive investigation of the distribution of multimodal embeddings using the example of OpenAI’s CLIP pretrained model. We aimed to deepen the understanding of the embedding space of multimodal embeddings, which has previously been unexplored in this respect, and study the impact on various end tasks. Our initial efforts were focused on measuring the alignment of image and text embedding distributions, with an emphasis on their isotropic properties. In addition, we evaluated several gradient-free approaches to enhance these properties, establishing their efficiency in improving the isotropy/alignment of the embeddings and, in certain cases, the zero-shot classification accuracy. Significantly, our analysis revealed that both CLIP and BERT models yielded embeddings situated within a cone immediately after initialization and preceding training. However, they were mostly isotropic in the local sense. We further extended our investigation to the structure of multilingual CLIP text embeddings, confirming that the observed characteristics were language-independent. By computing the few-shot classification accuracy and point-cloud metrics, we provide evidence of a strong correlation among multilingual embeddings. Embeddings transformation using the methods described in this article makes it easier to visualize embeddings. At the same time, multiple experiments that we conducted showed that, in regard to the transformed embeddings, the downstream tasks performance does not drop substantially (and sometimes is even improved). This means that one could obtain an easily visualizable embedding space, without substantially losing the quality of downstream tasks.
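As a rough, hedged illustration of the kind of gradient-free analysis described above (not the authors' code), the following sketch estimates anisotropy as the average pairwise cosine similarity of an embedding batch and applies mean-centering, one simple transform often reported to improve isotropy; the array `emb` merely stands in for a batch of CLIP text or image embeddings.

```python
import numpy as np

def avg_pairwise_cosine(emb: np.ndarray) -> float:
    """Average cosine similarity over distinct pairs: near 0 for isotropic
    embeddings, near 1 when vectors collapse into a narrow cone."""
    unit = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sims = unit @ unit.T
    off_diag = sims[~np.eye(len(emb), dtype=bool)]
    return float(off_diag.mean())

def mean_center(emb: np.ndarray) -> np.ndarray:
    """Subtract the common mean vector, a simple gradient-free correction."""
    return emb - emb.mean(axis=0, keepdims=True)

rng = np.random.default_rng(0)
emb = rng.normal(size=(512, 64)) + 5.0          # toy anisotropic embeddings
print(avg_pairwise_cosine(emb))                 # high: vectors share a direction
print(avg_pairwise_cosine(mean_center(emb)))    # near zero after centering
```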
2

Guo, Zhiqiang, Jianjun Li, Guohui Li, Chaoyang Wang, Si Shi, and Bin Ruan. "LGMRec: Local and Global Graph Learning for Multimodal Recommendation". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 8 (March 24, 2024): 8454–62. http://dx.doi.org/10.1609/aaai.v38i8.28688.

Abstract:
Multimodal recommendation has gradually become part of the infrastructure of online media platforms, enabling them to provide personalized service to users by jointly modeling users' historical behaviors (e.g., purchases, clicks) and items' various modalities (e.g., visual and textual). The majority of existing studies focus on utilizing modal features or modality-related graph structure to learn users' local interests. Nevertheless, these approaches encounter two limitations: (1) shared updates of user ID embeddings result in an undesirable coupling between collaborative and multimodal signals; (2) they do not explore robust global user interests that could alleviate the sparse-interaction problem faced by local interest modeling. To address these issues, we propose a novel Local and Global Graph Learning-guided Multimodal Recommender (LGMRec), which jointly models local and global user interests. Specifically, we present a local graph embedding module that independently learns collaboration-related and modality-related embeddings of users and items from local topological relations. Moreover, a global hypergraph embedding module is designed to capture global user and item embeddings by modeling informative global dependency relations. The global embeddings acquired in the hypergraph embedding space can then be combined with the two decoupled local embeddings to improve the accuracy and robustness of recommendations. Extensive experiments on three benchmark datasets demonstrate the superiority of LGMRec over various state-of-the-art recommendation baselines, showcasing its effectiveness in modeling both local and global user interests.
3

Shang, Bin, Yinliang Zhao, Jun Liu, and Di Wang. "LAFA: Multimodal Knowledge Graph Completion with Link Aware Fusion and Aggregation". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 8 (March 24, 2024): 8957–65. http://dx.doi.org/10.1609/aaai.v38i8.28744.

Abstract:
Recently, an enormous amount of research has emerged on multimodal knowledge graph completion (MKGC), which seeks to extract knowledge from multimodal data and predict the most plausible missing facts to complete a given multimodal knowledge graph (MKG). However, existing MKGC approaches largely ignore that visual information may introduce noise and uncertainty when added to traditional KG embeddings, because the contribution of each associated image to an entity differs across link scenarios. Moreover, treating each triple independently when learning entity embeddings discards both local structural information and whole-graph information. To address these challenges, we propose a novel link-aware fusion and aggregation based multimodal knowledge graph completion model named LAFA, which is composed of a link-aware fusion module and a link-aware aggregation module. The link-aware fusion module alleviates the noise of irrelevant visual information by calculating the importance between an entity and its associated images in different link scenarios, and fuses the visual and structural embeddings according to this importance through our proposed modality embedding fusion mechanism. The link-aware aggregation module assigns neighbor structural information to a given central entity by calculating the importance between the entity and its neighbors, and aggregates the fused embeddings through a linear combination weighted by this importance. Extensive experiments on standard datasets validate that LAFA obtains state-of-the-art performance.
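The fusion step described in this abstract can be pictured as an importance-weighted combination of an entity's image embeddings with its structural embedding. The sketch below is a minimal, illustrative rendering of that general idea under assumed shapes and names; it is not the LAFA implementation.

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max())
    return e / e.sum()

def fuse_entity(structural: np.ndarray, images: np.ndarray) -> np.ndarray:
    """Weight each associated image embedding by its dot-product relevance to
    the entity's structural embedding, then add the weighted visual summary
    to the structural vector (illustrative fusion only)."""
    scores = images @ structural      # one relevance score per image
    weights = softmax(scores)
    visual = weights @ images         # convex combination of image embeddings
    return structural + visual

entity_emb = np.random.randn(128)
image_embs = np.random.randn(5, 128)  # five images linked to the entity
fused = fuse_entity(entity_emb, image_embs)
```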
4

Sun, Zhongkai, Prathusha Sarma, William Sethares, and Yingyu Liang. "Learning Relationships between Text, Audio, and Video via Deep Canonical Correlation for Multimodal Language Analysis". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 8992–99. http://dx.doi.org/10.1609/aaai.v34i05.6431.

Abstract:
Multimodal language analysis often considers relationships between features based on text and those based on acoustical and visual properties. Text features typically outperform non-text features in sentiment analysis or emotion recognition tasks in part because the text features are derived from advanced language models or word embeddings trained on massive data sources while audio and video features are human-engineered and comparatively underdeveloped. Given that the text, audio, and video are describing the same utterance in different ways, we hypothesize that the multimodal sentiment analysis and emotion recognition can be improved by learning (hidden) correlations between features extracted from the outer product of text and audio (we call this text-based audio) and analogous text-based video. This paper proposes a novel model, the Interaction Canonical Correlation Network (ICCN), to learn such multimodal embeddings. ICCN learns correlations between all three modes via deep canonical correlation analysis (DCCA) and the proposed embeddings are then tested on several benchmark datasets and against other state-of-the-art multimodal embedding algorithms. Empirical results and ablation studies confirm the effectiveness of ICCN in capturing useful information from all three views.
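ICCN builds on deep canonical correlation analysis. As a simplified, hedged illustration of the underlying objective (not the paper's code), the snippet below runs classical linear CCA between two toy feature matrices standing in for the text-based audio and text-based video views; a DCCA network would maximize the same canonical correlations with learned nonlinear projections.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# Toy stand-ins for text-based audio and text-based video features
# (rows = utterances, columns = feature dimensions); names are illustrative.
rng = np.random.default_rng(0)
text_audio = rng.normal(size=(200, 40))
text_video = rng.normal(size=(200, 30))

cca = CCA(n_components=8)
audio_proj, video_proj = cca.fit_transform(text_audio, text_video)

# Correlation of each canonical pair: the quantity a DCCA model maximizes,
# here obtained with plain linear projections.
corrs = [np.corrcoef(audio_proj[:, k], video_proj[:, k])[0, 1] for k in range(8)]
print(corrs)
```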
5

Merkx, Danny, and Stefan L. Frank. "Learning semantic sentence representations from visually grounded language without lexical knowledge". Natural Language Engineering 25, no. 4 (July 2019): 451–66. http://dx.doi.org/10.1017/s1351324919000196.

Abstract:
Current approaches to learning semantic representations of sentences often use prior word-level knowledge. The current study aims to leverage visual information in order to capture sentence level semantics without the need for word embeddings. We use a multimodal sentence encoder trained on a corpus of images with matching text captions to produce visually grounded sentence embeddings. Deep Neural Networks are trained to map the two modalities to a common embedding space such that for an image the corresponding caption can be retrieved and vice versa. We show that our model achieves results comparable to the current state of the art on two popular image-caption retrieval benchmark datasets: Microsoft Common Objects in Context (MSCOCO) and Flickr8k. We evaluate the semantic content of the resulting sentence embeddings using the data from the Semantic Textual Similarity (STS) benchmark task and show that the multimodal embeddings correlate well with human semantic similarity judgements. The system achieves state-of-the-art results on several of these benchmarks, which shows that a system trained solely on multimodal data, without assuming any word representations, is able to capture sentence level semantics. Importantly, this result shows that we do not need prior knowledge of lexical level semantics in order to model sentence level semantics. These findings demonstrate the importance of visual information in semantics.
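A minimal sketch of the kind of two-branch encoder and margin-based ranking loss commonly used for image-caption retrieval of this sort, assuming precomputed image features and per-token caption features; the dimensions and module choices are illustrative and not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoTower(nn.Module):
    """Map precomputed image and caption features into one shared space."""
    def __init__(self, img_dim=2048, cap_dim=1024, joint_dim=512):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, joint_dim)
        self.cap_enc = nn.GRU(cap_dim, joint_dim, batch_first=True)

    def forward(self, img_feats, cap_feats):
        img = F.normalize(self.img_proj(img_feats), dim=-1)
        _, h = self.cap_enc(cap_feats)          # final hidden state = sentence embedding
        cap = F.normalize(h[-1], dim=-1)
        return img, cap

def hinge_ranking_loss(img, cap, margin=0.2):
    """Push matching image-caption pairs above mismatched ones by a margin."""
    scores = img @ cap.t()                       # cosine similarities (unit vectors)
    pos = scores.diag().unsqueeze(1)
    cost_cap = (margin + scores - pos).clamp(min=0)      # image -> wrong caption
    cost_img = (margin + scores - pos.t()).clamp(min=0)  # caption -> wrong image
    eye = torch.eye(len(img), dtype=torch.bool)
    return cost_cap.masked_fill(eye, 0).mean() + cost_img.masked_fill(eye, 0).mean()

model = TwoTower()
img_batch = torch.randn(32, 2048)       # e.g. CNN image features
cap_batch = torch.randn(32, 20, 1024)   # e.g. per-token caption features
loss = hinge_ranking_loss(*model(img_batch, cap_batch))
loss.backward()
```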
6

Tang, Zhenchao, Jiehui Huang, Guanxing Chen, and Calvin Yu-Chian Chen. "Comprehensive View Embedding Learning for Single-Cell Multimodal Integration". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 14 (March 24, 2024): 15292–300. http://dx.doi.org/10.1609/aaai.v38i14.29453.

Abstract:
Motivation: Advances in single-cell measurement techniques provide rich multimodal data, which helps us to explore the life state of cells more deeply. However, multimodal integration, or, learning joint embeddings from multimodal data remains a current challenge. The difficulty in integrating unpaired single-cell multimodal data is that different modalities have different feature spaces, which easily leads to information loss in joint embedding. And few existing methods have fully exploited and fused the information in single-cell multimodal data. Result: In this study, we propose CoVEL, a deep learning method for unsupervised integration of single-cell multimodal data. CoVEL learns single-cell representations from a comprehensive view, including regulatory relationships between modalities, fine-grained representations of cells, and relationships between different cells. The comprehensive view embedding enables CoVEL to remove the gap between modalities while protecting biological heterogeneity. Experimental results on multiple public datasets show that CoVEL is accurate and robust to single-cell multimodal integration. Data availability: https://github.com/shapsider/scintegration.
7

Zhang, Linhai, Deyu Zhou, Yulan He, and Zeng Yang. "MERL: Multimodal Event Representation Learning in Heterogeneous Embedding Spaces". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 16 (May 18, 2021): 14420–27. http://dx.doi.org/10.1609/aaai.v35i16.17695.

Abstract:
Previous work has shown the effectiveness of using event representations for tasks such as script event prediction and stock market prediction. It is, however, still challenging to learn the subtle semantic differences between events based solely on textual descriptions of events, which are often represented as (subject, predicate, object) triples. As an alternative, images offer a more intuitive way of understanding event semantics. We observe that events described in text and in images show different abstraction levels and therefore should be projected onto heterogeneous embedding spaces, as opposed to what has been done in previous approaches, which project signals from different modalities onto a homogeneous space. In this paper, we propose a Multimodal Event Representation Learning framework (MERL) to learn event representations based on both text and image modalities simultaneously. Event textual triples are projected as Gaussian density embeddings by a dual-path Gaussian triple encoder, while event images are projected as point embeddings by a visual event component-aware image encoder. Moreover, a novel score function motivated by statistical hypothesis testing is introduced to coordinate the two embedding spaces. Experiments are conducted on various multimodal event-related tasks and results show that MERL outperforms a number of unimodal and multimodal baselines, demonstrating the effectiveness of the proposed framework.
8

Sah, Shagan, Sabarish Gopalakishnan, and Raymond Ptucha. "Aligned attention for common multimodal embeddings". Journal of Electronic Imaging 29, no. 02 (March 25, 2020): 1. http://dx.doi.org/10.1117/1.jei.29.2.023013.

9

Zhang, Rongchao, Yiwei Lou, Dexuan Xu, Yongzhi Cao, Hanpin Wang, and Yu Huang. "A Learnable Discrete-Prior Fusion Autoencoder with Contrastive Learning for Tabular Data Synthesis". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 15 (March 24, 2024): 16803–11. http://dx.doi.org/10.1609/aaai.v38i15.29621.

Abstract:
The collection of tabular data for sharing involves confidentiality and privacy constraints, leaving the potential risks of machine learning for interventional data analysis insufficiently addressed. Synthetic data has recently emerged as a privacy-protecting solution to this challenge. However, existing approaches regard discrete and continuous modal features as separate entities, thus falling short of properly capturing their inherent correlations. In this paper, we propose a novel contrastive-learning-guided Gaussian Transformer autoencoder, termed GTCoder, to synthesize photo-realistic multimodal tabular data for scientific research. Our approach introduces a transformer-based fusion module that seamlessly integrates multimodal features, permitting the mining of more informative latent representations. The attention within the fusion module directs the integrated output features to focus on the critical components that facilitate the generation of latent embeddings. Moreover, we formulate a contrastive learning strategy that implicitly constrains the embeddings of discrete features in the latent feature space by pulling similar discrete feature distributions closer while pushing dissimilar ones further apart, in order to better enhance the representation of the latent embedding. Experimental results indicate that GTCoder is effective at generating photo-realistic synthetic data, with interactive interpretation of the latent embedding, and performs favorably against several baselines on most real-world and simulated datasets.
10

Lin, Kaiyi, Xing Xu, Lianli Gao, Zheng Wang, and Heng Tao Shen. "Learning Cross-Aligned Latent Embeddings for Zero-Shot Cross-Modal Retrieval". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 11515–22. http://dx.doi.org/10.1609/aaai.v34i07.6817.

Abstract:
Zero-Shot Cross-Modal Retrieval (ZS-CMR) is an emerging research hotspot that aims to retrieve data of new classes across different modalities. It is challenging not only because of the heterogeneous distributions across modalities, but also because of the inconsistent semantics across seen and unseen classes. A handful of recently proposed methods borrow the idea from zero-shot learning, i.e., exploiting word embeddings of class labels (class-embeddings) as a common semantic space, and using a generative adversarial network (GAN) to capture the underlying multimodal data structures and to strengthen the relations between input data and the semantic space so as to generalize across seen and unseen classes. In this paper, we propose a novel method termed Learning Cross-Aligned Latent Embeddings (LCALE) as an alternative to these GAN-based methods for ZS-CMR. Instead of using the class-embeddings as the semantic space, our method seeks a shared low-dimensional latent space of input multimodal features and class-embeddings via modality-specific variational autoencoders. Notably, we align the distributions learned from the multimodal input features and from the class-embeddings to construct latent embeddings that contain the essential cross-modal correlation associated with unseen classes. Effective cross-reconstruction and cross-alignment criteria are further developed to preserve class-discriminative information in the latent space, which benefits retrieval efficiency and enables knowledge transfer to unseen classes. We evaluate our model using four benchmark datasets on image-text retrieval tasks and one large-scale dataset on image-sketch retrieval tasks. The experimental results show that our method establishes new state-of-the-art performance for both tasks on all datasets.

Theses on the topic "Multimodal Embeddings"

1

Engilberge, Martin. "Deep Inside Visual-Semantic Embeddings". Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS150.

Abstract:
Nowadays, Artificial Intelligence (AI) is omnipresent in our society. The recent development of learning methods based on deep neural networks, also called "deep learning", has led to significant improvements in visual and textual representation models. In this thesis, we aim to further advance image representation and understanding. Revolving around Visual Semantic Embedding (VSE) approaches, we explore several directions: we present relevant background covering image and text representation and existing multimodal approaches; we propose novel architectures that further improve the retrieval capability of VSE; and we extend VSE models to novel applications, leveraging embedding models to visually ground semantic concepts. Finally, we delve into the learning process, and in particular the loss function, by learning a differentiable approximation of ranking-based metrics.
2

Deschamps-Berger, Théo. "Social Emotion Recognition with multimodal deep learning architecture in emergency call centers". Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASG036.

Abstract:
This thesis explores automatic speech-emotion recognition systems in a medical emergency context. It addresses some of the challenges encountered when studying emotions in social interactions. It is rooted in modern theories of emotions, particularly those of Lisa Feldman Barrett on the construction of emotions. Indeed, the manifestation of emotions in human interactions is complex and often characterized by nuanced, mixed, and is highly linked to the context. This study is based on the CEMO corpus, which is composed of telephone conversations between callers and emergency medical dispatchers (EMD) from a French emergency call center. This corpus provides a rich dataset to explore the capacity of deep learning systems, such as Transformers and pre-trained models, to recognize spontaneous emotions in spoken interactions. The applications could be to provide emotional cues that could improve call handling and decision-making by EMD, or to summarize calls. The work carried out in my thesis focused on different techniques related to speech emotion recognition, including transfer learning from pre-trained models, multimodal fusion strategies, dialogic context integration, and mixed emotion detection. An initial acoustic system based on temporal convolutions and recurrent networks was developed and validated on an emotional corpus widely used by the affective community, called IEMOCAP, and then on the CEMO corpus. Extensive research on multimodal systems, pre-trained in acoustics and linguistics and adapted to emotion recognition, is presented. In addition, the integration of dialog context in emotion recognition was explored, underlining the complex dynamics of emotions in social interactions. Finally, research has been initiated towards developing multi-label, multimodal systems capable of handling the subtleties of mixed emotions, often due to the annotator's perception and social context. Our research highlights some solutions and challenges in recognizing emotions in the wild. The CNRS AI HUMAAINE Chair: HUman-MAchine Affective Interaction & Ethics funded this thesis
3

Vukotic, Verdran. "Deep Neural Architectures for Automatic Representation Learning from Multimedia Multimodal Data". Thesis, Rennes, INSA, 2017. http://www.theses.fr/2017ISAR0015/document.

Abstract:
This dissertation discusses the thesis that deep neural networks are suited to the analysis of visual, textual, and fused visual-textual content. The work evaluates the ability of deep neural networks to learn multimodal representations automatically, in either an unsupervised or a supervised manner, and brings the following main contributions: 1) Recurrent neural networks for spoken language understanding (slot filling): different architectures are compared for this task with the aim of modeling both the input context and the output label dependencies. 2) Action prediction from single images: we propose an architecture that allows us to predict human actions from a single image; the architecture is evaluated on videos, using only one frame as input. 3) Bidirectional multimodal encoders: the main contribution of this thesis is a neural architecture that translates from one modality to the other and back, offering an improved multimodal representation space in which the initially disjoint representations can be translated and fused. This enables improved fusion of multiple modalities. The architecture was extensively studied and evaluated in international benchmarks on the task of video hyperlinking, where it defined the state of the art. 4) Generative adversarial networks for multimodal fusion: continuing on the topic of multimodal fusion, we evaluate the possibility of using conditional generative adversarial networks to learn multimodal representations; in addition to providing multimodal representations, generative adversarial networks make it possible to visualize the learned model directly in the image domain.
4

Rubio, Romano Antonio. "Fashion discovery : a computer vision approach". Doctoral thesis, TDX (Tesis Doctorals en Xarxa), 2021. http://hdl.handle.net/10803/672423.

Abstract:
Performing semantic interpretation of fashion images is undeniably one of the most challenging domains for computer vision. Subtle variations in color and shape might confer different meanings or interpretations to an image. Not only is it a domain tightly coupled with human understanding, but also with scene interpretation and context. Being able to extract fashion-specific information from images and interpret that information in a proper manner can be useful in many situations and help understanding the underlying information in an image. Fashion is also one of the most important businesses around the world, with an estimated value of 3 trillion dollars and a constantly growing online market, which increases the utility of image-based algorithms to search, classify or recommend garments. This doctoral thesis aims to solve specific problems related with the treatment of fashion e-commerce data, from low-level pure pixel information to high-level abstract conclusions of the garments appearing in an image, taking advantage of the multi-modality of the available data for developing some of the solutions. The contributions include: - A new superpixel extraction method focused on improving the annotation process for clothing images. - The construction of an image and text embedding for fashion data. - The application of this embedding space to the task of retrieving the main product in an image showing a complete outfit. In summary, fashion is a complex computer vision and machine learning problem at many levels, and developing specific algorithms that are able to capture essential information from pictures and text is not trivial. In order to solve some of the challenges it proposes, and taking into account that this is an Industrial Ph.D., we contribute with a variety of solutions that can boost the performance of many tasks useful for the fashion e-commerce industry.
5

Couairon, Guillaume. "Text-Based Semantic Image Editing". Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS248.

Abstract:
The aim of this thesis is to propose algorithms for the task of Text-based Image Editing (TIE), which consists in editing digital images according to an instruction formulated in natural language. For instance, given an image of a dog, and the query "Change the dog into a cat", we want to produce a novel image where the dog has been replaced by a cat, keeping all other image aspects unchanged (animal color and pose, background). The north-star goal is to enable anyone to edit their images using only queries in natural language. One specificity of text-based image editing is that there is practically no training data to train a supervised algorithm. In this thesis, we propose different solutions for editing images, based on the adaptation of large multimodal models trained on huge datasets. We first study a simplified editing setup, named Retrieval-based image edit- ing, which does not require to directly modify the input image. Instead, given the image and modification query, we search in a large database an image that corresponds to the requested edit. We leverage multimodal image/text alignment models trained on web-scale datasets (like CLIP) to perform such transformations without any examples. We also propose the SIMAT framework for evaluating retrieval-based image editing. We then study how to directly modify the input image. We propose FlexIT, a method which iteratively changes the input image until it satisfies an abstract "editing objective" defined in a multimodal embedding space. We introduce a variety of regularization terms to enforce realistic transformations. Next, we focus on diffusion models, which are powerful generative models able to synthetize novel images conditioned on a wide variety of textual prompts. We demonstrate their versatility by proposing DiffEdit, an algorithm which adapts diffusion models for image editing without finetuning. We propose a zero-shot strategy for finding automatically where the initial image should be changed to satisfy the text transformation query. Finally, we study a specific challenge useful in the context of image editing: how to synthetize a novel image by giving as constraint a spatial layout of objects with textual descriptions, a task which is known as Semantic Image Synthesis. We adopt the same strategy, consisting in adapting diffusion models to solve the task without any example. We propose the ZestGuide algorithm, which leverages the spatio-semantic information encoded in the attention layers of diffusion models
6

ur, Réhman Shafiq. "Expressing emotions through vibration for perception and control". Doctoral thesis, Umeå universitet, Institutionen för tillämpad fysik och elektronik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-32990.

Abstract:
This thesis addresses a challenging problem: “how to let the visually impaired ‘see’ others emotions”. We, human beings, are heavily dependent on facial expressions to express ourselves. A smile shows that the person you are talking to is pleased, amused, relieved etc. People use emotional information from facial expressions to switch between conversation topics and to determine attitudes of individuals. Missing emotional information from facial expressions and head gestures makes the visually impaired extremely difficult to interact with others in social events. To enhance the visually impaired’s social interactive ability, in this thesis we have been working on the scientific topic of ‘expressing human emotions through vibrotactile patterns’. It is quite challenging to deliver human emotions through touch since our touch channel is very limited. We first investigated how to render emotions through a vibrator. We developed a real time “lipless” tracking system to extract dynamic emotions from the mouth and employed mobile phones as a platform for the visually impaired to perceive primary emotion types. Later on, we extended the system to render more general dynamic media signals: for example, render live football games through vibration in the mobile for improving mobile user communication and entertainment experience. To display more natural emotions (i.e. emotion type plus emotion intensity), we developed the technology to enable the visually impaired to directly interpret human emotions. This was achieved by use of machine vision techniques and vibrotactile display. The display is comprised of a ‘vibration actuators matrix’ mounted on the back of a chair and the actuators are sequentially activated to provide dynamic emotional information. The research focus has been on finding a global, analytical, and semantic representation for facial expressions to replace state of the art facial action coding systems (FACS) approach. We proposed to use the manifold of facial expressions to characterize dynamic emotions. The basic emotional expressions with increasing intensity become curves on the manifold extended from the center. The blends of emotions lie between those curves, which could be defined analytically by the positions of the main curves. The manifold is the “Braille Code” of emotions. The developed methodology and technology has been extended for building assistive wheelchair systems to aid a specific group of disabled people, cerebral palsy or stroke patients (i.e. lacking fine motor control skills), who don’t have ability to access and control the wheelchair with conventional means, such as joystick or chin stick. The solution is to extract the manifold of the head or the tongue gestures for controlling the wheelchair. The manifold is rendered by a 2D vibration array to provide user of the wheelchair with action information from gestures and system status information, which is very important in enhancing usability of such an assistive system. Current research work not only provides a foundation stone for vibrotactile rendering system based on object localization but also a concrete step to a new dimension of human-machine interaction.

Book chapters on the topic "Multimodal Embeddings"

1

Zhao, Xiang, Weixin Zeng, and Jiuyang Tang. "Multimodal Entity Alignment". In Entity Alignment, 229–47. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-4250-3_9.

Abstract:
In various tasks related to artificial intelligence, data is often present in multiple forms or modalities. Recently, it has become a popular approach to combine these different forms of information into a knowledge graph, creating a multi-modal knowledge graph (MMKG). However, multi-modal knowledge graphs often face issues of insufficient data coverage and incompleteness. A possible strategy to address this issue is to incorporate supplemental information from other MMKGs. To achieve this goal, current entity alignment methods could be utilized; however, these approaches work within Euclidean space, and the resulting entity representations can distort the hierarchical structure of the knowledge graph. Additionally, the potential benefits of visual information have not been fully utilized. To address these concerns, we present a new approach for aligning entities across multiple modalities, which we call hyperbolic multi-modal entity alignment. This method expands upon the conventional Euclidean representation by incorporating a hyperboloid manifold. Initially, we utilize hyperbolic graph convolutional networks to acquire structural representations of entities. In terms of visual data, we create image embeddings and subsequently map them into the hyperbolic space. Lastly, we merge the structural and visual representations within the hyperbolic space and utilize the combined embeddings to forecast potential entity alignment outcomes. Through a series of thorough experiments and ablation studies, we validate the efficacy of our proposed model and its individual components.
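As a hedged illustration of the hyperbolic machinery this chapter relies on, the sketch below implements the standard exponential map at the origin of the Poincaré ball and the corresponding geodesic distance, one common way to lift Euclidean embeddings (structural or visual) into hyperbolic space before comparing them; it shows the general operation, not the chapter's exact model, and all variable names are illustrative.

```python
import numpy as np

def expmap0(v: np.ndarray, c: float = 1.0) -> np.ndarray:
    """Exponential map at the origin of the Poincare ball with curvature -c:
    lifts a Euclidean (tangent) vector into hyperbolic space."""
    norm = np.linalg.norm(v, axis=-1, keepdims=True).clip(min=1e-9)
    return np.tanh(np.sqrt(c) * norm) * v / (np.sqrt(c) * norm)

def poincare_dist(x: np.ndarray, y: np.ndarray, c: float = 1.0) -> float:
    """Geodesic distance between two points inside the ball."""
    sq = lambda a: np.sum(a * a, axis=-1)
    num = 2.0 * c * sq(x - y)
    den = (1.0 - c * sq(x)) * (1.0 - c * sq(y))
    return float(np.arccosh(1.0 + num / den) / np.sqrt(c))

# Lift Euclidean structural and visual embeddings, then compare them with a
# hyperbolic metric instead of a Euclidean or cosine one.
struct_emb = expmap0(np.random.randn(64) * 0.1)
visual_emb = expmap0(np.random.randn(64) * 0.1)
print(poincare_dist(struct_emb, visual_emb))
```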
2

Dolphin, Rian, Barry Smyth, and Ruihai Dong. "A Machine Learning Approach to Industry Classification in Financial Markets". In Communications in Computer and Information Science, 81–94. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-26438-2_7.

Abstract:
Industry classification schemes provide a taxonomy for segmenting companies based on their business activities. They are relied upon in industry and academia as an integral component of many types of financial and economic analysis. However, even modern classification schemes have failed to embrace the era of big data and remain a largely subjective undertaking prone to inconsistency and misclassification. To address this, we propose a multimodal neural model for training company embeddings, which harnesses the dynamics of both historical pricing data and financial news to learn objective company representations that capture nuanced relationships. We explain our approach in detail and highlight the utility of the embeddings through several case studies and application to the downstream task of industry classification.
3

Gornishka, Iva, Stevan Rudinac, and Marcel Worring. "Interactive Search and Exploration in Discussion Forums Using Multimodal Embeddings". In MultiMedia Modeling, 388–99. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-37734-2_32.

4

Dadwal, Rajjat, Ran Yu, and Elena Demidova. "A Multimodal and Multitask Approach for Adaptive Geospatial Region Embeddings". In Advances in Knowledge Discovery and Data Mining, 363–75. Singapore: Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-97-2262-4_29.

5

Pandey, Sandeep Kumar, Hanumant Singh Shekhawat, Shalendar Bhasin, Ravi Jasuja, and S. R. M. Prasanna. "Alzheimer’s Dementia Recognition Using Multimodal Fusion of Speech and Text Embeddings". In Intelligent Human Computer Interaction, 718–28. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-98404-5_64.

6

Zhou, Liting, and Cathal Gurrin. "Multimodal Embedding for Lifelog Retrieval". In MultiMedia Modeling, 416–27. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-98358-1_33.

7

Truchan, Hubert, Evgenii Naumov, Rezaul Abedin, Gregory Palmer, and Zahra Ahmadi. "Multimodal Isotropic Neural Architecture with Patch Embedding". In Neural Information Processing, 173–87. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-8079-6_14.

8

Hazman, Muzhaffar, Susan McKeever, and Josephine Griffith. "Meme Sentiment Analysis Enhanced with Multimodal Spatial Encoding and Face Embedding". In Communications in Computer and Information Science, 318–31. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-26438-2_25.

Abstract:
Internet memes are characterised by the interspersing of text amongst visual elements. State-of-the-art multimodal meme classifiers do not account for the relative positions of these elements across the two modalities, despite the latent meaning associated with where text and visual elements are placed. Against two meme sentiment classification datasets, we systematically show performance gains from incorporating the spatial position of visual objects, faces, and text clusters extracted from memes. In addition, we also present facial embedding as an impactful enhancement to image representation in a multimodal meme classifier. Finally, we show that incorporating this spatial information allows our fully automated approaches to outperform their corresponding baselines that rely on additional human validation of OCR-extracted text.
9

Frilling, Andrea, and Ashley K. Clift. "Surgery in Combination with Peptide Receptor Radionuclide Therapy: A Novel Approach for the Treatment of Advanced Neuroendocrine Tumours". In Beyond Becquerel and Biology to Precision Radiomolecular Oncology: Festschrift in Honor of Richard P. Baum, 31–40. Cham: Springer International Publishing, 2024. http://dx.doi.org/10.1007/978-3-031-33533-4_3.

Abstract:
Neuroendocrine tumours/neoplasms (NEN) are clinically challenging entities, often due to their late stage at initial diagnosis. Whilst surgery is the cornerstone of curative treatment, many patients are not eligible for a radical surgical approach, and instead other targeted or systemic treatments may be utilised. Neoadjuvant concepts such as downstaging borderline resectable tumours are more established in some adenocarcinomas than in neuroendocrine oncology, yet the diverse armamentarium for the latter offers promise for novel multimodal concepts that may offer prolonged disease control by complementarily targeting micro- and macro-neuroendocrine disease. One promising option, as yet only explored in small case series, is the combination of surgery and peptide receptor radionuclide therapy (PRRT). Here, the authors review the challenges posed by advanced NEN, review the fledgling evidence regarding the combination of PRRT and surgery, and present the case for a wider examination of embedding PRRT and surgery within a multimodal treatment strategy.
10

Auer, Peter, Barbara Laner, Martin Pfeiffer, and Kerstin Botsch. "Noticing and assessing nature". In Studies in Language and Social Interaction, 245–75. Amsterdam: John Benjamins Publishing Company, 2024. http://dx.doi.org/10.1075/slsi.36.09aue.

Abstract:
We analyze how walkers employ a verbal format, i.e., the combination of a perception imperative followed by a wie ‘how’-exclamative (e.g., KUCK ma wie TRAUMhaft das is; ‘look PTCL how wonderful that is’), in its multimodal embedding, thus contributing to a multimodal extension of interactional linguistics. The analysis heavily relies on mobile eye-tracking as a method to collect naturally occurring data. It is argued that this kind of analysis would not be possible without the use of this novel technology. We focus on the role of the verbal format in the process of transforming individual perceptions into intersubjective experiences of nature, for which the precise documentation of gaze is essential. It is shown that the interactional function of this combined format is to draw the co-walker’s attention to an object in the surroundings and to express an affective stance towards it, treating the noticed referent as noteworthy and remarkable.

Conference papers on the topic "Multimodal Embeddings"

1

Liu, Ruizhou, Zongsheng Cao, Zhe Wu, Qianqian Xu, and Qingming Huang. "Multimodal Knowledge Graph Embeddings via Lorentz-based Contrastive Learning". In 2024 IEEE International Conference on Multimedia and Expo (ICME), 1–6. IEEE, 2024. http://dx.doi.org/10.1109/icme57554.2024.10687608.

2

Takemaru, Lina, Shu Yang, Ruiming Wu, Bing He, Christos Davtzikos, Jingwen Yan, and Li Shen. "Mapping Alzheimer’s Disease Pseudo-Progression With Multimodal Biomarker Trajectory Embeddings". In 2024 IEEE International Symposium on Biomedical Imaging (ISBI), 1–5. IEEE, 2024. http://dx.doi.org/10.1109/isbi56570.2024.10635249.

3

Alhabashi, Yasser, Abdullah Alharbi, Samar Ahmad, Serry Sibaee, Omer Nacar, Lahouari Ghouti, and Anis Koubaa. "ASOS at ArAIEval Shared Task: Integrating Text and Image Embeddings for Multimodal Propaganda Detection in Arabic Memes". In Proceedings of The Second Arabic Natural Language Processing Conference, 473–77. Stroudsburg, PA, USA: Association for Computational Linguistics, 2024. http://dx.doi.org/10.18653/v1/2024.arabicnlp-1.46.

4

Chaabouni, Rahma, Ewan Dunbar, Neil Zeghidour, and Emmanuel Dupoux. "Learning Weakly Supervised Multimodal Phoneme Embeddings". In Interspeech 2017. ISCA: ISCA, 2017. http://dx.doi.org/10.21437/interspeech.2017-1689.

5

Mustafina, Sofia, Andrey Akimov, and Svetlana Mustafina. "Multimodal Embeddings In Emotion Recognition Research". In 2023 5th International Conference on Control Systems, Mathematical Modeling, Automation and Energy Efficiency (SUMMA). IEEE, 2023. http://dx.doi.org/10.1109/summa60232.2023.10349422.

6

Calabrese, Agostina, Michele Bevilacqua, and Roberto Navigli. "EViLBERT: Learning Task-Agnostic Multimodal Sense Embeddings". In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/67.

Abstract:
The problem of grounding language in vision is increasingly attracting scholarly efforts. As of now, however, most of the approaches have been limited to word embeddings, which are not capable of handling polysemous words. This is mainly due to the limited coverage of the available semantically-annotated datasets, hence forcing research to rely on alternative technologies (i.e., image search engines). To address this issue, we introduce EViLBERT, an approach which is able to perform image classification over an open set of concepts, both concrete and non-concrete. Our approach is based on the recently introduced Vision-Language Pretraining (VLP) model, and builds upon a manually-annotated dataset of concept-image pairs. We use our technique to clean up the image-to-concept mapping that is provided within a multilingual knowledge base, resulting in over 258,000 images associated with 42,500 concepts. We show that our VLP-based model can be used to create multimodal sense embeddings starting from our automatically-created dataset. In turn, we also show that these multimodal embeddings improve the performance of a Word Sense Disambiguation architecture over a strong unimodal baseline. We release code, dataset and embeddings at http://babelpic.org.
7

Lu, Yuxing, Weichen Zhao, Nan Sun, and Jinzhuo Wang. "Enhancing Multimodal Knowledge Graph Representation Learning through Triple Contrastive Learning". In Thirty-Third International Joint Conference on Artificial Intelligence {IJCAI-24}. California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/659.

Abstract:
Multimodal knowledge graphs incorporate multimodal information rather than pure symbols, which significantly enhance the representation of knowledge graphs and their capacity to understand the world. Despite these advancements, existing multimodal fusion techniques still face significant challenges in representing modalities and fully integrating the diverse attributes of entities, particularly when dealing with more than one modality. To address this issue, this article proposes a Knowledge Graph Multimodal Representation Learning (KG-MRI) method. This method utilizes foundation models to represent different modalities and incorporates a triple contrastive learning model and a dual-phase training strategy to effectively fuse the different modalities with knowledge graph embeddings. We conducted comprehensive comparisons with several different knowledge graph embedding methods to validate the effectiveness of our KG-MRI model. Furthermore validation on a real-world Non-Alcohol Fatty Liver Disease (NAFLD) cohort demonstrated that the vector representations learned through our methodology possess enhanced representational capabilities, showing promise for broader applications in complex multimodal environments.
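As a generic, hedged illustration of the contrastive alignment idea mentioned in this abstract (not the KG-MRI code), the snippet below computes a symmetric InfoNCE loss that pulls together embeddings of the same entity coming from two different modalities and pushes mismatched pairs apart; the tensor names and dimensions are assumptions for the sake of the example.

```python
import torch
import torch.nn.functional as F

def info_nce(a: torch.Tensor, b: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE: row i of `a` and row i of `b` are embeddings of the
    same entity from two modalities; all other rows act as negatives."""
    a = F.normalize(a, dim=-1)
    b = F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature
    targets = torch.arange(len(a), device=a.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Toy usage: align structural KG embeddings with, say, text-derived embeddings.
kg_emb = torch.randn(64, 256, requires_grad=True)
txt_emb = torch.randn(64, 256, requires_grad=True)
loss = info_nce(kg_emb, txt_emb)
loss.backward()
```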
8

Zhang, Miaoran, Marius Mosbach, David Adelani, Michael Hedderich, and Dietrich Klakow. "MCSE: Multimodal Contrastive Learning of Sentence Embeddings". In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Stroudsburg, PA, USA: Association for Computational Linguistics, 2022. http://dx.doi.org/10.18653/v1/2022.naacl-main.436.

9

Neculai, Andrei, Yanbei Chen, and Zeynep Akata. "Probabilistic Compositional Embeddings for Multimodal Image Retrieval". In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2022. http://dx.doi.org/10.1109/cvprw56347.2022.00501.

10

Mahajan, Shweta, Teresa Botschen, Iryna Gurevych, and Stefan Roth. "Joint Wasserstein Autoencoders for Aligning Multimodal Embeddings". In 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW). IEEE, 2019. http://dx.doi.org/10.1109/iccvw.2019.00557.
