Academic literature on the topic 'Multimodal Transformers'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Multimodal Transformers.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Multimodal Transformers"

1

Jaiswal, Sushma, Harikumar Pallthadka, Rajesh P. Chinchewadi, and Tarun Jaiswal. "Optimized Image Captioning: Hybrid Transformers Vision Transformers and Convolutional Neural Networks: Enhanced with Beam Search." International Journal of Intelligent Systems and Applications 16, no. 2 (April 8, 2024): 53–61. http://dx.doi.org/10.5815/ijisa.2024.02.05.

Abstract:
Deep learning has improved image captioning. The Transformer, a neural network architecture built for natural language processing, excels at image captioning and other computer vision applications. This paper reviews Transformer-based image captioning methods in detail. In traditional image captioning, convolutional neural networks (CNNs) extracted image features and RNNs or LSTM networks generated captions. This approach often suffers from information bottlenecks and has trouble capturing long-range dependencies. The Transformer architecture revolutionized natural language processing with its attention strategy and parallel processing. Researchers have leveraged the Transformer's success in language to solve image captioning problems. Transformer-based image captioning systems outperform previous methods in accuracy and efficiency by integrating visual and textual information into a single model. This paper discusses how the Transformer architecture's self-attention mechanisms and positional encodings are adapted for image captioning. Vision Transformers (ViTs) and CNN-Transformer hybrid models are discussed. We also discuss pre-training, fine-tuning, and reinforcement learning to improve caption quality. Transformer-based image captioning difficulties, trends, and future approaches are also examined. Multimodal fusion, visual-text alignment, and caption interpretability remain challenges. We expect research to address these issues and apply Transformer-based image captioning to medical imaging and remote sensing. This paper covers how Transformer-based approaches have changed image captioning and their potential to revolutionize multimodal interpretation and generation, advancing artificial intelligence and human-computer interaction.
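To make the CNN-Transformer hybrid captioning pipeline described in this abstract more concrete, the following is a minimal, hypothetical PyTorch sketch of a captioner that turns CNN feature maps into image tokens and decodes a caption with a Transformer decoder. The class name, backbone choice, and hyperparameters are illustrative assumptions, not the architecture evaluated in the paper.

# Hedged sketch: CNN image features + Transformer decoder with positional encodings.
import torch
import torch.nn as nn
from torchvision import models

class HybridCaptioner(nn.Module):
    def __init__(self, vocab_size, d_model=512, nhead=8, num_layers=3, max_len=40):
        super().__init__()
        cnn = models.resnet50(weights=None)                         # any CNN backbone would do
        self.backbone = nn.Sequential(*list(cnn.children())[:-2])   # keep the spatial feature grid
        self.proj = nn.Linear(2048, d_model)                        # map CNN channels to d_model
        self.embed = nn.Embedding(vocab_size, d_model)              # caption token embeddings
        self.pos = nn.Parameter(torch.zeros(max_len, d_model))      # learned positional encoding
        layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, images, captions):
        feats = self.backbone(images)                               # (B, 2048, H, W)
        memory = self.proj(feats.flatten(2).transpose(1, 2))        # (B, H*W, d_model) image tokens
        tgt = self.embed(captions) + self.pos[: captions.size(1)]
        mask = nn.Transformer.generate_square_subsequent_mask(captions.size(1)).to(captions.device)
        out = self.decoder(tgt, memory, tgt_mask=mask)              # self-attention + cross-attention to image tokens
        return self.lm_head(out)                                    # next-token logits, usable with beam search

The returned logits could then be fed to a beam-search procedure of the kind the paper evaluates.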
2

Bayat, Nasrin, Jong-Hwan Kim, Renoa Choudhury, Ibrahim F. Kadhim, Zubaidah Al-Mashhadani, Mark Aldritz Dela Virgen, Reuben Latorre, Ricardo De La Paz, and Joon-Hyuk Park. "Vision Transformer Customized for Environment Detection and Collision Prediction to Assist the Visually Impaired." Journal of Imaging 9, no. 8 (August 15, 2023): 161. http://dx.doi.org/10.3390/jimaging9080161.

Abstract:
This paper presents a system that utilizes vision transformers and multimodal feedback modules to facilitate navigation and collision avoidance for the visually impaired. By implementing vision transformers, the system achieves accurate object detection, enabling the real-time identification of objects in front of the user. Semantic segmentation and the algorithms developed in this work provide a means to generate a trajectory vector of all identified objects from the vision transformer and to detect objects that are likely to intersect with the user’s walking path. Audio and vibrotactile feedback modules are integrated to convey collision warning through multimodal feedback. The dataset used to create the model was captured from both indoor and outdoor settings under different weather conditions at different times across multiple days, resulting in 27,867 photos consisting of 24 different classes. Classification results showed good performance (95% accuracy), supporting the efficacy and reliability of the proposed model. The design and control methods of the multimodal feedback modules for collision warning are also presented, while the experimental validation concerning their usability and efficiency stands as an upcoming endeavor. The demonstrated performance of the vision transformer and the presented algorithms in conjunction with the multimodal feedback modules show promising prospects of its feasibility and applicability for the navigation assistance of individuals with vision impairment.
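As a rough illustration of the collision-prediction step described above (deciding whether a detected object's trajectory is likely to cross the user's walking path), here is a hypothetical geometric check. The corridor width, time horizon, and data layout are assumptions for this sketch, not details from the paper.

# Hedged sketch: flag objects whose straight-line trajectory enters the user's walking corridor.
import numpy as np

def will_collide(obj_pos, obj_vel, corridor_halfwidth=0.5, horizon_s=3.0, step_s=0.1):
    # obj_pos, obj_vel: (x, y) in metres and metres per second, in the user's frame,
    # where the user walks along the +y axis. Returns True if the object is predicted
    # to enter the corridor |x| <= corridor_halfwidth ahead of the user within the horizon.
    pos = np.asarray(obj_pos, dtype=float)
    vel = np.asarray(obj_vel, dtype=float)
    for t in np.arange(0.0, horizon_s, step_s):
        x, y = pos + t * vel
        if abs(x) <= corridor_halfwidth and y >= 0.0:
            return True
    return False

# Example: an object 2 m to the right and 4 m ahead, drifting left toward the path.
print(will_collide((2.0, 4.0), (-1.0, -0.2)))   # True under these assumed parameters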
3

Hendricks, Lisa Anne, John Mellor, Rosalia Schneider, Jean-Baptiste Alayrac, and Aida Nematzadeh. "Decoupling the Role of Data, Attention, and Losses in Multimodal Transformers." Transactions of the Association for Computational Linguistics 9 (2021): 570–85. http://dx.doi.org/10.1162/tacl_a_00385.

Abstract:
Recently, multimodal transformer models have gained popularity because their performance on downstream tasks suggests they learn rich visual-linguistic representations. Focusing on zero-shot image retrieval tasks, we study three important factors that can impact the quality of learned representations: pretraining data, the attention mechanism, and loss functions. By pretraining models on six datasets, we observe that dataset noise and language similarity to our downstream task are important indicators of model performance. Through architectural analysis, we learn that models with a multimodal attention mechanism can outperform deeper models with modality-specific attention mechanisms. Finally, we show that successful contrastive losses used in the self-supervised learning literature do not yield similar performance gains when used in multimodal transformers.
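The contrast studied here between a single multimodal (merged) attention mechanism and modality-specific attention can be illustrated with a short PyTorch sketch of the merged variant, in which text tokens and image-region tokens are concatenated and attend to each other inside one encoder layer. The layer sizes and the type-embedding trick are assumptions made for illustration, not the authors' implementation.

# Hedged sketch: one encoder layer whose self-attention spans both modalities at once.
import torch
import torch.nn as nn

class MergedMultimodalLayer(nn.Module):
    def __init__(self, d_model=512, nhead=8):
        super().__init__()
        self.layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.type_embed = nn.Embedding(2, d_model)   # 0 = text token, 1 = image region

    def forward(self, text_tokens, image_tokens):
        # text_tokens: (B, Lt, d_model); image_tokens: (B, Li, d_model)
        b, lt, _ = text_tokens.shape
        li = image_tokens.size(1)
        type_ids = torch.cat(
            [torch.zeros(b, lt, dtype=torch.long, device=text_tokens.device),
             torch.ones(b, li, dtype=torch.long, device=text_tokens.device)], dim=1)
        x = torch.cat([text_tokens, image_tokens], dim=1) + self.type_embed(type_ids)
        return self.layer(x)   # every token attends to every other token, across modalities

A modality-specific variant would instead run separate encoder stacks over text_tokens and image_tokens and exchange information only through later cross-attention, which is the alternative the paper compares against.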
4

Shao, Zilei. "A literature review on multimodal deep learning models for detecting mental disorders in conversational data: Pre-transformer and transformer-based approaches." Applied and Computational Engineering 18, no. 1 (October 23, 2023): 215–24. http://dx.doi.org/10.54254/2755-2721/18/20230993.

Abstract:
This paper provides a comprehensive review of multimodal deep learning models that utilize conversational data to detect mental health disorders. In addition to discussing models based on the Transformer, such as BERT (Bidirectional Encoder Representations from Transformers), this paper addresses models that existed prior to the Transformer, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs). The paper covers the application of these models in the construction of multimodal deep learning systems to detect mental disorders. In addition, the difficulties encountered by multimodal deep learning systems are brought up. Furthermore, the paper proposes research directions for enhancing the performance and robustness of these models in mental health applications. By shedding light on the potential of multimodal deep learning in mental health care, this paper aims to foster further research and development in this critical domain.
5

Wang, LeiChen, Simon Giebenhain, Carsten Anklam, and Bastian Goldluecke. "Radar Ghost Target Detection via Multimodal Transformers." IEEE Robotics and Automation Letters 6, no. 4 (October 2021): 7758–65. http://dx.doi.org/10.1109/lra.2021.3100176.

6

Salin, Emmanuelle, Badreddine Farah, Stéphane Ayache, and Benoit Favre. "Are Vision-Language Transformers Learning Multimodal Representations? A Probing Perspective." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (June 28, 2022): 11248–57. http://dx.doi.org/10.1609/aaai.v36i10.21375.

Abstract:
In recent years, joint text-image embeddings have significantly improved thanks to the development of transformer-based Vision-Language models. Despite these advances, we still need to better understand the representations produced by those models. In this paper, we compare pre-trained and fine-tuned representations at a vision, language and multimodal level. To that end, we use a set of probing tasks to evaluate the performance of state-of-the-art Vision-Language models and introduce new datasets specifically for multimodal probing. These datasets are carefully designed to address a range of multimodal capabilities while minimizing the potential for models to rely on bias. Although the results confirm the ability of Vision-Language models to understand color at a multimodal level, the models seem to prefer relying on bias in text data for object position and size. On semantically adversarial examples, we find that those models are able to pinpoint fine-grained multimodal differences. Finally, we also notice that fine-tuning a Vision-Language model on multimodal tasks does not necessarily improve its multimodal ability. We make all datasets and code available to replicate experiments.
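Probing studies of this kind are typically run by freezing the pre-trained model and training only a small classifier on its embeddings. The sketch below assumes a generic encode(image, text) function standing in for any Vision-Language model and an iterable of (image, text, label) probe examples; both are hypothetical placeholders, not the paper's code or datasets.

# Hedged sketch: linear probe on frozen vision-language embeddings.
import torch
import torch.nn as nn

def train_linear_probe(encode, probe_data, num_classes, dim=768, epochs=5, lr=1e-3):
    # The Vision-Language model stays frozen; only the linear probe is trained.
    probe = nn.Linear(dim, num_classes)
    opt = torch.optim.Adam(probe.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for image, text, label in probe_data:      # label: e.g. an object-colour class index
            with torch.no_grad():                  # frozen embeddings: no gradient to the VL model
                feat = encode(image, text)         # assumed to return a (dim,) multimodal vector
            loss = loss_fn(probe(feat.unsqueeze(0)), torch.tensor([label]))
            opt.zero_grad()
            loss.backward()
            opt.step()
    return probe    # probe accuracy indicates how well the frozen features expose the probed property

Low probe accuracy for a property (say, object position) then suggests the frozen representation does not encode that property well, which is the kind of conclusion the paper draws.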
7

Sun, Qixuan, Nianhua Fang, Zhuo Liu, Liang Zhao, Youpeng Wen, and Hongxiang Lin. "HybridCTrm: Bridging CNN and Transformer for Multimodal Brain Image Segmentation." Journal of Healthcare Engineering 2021 (October 1, 2021): 1–10. http://dx.doi.org/10.1155/2021/7467261.

Abstract:
Multimodal segmentation is a critical problem in medical image segmentation. Traditional deep learning methods rely entirely on CNNs for encoding the given images, leading to a deficiency in long-range dependencies and poor generalization performance. Recently, a series of Transformer-based methods has emerged in the field of image processing, bringing strong generalization and performance in various tasks. On the other hand, traditional CNNs have their own advantages, such as rapid convergence and local representations. Therefore, we analyze a hybrid multimodal segmentation method based on Transformers and CNNs and propose a novel architecture, the HybridCTrm network. We conduct experiments using HybridCTrm on two benchmark datasets and compare it with HyperDenseNet, a network based entirely on CNNs. Results show that our HybridCTrm outperforms HyperDenseNet on most of the evaluation metrics. Furthermore, we analyze the influence of the Transformer's depth on performance. Finally, we visualize the results and carefully explore how our hybrid methods improve the segmentations.
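The general pattern of running a CNN branch and a Transformer branch side by side and fusing same-resolution features, which hybrids of this kind rely on, can be sketched roughly as follows. The block below is a generic toy example under assumed shapes, not the HybridCTrm architecture itself.

# Hedged sketch: parallel CNN and Transformer branches with simple feature fusion.
import torch
import torch.nn as nn

class ParallelCNNTransformerBlock(nn.Module):
    def __init__(self, channels=64, nhead=4):
        super().__init__()
        self.cnn_branch = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU())
        self.transformer_branch = nn.TransformerEncoderLayer(
            d_model=channels, nhead=nhead, batch_first=True)
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)  # fuse same-size features

    def forward(self, x):                      # x: (B, C, H, W) multimodal input features
        local_feat = self.cnn_branch(x)        # local representations from the CNN branch
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)  # (B, H*W, C) tokens for the Transformer branch
        global_feat = self.transformer_branch(tokens)
        global_feat = global_feat.transpose(1, 2).reshape(b, c, h, w)
        return self.fuse(torch.cat([local_feat, global_feat], dim=1))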
8

Tian, Yu, Qiyang Zhao, Zine el abidine Kherroubi, Fouzi Boukhalfa, Kebin Wu, and Faouzi Bader. "Multimodal transformers for wireless communications: A case study in beam prediction." ITU Journal on Future and Evolving Technologies 4, no. 3 (September 5, 2023): 461–71. http://dx.doi.org/10.52953/jwra8095.

Abstract:
Wireless communications at high-frequency bands with large antenna arrays face challenges in beam management, which can potentially be improved by multimodal sensing information from cameras, LiDAR, radar, and GPS. In this paper, we present a multimodal transformer deep learning framework for sensing-assisted beam prediction. We employ a convolutional neural network to extract the features from a sequence of images, point clouds, and radar raw data sampled over time. At each convolutional layer, we use transformer encoders to learn the hidden relations between feature tokens from different modalities and time instances over abstraction space and produce encoded vectors for the next-level feature extraction. We train the model on a combination of different modalities with supervised learning. We enhance the model on imbalanced data by utilizing focal loss and an exponential moving average. We also evaluate data processing and augmentation techniques such as image enhancement, segmentation, background filtering, multimodal data flipping, radar signal transformation, and GPS angle calibration. Experimental results show that our solution trained on image and GPS data produces the best distance-based accuracy of predicted beams at 78.44%, with effective generalization to unseen day scenarios near 73% and night scenarios over 84%. This outperforms using other modalities and arbitrary data processing techniques, which demonstrates the effectiveness of transformers with feature fusion in performing radio beam prediction from images and GPS. Furthermore, our solution could be pretrained on large sequences of multimodal wireless data and fine-tuned for multiple downstream radio network tasks.
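The focal loss used here to cope with imbalanced beam labels has a standard form that can be written in a few lines of PyTorch. This is the textbook formulation with focusing parameter gamma and no class-weighting term, not necessarily the exact loss configuration used in the paper.

# Hedged sketch: focal loss for imbalanced beam-index classification.
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    # logits: (B, num_beams) raw scores; targets: (B,) ground-truth beam indices.
    log_probs = F.log_softmax(logits, dim=-1)
    ce = F.nll_loss(log_probs, targets, reduction="none")   # per-sample cross-entropy
    pt = torch.exp(-ce)                                      # model probability of the true beam
    return ((1.0 - pt) ** gamma * ce).mean()                 # down-weight easy, frequent beams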
9

Chen, Yu, Ming Yin, Yu Li, and Qian Cai. "CSU-Net: A CNN-Transformer Parallel Network for Multimodal Brain Tumour Segmentation." Electronics 11, no. 14 (July 16, 2022): 2226. http://dx.doi.org/10.3390/electronics11142226.

Abstract:
Medical image segmentation techniques are vital to medical image processing and analysis. Considering the significant clinical applications of brain tumour image segmentation, it represents a focal point of medical image segmentation research. Most of the work in recent times has been centred on Convolutional Neural Networks (CNN) and Transformers. However, CNN has some deficiencies in modelling long-distance information transfer and contextual processing information, while Transformer is relatively weak in acquiring local information. To overcome the above defects, we propose a novel segmentation network with an “encoder–decoder” architecture, namely CSU-Net. The encoder consists of two parallel feature extraction branches based on CNN and Transformer, respectively, in which the features of the same size are fused. The decoder has a dual Swin Transformer decoder block with two learnable parameters for feature upsampling. The features from multiple resolutions in the encoder and decoder are merged via skip connections. On the BraTS 2020, our model achieves 0.8927, 0.8857, and 0.8188 for the Whole Tumour (WT), Tumour Core (TC), and Enhancing Tumour (ET), respectively, in terms of Dice scores.
10

Wang, Zhaokai, Renda Bao, Qi Wu, and Si Liu. "Confidence-aware Non-repetitive Multimodal Transformers for TextCaps." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 4 (May 18, 2021): 2835–43. http://dx.doi.org/10.1609/aaai.v35i4.16389.

Abstract:
When describing an image, reading the text in the visual scene is crucial to understanding the key information. Recent work explores the TextCaps task, i.e. image captioning with reading Optical Character Recognition (OCR) tokens, which requires models to read text and cover it in generated captions. Existing approaches fail to generate accurate descriptions because of their (1) poor reading ability; (2) inability to choose the crucial words among all extracted OCR tokens; (3) repetition of words in predicted captions. To this end, we propose Confidence-aware Non-repetitive Multimodal Transformers (CNMT) to tackle the above challenges. Our CNMT consists of reading, reasoning, and generation modules, in which the Reading Module employs better OCR systems to enhance text reading ability and a confidence embedding to select the most noteworthy tokens. To address the issue of word redundancy in captions, our Generation Module includes a repetition mask to avoid predicting repeated words in captions. Our model outperforms state-of-the-art models on the TextCaps dataset, improving CIDEr from 81.0 to 93.0. Our source code is publicly available.
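The repetition-mask idea (once an OCR token has been emitted, it is blocked for the rest of the caption) can be illustrated with a small greedy-decoding loop. The step function, the token ids, and the restriction of the mask to OCR tokens are assumptions made for this sketch, not the CNMT implementation.

# Hedged sketch: masking already-used OCR tokens during greedy caption decoding.
import torch

def greedy_decode_with_repetition_mask(step, start_id, end_id, ocr_token_ids, vocab_size, max_len=20):
    # step(prefix) is assumed to return a (vocab_size,) tensor of next-token logits.
    prefix = [start_id]
    banned = torch.zeros(vocab_size, dtype=torch.bool)
    for _ in range(max_len):
        logits = step(prefix)
        logits = logits.masked_fill(banned, float("-inf"))   # repetition mask
        next_id = int(torch.argmax(logits))
        if next_id == end_id:
            break
        prefix.append(next_id)
        if next_id in ocr_token_ids:                         # only previously used OCR tokens are banned
            banned[next_id] = True
    return prefix[1:]                                        # generated caption token ids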

Dissertations / Theses on the topic "Multimodal Transformers"

1

Greco, Claudio. "Transfer Learning and Attention Mechanisms in a Multimodal Setting." Doctoral thesis, Università degli studi di Trento, 2022. http://hdl.handle.net/11572/341874.

Abstract:
Humans are able to develop a solid knowledge of the world around them: they can leverage information coming from different sources (e.g., language, vision), focus on the most relevant information from the input they receive in a given life situation, and exploit what they have learned before without forgetting it. In the field of Artificial Intelligence and Computational Linguistics, replicating these human abilities in artificial models is a major challenge. Recently, models based on pre-training and on attention mechanisms, namely pre-trained multimodal Transformers, have been developed. They seem to perform tasks surprisingly well compared to other computational models in multiple contexts. They simulate a human-like cognition in that they supposedly rely on previously acquired knowledge (transfer learning) and focus on the most important information (attention mechanisms) of the input. Nevertheless, we still do not know whether these models can deal with multimodal tasks that require merging different types of information simultaneously to be solved, as humans would do. This thesis attempts to fill this crucial gap in our knowledge of multimodal models by investigating the ability of pre-trained Transformers to encode multimodal information; and the ability of attention-based models to remember how to deal with previously-solved tasks. With regards to pre-trained Transformers, we focused on their ability to rely on pre-training and on attention while dealing with tasks requiring to merge information coming from language and vision. More precisely, we investigate if pre-trained multimodal Transformers are able to understand the internal structure of a dialogue (e.g., organization of the turns); to effectively solve complex spatial questions requiring to process different spatial elements (e.g., regions of the image, proximity between elements, etc.); and to make predictions based on complementary multimodal cues (e.g., guessing the most plausible action by leveraging the content of a sentence and of an image). The results of this thesis indicate that pre-trained Transformers outperform other models. Indeed, they are able to some extent to integrate complementary multimodal information; they manage to pinpoint both the relevant turns in a dialogue and the most important regions in an image. These results suggest that pre-training and attention play a key role in pre-trained Transformers’ encoding. Nevertheless, their way of processing information cannot be considered as human-like. Indeed, when compared to humans, they struggle (as non-pre-trained models do) to understand negative answers, to merge spatial information in difficult questions, and to predict actions based on complementary linguistic and visual cues. With regards to attention-based models, we found out that these kinds of models tend to forget what they have learned in previously-solved tasks. However, training these models on easy tasks before more complex ones seems to mitigate this catastrophic forgetting phenomenon. These results indicate that, at least in this context, attention-based models (and, supposedly, pre-trained Transformers too) are sensitive to tasks’ order. A better control of this variable may therefore help multimodal models learn sequentially and continuously as humans do.
2

Vazquez, Rodriguez Juan Fernando. "Transformateurs multimodaux pour la reconnaissance des émotions." Electronic Thesis or Diss., Université Grenoble Alpes, 2023. http://www.theses.fr/2023GRALM057.

Abstract:
Mental health and emotional well-being have a significant influence on physical health and are especially important for healthy aging. Continued progress on sensors and microelectronics has provided a number of new technologies that can be deployed in homes and used to monitor health and well-being. These can be combined with recent advances in machine learning to provide services that enhance the physical and emotional well-being of individuals and promote healthy aging. In this context, an automatic emotion recognition system can provide a tool to help assure the emotional well-being of frail people. Therefore, it is desirable to develop a technology that can draw information about human emotions from multiple sensor modalities and can be trained without the need for large labeled training datasets. This thesis addresses the problem of emotion recognition using the different types of signals that a smart environment may provide, such as visual, audio, and physiological signals. To do this, we develop different models based on the Transformer architecture, which has useful characteristics such as the capacity to model long-range dependencies and the capability to discern the relevant parts of the input. We first propose a model to recognize emotions from individual physiological signals, together with a self-supervised pre-training technique that uses unlabeled physiological signals, and show that this pre-training helps the model perform better. This approach is then extended to take advantage of the complementary information that may exist in different physiological signals. For this, we develop a model that combines different physiological signals and also uses self-supervised pre-training to improve its performance. We propose a pre-training method that does not require a dataset with the complete set of target signals but can instead be trained on individual datasets for each target signal. To further take advantage of the different modalities that a smart environment may provide, we also propose a model that takes multimodal signals such as video, audio, and physiological signals as inputs. Since these signals are of a different nature, they cover different ways in which emotions are expressed and should therefore provide complementary information about emotions, which makes it appealing to use them together. However, in real-world scenarios, there might be cases where a modality is missing. Our model is flexible enough to continue working when a modality is missing, albeit with a reduction in performance. To address this problem, we propose a training strategy that reduces the drop in performance when a modality is missing. The methods developed in this thesis are evaluated on several datasets, with results that demonstrate the effectiveness of our approach to pre-training Transformers to recognize emotions from physiological signals. The results also show the efficacy of our Transformer-based solution in aggregating multimodal information and accommodating missing modalities. These results demonstrate the feasibility of the proposed approaches to recognizing emotions from multiple environmental sensors. This opens new avenues for deeper exploration of Transformer-based approaches to processing information from environmental sensors and enables the development of emotion recognition technologies that are robust to missing modalities. The results of this work can contribute to better care for the mental health of frail people.
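A common, generic way to train for robustness to missing modalities, in the spirit of the training strategy described above, is to randomly zero out whole modalities during training so the model learns to cope without them. The dictionary layout and drop probability below are assumptions for illustration, not the procedure from the thesis.

# Hedged sketch: random modality dropout during training for missing-modality robustness.
import random
import torch

def drop_modalities(batch, p_drop=0.3):
    # batch: dict such as {"video": Tensor, "audio": Tensor, "physio": Tensor} (assumed layout).
    # Each modality is independently dropped (zeroed) with probability p_drop; at least one is kept.
    names = list(batch.keys())
    kept = [name for name in names if random.random() > p_drop]
    if not kept:
        kept = [random.choice(names)]        # never drop every modality at once
    return {name: (x if name in kept else torch.zeros_like(x)) for name, x in batch.items()}

At test time, a genuinely missing modality is then handled in the same way the model already encountered during training.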
3

Mills, Kathy Ann. "Multiliteracies : a critical ethnography : pedagogy, power, discourse and access to multiliteracies." Thesis, Queensland University of Technology, 2006. https://eprints.qut.edu.au/16244/1/Kathy_Mills_Thesis.pdf.

Abstract:
The multiliteracies pedagogy of the New London Group is a response to the emergence of new literacies and changing forms of meaning-making in contemporary contexts of increased cultural and linguistic diversity. This critical ethnographic research investigates the interactions between pedagogy, power, discourses, and differential access to multiliteracies, among a group of culturally and linguistically diverse learners in a mainstream Australian classroom. The study documents the way in which a teacher enacted the multiliteracies pedagogy through a series of mediabased lessons with her year six (aged 11-12 years) class. The reporting of this research is timely because the multiliteracies pedagogy has become a key feature of Australian educational policy initiatives and syllabus requirements. The methodology of this study was based on Carspecken's critical ethnography. This method includes five stages: Stage One involved eighteen days of observational data collection over the course of ten weeks in the classroom. The multiliteracies lessons aimed to enable learners to collaboratively design a claymation movie. Stage Two was the initial analysis of data, including verbatim transcribing, coding, and applying analytic tools to the data. Stage Three involved semi-structured, forty-five minute interviews with the principal, teacher, and four culturally and linguistically diverse students. In Stages Four and Five, the results of micro-level data analysis were compared with macro-level phenomena using structuration theory and extant literature about access to multiliteracies. The key finding was that students' access to multiliteracies differed among the culturally and linguistically diverse group. Existing degrees of access were reproduced, based on the learners' relation to the dominant culture. In the context of the media-based lessons in which students designed claymation movies, students from Anglo-Australian, middle-class backgrounds had greater access to transformed designing than those who were culturally marginalised. These experiences were mediated by pedagogy, power, and discourses in the classroom, which were in turn influenced by the agency of individuals. The individuals were both enabled and constrained by structures of power within the school and the wider educational and social systems. Recommendations arising from the study were provided for teachers, principals, policy makers and researchers who seek to monitor and facilitate the success of the multiliteracies pedagogy in culturally and linguistically diverse educational contexts.

Book chapters on the topic "Multimodal Transformers"

1

Revanur, Ambareesh, Ananyananda Dasari, Conrad S. Tucker, and László A. Jeni. "Instantaneous Physiological Estimation Using Video Transformers." In Multimodal AI in Healthcare, 307–19. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-14771-5_22.

2

Kant, Yash, Dhruv Batra, Peter Anderson, Alexander Schwing, Devi Parikh, Jiasen Lu, and Harsh Agrawal. "Spatially Aware Multimodal Transformers for TextVQA." In Computer Vision – ECCV 2020, 715–32. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58545-7_41.

3

Mojtahedi, Ramtin, Mohammad Hamghalam, Richard K. G. Do, and Amber L. Simpson. "Towards Optimal Patch Size in Vision Transformers for Tumor Segmentation." In Multiscale Multimodal Medical Imaging, 110–20. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-18814-5_11.

4

Ramesh, Krithik, and Yun Sing Koh. "Investigation of Explainability Techniques for Multimodal Transformers." In Communications in Computer and Information Science, 90–98. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-8746-5_7.

5

Bucur, Ana-Maria, Adrian Cosma, Paolo Rosso, and Liviu P. Dinu. "It’s Just a Matter of Time: Detecting Depression with Time-Enriched Multimodal Transformers." In Lecture Notes in Computer Science, 200–215. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-28244-7_13.

6

Sun, Zhengxiao, Feiyu Chen, and Jie Shao. "Synesthesia Transformer with Contrastive Multimodal Learning." In Neural Information Processing, 431–42. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-30105-6_36.

7

Xie, Long-Fei, and Xu-Yao Zhang. "Gate-Fusion Transformer for Multimodal Sentiment Analysis." In Pattern Recognition and Artificial Intelligence, 28–40. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-59830-3_3.

8

Wang, Wenxuan, Chen Chen, Meng Ding, Hong Yu, Sen Zha, and Jiangyun Li. "TransBTS: Multimodal Brain Tumor Segmentation Using Transformer." In Medical Image Computing and Computer Assisted Intervention – MICCAI 2021, 109–19. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-87193-2_11.

9

Liu, Dan, Wei Song, and Xiaobing Zhao. "Pedestrian Attribute Recognition Based on Multimodal Transformer." In Pattern Recognition and Computer Vision, 422–33. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-8429-9_34.

10

Reyes, Abel A., Sidike Paheding, Makarand Deo, and Michel Audette. "Gabor Filter-Embedded U-Net with Transformer-Based Encoding for Biomedical Image Segmentation." In Multiscale Multimodal Medical Imaging, 76–88. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-18814-5_8.


Conference papers on the topic "Multimodal Transformers"

1

Parthasarathy, Srinivas, and Shiva Sundaram. "Detecting Expressions with Multimodal Transformers." In 2021 IEEE Spoken Language Technology Workshop (SLT). IEEE, 2021. http://dx.doi.org/10.1109/slt48900.2021.9383573.

2

Chua, Watson W. K., Lu Li, and Alvina Goh. "Classifying Multimodal Data Using Transformers." In KDD '22: The 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3534678.3542634.

3

Wang, Yikai, Xinghao Chen, Lele Cao, Wenbing Huang, Fuchun Sun, and Yunhe Wang. "Multimodal Token Fusion for Vision Transformers." In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2022. http://dx.doi.org/10.1109/cvpr52688.2022.01187.

4

Tang, Wenzhuo, Hongzhi Wen, Renming Liu, Jiayuan Ding, Wei Jin, Yuying Xie, Hui Liu, and Jiliang Tang. "Single-Cell Multimodal Prediction via Transformers." In CIKM '23: The 32nd ACM International Conference on Information and Knowledge Management. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3583780.3615061.

5

Liu, Yicheng, Jinghuai Zhang, Liangji Fang, Qinhong Jiang, and Bolei Zhou. "Multimodal Motion Prediction with Stacked Transformers." In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2021. http://dx.doi.org/10.1109/cvpr46437.2021.00749.

6

Bhargava, Prajjwal. "Adaptive Transformers for Learning Multimodal Representations." In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop. Stroudsburg, PA, USA: Association for Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.acl-srw.1.

7

Vazquez-Rodriguez, Juan. "Using Multimodal Transformers in Affective Computing." In 2021 9th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW). IEEE, 2021. http://dx.doi.org/10.1109/aciiw52867.2021.9666396.

8

Shang, Xindi, Zehuan Yuan, Anran Wang, and Changhu Wang. "Multimodal Video Summarization via Time-Aware Transformers." In MM '21: ACM Multimedia Conference. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3474085.3475321.

9

Wu, Zhengtao, Lingbo Liu, Yang Zhang, Mingzhi Mao, Liang Lin, and Guanbin Li. "Multimodal Crowd Counting with Mutual Attention Transformers." In 2022 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2022. http://dx.doi.org/10.1109/icme52920.2022.9859777.

10

Ma, Mengmeng, Jian Ren, Long Zhao, Davide Testuggine, and Xi Peng. "Are Multimodal Transformers Robust to Missing Modality?" In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2022. http://dx.doi.org/10.1109/cvpr52688.2022.01764.


Reports on the topic "Multimodal Transformers"

1

Glushko, E. Ya, and A. N. Stepanyuk. The multimode island kind photonic crystal resonator: states classification. SME Burlaka, 2017. http://dx.doi.org/10.31812/0564/1561.

Abstract:
In this work, we consider a new calculation method for solving the eigenvalue problem for the electromagnetic field in finite 2D structures, including the distribution of modes through the system. The field amplitude distribution is valuable when the signal energy inside the system must be transformed in the most effective way. The method proposed for finite resonators operates with open boundary conditions, which are important for accounting for the non-periodicity of the electromagnetic field in a finite system.
