Journal articles on the topic 'Multi-modal studies'


Consult the top 50 journal articles for your research on the topic 'Multi-modal studies.'


You can also download the full text of each academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Wang, Yang. "Survey on Deep Multi-modal Data Analytics: Collaboration, Rivalry, and Fusion." ACM Transactions on Multimedia Computing, Communications, and Applications 17, no. 1s (March 31, 2021): 1–25. http://dx.doi.org/10.1145/3408317.

Abstract:
With the development of web technology, multi-modal or multi-view data has surged as a major stream of big data, where each modality/view encodes an individual property of the data objects. Often, different modalities are complementary to each other, which has motivated considerable research attention on fusing multi-modal feature spaces to comprehensively characterize the data objects. Most existing state-of-the-art methods focus on how to fuse the energy or information from multi-modal spaces to deliver performance superior to their single-modal counterparts. Recently, deep neural networks have proven to be a powerful architecture for capturing the nonlinear distribution of high-dimensional multimedia data, and this naturally extends to multi-modal data. Substantial empirical studies have demonstrated the advantages of deep multi-modal methods, which can essentially deepen the fusion of multi-modal deep feature spaces. In this article, we provide a substantial overview of the state of the art in multi-modal data analytics, from shallow to deep spaces. Throughout this survey, we further indicate that the critical components of this field are collaboration, adversarial competition, and fusion over multi-modal spaces. Finally, we share our viewpoints regarding some future directions in this field.
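To make the fusion idea concrete for readers, here is a minimal late-fusion sketch in Python/PyTorch. It is an illustrative example under assumed dimensions (512-d and 300-d inputs, 256-d output), not the survey's method; the survey also covers collaborative and adversarial schemes.

import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    # Fuses two modality embeddings with a learned, per-dimension gate.
    def __init__(self, dim_a, dim_b, dim_out):
        super().__init__()
        self.proj_a = nn.Linear(dim_a, dim_out)      # project modality A
        self.proj_b = nn.Linear(dim_b, dim_out)      # project modality B
        self.gate = nn.Linear(2 * dim_out, dim_out)  # learned mixing gate

    def forward(self, xa, xb):
        ha = torch.tanh(self.proj_a(xa))
        hb = torch.tanh(self.proj_b(xb))
        g = torch.sigmoid(self.gate(torch.cat([ha, hb], dim=-1)))
        return g * ha + (1 - g) * hb                 # convex combination per dimension

fused = GatedFusion(512, 300, 256)(torch.randn(8, 512), torch.randn(8, 300))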
2

Bianchi, Matteo, Robert Haschke, Gereon Büscher, Simone Ciotti, Nicola Carbonaro, and Alessandro Tognetti. "A Multi-Modal Sensing Glove for Human Manual-Interaction Studies." Electronics 5, no. 4 (July 20, 2016): 42. http://dx.doi.org/10.3390/electronics5030042.

3

James, T. W., and I. Gauthier. "fMRI studies of multi-modal semantic knowledge using artificial concepts." Journal of Vision 3, no. 9 (March 16, 2010): 197. http://dx.doi.org/10.1167/3.9.197.

4

Zhang, Dong, Xincheng Ju, Wei Zhang, Junhui Li, Shoushan Li, Qiaoming Zhu, and Guodong Zhou. "Multi-modal Multi-label Emotion Recognition with Heterogeneous Hierarchical Message Passing." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 16 (May 18, 2021): 14338–46. http://dx.doi.org/10.1609/aaai.v35i16.17686.

Abstract:
As an important research issue in the affective computing community, multi-modal emotion recognition has become a hot topic in the last few years. However, almost all existing studies perform multiple binary classifications, one per emotion, with a focus on complete time-series data. In this paper, we focus on multi-modal emotion recognition in a multi-label scenario, considering not only the label-to-label dependency but also the feature-to-label and modality-to-label dependencies. In particular, we propose a heterogeneous hierarchical message passing network to effectively model the above dependencies. Furthermore, we propose a new multi-modal multi-label emotion dataset based on partial time-series content to show the strong generalization of our model. Detailed evaluation demonstrates the effectiveness of our approach.
5

Irfan, Bahar, Michael Garcia Ortiz, Natalia Lyubova, and Tony Belpaeme. "Multi-modal Open World User Identification." ACM Transactions on Human-Robot Interaction 11, no. 1 (March 31, 2022): 1–50. http://dx.doi.org/10.1145/3477963.

Abstract:
User identification is an essential step in creating a personalised long-term interaction with robots. This requires learning the users continuously and incrementally, possibly starting from a state without any known user. In this article, we describe a multi-modal incremental Bayesian network with online learning, which is the first method that can be applied in such scenarios. Face recognition is used as the primary biometric, and it is combined with ancillary information, such as gender, age, height, and time of interaction to improve the recognition. The Multi-modal Long-term User Recognition Dataset is generated to simulate various human-robot interaction (HRI) scenarios and evaluate our approach in comparison to face recognition, soft biometrics, and a state-of-the-art open world recognition method (Extreme Value Machine). The results show that the proposed methods significantly outperform the baselines, with an increase in the identification rate up to 47.9% in open-set and closed-set scenarios, and a significant decrease in long-term recognition performance loss. The proposed models generalise well to new users, provide stability, improve over time, and decrease the bias of face recognition. The models were applied in HRI studies for user recognition, personalised rehabilitation, and customer-oriented service, which showed that they are suitable for long-term HRI in the real world.
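As a concrete illustration of the fusion step such a system needs, here is a hedged naive-Bayes sketch in Python that combines per-modality likelihoods over known users and applies an open-set rejection threshold. The likelihood values, the threshold, and the flat prior are illustrative assumptions, not the paper's incremental Bayesian network.

import numpy as np

def identify(likelihoods, prior, unknown_threshold=0.6):
    # likelihoods: (n_modalities, n_users) array of P(observation | user).
    post = prior * np.prod(likelihoods, axis=0)   # naive-Bayes fusion
    post /= post.sum()
    best = int(np.argmax(post))
    # Open-set rule: report "unknown" (None) if the winner is not confident.
    return best if post[best] >= unknown_threshold else None

face = np.array([0.70, 0.20, 0.10])   # e.g., face-recognition scores
soft = np.array([0.50, 0.30, 0.20])   # e.g., soft biometrics (gender, age, height)
print(identify(np.stack([face, soft]), prior=np.ones(3) / 3))   # -> 0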
6

Sezerer, Erhan, and Selma Tekir. "Incorporating Concreteness in Multi-Modal Language Models with Curriculum Learning." Applied Sciences 11, no. 17 (September 6, 2021): 8241. http://dx.doi.org/10.3390/app11178241.

Abstract:
Over the last few years, there has been an increase in studies that consider experiential (visual) information by building multi-modal language models and representations. Several studies have shown that language acquisition in humans starts with learning concrete concepts through images and then continues with learning abstract ideas through text. In this work, a curriculum learning method is used to teach the model concrete/abstract concepts through images and their corresponding captions to accomplish multi-modal language modeling/representation. We use BERT and ResNet-152 models on each modality and combine them using attentive pooling to perform pre-training on a newly constructed dataset, collected from Wikimedia Commons based on concrete/abstract words. To show the performance of the proposed model, downstream tasks and ablation studies are performed. The contribution of this work is two-fold: a new dataset is constructed from Wikimedia Commons based on concrete/abstract words, and a new multi-modal pre-training approach based on curriculum learning is proposed. The results show that the proposed multi-modal pre-training approach contributes to the success of the model.
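For readers unfamiliar with attentive pooling, the following Python/PyTorch sketch shows one common formulation, a learned softmax weighting over stacked modality vectors; the 768-dimensional features and module layout are illustrative assumptions, not the paper's exact architecture.

import torch
import torch.nn as nn

class AttentivePool(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)        # one attention score per modality

    def forward(self, feats):                 # feats: (batch, n_modalities, dim)
        w = torch.softmax(self.score(feats), dim=1)
        return (w * feats).sum(dim=1)         # attention-weighted sum

text_feat = torch.randn(4, 768)   # e.g., a BERT sentence vector
img_feat = torch.randn(4, 768)    # e.g., a projected ResNet-152 vector
pooled = AttentivePool(768)(torch.stack([text_feat, img_feat], dim=1))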
7

Blinowska, Katarzyna, Gernot Müller-Putz, Vera Kaiser, Laura Astolfi, Katrien Vanderperren, Sabine Van Huffel, and Louis Lemieux. "Multimodal Imaging of Human Brain Activity: Rational, Biophysical Aspects and Modes of Integration." Computational Intelligence and Neuroscience 2009 (2009): 1–10. http://dx.doi.org/10.1155/2009/813607.

Abstract:
Until relatively recently the vast majority of imaging and electrophysiological studies of human brain activity have relied on single-modality measurements usually correlated with readily observable or experimentally modified behavioural or brain state patterns. Multi-modal imaging is the concept of bringing together observations or measurements from different instruments. We discuss the aims of multi-modal imaging and the ways in which it can be accomplished using representative applications. Given the importance of haemodynamic and electrophysiological signals in current multi-modal imaging applications, we also review some of the basic physiology relevant to understanding their relationship.
8

Cheetham, Dominic. "Multi-modal language input: A learned superadditive effect." Applied Linguistics Review 10, no. 2 (May 26, 2019): 179–200. http://dx.doi.org/10.1515/applirev-2017-0036.

Abstract:
A review of psychological and language acquisition research into seeing faces while listening, seeing gesture while listening, illustrated text, reading while listening, and same-language subtitled video confirms that bi-modal input has a consistently positive effect on language learning across a variety of input types. This effect is normally discussed using a simple additive model, in which bi-modal input increases the total amount of data and adds redundancy to duplicated input, thus increasing comprehension and then learning. Parallel studies in neuroscience suggest that bi-modal integration is a general effect using common brain areas and following common neural paths. Neuroscience also shows that bi-modal effects are more complex than simple addition, showing early integration of inputs, a learning/developmental effect, and a superadditive effect for integrated bi-modal input. The different bodies of research produce a revised model of bi-modal input as a learned, active system. The implications for language learning are that bi- or multi-modal input can powerfully enhance language learning and that the learning benefits of such input will increase alongside the development of neurological integration of the inputs.
9

Gunaydin, Kadir, Ahmet Yavuz, and Aykut Tamer. "Free Vibration Characteristics of Multi-Material Lattice Structures." Vibration 6, no. 1 (January 16, 2023): 82–101. http://dx.doi.org/10.3390/vibration6010007.

Abstract:
This paper presents a modal analysis of honeycomb and re-entrant lattice structures to understand the change in natural frequencies when a multi-material configuration is implemented. For this purpose, parallel nylon ligaments within re-entrant and honeycomb lattice structures are replaced with chopped and continuous carbon fibre to constitute multi-material lattice configurations. For each set, the first five natural frequencies were compared using detailed finite element models. For each configuration, three different boundary conditions were considered: free–free, and clamping at the two sides that are parallel or perpendicular to the vertical parts of the structure. The comparison of the natural frequencies was based on mode-shape matching using the modal assurance criterion to identify the corresponding modes of the different configurations. The results showed that the natural frequencies of the multi-material configurations increase by 4% to 18%, depending on the configuration and material.
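The modal assurance criterion (MAC) used here for mode-shape matching has a standard closed form, MAC(phi_i, phi_j) = |phi_i^T phi_j|^2 / ((phi_i^T phi_i)(phi_j^T phi_j)). A minimal NumPy sketch follows, with illustrative mode-shape vectors rather than the paper's finite-element results.

import numpy as np

def mac(phi_i, phi_j):
    # 1.0 means identical shapes (up to scaling); 0.0 means orthogonal shapes.
    num = np.abs(phi_i.conj() @ phi_j) ** 2
    den = (phi_i.conj() @ phi_i).real * (phi_j.conj() @ phi_j).real
    return float(num / den)

phi_a = np.array([1.0, 0.8, 0.3, -0.2])   # illustrative mode shapes
phi_b = np.array([0.9, 0.7, 0.4, -0.1])
print(mac(phi_a, phi_b))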
10

Harvie, Jen. "“A Multi-modal and Durational Praxis of Decolonization”: Performance Studies in Canada." Canadian Theatre Review 176 (September 2018): 115–17. http://dx.doi.org/10.3138/ctr.176.023.

11

Liu, Hao, Jindong Han, Yanjie Fu, Jingbo Zhou, Xinjiang Lu, and Hui Xiong. "Multi-modal transportation recommendation with unified route representation learning." Proceedings of the VLDB Endowment 14, no. 3 (November 2020): 342–50. http://dx.doi.org/10.14778/3430915.3430924.

Abstract:
Multi-modal transportation recommendation aims to provide the most appropriate travel route with various transportation modes according to certain criteria. After analyzing large-scale navigation data, we find that route representations exhibit two patterns: spatio-temporal autocorrelations within transportation networks and the semantic coherence of route sequences. However, there are few studies that consider both patterns when developing multi-modal transportation systems. To this end, in this paper, we study multi-modal transportation recommendation with unified route representation learning by exploiting both spatio-temporal dependencies in transportation networks and the semantic coherence of historical routes. Specifically, we propose to unify both dynamic graph representation learning and hierarchical multi-task learning for multi-modal transportation recommendations. Along this line, we first transform the multi-modal transportation network into time-dependent multi-view transportation graphs and propose a spatio-temporal graph neural network module to capture the spatial and temporal autocorrelation. Then, we introduce a coherence-aware attentive route representation learning module to project arbitrary-length routes into fixed-length representation vectors, with explicit modeling of route coherence from historical routes. Moreover, we develop a hierarchical multi-task learning module to differentiate route representations for different transport modes, guided by the final recommendation feedback as well as multiple auxiliary tasks equipped in different network layers. Extensive experimental results on two large-scale real-world datasets demonstrate that the proposed system outperforms eight baselines.
12

Sandel, Todd L., and Yusa Wang. "Selling intimacy online: The multi-modal discursive techniques of China’s wanghong." Discourse, Context & Media 47 (June 2022): 100606. http://dx.doi.org/10.1016/j.dcm.2022.100606.

13

Singh, Apoorva, Soumyodeep Dey, Anamitra Singha, and Sriparna Saha. "Sentiment and Emotion-Aware Multi-Modal Complaint Identification." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (June 28, 2022): 12163–71. http://dx.doi.org/10.1609/aaai.v36i11.21476.

Abstract:
The expression of displeasure on a consumer's behalf towards an organization, product, or event is denoted via the speech act known as a complaint. Customers typically post reviews on retail websites and various social media platforms about the products or services they purchase, and the reviews may include complaints. Automatic detection of consumers' complaints about items or services they buy can be critical for organizations and online merchants, since they can use this insight to meet customers' requirements, including handling and addressing the complaints. Previous studies on Complaint Identification (CI) are limited to text. Images posted with the reviews can provide cues that help identify complaints better, emphasizing the importance of incorporating multi-modal inputs into the process. Furthermore, the customer's emotional state significantly impacts the complaint expression, since emotions generally influence any speech act. As a result, the impact of emotion and sentiment on automatic complaint identification must also be investigated. One of the major contributions of this work is the creation of a new dataset: the Complaint, Emotion, and Sentiment Annotated Multi-modal Amazon Reviews Dataset (CESAMARD), a collection of opinionated texts (reviews) and images of products posted on the website of the retail giant Amazon. We present an attention-based, adversarial multi-task deep neural network model for multi-modal complaint detection to demonstrate the utility of the multi-modal dataset. Experimental results indicate that the multi-modal, multi-task complaint identification model outperforms its uni-modal and single-task variants.
14

Shieh, Jyh-Ren, Ching-Yung Lin, Shun-Xuan Wang, and Ja-Ling Wu. "Building Multi-Modal Relational Graphs for Multimedia Retrieval." International Journal of Multimedia Data Engineering and Management 2, no. 2 (April 2011): 19–41. http://dx.doi.org/10.4018/jmdem.2011040102.

Abstract:
The abundance of Web 2.0 social media in various media formats calls for integration that takes into account tags associated with these resources. The authors present a new approach to multi-modal media search, based on novel related-tag graphs, in which a query is a resource in one modality, such as an image, and the results are semantically similar resources in various modalities, for instance text and video. Thus the use of resource tagging enables the use of multi-modal results and multi-modal queries, a marked departure from the traditional text-based search paradigm. Tag relation graphs are built based on multi-partite networks of existing Web 2.0 social media such as Flickr and Wikipedia. These multi-partite linkage networks (contributor-tag, tag-category, and tag-tag) are extracted from Wikipedia to construct relational tag graphs. In fusing these networks, the authors propose incorporating contributor-category networks to model contributor’s specialization; it is shown that this step significantly enhances the accuracy of the inferred relatedness of the term-semantic graphs. Experiments based on 200 TREC-5 ad-hoc topics show that the algorithms outperform existing approaches. In addition, user studies demonstrate the superiority of this visualization system and its usefulness in the real world.
15

Wu, Tianxing, Chaoyu Gao, Lin Li, and Yuxiang Wang. "Leveraging Multi-Modal Information for Cross-Lingual Entity Matching across Knowledge Graphs." Applied Sciences 12, no. 19 (October 8, 2022): 10107. http://dx.doi.org/10.3390/app121910107.

Abstract:
In recent years, the scale of knowledge graphs and the number of entities have grown rapidly. Entity matching across different knowledge graphs has become an urgent problem to be solved for knowledge fusion. With the importance of entity matching being increasingly evident, the use of representation learning technologies to find matched entities has attracted extensive attention due to the computability of vector representations. However, existing studies on representation learning technologies cannot make full use of knowledge graph relevant multi-modal information. In this paper, we propose a new cross-lingual entity matching method (called CLEM) with knowledge graph representation learning on rich multi-modal information. The core is the multi-view intact space learning method to integrate embeddings of multi-modal information for matching entities. Experimental results on cross-lingual datasets show the superiority and competitiveness of our proposed method.
16

Xu, Yining, Yang Song, Dong Sun, Gusztáv Fekete, and Yaodong Gu. "Effect of Multi-Modal Therapies for Kinesiophobia Caused by Musculoskeletal Disorders: A Systematic Review and Meta-Analysis." International Journal of Environmental Research and Public Health 17, no. 24 (December 16, 2020): 9439. http://dx.doi.org/10.3390/ijerph17249439.

Abstract:
This systematic review and meta-analysis aimed to identify the effect of multi-modal therapies that combine physical and psychological therapies for kinesiophobia caused by musculoskeletal disorders, compared with uni-modal therapy of only physical or psychological therapy. The search terms and their logical connectors were as follows: (1) "kinesiophobia" in the title or abstract; and (2) "randomised" OR "randomized" in the title or abstract; not (3) "design" OR "protocol" in the title. They were typed into the databases of Medline (EBSCO), PubMed, and Ovid, following the different input rules of these databases. The eligibility criteria were: (1) adults with musculoskeletal disorders or illness as patients; (2) multi-modal therapies combining physical and psychological therapy as interventions; (3) uni-modal therapy of only physical or psychological therapy as a comparison; (4) the scores of the 17-item version of the Tampa Scale of Kinesiophobia as the outcome; (5) randomized controlled trials as the study design. As a result, 12 studies were included, with a statistically significant pooled effect of 6.99 (95% CI 4.59 to 9.38). Despite a large heterogeneity within studies, multi-modal therapies might be more effective in reducing kinesiophobia than uni-modal therapy of only physical or psychological therapy, both in the total and subgroup analyses. The effect might decrease with age. Moreover, this review's mathematical methods were made feasible by taking the test-retest reliability of the Tampa Scale of Kinesiophobia into consideration.
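For readers who want to reproduce the style of calculation behind a pooled effect such as 6.99 (95% CI 4.59 to 9.38), here is a minimal fixed-effect inverse-variance pooling sketch in Python. The per-study effects and standard errors are made-up illustrative numbers; given the reported heterogeneity, the review's actual synthesis may have used a random-effects model.

import numpy as np

effects = np.array([7.5, 5.2, 8.1])   # per-study mean differences (TSK-17 points); illustrative
ses = np.array([2.0, 1.5, 2.5])       # per-study standard errors; illustrative
w = 1.0 / ses**2                      # inverse-variance weights
pooled = np.sum(w * effects) / np.sum(w)
se_pooled = np.sqrt(1.0 / np.sum(w))
lo, hi = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
print(f"pooled effect {pooled:.2f} (95% CI {lo:.2f} to {hi:.2f})")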
17

Bekele, Mafkereseb Kassahun. "Clouds-Based Collaborative and Multi-Modal Mixed Reality for Virtual Heritage." Heritage 4, no. 3 (July 28, 2021): 1447–59. http://dx.doi.org/10.3390/heritage4030080.

Abstract:
Recent technological advancements in immersive reality technologies have become a focus area in the virtual heritage (VH) domain. In this regard, this paper attempts to design and implement a clouds-based collaborative and multi-modal MR application aimed at enhancing cultural learning in VH. The design and implementation can be adopted by the VH domain for various application themes. The application utilises cloud computing and immersive reality technologies. The use of cloud computing and of collaborative and multi-modal interaction methods is motivated by the following three issues. First, studies show that users' interaction with immersive reality technologies and virtual environments determines their learning outcome and overall experience. Second, studies also demonstrate that collaborative and multi-modal interaction methods enable engagement in immersive reality environments. Third, the integration of immersive reality technologies with traditional museums and cultural heritage sites is getting significant attention in the domain. However, a robust approach, development platforms (frameworks), and easily adopted design and implementation guidelines are not commonly available to the VH community. This paper therefore attempts to achieve two major goals. First, it designs and implements a novel application that integrates cloud computing, immersive reality technology, and VH. Second, it applies the proposed application to enhance cultural learning. From the perspective of cultural learning and user experience, the assumption is that the proposed approach (clouds-based collaborative and multi-modal MR) can enhance cultural learning by (1) establishing a contextual relationship and engagement between users, virtual environments, and cultural context in museums and heritage sites, and (2) enabling collaboration between users.
18

Sun, Yan, Maoxiang Lang, and Danzhu Wang. "Optimization Models and Solution Algorithms for Freight Routing Planning Problem in the Multi-Modal Transportation Networks: A Review of the State-of-the-Art." Open Civil Engineering Journal 9, no. 1 (September 17, 2015): 714–23. http://dx.doi.org/10.2174/1874149501509010714.

Abstract:
With the remarkable development of international trade, global commodity circulation has grown significantly. To accomplish commodity circulation among various regions and countries, multi-modal transportation schemes have been widely adopted by a large number of companies. Meanwhile, according to the relevant statistics, international logistics costs reach up to approximately 30-50% of the total production cost of the companies. Lowering transportation costs has thus become one of the most important ways for a company to raise profits and maintain competitiveness in the global market. Consequently, how to optimize freight route selection to move commodities through the multi-modal transportation network has gained great attention from both the decision makers of the companies and the multi-modal transport operators. In this study, we present a systematic review of the multi-modal transportation freight routing planning problem from the aspects of model formulation and algorithm design. The following contents are covered in this review: (1) distinguishing the formulation characteristics of various optimization models; (2) identifying the optimization models in recent studies according to their formulation characteristics; and (3) discussing the solution approaches that have been developed to solve the optimization models, especially the heuristic algorithms.
19

Wu, Cheng-Lin, Hsun-Ping Hsieh, Jiawei Jiang, Yi-Chieh Yang, Chris Shei, and Yu-Wen Chen. "MUFFLE: Multi-Modal Fake News Influence Estimator on Twitter." Applied Sciences 12, no. 1 (January 4, 2022): 453. http://dx.doi.org/10.3390/app12010453.

Abstract:
To alleviate the impact of fake news on our society, predicting the popularity of fake news posts on social media is a crucial problem worthy of study. However, most related studies on fake news emphasize detection only. In this paper, we focus on the issue of fake news influence prediction, i.e., inferring how popular a fake news post might become on social platforms. To achieve our goal, we propose a comprehensive framework, MUFFLE, which captures multi-modal dynamics by encoding the representation of news-related social networks, user characteristics, and content in text. The attention mechanism developed in the model can provide explainability for social or psychological analysis. To examine the effectiveness of MUFFLE, we conducted extensive experiments on real-world datasets. The experimental results show that our proposed method outperforms both state-of-the-art methods of popularity prediction and machine-based baselines in top-k NDCG and hit rate. Through the experiments, we also analyze the feature importance for predicting fake news influence via the explainability provided by MUFFLE.
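The top-k NDCG and hit rate used for evaluation are standard ranking metrics; a minimal Python sketch with binary relevance labels follows (illustrative, not MUFFLE's evaluation code).

import numpy as np

def ndcg_at_k(relevance, k):
    # Discounted cumulative gain of the ranking vs. the ideal ranking.
    rel = np.asarray(relevance, dtype=float)[:k]
    dcg = np.sum(rel / np.log2(np.arange(2, rel.size + 2)))
    ideal = np.sort(np.asarray(relevance, dtype=float))[::-1][:k]
    idcg = np.sum(ideal / np.log2(np.arange(2, ideal.size + 2)))
    return dcg / idcg if idcg > 0 else 0.0

def hit_rate_at_k(relevance, k):
    # 1.0 if any relevant item appears in the top k, else 0.0.
    return float(np.any(np.asarray(relevance)[:k]))

print(ndcg_at_k([0, 1, 1, 0, 1], k=3), hit_rate_at_k([0, 1, 1, 0, 1], k=3))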
20

Dariush, Behzad, Hooshang Hemami, and Mohamad Parnianpour. "Multi-Modal Analysis of Human Motion From External Measurements." Journal of Dynamic Systems, Measurement, and Control 123, no. 2 (February 1, 2001): 272–78. http://dx.doi.org/10.1115/1.1370375.

Abstract:
The “analysis” or “inverse dynamics” problem in human motion studies assumes knowledge of the motion of the dynamical system in various forms and/or measurements of ground reaction forces to determine the applied forces and moments at the joints. Conceptually, methods of attacking such problems are well developed, and satisfactory solutions have been obtained when the input signals are noise free and the dynamic model is perfect. In this ideal case, an inverse solution exists, is unique, and depends continuously on the initial data. However, the inverse solution may require the calculation of higher-order derivatives of experimental observations contaminated by noise—a notoriously difficult problem. The byproduct of errors due to numerical differentiation is grossly erroneous joint force and moment calculations. This paper provides a framework for analyzing human motion under different sensing conditions in a manner that avoids or minimizes the number of derivative computations. In particular, two sensing modalities are considered: (1) image-based and (2) multi-modal sensing, combining imaging, force plates, and accelerometry.
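The noise-amplification problem the authors avoid is easy to demonstrate: each numerical differentiation of a sampled signal inflates high-frequency noise roughly by a factor of 1/Δt. A short NumPy sketch follows; the sampling rate and noise level are illustrative assumptions.

import numpy as np

dt = 0.01                                  # assume 100 Hz motion capture
t = np.arange(0.0, 2.0, dt)
pos = np.sin(2 * np.pi * t) + 0.001 * np.random.randn(t.size)  # tiny position noise
vel = np.gradient(pos, dt)                 # first derivative (central differences)
acc = np.gradient(vel, dt)                 # second derivative
true_acc = -(2 * np.pi) ** 2 * np.sin(2 * np.pi * t)
print("RMS acceleration error:", np.sqrt(np.mean((acc - true_acc) ** 2)))
# A 0.001-amplitude position noise typically yields an acceleration error of
# order 0.001 / dt**2 = 10, a large fraction of the true amplitude (~39).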
21

Choi, Sanghyuk Roy, and Minhyeok Lee. "Estimating the Prognosis of Low-Grade Glioma with Gene Attention Using Multi-Omics and Multi-Modal Schemes." Biology 11, no. 10 (October 5, 2022): 1462. http://dx.doi.org/10.3390/biology11101462.

Abstract:
The prognosis estimation of low-grade glioma (LGG) patients with deep learning models using gene expression data has been extensively studied in recent years. However, the deep learning models used in these studies do not utilize the latest deep learning techniques, such as residual learning and ensemble learning. To address this limitation, in this study, a deep learning model using multi-omics and multi-modal schemes, namely the Multi-Prognosis Estimation Network (Multi-PEN), is proposed. When using Multi-PEN, gene attention layers are employed for each datatype, including mRNA and miRNA, thereby allowing us to identify prognostic genes. Additionally, recent developments in deep learning, such as residual learning and layer normalization, are utilized. As a result, Multi-PEN demonstrates competitive performance compared to conventional models for prognosis estimation. Furthermore, the most significant prognostic mRNA and miRNA were identified using the attention layers in Multi-PEN. For instance, MYBL1 was identified as the most significant prognostic mRNA. Such a result accords with the findings in existing studies that have demonstrated that MYBL1 regulates cell survival, proliferation, and differentiation. Additionally, hsa-mir-421 was identified as the most significant prognostic miRNA, and it has been extensively reported that hsa-mir-421 is highly associated with various cancers. These results indicate that the estimations of Multi-PEN are valid and reliable and showcase Multi-PEN’s capacity to present hypotheses regarding prognostic mRNAs and miRNAs.
22

Goeller, Adrien, Jean-Luc Dion, Ronan Le Breton, and Thierry Soriano. "Kinematic SAMI: A New Real-Time Multi-Sensor Data Assimilation Strategy for Nonlinear Modal Identification." Mechanics & Industry 21, no. 4 (2020): 413. http://dx.doi.org/10.1051/meca/2020035.

Abstract:
In many engineering applications, the vibration analysis of a structure requires the set-up of a large number of sensors. These studies are mostly performed in post-processing and based on linear modal analysis. However, many studied devices show that modal parameters depend on the vibration level (nonlinearities), and measurements are performed with sensors, such as accelerometers, that modify the dynamics of the device. This work proposes a significant evolution of modal testing based on the real-time identification of nonlinear parameters (natural frequencies and damping) tracked with a linear modal basis. This method, called Kinematic-SAMI (multiSensors Assimilation Modal Identification), is assessed first on a numerical case with known nonlinearities and second in the framework of a classical cantilever beam with a contactless measurement technique (high-speed, high-resolution cameras). Finally, the efficiency and the limits of the method are discussed.
23

Bekele, Mafkereseb Kassahun, Erik Champion, David A. McMeekin, and Hafizur Rahaman. "The Influence of Collaborative and Multi-Modal Mixed Reality: Cultural Learning in Virtual Heritage." Multimodal Technologies and Interaction 5, no. 12 (December 5, 2021): 79. http://dx.doi.org/10.3390/mti5120079.

Abstract:
Studies in the virtual heritage (VH) domain identify collaboration (social interaction), engagement, and a contextual relationship as key elements of interaction design that influence users’ experience and cultural learning in VH applications. The purpose of this study is to validate whether collaboration (social interaction), engaging experience, and a contextual relationship enhance cultural learning in a collaborative and multi-modal mixed reality (MR) heritage environment. To this end, we have designed and implemented a cloud-based collaborative and multi-modal MR application aiming at enhancing user experience and cultural learning in museums. A conceptual model was proposed based on collaboration, engagement, and relationship in the context of MR experience. The MR application was then evaluated at the Western Australian Shipwrecks Museum by experts, archaeologists, and curators from the gallery and the Western Australian Museum. Questionnaire, semi-structured interview, and observation were used to collect data. The results suggest that integrating collaborative and multi-modal interaction methods with MR technology facilitates enhanced cultural learning in VH.
24

Moon, Jucheol, Nelson Hebert Minaya, Nhat Anh Le, Hee-Chan Park, and Sang-Il Choi. "Can Ensemble Deep Learning Identify People by Their Gait Using Data Collected from Multi-Modal Sensors in Their Insole?" Sensors 20, no. 14 (July 18, 2020): 4001. http://dx.doi.org/10.3390/s20144001.

Abstract:
Gait is a characteristic that has been utilized for identifying individuals. As human gait information can now be captured by several types of devices, many studies have proposed biometric identification methods using gait information. As research continues, the identification accuracy of this technology has been improved by gathering information from multi-modal sensors. In past studies, however, gait information was collected using ancillary devices, and the identification accuracy was not high enough for biometric identification. In this study, we propose a deep learning-based biometric model to identify people by their gait information collected through a wearable device, namely an insole. The identification accuracy of the proposed model when utilizing multi-modal sensing is over 99%.
25

Groarke, Leo. "Gilbert as Disrupter." Informal Logic 42, no. 3 (September 7, 2022): 507–20. http://dx.doi.org/10.22329/il.v42i3.7498.

Abstract:
Michael Gilbert’s multi-modal theory of argument challenges earlier accounts of arguing assumed in formal and informal logic. His account of emotional, visceral, and kisceral modes of arguing rejects the assumption that all arguments must be treated as instances of one “logical mode.” This paper compares his alternative modes to other modes proposed by those who have argued for visual, auditory, and other “multimodal” modes of arguing. I conclude that multi-modal and multimodal (without the hyphen) modes are complementary. Collectively, they represent an important attempt to radically expand the scope of informal logic and the argumentation that it studies.
26

Han, Ning, Jingjing Chen, Hao Zhang, Huanwen Wang, and Hao Chen. "Adversarial Multi-Grained Embedding Network for Cross-Modal Text-Video Retrieval." ACM Transactions on Multimedia Computing, Communications, and Applications 18, no. 2 (May 31, 2022): 1–23. http://dx.doi.org/10.1145/3483381.

Abstract:
Cross-modal retrieval between texts and videos has received consistent research interest in the multimedia community. Existing studies follow a trend of learning a joint embedding space to measure the distance between text and video representations. In common practice, video representation is constructed by feeding clips into 3D convolutional neural networks for coarse-grained global visual feature extraction. In addition, several studies have attempted to align the local objects of video with the text. However, these representations share a drawback of neglecting rich fine-grained relation features capturing spatial-temporal object interactions that benefit mapping textual entities in a real-world retrieval system. To tackle this problem, we propose an adversarial multi-grained embedding network (AME-Net), a novel cross-modal retrieval framework that adopts both fine-grained local relation and coarse-grained global features in bridging text-video modalities. Additionally, with the newly proposed visual representation, we also integrate an adversarial learning strategy into AME-Net to further narrow the domain gap between text and video representations. In summary, we contribute AME-Net with an adversarial learning strategy for learning a better joint embedding space, and experimental results on the MSR-VTT and YouCook2 datasets demonstrate that our proposed framework consistently outperforms the state-of-the-art method.
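A common way to learn such a joint embedding space is a bidirectional hinge-based triplet ranking loss over a batch of matched text-video pairs. The sketch below (Python/PyTorch) is a generic formulation with an illustrative margin, not AME-Net's full adversarial multi-grained objective.

import torch
import torch.nn.functional as F

def ranking_loss(text_emb, video_emb, margin=0.2):
    t = F.normalize(text_emb, dim=-1)
    v = F.normalize(video_emb, dim=-1)
    sim = t @ v.T                        # (B, B) cosine similarities
    pos = sim.diag().unsqueeze(1)        # matched pairs sit on the diagonal
    cost_t2v = (margin + sim - pos).clamp(min=0)    # text -> video direction
    cost_v2t = (margin + sim - pos.T).clamp(min=0)  # video -> text direction
    mask = 1.0 - torch.eye(sim.size(0))             # exclude the positives
    return ((cost_t2v + cost_v2t) * mask).sum() / mask.sum()

loss = ranking_loss(torch.randn(16, 256), torch.randn(16, 256))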
27

Johari, Mansour, and Hossein Haghshenas. "Modeling the cordon pricing policy for a multi-modal transportation system." Case Studies on Transport Policy 7, no. 3 (September 2019): 531–39. http://dx.doi.org/10.1016/j.cstp.2019.07.012.

28

Cocks, Naomi, Gary Morgan, and Sotaro Kita. "Iconic gesture and speech integration in younger and older adults." Gesture 11, no. 1 (September 8, 2011): 24–39. http://dx.doi.org/10.1075/gest.11.1.02coc.

Abstract:
This study investigated the impact of age on iconic gesture and speech integration. The performance of a group of older adults (60–76 years) and a group of younger adults (22–30 years) was compared on a task which required the comprehension of information presented in three different conditions: verbal only, gesture only, and verbal and gesture combined. The older adults in the study did not benefit as much from multi-modal input as the younger adults and were more likely to ignore gesture when decoding the multi-modal information.
29

Mu, Ying Na, Lei Shi, and Zhe Zhang. "Studies and Improvements on Modal Pushover Analysis and Application on Bridge." Advanced Materials Research 163-167 (December 2010): 4076–82. http://dx.doi.org/10.4028/www.scientific.net/amr.163-167.4076.

Abstract:
Traditional pushover analysis cannot take the contributions of higher modes into account. To overcome this limitation, a modal pushover analysis (MPA) procedure has been proposed by some researchers, which involves the combination of multi-mode contributions to the response. In this paper, the MPA procedure is improved considerably by considering the changes in seismic response after structural yielding and a new distribution of inertia forces. The method is verified with an example of a bridge structure. It is concluded that the improved part-sectionalized MPA presented in this paper has high accuracy.
30

Yan, Dayun, Alisa Malyavko, Qihui Wang, Kostya (Ken) Ostrikov, Jonathan H. Sherman, and Michael Keidar. "Multi-Modal Biological Destruction by Cold Atmospheric Plasma: Capability and Mechanism." Biomedicines 9, no. 9 (September 18, 2021): 1259. http://dx.doi.org/10.3390/biomedicines9091259.

Abstract:
Cold atmospheric plasma (CAP) is a near-room-temperature, partially ionized gas composed of reactive neutral and charged species. CAP also generates physical factors, including ultraviolet (UV) radiation and thermal and electromagnetic (EM) effects. Studies over the past decade have demonstrated that CAP can effectively induce death in a wide range of cell types, from mammalian to bacterial cells. Viruses can also be inactivated by CAP treatment. The CAP-triggered cell-death types mainly include apoptosis, necrosis, and autophagy-associated cell death. Cell death and virus inactivation triggered by CAP are the foundation of the emerging medical applications of CAP, including cancer therapy, sterilization, and wound healing. Here, we systematically analyze the entire picture of multi-modal biological destruction by CAP treatment and the underlying mechanisms, based on the latest discoveries, particularly the physical effects on cancer cells.
31

Pamart, Anthony, Livio De Luca, and Philippe Véron. "A Metadata Enriched System for the Documentation of Multi-Modal Digital Imaging Surveys." Studies in Digital Heritage 6, no. 1 (June 30, 2022): 1–24. http://dx.doi.org/10.14434/sdh.v6i1.33767.

Abstract:
In the field of Digital Heritage Studies, data provenance has always been an open and challenging issue. As Cultural Heritage objects are unique by definition, the methods, practices, and strategies used to build digital documentation are not homogeneous, universal, or standardized. Metadata is a minimalistic yet powerful form for sourcing and describing a digital document, and is often required or mandatory at an advanced stage of a Digital Heritage project. Our approach is to integrate, from the data capture step onward, meaningful information to document a Digital Heritage asset, as such assets are nowadays composed from multiple sources or multimodal imaging surveys. This article exposes the methodological and technical aspects of the ongoing development of MEMoS, standing for Metadata Enriched Multimodal documentation System. MEMoS aims to contribute to data provenance issues in current multimodal imaging surveys. It explores a way to document CH-oriented capture data sets with a versatile descriptive metadata scheme inspired by the W7 ontological model. In addition, an experiment illustrated by several case studies explores the possibility of integrating those metadata, encoded into 2D barcodes, directly into the captured image set. The article lays the foundation of a three-part methodology, namely describe, encode, and display, toward metadata-enriched documentation of CH objects.
32

Lu, Lyujian, Saad Elbeleidy, Lauren Zoe Baker, and Hua Wang. "Learning Multi-Modal Biomarker Representations via Globally Aligned Longitudinal Enrichments." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 01 (April 3, 2020): 817–24. http://dx.doi.org/10.1609/aaai.v34i01.5426.

Abstract:
Alzheimer's Disease (AD) is a chronic neurodegenerative disease that severely impacts patients' thinking, memory and behavior. To aid automatic AD diagnoses, many longitudinal learning models have been proposed to predict clinical outcomes and/or disease status, which, though, often fail to consider missing temporal phenotypic records of the patients that can convey valuable information of AD progressions. Another challenge in AD studies is how to integrate heterogeneous genotypic and phenotypic biomarkers to improve diagnosis prediction. To cope with these challenges, in this paper we propose a longitudinal multi-modal method to learn enriched genotypic and phenotypic biomarker representations in the format of fixed-length vectors that can simultaneously capture the baseline neuroimaging measurements of the entire dataset and progressive variations of the varied counts of follow-up measurements over time of every participant from different biomarker sources. The learned global and local projections are aligned by a soft constraint and the structured-sparsity norm is used to uncover the multi-modal structure of heterogeneous biomarker measurements. While the proposed objective is clearly motivated to characterize the progressive information of AD developments, it is a nonsmooth objective that is difficult to efficiently optimize in general. Thus, we derive an efficient iterative algorithm, whose convergence is rigorously guaranteed in mathematics. We have conducted extensive experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) data using one genotypic and two phenotypic biomarkers. Empirical results have demonstrated that the learned enriched biomarker representations are more effective in predicting the outcomes of various cognitive assessments. Moreover, our model has successfully identified disease-relevant biomarkers supported by existing medical findings that additionally warrant the correctness of our method from the clinical perspective.
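The structured-sparsity norm mentioned above is typically the l2,1 norm: the sum over rows of the row-wise l2 norms, which drives entire rows (i.e., whole features or modalities) to zero when used as a regularizer. A short NumPy illustration:

import numpy as np

def l21_norm(W):
    # Sum of the l2 norms of the rows of W.
    return float(np.sum(np.sqrt(np.sum(W ** 2, axis=1))))

W = np.array([[0.0, 0.0, 0.0],    # a zeroed-out (unselected) feature row
              [1.0, -2.0, 2.0]])  # a retained feature row
print(l21_norm(W))   # 3.0: only the non-zero row contributes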
33

Choi, Sang-Il, Jucheol Moon, Hee-Chan Park, and Sang Tae Choi. "User Identification from Gait Analysis Using Multi-Modal Sensors in Smart Insole." Sensors 19, no. 17 (August 31, 2019): 3785. http://dx.doi.org/10.3390/s19173785.

Abstract:
Recent studies indicate that individuals can be identified by their gait pattern. A number of sensors including vision, acceleration, and pressure have been used to capture humans’ gait patterns, and a number of methods have been developed to recognize individuals from their gait pattern data. This study proposes a novel method of identifying individuals using null-space linear discriminant analysis on humans’ gait pattern data. The gait pattern data consists of time series pressure and acceleration data measured from multi-modal sensors in a smart insole used while walking. We compare the identification accuracies from three sensing modalities, which are acceleration, pressure, and both in combination. Experimental results show that the proposed multi-modal features identify 14 participants with high accuracy over 95% from their gait pattern data of walking.
34

Cooke, Mike, Nick Watkins, and Corrine Moy. "A Hybrid Online and Offline Approach to Market Measurement Studies." International Journal of Market Research 51, no. 1 (January 2009): 1–16. http://dx.doi.org/10.1177/147078530905100101.

Abstract:
This paper presents a case study of how GfK NOP is moving one of the UK's major market measurement studies online. In this case study we share our learning and illustrate, with empirical data, the limits and possibilities that panel-based research offers in this most demanding arena for online research. Our conclusion is that, in this instance, it is inappropriate to replace the traditional face-to-face methodology with a wholly online solution, but that, instead, a multi-modal approach that combines face-to-face with online interviewing is the way forward.
35

Lambon Ralph, Matthew A. "Neurocognitive insights on conceptual knowledge and its breakdown." Philosophical Transactions of the Royal Society B: Biological Sciences 369, no. 1634 (January 19, 2014): 20120392. http://dx.doi.org/10.1098/rstb.2012.0392.

Abstract:
Conceptual knowledge reflects our multi-modal ‘semantic database’. As such, it brings meaning to all verbal and non-verbal stimuli, is the foundation for verbal and non-verbal expression and provides the basis for computing appropriate semantic generalizations. Multiple disciplines (e.g. philosophy, cognitive science, cognitive neuroscience and behavioural neurology) have striven to answer the questions of how concepts are formed, how they are represented in the brain and how they break down differentially in various neurological patient groups. A long-standing and prominent hypothesis is that concepts are distilled from our multi-modal verbal and non-verbal experience such that sensation in one modality (e.g. the smell of an apple) not only activates the intramodality long-term knowledge, but also reactivates the relevant intermodality information about that item (i.e. all the things you know about and can do with an apple). This multi-modal view of conceptualization fits with contemporary functional neuroimaging studies that observe systematic variation of activation across different modality-specific association regions dependent on the conceptual category or type of information. A second vein of interdisciplinary work argues, however, that even a smorgasbord of multi-modal features is insufficient to build coherent, generalizable concepts. Instead, an additional process or intermediate representation is required. Recent multidisciplinary work, which combines neuropsychology, neuroscience and computational models, offers evidence that conceptualization follows from a combination of modality-specific sources of information plus a transmodal ‘hub’ representational system that is supported primarily by regions within the anterior temporal lobe, bilaterally.
36

Li, X. M., W. X. Wang, S. J. Tang, J. Z. Xia, Z. G. Zhao, Y. Li, Y. Zheng, and R. Z. Guo. "A NEW CLOUD-EDGE-TERMINAL RESOURCES COLLABORATIVE SCHEDULING FRAMEWORK FOR MULTI-LEVEL VISUALIZATION TASKS OF LARGE-SCALE SPATIO-TEMPORAL DATA." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B4-2020 (August 25, 2020): 477–83. http://dx.doi.org/10.5194/isprs-archives-xliii-b4-2020-477-2020.

Abstract:
To address the efficient scheduling problem of multi-modal spatio-temporal data for diverse and highly concurrent visualization applications in a cloud-edge-terminal environment, this paper systematically studies a cloud-edge-terminal integrated scheduling model for multi-level visualization tasks on multi-modal spatio-temporal data. By accurately defining the hierarchical semantic mapping relationship between the diverse visual application requirements of different terminals and scheduling tasks, we propose a multi-level task-driven cloud-edge-terminal multi-granularity storage-computing-rendering resource collaborative scheduling method. Based on the workflow, a flexible allocation strategy for the cloud-edge-terminal scheduling service chain that considers the characteristics of spatio-temporal tasks is constructed. Finally, we established a cloud-edge-terminal scheduling adaptive optimization mechanism based on a service quality evaluation model and developed a prototype system. Experiments were conducted on urban construction and management data; the results show that the new method breaks through the bottleneck of traditional spatio-temporal data visualization scheduling and can provide theoretical and methodological support for the visualization and scheduling of spatio-temporal big data.
37

Belhoussine Drissi, Taoufiq, Bruno Morvan, Mihai Predoi, Jean Louis Izbicki, and Pascal Pareige. "Study of the Transmission of Ultrasonic Guided Wave at the Junction of Two Different Elastic Plates with the Presence of a Defect." Key Engineering Materials 482 (June 2011): 21–29. http://dx.doi.org/10.4028/www.scientific.net/kem.482.21.

Abstract:
We are interested in a right junction of two plates of different materials (aluminum and copper) placed in contact edge to edge. The aim of this study is the interaction of Lamb waves with a defect located at the junction. The reflection and transmission of the fundamental symmetric S0 wave are analyzed. The theoretical reflection and transmission coefficients are obtained by a multi-modal approach based on the orthogonality relations involving the different modes. Using the Finite Element Method (FEM), we estimate the limit value of the ratio between the dimension of the defect and the thickness of the structure for which the multi-modal approach is applicable. Experimental and numerical studies also bring to light the effects of diffraction by the defect.
38

Zheng, Fuzhong, Weipeng Li, Xu Wang, Luyao Wang, Xiong Zhang, and Haisu Zhang. "A Cross-Attention Mechanism Based on Regional-Level Semantic Features of Images for Cross-Modal Text-Image Retrieval in Remote Sensing." Applied Sciences 12, no. 23 (November 29, 2022): 12221. http://dx.doi.org/10.3390/app122312221.

Abstract:
With the rapid development of remote sensing (RS) observation technology over recent years, the high-level semantic association-based cross-modal retrieval of RS images has drawn some attention. However, few existing studies on cross-modal retrieval of RS images have addressed the issue of mutual interference between semantic features of images caused by “multi-scene semantics”. Therefore, we proposed a novel cross-attention (CA) model, called CABIR, based on regional-level semantic features of RS images for cross-modal text-image retrieval. This technique utilizes the CA mechanism to implement cross-modal information interaction and guides the network with textual semantics to allocate weights and filter redundant features for image regions, reducing the effect of irrelevant scene semantics on retrieval. Furthermore, we proposed BERT plus Bi-GRU, a new approach to generating statement-level textual features, and designed an effective temperature control function to steer the CA network toward smooth running. Our experiment suggested that CABIR not only outperforms other state-of-the-art cross-modal image retrieval methods but also demonstrates high generalization ability and stability, with an average recall rate of up to 18.12%, 48.30%, and 55.53% over the datasets RSICD, UCM, and Sydney, respectively. The model proposed in this paper will be able to provide a possible solution to the problem of mutual interference of RS images with “multi-scene semantics” due to complex terrain objects.
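For orientation, cross-attention in this text-to-image-region setting is typically scaled dot-product attention in which textual queries attend to region keys and values. The Python/PyTorch sketch below is a generic single-head version with illustrative dimensions, not CABIR's exact network (which adds textual semantic weighting and a temperature control function).

import torch
import torch.nn.functional as F

def cross_attention(text_feats, region_feats, wq, wk, wv):
    q = text_feats @ wq                    # (n_tokens, d) queries from text
    k = region_feats @ wk                  # (n_regions, d) keys from regions
    v = region_feats @ wv                  # (n_regions, d) values from regions
    attn = F.softmax(q @ k.T / k.shape[-1] ** 0.5, dim=-1)
    return attn @ v                        # text tokens re-expressed via regions

d = 64
text, regions = torch.randn(12, d), torch.randn(36, d)
out = cross_attention(text, regions, *(torch.randn(d, d) for _ in range(3)))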
39

Deng, Lei, Yibiao Huang, Xuejun Liu, and Hui Liu. "Graph2MDA: a multi-modal variational graph embedding model for predicting microbe–drug associations." Bioinformatics 38, no. 4 (November 23, 2021): 1118–25. http://dx.doi.org/10.1093/bioinformatics/btab792.

Abstract:
Motivation: Accumulated clinical studies show that microbes living in humans interact closely with human hosts and get involved in modulating drug efficacy and drug toxicity. Microbes have become novel targets for the development of antibacterial agents. Therefore, screening of microbe–drug associations can greatly benefit drug research and development. With the increase of microbial genomic and pharmacological datasets, we are greatly motivated to develop an effective computational method to identify new microbe–drug associations.
Results: In this article, we proposed a novel method, Graph2MDA, to predict microbe–drug associations by using a variational graph autoencoder (VGAE). We constructed multi-modal attributed graphs based on multiple features of microbes and drugs, such as molecular structures, microbe genetic sequences, and function annotations. Taking the multi-modal attributed graphs as input, VGAE was trained to learn informative and interpretable latent representations of each node and the whole graph, and then a deep neural network classifier was used to predict microbe–drug associations. The hyperparameter analysis and model ablation studies showed the sensitivity and robustness of our model. We evaluated our method on three independent datasets, and the experimental results showed that our proposed method outperformed six existing state-of-the-art methods. We also explored the meaning of the learned latent representations of drugs and found that the drugs show obvious clustering patterns that are significantly consistent with the drug ATC classification. Moreover, we conducted case studies on two microbes and two drugs and found that 75–95% of the predicted associations have been reported in the PubMed literature. Our extensive performance evaluations validated the effectiveness of our proposed method.
Availability and implementation: Source codes and preprocessed data are available at https://github.com/moen-hyb/Graph2MDA.
Supplementary information: Supplementary data are available at Bioinformatics online.
40

Gonçalves, Filipe, Davide Carneiro, José Pêgo, and Paulo Novais. "X3S: A multi-modal approach to monitor and assess stress through human-computer interaction." Computer Science and Information Systems 15, no. 3 (2018): 683–703. http://dx.doi.org/10.2298/csis180115033g.

Abstract:
There have been a variety of research approaches examining the stress issues related to human–computer interaction, including laboratory studies, cross-sectional surveys, longitudinal case studies, and intervention studies. A critical review of these studies indicates that there are important physiological, biochemical, somatic, and psychological indicators of stress that are related to work activities where human–computer interaction occurs. In a medical or biological context, stress is a physical, mental, or emotional factor that causes bodily or mental tension, which can cause or influence the course of many medical conditions, including psychological conditions such as depression and anxiety. In these cases, individuals are under an increasing demand for performance, driving them to be under constant pressure and, consequently, to present variations in their levels of stress. To mitigate this condition, this paper proposes to add a new dimension to human–computer interaction through the development of a distributed multi-modal framework entitled X3S, which aims to monitor and assess the psychological stress of computer users during high-end tasks, in a non-intrusive and non-invasive way, through the analysis of soft-sensor activity (e.g., task performance and human behaviour). The main innovation of this approach is its capacity to validate each stress model trained for each individual through the analysis of cortisol and stress-assessment survey data. Overall, this paper discusses how groups of medical students can be monitored through their interactions with the computer. Its main aim is to provide a stress marker that can be used effectively with large numbers of users and without inconvenience.
APA, Harvard, Vancouver, ISO, and other styles
41

Cano Viktorsson, Carlos. "From Maps to Apps: Tracing the Organizational Responsiveness of an Early Multi-Modal Travel Planning Service." Journal of Urban Technology 22, no. 4 (October 2, 2015): 87–101. http://dx.doi.org/10.1080/10630732.2015.1073902.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

BOGATARKAN, AYSU, and ESRA ERDEM. "Explanation Generation for Multi-Modal Multi-Agent Path Finding with Optimal Resource Utilization using Answer Set Programming." Theory and Practice of Logic Programming 20, no. 6 (September 22, 2020): 974–89. http://dx.doi.org/10.1017/s1471068420000320.

Full text
Abstract:
The multi-agent path finding (MAPF) problem is a combinatorial search problem that aims at finding paths for multiple agents (e.g., robots) in an environment (e.g., an autonomous warehouse) such that no two agents collide with each other, subject to constraints on the lengths of paths. We consider a general version of MAPF, called mMAPF, that involves multi-modal transportation modes (e.g., due to velocity constraints) and the consumption of different types of resources (e.g., batteries). Real-world applications of mMAPF require flexibility (e.g., solving variations of mMAPF) as well as explainability. Our earlier studies on mMAPF focused on the former challenge of flexibility. In this study, we focus on the latter challenge of explainability and introduce a method for generating explanations for queries regarding the feasibility and optimality of solutions, the nonexistence of solutions, and observations about solutions. Our method is based on answer set programming.
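A minimal, self-contained illustration of encoding a MAPF core in ASP is sketched below, using the clingo Python API. The toy graph, agents, horizon and predicate names are assumptions for illustration only; the paper's mMAPF encoding (with transportation modes, resources and explanation generation) is far richer.

import clingo

PROGRAM = """
#const horizon = 2.
time(0..horizon).
node(a;b;c).
edge(a,b;b,a;b,c;c,b;a,c;c,a).
agent(r1;r2).
at(r1,a,0). at(r2,c,0).
% each agent occupies exactly one node per time step
1 { at(A,V,T) : node(V) } 1 :- agent(A), time(T).
% agents move along edges or wait in place
:- at(A,U,T), at(A,V,T+1), U != V, not edge(U,V).
% no two agents on the same node at the same time (vertex collision)
:- at(A1,V,T), at(A2,V,T), A1 < A2.
% no swapping along the same edge (edge collision)
:- at(A1,U,T), at(A1,V,T+1), at(A2,V,T), at(A2,U,T+1), A1 < A2.
% goals: the two agents exchange corners of the triangle
:- not at(r1,c,horizon).
:- not at(r2,a,horizon).
#show at/3.
"""

ctl = clingo.Control(["0"])             # "0" = enumerate all plans
ctl.add("base", [], PROGRAM)
ctl.ground([("base", [])])
ctl.solve(on_model=lambda m: print(m))  # each model is a collision-free plan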
APA, Harvard, Vancouver, ISO, and other styles
43

Bodapati, Jyostna Devi, Veeranjaneyulu Naralasetti, Shaik Nagur Shareef, Saqib Hakak, Muhammad Bilal, Praveen Kumar Reddy Maddikunta, and Ohyun Jo. "Blended Multi-Modal Deep ConvNet Features for Diabetic Retinopathy Severity Prediction." Electronics 9, no. 6 (May 30, 2020): 914. http://dx.doi.org/10.3390/electronics9060914.

Full text
Abstract:
Diabetic Retinopathy (DR) is one of the major causes of visual impairment and blindness across the world. It is usually found in patients who have suffered from diabetes for a long period. The major focus of this work is to derive an optimal representation of retinal images that helps to improve the performance of DR recognition models. To extract this representation, features from multiple pre-trained ConvNet models are blended using the proposed multi-modal fusion module. These final representations are used to train a Deep Neural Network (DNN) for DR identification and severity-level prediction. As each ConvNet extracts different features, fusing them using 1D pooling and cross pooling leads to better representations than features extracted from a single ConvNet. Experimental studies on the benchmark Kaggle APTOS 2019 contest dataset reveal that the model trained on the proposed blended feature representations is superior to existing methods. In addition, we observe that cross-average-pooling-based fusion of features from Xception and VGG16 is the most appropriate for DR recognition. With the proposed model, we achieve an accuracy of 97.41% and a kappa statistic of 94.82 for DR identification, and an accuracy of 81.7% and a kappa statistic of 71.1% for severity-level prediction. Another interesting observation is that a DNN with dropout at the input layer converges more quickly when trained using blended features, compared to the same model trained using uni-modal deep features.
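The fusion idea is straightforward to prototype. The sketch below, assuming a recent TensorFlow/Keras, blends global-average-pooled Xception and VGG16 descriptors with an element-wise ("cross") average before a small DNN head; the projection width, head sizes and omitted per-backbone preprocessing are my assumptions, not the authors' exact configuration.

import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import Xception, VGG16

inp = layers.Input(shape=(299, 299, 3))

# Frozen ImageNet backbones used as fixed feature extractors
# (per-backbone input preprocessing is omitted here for brevity).
xcep = Xception(include_top=False, pooling="avg", weights="imagenet")
vgg = VGG16(include_top=False, pooling="avg", weights="imagenet")
xcep.trainable = False
vgg.trainable = False

fx = xcep(inp)                             # (None, 2048)
fv = vgg(layers.Resizing(224, 224)(inp))   # (None, 512)

# Project the Xception descriptor to the VGG width so the two vectors
# can be blended element-wise (the "cross average pooling" step).
fx = layers.Dense(512)(fx)
blended = layers.Average()([fx, fv])

x = layers.Dropout(0.5)(blended)           # dropout at the head's input
x = layers.Dense(256, activation="relu")(x)
out = layers.Dense(5, activation="softmax")(x)  # 5 severity grades (illustrative)

model = Model(inp, out)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])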
APA, Harvard, Vancouver, ISO, and other styles
44

Daró, Valeria. "Experimental Studies on Memory in Conference Interpretation." Meta 42, no. 4 (September 30, 2002): 622–28. http://dx.doi.org/10.7202/002484ar.

Full text
Abstract:
Several studies in cognitive psychology and neuropsychology have shown that memory is not a unitary function of human cognition, since it comprises several multi-modal systems, which can be mutually independent. This article describes: a) the present state of the art on the functional organization of the most relevant memory systems (working memory and explicit vs. implicit memory systems), and b) what experimental studies have so far revealed about the role of mnestic systems during the process of simultaneous and consecutive interpretation. Since these studies suggest that memory is multifaceted, there cannot and should not be a single, unique way to teach and acquire the techniques and strategies of these two types of conference interpretation, which are sometimes erroneously considered reciprocally complementary.
APA, Harvard, Vancouver, ISO, and other styles
45

Faigenbaum, Avery D., Jie Kang, Nicholas A. Ratamess, Anne C. Farrell, Mina Belfert, Sean Duffy, Cara Jenson, and Jill Bush. "Acute Cardiometabolic Responses to Multi-Modal Integrative Neuromuscular Training in Children." Journal of Functional Morphology and Kinesiology 4, no. 2 (June 24, 2019): 39. http://dx.doi.org/10.3390/jfmk4020039.

Full text
Abstract:
Integrative neuromuscular training (INT) has emerged as an effective strategy for improving health- and skill-related components of physical fitness, yet few studies have explored the cardiometabolic demands of this type of training in children. The aim of this study was to examine the acute cardiometabolic responses to a multi-modal INT protocol and to compare these responses to a bout of moderate-intensity treadmill (TM) walking in children. Participants (n = 14, age 10.7 ± 1.1 years) were tested for peak oxygen uptake (VO2) and peak heart rate (HR) on a maximal TM test and subsequently participated in two experimental conditions on nonconsecutive days: a 12-min INT protocol of six different exercises, each performed twice for 30 s with a 30 s rest interval between sets and exercises, and a 12-min TM protocol of walking at 50% VO2peak. Throughout the INT protocol, mean VO2 and HR increased significantly, from 14.9 ± 3.6 mL∙kg−1∙min−1 (28.2% VO2peak) to 34.0 ± 6.4 mL∙kg−1∙min−1 (64.3% VO2peak) and from 121.1 ± 9.0 bpm (61.0% HRpeak) to 183.5 ± 7.9 bpm (92.4% HRpeak), respectively. While mean VO2 for the entire protocol did not differ between INT and TM, mean VO2 and HR during selected INT exercises and mean HR for the entire INT protocol were significantly higher than during TM (all Ps ≤ 0.05). These findings suggest that INT can pose a moderate-to-vigorous cardiometabolic stimulus in children, and that selected INT exercises can be equally or more metabolically challenging than TM walking.
APA, Harvard, Vancouver, ISO, and other styles
46

Lakshmi, K. "A Multi-Model Based Approach for the Detection of Subtle Structural Damage Considering Environmental Variability." International Journal of Structural Stability and Dynamics 20, no. 03 (February 20, 2020): 2050038. http://dx.doi.org/10.1142/s0219455420500388.

Full text
Abstract:
Minor structural damage, such as incipient cracks, is difficult to detect because it alters the structural stiffness only marginally. It is difficult to extract the features of minor damage from measured time-history responses, which are usually contaminated by measurement noise. Moreover, the effect of environmental/operational variability often misleads the damage-diagnostic process, especially for subtle damage. To tackle these challenges in detecting and locating minor incipient damage, an automated, multi-model, data-driven technique is proposed in this paper. It is based on the fact that subtle damage alters only some structural modes, while the others remain unaltered. Hence, the proposal is to decompose the measured time-history response into its modal components and then reconstruct the signal using only the modal components that carry damage-sensitive features. The reconstructed signal is used in damage diagnosis. An improved version of second-order blind identification, a blind source separation technique, is proposed for the signal decomposition, and a crisp automated algorithm is presented for isolating the modal components with damage-sensitive features. An autoregressive moving average with exogenous input (ARMAX) model, with the cepstral distance as the damage index, is employed for localizing the damage. The proposed multi-model approach is completely automated. Numerical studies have been carried out to demonstrate the effectiveness of the proposed algorithm, and experimental studies have been conducted to confirm the practicality of the proposed technique.
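As a rough illustration of the damage index used in the final step, the sketch below computes a cepstral distance between a baseline response and a test response. The real-cepstrum formulation, the coefficient count and the synthetic signals are illustrative choices; the paper derives the distance from fitted ARMAX models rather than raw spectra.

import numpy as np

def real_cepstrum(x, n_coeff=30):
    """First n_coeff coefficients of the real (power) cepstrum of x."""
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    # A small floor avoids log(0) in near-silent frequency bins.
    log_spec = np.log(spectrum + 1e-12)
    return np.fft.irfft(log_spec)[:n_coeff]

def cepstral_distance(x_ref, x_test, n_coeff=30):
    """Euclidean distance between cepstra; larger values suggest damage."""
    c_ref = real_cepstrum(x_ref, n_coeff)
    c_test = real_cepstrum(x_test, n_coeff)
    # Coefficient 0 tracks overall energy, so it is commonly discarded.
    return np.linalg.norm(c_ref[1:] - c_test[1:])

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2048)
baseline = np.sin(2 * np.pi * 50 * t) + 0.05 * rng.standard_normal(t.size)
# A small downward frequency shift mimics a stiffness reduction.
shifted = np.sin(2 * np.pi * 48 * t) + 0.05 * rng.standard_normal(t.size)

print(cepstral_distance(baseline, baseline + 0.05 * rng.standard_normal(t.size)))
print(cepstral_distance(baseline, shifted))  # expected to be noticeably larger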
APA, Harvard, Vancouver, ISO, and other styles
47

Larsen, Amy, Sarah Cox, Christopher Bridge, Deanna Horvath, and Michael Emmerling. "Short, multi-modal, pre-commencement transition programs for a diverse STEM cohort." Journal of University Teaching and Learning Practice 18, no. 3 (July 1, 2021): 49–65. http://dx.doi.org/10.53761/1.18.3.5.

Full text
Abstract:
A ‘quantum leap’ (Kift, 2015) in our understanding of the transition to university studies has brought about a reimagining of the role of transition programs: from attempting to remediate deficiencies in ‘underprepared’ students to using engagement with the curriculum to instil success-oriented behaviours and attitudes in them. In particular, commencers from non-traditional backgrounds are confronted by greater sociocultural incongruities when starting higher education (Devlin, 2013) and face greater challenges in developing their new student identity. While affective change of this kind may necessarily be long-term in nature, semester- or year-long ‘foundation’ or ‘bridging’ programs create barriers of their own in terms of time, cost and stigma. This study provides evidence that significant results can be achieved with short, accessible, manageable, pre-commencement transition programs that are situated in the curriculum but also focussed on nurturing, in at-risk students, the behaviours and attitudes associated with a greater likelihood of success and retention.
APA, Harvard, Vancouver, ISO, and other styles
48

Donaldson, C. C. Stuart, Christopher J. Rozell, P. Doneen Moran, and Erin N. Harlow. "Multi-Modal Assessment and Treatment of Chronic Headache: The First in a Series of Case Studies." Biofeedback 40, no. 2 (June 1, 2012): 67–74. http://dx.doi.org/10.5298/1081-5937-40.2.8.

Full text
Abstract:
The treatment of headache is challenging, and is made more so by the fragmentation of medicine into clinical specialties. Physiologically, migraine headache is a systemic event, affecting multiple neurophysiological systems. Treatment often calls for a multidisciplinary approach. Research supports the efficacy of both general biofeedback and, to a lesser extent, neurofeedback in the treatment of headache, including migraine. Abnormal electrophysiological patterns, detectable with quantitative EEG, are frequently found in patients with migraine, especially after closed head injury. Research has also shown the frequent presence of trigger point activity in several areas of the musculature of the head and neck in headache patients, including those with migraine. Finally, the role of stress has been reported in the onset and exacerbation of headache pain. The authors provide a case study showing the application of quantitative EEG, surface electromyography (SEMG), and psychophysiological stress profiling in the assessment of a 56-year-old female with closed head injury and migraine headache. The treatment included myofascial massage with trigger point release, SEMG training to balance asymmetric muscle tension patterns, and a stress management program, including guided visualization and breath training. This comprehensive intervention produced a significant reduction in headache symptoms and an improvement in work productivity.
APA, Harvard, Vancouver, ISO, and other styles
49

Aiello, Marco, Carlo Cavaliere, Dario Fiorenza, Andrea Duggento, Luca Passamonti, and Nicola Toschi. "Neuroinflammation in Neurodegenerative Diseases: Current Multi-modal Imaging Studies and Future Opportunities for Hybrid PET/MRI." Neuroscience 403 (April 2019): 125–35. http://dx.doi.org/10.1016/j.neuroscience.2018.07.033.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Andreassen, Ole, Josselin Houenou, Edouard Duchesnay, Pauline Favre, Melissa Pauling, Neeltje van Haren, Rachel Brouwer, Sonja de Zwarte, Paul Thompson, and Christopher Ching. "121. Biological Insight From Large-Scale Studies of Bipolar Disorder With Multi-Modal Imaging and Genomics." Biological Psychiatry 83, no. 9 (May 2018): S49—S50. http://dx.doi.org/10.1016/j.biopsych.2018.02.139.

Full text
APA, Harvard, Vancouver, ISO, and other styles