
Journal articles on the topic 'Multi-modal approach'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Multi-modal approach.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1. Winfield, M. J., A. Basden, and I. Cresswell. "Knowledge elicitation using a multi-modal approach." World Futures 47, no. 1 (September 1996): 93–101. http://dx.doi.org/10.1080/02604027.1996.9972589.

2. Bauman, N. James, and Christopher M. Carr. "A Multi-Modal Approach to Trauma Recovery." Psychotherapy Patient 10, no. 3-4 (May 14, 1998): 145–60. http://dx.doi.org/10.1300/j358v10n03_12.

3. Huang, Zen-Kwei, Sheng-De Wang, and Te-Son Kuo. "Multi-modal parameter identification by automata approach." Journal of the Chinese Institute of Engineers 16, no. 5 (July 1993): 603–13. http://dx.doi.org/10.1080/02533839.1993.9677534.

4. Fuchs, M., M. Wagner, H. A. Wischmann, Th Köhler, and A. Theißen. "A Multi-Modal Approach to Source Reconstruction." NeuroImage 7, no. 4 (May 1998): S683. http://dx.doi.org/10.1016/s1053-8119(18)31516-7.

5. Mailer, Markus. "A Multi-modal Approach for Highway Assessment." Transportation Research Procedia 15 (2016): 113–21. http://dx.doi.org/10.1016/j.trpro.2016.06.010.

6. Jennekens-Schinkel, A. "The human memory: A multi-modal approach." Clinical Neurology and Neurosurgery 97, no. 4 (November 1995): 358. http://dx.doi.org/10.1016/0303-8467(95)90014-4.

7. Golec, Thomas S. "The multi-modal approach to dental implants." Journal of Oral and Maxillofacial Surgery 49, no. 8 (August 1991): 41–42. http://dx.doi.org/10.1016/0278-2391(91)90541-s.

8. Mewaldt, Steven P. "The human memory: A multi-modal approach." Journal of Chemical Neuroanatomy 11, no. 1 (July 1996): 78–79. http://dx.doi.org/10.1016/0891-0618(96)84166-4.

9. Kleindienst, Jan, Ladislav Seredi, Pekka Kapanen, and Janne Bergman. "Loosely-coupled approach towards multi-modal browsing." Universal Access in the Information Society 2, no. 2 (June 1, 2003): 173–88. http://dx.doi.org/10.1007/s10209-003-0047-9.

10. Kiruthika, M., and S. Sukumaran. "An Improved Multi-Modal Approach for Feature Extraction in Social Media Image Retrieval." Journal of Advanced Research in Dynamical and Control Systems 11, no. 10-SPECIAL ISSUE (October 31, 2019): 1447–56. http://dx.doi.org/10.5373/jardcs/v11sp10/20192990.
11. Cao, Buqing, Weishi Zhong, Xiang Xie, Lulu Zhang, and Yueying Qing. "A Multi-modal Feature Fusion-based Approach for Mobile Application Classification and Recommendation." 網際網路技術學刊 (Journal of Internet Technology) 23, no. 6 (November 2022): 1417–27. http://dx.doi.org/10.53106/160792642022112306023.

Abstract: With the rapid growth in the number and variety of mobile applications, it has become challenging to accurately classify and recommend mobile applications according to users' individual requirements. Existing mobile application classification and recommendation methods, for one thing, do not take into account the correlation between large-scale data and the model; for another, they do not fully exploit the multi-modal, fine-grained high-order and low-order interaction features in mobile applications. To tackle this problem, we propose a mobile application classification and recommendation method based on multi-modal feature fusion. The method first extracts the image and description features of the mobile application using an "involution residual network + pre-trained language representation" model (i.e., the TRedBert model). These features are then fused using the attention mechanism in the transformer model, and the mobile applications are classified from the fused features by a softmax classifier. Finally, the method extracts the high-order and low-order embedding features of the mobile app with a bi-linear feature interaction model (FiBiNET), based on the classification results, combining the Hadamard product and inner product to achieve fine-grained high-order and low-order feature interaction, update the mobile app representation, and complete the recommendation task. Multiple sets of comparison experiments were performed on Kaggle's real dataset, the 365K IOS Apps Dataset, and the results demonstrate that the proposed approach outperforms other methods in terms of Macro F1, Accuracy, AUC, and Logloss.
12. Carozza, Linda. "Amenable Argumentation Approach." Informal Logic 42, no. 3 (September 7, 2022): 563–82. http://dx.doi.org/10.22329/il.v42i3.7500.

Abstract: This paper summarizes various interpretations of emotional arguments, with a focus on the emotional mode of argument introduced in the multi-modal argumentation model (Gilbert, 1994). From there the author shifts from a descriptive account of emotional arguments to a discussion of a normative framework. Pointing out problems with evaluative models of the emotional mode, a paradigmatic shift captured by the Amenable Argumentation Approach is explained as a way forward for the advancement of the emotional mode and multi-modal argumentation.
13. Chuang, Ying-Ting. "Studying subtitle translation from a multi-modal approach." Babel. Revue internationale de la traduction / International Journal of Translation 52, no. 4 (December 31, 2006): 372–83. http://dx.doi.org/10.1075/babel.52.4.06chu.

14. Roheda, Siddharth, Hamid Krim, and Benjamin S. Riggan. "Robust Multi-Modal Sensor Fusion: An Adversarial Approach." IEEE Sensors Journal 21, no. 2 (January 15, 2021): 1885–96. http://dx.doi.org/10.1109/jsen.2020.3018698.
15. Shen, Linlin, Li Bai, and Zhen Ji. "FPCODE: An Efficient Approach for Multi-Modal Biometrics." International Journal of Pattern Recognition and Artificial Intelligence 25, no. 02 (March 2011): 273–86. http://dx.doi.org/10.1142/s0218001411008555.

Abstract: Although face recognition technology has progressed substantially, its performance is still not satisfactory due to the challenges of great variations in illumination, expression, and occlusion. This paper aims to improve the accuracy of personal identification, when only a few samples are registered as templates, by integrating multiple biometric modalities, i.e., face and palmprint. We develop in this paper a feature code, namely FPCode, to represent the features of both face and palmprint. Though feature codes have been used for palmprint recognition in the literature, this paper is the first to apply them to face recognition and multi-modal biometrics. As the same feature is used, fusion is much easier. Experimental results show that both feature-level and decision-level fusion strategies achieve much better performance than single-modal biometrics. The proposed approach uses a fixed-length 1/0 bit coding scheme that is very efficient in matching, and at the same time achieves higher accuracy than other fusion methods available in the literature.
16. Brill, J. Christopher, Richard D. Gilson, and Mustapha Mouloua. "Indexing Cognitive Reserve Capacity: A Multi-modal Approach." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 51, no. 18 (October 2007): 1133–37. http://dx.doi.org/10.1177/154193120705101817.

Abstract: The Multi-Sensory Workload Assessment Protocol (M-SWAP) is a newly developed standardized measure of cognitive reserve capacity. It consists of a multi-modal counting task administered in a dual-task environment. The goal of the present work was to further validate the measure by assessing the demand manipulation and perceived workload. Significant differences in performance and perceived workload were observed across demand levels, but not across modalities. These results suggest the secondary task protocol imposes demand in a manner consistent with the proposed model.
17. Forman, Julie Lyng, and Michael Sørensen. "A transformation approach to modelling multi-modal diffusions." Journal of Statistical Planning and Inference 146 (March 2014): 56–69. http://dx.doi.org/10.1016/j.jspi.2013.09.013.

18. Huang, Zen-Kwei, Sheng-De Wang, and Te-Son Kuo. "Multi-modal parameter optimization by the automata approach." International Journal of Systems Science 24, no. 9 (September 1993): 1669–85. http://dx.doi.org/10.1080/00207729308949587.

19. Ciocan, Cristian. "Towards a Multi-modal Phenomenological Approach of Violence." Human Studies 43, no. 2 (July 2020): 151–58. http://dx.doi.org/10.1007/s10746-020-09551-6.
20. Dziedziech, Kajetan, Krzysztof Mendrok, Piotr Kurowski, and Tomasz Barszcz. "Multi-Variant Modal Analysis Approach for Large Industrial Machine." Energies 15, no. 5 (March 3, 2022): 1871. http://dx.doi.org/10.3390/en15051871.

Abstract: Power generation technologies are essential for modern economies. Modal Analysis (MA) is an advanced but well-established method for monitoring the structural integrity of critical assets, including power generation assets. Apart from classical MA, the Operational Modal Analysis approach is widely used in the study of the dynamic properties of technical objects. The principal reasons are its advantages over the classical approach, such as not having to apply an excitation force to the object or isolate it from other excitation sources. However, for industrial facilities, the operational excitation rarely takes the form of white noise. Especially in the case of rotating machines, the presence of rotational speed harmonics in the response signals causes problems with the correct identification of the modal model. The article presents a hybrid approach in which the results of two Operational Modal Analyses and an Experimental Modal Analysis are combined to improve the models' quality. The proposed approach was tested on data obtained from a 215 MW turbogenerator operating in a Polish power plant. With the proposed approach it was possible to correctly diagnose the machine's excessive vibration level.
21. Bashiri, Fereshteh, Ahmadreza Baghaie, Reihaneh Rostami, Zeyun Yu, and Roshan D’Souza. "Multi-Modal Medical Image Registration with Full or Partial Data: A Manifold Learning Approach." Journal of Imaging 5, no. 1 (December 30, 2018): 5. http://dx.doi.org/10.3390/jimaging5010005.

Abstract: Multi-modal image registration is the primary step in integrating information stored in two or more images captured using multiple imaging modalities. In addition to intensity variations and structural differences between images, they may have partial or full overlap, which adds an extra hurdle to the success of the registration process. In this contribution, we propose a multi-modal to mono-modal transformation method that facilitates the direct application of well-founded mono-modal registration methods in order to obtain accurate alignment of multi-modal images in both cases, with complete (full) and incomplete (partial) overlap. The proposed transformation facilitates recovering strong scales, rotations, and translations. We explain the method thoroughly and discuss the choice of parameters. For evaluation purposes, the effectiveness of the proposed method is examined and compared with widely used information theory-based techniques using simulated and clinical human brain images with full data. Using the RIRE dataset, mean absolute errors of 1.37, 1.00, and 1.41 mm are obtained for registering CT images with PD-, T1-, and T2-MRIs, respectively. Finally, we empirically investigate the efficacy of the proposed transformation in registering multi-modal partially overlapped images.
22. Hernandez, M., J. Lin, and M. Rivera (Ultrasound Division). "223 A Multi-Modal Approach to Nerve Block Teaching." Annals of Emergency Medicine 80, no. 4 (October 2022): S98. http://dx.doi.org/10.1016/j.annemergmed.2022.08.247.
23. Kovur, Prashanthi. "(Invited) Multi-Modal Nanowire Sensors Using Electrical Resonance Approach." ECS Meeting Abstracts MA2021-02, no. 56 (October 19, 2021): 1672. http://dx.doi.org/10.1149/ma2021-02561672mtgabs.

24. Vega, Amaya. "A multi-modal approach to sustainable accessibility in Galway." Regional Insights 2, no. 2 (September 2011): 15–17. http://dx.doi.org/10.1080/20429843.2011.9727923.

25. Byrne, Sean, and Loraleigh Keashly. "Working with ethno-political conflict: A multi-modal approach." International Peacekeeping 7, no. 1 (March 2000): 97–120. http://dx.doi.org/10.1080/13533310008413821.

26. van der Weijde, Adriaan Hendrik, Erik T. Verhoef, and Vincent A. C. van den Berg. "Competition in multi-modal transport networks: A dynamic approach." Transportation Research Part B: Methodological 53 (July 2013): 31–44. http://dx.doi.org/10.1016/j.trb.2013.03.003.

27. Ahmadian, Kushan, and Marina Gavrilova. "A multi-modal approach for high-dimensional feature recognition." Visual Computer 29, no. 2 (October 30, 2012): 123–30. http://dx.doi.org/10.1007/s00371-012-0741-9.

28. Shimon, Ilan. "Giant prolactinomas: Multi-modal approach to achieve tumor control." Endocrine 56, no. 2 (January 4, 2017): 227–28. http://dx.doi.org/10.1007/s12020-016-1225-x.

29. Bevrani, Bayan, Robert L. Burdett, Ashish Bhaskar, and Prasad K. D. V. Yarlagadda. "A capacity assessment approach for multi-modal transportation systems." European Journal of Operational Research 263, no. 3 (December 2017): 864–78. http://dx.doi.org/10.1016/j.ejor.2017.05.007.

30. Park, Byeong-Ho, and Kwang-Joon Kim. "Vector ARMAX modeling approach in multi-input modal analysis." Mechanical Systems and Signal Processing 3, no. 4 (October 1989): 373–87. http://dx.doi.org/10.1016/0888-3270(89)90044-7.
31. Shahzad, H. M., Sohail Masood Bhatti, Arfan Jaffar, and Muhammad Rashid. "A Multi-Modal Deep Learning Approach for Emotion Recognition." Intelligent Automation & Soft Computing 36, no. 2 (2023): 1561–70. http://dx.doi.org/10.32604/iasc.2023.032525.
32. Zhang, Dong, Suzhong Wei, Shoushan Li, Hanqian Wu, Qiaoming Zhu, and Guodong Zhou. "Multi-modal Graph Fusion for Named Entity Recognition with Targeted Visual Guidance." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 16 (May 18, 2021): 14347–55. http://dx.doi.org/10.1609/aaai.v35i16.17687.

Abstract: Multi-modal named entity recognition (MNER) aims to discover named entities in free text and classify them into pre-defined types with images. However, dominant MNER models do not fully exploit fine-grained semantic correspondences between semantic units of different modalities, which have the potential to refine multi-modal representation learning. To deal with this issue, we propose a unified multi-modal graph fusion (UMGF) approach for MNER. Specifically, we first represent the input sentence and image using a unified multi-modal graph, which captures various semantic relationships between multi-modal semantic units (words and visual objects). Then, we stack multiple graph-based multi-modal fusion layers that iteratively perform semantic interactions to learn node representations. Finally, we achieve an attention-based multi-modal representation for each word and perform entity labeling with a CRF decoder. Experimentation on the two benchmark datasets demonstrates the superiority of our MNER model.
33. Kou, Ziyi, Yang Zhang, Daniel Zhang, and Dong Wang. "CrowdGraph: A Crowdsourcing Multi-modal Knowledge Graph Approach to Explainable Fauxtography Detection." Proceedings of the ACM on Human-Computer Interaction 6, CSCW2 (November 7, 2022): 1–28. http://dx.doi.org/10.1145/3555178.

Abstract: Human-centric fauxtography is a category of multi-modal posts that spread misleading information on online information distribution and sharing platforms such as social media. The reason a human-centric post is fauxtography is closely related to its multi-modal content, which consists of diverse human and non-human subjects with complex and implicit relationships. In this paper, we focus on an explainable fauxtography detection problem where the goal is to accurately identify and explain why a human-centric social media post is fauxtography (or not). Our problem is motivated by the limitations of current fauxtography detection solutions that focus primarily on the detection task but ignore the important aspect of explaining their results (e.g., why a certain component of the post delivers the misinformation). Two important challenges exist in solving our problem: 1) it is difficult to capture the implicit relations and attributions of different subjects in a fauxtography post, given that much of this knowledge is shared among crowd workers; 2) it is not a trivial task to create a multi-modal knowledge graph from crowd workers to identify and explain human-centric fauxtography posts with multi-modal content. To address these challenges, we develop CrowdGraph, a crowdsourcing-based multi-modal knowledge graph approach to the explainable fauxtography detection problem. We evaluate the performance of CrowdGraph by creating a real-world dataset that consists of human-centric fauxtography posts from Twitter and Reddit. The results show that CrowdGraph not only detects the fauxtography posts more accurately than the state of the art but also provides well-justified explanations for the detection results with convincing evidence.
34. Bhavana, V., and H. K. Krishnappa. "Multi-modal image fusion using contourlet and wavelet transforms: a multi-resolution approach." Indonesian Journal of Electrical Engineering and Computer Science 28, no. 2 (November 1, 2022): 762. http://dx.doi.org/10.11591/ijeecs.v28.i2.pp762-768.

Abstract: In recent years, vast improvement and progress have been observed in the field of medical research, especially in digital medical imaging technology. Medical image fusion is widely used in clinical diagnosis to draw valuable information from different modalities of medical images and enhance quality by fusing images such as computed tomography (CT) and magnetic resonance imaging (MRI). MRI gives clear information on delicate tissue, while CT gives details about denser tissues. A multi-resolution approach is proposed in this work for fusing medical images using the non-sub-sampled contourlet transform (NSCT) and the discrete wavelet transform (DWT). In this approach, the input images are first decomposed using DWT at 4 levels and NSCT at 2 levels, which helps to protect the vital data from the source images. This work shows significant enhancement in pixel clarity and preserves the information at the corners and edges of the fused image without any data loss. The proposed methodology, with improved entropy and mutual information, helps doctors in better clinical diagnosis of brain diseases.
35. Yang, Q. J., P. Q. Zhang, C. Q. Li, and X. P. Wu. "A system theory approach to multi-input multi-output modal parameters identification methods." Mechanical Systems and Signal Processing 8, no. 2 (March 1994): 159–74. http://dx.doi.org/10.1006/mssp.1994.1014.

36. Asim, Yousra, Basit Raza, Ahmad Kamran Malik, Saima Rathore, Lal Hussain, and Mohammad Aksam Iftikhar. "A multi-modal, multi-atlas-based approach for Alzheimer detection via machine learning." International Journal of Imaging Systems and Technology 28, no. 2 (January 10, 2018): 113–23. http://dx.doi.org/10.1002/ima.22263.

37. Botea, Adi, Akihiro Kishimoto, Evdokia Nikolova, Stefano Braghin, Michele Berlingerio, and Elizabeth Daly. "Computing Multi-Modal Journey Plans under Uncertainty." Journal of Artificial Intelligence Research 65 (August 16, 2019): 633–74. http://dx.doi.org/10.1613/jair.1.11422.

Abstract: Multi-modal journey planning, which allows multiple types of transport within a single trip, is becoming increasingly popular, due to a strong practical interest and an increasing availability of data. In real life, transport networks feature uncertainty. Yet, most approaches assume a deterministic environment, making plans more prone to failures such as missed connections and major delays in the arrival. This paper presents an approach to computing optimal contingent plans in multi-modal journey planning. The problem is modeled as a search in an and/or state space. We describe search enhancements used on top of the AO* algorithm. Enhancements include admissible heuristics, multiple types of pruning that preserve the completeness and the optimality, and a hybrid search approach with a deterministic and a nondeterministic search. We demonstrate an NP-hardness result, with the hardness stemming from the dynamically changing distributions of the travel time random variables. We perform a detailed empirical analysis on realistic transport networks from cities such as Montpellier, Rome and Dublin. The results demonstrate the effectiveness of our algorithmic contributions, and the benefits of contingent plans as compared to standard sequential plans, when the arrival and departure times of buses are characterized by uncertainty.
38. Batkovic, Ivo, Ugo Rosolia, Mario Zanon, and Paolo Falcone. "A Robust Scenario MPC Approach for Uncertain Multi-Modal Obstacles." IEEE Control Systems Letters 5, no. 3 (July 2021): 947–52. http://dx.doi.org/10.1109/lcsys.2020.3006819.

39. Walker, Alexander D., Zachary N. J. Horn, and Camilla C. Knott. "Cognitive Readiness: The Need for a Multi-Modal Measurement Approach." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 56, no. 1 (September 2012): 443–47. http://dx.doi.org/10.1177/1071181312561100.
40. Greenberg, R. K., and K. Ouriel. "A multi-modal approach to the management of bypass graft failure." Vascular Medicine 3, no. 3 (August 1, 1998): 215–20. http://dx.doi.org/10.1191/135886398666370146.
41. Abdel-Aty, Mohamed A. "A Simplified Approach for Developing a Multi-modal Travel Planner." ITS Journal - Intelligent Transportation Systems Journal 5, no. 3 (January 1999): 195–215. http://dx.doi.org/10.1080/10248079908903766.

42. Lo, Hong K., Chun-Wing Yip, and Quentin K. Wan. "Modeling competitive multi-modal transit services: a nested logit approach." Transportation Research Part C: Emerging Technologies 12, no. 3-4 (June 2004): 251–72. http://dx.doi.org/10.1016/j.trc.2004.07.011.

43. Morgan, Barbara M. "Stress Management for College Students: An Experiential Multi-Modal Approach." Journal of Creativity in Mental Health 12, no. 3 (January 6, 2017): 276–88. http://dx.doi.org/10.1080/15401383.2016.1245642.

44. Roeckner, Jared T. "Reducing Incomplete 3rd Trimester Labs: A Multi-Modal Approach [2Q]." Obstetrics & Gynecology 133, no. 1 (May 2019): 181S–182S. http://dx.doi.org/10.1097/01.aog.0000559490.94537.c3.

45. Castillo, José Carlos, Davide Carneiro, Juan Serrano-Cuerda, Paulo Novais, Antonio Fernández-Caballero, and José Neves. "A multi-modal approach for activity classification and fall detection." International Journal of Systems Science 45, no. 4 (April 2, 2013): 810–24. http://dx.doi.org/10.1080/00207721.2013.784372.

46. Jeżowski, Jacek, Roman Bochenek, and Grzegorz Ziomek. "Random search optimization approach for highly multi-modal nonlinear problems." Advances in Engineering Software 36, no. 8 (August 2005): 504–17. http://dx.doi.org/10.1016/j.advengsoft.2005.02.005.
47. Zhang, Dong, Xincheng Ju, Wei Zhang, Junhui Li, Shoushan Li, Qiaoming Zhu, and Guodong Zhou. "Multi-modal Multi-label Emotion Recognition with Heterogeneous Hierarchical Message Passing." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 16 (May 18, 2021): 14338–46. http://dx.doi.org/10.1609/aaai.v35i16.17686.

Abstract: As an important research issue in affective computing community, multi-modal emotion recognition has become a hot topic in the last few years. However, almost all existing studies perform multiple binary classification for each emotion with focus on complete time series data. In this paper, we focus on multi-modal emotion recognition in a multi-label scenario. In this scenario, we consider not only the label-to-label dependency, but also the feature-to-label and modality-to-label dependencies. Particularly, we propose a heterogeneous hierarchical message passing network to effectively model above dependencies. Furthermore, we propose a new multi-modal multi-label emotion dataset based on partial time-series content to show predominant generalization of our model. Detailed evaluation demonstrates the effectiveness of our approach.
48. Pyrovolakis, Konstantinos, Paraskevi Tzouveli, and Giorgos Stamou. "Multi-Modal Song Mood Detection with Deep Learning." Sensors 22, no. 3 (January 29, 2022): 1065. http://dx.doi.org/10.3390/s22031065.

Abstract: The production and consumption of music in the contemporary era results in big data generation and creates new needs for automated and more effective management of these data. Automated music mood detection constitutes an active task in the field of MIR (Music Information Retrieval). The first approach to correlating music and mood was made in 1990 by Gordon Burner, who researched the way that musical emotion affects marketing. In 2016, Lidy and Schiner trained a CNN for the task of genre and mood classification based on audio. In 2018, Delbouys et al. developed a multi-modal Deep Learning system combining CNN and LSTM architectures and concluded that multi-modal approaches outperform single-channel models. This work examines and compares single-channel and multi-modal approaches to the task of music mood detection using Deep Learning architectures. Our first approach uses the audio signal and the lyrics of a musical track separately, while the second approach applies a uniform multi-modal analysis to classify the given data into mood classes. The data we use to train and evaluate our models comes from the MoodyLyrics dataset, which includes 2000 song titles labeled with four mood classes: {happy, angry, sad, relaxed}. The result of this work is a uniform prediction of the mood that represents a music track, with uses in many applications.
49. Irfan, Bahar, Michael Garcia Ortiz, Natalia Lyubova, and Tony Belpaeme. "Multi-modal Open World User Identification." ACM Transactions on Human-Robot Interaction 11, no. 1 (March 31, 2022): 1–50. http://dx.doi.org/10.1145/3477963.

Abstract: User identification is an essential step in creating a personalised long-term interaction with robots. This requires learning the users continuously and incrementally, possibly starting from a state without any known user. In this article, we describe a multi-modal incremental Bayesian network with online learning, which is the first method that can be applied in such scenarios. Face recognition is used as the primary biometric, and it is combined with ancillary information, such as gender, age, height, and time of interaction to improve the recognition. The Multi-modal Long-term User Recognition Dataset is generated to simulate various human-robot interaction (HRI) scenarios and evaluate our approach in comparison to face recognition, soft biometrics, and a state-of-the-art open world recognition method (Extreme Value Machine). The results show that the proposed methods significantly outperform the baselines, with an increase in the identification rate up to 47.9% in open-set and closed-set scenarios, and a significant decrease in long-term recognition performance loss. The proposed models generalise well to new users, provide stability, improve over time, and decrease the bias of face recognition. The models were applied in HRI studies for user recognition, personalised rehabilitation, and customer-oriented service, which showed that they are suitable for long-term HRI in the real world.
50. Hu, Lingyue, Kailong Zhao, Xueling Zhou, Bingo Wing-Kuen Ling, and Guozhao Liao. "Empirical Mode Decomposition Based Multi-Modal Activity Recognition." Sensors 20, no. 21 (October 24, 2020): 6055. http://dx.doi.org/10.3390/s20216055.

Abstract: This paper aims to develop an activity recognition algorithm to allow parents to monitor their children at home after school. A common method used to analyze electroencephalograms is to use infinite impulse response filters to decompose the electroencephalograms into various brain wave components. However, nonlinear phase distortions will be introduced by these filters. To address this issue, this paper applies empirical mode decomposition to decompose the electroencephalograms into various intrinsic mode functions and categorize them into four groups. In addition, common features used to analyze electroencephalograms are energy and entropy. However, because there are only two features, the available information is limited. To address this issue, this paper extracts 11 different physical quantities from each group of intrinsic mode functions, and these are employed as the features. Finally, this paper uses the random forest to perform activity recognition. It is worth noting that the conventional approach for performing activity recognition is based on a single type of signal, which limits the recognition performance. In this paper, a multi-modal system based on electroencephalograms, image sequences, and motion signals is used for activity recognition. The numerical simulation results show that the percentage accuracies based on three types of signal are higher than those based on two types of signal or the individual signals. This demonstrates the advantages of using the multi-modal approach for activity recognition. In addition, our proposed empirical mode decomposition-based method outperforms the conventional filtering-based method. This demonstrates the advantages of using the nonlinear and adaptive time frequency approach for activity recognition.