Scientific literature on the topic "Dati multimodali"
Create an accurate reference in APA, MLA, Chicago, Harvard, and several other styles
Consult the thematic lists of journal articles, books, theses, conference reports, and other scholarly sources on the topic "Dati multimodali".
Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the scholarly publication as a PDF and read its abstract online whenever this information is included in the metadata.
Journal articles on the topic "Dati multimodali"
Fatigante, Marilena, Cristina Zucchermaglio, Francesca Alby, and Mariacristina Nutricato. "La struttura della prima visita oncologica: uno studio conversazionale." PSICOLOGIA DELLA SALUTE, no. 1 (January 2021): 53–77. http://dx.doi.org/10.3280/pds2021-001005.
Karatayli-Ozgursoy, S., J. A. Bishop, A. T. Hillel, L. M. Akst, and S. R. Best. "Tumori maligni delle ghiandole salivari della laringe: un'unica review istituzionale." Acta Otorhinolaryngologica Italica 36, no. 4 (August 2016): 289–94. http://dx.doi.org/10.14639/0392-100x-807.
Moneglia, Massimo. "Le unità di informazione Parentetiche alla periferia destra del Comment nella Teoria della Lingua in Atto." DILEF. Rivista digitale del Dipartimento di Lettere e Filosofia, no. 1 (March 27, 2022): 88–123. http://dx.doi.org/10.35948/dilef/2022.3294.
Spaliviero, Camilla. "Teaching Italian as a second language through digital storytelling: Students' perceptions towards izi.TRAVEL." EuroAmerican Journal of Applied Linguistics and Languages 9, no. 1 (April 10, 2022): 91–121. http://dx.doi.org/10.21283/2376905x.15.1.265.
Deng, Wan-Yu, Dan Liu, and Ying-Ying Dong. "Feature Selection and Classification for High-Dimensional Incomplete Multimodal Data." Mathematical Problems in Engineering 2018 (August 12, 2018): 1–9. http://dx.doi.org/10.1155/2018/1583969.
Amundrud, Thomas. "Multimodal knowledge building in a Japanese secondary English as a foreign language class." Multimodality & Society 2, no. 1 (March 2022): 64–85. http://dx.doi.org/10.1177/26349795221081300.
Wan, Huan, Hui Wang, Bryan Scotney, Jun Liu, and Wing W. Y. Ng. "Within-class multimodal classification." Multimedia Tools and Applications 79, nos. 39–40 (August 11, 2020): 29327–52. http://dx.doi.org/10.1007/s11042-020-09238-1.
Silvestri, Katarina, Mary McVee, Christopher Jarmark, Lynn Shanahan, and Kenneth English. "Multimodal positioning of artifacts in interaction in a collaborative elementary engineering club." Multimodal Communication 10, no. 3 (December 1, 2021): 289–309. http://dx.doi.org/10.1515/mc-2020-0017.
Farías, Miguel, and Leonardo Véliz. "Multimodal Texts in Chilean English Teaching Education: Experiences From Educators and Pre-Service Teachers." Profile: Issues in Teachers' Professional Development 21, no. 2 (July 1, 2019): 13–27. http://dx.doi.org/10.15446/profile.v21n2.75172.
Abdullah, Fuad, Arini Nurul Hidayati, Agis Andriani, Dea Silvani, Ruslan Ruslan, Soni T. Tandiana, and Nina Lisnawati. "Fostering students' Multimodal Communicative Competence through genre-based multimodal text analysis." Studies in English Language and Education 9, no. 2 (May 23, 2022): 632–50. http://dx.doi.org/10.24815/siele.v9i2.23440.
Texte intégralThèses sur le sujet "Dati multimodali"
GIANSANTI, VALENTINA. « Integration of heterogeneous single cell data with Wasserstein Generative Adversarial Networks ». Doctoral thesis, Università degli Studi di Milano-Bicocca, 2023. https://hdl.handle.net/10281/404516.
Tissues, organs and organisms are complex biological systems. They are the object of many studies aiming to characterize their biological processes. Understanding how they work and how they interact in healthy and unhealthy samples makes it possible to intervene, correcting and preventing the dysfunctions that can lead to disease. Recent advances in single-cell technologies are expanding our capability to profile various molecular layers at single-cell resolution, targeting the transcriptome, the genome, the epigenome and the proteome. The number of single-cell datasets, their size and the diverse modalities they describe are continuously increasing, prompting the need for robust methods to integrate multiomic datasets, whether paired from the same cells or, more challenging still, unpaired from separate experiments. Integrating different sources of information results in a more comprehensive description of the whole system. Most published methods allow the integration of only a limited number of omics (generally two) and make assumptions about their inter-relationships. They often impose the conversion of one data modality into another (e.g., ATAC peaks converted into a gene activity matrix). This step introduces an important level of approximation, which can affect the analyses performed later. Here we propose MOWGAN (Multi Omic Wasserstein Generative Adversarial Network), a deep-learning-based framework that simulates paired multimodal data, supports a high number of modalities (more than two), and is agnostic about their relationships (no assumption is imposed). Each modality is embedded into a feature space with the same dimensionality across all modalities, which avoids any conversion between data modalities. The embeddings are sorted based on the first Laplacian eigenmap, and mini-batches are selected by a Bayesian ridge regressor to train a Wasserstein Generative Adversarial Network with gradient penalty. The output of the generative network is then used to bridge real unpaired data. MOWGAN was prototyped on public data for which paired and unpaired RNA and ATAC experiments exist. Evaluation focused on the ability to produce data integrable with the original data, on the amount of information shared between synthetic layers, and on the ability to impose associations between molecular layers that are truly connected. The organization of the embeddings in mini-batches gives MOWGAN a network architecture that is independent of the number of modalities evaluated. Indeed, the framework was also successfully applied to integrate three (e.g., RNA, ATAC and protein or histone-modification data) and four modalities (e.g., RNA, ATAC, protein and histone modifications). MOWGAN's performance was evaluated in terms of both computational scalability and biological meaning, the latter being the most important in order to avoid erroneous conclusions. A comparison with published methods showed that MOWGAN performs better at retrieving the correct biological identities (e.g., cell types) and associations. In conclusion, MOWGAN is a powerful tool for multi-omics data integration in single-cell studies, addressing most of the critical issues observed in the field.
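A rough sketch of the generative core this abstract describes is given below; it is not the MOWGAN implementation, and every size, network and data matrix is a hypothetical placeholder. It illustrates two of the named ingredients in Python: sorting embeddings by the first Laplacian eigenmap (here via scikit-learn's SpectralEmbedding) and one critic update of a Wasserstein GAN with gradient penalty (Gulrajani et al., 2017).

    import torch
    import torch.nn as nn
    from sklearn.manifold import SpectralEmbedding

    # Hypothetical shared-dimension embeddings: 500 cells x 32 features.
    X = torch.randn(500, 32)

    # Sort cells by the first Laplacian eigenmap so that mini-batches
    # group spectrally similar cells, as the abstract describes.
    order = SpectralEmbedding(n_components=1).fit_transform(X.numpy()).ravel().argsort()
    X = X[torch.as_tensor(order)]

    critic = nn.Sequential(nn.Linear(32, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))
    generator = nn.Sequential(nn.Linear(16, 128), nn.LeakyReLU(0.2), nn.Linear(128, 32))

    def gradient_penalty(real, fake):
        # WGAN-GP term: push the critic's gradient norm toward 1 on interpolates.
        eps = torch.rand(real.size(0), 1)
        interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
        grads = torch.autograd.grad(critic(interp).sum(), interp, create_graph=True)[0]
        return ((grads.norm(2, dim=1) - 1) ** 2).mean()

    opt = torch.optim.Adam(critic.parameters(), lr=1e-4, betas=(0.5, 0.9))
    real = X[:64]                                   # one spectrally ordered mini-batch
    fake = generator(torch.randn(64, 16)).detach()  # synthetic counterpart
    loss = critic(fake).mean() - critic(real).mean() + 10.0 * gradient_penalty(real, fake)
    opt.zero_grad(); loss.backward(); opt.step()

In the full method, a critic/generator pair of this kind is what produces the synthetic paired layers used to bridge the real unpaired experiments.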
Medjahed, Hamid. "Distress situation identification by multimodal data fusion for home healthcare telemonitoring." Thesis, Evry, Institut national des télécommunications, 2010. http://www.theses.fr/2010TELE0002/document.
The average age of the population is increasing in societies throughout the world. In Europe, for example, life expectancy is about 71 years for men and about 79 years for women; in North America it is currently about 75 for men and 81 for women. Moreover, the elderly prefer to preserve their independence and autonomy and to keep their way of life, living at home for as long as possible. The current healthcare infrastructures in these countries are widely considered inadequate to meet the needs of an increasingly older population. Home healthcare monitoring is a solution to this problem and a way to ensure that elderly people can live safely and independently in their own homes for as long as possible. Automatic in-home healthcare monitoring is a technological approach that helps people age in place through continuous telemonitoring. In this thesis, we explore automatic in-home healthcare monitoring by conducting a study of professionals who currently perform in-home healthcare monitoring, and by combining and synchronizing various telemonitoring modalities under a data-synchronization and multimodal data-fusion platform, FL-EMUTEM (Fuzzy Logic Multimodal Environment for Medical Remote Monitoring). This platform incorporates algorithms that process each modality and provides a fuzzy-logic-based technique of multimodal data fusion that can ensure pervasive in-home health monitoring for elderly people. The originality of this thesis lies in the combination of various modalities in the home, concerning both its inhabitant and their surroundings, which will be of particular benefit and impact for elderly people suffering from loneliness. This work complements the stationary smart-home environment by bringing to bear its capability for integrative continuous observation and detection of critical situations.
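For readers unfamiliar with the fuzzy-logic fusion that FL-EMUTEM builds on, the sketch below reduces the idea to two invented inputs (heart rate and ambient sound level). The membership functions, rule base and thresholds are assumptions made for illustration only, not the platform's actual design.

    def tri(x, a, b, c):
        # Triangular fuzzy membership: 0 outside [a, c], peaking at 1 when x == b.
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def distress_score(heart_rate, sound_db):
        # Mamdani-style fusion of two modalities into one distress degree in [0, 1].
        hr_high = tri(heart_rate, 90, 130, 180)   # membership in "elevated heart rate"
        snd_loud = tri(sound_db, 60, 85, 110)     # membership in "loud sound (fall, cry)"
        rule_both = min(hr_high, snd_loud)        # AND: both cues present
        rule_hr = 0.5 * hr_high                   # heart rate alone, weaker evidence
        return max(rule_both, rule_hr)            # OR across the rule base

    print(distress_score(heart_rate=125, sound_db=90))  # 0.8 -> would raise an alert

The appeal of this style of fusion, and presumably one reason the thesis adopts fuzzy logic, is that each rule stays human-readable, so caregivers can audit why an alert was raised.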
Vielzeuf, Valentin. "Apprentissage neuronal profond pour l'analyse de contenus multimodaux et temporels." Thesis, Normandie, 2019. http://www.theses.fr/2019NORMC229/document.
Our perception is by nature multimodal, i.e. it appeals to many of our senses. To solve certain tasks, it is therefore relevant to use different modalities, such as sound or images. This thesis focuses on this notion in the context of deep learning and seeks to answer a particular question: how should different modalities be merged within a deep neural network? We first study a concrete application problem: the automatic recognition of emotion in audio-visual content. This leads us to different considerations concerning the modeling of emotions, and more particularly of facial expressions; we thus propose an analysis of the representations of facial expression learned by a deep neural network. In addition, we observe that each multimodal problem appears to require a different fusion strategy. This is why we propose and validate two methods for automatically obtaining an efficient fusion architecture for a given multimodal problem. The first is based on a central fusion network and aims to preserve an easy interpretation of the adopted fusion strategy, while the second adapts neural architecture search to the case of multimodal fusion, exploring a greater number of strategies and therefore achieving better performance. Finally, we take a multimodal view of knowledge transfer and detail a non-traditional method for transferring knowledge from several sources, i.e. from several pre-trained models: a more general neural representation is obtained from a single model, which brings together the knowledge contained in the pre-trained models and leads to state-of-the-art performance on a variety of facial analysis tasks.
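The central fusion network mentioned in the abstract can be pictured with the toy PyTorch module below: a central branch merges the hidden states of each modality branch at every depth through learned scalar weights, which remain inspectable as a fusion strategy after training. The two-modality setup and all layer sizes are assumptions made for this sketch, not the thesis architecture.

    import torch
    import torch.nn as nn

    class CentralFusion(nn.Module):
        # Two modality branches plus a central branch that re-weights and merges
        # their hidden states at every layer (hypothetical sizes throughout).
        def __init__(self, dim_audio=40, dim_video=128, hidden=64, n_classes=7):
            super().__init__()
            self.audio = nn.ModuleList([nn.Linear(dim_audio, hidden), nn.Linear(hidden, hidden)])
            self.video = nn.ModuleList([nn.Linear(dim_video, hidden), nn.Linear(hidden, hidden)])
            self.central = nn.ModuleList([nn.Linear(hidden, hidden) for _ in range(2)])
            # One scalar per (layer, branch): reading them off after training shows
            # which modality dominates at which depth, keeping the fusion interpretable.
            self.alphas = nn.Parameter(torch.ones(2, 3))
            self.head = nn.Linear(hidden, n_classes)

        def forward(self, a, v):
            c = 0.0
            for l in range(2):
                a = torch.relu(self.audio[l](a))
                v = torch.relu(self.video[l](v))
                w = self.alphas[l]
                c = torch.relu(self.central[l](w[0] * a + w[1] * v + (w[2] * c if l > 0 else 0.0)))
            return self.head(c)

    logits = CentralFusion()(torch.randn(8, 40), torch.randn(8, 128))  # -> shape (8, 7)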
Lazarescu, Mihai M. "Incremental learning for querying multimodal symbolic data." Thesis, Curtin University, 2000. http://hdl.handle.net/20.500.11937/1660.
Lazarescu, Mihai M. "Incremental learning for querying multimodal symbolic data." Curtin University of Technology, School of Computing, 2000. http://espace.library.curtin.edu.au:80/R/?func=dbin-jump-full&object_id=10010.
DA CRUZ GARCIA, NUNO RICARDO. "Learning with Privileged Information using Multimodal Data." Doctoral thesis, Università degli studi di Genova, 2020. http://hdl.handle.net/11567/997636.
Xin, Bowen. "Multimodal Data Fusion and Quantitative Analysis for Medical Applications." Thesis, The University of Sydney, 2021. https://hdl.handle.net/2123/26678.
POLSINELLI, MATTEO. "Modelli di Intelligenza Artificiale per l'analisi di dati da neuroimaging multimodale." Doctoral thesis, Università degli Studi dell'Aquila, 2022. http://hdl.handle.net/11697/192072.
Khan, Mohd Tauheed. "Multimodal Data Fusion Using Voice and Electromyography Data for Robotic Control." University of Toledo / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=toledo156440368925597.
Oztarak, Hakan. "Structural And Event Based Multimodal Video Data Modeling." Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/12606919/index.pdf.
Texte intégralLivres sur le sujet "Dati multimodali"
Fernandes, Carla, Vito Evola et Cláudia Ribeiro. Dance Data, Cognition, and Multimodal Communication. London : Routledge, 2022. http://dx.doi.org/10.4324/9781003106401.
Texte intégralHazeldine, Lee, Gary Hazeldine et Christian Beighton. Analysing Multimodal Data in Complex Social Spaces. 1 Oliver’s Yard, 55 City Road, London EC1Y 1SP United Kingdom : SAGE Publications, Ltd., 2019. http://dx.doi.org/10.4135/9781526488282.
Texte intégralAdams, Teresa M. Guidelines for the implementation of multimodal transportation location referencing systems. Washington, D.C : National Academy Press, 2001.
Trouver le texte intégralSeng, Kah Phooi, Li-minn Ang, Alan Wee-Chung Liew et Junbin Gao, dir. Multimodal Analytics for Next-Generation Big Data Technologies and Applications. Cham : Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-319-97598-6.
Texte intégral1959-, Grifoni Patrizia, dir. Multimodal human computer interaction and pervasive services. Hershey PA : Information Science Reference, 2009.
Trouver le texte intégralVieten, Andrea. Monomodale und multimodale Registrierung von autoradiographischen und histologischen Bilddaten. Jülich : Forschungszentrum Jülich, Zentralbibliothek, 2005.
Trouver le texte intégralEnrique, Vidal, Casacuberta Francisco et SpringerLink (Online service), dir. Multimodal Interactive Pattern Recognition and Applications. London : Springer-Verlag London Limited, 2011.
Trouver le texte intégralNational Research Council (U.S.). Transportation Research Board et National Cooperative Highway Research Program, dir. Multimodal level of service analysis for urban streets. Washington, D.C : Transportation Research Board, 2008.
Trouver le texte intégral, English linguistics edited by Anthon., dir. Multimodality and multimediality in the distance learning age. Campobasso : Palladino, 2000.
Trouver le texte intégralBiswas, Pradipta. A Multimodal End-2-End Approach to Accessible Computing. London : Springer London, 2013.
Trouver le texte intégralChapitres de livres sur le sujet "Dati multimodali"
Bernsen, Niels Ole, and Laila Dybkjær. "Data Handling." In Multimodal Usability, 315–49. London: Springer London, 2009. http://dx.doi.org/10.1007/978-1-84882-553-6_15.
Laflen, Angela. "Learning to 'Speak Data'." In Multimodal Composition, 127–43. New York: Routledge, 2021. http://dx.doi.org/10.4324/9781003163220-10.
Bernsen, Niels Ole, and Laila Dybkjær. "Usability Data Analysis and Evaluation." In Multimodal Usability, 351–85. London: Springer London, 2009. http://dx.doi.org/10.1007/978-1-84882-553-6_16.
Williams, Ross N. "A Multimodal Algorithm." In Adaptive Data Compression, 245–81. Boston, MA: Springer US, 1991. http://dx.doi.org/10.1007/978-1-4615-4046-5_5.
Palframan, Shirley. "Multimodal classroom data." In Multimodal Signs of Learning, 27–39. London: Routledge, 2021. http://dx.doi.org/10.4324/9781003198802-3.
Chen, Zhikui, Liang Zhao, Qiucen Li, Xin Song, and Jianing Zhang. "Multimodal Data Fusion." In Advances in Computing, Informatics, Networking and Cybersecurity, 53–91. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-87049-2_3.
Huang, Lihe. "Collecting and processing multimodal data." In Toward Multimodal Pragmatics, 99–108. London: Routledge, 2021. http://dx.doi.org/10.4324/9781003251774-5.
Steininger, Silke, Florian Schiel, and Susen Rabold. "Annotation of Multimodal Data." In SmartKom: Foundations of Multimodal Dialogue Systems, 571–96. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/3-540-36678-4_35.
Nie, Liqiang, Meng Liu, and Xuemeng Song. "Data Collection." In Multimodal Learning toward Micro-Video Understanding, 11–17. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-031-02255-5_2.
Hosseini, Mohammad-Parsa, Aaron Lau, Kost Elisevich, and Hamid Soltanian-Zadeh. "Multimodal Analysis in Biomedicine." In Big Data in Multimodal Medical Imaging, 193–203. Boca Raton: Chapman and Hall/CRC, 2019. http://dx.doi.org/10.1201/b22410-8.
Texte intégralActes de conférences sur le sujet "Dati multimodali"
Sanchez-Rada, J. Fernando, Carlos A. Iglesias, Hesam Sagha, Bjorn Schuller, Ian Wood, and Paul Buitelaar. "Multimodal multimodel emotion analysis as linked data." In 2017 Seventh International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW). IEEE, 2017. http://dx.doi.org/10.1109/aciiw.2017.8272599.
Liao, Callie C., Duoduo Liao, and Jesse Guessford. "Multimodal Lyrics-Rhythm Matching." In 2022 IEEE International Conference on Big Data (Big Data). IEEE, 2022. http://dx.doi.org/10.1109/bigdata55660.2022.10021009.
Yang, Lixin, Genshe Chen, Ronghua Xu, Sherry Chen, and Yu Chen. "Decentralized autonomous imaging data processing using blockchain." In Multimodal Biomedical Imaging XIV, edited by Fred S. Azar, Xavier Intes, and Qianqian Fang. SPIE, 2019. http://dx.doi.org/10.1117/12.2513243.
Oosterhuis, Kas, and Arwin Hidding. "Participator, A Participatory Urban Design Instrument." In International Conference on the 4th Game Set and Match (GSM4Q-2019). Qatar University Press, 2019. http://dx.doi.org/10.29117/gsm4q.2019.0008.
Hu, Kangqiao, Abdullah Nazma Nowroz, Sherief Reda, and Farinaz Koushanfar. "High-Sensitivity Hardware Trojan Detection Using Multimodal Characterization." In Design Automation and Test in Europe. New Jersey: IEEE Conference Publications, 2013. http://dx.doi.org/10.7873/date.2013.263.
Mahmood, Faisal, Daniel Borders, Richard Chen, Jordan Sweer, Steven Tilley, Norman S. Nishioka, J. Webster Stayman, and Nicholas J. Durr. "Robust photometric stereo endoscopy via deep learning trained on synthetic data (Conference Presentation)." In Multimodal Biomedical Imaging XIV, edited by Fred S. Azar, Xavier Intes, and Qianqian Fang. SPIE, 2019. http://dx.doi.org/10.1117/12.2509878.
Lampkins, Joshua, Darren Chan, Alan Perry, Sasha Strelnikoff, Jiejun Xu, and Alireza Esna Ashari. "Multimodal Road Sign Interpretation for Autonomous Vehicles." In 2022 IEEE International Conference on Big Data (Big Data). IEEE, 2022. http://dx.doi.org/10.1109/bigdata55660.2022.10020808.
Kohankhaki, Mohammad, Ahmad Ayad, Mahdi Barhoush, Bastian Leibe, and Anke Schmeink. "Radiopaths: Deep Multimodal Analysis on Chest Radiographs." In 2022 IEEE International Conference on Big Data (Big Data). IEEE, 2022. http://dx.doi.org/10.1109/bigdata55660.2022.10020356.
Kouvaras, George, and George Kokolakis. "Random Multivariate Multimodal Distributions." In Recent Advances in Stochastic Modeling and Data Analysis. WORLD SCIENTIFIC, 2007. http://dx.doi.org/10.1142/9789812709691_0009.
Marte Zorrilla, Edwin, Idalis Villanueva, Jenefer Husman, and Matthew Graham. "Generating a Multimodal Dataset Using a Feature Extraction Toolkit for Wearable and Machine Learning: A pilot study." In 13th International Conference on Applied Human Factors and Ergonomics (AHFE 2022). AHFE International, 2022. http://dx.doi.org/10.54941/ahfe1001448.
Organization reports on the topic "Dati multimodali"
Linville, Lisa M., Joshua James Michalenko, and Dylan Zachary Anderson. Multimodal Data Fusion via Entropy Minimization. Office of Scientific and Technical Information (OSTI), March 2020. http://dx.doi.org/10.2172/1614682.
Wu, Yao-Jan, Xianfeng Yang, Sirisha Kothuri, Abolfazl Karimpour, Qinzheng Wang, and Jason Anderson. Data-Driven Mobility Strategies for Multimodal Transportation. Transportation Research and Education Center (TREC), 2021. http://dx.doi.org/10.15760/trec.262.
Folds, Dennis J., Carl T. Blunt, and Raymond M. Stanley. Training for Rapid Interpretation of Voluminous Multimodal Data. Fort Belvoir, VA: Defense Technical Information Center, April 2008. http://dx.doi.org/10.21236/ada480522.
Hillsman, Edward. Enabling Cost-Effective Multimodal Trip Planners through Open Transit Data. Tampa, FL: University of South Florida, May 2011. http://dx.doi.org/10.5038/cutr-nctr-rr-2010-05.
Barbeau, Sean. Improving the Quality and Cost Effectiveness of Multimodal Travel Behavior Data Collection. Tampa, FL: University of South Florida, February 2018. http://dx.doi.org/10.5038/cutr-nctr-rr-2018-10.
Balali, Vahid, Arash Tavakoli, and Arsalan Heydarian. A Multimodal Approach for Monitoring Driving Behavior and Emotions. Mineta Transportation Institute, July 2020. http://dx.doi.org/10.31979/mti.2020.1928.
Zhang, Yongping, Wen Cheng, and Xudong Jia. Enhancement of Multimodal Traffic Safety in High-Quality Transit Areas. Mineta Transportation Institute, February 2021. http://dx.doi.org/10.31979/mti.2021.1920.
Chen, Maximillian Gene, Michael Christopher Darling, and David John Stracuzzi. Preliminary Results on Applying Nonparametric Clustering and Bayesian Consensus Clustering Methods to Multimodal Data. Office of Scientific and Technical Information (OSTI), September 2018. http://dx.doi.org/10.2172/1475256.
Boero, Riccardo, Peter Thomas Hraber, Kimberly Ann Kaufeld, Elisabeth Ann Moore, Ethan Romero-Severson, John Joseph Ambrosiano, John Leslie Whitton, and Benjamin Hayden Sims. Analysis of Multimodal Wearable Sensor Data to Characterize Social Groups and Influence in Organizations. Office of Scientific and Technical Information (OSTI), October 2019. http://dx.doi.org/10.2172/1570596.
Tufte, Kristin. Multimodal Data at Signalized Intersections: Strategies for Archiving Existing and New Data Streams to Support Operations and Planning Fusion and Integration of Arterial Performance Data. Portland State University Library, September 2013. http://dx.doi.org/10.15760/trec.46.