Academic literature on the topic 'Dati multimodali'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Dati multimodali.'
Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Journal articles on the topic "Dati multimodali"
Fatigante, Marilena, Cristina Zucchermaglio, Francesca Alby, and Mariacristina Nutricato. "La struttura della prima visita oncologica: uno studio conversazionale." Psicologia della Salute, no. 1 (January 2021): 53–77. http://dx.doi.org/10.3280/pds2021-001005.
Karatayli-Ozgursoy, S., J. A. Bishop, A. T. Hillel, L. M. Akst, and S. R. Best. "Tumori maligni delle ghiandole salivari della laringe: un'unica review istituzionale." Acta Otorhinolaryngologica Italica 36, no. 4 (August 2016): 289–94. http://dx.doi.org/10.14639/0392-100x-807.
Moneglia, Massimo. "Le unità di informazione Parentetiche alla periferia destra del Comment nella Teoria della Lingua in Atto." DILEF. Rivista digitale del Dipartimento di Lettere e Filosofia, no. 1 (March 27, 2022): 88–123. http://dx.doi.org/10.35948/dilef/2022.3294.
Spaliviero, Camilla. "Teaching Italian as a second language through digital storytelling: Students' perceptions towards izi.TRAVEL." EuroAmerican Journal of Applied Linguistics and Languages 9, no. 1 (April 10, 2022): 91–121. http://dx.doi.org/10.21283/2376905x.15.1.265.
Deng, Wan-Yu, Dan Liu, and Ying-Ying Dong. "Feature Selection and Classification for High-Dimensional Incomplete Multimodal Data." Mathematical Problems in Engineering 2018 (August 12, 2018): 1–9. http://dx.doi.org/10.1155/2018/1583969.
Amundrud, Thomas. "Multimodal knowledge building in a Japanese secondary English as a foreign language class." Multimodality & Society 2, no. 1 (March 2022): 64–85. http://dx.doi.org/10.1177/26349795221081300.
Wan, Huan, Hui Wang, Bryan Scotney, Jun Liu, and Wing W. Y. Ng. "Within-class multimodal classification." Multimedia Tools and Applications 79, no. 39–40 (August 11, 2020): 29327–52. http://dx.doi.org/10.1007/s11042-020-09238-1.
Silvestri, Katarina, Mary McVee, Christopher Jarmark, Lynn Shanahan, and Kenneth English. "Multimodal positioning of artifacts in interaction in a collaborative elementary engineering club." Multimodal Communication 10, no. 3 (December 1, 2021): 289–309. http://dx.doi.org/10.1515/mc-2020-0017.
Farías, Miguel, and Leonardo Véliz. "Multimodal Texts in Chilean English Teaching Education: Experiences From Educators and Pre-Service Teachers." Profile: Issues in Teachers' Professional Development 21, no. 2 (July 1, 2019): 13–27. http://dx.doi.org/10.15446/profile.v21n2.75172.
Abdullah, Fuad, Arini Nurul Hidayati, Agis Andriani, Dea Silvani, Ruslan Ruslan, Soni T. Tandiana, and Nina Lisnawati. "Fostering students' Multimodal Communicative Competence through genre-based multimodal text analysis." Studies in English Language and Education 9, no. 2 (May 23, 2022): 632–50. http://dx.doi.org/10.24815/siele.v9i2.23440.
Full textDissertations / Theses on the topic "Dati multimodali"
Giansanti, Valentina. "Integration of heterogeneous single cell data with Wasserstein Generative Adversarial Networks." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2023. https://hdl.handle.net/10281/404516.
Tissues, organs and organisms are complex biological systems. They are the object of many studies aiming to characterize their biological processes. Understanding how they work, and how they interact in healthy and unhealthy samples, makes it possible to intervene, correcting and preventing dysfunctions that could otherwise lead to disease. Recent advances in single-cell technologies are expanding our ability to profile various molecular layers at single-cell resolution, targeting the transcriptome, the genome, the epigenome and the proteome. The number of single-cell datasets, their size and the diverse modalities they describe are continuously increasing, prompting the need for robust methods to integrate multiomic datasets, whether paired from the same cells or, most challenging, unpaired from separate experiments. Integrating different sources of information yields a more comprehensive description of the whole system. Most published methods integrate only a limited number of omics (generally two) and make assumptions about their inter-relationships; they often impose the conversion of one data modality into another (e.g., ATAC peaks converted into a gene activity matrix), a step that introduces an important level of approximation and can affect the downstream analysis. Here we propose MOWGAN (Multi Omic Wasserstein Generative Adversarial Network), a deep-learning framework that simulates paired multimodal data, supports a high number of modalities (more than two) and is agnostic about their relationships (no assumption is imposed). Each modality is embedded into a feature space with the same dimensionality across all modalities, which avoids any conversion between data modalities. The embeddings are sorted based on the first Laplacian eigenmap. Mini-batches are selected by a Bayesian ridge regressor to train a Wasserstein Generative Adversarial Network with gradient penalty. The output of the generative network is used to bridge real unpaired data. MOWGAN was prototyped on public data for which paired and unpaired RNA and ATAC experiments exist. Evaluation focused on the ability to produce data integrable with the original data, on the amount of information shared between synthetic layers, and on the ability to impose associations between molecular layers that are truly connected. The organization of the embeddings in mini-batches gives MOWGAN a network architecture that is independent of the number of modalities evaluated: the framework was also successfully applied to integrate three (e.g., RNA, ATAC and protein or histone modification data) and four modalities (e.g., RNA, ATAC, protein, histone modifications). MOWGAN's performance was evaluated in terms of both computational scalability and biological meaning, the latter being the most important to avoid erroneous conclusions. A comparison with published methods concluded that MOWGAN performs better at retrieving the correct biological identities (e.g., cell types) and associations. In conclusion, MOWGAN is a powerful tool for multi-omics data integration in single-cell studies, one that answers most of the critical issues observed in the field.
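The abstract above describes a concrete training recipe: equal-dimensional per-modality embeddings, cells ordered along the first Laplacian eigenmap so that mini-batches are locally coherent, and a Wasserstein GAN trained with a gradient penalty. The fragment below is a minimal sketch of those standard building blocks, not the published MOWGAN code; the layer sizes, library choices and hyperparameters are all assumptions made for illustration.

```python
# Illustrative sketch only; not the MOWGAN implementation.
import numpy as np
import torch
import torch.nn as nn
from sklearn.manifold import SpectralEmbedding

def order_by_first_eigenmap(embedding: np.ndarray) -> np.ndarray:
    # Sort cells along the first Laplacian eigenmap, as the abstract
    # describes, so mini-batches drawn from nearby rows are coherent.
    coord = SpectralEmbedding(n_components=1).fit_transform(embedding)
    return embedding[np.argsort(coord[:, 0])]

class Critic(nn.Module):
    # Deliberately small critic over one modality's embedding space.
    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def gradient_penalty(critic: Critic, real: torch.Tensor, fake: torch.Tensor) -> torch.Tensor:
    # Standard WGAN-GP term: push the critic's gradient norm toward 1
    # on random interpolations between real and generated samples.
    eps = torch.rand(real.size(0), 1, device=real.device)
    mix = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grad, = torch.autograd.grad(critic(mix).sum(), mix, create_graph=True)
    return ((grad.norm(2, dim=1) - 1.0) ** 2).mean()
```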
Medjahed, Hamid. "Distress situation identification by multimodal data fusion for home healthcare telemonitoring." Thesis, Evry, Institut national des télécommunications, 2010. http://www.theses.fr/2010TELE0002/document.
The population is aging in all societies throughout the world. In Europe, for example, life expectancy is about 71 years for men and about 79 years for women; in North America it is currently about 75 for men and 81 for women. Moreover, the elderly prefer to preserve their independence, autonomy and way of life, living at home as long as possible. The current healthcare infrastructures in these countries are widely considered inadequate to meet the needs of an increasingly older population. Home healthcare monitoring is a solution to this problem and ensures that elderly people can live safely and independently in their own homes for as long as possible. Automatic in-home healthcare monitoring is a technological approach that helps people age in place through continuous telemonitoring. In this thesis, we explore automatic in-home healthcare monitoring by conducting a study of professionals who currently perform in-home healthcare monitoring, and by combining and synchronizing various telemonitoring modalities under a data synchronization and multimodal data fusion platform, FL-EMUTEM (Fuzzy Logic Multimodal Environment for Medical Remote Monitoring). This platform incorporates algorithms that process each modality and provides a multimodal data fusion technique, based on fuzzy logic, that ensures pervasive in-home health monitoring for elderly people. The originality of this thesis, namely the combination of various modalities concerning the home, its inhabitant and their surroundings, will be of particular benefit to elderly people suffering from loneliness. This work complements the stationary smart-home environment by bringing to bear its capability for integrative continuous observation and detection of critical situations.
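As a pointer for readers new to the technique named in this abstract: fuzzy-logic fusion combines heterogeneous sensor readings through membership functions and rules rather than hard thresholds. The toy fragment below fuses two hypothetical home-sensor readings with triangular memberships and one hand-written rule; it is a minimal sketch of the general idea, not the FL-EMUTEM platform, and every membership shape, threshold and variable name is an assumption.

```python
# Toy fuzzy-logic fusion for distress detection; not FL-EMUTEM.
def tri(x: float, a: float, b: float, c: float) -> float:
    # Triangular membership function rising on [a, b], falling on [b, c].
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def distress_degree(heart_rate: float, motion_level: float) -> float:
    # Assumed rule: IF heart rate is high AND motion is very low
    # THEN distress is high (Mamdani-style min for AND).
    hr_high = tri(heart_rate, 90.0, 130.0, 180.0)
    motion_low = tri(motion_level, -0.1, 0.0, 0.3)
    return min(hr_high, motion_low)

# Example: tachycardia while lying nearly still scores high.
print(distress_degree(heart_rate=125.0, motion_level=0.05))  # ~0.83
```

A full system would aggregate many such rules across modalities and defuzzify the combined output, but the min-based conjunction above is the essential fusion step.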
Vielzeuf, Valentin. "Apprentissage neuronal profond pour l'analyse de contenus multimodaux et temporels." Thesis, Normandie, 2019. http://www.theses.fr/2019NORMC229/document.
Our perception is by nature multimodal, i.e. it appeals to many of our senses. To solve certain tasks, it is therefore relevant to use different modalities, such as sound or image. This thesis focuses on this notion in the context of deep learning and seeks to answer a particular question: how should different modalities be merged within a deep neural network? We first study a concrete application problem: the automatic recognition of emotion in audio-visual content. This leads us to several considerations concerning the modeling of emotions, and more particularly of facial expressions; we thus propose an analysis of the representations of facial expression learned by a deep neural network. We further observe that each multimodal problem appears to require a different fusion strategy. This is why we propose and validate two methods for automatically obtaining an efficient fusion architecture for a given multimodal problem: the first is based on a central fusion network and aims to preserve an easy interpretation of the adopted fusion strategy, while the second adapts neural architecture search to multimodal fusion, exploring a greater number of strategies and therefore achieving better performance. Finally, we take a multimodal view of knowledge transfer and detail a non-traditional method for transferring knowledge from several sources, i.e. from several pre-trained models: a more general neural representation is obtained from a single model, bringing together the knowledge contained in the pre-trained models and leading to state-of-the-art performance on a variety of facial analysis tasks.
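The "central fusion network" mentioned in this abstract can be pictured as a central branch that, at each layer, mixes its own hidden state with learnably weighted hidden states from the unimodal branches. The sketch below only illustrates that idea; the two-modality setup, dimensions and scalar weighting are assumptions, not the architecture from the thesis.

```python
# Illustrative central-fusion sketch; sizes and weighting are assumptions.
import torch
import torch.nn as nn

class CentralFusion(nn.Module):
    def __init__(self, dim_a: int, dim_v: int, hidden: int, n_classes: int):
        super().__init__()
        self.audio = nn.Linear(dim_a, hidden)     # unimodal branch 1
        self.video = nn.Linear(dim_v, hidden)     # unimodal branch 2
        self.central = nn.Linear(hidden, hidden)  # central fusion branch
        # One learnable scalar per stream; reading them after training
        # shows how much each modality contributed to the fusion.
        self.alpha = nn.Parameter(torch.ones(3))
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, a: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
        ha = torch.relu(self.audio(a))
        hv = torch.relu(self.video(v))
        mixed = self.alpha[0] * ha + self.alpha[1] * hv
        central = torch.relu(self.central(self.alpha[2] * mixed))
        return self.head(central)

# Example: batch of 8 clips with 40-d audio and 512-d visual features.
logits = CentralFusion(40, 512, 64, 7)(torch.randn(8, 40), torch.randn(8, 512))
```

Keeping the fusion weights as explicit scalars is what makes the adopted strategy easy to inspect, which is the interpretability property the abstract emphasizes.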
Lazarescu, Mihai M. "Incremental learning for querying multimodal symbolic data." Thesis, Curtin University, 2000. http://hdl.handle.net/20.500.11937/1660.
Da Cruz Garcia, Nuno Ricardo. "Learning with Privileged Information using Multimodal Data." Doctoral thesis, Università degli Studi di Genova, 2020. http://hdl.handle.net/11567/997636.
Xin, Bowen. "Multimodal Data Fusion and Quantitative Analysis for Medical Applications." Thesis, The University of Sydney, 2021. https://hdl.handle.net/2123/26678.
Polsinelli, Matteo. "Modelli di Intelligenza Artificiale per l'analisi di dati da neuroimaging multimodale." Doctoral thesis, Università degli Studi dell'Aquila, 2022. http://hdl.handle.net/11697/192072.
Khan, Mohd Tauheed. "Multimodal Data Fusion Using Voice and Electromyography Data for Robotic Control." University of Toledo / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=toledo156440368925597.
Oztarak, Hakan. "Structural And Event Based Multimodal Video Data Modeling." Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/12606919/index.pdf.
Full textBooks on the topic "Dati multimodali"
Fernandes, Carla, Vito Evola, and Cláudia Ribeiro. Dance Data, Cognition, and Multimodal Communication. London: Routledge, 2022. http://dx.doi.org/10.4324/9781003106401.
Hazeldine, Lee, Gary Hazeldine, and Christian Beighton. Analysing Multimodal Data in Complex Social Spaces. London: SAGE Publications, Ltd., 2019. http://dx.doi.org/10.4135/9781526488282.
Adams, Teresa M. Guidelines for the implementation of multimodal transportation location referencing systems. Washington, D.C.: National Academy Press, 2001.
Seng, Kah Phooi, Li-minn Ang, Alan Wee-Chung Liew, and Junbin Gao, eds. Multimodal Analytics for Next-Generation Big Data Technologies and Applications. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-319-97598-6.
Grifoni, Patrizia, ed. Multimodal human computer interaction and pervasive services. Hershey, PA: Information Science Reference, 2009.
Vieten, Andrea. Monomodale und multimodale Registrierung von autoradiographischen und histologischen Bilddaten. Jülich: Forschungszentrum Jülich, Zentralbibliothek, 2005.
Vidal, Enrique, and Francisco Casacuberta, eds. Multimodal Interactive Pattern Recognition and Applications. London: Springer-Verlag London Limited, 2011.
National Research Council (U.S.), Transportation Research Board and National Cooperative Highway Research Program, eds. Multimodal level of service analysis for urban streets. Washington, D.C.: Transportation Research Board, 2008.
Baldry, Anthony, ed. Multimodality and multimediality in the distance learning age. Campobasso: Palladino, 2000.
Biswas, Pradipta. A Multimodal End-2-End Approach to Accessible Computing. London: Springer London, 2013.
Find full textBook chapters on the topic "Dati multimodali"
Bernsen, Niels Ole, and Laila Dybkjær. "Data Handling." In Multimodal Usability, 315–49. London: Springer London, 2009. http://dx.doi.org/10.1007/978-1-84882-553-6_15.
Laflen, Angela. "Learning to “Speak Data”." In Multimodal Composition, 127–43. New York: Routledge, 2021. http://dx.doi.org/10.4324/9781003163220-10.
Bernsen, Niels Ole, and Laila Dybkjær. "Usability Data Analysis and Evaluation." In Multimodal Usability, 351–85. London: Springer London, 2009. http://dx.doi.org/10.1007/978-1-84882-553-6_16.
Williams, Ross N. "A Multimodal Algorithm." In Adaptive Data Compression, 245–81. Boston, MA: Springer US, 1991. http://dx.doi.org/10.1007/978-1-4615-4046-5_5.
Palframan, Shirley. "Multimodal classroom data." In Multimodal Signs of Learning, 27–39. London: Routledge, 2021. http://dx.doi.org/10.4324/9781003198802-3.
Chen, Zhikui, Liang Zhao, Qiucen Li, Xin Song, and Jianing Zhang. "Multimodal Data Fusion." In Advances in Computing, Informatics, Networking and Cybersecurity, 53–91. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-87049-2_3.
Huang, Lihe. "Collecting and processing multimodal data." In Toward Multimodal Pragmatics, 99–108. London: Routledge, 2021. http://dx.doi.org/10.4324/9781003251774-5.
Steininger, Silke, Florian Schiel, and Susen Rabold. "Annotation of Multimodal Data." In SmartKom: Foundations of Multimodal Dialogue Systems, 571–96. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/3-540-36678-4_35.
Nie, Liqiang, Meng Liu, and Xuemeng Song. "Data Collection." In Multimodal Learning toward Micro-Video Understanding, 11–17. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-031-02255-5_2.
Hosseini, Mohammad-Parsa, Aaron Lau, Kost Elisevich, and Hamid Soltanian-Zadeh. "Multimodal Analysis in Biomedicine." In Big Data in Multimodal Medical Imaging, 193–203. Boca Raton: Chapman and Hall/CRC, 2019. http://dx.doi.org/10.1201/b22410-8.
Full textConference papers on the topic "Dati multimodali"
Sanchez-Rada, J. Fernando, Carlos A. Iglesias, Hesam Sagha, Björn Schuller, Ian Wood, and Paul Buitelaar. "Multimodal multimodel emotion analysis as linked data." In 2017 Seventh International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW). IEEE, 2017. http://dx.doi.org/10.1109/aciiw.2017.8272599.
Liao, Callie C., Duoduo Liao, and Jesse Guessford. "Multimodal Lyrics-Rhythm Matching." In 2022 IEEE International Conference on Big Data (Big Data). IEEE, 2022. http://dx.doi.org/10.1109/bigdata55660.2022.10021009.
Yang, Lixin, Genshe Chen, Ronghua Xu, Sherry Chen, and Yu Chen. "Decentralized autonomous imaging data processing using blockchain." In Multimodal Biomedical Imaging XIV, edited by Fred S. Azar, Xavier Intes, and Qianqian Fang. SPIE, 2019. http://dx.doi.org/10.1117/12.2513243.
Oosterhuis, Kas, and Arwin Hidding. "Participator, A Participatory Urban Design Instrument." In International Conference on the 4th Game Set and Match (GSM4Q-2019). Qatar University Press, 2019. http://dx.doi.org/10.29117/gsm4q.2019.0008.
Hu, Kangqiao, Abdullah Nazma Nowroz, Sherief Reda, and Farinaz Koushanfar. "High-Sensitivity Hardware Trojan Detection Using Multimodal Characterization." In Design Automation and Test in Europe. New Jersey: IEEE Conference Publications, 2013. http://dx.doi.org/10.7873/date.2013.263.
Mahmood, Faisal, Daniel Borders, Richard Chen, Jordan Sweer, Steven Tilley, Norman S. Nishioka, J. Webster Stayman, and Nicholas J. Durr. "Robust photometric stereo endoscopy via deep learning trained on synthetic data (Conference Presentation)." In Multimodal Biomedical Imaging XIV, edited by Fred S. Azar, Xavier Intes, and Qianqian Fang. SPIE, 2019. http://dx.doi.org/10.1117/12.2509878.
Lampkins, Joshua, Darren Chan, Alan Perry, Sasha Strelnikoff, Jiejun Xu, and Alireza Esna Ashari. "Multimodal Road Sign Interpretation for Autonomous Vehicles." In 2022 IEEE International Conference on Big Data (Big Data). IEEE, 2022. http://dx.doi.org/10.1109/bigdata55660.2022.10020808.
Kohankhaki, Mohammad, Ahmad Ayad, Mahdi Barhoush, Bastian Leibe, and Anke Schmeink. "Radiopaths: Deep Multimodal Analysis on Chest Radiographs." In 2022 IEEE International Conference on Big Data (Big Data). IEEE, 2022. http://dx.doi.org/10.1109/bigdata55660.2022.10020356.
Kouvaras, George, and George Kokolakis. "Random Multivariate Multimodal Distributions." In Recent Advances in Stochastic Modeling and Data Analysis. WORLD SCIENTIFIC, 2007. http://dx.doi.org/10.1142/9789812709691_0009.
Marte Zorrilla, Edwin, Idalis Villanueva, Jenefer Husman, and Matthew Graham. "Generating a Multimodal Dataset Using a Feature Extraction Toolkit for Wearable and Machine Learning: A pilot study." In 13th International Conference on Applied Human Factors and Ergonomics (AHFE 2022). AHFE International, 2022. http://dx.doi.org/10.54941/ahfe1001448.
Full textReports on the topic "Dati multimodali"
Linville, Lisa M., Joshua James Michalenko, and Dylan Zachary Anderson. Multimodal Data Fusion via Entropy Minimization. Office of Scientific and Technical Information (OSTI), March 2020. http://dx.doi.org/10.2172/1614682.
Wu, Yao-Jan, Xianfeng Yang, Sirisha Kothuri, Abolfazl Karimpour, Qinzheng Wang, and Jason Anderson. Data-Driven Mobility Strategies for Multimodal Transportation. Transportation Research and Education Center (TREC), 2021. http://dx.doi.org/10.15760/trec.262.
Folds, Dennis J., Carl T. Blunt, and Raymond M. Stanley. Training for Rapid Interpretation of Voluminous Multimodal Data. Fort Belvoir, VA: Defense Technical Information Center, April 2008. http://dx.doi.org/10.21236/ada480522.
Hillsman, Edward. Enabling Cost-Effective Multimodal Trip Planners through Open Transit Data. Tampa, FL: University of South Florida, May 2011. http://dx.doi.org/10.5038/cutr-nctr-rr-2010-05.
Barbeau, Sean. Improving the Quality and Cost Effectiveness of Multimodal Travel Behavior Data Collection. Tampa, FL: University of South Florida, February 2018. http://dx.doi.org/10.5038/cutr-nctr-rr-2018-10.
Balali, Vahid, Arash Tavakoli, and Arsalan Heydarian. A Multimodal Approach for Monitoring Driving Behavior and Emotions. Mineta Transportation Institute, July 2020. http://dx.doi.org/10.31979/mti.2020.1928.
Zhang, Yongping, Wen Cheng, and Xudong Jia. Enhancement of Multimodal Traffic Safety in High-Quality Transit Areas. Mineta Transportation Institute, February 2021. http://dx.doi.org/10.31979/mti.2021.1920.
Chen, Maximillian Gene, Michael Christopher Darling, and David John Stracuzzi. Preliminary Results on Applying Nonparametric Clustering and Bayesian Consensus Clustering Methods to Multimodal Data. Office of Scientific and Technical Information (OSTI), September 2018. http://dx.doi.org/10.2172/1475256.
Boero, Riccardo, Peter Thomas Hraber, Kimberly Ann Kaufeld, Elisabeth Ann Moore, Ethan Romero-Severson, John Joseph Ambrosiano, John Leslie Whitton, and Benjamin Hayden Sims. Analysis of Multimodal Wearable Sensor Data to Characterize Social Groups and Influence in Organizations. Office of Scientific and Technical Information (OSTI), October 2019. http://dx.doi.org/10.2172/1570596.
Tufte, Kristin. Multimodal Data at Signalized Intersections: Strategies for Archiving Existing and New Data Streams to Support Operations and Planning Fusion and Integration of Arterial Performance Data. Portland State University Library, September 2013. http://dx.doi.org/10.15760/trec.46.