Scientific literature on the topic "Dati multimodali"

Create an accurate reference in APA, MLA, Chicago, Harvard, and several other styles

Choose a source type:

Consult the thematic lists of journal articles, books, theses, conference reports, and other academic sources on the topic "Dati multimodali".

Next to every source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication as a PDF and read its abstract online whenever this information is included in the metadata.

Journal articles on the topic "Dati multimodali"

1

Fatigante, Marilena, Cristina Zucchermaglio, Francesca Alby, and Mariacristina Nutricato. "La struttura della prima visita oncologica: uno studio conversazionale." PSICOLOGIA DELLA SALUTE, no. 1 (January 2021): 53–77. http://dx.doi.org/10.3280/pds2021-001005.

Full text
Abstract:
The first oncological visit is a highly complex institutional event (Drew and Heritage, 1992) for the physician as well as for the patient and the companions involved. This paper presents a conversation-analytic study of a corpus of first oncological visits, aimed at examining the distinct phases that make up this event. The corpus consists of 36 video recordings of oncological visits conducted by two senior oncologists in two different hospitals in Rome. The study adopts the theoretical and methodological perspective of Conversation Analysis (Schegloff, 2007), in particular as applied to the medical context (Heritage and Maynard, 2006). Starting from empirical discursive and multimodal indicators, eight phases were identified through which, with varying durations and complexity, the oncological visit unfolds: Opening, History-taking, Presentation of the disease, Staging, Treatment indication, Prescriptions, and Closing. The qualitative analysis shows how patient and companion orient to the transitions between distinct phases, cooperating with the physician to accomplish the specific agenda of activities of the visit. The implications of the study are discussed for research on, and the understanding of, the forms of participation and the empowerment strategies available to patients for coping with the communicative complexity of the encounter with the oncologist and for reducing the states of anxiety that may be associated with this event.
2

Karatayli-Ozgursoy, S., J. A. Bishop, A. T. Hillel, L. M. Akst, and S. R. Best. "Tumori maligni delle ghiandole salivari della laringe: un'unica review istituzionale." Acta Otorhinolaryngologica Italica 36, no. 4 (August 2016): 289–94. http://dx.doi.org/10.14639/0392-100x-807.

Full text
Abstract:
Salivary gland-type tumours of the larynx are very rare, and few reports in the literature describe their clinical course. In this paper we discuss a ten-year experience at a single institution. We conducted a retrospective review of the case series of a tertiary head and neck oncology centre. Patients were identified through a database search, and their specimens were reviewed by a head and neck pathologist. Clinical data, treatment modalities and outcomes were retrieved from electronic records. Six patients were included in the study, ranging in age from 44 to 69 years. All six had malignant salivary gland-type neoplasms of the larynx. Histotypes included three adenoid cystic carcinomas (2 supraglottic, 1 subglottic), one mucoepidermoid carcinoma (supraglottic), one epithelial-myoepithelial carcinoma (supraglottic), and one adenocarcinoma (transglottic). All underwent surgical treatment (2 laser, 4 open surgeries), and 5 of the 6 patients subsequently received adjuvant therapy (4 radiotherapy, 1 concurrent chemoradiotherapy). One patient was a smoker; no patient had a history of alcohol abuse. At a median follow-up of 4.5 years, none of the patients had presented local or distant recurrence or metastasis. Salivary gland-type tumours of the larynx usually present in middle-aged and older patients and can be successfully treated with multimodal approaches, with excellent locoregional disease control.
3

Moneglia, Massimo. "Le unità di informazione Parentetiche alla periferia destra del Comment nella Teoria della Lingua in Atto." DILEF. Rivista digitale del Dipartimento di Lettere e Filosofia, no. 1 (March 27, 2022): 88–123. http://dx.doi.org/10.35948/dilef/2022.3294.

Full text
Abstract:
According to the Language Into Act Theory, Parenthetical units insert into the utterance information placed on a secondary locutive plane with respect to the Topic/Comment relation. They can appear in all positions of the utterance except the first and are never compositional with another information unit. Because of prosodic similarities, however, Parentheses placed at the right periphery of the Comment can be confused with Appendix units, which on the contrary express redundant information complementing the Comment. On the basis of the IPIC Information Structure Database, the paper explores the reasons that allow their detection in Italian spontaneous speech corpora. Because of their semantic values, modal evaluations and metalinguistic commentaries (the large majority of short parentheses in speech) introduce a jump in the perspective of the utterance that automatically places them on a secondary locutive plane, and they can never be Appendixes. On the contrary, other types of expressions that can also fill the Parenthetical unit (conjunctions, if-clauses, adverbials and external arguments, long parentheses) can in principle also be integrations of the Comment. The paper argues that the parenthetical interpretation of these information units remains underdetermined unless the speaker signals this value through prosodic or multimodal cues. Moreover, according to our findings, long parentheses in final position are not performed as information units of the utterance but rather as fully autonomous "parenthetical utterances".
4

Spaliviero, Camilla. "Teaching Italian as a second language through digital storytelling: Students' perceptions towards izi.TRAVEL." EuroAmerican Journal of Applied Linguistics and Languages 9, no. 1 (April 10, 2022): 91–121. http://dx.doi.org/10.21283/2376905x.15.1.265.

Full text
Abstract:
The use of technology-enhanced language learning, an urgent issue due to the Covid-19 pandemic, has also been promoted by many studies in second language acquisition. Nevertheless, research in this field is only partially developed for the teaching of Italian as a second language (L2) within the university context and for investigating students' perceptions. This article presents an action research project on the use of izi.TRAVEL, a website housing more than 15,000 audio guides for touring various sites in cities around the world. The aim of the study is to contribute to developing didactic practices for Italian as an L2 through digital storytelling, in order to raise and foster students' linguistic and digital skills. Participants were a small group of students studying Italian as an L2 as part of a master's program at an Italian university. Data were collected through a questionnaire, a focus group, and students' multimodal artifacts. Results show the positive impact of project participation on students' attitudes and perceived learning outcomes, as well as improvements in linguistic, cultural, environmental, and digital competences. Key words: TEACHING ITALIAN AS AN L2, DIGITAL STORYTELLING, IZI.TRAVEL, STUDENTS' PERCEPTIONS, ACTION RESEARCH
5

Deng, Wan-Yu, Dan Liu, and Ying-Ying Dong. "Feature Selection and Classification for High-Dimensional Incomplete Multimodal Data." Mathematical Problems in Engineering 2018 (August 12, 2018): 1–9. http://dx.doi.org/10.1155/2018/1583969.

Full text
Abstract:
Due to missing values, incomplete datasets are ubiquitous in multimodal settings, yet complete data is a prerequisite of most existing multimodal data fusion methods. For incomplete multimodal high-dimensional data, we propose a feature selection and classification method. Our method focuses on extracting the most relevant features from the high-dimensional feature set and thereby improving classification accuracy. Experimental results show that our method produces considerably better performance on incomplete multimodal data, such as the ADNI and Office datasets, compared to the complete-data case.
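As a rough illustration of the pipeline this abstract describes (select the most relevant features from high-dimensional, partially missing multimodal data, then classify), here is a minimal Python sketch. The mean imputation and univariate selection steps are illustrative assumptions, not the authors' actual algorithm.

```python
# Generic sketch, NOT the paper's method: impute missing multimodal
# values, keep the most relevant features, then classify.
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 500))        # 200 samples, 500 stacked multimodal features
X[rng.random(X.shape) < 0.3] = np.nan  # 30% missing values (incomplete modalities)
y = rng.integers(0, 2, size=200)

model = Pipeline([
    ("impute", SimpleImputer(strategy="mean")),  # handle incompleteness
    ("select", SelectKBest(f_classif, k=50)),    # keep the 50 most relevant features
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X, y)
print(round(model.score(X, y), 3))
```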
6

Amundrud, Thomas. "Multimodal knowledge building in a Japanese secondary English as a foreign language class." Multimodality & Society 2, no. 1 (March 2022): 64–85. http://dx.doi.org/10.1177/26349795221081300.

Full text
Abstract:
Multimodal analysis examines how different modes, such as space, gesture, and language, instantiate meaning together. In this paper, a Systemic Functional-Multimodal Discourse Analysis demonstrates how teachers enact their pedagogy with their students across modes through what is represented experientially, how relationships between people are construed interpersonally, and how coherent texts are realized textually. This paper is a preliminary study of classroom data from a larger project looking at the multimodal pedagogy of Japanese secondary school teachers of English through the paired lenses of Systemic Functional-Multimodal Discourse Analysis and Legitimation Code Theory. It demonstrates how methods from these perspectives may be productively combined. How this teacher builds cumulative knowledge multimodally can be uncovered through the analysis of pedagogic register (Rose, 2018) and exchange (Berry, 1981; Martin and Rose, 2007), as well as classroom space and representing and textual action (Amundrud, 2017; Martin and Zappavigna, 2019). How both gesture and dialogic exchange between the teacher and students modulate the contextual relation of the knowledge construed in class is also explored via semantic gravity, which looks at how closely connected knowledge practices are to their context (Maton, 2014). As a preliminary study, the paper closes with limitations and future directions for this pedagogic multimodality research.
7

Wan, Huan, Hui Wang, Bryan Scotney, Jun Liu, and Wing W. Y. Ng. "Within-class multimodal classification." Multimedia Tools and Applications 79, no. 39–40 (August 11, 2020): 29327–52. http://dx.doi.org/10.1007/s11042-020-09238-1.

Full text
Abstract:
In many real-world classification problems there exist multiple subclasses (or clusters) within a class; in other words, the underlying data distribution is within-class multimodal. One example is face recognition, where a face (i.e. a class) may be presented in frontal view or side view, corresponding to different modalities. This issue has been largely ignored in the literature, or at least understudied. How to address the within-class multimodality issue is still an unsolved problem. In this paper, we present an extensive study of within-class multimodal classification. This study is guided by a number of research questions and conducted through experimentation on artificial data and real data. In addition, we establish a case for within-class multimodal classification that is characterised by the concurrent maximisation of between-class separation, between-subclass separation and within-class compactness. Extensive experimental results show that within-class multimodal classification consistently leads to significant performance gains when within-class multimodality is present in data. Furthermore, it has been found that within-class multimodal classification offers a competitive solution to face recognition under different lighting and face pose conditions. It is our opinion that the case for within-class multimodal classification is established; therefore, there is a milestone to be achieved in some machine learning algorithms (e.g. Gaussian mixture models) when within-class multimodal classification, or part of it, is pursued.
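The core idea can be sketched generically: discover the modes (subclasses) within each class by clustering, train a classifier on the subclass labels, and map predictions back to the parent classes. In this hedged Python sketch, the k-means step merely stands in for the paper's separation and compactness criteria.

```python
# Illustrative sketch of within-class multimodal classification:
# cluster each class into subclasses, classify at subclass level,
# then map back to the parent class. Not the authors' exact method.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

def fit_within_class_multimodal(X, y, modes_per_class=2):
    sub_labels = np.empty(len(y), dtype=int)
    sub_to_class, next_id = {}, 0
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        km = KMeans(n_clusters=modes_per_class, n_init=10).fit(X[idx])
        for m in range(modes_per_class):
            sub_to_class[next_id + m] = c      # remember each subclass's parent
        sub_labels[idx] = km.labels_ + next_id
        next_id += modes_per_class
    clf = LogisticRegression(max_iter=1000).fit(X, sub_labels)
    return clf, sub_to_class

def predict_class(clf, sub_to_class, X):
    # Predict at subclass level, then report the parent class.
    return np.array([sub_to_class[s] for s in clf.predict(X)])
```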
8

Silvestri, Katarina, Mary McVee, Christopher Jarmark, Lynn Shanahan, and Kenneth English. "Multimodal positioning of artifacts in interaction in a collaborative elementary engineering club." Multimodal Communication 10, no. 3 (December 1, 2021): 289–309. http://dx.doi.org/10.1515/mc-2020-0017.

Full text
Abstract:
This exploratory case study uses multimodal positioning analysis to determine and describe how a purposefully crafted emergent artifact comes to influence and/or manipulate social dynamics, structure, and positionings of one design team comprised of five third-graders in an afterschool elementary engineering and literacy club. In addition to social semiotic theories of multimodality (e.g., Kress, G. (2010). Multimodality: a social semiotic approach to contemporary communication. New York, NY: Routledge) and multimodal interactional analysis (Norris, S. (2004). Analyzing multimodal interaction: a methodological framework. New York, NY: Routledge; Norris, S. (2019). Systematically working with multimodal data: research methods in multimodal discourse analysis. Hoboken, NJ: Wiley-Blackwell), Positioning Theory (Harré, R. and Van Langenhove, L. (1991). Varieties of positioning. J. Theor. Soc. Behav. 21: 393–407) is used to examine group interactions with the artifact, with observational data collected from audio, video, researcher field notes, analytic memos, photographs, student artifacts (e.g., drawn designs, built designs), and transcriptions of audio and video data. Analysis of interactions with the artifact as they unfold demonstrates multiple types of role-based positioning among students (e.g., builder, helper, idea-sharer). Foregrounding analysis of the artifact, rather than the student participants, exposed students' alignment or opposition with their groupmates during the project. This study contributes to multimodal and artifactual scholarship through a close examination of positions emergent across time through multimodal communicative actions and illustrates how perspectives on multimodality may be analytically combined with Positioning Theory.
9

Farías, Miguel, and Leonardo Véliz. "Multimodal Texts in Chilean English Teaching Education: Experiences From Educators and Pre-Service Teachers." Profile: Issues in Teachers' Professional Development 21, no. 2 (July 1, 2019): 13–27. http://dx.doi.org/10.15446/profile.v21n2.75172.

Full text
Abstract:
Drawing on 10 pedagogical standards issued by the Chilean Ministry of Education, three of which deal with multimodality, in this research we examined English language pre-service teachers' and educators' approaches to the use of multimodal texts. Data were gathered through two online surveys that explored the use of multimodal texts by teacher educators and pre-service teachers. Results indicate that educators were familiar with the standards and with multimodality when teaching reading and writing, but that a lack of resources, preparation, and time prevents them from working with multimodal texts. Candidates read printed and digital newspapers, novels, and magazines outside university, but rarely use them academically. They use social media extensively, even for academic purposes. There is a mismatch between the use of multimodal texts by teacher candidates and teacher educators.
10

Abdullah, Fuad, Arini Nurul Hidayati, Agis Andriani, Dea Silvani, Ruslan Ruslan, Soni T. Tandiana, and Nina Lisnawati. "Fostering students' Multimodal Communicative Competence through genre-based multimodal text analysis." Studies in English Language and Education 9, no. 2 (May 23, 2022): 632–50. http://dx.doi.org/10.24815/siele.v9i2.23440.

Full text
Abstract:
The multiplicity of semiotic resources employed in communication, the rapid advancement of information and communication technology (ICT), and burgeoning interdisciplinary research into multimodality have led to a paradigmatic shift from a mono-modal to a multimodal perspective on communication. Conversely, actualising multimodal concepts in teaching and learning practices remains underexplored, notably in developing students' multimodal communicative competence (MCC). For this reason, this study probed genre-based multimodal text analysis (GBMTA) as a means of fostering students' MCC. Grounded in Action Research (AR), the present study helped students cultivate their MCC through GBMTA activities in the Grammar in Multimodal Discourse (GiMD) course at the English Education Department of a state university in Tasikmalaya, West Java, Indonesia. Four Indonesian EFL students were recruited as participants. The data were collected through semi-structured interviews and analysed with thematic analysis. The findings showed that the students could: (1) build their knowledge of multimodality, (2) engage with theoretical and practical learning activities, (3) carry out analytical and reflective task-based learning activities, (4) provide constructive feedback about their learning performances, and (5) raise awareness of the contributions of multimodality to prospective English teachers' competences. The main implication of this study is the promotion of increased awareness of deploying multimodal aspects in English language teaching, learning, and research practices to attain optimal MCC.

Theses on the topic "Dati multimodali"

1

Giansanti, Valentina. "Integration of heterogeneous single cell data with Wasserstein Generative Adversarial Networks." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2023. https://hdl.handle.net/10281/404516.

Full text
Abstract:
Tissues, organs and organisms are complex biological systems, the object of many studies aiming at characterizing their biological processes. Understanding how they work and interact in healthy and unhealthy samples makes it possible to interfere with, correct and prevent dysfunctions, possibly leading to diseases. Recent advances in single-cell technologies are expanding our capability to profile various molecular layers at single-cell resolution, targeting the transcriptome, the genome, the epigenome and the proteome. The number of single-cell datasets, their size and the diverse modalities they describe are continuously increasing, prompting the need to develop robust methods to integrate multiomic datasets, whether paired from the same cells or, most challenging, unpaired from separate experiments. Integrating different sources of information yields a more comprehensive description of the whole system. Most published methods allow the integration of only a limited number of omics (generally two) and make assumptions about their inter-relationships. They often impose the conversion of one data modality into another (e.g., ATAC peaks converted into a gene activity matrix). This step introduces an important level of approximation, which could affect the analyses performed later. Here we propose MOWGAN (Multi Omic Wasserstein Generative Adversarial Network), a deep-learning based framework to simulate paired multimodal data that supports a high number of modalities (more than two) and is agnostic about their relationships (no assumption is imposed). Each modality is embedded into a feature space with the same dimensionality across all modalities, which prevents any conversion between data modalities. The embeddings are sorted based on the first Laplacian eigenmap. Mini-batches are selected by a Bayesian ridge regressor to train a Wasserstein Generative Adversarial Network with gradient penalty. The output of the generative network is used to bridge real unpaired data. MOWGAN was prototyped on public data for which paired and unpaired RNA and ATAC experiments exist. Evaluation was based on the ability to produce data integrable with the original ones, on the amount of shared information between synthetic layers, and on the ability to impose associations between molecular layers that are truly connected. The organization of the embeddings in mini-batches gives MOWGAN a network architecture independent of the number of modalities evaluated. Indeed, the framework was also successfully applied to integrate three (e.g., RNA, ATAC and protein or histone modification data) and four modalities (e.g., RNA, ATAC, protein, histone modifications). MOWGAN's performance was evaluated in terms of both computational scalability and biological meaning, the latter being the most important to avoid erroneous conclusions. A comparison with published methods showed that MOWGAN performs better at retrieving the correct biological identities (e.g., cell types) and associations. In conclusion, MOWGAN is a powerful tool for multi-omics data integration in single-cell, which answers most of the critical issues observed in the field.
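The generative core named in the abstract, a Wasserstein GAN with gradient penalty, can be sketched as follows in PyTorch. Layer sizes, the shared embedding dimension and the noise dimension are placeholder assumptions; the real framework additionally sorts cells by the first Laplacian eigenmap and picks mini-batches with a Bayesian ridge regressor, both omitted here.

```python
# Minimal WGAN-GP sketch (assumed sizes), in the spirit of MOWGAN's core.
import torch
import torch.nn as nn

dim = 32  # shared embedding dimensionality across modalities (assumption)
G = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, dim))  # generator
D = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, 1))   # critic

def gradient_penalty(D, real, fake):
    # Penalize the critic's gradient norm on interpolated samples.
    eps = torch.rand(real.size(0), 1)
    x = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grad = torch.autograd.grad(D(x).sum(), x, create_graph=True)[0]
    return ((grad.norm(2, dim=1) - 1) ** 2).mean()

real = torch.randn(16, dim)      # stand-in for one mini-batch of embedded cells
noise = torch.randn(16, 64)
fake = G(noise).detach()         # critic step: generator frozen
d_loss = D(fake).mean() - D(real).mean() + 10.0 * gradient_penalty(D, real, fake)
g_loss = -D(G(noise)).mean()     # generator step: fool the critic
```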
2

Medjahed, Hamid. "Distress situation identification by multimodal data fusion for home healthcare telemonitoring." Thesis, Evry, Institut national des télécommunications, 2010. http://www.theses.fr/2010TELE0002/document.

Full text
Abstract:
The population ages in all societies throughout the world. In Europe, for example, life expectancy is about 71 years for men and about 79 years for women; in North America it is currently about 75 for men and 81 for women. Moreover, the elderly prefer to preserve their independence, autonomy and way of life, living at home as long as possible. The current healthcare infrastructures in these countries are widely considered inadequate to meet the needs of an increasingly older population. Home healthcare monitoring is a solution to this problem and ensures that elderly people can live safely and independently in their own homes for as long as possible. Automatic in-home healthcare monitoring is a technological approach that helps people age in place through continuous telemonitoring. In this thesis, we explore automatic in-home healthcare monitoring by conducting a study of professionals who currently perform in-home healthcare monitoring, and by combining and synchronizing various telemonitoring modalities under a data synchronization and multimodal data fusion platform, FL-EMUTEM (Fuzzy Logic Multimodal Environment for Medical Remote Monitoring). This platform incorporates algorithms that process each modality and provides a multimodal data fusion technique based on fuzzy logic, which can ensure pervasive in-home health monitoring for elderly people. The originality of this work, the combination of various modalities concerning the home, its inhabitant and their surroundings, constitutes an interesting benefit for elderly persons suffering from loneliness. This work complements the stationary smart home environment by bringing to bear its capability for integrative continuous observation and detection of critical situations.
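As a purely illustrative sketch of the kind of fuzzy-logic fusion the abstract refers to (it is not the FL-EMUTEM implementation, and the modalities, thresholds and fusion rule below are invented), each sensor reading can be mapped to a [0, 1] distress membership and then aggregated across modalities:

```python
# Hypothetical fuzzy fusion over telemonitoring modalities.
def tri_membership(x, lo, peak, hi):
    """Triangular fuzzy membership function."""
    if x <= lo or x >= hi:
        return 0.0
    return (x - lo) / (peak - lo) if x < peak else (hi - x) / (hi - peak)

# Assumed example modalities: heart rate, fall acceleration, sound level.
readings = {"heart_rate": 128.0, "fall_accel": 2.6, "sound_db": 78.0}
distress = {
    "heart_rate": tri_membership(readings["heart_rate"], 90, 140, 190),
    "fall_accel": tri_membership(readings["fall_accel"], 1.5, 3.0, 6.0),
    "sound_db":   tri_membership(readings["sound_db"], 60, 90, 120),
}
# Toy fusion rule: combine the strongest single cue (fuzzy OR, max)
# with the agreement across modalities (mean).
score = 0.5 * max(distress.values()) + 0.5 * sum(distress.values()) / len(distress)
print("distress" if score > 0.5 else "normal", round(score, 2))
```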
3

Vielzeuf, Valentin. "Apprentissage neuronal profond pour l'analyse de contenus multimodaux et temporels." Thesis, Normandie, 2019. http://www.theses.fr/2019NORMC229/document.

Texte intégral
Résumé :
Our perception is by nature multimodal, i.e. it appeals to many of our senses. To solve certain tasks, it is therefore relevant to use different modalities, such as sound or image. This thesis focuses on this notion in the context of deep learning and seeks to answer a particular question: how should the different modalities be merged within a deep neural network? We first study a concrete application problem: the automatic recognition of emotion in audio-visual content. This leads us to different considerations concerning the modeling of emotions, and more particularly of facial expressions; we thus propose an analysis of the representations of facial expression learned by a deep neural network. In addition, we observe that each multimodal problem appears to require a different fusion strategy. This is why we propose and validate two methods to automatically obtain an efficient fusion architecture for a given multimodal problem: the first is based on a central fusion network and aims at preserving an easy interpretation of the adopted fusion strategy, while the second adapts neural architecture search to the case of multimodal fusion, exploring a greater number of strategies and therefore achieving better performance. Finally, we consider a multimodal view of knowledge transfer. We detail a non-traditional method to transfer knowledge from several sources, i.e. from several pre-trained models: a more general neural representation is obtained from a single model, which brings together the knowledge contained in the pre-trained models and leads to state-of-the-art performance on a variety of facial analysis tasks.
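The design question the thesis addresses, where to merge modalities inside a network, can be illustrated with a simple late-fusion module in PyTorch. The layer sizes are arbitrary assumptions; a central-fusion variant in the spirit of the first proposed method would instead exchange a shared representation at several depths.

```python
# Hedged sketch of one point in the fusion design space: late fusion.
import torch
import torch.nn as nn

class LateFusion(nn.Module):
    """Each modality gets its own encoder; features are merged once."""
    def __init__(self, d_audio=40, d_video=512, d_out=7):
        super().__init__()
        self.enc_a = nn.Sequential(nn.Linear(d_audio, 64), nn.ReLU())
        self.enc_v = nn.Sequential(nn.Linear(d_video, 64), nn.ReLU())
        self.head = nn.Linear(128, d_out)  # classifier over concatenated features

    def forward(self, a, v):
        return self.head(torch.cat([self.enc_a(a), self.enc_v(v)], dim=1))

logits = LateFusion()(torch.randn(8, 40), torch.randn(8, 512))
```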
4

Lazarescu, Mihai M. "Incremental learning for querying multimodal symbolic data." Thesis, Curtin University, 2000. http://hdl.handle.net/20.500.11937/1660.

Full text
Abstract:
In this thesis we present an incremental learning algorithm for learning and classifying the pattern of movement of multiple objects in a dynamic scene. The method we describe is based on symbolic representations of the patterns. The typical representation has a spatial component that describes the relationships of the objects and a temporal component that describes the ordering of the actions of the objects in the scene. The incremental learning algorithm (ILF) uses evidence-based forgetting, generates compact concept structures and can track concept drift. We also present two novel algorithms that combine incremental learning and image analysis. The first algorithm is used in an American Football application and shows how natural language parsing can be combined with image processing and expert background knowledge to address the difficult problem of classifying and learning American Football plays. We present in detail the model developed to represent American Football plays, the parser used to process the transcript of the American Football commentary, and the algorithms developed to label the players and classify the queries. The second algorithm is used in a cricket application. It combines incremental machine learning and camera motion estimation to classify and learn common cricket shots. We describe the method used to extract and convert the camera motion parameter values to symbolic form and the processing involved in learning the shots. Finally, we explore the issues that arise from combining incremental learning with incremental recognition. Two methods that combine incremental recognition and incremental learning are presented, along with a comparison between the algorithms.
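A toy sketch of the evidence-based forgetting idea attributed to ILF, with all details invented for illustration: each stored concept carries an evidence weight that decays unless re-observed, so stale concepts fade and concept drift can be tracked.

```python
# Hypothetical sketch of incremental learning with evidence-based forgetting.
class IncrementalLearner:
    def __init__(self, decay=0.9, drop_below=0.1):
        self.concepts = {}          # symbolic pattern -> evidence weight
        self.decay = decay
        self.drop_below = drop_below

    def observe(self, pattern):
        # Decay all stored concepts, dropping those with too little evidence,
        # then reinforce the pattern just observed.
        for p in list(self.concepts):
            self.concepts[p] *= self.decay
            if self.concepts[p] < self.drop_below:
                del self.concepts[p]          # forget weak concepts
        self.concepts[pattern] = self.concepts.get(pattern, 0.0) + 1.0

    def knows(self, pattern):
        return pattern in self.concepts

learner = IncrementalLearner()
for obs in ["A-left-of-B", "A-left-of-B", "B-above-C"]:
    learner.observe(obs)          # spatial relations as symbolic patterns
print(learner.concepts)
```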
5

Lazarescu, Mihai M. "Incremental learning for querying multimodal symbolic data." Thesis, Curtin University of Technology, School of Computing, 2000. http://espace.library.curtin.edu.au:80/R/?func=dbin-jump-full&object_id=10010.

Full text
Abstract:
In this thesis we present an incremental learning algorithm for learning and classifying the pattern of movement of multiple objects in a dynamic scene. The method we describe is based on symbolic representations of the patterns. The typical representation has a spatial component that describes the relationships of the objects and a temporal component that describes the ordering of the actions of the objects in the scene. The incremental learning algorithm (ILF) uses evidence-based forgetting, generates compact concept structures and can track concept drift. We also present two novel algorithms that combine incremental learning and image analysis. The first algorithm is used in an American Football application and shows how natural language parsing can be combined with image processing and expert background knowledge to address the difficult problem of classifying and learning American Football plays. We present in detail the model developed to represent American Football plays, the parser used to process the transcript of the American Football commentary, and the algorithms developed to label the players and classify the queries. The second algorithm is used in a cricket application. It combines incremental machine learning and camera motion estimation to classify and learn common cricket shots. We describe the method used to extract and convert the camera motion parameter values to symbolic form and the processing involved in learning the shots. Finally, we explore the issues that arise from combining incremental learning with incremental recognition. Two methods that combine incremental recognition and incremental learning are presented, along with a comparison between the algorithms.
6

Da Cruz Garcia, Nuno Ricardo. "Learning with Privileged Information using Multimodal Data." Doctoral thesis, Università degli studi di Genova, 2020. http://hdl.handle.net/11567/997636.

Full text
Abstract:
Computer vision is the science of teaching machines to see and understand digital images or videos. During the last decade, computer vision has seen tremendous progress on perception tasks such as object detection, semantic segmentation, and video action recognition, which led to the development and improvement of important industrial applications such as self-driving cars and medical image analysis. These advances are mainly due to fast computation offered by GPUs, the development of high-capacity models such as deep neural networks, and the availability of large datasets, often composed of a variety of modalities. In this thesis, we explore how multimodal data can be used to train deep convolutional neural networks. Humans perceive the world through multiple senses and reason over the multimodal space of stimuli to act and understand the environment. One way to improve the perception capabilities of deep learning methods is to use different modalities as input, as they offer different and complementary information about the scene. Recent multimodal datasets for computer vision tasks include modalities such as depth maps, infrared, skeleton coordinates, and others, besides the traditional RGB. This thesis investigates deep learning systems that learn from multiple visual modalities. In particular, we are interested in a very practical scenario in which an input modality is missing at test time. The question we address is the following: how can we take advantage of multimodal datasets for training our model, knowing that, at test time, a modality might be missing or too noisy? The case of having access to more information at training time than at test time is referred to as learning using privileged information. In this work, we develop methods to address this challenge, with special focus on the tasks of action and object recognition, and on the modalities of depth, optical flow, and RGB, which we use for inference at test time. This thesis advances the art of multimodal learning in three different ways. First, we develop a deep learning method for video classification that is trained on RGB and depth data and is able to hallucinate depth features and predictions at test time. Second, we build on this method and propose a more generic mechanism based on adversarial learning that learns to mimic the predictions originated by the depth modality and can automatically switch from true depth features to generated depth features in case of a noisy sensor. Third, we develop a method that learns a single network trained on RGB data, enriched with additional supervision from other modalities such as depth and optical flow at training time, which outperforms an ensemble of networks trained independently on these modalities.
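The hallucination mechanism can be sketched as feature mimicking: a branch fed with RGB alone learns to reproduce the features of a depth encoder, so depth may be missing at test time. The encoders below are placeholder MLPs standing in for the deep video networks used in the thesis.

```python
# Hedged sketch of learning with privileged (depth) information.
import torch
import torch.nn as nn

depth_enc = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 64))
halluc    = nn.Sequential(nn.Linear(512, 128), nn.ReLU(), nn.Linear(128, 64))

rgb, depth = torch.randn(16, 512), torch.randn(16, 256)
with torch.no_grad():
    target = depth_enc(depth)            # privileged features, training only
mimic_loss = nn.functional.mse_loss(halluc(rgb), target)
mimic_loss.backward()                    # train the hallucination branch
# At test time: features = halluc(rgb), no depth sensor required.
```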
7

Xin, Bowen. "Multimodal Data Fusion and Quantitative Analysis for Medical Applications." Thesis, The University of Sydney, 2021. https://hdl.handle.net/2123/26678.

Full text
Abstract:
Medical big data is not only enormous in size but also heterogeneous and complex in structure, which makes it difficult for conventional systems or algorithms to process. These heterogeneous medical data include imaging data (e.g., Positron Emission Tomography (PET), Computerized Tomography (CT), Magnetic Resonance Imaging (MRI)) and non-imaging data (e.g., laboratory biomarkers, electronic medical records, and hand-written doctor notes). Multimodal data fusion is an emerging vital field addressing this urgent challenge, aiming to process and analyze complex, diverse and heterogeneous multimodal data. Fusion algorithms bring great potential to medical data analysis by 1) taking advantage of complementary information from different sources (such as the functional-structural complementarity of PET/CT images) and 2) exploiting consensus information that reflects the intrinsic essence (such as the genetic essence underlying medical imaging and clinical symptoms). Thus, multimodal data fusion benefits a wide range of quantitative medical applications, including personalized patient care, more optimal medical operation plans, and preventive public health. Though there has been extensive research on computational approaches for multimodal fusion, three major challenges remain in quantitative medical applications, summarized as feature-level fusion, information-level fusion and knowledge-level fusion:
• Feature-level fusion. The first challenge is to mine multimodal biomarkers from high-dimensional small-sample multimodal medical datasets, which hinders the effective discovery of informative multimodal biomarkers. Specifically, efficient dimension reduction algorithms are required to alleviate the "curse of dimensionality" problem and to meet the criteria for discovering interpretable, relevant, non-redundant and generalizable multimodal biomarkers.
• Information-level fusion. The second challenge is to exploit and interpret inter-modal and intra-modal information for precise clinical decisions. Although radiomics and multi-branch deep learning have been used for implicit information fusion guided by label supervision, methods that explicitly explore inter-modal relationships in medical applications are lacking. Unsupervised multimodal learning can mine inter-modal relationships, reduce the use of labor-intensive annotation and explore potentially undiscovered biomarkers; however, mining discriminative information without label supervision is an open challenge. Furthermore, the interpretation of complex non-linear cross-modal associations, especially in deep multimodal learning, is another critical challenge in information-level fusion, which hinders the exploration of multimodal interaction in disease mechanisms.
• Knowledge-level fusion. The third challenge is quantitative knowledge distillation from multi-focus regions in medical imaging. Although characterizing imaging features from single lesions with either feature engineering or deep learning has been investigated in recent years, both approaches neglect the importance of inter-region spatial relationships. A topological profiling tool for multi-focus regions is therefore in high demand, yet missing from current feature engineering and deep learning methods. Incorporating domain knowledge with the knowledge distilled from multi-focus regions is a further challenge in knowledge-level fusion.
To address these three challenges in multimodal data fusion, this thesis provides a multi-level fusion framework for multimodal biomarker mining, multimodal deep learning, and knowledge distillation from multi-focus regions. Specifically, the major contributions of this thesis include:
• To address the challenges in feature-level fusion, we propose an Integrative Multimodal Biomarker Mining framework to select interpretable, relevant, non-redundant and generalizable multimodal biomarkers from high-dimensional small-sample imaging and non-imaging data for diagnostic and prognostic applications. The feature selection criteria of representativeness, robustness, discriminability, and non-redundancy are enforced by consensus clustering, a Wilcoxon filter, sequential forward selection, and correlation analysis, respectively. The SHapley Additive exPlanations (SHAP) method and a nomogram are employed to further enhance feature interpretability in machine learning models.
• To address the challenges in information-level fusion, we propose an Interpretable Deep Correlational Fusion framework based on canonical correlation analysis (CCA) for 1) cohesive multimodal fusion of medical imaging and non-imaging data, and 2) interpretation of complex non-linear cross-modal associations. Specifically, two novel loss functions are proposed to optimize the discovery of informative multimodal representations in both supervised and unsupervised deep learning, by jointly learning inter-modal consensus and intra-modal discriminative information. An interpretation module deciphers the complex non-linear cross-modal associations by leveraging interpretation methods from both deep learning and multimodal consensus learning.
• To address the challenges in knowledge-level fusion, we propose a Dynamic Topological Analysis (DTA) framework based on persistent homology for knowledge distillation from inter-connected multi-focus regions in medical imaging and for the incorporation of domain knowledge. Unlike conventional feature engineering and deep learning, the DTA framework explicitly quantifies inter-region topological relationships, including global-level geometric structure and community-level clusters. A K-simplex Community Graph is proposed to construct the dynamic community graph representing community-level multi-scale graph structure. The constructed dynamic graph is then tracked with a novel Decomposed Persistence algorithm. Domain knowledge is incorporated into an Adaptive Community Profile, which summarizes the tracked multi-scale community topology together with additional customizable clinically important factors.
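The CCA building block behind the proposed correlational fusion can be illustrated with scikit-learn on toy data: imaging and non-imaging features are projected into a shared space where their correlation is maximal, and the projections are concatenated for a downstream model. This shows only the classical linear ingredient, not the thesis's deep fusion framework.

```python
# Minimal CCA fusion sketch on toy data (not the thesis implementation).
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
imaging = rng.normal(size=(100, 30))    # e.g., radiomic features (assumed)
clinical = rng.normal(size=(100, 10))   # e.g., laboratory biomarkers (assumed)

cca = CCA(n_components=2)
img_c, clin_c = cca.fit_transform(imaging, clinical)  # maximally correlated projections
fused = np.hstack([img_c, clin_c])      # joint representation for a downstream model
print(fused.shape)
```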
8

Polsinelli, Matteo. "Modelli di Intelligenza Artificiale per l'analisi di dati da neuroimaging multimodale." Doctoral thesis, Università degli Studi dell'Aquila, 2022. http://hdl.handle.net/11697/192072.

Full text
Abstract:
Medical imaging (MI) refers to several technologies that provide images of organs and tissues of the human body for diagnostic and scientific purposes. The technologies that allow us to capture medical images and signals are advancing rapidly, providing higher quality images of previously unmeasured biological features at decreasing costs. This has mainly occurred for highly specialized applications, such as cardiology and neurology. Artificial Intelligence (AI), which to date has largely focused on non-medical applications such as computer vision, provides an instrumental toolkit that will help unleash the potential of MI. In fact, the significant variability in anatomy across individuals, the lack of specificity of the imaging techniques, the unpredictability of diseases, the weakness of the biological signals, the presence of noise and artifacts, and the complexity of the underlying biology often make it impossible to derive deterministic algorithmic solutions for the problems encountered in neurology. The aim of this thesis was to develop AI models capable of carrying out quantitative, objective, accurate and reliable analyses of the imaging tools used in neurology, EEG and MRI. Beyond the development of the AI models themselves, attention was focused on the quality of the data, which can be lowered by the "uncertainty" produced by the issues cited above; this uncertainty affecting the data is also described, discussed and addressed. The main results are innovative AI-based strategies for signal and image improvement through artifact reduction and data stabilization in both EEG and MRI. This has allowed EEG to be applied to weak-signal recognition and interpretation (infant 3M patients) and has provided effective strategies for dealing with MRI variability and uncertainty in multiple sclerosis segmentation, for both single-source and multiple-source MRI. According to the evaluation criteria used, the results obtained are comparable with those of human experts. Future developments will concern the generalization of the proposed strategies to cope with different diseases or different applications of MI. Particular attention will be paid to the optimization of the models and to understanding the processes underlying their behavior. To this end, specific strategies for inspecting the deep structures of the proposed architectures will be studied. In this way, besides model optimization, it would be possible to obtain the functional relationships among the features generated by the model and use them to improve human knowledge (a sort of inverse transfer learning).
9

Khan, Mohd Tauheed. "Multimodal Data Fusion Using Voice and Electromyography Data for Robotic Control." University of Toledo / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=toledo156440368925597.

Full text
10

Oztarak, Hakan. "Structural And Event Based Multimodal Video Data Modeling." Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/12606919/index.pdf.

Full text
Abstract:
Investments in multimedia technology enable us to store many more reflections of the real world in the digital world as videos. By recording videos about real-world entities, we carry a great deal of information into the digital world directly. In order to store and efficiently query this information, a video database system (VDBS) is necessary. In this thesis we propose a structural, event-based and multimodal (SEBM) video data model for VDBSs. The SEBM video data model supports three different modalities, visual, auditory and textual, and we propose that all three can be captured within a single SEBM video data model. This proposal is supported by the way humans interpret video data. Hence we can answer content-based, spatio-temporal and fuzzy user queries more easily, since we store the video data in the way the viewer interprets the real-world data. We follow a divide-and-conquer technique when answering very complicated queries. We have implemented the SEBM video data model in a Java-based system that uses XML to represent the SEBM data model and Berkeley XML DBMS to store the data, based on the SEBM prototype system.
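A hypothetical sketch of what an event-based, multimodal video annotation could look like in XML (element and attribute names are invented for illustration; the thesis defines its own SEBM schema):

```python
# Build a toy multimodal video event record with the standard library.
import xml.etree.ElementTree as ET

video = ET.Element("video", id="v1")
event = ET.SubElement(video, "event", start="00:01:05", end="00:01:09")
ET.SubElement(event, "visual", object="person", action="enters", region="door")
ET.SubElement(event, "auditory", sound="door-slam")
ET.SubElement(event, "textual", transcript="Hello?")
print(ET.tostring(video, encoding="unicode"))  # XML ready for an XML DBMS
```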

Books on the topic "Dati multimodali"

1

Fernandes, Carla, Vito Evola, and Cláudia Ribeiro. Dance Data, Cognition, and Multimodal Communication. London: Routledge, 2022. http://dx.doi.org/10.4324/9781003106401.

Full text
2

Hazeldine, Lee, Gary Hazeldine, and Christian Beighton. Analysing Multimodal Data in Complex Social Spaces. London: SAGE Publications, 2019. http://dx.doi.org/10.4135/9781526488282.

Full text
3

Adams, Teresa M. Guidelines for the implementation of multimodal transportation location referencing systems. Washington, D.C.: National Academy Press, 2001.

Find full text
4

Seng, Kah Phooi, Li-minn Ang, Alan Wee-Chung Liew, and Junbin Gao, eds. Multimodal Analytics for Next-Generation Big Data Technologies and Applications. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-319-97598-6.

Full text
Styles: APA, Harvard, Vancouver, ISO, etc.
5

Grifoni, Patrizia, ed. Multimodal Human Computer Interaction and Pervasive Services. Hershey, PA: Information Science Reference, 2009.

Find full text
Styles: APA, Harvard, Vancouver, ISO, etc.
6

Vieten, Andrea. Monomodale und multimodale Registrierung von autoradiographischen und histologischen Bilddaten. Jülich: Forschungszentrum Jülich, Zentralbibliothek, 2005.

Find full text
Styles: APA, Harvard, Vancouver, ISO, etc.
7

Vidal, Enrique, and Francisco Casacuberta, eds. Multimodal Interactive Pattern Recognition and Applications. London: Springer-Verlag London Limited, 2011.

Find full text
Styles: APA, Harvard, Vancouver, ISO, etc.
8

National Research Council (U.S.), Transportation Research Board, and National Cooperative Highway Research Program, eds. Multimodal Level of Service Analysis for Urban Streets. Washington, D.C.: Transportation Research Board, 2008.

Find full text
Styles: APA, Harvard, Vancouver, ISO, etc.
9

Multimodality and Multimediality in the Distance Learning Age (English Linguistics). Edited by Anthon. Campobasso: Palladino, 2000.

Find full text
Styles: APA, Harvard, Vancouver, ISO, etc.
10

Biswas, Pradipta. A Multimodal End-2-End Approach to Accessible Computing. London: Springer London, 2013.

Find full text
Styles: APA, Harvard, Vancouver, ISO, etc.

Book chapters on the topic "Dati multimodali"

1

Bernsen, Niels Ole, and Laila Dybkjær. "Data Handling." In Multimodal Usability, 315–49. London: Springer London, 2009. http://dx.doi.org/10.1007/978-1-84882-553-6_15.

Full text
Styles: APA, Harvard, Vancouver, ISO, etc.
2

Laflen, Angela. "Learning to 'Speak Data'." In Multimodal Composition, 127–43. New York: Routledge, 2021. http://dx.doi.org/10.4324/9781003163220-10.

Full text
Styles: APA, Harvard, Vancouver, ISO, etc.
3

Bernsen, Niels Ole, and Laila Dybkjær. "Usability Data Analysis and Evaluation." In Multimodal Usability, 351–85. London: Springer London, 2009. http://dx.doi.org/10.1007/978-1-84882-553-6_16.

Full text
Styles: APA, Harvard, Vancouver, ISO, etc.
4

Williams, Ross N. "A Multimodal Algorithm." In Adaptive Data Compression, 245–81. Boston, MA: Springer US, 1991. http://dx.doi.org/10.1007/978-1-4615-4046-5_5.

Full text
Styles: APA, Harvard, Vancouver, ISO, etc.
5

Palframan, Shirley. "Multimodal classroom data." In Multimodal Signs of Learning, 27–39. London: Routledge, 2021. http://dx.doi.org/10.4324/9781003198802-3.

Full text
Styles: APA, Harvard, Vancouver, ISO, etc.
6

Chen, Zhikui, Liang Zhao, Qiucen Li, Xin Song, and Jianing Zhang. "Multimodal Data Fusion." In Advances in Computing, Informatics, Networking and Cybersecurity, 53–91. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-87049-2_3.

Full text
Styles: APA, Harvard, Vancouver, ISO, etc.
7

Huang, Lihe. "Collecting and processing multimodal data." In Toward Multimodal Pragmatics, 99–108. London: Routledge, 2021. http://dx.doi.org/10.4324/9781003251774-5.

Full text
Styles: APA, Harvard, Vancouver, ISO, etc.
8

Steininger, Silke, Florian Schiel, and Susen Rabold. "Annotation of Multimodal Data." In SmartKom: Foundations of Multimodal Dialogue Systems, 571–96. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/3-540-36678-4_35.

Full text
Styles: APA, Harvard, Vancouver, ISO, etc.
9

Nie, Liqiang, Meng Liu, and Xuemeng Song. "Data Collection." In Multimodal Learning toward Micro-Video Understanding, 11–17. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-031-02255-5_2.

Full text
Styles: APA, Harvard, Vancouver, ISO, etc.
10

Hosseini, Mohammad-Parsa, Aaron Lau, Kost Elisevich, and Hamid Soltanian-Zadeh. "Multimodal Analysis in Biomedicine." In Big Data in Multimodal Medical Imaging, 193–203. Boca Raton: Chapman and Hall/CRC, 2019. http://dx.doi.org/10.1201/b22410-8.

Full text
Styles: APA, Harvard, Vancouver, ISO, etc.

Conference papers on the topic "Dati multimodali"

1

Sanchez-Rada, J. Fernando, Carlos A. Iglesias, Hesam Sagha, Bjorn Schuller, Ian Wood, and Paul Buitelaar. "Multimodal multimodel emotion analysis as linked data." In 2017 Seventh International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW). IEEE, 2017. http://dx.doi.org/10.1109/aciiw.2017.8272599.

Full text
Styles: APA, Harvard, Vancouver, ISO, etc.
2

Liao, Callie C., Duoduo Liao, and Jesse Guessford. "Multimodal Lyrics-Rhythm Matching." In 2022 IEEE International Conference on Big Data (Big Data). IEEE, 2022. http://dx.doi.org/10.1109/bigdata55660.2022.10021009.

Full text
Styles: APA, Harvard, Vancouver, ISO, etc.
3

Yang, Lixin, Genshe Chen, Ronghua Xu, Sherry Chen, and Yu Chen. "Decentralized autonomous imaging data processing using blockchain." In Multimodal Biomedical Imaging XIV, edited by Fred S. Azar, Xavier Intes, and Qianqian Fang. SPIE, 2019. http://dx.doi.org/10.1117/12.2513243.

Full text
Styles: APA, Harvard, Vancouver, ISO, etc.
4

Oosterhuis, Kas, and Arwin Hidding. "Participator, A Participatory Urban Design Instrument." In International Conference on the 4th Game Set and Match (GSM4Q-2019). Qatar University Press, 2019. http://dx.doi.org/10.29117/gsm4q.2019.0008.

Full text
Abstract:
A point cloud of reference points forms the programmable basis of a new method of urban and architectural modeling. Points in space form the smallest identifiable units and are informed to communicate with each other to form complex data structures. The data are visualized as spatial voxels [3D pixels] so as to represent spaces and volumes that maintain their mutual relationships under varying circumstances. The subsequent steps in the development from point cloud to multimodal urban strategy are driven by variable local and global parameters. Step by step, new and more detailed actors are introduced into the serious design game. Values feeding the voxel units may be fixed, variable based on experience, or randomly generated. The target value may be fixed or kept open. Using lines or curves, groups of points are selected from the original large crystalline set of points, organized along the X, Y and Z axes, to form the shape of the actual working space. The concept of radical multimodality at the level of the smallest grain requires that, at each stage in the design game, individual units can adopt a unique function for a unique amount of time. Each unit may be a home, a workplace, a workshop, a shop, a lounge area, a school, a garden or just an empty voxel, anytime and anywhere in the selected working space. The concept of multimodality [MANIC, K. Oosterhuis, 2018] is taken to its extreme so as to stimulate the development of diversity over time and in spatial arrangement. The programmable framework for urban multimodality acknowledges the rise of the new international citizen, who travels the world, lives nowhere and everywhere, inhabits places and spaces for ultra-short, shorter or longer periods of time, and lives her/his life as a new nomad [New Babylon, Constant Nieuwenhuys, 1958]. The new nomad lives on her/his own or in groups of like-minded people, effectuated by setting preferences and making choices via the ubiquitous multimodality app, which organizes the unfolding of her/his life. In the serious design game, nomadic life is facilitated by real-time activation of a complex set of programmable monads. The design game was played and further developed in four workshop sessions with different professional stakeholders: architects, engineers, entrepreneurs and project developers.
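The voxel mechanics described above can be made concrete with a small sketch: a grid of units that each carry a function identifier and can be reassigned over time. The grid size, function list, and reassignment rule below are hypothetical illustrations, not the Participator implementation.

import numpy as np

FUNCTIONS = ["home", "workplace", "workshop", "shop", "lounge",
             "school", "garden", "empty"]

rng = np.random.default_rng(42)
# One function id per voxel in a 10 x 10 x 10 working space.
grid = rng.integers(0, len(FUNCTIONS), size=(10, 10, 10))

def reassign(grid: np.ndarray, change_fraction: float = 0.1) -> np.ndarray:
    """Randomly reassign a fraction of voxels, mimicking multimodality over time."""
    new = grid.copy()
    mask = rng.random(grid.shape) < change_fraction
    new[mask] = rng.integers(0, len(FUNCTIONS), size=mask.sum())
    return new

next_step = reassign(grid)
changed = (next_step != grid).mean()
print(f"{changed:.1%} of voxels changed function this time step")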
Styles: APA, Harvard, Vancouver, ISO, etc.
5

Hu, Kangqiao, Abdullah Nazma Nowroz, Sherief Reda, and Farinaz Koushanfar. "High-Sensitivity Hardware Trojan Detection Using Multimodal Characterization." In Design Automation and Test in Europe. New Jersey: IEEE Conference Publications, 2013. http://dx.doi.org/10.7873/date.2013.263.

Full text
Styles: APA, Harvard, Vancouver, ISO, etc.
6

Mahmood, Faisal, Daniel Borders, Richard Chen, Jordan Sweer, Steven Tilley, Norman S. Nishioka, J. Webster Stayman, and Nicholas J. Durr. "Robust photometric stereo endoscopy via deep learning trained on synthetic data (Conference Presentation)." In Multimodal Biomedical Imaging XIV, edited by Fred S. Azar, Xavier Intes, and Qianqian Fang. SPIE, 2019. http://dx.doi.org/10.1117/12.2509878.

Full text
Styles: APA, Harvard, Vancouver, ISO, etc.
7

Lampkins, Joshua, Darren Chan, Alan Perry, Sasha Strelnikoff, Jiejun Xu, and Alireza Esna Ashari. "Multimodal Road Sign Interpretation for Autonomous Vehicles." In 2022 IEEE International Conference on Big Data (Big Data). IEEE, 2022. http://dx.doi.org/10.1109/bigdata55660.2022.10020808.

Full text
Styles: APA, Harvard, Vancouver, ISO, etc.
8

Kohankhaki, Mohammad, Ahmad Ayad, Mahdi Barhoush, Bastian Leibe, and Anke Schmeink. "Radiopaths: Deep Multimodal Analysis on Chest Radiographs." In 2022 IEEE International Conference on Big Data (Big Data). IEEE, 2022. http://dx.doi.org/10.1109/bigdata55660.2022.10020356.

Full text
Styles: APA, Harvard, Vancouver, ISO, etc.
9

Kouvaras, George, and George Kokolakis. "Random Multivariate Multimodal Distributions." In Recent Advances in Stochastic Modeling and Data Analysis. World Scientific, 2007. http://dx.doi.org/10.1142/9789812709691_0009.

Full text
Styles: APA, Harvard, Vancouver, ISO, etc.
10

Marte Zorrilla, Edwin, Idalis Villanueva, Jenefer Husman, and Matthew Graham. "Generating a Multimodal Dataset Using a Feature Extraction Toolkit for Wearable and Machine Learning: A pilot study." In 13th International Conference on Applied Human Factors and Ergonomics (AHFE 2022). AHFE International, 2022. http://dx.doi.org/10.54941/ahfe1001448.

Full text
Abstract:
Studies of stress and student performance based on multimodal sensor measurements have been a recent topic of discussion among education researchers. With advances in computational hardware and the use of machine learning strategies, scholars can now deal with high-dimensional data and predict new estimates for future research designs. In this paper, the process of generating and obtaining a multimodal dataset including physiological measurements (e.g., electrodermal activity, EDA) from wearable devices is presented. Through the use of a Feature Generation Toolkit for Wearable Data, the time to extract, clean, and generate the data was reduced. A machine learning model was developed from an openly available multimodal dataset, and the results were compared against previous studies to evaluate the utility of these approaches and tools. Keywords: Engineering Education, Physiological Sensing, Student Performance, Machine Learning, Multimodal, FLIRT, WESAD
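As a rough sketch of the window-based feature extraction the paper describes (there performed with the FLIRT toolkit), the snippet below computes simple per-window statistics over a synthetic EDA trace. The sampling rate, window length, and feature set are assumptions for illustration; FLIRT's actual API and feature list are richer.

import numpy as np
import pandas as pd

FS = 4                # Hz, an assumed wearable EDA sampling rate
WINDOW = 60 * FS      # 60-second windows

rng = np.random.default_rng(1)
eda = pd.Series(rng.gamma(2.0, 0.5, size=30 * WINDOW))  # synthetic EDA trace

# Per-window summary features, similar in spirit to toolkit output.
features = pd.DataFrame({
    "mean": eda.groupby(eda.index // WINDOW).mean(),
    "std": eda.groupby(eda.index // WINDOW).std(),
    "max": eda.groupby(eda.index // WINDOW).max(),
})
print(features.head())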
Styles: APA, Harvard, Vancouver, ISO, etc.

Organization reports on the topic "Dati multimodali"

1

Linville, Lisa M., Joshua James Michalenko, and Dylan Zachary Anderson. Multimodal Data Fusion via Entropy Minimization. Office of Scientific and Technical Information (OSTI), March 2020. http://dx.doi.org/10.2172/1614682.

Full text
Styles: APA, Harvard, Vancouver, ISO, etc.
2

Wu, Yao-Jan, Xianfeng Yang, Sirisha Kothuri, Abolfazl Karimpour, Qinzheng Wang, and Jason Anderson. Data-Driven Mobility Strategies for Multimodal Transportation. Transportation Research and Education Center (TREC), 2021. http://dx.doi.org/10.15760/trec.262.

Full text
Styles: APA, Harvard, Vancouver, ISO, etc.
3

Folds, Dennis J., Carl T. Blunt, and Raymond M. Stanley. Training for Rapid Interpretation of Voluminous Multimodal Data. Fort Belvoir, VA: Defense Technical Information Center, April 2008. http://dx.doi.org/10.21236/ada480522.

Full text
Styles: APA, Harvard, Vancouver, ISO, etc.
4

Hillsman, Edward. Enabling Cost-Effective Multimodal Trip Planners through Open Transit Data. Tampa, FL: University of South Florida, May 2011. http://dx.doi.org/10.5038/cutr-nctr-rr-2010-05.

Full text
Styles: APA, Harvard, Vancouver, ISO, etc.
5

Barbeau, Sean. Improving the Quality and Cost Effectiveness of Multimodal Travel Behavior Data Collection. Tampa, FL: University of South Florida, February 2018. http://dx.doi.org/10.5038/cutr-nctr-rr-2018-10.

Full text
Styles: APA, Harvard, Vancouver, ISO, etc.
6

Balali, Vahid, Arash Tavakoli, and Arsalan Heydarian. A Multimodal Approach for Monitoring Driving Behavior and Emotions. Mineta Transportation Institute, July 2020. http://dx.doi.org/10.31979/mti.2020.1928.

Full text
Abstract:
Studies have indicated that emotions can be significantly influenced by environmental factors; these factors can also significantly influence drivers' emotional state and, accordingly, their driving behavior. Furthermore, as demand for autonomous vehicles is expected to increase significantly within the next decade, a proper understanding of drivers'/passengers' emotions, behavior, and preferences will be needed in order to create an acceptable level of trust with humans. This paper proposes a novel semi-automated approach for understanding the effect of environmental factors on drivers' emotions and behavioral changes through a naturalistic driving study. The setup includes a frontal road camera and a facial camera, a smartwatch for tracking physiological measurements, and a Controller Area Network (CAN) serial data logger. The results suggest that the driver's affect is highly influenced by the type of road and the weather conditions, which have the potential to change driving behaviors. For instance, when emotional metrics are defined as valence and engagement, the results reveal significant differences in human emotion across weather conditions and road types. Participants' engagement was higher in rainy and clear weather compared to cloudy weather. Moreover, engagement was higher on city streets and highways compared to one-lane roads and two-lane highways.
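The engagement comparison reported above can be illustrated with a small sketch: a two-sample test on per-trip engagement scores from two weather conditions. The numbers below are synthetic assumptions; the study's actual metrics come from facial-video affect estimates, physiological signals, and CAN-bus logs.

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
engagement_clear = rng.normal(0.62, 0.10, size=40)   # hypothetical per-trip scores
engagement_cloudy = rng.normal(0.55, 0.10, size=40)

# Welch's t-test: does mean engagement differ between the two conditions?
t, p = stats.ttest_ind(engagement_clear, engagement_cloudy, equal_var=False)
print(f"Welch's t = {t:.2f}, p = {p:.4f}")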
Styles: APA, Harvard, Vancouver, ISO, etc.
7

Zhang, Yongping, Wen Cheng, and Xudong Jia. Enhancement of Multimodal Traffic Safety in High-Quality Transit Areas. Mineta Transportation Institute, February 2021. http://dx.doi.org/10.31979/mti.2021.1920.

Full text
Abstract:
Numerous extant studies are dedicated to enhancing the safety of active transportation modes, but very few are devoted to safety analysis around transit stations, which serve as an important modal interface for pedestrians and bicyclists. This study bridges the gap by developing joint models based on multivariate conditionally autoregressive (MCAR) priors with a distance-oriented neighboring weight matrix. For this purpose, transit-station-centered data from Los Angeles County were used for model development. Feature selection relying on both random forest and correlation analyses was employed, which leads to different covariate inputs for each of the two joint models, resulting in increased model flexibility. Using an Integrated Nested Laplace Approximation (INLA) algorithm and various evaluation criteria, the results demonstrate that models with a correlation effect between pedestrians and bicyclists perform much better than models without such an effect. The joint models also aid in identifying significant covariates contributing to the safety of each of the two active transportation modes. The research results can furnish transportation professionals with additional insights to create safer access to transit and thus promote active transportation.
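The feature-selection step named in the abstract (random forest importances combined with a correlation screen) can be sketched as below on synthetic stand-in covariates. The variable names, data, and threshold are hypothetical, and the MCAR/INLA modeling itself, typically carried out with R-INLA, is not shown.

import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
X = pd.DataFrame(rng.normal(size=(200, 4)),
                 columns=["bus_stops", "intersection_density",
                          "pop_density", "pct_arterial"])
# Synthetic crash-count proxy driven by two of the covariates.
y = 2.0 * X["intersection_density"] + 0.5 * X["pop_density"] + rng.normal(size=200)

forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
importances = pd.Series(forest.feature_importances_, index=X.columns)

# Correlation screen: flag pairs that would be redundant as joint inputs.
corr = X.corr().abs()
high = [(a, b) for a in X.columns for b in X.columns
        if a < b and corr.loc[a, b] > 0.8]
print("highly correlated pairs:", high)
print(importances.sort_values(ascending=False))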
Styles: APA, Harvard, Vancouver, ISO, etc.
8

Chen, Maximillian Gene, Michael Christopher Darling, and David John Stracuzzi. Preliminary Results on Applying Nonparametric Clustering and Bayesian Consensus Clustering Methods to Multimodal Data. Office of Scientific and Technical Information (OSTI), September 2018. http://dx.doi.org/10.2172/1475256.

Full text
Styles: APA, Harvard, Vancouver, ISO, etc.
9

Boero, Riccardo, Peter Thomas Hraber, Kimberly Ann Kaufeld, Elisabeth Ann Moore, Ethan Romero-Severson, John Joseph Ambrosiano, John Leslie Whitton, and Benjamin Hayden Sims. Analysis of Multimodal Wearable Sensor Data to Characterize Social Groups and Influence in Organizations. Office of Scientific and Technical Information (OSTI), October 2019. http://dx.doi.org/10.2172/1570596.

Full text
Styles: APA, Harvard, Vancouver, ISO, etc.
10

Tufte, Kristin. Multimodal Data at Signalized Intersections: Strategies for Archiving Existing and New Data Streams to Support Operations and Planning Fusion and Integration of Arterial Performance Data. Portland State University Library, September 2013. http://dx.doi.org/10.15760/trec.46.

Full text
Styles: APA, Harvard, Vancouver, ISO, etc.
