Academic literature on the topic 'Bimodal Interaction'

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles.

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Bimodal Interaction.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Bimodal Interaction"

1

Koulidobrova, Elena V. "Language interaction effects in bimodal bilingualism." Linguistic Approaches to Bilingualism 7, no. 5 (June 24, 2016): 583–613. http://dx.doi.org/10.1075/lab.13047.kou.

Abstract:
The focus of the paper is a phenomenon well documented in both monolingual and bilingual English acquisition: argument omission. Previous studies have shown that bilinguals acquiring a null and a non-null argument language simultaneously tend to exhibit unidirectional cross-language interaction effects: the non-null argument language remains unaffected, but over-suppliance of overt elements in the null argument language is observed. Here, subject and object omission in both the ASL (null argument) and English (non-null argument) of young ASL-English bilinguals is examined. Results demonstrate that in spontaneous English production, ASL-English bilinguals omit subjects and objects at a higher rate, for longer, and in unexpected environments when compared with English monolinguals and bilinguals; no effect on ASL is observed. Findings also show that the children differentiate between their two languages: rates of argument omission in English differ during ASL vs. English target sessions. Implications for the general theory of bilingual effects are offered.
2

Piwek, Lukasz, Karin Petrini, and Frank E. Pollick. "Auditory signal dominates visual in the perception of emotional social interactions." Seeing and Perceiving 25 (2012): 112. http://dx.doi.org/10.1163/187847612x647450.

Abstract:
Multimodal perception of emotions has typically been examined using displays of a solitary character (e.g., the face–voice and/or body–sound of one actor). We extend the investigation to more complex, dyadic point-light displays combined with speech. A motion and voice capture system was used to record twenty actors interacting in couples with happy, angry and neutral emotional expressions. The obtained stimuli were validated in a pilot study and used in the present study to investigate multimodal perception of emotional social interactions. Participants were required to categorize happy and angry expressions displayed visually, auditorily, or using emotionally congruent and incongruent bimodal displays. In a series of cross-validation experiments we found that sound dominated the visual signal in the perception of emotional social interaction. Although participants’ judgments were faster in the bimodal condition, the accuracy of judgments was similar for both bimodal and auditory-only conditions. When participants watched emotionally mismatched bimodal displays, they predominantly oriented their judgments towards the auditory rather than the visual signal. This auditory dominance persisted even when the reliability of the auditory signal was decreased with noise, although visual information had some effect on judgments of emotions when it was combined with a noisy auditory signal. Our results suggest that when judging emotions from observed social interaction, we rely primarily on vocal cues from the conversation rather than visual cues from body movement.
3

Schindler, S. "Interaction in the bimodal galaxy cluster A3528." Monthly Notices of the Royal Astronomical Society 280, no. 1 (May 1, 1996): 309–18. http://dx.doi.org/10.1093/mnras/280.1.309.

4

Kroll, Judith F., and Kinsey Bice. "Bimodal bilingualism reveals mechanisms of cross-language interaction." Bilingualism: Language and Cognition 19, no. 2 (July 20, 2015): 250–52. http://dx.doi.org/10.1017/s1366728915000449.

Abstract:
In the recent swell of research on bilingualism and its consequences for the mind and the brain, there has been a warning that we need to remember that not all bilinguals are the same (e.g., Green & Abutalebi, 2013; Kroll & Bialystok, 2013; Luk & Bialystok, 2013). There are bilinguals who acquired two languages in early childhood and have used them continuously throughout their lives, bilinguals who acquired one language early and then switched to another language when they entered school or emigrated from one country to another, and others who only acquired a second language (L2) as an adult. Among these forms of bilingualism there are differences in both the context and amount of time spent in each language and differences in the status of the languages themselves. The L2 may be a majority language, spoken by almost everyone in the environment, or a minority language, spoken only by a few. The native or first language (L1) may also be the dominant language or may have been overtaken by the influence of the L2 given the circumstances imposed by the environment. Likewise, the L1 and L2 may vary in how similar they are structurally, whether they share the same written script, or whether one language is spoken and the other signed.
5

Chauhan, Ankur, Luis Straßberger, Ulrich Führer, Dimitri Litvinov, and Jarir Aktaa. "Creep-fatigue interaction in a bimodal 12Cr-ODS steel." International Journal of Fatigue 102 (September 2017): 92–111. http://dx.doi.org/10.1016/j.ijfatigue.2017.05.003.

6

Labuda, Aleksander, Marta Kocuń, Waiman Meinhold, Deron Walters, and Roger Proksch. "Generalized Hertz model for bimodal nanomechanical mapping." Beilstein Journal of Nanotechnology 7 (July 5, 2016): 970–82. http://dx.doi.org/10.3762/bjnano.7.89.

Abstract:
Bimodal atomic force microscopy uses a cantilever that is simultaneously driven at two of its eigenmodes (resonant modes). Parameters associated with both resonances can be measured and used to extract quantitative nanomechanical information about the sample surface. Driving the first eigenmode at a large amplitude and a higher eigenmode at a small amplitude simultaneously provides four independent observables that are sensitive to the tip–sample nanomechanical interaction parameters. To demonstrate this, a generalized theoretical framework for extracting nanomechanical sample properties from bimodal experiments is presented based on Hertzian contact mechanics. Three modes of operation for measuring cantilever parameters are considered: amplitude, phase, and frequency modulation. The experimental equivalence of all three modes is demonstrated by measurements of the second-eigenmode parameters. The contact mechanics theory is then extended to power-law tip shape geometries, which is applied to analyze the experimental data and extract the shape and size of the tip interacting with a polystyrene surface.
7

Locke, John L. "Bimodal signaling in infancy." Interaction Studies 8, no. 1 (June 13, 2007): 159–75. http://dx.doi.org/10.1075/is.8.1.11loc.

Abstract:
It has long been asserted that the evolutionary path to spoken language was paved by manual–gestural behaviors, a claim that has been revitalized in response to recent research on mirror neurons. Renewed interest in the relationship between manual and vocal behavior draws attention to its development. Here, the pointing and vocalization of 16.5-month-old infants are reported as a function of the context in which they occurred. When infants operated in a referential mode, the frequency of simultaneous vocalization and pointing exceeded the frequency of vocalization-only and pointing-only responses by a wide margin. In a non-communicative context, combinatorial effects persisted, but in weaker form. Manual–vocal signals thus appear to express the operation of an integrated system, arguably adaptive in the young from evolutionary times to the present. It was speculated, based on reported evidence, that manual behavior increases the frequency and complexity of vocal behaviors in modern infants. There may be merit in the claim that manual behavior facilitated the evolution of language because it helped make available, early in development, behaviors that under selection pressures in later ontogenetic stages elaborated into speech.
8

Hoffmann, T. L., W. Chen, G. H. Koopmann, A. W. Scaroni, and L. Song. "Experimental and Numerical Analysis of Bimodal Acoustic Agglomeration." Journal of Vibration and Acoustics 115, no. 3 (July 1, 1993): 232–40. http://dx.doi.org/10.1115/1.2930338.

Abstract:
The interaction between fly ash particles (first mode) and sorbent particles (second mode) in coal combustion processes is studied under the influence of a low frequency, high intensity acoustic field. The effect of bimodal acoustic agglomeration is evaluated in a numerical sensitivity analysis on parameters such as residence time in the combustion chamber and mass loading of the particle modes. An Acoustic Agglomeration Simulation Model (AASM) developed by Song at the Pennsylvania State University is used for these numerical studies. Experimental examinations carried out in a down-fired combustor show evidence of bimodal agglomeration and enhanced particle interaction under the influence of a low frequency (44 Hz), high intensity (160 dB) sound field. The results of the experiments are compared to the equivalent numerical studies and good agreement can be shown between the two sets of data.
9

Shi, Shuai, Dan Guo, and Jianbin Luo. "Interfacial interaction and enhanced image contrasts in higher mode and bimodal mode atomic force microscopy." RSC Advances 7, no. 87 (2017): 55121–30. http://dx.doi.org/10.1039/c7ra11635g.

10

Zhang, J., M. Gonit, M. D. Salazar, A. Shatnawi, L. Shemshedini, R. Trumbly, and M. Ratnam. "C/EBPα redirects androgen receptor signaling through a unique bimodal interaction." Oncogene 29, no. 5 (November 9, 2009): 723–38. http://dx.doi.org/10.1038/onc.2009.373.


Dissertations / Theses on the topic "Bimodal Interaction"

1

Tullberg, Anna. "Intuitiva Gränssnitt : Utvärdering av bimodal display som potentiellt stöd för helikopterpiloter." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-85333.

Abstract:
Humans acquire and process information through their senses to form a picture of the surrounding world. The amount of information a person can process is limited, especially in complex environments where several tasks must be performed. The focus of this study is on reducing pilots' cognitive load in order to ease difficult flight situations. A major problem today is that when a pilot loses external visual references, spatial disorientation can occur. One result of this can be that the helicopter starts to drift without the pilot noticing. In addition to a basic instrument panel, which pilots always have access to, four different potential aids for counteracting drift were compared in an experiment. The displays that could provide support regarding drift are a visual display, a tactile vest, and a combination of the visual display and the tactile vest, also called a bimodal display. Twelve participants took part in the experiment, eight of whom were students. The results show that participants performed significantly worse with the basic panel than with the other displays. The results also show that performance with a tactile vest or a bimodal display does not differ from that with a visual display. This is a promising result, suggesting that performance in avoiding drift with a tactile or bimodal display is equivalent to that with a well-established visual display. This in turn means that the information could be distributed across several senses, reducing cognitive load. If drift information can be divided across several senses, resources remain for other tasks, such as scanning the airspace instead of looking at the instruments inside the cockpit.
2

Patching, Geoffrey R. "The role of attention in auditory and visual interaction." Thesis, University of York, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.323686.

3

Spaccasassi, Chiara. "Feeling the emotions around us: How affective stimuli impact visuo-tactile interactions in space." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2019. http://hdl.handle.net/10281/241107.

Abstract:
Peripersonal Space (PPS) is a privileged region of space, immediately surrounding our body, in which visual and bodily signals are promptly integrated in fronto-parietal areas of the brain (Hunley & Lourenco, 2018). PPS amplitude is not fixed, but it can be dynamically shaped by specific experimental manipulations (Fogassi et al., 1996). In Studies 1 and 3, we tried to disentangle how visuo-tactile integration in space can be shaped by the intrinsic and learned valence of objects. Using a visuo-tactile interaction paradigm, participants were asked to respond to a tactile stimulus while an approaching visual one (with intrinsic and learned valence in Studies 1 and 3, respectively) was located at specific distances from their body (Canzoneri et al., 2012). The results of Studies 1 and 3 are aligned: positive and negative stimuli entail larger visuo-tactile interactions in space than neutral ones. Indeed, at longer distances from the body, visuo-tactile interactions are dynamically modulated by valence-connoted looming visual stimuli. At shorter distances, instead, all stimuli acquire saliency regardless of their intrinsic or acquired valence, owing to their proximity to the body. Study 2 aims to exclude that the above-mentioned results might be due to tactile expectancy (Kandula et al., 2017). Indeed, the more the visual stimulus approaches the body without tactile input, the more the expectancy of a bodily stimulus increases (Umbach et al., 2012). Using the same visual stimuli and spatial distances as in Study 1, but with the stimuli now receding from the participants' body, it was shown that the valence of the stimuli does not produce any effect in space, thus supporting the validity of the findings reported in Studies 1 and 3. Study 4 investigates the neuronal oscillations related to visuo-tactile coupling in near and far space for both positive and negative visual stimuli. In particular, we aimed to replicate the results of Wamain et al. (2016), which indicate that objects in near space are coded in motor terms, but only when the goal of the perceiver is to interact with them. Using a tactile discrimination task while valence-connoted visual stimuli were presented in near or far space, we found beta-power desynchronization over the sensorimotor cortex in near space, revealing a motor activation for valence-connoted visual stimuli close to the body but not when they were located far from it. This result corroborates the presence of such a multisensory system in the human brain (Maravita et al., 2003; Làdavas & Farnè, 2004). However, no effect of valence was found in this EEG task, thus confirming the results of Studies 1 and 3. Study 5 explores how state and trait anxiety (Spielberger, 1983) can alter the prioritizing effect of congruent visuo-tactile stimulation in space. Adopting a revised version of the Temporal Order Judgment task, as in Filbrich et al. (2017), participants were asked to report the order of near or far visual stimulus presentation before and after an anxiety-provoking task, while trying to ignore a tactile cue. Although we were unable to find an overall prioritizing effect of congruent visuo-tactile interaction in near space, participants who experienced a higher temporary state of anxiety showed an inhibitory effect of the congruent tactile cue on the processing of the near visual stimulus. On the other hand, the responses of high trait-anxiety participants to the congruent multisensory stimulation appeared more facilitated in near than in far space. These findings are compatible with the reduced top-down control over threat-related distractors shown by high state-anxiety individuals (Bishop et al., 2004) and with the reduced executive control of trait-anxious subjects (Pacheco-Unguetti et al., 2010).
Taken together, these five studies stress the privileged integration of visual and tactile stimuli inside PPS and its permeability to emotion-related states.
4

Norén, Caroline. "Utvärdering av gränssnitt i en helikoptersimulator : En taktil, en visuell samt en bimodal display som visar horisontell och vertikal drift." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-108116.

Abstract:
The present study investigated several interfaces in a helicopter simulator that indicate horizontal and vertical drift: a tactile, a visual, and a bimodal display. The purpose of the study was to examine whether it is advantageous to use displays that indicate horizontal and vertical drift. A further aim was to examine whether a tactile display leads to performance as good as that with a visual display, or whether it could be used as a complement to the visual display in a bimodal combination. Twelve participants took part; their task was to hover a helicopter simulator at an altitude of 8,000 feet for two minutes using each drift display. In addition to the three drift displays, the participants also hovered without any aid indicating drift. The results showed that all three drift displays were beneficial for reducing horizontal drift. The results also showed that the visual display and the bimodal display were beneficial for reducing vertical drift. The conclusion of the study is thus that it is advantageous to use drift displays to reduce drift. The tactile display does not work as well as the visual one in avoiding drift, but it can be used as a complement in a bimodal combination.
5

Shi, Da. "Préparation et caractérisation de microbulles fonctionnelles stabilisées par des fluorocarbures et décorées de nanoparticules dendronisées : évaluation comme agents du contraste bimodaux pour l'IRM et l'imagerie par ultrasons." Thesis, Strasbourg, 2019. http://www.theses.fr/2019STRAF053.

Abstract:
This thesis focuses on the preparation and characterization of microbubbles stabilized with a fluorocarbon gas and decorated with dendronized magnetic nanoparticles. The impact of the perfluorohexane exposure mode on Langmuir monolayers formed by phospholipids, and on the properties of the microbubbles, was evaluated. The behaviour of Langmuir monolayers formed by dendrons, and by mixtures of dendrons and phospholipids, was investigated. The attractive fluorine-fluorine interactions that develop between the fluorocarbon gas and the fluorinated terminal group of the dendrons prompt the adsorption of nanoparticles grafted with dendrons to the air/water interface. Small and stable microbubbles decorated with dendronized iron oxide nanoparticles were prepared. The magnetic microbubbles were evaluated as bimodal contrast agents for MRI and ultrasound imaging in a murine model, in collaboration with the Universitätklinikum in Freiburg. This work was supported by INTERREG V (Nanotransmed).
6

Brozzoli, Claudio. "Peripersonal space : a multisensory interface for body-objects interactions." PhD thesis, Université Claude Bernard - Lyon I, 2009. http://tel.archives-ouvertes.fr/tel-00675247.

Abstract:
Our ability to interact with the environment requires the integration of multisensory information for the construction of spatial representations. The peripersonal space (i.e., the sector of space closely surrounding one's body) and the integrative processes between visual and tactile inputs originating from this sector of space have been at the center of investigation in recent years. Neurophysiological studies provided evidence for the presence in the monkey brain of bimodal neurons, which are activated by tactile as well as visual information delivered near to a specific body part (e.g., the hand). Neuropsychological studies on right brain-damaged patients who present extinction, together with functional neuroimaging findings, suggest the presence of similar bimodal systems in the human brain. Studies on the effects of tool use on visual-tactile interaction revealed similar dynamic properties of the peripersonal space in monkeys and humans. The functional role of the multisensory coding of peripersonal space is, in our hypothesis, that of providing the brain with a sensori-motor interface for body-objects interactions. Thus, not only could it be involved in driving involuntary defensive movements in response to objects approaching the body, but it could also be dynamically maintained and updated as a function of voluntary manual actions performed towards objects in the reaching space. We tested the hypothesis of an involvement of peripersonal space in executing both voluntary and defensive actions. To these aims, we joined a well-known cross-modal congruency effect between visual and tactile information to a kinematic approach to demonstrate that voluntary grasping actions induce an on-line re-weighting of multisensory interactions in the peripersonal space. We additionally show that this modulation is hand-centred.
We also used a motor evoked potentials approach to investigate which coordinate system is used to code the peripersonal space during motor preparation when real objects rapidly approach the body. Our findings provide direct evidence for automatic hand-centred coding of visual space and suggest that peripersonal space may also serve to represent rapidly approaching and potentially noxious objects, thus enabling the rapid selection of appropriate motor responses. These results clearly show that peripersonal space is a multisensori-motor interface that might have been selected through evolution for optimising the interactions between the body and the objects in the external world.
7

Olivéro, Aurore. "Développement d'un instrument plasmonique bimodal couplant SPRI et SERS pour la détection et l'identification de molécules biologiques." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLO017/document.

Abstract:
Surface Plasmon Resonance Imaging (SPRI) is a powerful technique for studying molecular interactions, providing real-time, label-free, high-throughput analysis. The transduction of an interaction between complementary molecules into an optical signal is based on the perturbation of a plasmonic evanescent wave supported by a thin metallic film. However, despite its direct and label-free assets, the specificity of SPR measurements is guaranteed only by the probe molecules grafted on the metallic surface, and therefore by the quality of the surface chemistry. This limitation becomes an issue when addressing major health concerns that rely on the detection of trace molecules. In particular, new systems are required to support early diagnosis and the control of food contaminants. With a view to improving measurement specificity, this work reports the development of a bimodal instrument coupling SPRI, which allows the quantification of captured molecules, with Surface Enhanced Raman Spectroscopy (SERS), which adds the precise identification of the molecules by measuring their spectroscopic fingerprint. This PhD is part of an ANR project bringing together academic and industrial partners. The manuscript focuses on the development of the optical instrument combining the two detection systems in a single prototype. SPRI measurements are performed in the Kretschmann configuration, while SERS analysis is implemented from the top, in solution, through a glass window. Nanostructured substrates have been designed and fabricated to allow the simultaneous measurements. The optical system is described, characterized, and validated on the model case of DNA hybridization. These first results demonstrate the capabilities of the bimodal instrument in the perspective of more complex biological applications.
APA, Harvard, Vancouver, ISO, and other styles
8

Latinus, Marianne. "De la perception unimodale à la perception bimodale des visages : corrélats électrophysiologiques et interactions entre traitements des visages et des voix." Toulouse 3, 2007. http://www.theses.fr/2007TOU30028.

Full text
Abstract:
This thesis examined the processing of faces and voices, and the interactions between these processes, using the evoked potential technique, which made it possible to study the time course of the different processes. My studies on face perception show that three different configural processes are recruited successively; each underlies a stage of face perception, from detection to identification. The second part addresses voice perception; it is shown that voice processing differs slightly from face processing. In the last part, the results obtained in the study of bimodal face-voice interactions confirm the difference between voice and face processing; information carried by the face seems to prevail over that carried by the voice in the perception of a person's gender. A model summarizing the results of the different studies conducted during this thesis is proposed at the end. This model suggests a difference between voice and face processing due to the specialization of the sensory systems in verbal and non-verbal communication, respectively
This thesis examined the processing of faces and voices, as well as the interaction between them, using evoked potentials; this technique informs on the temporal course of these processes. My experiments on face processing revealed that faces recruit successively the three configural processes described in the literature; each process underlies a stage of face perception from detection to identification. In a second part of this thesis, voice perception was approached. I showed that voices are processed in a slightly different way than faces. In the last part of this thesis, bimodal interactions between auditory and visual information were investigated using gender categorisation of faces and voices presented simultaneously. This study reinforced the view that face and voice processing differ; information carried by faces overruled voice information in gender processing. A summary model is presented at the end of the thesis. This model suggests that face and voice processing differ due to the specialisation of the auditory and visual systems in verbal and nonverbal communication, respectively; these differences lead to a dominance of visual information in nonverbal social interactions and a dominance of auditory information in language processing
APA, Harvard, Vancouver, ISO, and other styles
9

Cramér-Wolrath, Emelie. "Signs of Acquiring Bimodal Bilingualism Differently : A Longitudinal Case Study of Mediating a Deaf and a Hearing Twin in a Deaf Family." Doctoral thesis, Stockholms universitet, Specialpedagogiska institutionen, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-86237.

Full text
Abstract:
This dissertation, based on a case study, explores the acquisition and the guidance of Swedish Sign Language and spoken Swedish over a span of seven years. Interactions between a pair of fraternal twins, one deaf and one hearing, and their Deaf[1] family were video-observed within the home setting. The thesis consists of a frame which provides an overview of the relationship between four studies. These describe and analyze mainly storytime sessions over time. The first article addresses attentional expressions between the participants; the second article studies the mediation of the deaf twin's first language acquisition; the third article analyses the hearing twin's acquisition of parallel bimodal bilingualism; the fourth article concerns second language acquisition, sequential bimodal bilingualism following a cochlear implant (CI). In the frame, theoretical underpinnings such as mediation and language acquisition were compiled within a sociocultural frame. This synthesis of results provides important information: in the 12- and 13-month sessions, simultaneous-tactile-looking was noted in interchanges between the twins and their mother; mediation of bilingualism was scaffolded by the caregivers with the hearing twin by inserting single vocal words or signs into the language base used at that time, a finding that differs from other reported studies; a third finding is the simultaneousness with which the deaf child's Swedish Sign Language skill worked as a cultural tool to build a second, spoken language. The findings over time revealed actions that included all the family members. Irrespective of the number of modes and varied types of communication with more than one child, mediation included following-in the child's initiation, intersubjective meaningfulness and encouragement. In accordance with previous research, these factors seem to promote the acquisition of languages.
In conclusion, these findings should also prove useful in the more general educational field. [1] Deaf with a capital ‘D’ is commonly used for cultural affiliation whereas lower case ‘d’, as in deaf, refers to audiological status (Monaghan, Schmaling, Nakamura & Turner, 2003).

The doctoral defense will be interpreted into Swedish Sign Language; a hearing loop is available.

At the time of the doctoral defense, the following papers were unpublished and had a status as follows: Paper 2: Submitted. Paper 3: Accepted. Paper 4: Submitted.

APA, Harvard, Vancouver, ISO, and other styles
10

Pery, Emilie. "Spectroscopie bimodale en diffusion élastique et autofluorescence résolue spatialement : instrumentation, modélisation des interactions lumière-tissus et application à la caractérisation de tissus biologiques ex vivo et in vivo pour la détection de cancers." Phd thesis, Institut National Polytechnique de Lorraine - INPL, 2007. http://tel.archives-ouvertes.fr/tel-00199910.

Full text
Abstract:
L'objectif de ce travail de recherche est le développement, la mise au point et la validation d'une méthode de spectroscopie multi-modalités en diffusion élastique et autofluorescence pour caractériser des tissus biologiques in vitro et in vivo. Ces travaux s'organisent en quatre axes.
La première partie des travaux présente l'instrumentation : développement, réalisation et caractérisation expérimentale d'un système de spectrométrie bimodale multi-points fibrée permettant l'acquisition de spectres in vivo (distances variables, acquisition rapide).
La deuxième partie porte sur la modélisation des propriétés optiques du tissu : développement et validation expérimentale sur fantômes d'un algorithme de simulation de propagation de photons en milieux turbides et multi-fluorescents.
La troisième partie propose une étude expérimentale conduite ex vivo sur des anneaux artériels frais et cryoconservés. Elle confirme la complémentarité des mesures spectroscopiques en diffusion élastique et autofluorescence et valide la méthode de spectroscopie multi-modalités et l'algorithme de simulation de propagation de photons. Les résultats originaux obtenus montrent une corrélation entre propriétés rhéologiques et optiques.
La quatrième partie développe une seconde étude expérimentale in vivo sur un modèle pré-clinique tumoral de vessie. Elle met en évidence une différence significative en réflectance diffuse et/ou en autofluorescence et/ou en fluorescence intrinsèque entre tissus sains, inflammatoires et tumoraux, sur la base de longueurs d'onde particulières. Les résultats de la classification non supervisée réalisée montrent que la combinaison de différentes approches spectroscopiques augmente la fiabilité du diagnostic.
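The core step of photon-propagation Monte Carlo codes like the one this abstract mentions can be sketched in a few lines (a minimal illustration under textbook assumptions, not the thesis's algorithm; the extinction coefficient name `mu_t` and the exponential free-path law are assumed, not taken from the work):

```python
import math
import random

# Free path lengths between scattering/absorption events in a turbid medium
# follow an exponential law with rate mu_t (total extinction coefficient).
# Sampling via inverse transform: l = -ln(u) / mu_t, u uniform in (0, 1).
def sample_free_path(mu_t, rng=random.random):
    """Draw one free path length (in the same length units as 1/mu_t)."""
    return -math.log(rng()) / mu_t

def mean_free_path(mu_t, n=100_000, seed=1):
    """Monte Carlo estimate of the mean free path; should approach 1/mu_t."""
    rnd = random.Random(seed).random
    return sum(sample_free_path(mu_t, rnd) for _ in range(n)) / n
```

A full simulation would add scattering-angle sampling and fluorescence events on top of this step; the sketch only shows the path-length draw that drives the photon random walk.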
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Bimodal Interaction"

1

Quinto-Pozos, David, and Robert Adam. Sign Language Contact. Edited by Robert Bayley, Richard Cameron, and Ceil Lucas. Oxford University Press, 2013. http://dx.doi.org/10.1093/oxfordhb/9780199744084.013.0019.

Full text
Abstract:
This chapter argues that language contact is the norm in Deaf communities, and that deaf people are typically multilingual. They use signed, written, and, in some cases, spoken languages for daily communication, which means that aspects of the spoken and/or written languages of the larger communities are in constant interaction with the signed languages. If one considers the contact that results from users of two different signed languages interacting, various comparisons can be made to contact that occurs across two or more spoken languages. The term unimodal contact, or that which comes about because of two languages within the same modality, can be used to characterize such contact. However, if one considers the contact that results from interaction between a signed and a spoken or written language, the term bimodal (or even multimodal) contact is more appropriate.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Bimodal Interaction"

1

Liu, C. P., and H. Kimura. "On the Bimodal Nature of the Particle-Size-Distribution Function of Cometary Dust." In Properties and Interactions of Interplanetary Dust, 279–82. Dordrecht: Springer Netherlands, 1985. http://dx.doi.org/10.1007/978-94-009-5464-9_58.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Rodríguez, Fernando G., and Silvia Español. "The Transition from Early Bimodal Gesture-Word Combinations to Grammatical Speech." In Moving and Interacting in Infancy and Early Childhood, 207–46. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-08923-7_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Ormel, Ellen, and Marcel Giezen. "Bimodal Bilingual Cross-Language Interaction." In Bilingualism and Bilingual Deaf Education, 74–101. Oxford University Press, 2014. http://dx.doi.org/10.1093/acprof:oso/9780199371815.003.0004.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Ptaszynski, Michal, Jacek Maciejewski, Pawel Dybala, Rafal Rzepka, Kenji Araki, and Yoshio Momouchi. "Science of Emoticons." In Speech, Image, and Language Processing for Human Computer Interaction, 234–60. IGI Global, 2012. http://dx.doi.org/10.4018/978-1-4666-0954-9.ch012.

Full text
Abstract:
Emoticons are strings of symbols representing body language in text-based communication. For a long time they have been considered unnatural language entities. This chapter argues that, over the 40-year history of text-based communication, emoticons have gained the status of an indispensable means of support for text-based messages. This makes them fully a part of Natural Language Processing. The fact that emoticons have been considered unnatural language expressions has two causes. Firstly, emoticons represent body language, which by definition is nonverbal. Secondly, there has been a lack of sufficient methods for the analysis of emoticons. Emoticons represent a multimodal (bimodal in particular) type of information. Although they are embedded in lexical form, they convey non-linguistic information. To prove this argument, the authors propose that the analysis of emoticons be based on a theory designed for the analysis of body language. In particular, the authors apply the theory of kinesics to develop a state-of-the-art system for the extraction and analysis of kaomoji, Japanese emoticons. The system's performance is verified in comparison with other emoticon analysis systems. Experiments showed that the presented approach provides nearly ideal results in different aspects of emoticon analysis, thus proving that emoticons possess features of multimodal expressions.
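The extraction stage for kaomoji can be illustrated with a deliberately naive pattern matcher (a sketch only — the character classes below are invented for illustration and bear no relation to the authors' kinesics-based system, which models eyes, mouths and other "body parts" far more richly):

```python
import re

# Hypothetical sketch: pull parenthesis-delimited kaomoji candidates out of
# running text. The inner character class lists a few symbols that often act
# as eyes/mouths ((^_^), (T_T), ...); real analyzers use learned inventories.
KAOMOJI = re.compile(r"\([^()]{1,10}?[\^\-_;TOo@][^()]{0,10}?\)")

def extract_kaomoji(text: str) -> list[str]:
    """Return substrings that look like simple Japanese-style emoticons."""
    return KAOMOJI.findall(text)
```

A matcher this crude produces false positives (any short parenthetical containing one of the listed symbols); the kinesics-based approach in the chapter exists precisely to go beyond such surface patterns.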
APA, Harvard, Vancouver, ISO, and other styles
5

Gregorio, Lucrezia Di, Vincenzina Campana, Maria Lavecchia, and Pasquale Rinaldi. "Include to Grow: Prospects for Bilingual and Bicultural Education for Both Deaf and Hearing Students." In Co-Enrollment in Deaf Education, 165–82. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780190912994.003.0009.

Full text
Abstract:
Deaf children in Italy are provided with different types of schooling. Few public schools offer a bilingual curriculum for deaf and hearing students that involves consistent use of Italian and Italian Sign Language (LIS) within the classroom and in which LIS is taught as a subject. One of these schools, the Tommaso Silvestri Primary School, located in Rome, Italy, is discussed in this chapter. In particular, the way in which the program is organized and how it supports deaf and hearing students in cognition, learning, and social interaction will be described. Methodological aspects and the role of technology in enhancing learning processes will also be discussed. This kind of bimodal bilingual co-enrollment program is very useful for deaf students and constitutes a unique opportunity for hearing classmates, giving them the opportunity to experience innovative learning environments and to consider deafness as a status rather than as a limitation.
APA, Harvard, Vancouver, ISO, and other styles
6

Zhang, Na, Yuzhi Xiao, Haixing Zhao, Cunyang Tang, and Baoyang Cui. "Construction of a Three-Layer Directed Network Model with Multimodal Characteristics." In Advances in Transdisciplinary Engineering. IOS Press, 2022. http://dx.doi.org/10.3233/atde220041.

Full text
Abstract:
The multi-layer directed network model focuses on portraying the directionality, diversity and difference of the edges in the network. It is also a powerful tool for analyzing the complexity of a network system and the heterogeneous interaction characteristics between network layers. This paper constructs three three-layer directed network models by combining them with traditional complex network theory and analyzes their degree distribution characteristics. By controlling the out-degree and in-degree of the middle-level nodes, a network structure with multi-peak characteristics is configured. Numerical simulation shows that the number of links between the layers of the directed network gives the network unimodal, bimodal, or trimodal characteristics, and that combining the optimal mechanism with changes in the number of inter-layer links makes the network exhibit coexisting power-law and unimodal characteristics. The results of this paper have practical value for analyzing network generation mechanisms and correlation laws with multi-layer directed network theory in the era of big data.
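The abstract's central idea — that controlling the degrees of the middle layer shapes a multi-peak degree distribution — can be shown with a toy sketch (an assumption-laden simplification for illustration, not the paper's model):

```python
from collections import Counter
from itertools import product

# Toy three-layer directed graph: every top-layer node links to every middle
# node m0..m{n_mid-1}, and every middle node links to the first `mid_out`
# bottom nodes b0, b1, ...  Choosing n_mid != mid_out separates the two
# out-degree peaks, giving a bimodal out-degree distribution.
def three_layer_edges(n_top, n_mid, mid_out):
    edges = list(product(range(n_top), (f"m{j}" for j in range(n_mid))))
    edges += [(f"m{j}", f"b{k}") for j in range(n_mid) for k in range(mid_out)]
    return edges

def out_degree_distribution(edges):
    """Map out-degree -> number of source nodes having that out-degree."""
    deg = Counter(src for src, _ in edges)
    return Counter(deg.values())
```

With three top nodes, four middle nodes and two out-links per middle node, the distribution has one peak at out-degree 4 (top layer) and one at out-degree 2 (middle layer) — the bimodal case; a third, differently-connected layer of sources would add a third peak.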
APA, Harvard, Vancouver, ISO, and other styles
7

Pantic, Maja. "Face for Interface." In Encyclopedia of Multimedia Technology and Networking, Second Edition, 560–67. IGI Global, 2009. http://dx.doi.org/10.4018/978-1-60566-014-1.ch075.

Full text
Abstract:
The human face is involved in an impressive variety of different activities. It houses the majority of our sensory apparatus: eyes, ears, mouth, and nose, allowing the bearer to see, hear, taste, and smell. Apart from these biological functions, the human face provides a number of signals essential for interpersonal communication in our social life. The face houses the speech production apparatus and is used to identify other members of the species, to regulate the conversation by gazing or nodding, and to interpret what has been said by lip reading. It is our direct and naturally preeminent means of communicating and understanding somebody’s affective state and intentions on the basis of the shown facial expression (Lewis & Haviland-Jones, 2000). Personality, attractiveness, age, and gender can also be seen from someone’s face. Thus the face is a multisignal sender/receiver capable of tremendous flexibility and specificity. In general, the face conveys information via four kinds of signals listed in Table 1. Automating the analysis of facial signals, especially rapid facial signals, would be highly beneficial for fields as diverse as security, behavioral science, medicine, communication, and education. In security contexts, facial expressions play a crucial role in establishing or detracting from credibility. In medicine, facial expressions are the direct means to identify when specific mental processes are occurring. In education, pupils’ facial expressions inform the teacher of the need to adjust the instructional message. As far as natural user interfaces between humans and computers (PCs/robots/machines) are concerned, facial expressions provide a way to communicate basic information about needs and demands to the machine. 
In fact, automatic analysis of rapid facial signals seems to have a natural place in various vision subsystems and vision-based interfaces (face-for-interface tools), including automated tools for gaze and focus-of-attention tracking, lip reading, bimodal speech processing, face/visual speech synthesis, face-based command issuing, and facial affect processing. Where the user is looking (i.e., gaze tracking) can be effectively used to free computer users from the classic keyboard and mouse. Also, certain facial signals (e.g., a wink) can be associated with certain commands (e.g., a mouse click), offering an alternative to traditional keyboard and mouse commands. The human capability to "hear" in noisy environments by means of lip reading is the basis for bimodal (audiovisual) speech processing that can lead to the realization of robust speech-driven interfaces. To make a believable "talking head" (avatar) representing a real person, tracking the person's facial signals and making the avatar mimic those using synthesized speech and facial expressions is compulsory. The human ability to read emotions from someone's facial expressions is the basis of facial affect processing that can lead to expanding user interfaces with emotional communication and, in turn, to obtaining more flexible, adaptable, and natural affective interfaces between humans and machines. More specifically, the information about when the existing interaction/processing should be adapted, the importance of such an adaptation, and how the interaction/reasoning should be adapted involves information about how the user feels (e.g., confused, irritated, tired, interested). Examples of affect-sensitive user interfaces are still rare, unfortunately, and include the systems of Lisetti and Nasoz (2002), Maat and Pantic (2006), and Kapoor, Burleson, and Picard (2007).
It is this wide range of principal driving applications that has lent a special impetus to the research problem of automatic facial expression analysis and produced a surge of interest in this research topic.
APA, Harvard, Vancouver, ISO, and other styles
8

Nguyen, Hanh thi, Cristiane Vicentini, and André Langevin. "A Microanalysis of Text's Interactional Functions in Text-and-Voice SCMC Chat for Language Learning." In Handbook of Research on Integrating Technology Into Contemporary Language Learning and Teaching, 30–56. IGI Global, 2018. http://dx.doi.org/10.4018/978-1-5225-5140-9.ch003.

Full text
Abstract:
This chapter analyzes the functions of the text mode in an SCMC English tutoring session. Conversation analysis of the sequential and holistic unfolding of both text and voice turns reveals that the bimodal text-and-voice mode was employed in repair, Initiation-Response-Feedback, assessment, and topical talk sequences. Within these sequences, text turns often reinforced voice turns to focus on language forms but also sometimes contributed to rapport-building. In addition to supporting voice turns, text turns also performed distinct actions in conjunction with the actions in the voice turns such as initiating repair, presenting language examples as objects for consideration, achieving humor, and signaling discourse structure. The findings shed light on the interactional processes in bimodal SCMC for second language teaching and learning.
APA, Harvard, Vancouver, ISO, and other styles
9

Cohen, Jonathan H., and Charles E. Epifanio. "Response to Visual, Chemical, and Tactile Stimuli." In Developmental Biology and Larval Ecology, 333–60. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780190648954.003.0012.

Full text
Abstract:
Early life history in marine benthic crustaceans often includes externally brooded eggs that hatch into free-swimming planktonic larvae. These larvae are relatively strong swimmers, and movement in the vertical plane provides a number of advantages, including modulation of horizontal transport and assurance of favorable predator–prey interactions. Swimming behavior in larval crustaceans is regulated by predictable external cues in the water column, primarily light, gravity, and hydrostatic pressure. Light-regulated behavior depends upon the optical physics of seawater and the physiology of light-detecting sensory structures in the larvae, which overall vary little with ontogeny. Swimming in response to light contributes to ecologically significant behaviors in planktonic crustacean larvae, including shadow responses, depth regulation, and diel vertical migration. Moreover, the photoresponses themselves, and in turn the evoked behaviors, change with the needs of larvae as development progresses. Regarding other sensory modalities, crustacean embryos and larvae respond to chemical cues using bimodal sensilla (chemosensory and mechanosensory) as contact receptors, and aesthetascs for detection of water-soluble cues. Processes and behaviors are stimulated by larval detection of chemical cues throughout ontogeny, including egg-hatching, avoidance of predators during free-swimming stages, and, ultimately, settlement and metamorphosis in juvenile habitats. The latter process can also involve tactile cues. The sensory-mediated behaviors described here for crustacean larvae have parallels in numerous arthropod and nonarthropod taxa. Emerging directions for future research on sensory aspects of behavior in crustacean larvae include multimodal sensory integration and behavioral responses to changing environmental stressors.
APA, Harvard, Vancouver, ISO, and other styles
10

Batista, Aurore, Marie-Thérèse Le Normand, and Jean-Marc Colletta. "Chapitre 3. Rôle et évolution des combinaisons bimodales au cours de l’acquisition du langage. Données chez l’enfant francophone âgé de 18 à 42 mois." In Multimodalité du langage dans les interactions et l’acquisition. UGA Éditions, 2019. http://dx.doi.org/10.4000/books.ugaeditions.10947.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Bimodal Interaction"

1

Carbajal-Pérez, Cristina, Alejandro Catala, and Alberto Bugarín-Diz. "Guidelines for Bimodal Virtual Assistants." In NordiCHI '22: Nordic Human-Computer Interaction Conference. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3547522.3547688.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Yin, Yufeng, Jiashu Xu, Tianxin Zu, and Mohammad Soleymani. "X-Norm: Exchanging Normalization Parameters for Bimodal Fusion." In ICMI '22: INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3536221.3556581.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Shakeri, Gözel, John H. Williamson, and Stephen A. Brewster. "Bimodal feedback for in-car mid-air gesture interaction." In ICMI '17: INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3136755.3143033.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Han, Wei, Hui Chen, Alexander Gelbukh, Amir Zadeh, Louis-philippe Morency, and Soujanya Poria. "Bi-Bimodal Modality Fusion for Correlation-Controlled Multimodal Sentiment Analysis." In ICMI '21: INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3462244.3479919.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Xu, Mingkun, Faqiang Liu, and Jing Pei. "Endowing Spiking Neural Networks with Homeostatic Adaptivity for APS-DVS Bimodal Scenarios." In ICMI '22: INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3536220.3563690.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Zong, Chenglong, Xiaozhou Zhou, Jichen Han, and Haiyan Wang. "Multiphase pointing motion model based on hand-eye bimodal cooperative behavior." In Intelligent Human Systems Integration (IHSI 2023) Integrating People and Intelligent Systems. AHFE International, 2023. http://dx.doi.org/10.54941/ahfe1002844.

Full text
Abstract:
Pointing, as the most common interaction behavior in 3D interactions, has become a basis and hotspot of natural human-computer interaction research. In this paper, hand and eye movement data of multiple participants in a typical pointing task were collected in a virtual reality experiment, and we further clarified the spatial and temporal properties of hand and eye movements and their cooperation over the whole task. Our results showed that the movements of both the hand and the eye in a pointing task can be divided into three stages according to their speed properties, namely the preparation stage, ballistic stage and correction stage. Based on the verification of this phase division of hand and eye movements in the pointing task, we further clarified the phase-division criteria and the relationship between the durations of each pair of hand and eye phases. Our research has great significance for further mining natural human pointing behavior and realizing more reliable and accurate recognition of human-computer interaction intentions.
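The speed-based three-stage split described here can be sketched as a simple threshold segmentation (an illustration only — the 20% cutoff and the single-threshold rule are assumptions, not the paper's criteria):

```python
# Segment a hand-speed profile into preparation / ballistic / correction
# phases: the ballistic phase is the contiguous span where speed exceeds a
# fraction of the peak; what comes before is preparation, what comes after
# is correction. The fraction `frac` is a hypothetical parameter.
def segment_phases(speeds, frac=0.2):
    threshold = frac * max(speeds)
    above = [s > threshold for s in speeds]
    start = above.index(True)                        # first sample above
    end = len(above) - 1 - above[::-1].index(True)   # last sample above
    return {
        "preparation": speeds[:start],
        "ballistic": speeds[start:end + 1],
        "correction": speeds[end + 1:],
    }
```

Real pipelines would smooth the speed signal and handle multiple threshold crossings; this sketch assumes a clean single-peaked profile.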
APA, Harvard, Vancouver, ISO, and other styles
7

Gievska, Sonja, Kiril Koroveshovski, and Natasha Tagasovska. "Bimodal feature-based fusion for real-time emotion recognition in a mobile context." In 2015 International Conference on Affective Computing and Intelligent Interaction (ACII). IEEE, 2015. http://dx.doi.org/10.1109/acii.2015.7344602.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Wang, A. H. "Numerical MHD Simulation of Flux-Rope Formed Ejecta Interaction With Bimodal Solar Wind." In SOLAR WIND TEN: Proceedings of the Tenth International Solar Wind Conference. AIP, 2003. http://dx.doi.org/10.1063/1.1618633.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Ma, Wei, Feng Gao, Xavier Ottavy, Lipeng Lu, and A. J. Wang. "Numerical Investigation of Intermittent Corner Separation in a Linear Compressor Cascade." In ASME Turbo Expo 2016: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2016. http://dx.doi.org/10.1115/gt2016-57311.

Full text
Abstract:
Recently, a bimodal phenomenon in corner separation was found by Ma et al. (Experiments in Fluids, 2013, doi:10.1007/s00348-013-1546-y). Through detailed and accurate experimental measurements of the velocity field in a linear compressor cascade, they discovered that two aperiodic modes exist in the corner separation of the compressor cascade. This phenomenon reflects that the flow in corner separation is highly intermittent, and that large-scale coherent structures corresponding to the two modes exist in the flow field of the corner separation. However, the generation mechanism of the bimodal phenomenon in corner separation is still unclear and thus needs to be studied further. In order to obtain instantaneous flow fields with different unsteadiness and thus to analyse the mechanisms of the bimodal phenomenon, in this paper detached-eddy simulation (DES) is used to simulate the flow field in the linear compressor cascade where the bimodal phenomenon was found in the previous experiment. The DES in this paper successfully captures the bimodal phenomenon in the linear compressor cascade found in the experiment, including the locations of the bimodal points and the development of the bimodal points along a line normal to the blade suction side. We infer that the bimodal phenomenon in the corner separation is induced by the strong interaction between the following two factors. The first is the unsteady upstream flow near the leading edge, whose angle and magnitude fluctuate simultaneously and significantly. The second is the highly unsteady separation in the corner region.
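A "bimodal point" in such studies is a location whose velocity probability density shows two peaks. That criterion can be illustrated by counting local maxima in a coarse histogram of velocity samples (a sketch only — the bin count and mode test here are assumptions, not the method of the experimental or DES studies):

```python
from collections import Counter

# Count modes (local maxima) in a coarse histogram of a 1-D sample.
# A bin is a mode if its count strictly exceeds both neighbours' counts
# (the edges are padded with empty bins). Two modes flag bimodality.
def count_modes(samples, bins=10):
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / bins or 1.0
    hist = Counter(min(int((s - lo) / width), bins - 1) for s in samples)
    counts = [hist.get(i, 0) for i in range(bins)]
    padded = [0] + counts + [0]
    return sum(
        1 for i in range(1, bins + 1)
        if padded[i] > padded[i - 1] and padded[i] > padded[i + 1]
    )
```

On real velocity records one would estimate a smoothed PDF rather than a raw histogram, since sparse bins produce spurious maxima; the sketch only conveys the two-peak criterion.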
APA, Harvard, Vancouver, ISO, and other styles
10

Matulic, Fabrice, and Moira Norrie. "Empirical evaluation of uni- and bimodal pen and touch interaction properties on digital tabletops." In the 2012 ACM international conference. New York, New York, USA: ACM Press, 2012. http://dx.doi.org/10.1145/2396636.2396659.

Full text
APA, Harvard, Vancouver, ISO, and other styles
