Scientific literature on the topic "Cross-modal bias paradigm"


Consult thematic lists of journal articles, books, theses, conference reports, and other academic sources on the topic "Cross-modal bias paradigm".


You can also download the full text of a scholarly publication in PDF format and read its abstract online whenever this information is included in the metadata.

Journal articles on the topic "Cross-modal bias paradigm"

1

Proops, Leanne, and Karen McComb. "Cross-modal individual recognition in domestic horses (Equus caballus) extends to familiar humans." Proceedings of the Royal Society B: Biological Sciences 279, no. 1741 (May 16, 2012): 3131–38. http://dx.doi.org/10.1098/rspb.2012.0626.

Abstract:
It has recently been shown that some non-human animals can cross-modally recognize members of their own taxon. What is unclear is just how plastic this recognition system can be. In this study, we investigate whether an animal, the domestic horse, is capable of spontaneous cross-modal recognition of individuals from a morphologically very different species. We also provide the first insights into how cross-modal identity information is processed by examining whether there are hemispheric biases in this important social skill. In our preferential looking paradigm, subjects were presented with two people and playbacks of their voices to determine whether they were able to match the voice with the person. When presented with familiar handlers subjects could match the specific familiar person with the correct familiar voice. Horses were significantly better at performing the matching task when the congruent person was standing on their right, indicating marked hemispheric specialization (left hemisphere bias) in this ability. These results are the first to demonstrate that cross-modal recognition in animals can extend to individuals from phylogenetically very distant species. They also indicate that processes governed by the left hemisphere are central to the cross-modal matching of visual and auditory information from familiar individuals in a naturalistic setting.
2

Cai, Biye, Shizhong Cai, Hua He, Lu He, Yan Chen, and Aijun Wang. "Multisensory Enhancement of Cognitive Control over Working Memory Capture of Attention in Children with ADHD." Brain Sciences 13, no. 1 (December 29, 2022): 66. http://dx.doi.org/10.3390/brainsci13010066.

Abstract:
Attention deficit hyperactivity disorder (ADHD) is a common neurodevelopmental disorder in school-age children. Although it is well documented that ADHD in children is associated with impairment of executive functions, including working memory (WM) and inhibitory control, there is not yet a consensus on the relationship between ADHD and memory-driven attentional capture (i.e., representations in WM bias attention toward WM-matched distractors). The present study examined whether children with ADHD have sufficient cognitive control to modulate memory-driven attentional capture. Seventy-three school-age children (36 with ADHD and 37 matched typically developing (TD) children) were instructed to perform a visual search task while actively maintaining an item in WM. In this paradigm, the modality and the validity of the memory sample were manipulated. The results showed that under the visual WM encoding condition, no memory-driven attentional capture was observed in TD children, but significant capture was found in children with ADHD. In addition, under the audiovisual WM encoding condition, memory-matched distractors did not capture the attention of either group. The results indicate a deficit of cognitive control over memory-driven attentional capture in children with ADHD, which can be improved by multisensory WM encoding. These findings enrich our understanding of the relationship between ADHD and cognitive control and provide new insight into the influence of cross-modal processing on attentional guidance.
3

Farkas, Kinga, Zsófia Pálffy, and Bertalan Polner. "S61. Computational Modelling of Visual Motion Perception and Its Association with Schizotypal Traits." Schizophrenia Bulletin 46, Supplement_1 (April 2020): S56. http://dx.doi.org/10.1093/schbul/sbaa031.127.

Abstract:
Background: Psychotic symptoms might be explained by disturbances of information processing due to errors of inference during neural coding, and hierarchical models could advance our understanding of how impaired functioning at different levels of the processing hierarchy is associated with psychotic symptoms. However, in order to examine to what extent such alterations are temporary or stable, the psychometric reliability and validity of the measurements need to be established. Individual differences in visual perception were measured by responses to uncertain stimuli presented during a probabilistic associative learning task. Our novel contributions are the measurement of cross-modal (visual and acoustic) associative learning and the assessment of the psychometric properties of indicators derived from a perceptual decision task: we evaluate its internal consistency, test-retest reliability, and external validity as shown by associations with schizotypal traits.
Methods: Participants (32 healthy individuals, 13 men, age (SD) = 27.4 (9.4)) performed a perceptual decision task twice with a one-week delay. They were asked to indicate the direction of perceived motion of unambiguous and ambiguous visual stimuli (640 trials), which were preceded by visual and acoustic cues that were probabilistically associated with the motion direction and were congruent (both predict the same motion) or incongruent (cues predict different motion). Schizotypal traits were measured with the short version of the Oxford-Liverpool Inventory of Feelings and Experiences (O-LIFE) questionnaire, which showed good internal consistency and test-retest reliability (Cronbach's alpha: 0.71–0.83 for subscales; test-retest correlation for Cognitive Disorganization: r = 0.84, and for Unusual Experiences: r = 0.79).
Results: We found a significant difference in response reaction times between stimuli with high and low probability (t = -2.037; p = 0.044). Acoustic cues predicted the decision significantly more strongly for ambiguous stimuli in both sessions (session 1: t = 4.19, p < 0.001; session 2: t = 3.46, p = 0.002). Congruency of visual and acoustic cue pairs had no significant effect on response times for ambiguous stimuli. Reaction times and the bias towards reliance on auditory cues during perceptual decision making under uncertainty were stable over the two sessions (test-retest rho ranging from 0.56 to 0.72). Cognitive Disorganization scores showed a weak negative correlation with response time under uncertainty (session 1: r = -0.24; session 2: r = -0.28), and Unusual Experiences scores showed a weak negative correlation with the bias towards reliance on auditory cues (session 1: r = -0.21; session 2: r = -0.19). We did not find a relationship between general response speed and any O-LIFE subscale scores.
Discussion: The results show some intraindividual stability of individual differences in perceptual decision making as measured by our paradigm. In this small healthy sample, participants with higher schizotypal scores tended to have slower response speed under uncertainty and a greater bias towards reliance on auditory cues, which suggests it might be useful to measure these variables in clinical populations and to evaluate the effectiveness of therapeutic interventions or illness progression in follow-up studies. The preliminary results presented here derive from descriptive statistics of the behavioural data. Our research group is currently working on fitting a trial-by-trial hierarchical computational model, which includes the representation of uncertainty, to uncover more detailed individual differences, e.g. the time course of parameter changes during learning in a visual perception task.
4

Vicario, Carmelo Mario, Gaetano Rappo, Anna Maria Pepi, and Massimiliano Oliveri. "Timing Flickers across Sensory Modalities." Perception 38, no. 8 (January 1, 2009): 1144–51. http://dx.doi.org/10.1068/p6362.

Abstract:
In tasks requiring a comparison of the duration of a reference and a test visual cue, the spatial position of the test cue is likely to be implicitly coded, producing a form of congruency effect or introducing a response bias according to the environmental scale or its vectorial reference. The precise mechanism generating these perceptual shifts in subjective duration is not understood, although several studies suggest that spatial attentional factors may play a critical role. Here we use a duration comparison task within and across sensory modalities to examine whether temporal performance is also modulated when people are exposed to spatial distractors involving different sensory modalities. Different groups of healthy participants performed duration comparison tasks in separate sessions: a time comparison task of visual stimuli during exposure to spatially presented auditory distractors, and a time comparison task of auditory stimuli during exposure to spatially presented visual distractors. We found that the perceived duration of visual stimuli was biased depending on the spatial position of the auditory distractors: observers underestimated the duration of stimuli presented in the left spatial field, while there was a trend towards overestimating the duration of stimuli presented in the right spatial field. In contrast, timing of auditory stimuli was unaffected by exposure to visual distractors. These results support the existence of multisensory interactions between space and time, showing that, in cross-modal paradigms, the presence of auditory distractors can modify visuo-temporal perception but not vice versa. This asymmetry is discussed in terms of sensory-perceptual differences between the two systems.

Theses on the topic "Cross-modal bias paradigm"

1

Realdon, Olivia. "Differenze culturali nella percezione multimodale delle emozioni." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2012. http://hdl.handle.net/10281/37944.

Abstract:
The research question in the present study concerns how culture shapes the way in which simultaneous facial and vocal cues are combined in emotion perception. The question is not whether culture influences this process: cultures supply systems of meaning that make salient different core emotional themes, different sets of emotions, their ostensible expression, and action tendencies. Research therefore concerns not whether, but how and at what level of analysis culture shapes these processes (Matsumoto, 2001). Cultural variability was tested within the methodological framework of cultural priming studies (Matsumoto & Yoo, 2006). In this methodological option, culture is viewed not as consensual, enduring, and context-general, but as fragmented, fluctuating, and context-specific (situated cognition model; Oyserman & Sorensen, 2009). Bicultural individuals who, through enduring exposure to at least two cultures, possess the meaning systems and practices of both cultures can therefore switch between these cultural orientations, alternating between them depending on the cultural cues (cultural primes) available in the immediate context (cultural frame switching; Hong et al., 2000). The present research investigated cultural differences in the way visual and auditory cues of fear and disgust are combined in emotion perception by Italian-Japanese biculturals primed with Japanese or Italian cultural cues. Bicultural participants were randomly assigned to Italian or Japanese priming conditions, were shown dynamic faces and vocalizations expressing either congruent (i.e., fear-fear) or incongruent (i.e., fear-disgust) emotions, and were asked to identify the emotion expressed while ignoring one or the other modality (cross-modal bias paradigm; Bertelson & de Gelder, 2004).
The effect of to-be-ignored vocalization cues was larger for participants in the Japanese priming condition, while the effect of to-be-ignored dynamic face cues was larger for participants in the Italian priming condition. This pattern of results was also investigated within current perspectives on embodied cognition, which, regarding emotion perception, assume that perceivers subtly mimic a target's facial expression, so that contractions in the perceiver's face generate afferent muscular feedback from the face to the brain, and the perceiver uses this feedback to reproduce and thus understand the perceived expressions (Barsalou, 2009; Niedenthal, 2007). In other words, mimicry reflects internal simulation of the perceived emotion in order to facilitate its understanding. A mimicry-interfering manipulation (with the facial expressions of fear and disgust; Oberman, Winkielman & Ramachandran, 2007), applied while bicultural participants performed the same task described above, generated no cultural differences in the effect of to-be-ignored vocalizations, showing that the interference effect of vocalizations on faces turns out to be larger for participants in the Italian priming condition. Altogether, these results can be interpreted within the cultural syndromes highlighting the independent vs. interdependent and socially embedded nature of the self, which provide meaning systems that encourage and make available a different weighting of nonverbal cues in emotion perception, depending on whether they rely on more (or less) face exposure (meant as individual exposure) in modulating social relationships and less (or more) vocal exposure (more subtle and time-dependent than the face) in order to enhance individual standing and autonomy (vs. establish and maintain social harmony and interpersonal respect).
Current perspectives sketching how human cognitive functioning operates through a situated (Mesquita, Barrett, & Smith, 2010) and embodied (simulative) mind (Barsalou, 2009), and their implications for emotion perception, are briefly described as the theoretical framework guiding the research question addressed in the empirical contribution.
