
Journal articles on the topic 'Non-visual modalities'


Consult the top 50 journal articles for your research on the topic 'Non-visual modalities.'


You can also download the full text of each academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Scholtz, Desiree. "Visual and non-literal representations as academic literacy modalities." Southern African Linguistics and Applied Language Studies 37, no. 2 (September 6, 2019): 105–18. http://dx.doi.org/10.2989/16073614.2019.1617173.

2

Spiller, Mary Jane. "Exploring synaesthetes’ mental imagery abilities across multiple sensory modalities." Seeing and Perceiving 25 (2012): 219. http://dx.doi.org/10.1163/187847612x648459.

Abstract:
Previous research on the mental imagery abilities of synaesthetes has concentrated on visual and spatial imagery in synaesthetes with spatial forms (Price, 2009, 2010; Simner et al., 2008) and letter-colour synaesthesia (Spiller and Jansari, 2008). Though Barnett and Newell (2008) asked synaesthetes of all types to fill out a questionnaire on visual imagery, most of their synaesthetes reported some form of linguistic–colour synaesthesia. We extend the investigation of mental imagery to a wider variety of synaesthesia types and of sensory modalities, using a questionnaire study and several tests of visual and auditory mental imagery ability. Our results indicate that, as a group, synaesthetes report making greater use of mental imagery than non-synaesthetes in everyday activities. Furthermore, they self-report greater vividness of visual, auditory, tactile, and taste imagery than do non-synaesthetes. However, as a group the synaesthetes did not perform significantly better on the mental imagery tasks in either the visual or the auditory modality. These results have important implications for our understanding of synaesthesia, in relation to potential fundamental differences in the perceptual processing of synaesthetes and non-synaesthetes.
3

Dowlatshahi, K., and J. Dieschbourg. "Shift in the surgical treatment of non-palpable breast cancer: tactile to visual." Breast Cancer Online 9, no. 1 (January 2006): 1–10. http://dx.doi.org/10.1017/s1470903105003755.

Abstract:
An increasing number of small, early-stage breast cancers are detected by screening mammography. Diagnosis and determination of the prognostic factors may be made by either ultrasound (US) or stereotactically guided needle biopsy. Approximately 2000 stereotactic tables are installed at various medical centers throughout the United States, and a significant number in other countries where breast cancer is common. Many surgeons and interventional radiologists are trained in the use of this technology for diagnostic purposes. Employing the same technology, these physicians may be trained to treat selected breast cancers with laser energy percutaneously. Experimental and clinical reports to date indicate the technique to be safe. High-resolution imaging modalities including grayscale and color Doppler US, magnetic resonance imaging, mammography and, when necessary, needle biopsy will confirm the tumor kill. Newer imaging modalities such as magnetic resonance spectroscopy may also provide additional confirmation of total tumor ablation.
4

Brodoehl, Stefan, Carsten Klingner, Denise Schaller, and Otto W. Witte. "Plasticity During Short-Term Visual Deprivation." Zeitschrift für Psychologie 224, no. 2 (April 2016): 125–32. http://dx.doi.org/10.1027/2151-2604/a000246.

Abstract:
During everyday experiences, people sometimes close their eyes to better understand spoken words, to listen to music, or when touching textures and objects. A plausible explanation for this observation is that a reversible loss of vision changes the perceptual function of the remaining non-deprived sensory modalities. Within this work, we discuss general aspects of the effects of visual deprivation on the perceptual performance of the non-deprived sensory modalities, with a focus on the time dependency of these modifications. In light of ambiguous findings concerning the effects of short-term visual deprivation, and because recent literature provides evidence that the act of blindfolding can change the function of the non-deprived senses within seconds, we performed additional psychophysiological and functional magnetic resonance imaging (fMRI) analyses to provide new insight into this matter. Eye closure for several seconds had a substantial impact on tactile perception, probably caused by an unmasking of preformed neuronal pathways.
5

Tobin, Michael, Nicholas Bozic, Graeme Douglas, and John Greaney. "How non-visual modalities can help the young visually impaired child to succeed in visual and other tasks." British Journal of Visual Impairment 14, no. 1 (January 1996): 11–17. http://dx.doi.org/10.1177/026461969601400103.

6

Chenu, Olivier, Yohan Payan, P. Hlavackova, Jacques Demongeot, Francis Cannard, Bruno Diot, and Nicolas Vuillerme. "Pressure Sores Prevention for Paraplegic People: Effects of Visual, Auditory and Tactile Supplementations on Overpressures Distribution in Seated Posture." Applied Bionics and Biomechanics 9, no. 1 (2012): 61–67. http://dx.doi.org/10.1155/2012/961524.

Abstract:
This paper presents a study of different informative modalities used as biofeedback in a perceptual supplementation device aimed at reducing overpressure in the buttock area. Visual, audio and lingual electrotactile modalities are analysed and compared with a non-biofeedback session. In conclusion, the sensory modalities have a positive and equal effect, but they are not judged equally in terms of comfort and of interference with other activities.
7

Pereira Reyes, Yasna, and Valerie Hazan. "English vowel perception by non-native speakers: impact of audio and visual training modalities." Onomázein Revista de lingüística filología y traducción, no. 51 (2021): 111–36. http://dx.doi.org/10.7764/onomazein.51.04.

Abstract:
The perception of the sounds of a second language (L2) presents difficulties for non-native speakers that can be reduced with training (Bradlow, Pisoni, Akahane-Yamada & Tohkura, 1997; Logan, Lively & Pisoni, 1991; Iverson & Evans, 2009). The aim of this study was to compare three English vowel perceptual training programmes using audio (A), audiovisual (AV) and video (V) modes in non-native speakers with Spanish as their native language (L1). Forty-seven learners of English with Spanish as L1 were allocated to three vowel training groups (AT, AVT, VT) and given five training sessions to assess their improvement in English vowel perception. Additionally, participants were recorded before and after training to measure their improvement in the production of English vowels. Results showed that participants improved their perception and production of English vowels regardless of training modality, with no evidence of a benefit of visual information. These results also suggest that there are large individual differences in the perception and production of L2 vowels, which may reflect a complex relationship between speech perception and production mechanisms.
8

Zhang, Tao, Yang Cong, Gan Sun, Qianqian Wang, and Zhenming Ding. "Visual Tactile Fusion Object Clustering." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 06 (April 3, 2020): 10426–33. http://dx.doi.org/10.1609/aaai.v34i06.6612.

Abstract:
Object clustering, which aims at grouping similar objects into one cluster with an unsupervised strategy, has been extensively studied among various data-driven applications. However, most existing state-of-the-art object clustering methods (e.g., single-view or multi-view clustering methods) only explore visual information, while ignoring one of the most important sensing modalities, i.e., tactile information, which can help capture different object properties and further boost the performance of the object clustering task. To effectively benefit both visual and tactile modalities for object clustering, in this paper we propose a deep Auto-Encoder-like Non-negative Matrix Factorization framework for visual-tactile fusion clustering. Specifically, deep matrix factorization constrained by an under-complete Auto-Encoder-like architecture is employed to jointly learn a hierarchical expression of the visual-tactile fusion data and to preserve the local structure of the data-generating distribution of the visual and tactile modalities. Meanwhile, a graph regularizer is introduced to capture the intrinsic relations of data samples within each modality. Furthermore, we propose a modality-level consensus regularizer to effectively align the visual and tactile data in a common subspace in which the gap between visual and tactile data is mitigated. For the model optimization, we present an efficient alternating minimization strategy to solve our proposed model. Finally, we conduct extensive experiments on public datasets to verify the effectiveness of our framework.
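At its core, this framework builds on non-negative matrix factorization (NMF) with a graph regularizer. The sketch below shows a minimal, single-view version of that building block using classic multiplicative updates in the style of graph-regularized NMF; it is not the authors' deep auto-encoder-like model, and the function name and toy data are our own assumptions.

```python
import numpy as np

def graph_regularized_nmf(X, W, k, lam=0.1, iters=200, eps=1e-9):
    """Factorize X (features x samples, non-negative) as X ~ U @ V, while a
    graph penalty tr(V L V^T), with L = D - W, keeps the sample
    representations in V smooth over the affinity graph W."""
    m, n = X.shape
    rng = np.random.default_rng(0)
    U, V = rng.random((m, k)), rng.random((k, n))
    D = np.diag(W.sum(axis=1))                    # degree matrix of the graph
    for _ in range(iters):
        U *= (X @ V.T) / (U @ V @ V.T + eps)      # standard NMF update for U
        # graph terms enter V's update: +W part in numerator, +D in denominator
        V *= (U.T @ X + lam * V @ W) / (U.T @ U @ V + lam * V @ D + eps)
    return U, V

# Toy "fusion": stack visual and tactile feature blocks for the same 30 samples,
# then read a cluster label off the dominant component of each sample.
rng = np.random.default_rng(1)
X = np.vstack([rng.random((40, 30)), rng.random((20, 30))])  # visual + tactile rows
W = np.ones((30, 30)) - np.eye(30)                           # trivial sample-affinity graph
U, V = graph_regularized_nmf(X, W, k=3)
labels = V.argmax(axis=0)                                    # one cluster id per sample
```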
9

Bautista, Melissa, Nayyar Saleem, and Ian A. Anderson. "Current and novel non-invasive imaging modalities in vascular neurosurgical practice." British Journal of Hospital Medicine 81, no. 12 (December 2, 2020): 1–10. http://dx.doi.org/10.12968/hmed.2020.0550.

Abstract:
Radiological investigations are a powerful tool in the assessment of patients with intracranial vascular anomalies. ‘Visual’ assessment of neurovascular lesions is central to their diagnosis, monitoring, prognostication and management. Computed tomography and magnetic resonance imaging are the two principal non-invasive imaging modalities used in clinical practice for the assessment of the cerebral vasculature, but these techniques continue to evolve, enabling clinicians to gain greater insights into neurovascular pathology and pathophysiology. This review outlines both established and novel imaging modalities used in modern neurovascular practice and their clinical applications.
10

Gonsior, Barbara, Christian Landsiedel, Nicole Mirnig, Stefan Sosnowski, Ewald Strasser, Jakub Złotowski, Martin Buss, et al. "Impacts of Multimodal Feedback on Efficiency of Proactive Information Retrieval from Task-Related HRI." Journal of Advanced Computational Intelligence and Intelligent Informatics 16, no. 2 (March 20, 2012): 313–26. http://dx.doi.org/10.20965/jaciii.2012.p0313.

Abstract:
This work is a first step towards an integration of multimodality with the aim of making efficient use of both human-like and non-human-like feedback modalities in order to optimize proactive information retrieval from task-related Human-Robot Interaction (HRI) in human environments. The presented approach combines the human-like modalities of speech and emotional facial mimicry with non-human-like modalities. The proposed non-human-like modalities are a screen displaying retrieved knowledge of the robot to the human and a pointer mounted above the robot head for indicating directions and referring to objects in shared visual space as an equivalent for arm and hand gestures. Initially, pre-interaction feedback is explored in an experiment investigating different approach behaviors in order to find socially acceptable trajectories to increase the success of interactions and thus the efficiency of information retrieval. Secondly, pre-evaluated human-like modalities are introduced. First results of a multimodal feedback study are presented in the context of the IURO (Interactive Urban Robot, http://www.iuro-project.eu) project, where a robot asks for its way to a predefined goal location.
11

Bolden, Galina B. "Multiple modalities in collaborative turn sequences." Gesture 3, no. 2 (December 31, 2003): 187–212. http://dx.doi.org/10.1075/gest.3.2.04bol.

Abstract:
The article investigates resources used by parties in interaction to successfully complete each other's utterances. Among the different ways in which recipients can demonstrate their understanding, collaborative completions are the most convincing, since they display not only recipients' understanding of the stance or the import of a turn-in-progress, but also a minute analysis of the action itself. The article starts with a discussion of syntactic and action-sequential features of talk that can account for collaborative turn sequences and then focuses on other, non-verbal resources that may be made relevant at particular interactional junctures. An analysis of several instances of collaborative completions illustrates the use of visually accessible features of the surround, gestural and postural conduct, and gaze direction in collaborative turn sequences. It is suggested that an interplay of these multiple modalities enables the participants to collaborate in co-constructing single syntactic units of talk. The electronic edition of this article includes audio-visual data.
12

Maimon, Neta B., Dominique Lamy, and Zohar Eitan. "Crossmodal Correspondence Between Tonal Hierarchy and Visual Brightness: Associating Syntactic Structure and Perceptual Dimensions Across Modalities." Multisensory Research 33, no. 8 (September 15, 2020): 805–36. http://dx.doi.org/10.1163/22134808-bja10006.

Abstract:
Crossmodal correspondences (CMC) systematically associate perceptual dimensions in different sensory modalities (e.g., auditory pitch and visual brightness), and affect perception, cognition, and action. While previous work typically investigated associations between basic perceptual dimensions, here we present a new type of CMC, involving a high-level, quasi-syntactic schema: music tonality. Tonality governs most Western music and regulates stability and tension in melodic and harmonic progressions. Musicians have long associated tonal stability with non-auditory domains, yet such correspondences have hardly been investigated empirically. Here, we investigated CMC between tonal stability and visual brightness, in musicians and in non-musicians, using explicit and implicit measures. On the explicit test, participants heard a tonality-establishing context followed by a probe tone, and matched each probe to one of several circles, varying in brightness. On the implicit test, we applied the Implicit Association Test to auditory (tonally stable or unstable sequences) and visual (bright or dark circles) stimuli. The findings indicate that tonal stability is associated with visual brightness both explicitly and implicitly. They further suggest that this correspondence depends only partially on conceptual musical knowledge, as it also operates through fast, unintentional, and arguably automatic processes in musicians and non-musicians alike. By showing that abstract musical structure can establish concrete connotations to a non-auditory perceptual domain, our results open a hitherto unexplored avenue for research, associating syntactical structure with connotative meaning.
13

Sanderson, Penelope M., and Marcus O. Watson. "From Information Content to Auditory Display with Ecological Interface Design: Prospects and Challenges." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 49, no. 3 (September 2005): 259–63. http://dx.doi.org/10.1177/154193120504900310.

Abstract:
We examine how Ecological Interface Design (EID) might better bridge the gap from analysis to design by taking different modalities into account. Whereas almost all previous research using EID has focused on visual displays, attempts to extend the use of EID to non-visual modalities have revealed hidden assumptions that need to be made explicit and questioned. In this paper we explore the potential for EID to support a systematic process for the design of auditory displays, illustrating our argument with the design of auditory displays to support anaesthesia monitoring. We propose a set of steps that analysts might take to move more deliberately and effectively from analysis to design with EID.
14

Wong, Wai Leung, and Urs Maurer. "The effects of input and output modalities on language switching between Chinese and English." Bilingualism: Language and Cognition 24, no. 4 (March 17, 2021): 719–29. http://dx.doi.org/10.1017/s136672892100002x.

Abstract:
Language control is important for bilinguals to produce words in the right language. While most previous studies investigated language control using visual stimuli with vocal responses, language control regarding auditory stimuli and manual responses was rarely examined. In the present study, an alternating language switching paradigm was used to investigate the language control mechanism under two input modalities (visual and auditory) and two output modalities (manual and vocal) by measuring switch costs in both error percentage and reaction time (RT) in forty-eight Cantonese–English early bilinguals. Results showed that higher switch costs in RT were found with auditory stimuli than with visual stimuli, possibly due to shorter preparation time with auditory stimuli. In addition, switch costs in RT and error percentage could be obtained not only in speaking, but also in handwriting. Therefore, language control mechanisms, such as inhibition of the non-target language, may be shared between speaking and handwriting.
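A switch cost of the kind measured here is simply mean performance on language-switch trials minus mean performance on language-repeat trials. Below is a minimal sketch of the reaction-time version; the data layout and function name are our own assumptions, not the paper's analysis code.

```python
import numpy as np

def rt_switch_cost(rts, languages, correct):
    """Mean correct-trial RT on switch trials minus mean correct-trial RT on
    repeat trials; the first trial is neither a switch nor a repeat."""
    rts = np.asarray(rts, float)
    langs = np.asarray(languages)
    ok = np.asarray(correct, bool)
    is_switch = np.r_[False, langs[1:] != langs[:-1]]  # language differs from trial n-1
    valid = ok & np.r_[False, ok[:-1]]                 # current and previous trial correct
    return rts[valid & is_switch].mean() - rts[valid & ~is_switch].mean()

# Toy alternating-runs sequence: a positive cost means slower right after a switch.
cost = rt_switch_cost([812, 745, 900, 760, 940, 770],
                      ['EN', 'EN', 'ZH', 'ZH', 'EN', 'EN'],
                      [True] * 6)                      # ~161.7 ms here
```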
15

Darki, Farzaneh, and James Rankin. "Perceptual rivalry with vibrotactile stimuli." Attention, Perception, & Psychophysics 83, no. 6 (April 22, 2021): 2613–24. http://dx.doi.org/10.3758/s13414-021-02278-1.

Abstract:
In perceptual rivalry, ambiguous sensory information leads to dynamic changes in the perceptual interpretation of fixed stimuli. This phenomenon occurs when participants receive sensory stimuli that support two or more distinct interpretations; this results in spontaneous alternations between possible perceptual interpretations. Perceptual rivalry has been widely studied across different sensory modalities including vision, audition, and, to a limited extent, the tactile domain. Common features of perceptual rivalry across various ambiguous visual and auditory paradigms characterize the randomness of switching times and their dependence on input strength manipulations (Levelt’s propositions). It is still unclear whether the general characteristics of perceptual rivalry are preserved with tactile stimuli. This study aims to introduce a simple tactile stimulus capable of generating perceptual rivalry and explores whether general features of perceptual rivalry from other modalities extend to the tactile domain. Our results confirm that Levelt’s proposition II extends to tactile bistability, and that the stochastic characteristics of irregular perceptual alternations agree with non-tactile modalities. An analysis of correlations between subsequent perceptual phases reveals a significant positive correlation at lag 1 (as found in visual bistability), and a negative correlation at lag 2 (in contrast with visual bistability).
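The serial-dependence analysis in the final sentence can be sketched as lagged Pearson correlations over the sequence of perceptual phase durations. The sample values below are invented for illustration.

```python
import numpy as np

def lag_correlation(durations, lag):
    """Pearson correlation between each phase duration and the one `lag` phases later."""
    d = np.asarray(durations, float)
    return np.corrcoef(d[:-lag], d[lag:])[0, 1]

phases = [2.1, 3.4, 1.8, 2.9, 2.2, 4.0, 1.5, 3.1]  # seconds per percept (made up)
r_lag1 = lag_correlation(phases, 1)  # positive in the reported tactile data
r_lag2 = lag_correlation(phases, 2)  # negative, unlike visual bistability
```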
16

Heikkilä, Jenni, Kimmo Alho, and Kaisa Tiippana. "Semantically Congruent Visual Stimuli Can Improve Auditory Memory." Multisensory Research 30, no. 7-8 (2017): 639–51. http://dx.doi.org/10.1163/22134808-00002584.

Abstract:
We investigated the effects of audiovisual semantic congruency on recognition memory performance. It has been shown previously that memory performance is better for semantically congruent stimuli that are presented together in different modalities (e.g., a dog’s bark with a picture of the dog) during encoding, compared to stimuli presented together with an incongruent or non-semantic stimulus across modalities. We wanted to clarify whether this congruency effect is also present when the effects of response bias and uncertainty of stimulus type are removed. The participants memorized auditory or visual stimuli (sounds, spoken words or written words), which were either presented with a semantically congruent stimulus in the other modality or presented alone during encoding. This experimental paradigm allowed us to utilize signal detection theory in the performance analysis. In addition, it enabled us to eliminate possible effects caused by intermingling congruent stimuli with incongruent or non-semantic stimuli, as previously done in other studies. Memory of sounds was facilitated when they were accompanied by semantically congruent pictures or written words, in comparison to sounds presented in isolation. Memory of spoken words was facilitated by semantically congruent pictures. However, written words did not facilitate memory of spoken words, or vice versa. These results suggest that semantically congruent verbal and non-verbal visual stimuli presented in tandem with auditory counterparts can enhance the precision of auditory encoding, except when the stimuli in each modality are both verbal.
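The signal detection analysis mentioned above rests on the standard sensitivity index d' = z(hit rate) - z(false-alarm rate). A minimal sketch follows; the log-linear correction for extreme rates is our choice, not necessarily the authors'.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate), with a log-linear correction
    so that hit or false-alarm rates of exactly 0 or 1 stay finite."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# e.g. recognition of sounds encoded with a congruent picture: 40 hits, 10 misses,
# 12 false alarms, 38 correct rejections -> d' of about 1.5
print(round(d_prime(40, 10, 12, 38), 2))
```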
17

Chabrolles, Laura, Imen Ben Ammar, Marie S. A. Fernandez, Nicolas Boyer, Joël Attia, Paulo J. Fonseca, M. Clara P. Amorim, and Marilyn Beauchaud. "Appraisal of unimodal cues during agonistic interactions in Maylandia zebra." PeerJ 5 (August 1, 2017): e3643. http://dx.doi.org/10.7717/peerj.3643.

Abstract:
Communication is essential during social interactions, including animal conflicts, and it is often a complex process involving multiple sensory channels or modalities. To better understand how different modalities interact during communication, it is fundamental to study the behavioural responses to both the composite multimodal signal and each unimodal component with adequate experimental protocols. Here we test how an African cichlid, which communicates with multiple senses, responds to different sensory stimuli in a socially relevant scenario. We tested Maylandia zebra males with isolated chemical (urine or holding water, both coming from dominant males), visual (real opponent or video playback) and acoustic (agonistic sounds) cues during agonistic interactions. We showed that (1) these fish relied mostly on the visual modality, showing increased aggressiveness in response to the sight of a real contestant but no response to urine or agonistic sounds presented separately, (2) video playback in our study did not appear appropriate for testing the visual modality and needs more technical prospecting, and (3) holding water provoked territorial behaviours and seems promising for investigating the role of the chemical channel in this species. Our findings suggest that unimodal signals are non-redundant, but how different sensory modalities interplay during communication remains largely unknown in fish.
18

Räty, Silja, Carolin Borrmann, Giuseppe Granata, Lizbeth Cárdenas-Morales, Ariel Schoenfeld, Michael Sailer, Katri Silvennoinen, et al. "Non-invasive electrical brain stimulation for vision restoration after stroke: An exploratory randomized trial (REVIS)." Restorative Neurology and Neuroscience 39, no. 3 (August 3, 2021): 221–35. http://dx.doi.org/10.3233/rnn-211198.

Abstract:
Background: Occipital strokes often cause permanent homonymous hemianopia, leading to significant disability. In previous studies, non-invasive electrical brain stimulation (NIBS) has improved vision after optic nerve damage and, in combination with training, after stroke. Objective: We explored different NIBS modalities for rehabilitation of hemianopia after chronic stroke. Methods: In a randomized, double-blinded, sham-controlled, three-armed trial, altogether 56 patients with homonymous hemianopia were recruited. The three experiments were: i) repetitive transorbital alternating current stimulation (rtACS, n = 8) vs. rtACS with prior cathodal transcranial direct current stimulation over the intact visual cortex (tDCS/rtACS, n = 8) vs. sham (n = 8); ii) rtACS (n = 9) vs. sham (n = 9); and iii) tDCS of the visual cortex (n = 7) vs. sham (n = 7). Visual functions were evaluated before and after the intervention, and after eight weeks of follow-up. The primary outcome was change in the visual field, assessed by high-resolution and standard perimetries. The individual modalities were compared within each experimental arm. Results: Primary outcomes in Experiments 1 and 2 were negative. The only significant between-group change was observed in Experiment 3, where tDCS increased the visual field of the contralesional eye compared to sham. tDCS/rtACS improved dynamic vision, reading, and the visual field of the contralesional eye, but was not superior to the other groups. rtACS alone increased foveal sensitivity but was otherwise ineffective. All trial-related procedures were tolerated well. Conclusions: This exploratory trial showed safety but no main effect of NIBS on vision restoration after stroke. However, tDCS and combined tDCS/rtACS induced improvements in visually guided performance that need to be confirmed in larger-sample trials. NCT01418820 (clinicaltrials.gov)
19

Kondo, Noriko, Ei-Ichi Izawa, and Shigeru Watanabe. "Crows cross-modally recognize group members but not non-group members." Proceedings of the Royal Society B: Biological Sciences 279, no. 1735 (January 4, 2012): 1937–42. http://dx.doi.org/10.1098/rspb.2011.2419.

Abstract:
Recognizing other individuals by integrating different sensory modalities is a crucial ability of social animals, including humans. Although cross-modal individual recognition has been demonstrated in mammals, the extent of its use by birds remains unknown. Herein, we report the first evidence of cross-modal recognition of group members by a highly social bird, the large-billed crow (Corvus macrorhynchos). A cross-modal expectancy violation paradigm was used to test whether crows were sensitive to identity congruence between visual presentation of a group member and the subsequent playback of a contact call. Crows looked more rapidly and for a longer duration when the visual and auditory stimuli were incongruent than when congruent. Moreover, these responses were not observed with non-group member stimuli. These results indicate that crows spontaneously associate visual and auditory information of group members but not of non-group members, which is a demonstration of cross-modal audiovisual recognition of group members in birds.
20

Ide, Takeshi, Mariko Ishikawa, Kazuo Tsubota, and Masaru Miyao. "The Effect of 3D Visual Simulator on Children’s Visual Acuity - A Pilot Study Comparing Two Different Modalities." Open Ophthalmology Journal 7, no. 1 (October 18, 2013): 69–78. http://dx.doi.org/10.2174/1874364101307010069.

Abstract:
Purpose: To evaluate the efficacy of two non-surgical interventions for vision improvement in children. Methods: A prospective, randomized, pilot study comparing the fogging method and the use of a head-mounted 3D display. Subjects were children between 5 and 15 years old, with normal best corrected visual acuity (BCVA) and up to -3D myopia. Subjects played a video game as near point work and received one of the two methods of treatment. Measurements of uncorrected far visual acuity (UCVA), refraction with an autorefractometer, and subjective accommodative amplitude were taken three times: at baseline, after the near work, and after the treatment. Results: Both methods, applied after near work, improved UCVA. The head-mounted 3D display group showed significant improvement in UCVA and achieved better UCVA than at baseline. The fogging group showed improvement in subjective accommodative amplitude. While the 3D display group did not show a change in refraction, the fogging group's myopic refraction increased significantly, indicating a myopic shift of the eyes after near work and treatment. Discussion: Despite our lack of clear knowledge of the mechanisms, both methods improved UCVA after the treatments. The improvement in UCVA was not correlated with measured refraction values. Conclusion: UCVA after near work can be improved by repeated near and distant accommodation through fogging and 3D image viewing, although to different degrees. Further investigation of the mechanisms of improvement and their clinical significance is warranted.
21

Greenlee, Mark W., Sebastian M. Frank, Mariia Kaliuzhna, Olaf Blanke, Frank Bremmer, Jan Churan, Luigi F. Cuturi, Paul R. MacNeilage, and Andrew T. Smith. "Multisensory Integration in Self Motion Perception." Multisensory Research 29, no. 6-7 (2016): 525–56. http://dx.doi.org/10.1163/22134808-00002527.

Abstract:
Self motion perception involves the integration of visual, vestibular, somatosensory and motor signals. This article reviews the findings from single unit electrophysiology, functional and structural magnetic resonance imaging and psychophysics to present an update on how the human and non-human primate brain integrates multisensory information to estimate one’s position and motion in space. The results indicate that there is a network of regions in the non-human primate and human brain that processes self motion cues from the different sense modalities.
22

Luqman, Hamzah, and El-Sayed M. El-Alfy. "Towards Hybrid Multimodal Manual and Non-Manual Arabic Sign Language Recognition: mArSL Database and Pilot Study." Electronics 10, no. 14 (July 20, 2021): 1739. http://dx.doi.org/10.3390/electronics10141739.

Abstract:
Sign languages are the main visual communication medium between hard-of-hearing people and their societies. Similar to spoken languages, they are not universal and vary from region to region, but they are relatively under-resourced. Arabic sign language (ArSL) is one of these languages that has attracted increasing attention in the research community. However, most of the existing and available works on sign language recognition systems focus on manual gestures, ignoring other non-manual information, such as facial expressions, that is needed for other language signals. One of the main challenges of not considering these modalities is the lack of suitable datasets. In this paper, we propose a new multi-modality ArSL dataset that integrates various types of modalities. It consists of 6748 video samples of fifty signs performed by four signers and collected using Kinect V2 sensors. This dataset will be freely available for researchers to develop and benchmark their techniques for further advancement of the field. In addition, we evaluated the fusion of spatial and temporal features of different modalities, manual and non-manual, for sign language recognition using state-of-the-art deep learning techniques. This fusion boosted the accuracy of the recognition system in signer-independent mode by 3.6% compared with manual gestures alone.
23

Simone, Ashley N., Anne-Claude V. Bédard, David J. Marks, and Jeffrey M. Halperin. "Good Holders, Bad Shufflers: An Examination of Working Memory Processes and Modalities in Children with and without Attention-Deficit/Hyperactivity Disorder." Journal of the International Neuropsychological Society 22, no. 1 (November 17, 2015): 1–11. http://dx.doi.org/10.1017/s1355617715001010.

Abstract:
The aim of this study was to examine working memory (WM) modalities (visual-spatial and auditory-verbal) and processes (maintenance and manipulation) in children with and without attention-deficit/hyperactivity disorder (ADHD). The sample consisted of 63 8-year-old children with ADHD and an age- and sex-matched non-ADHD comparison group (N=51). Auditory-verbal and visual-spatial WM were assessed using the Digit Span and Spatial Span subtests from the Wechsler Intelligence Scale for Children Integrated - Fourth Edition. WM maintenance and manipulation were assessed via forward and backward span indices, respectively. Data were analyzed using a 3-way Group (ADHD vs. non-ADHD) × Modality (Auditory-Verbal vs. Visual-Spatial) × Condition (Forward vs. Backward) Analysis of Variance (ANOVA). Secondary analyses examined differences between Combined and Predominantly Inattentive ADHD presentations. Significant Group × Condition (p=.02) and Group × Modality (p=.03) interactions indicated differentially poorer performance by those with ADHD on backward relative to forward and visual-spatial relative to auditory-verbal tasks, respectively. The 3-way interaction was not significant. Analyses targeting ADHD presentations yielded a significant Group × Condition interaction (p=.009), such that children with ADHD-Predominantly Inattentive Presentation performed differentially more poorly on backward relative to forward tasks compared to the children with ADHD-Combined Presentation. Findings indicate a specific pattern of WM weaknesses (i.e., in WM manipulation and visual-spatial tasks) for children with ADHD. Furthermore, differential patterns of WM performance were found for children with ADHD-Predominantly Inattentive versus Combined Presentations. (JINS, 2016, 22, 1–11)
24

To, Phuong Thanh, and David Grierson. "An application of measuring visual and non-visual sensorial experiences of nature for children within primary school spaces." Archnet-IJAR: International Journal of Architectural Research 14, no. 2 (October 31, 2019): 167–86. http://dx.doi.org/10.1108/arch-05-2019-0139.

Abstract:
Purpose: Proximity to nature is essential to a child’s development. Well-designed educational environments are crucial to supporting this proximity, particularly in the early years of schooling. The purpose of this paper is to measure children’s experiences of nature within three primary school spaces at various locations in Glasgow, Scotland. A methodology for measuring children’s visual and non-visual sensory experiences is developed to evaluate the connection between naturalness values and spatial environmental qualities across varying “Child–Nature–Distance” ranges. Design/methodology/approach: The approach associates children’s multiple layers of sensory modalities with particular attributes of the spatial environment within primary schools to determine the level of naturalness that children experience, in both internal and external spaces. Findings: The study finds that children’s experiences are significantly influenced by factors relating to the urban setting, built environment master planning, architectural features and interior design. Research limitations/implications: Beyond primary school architecture for children, the methodology could be developed in future work to address the comprehensive human–nature relationship under the impact of the physical and societal features of other, more diverse environments. The present study alone, however, cannot quantitatively establish what primary school architecture is adequate for children’s multi-sensorial experience of the natural environment. More qualitative research is recommended to examine how nature as a “cause” is transformed into nature as “perceived” in users’ cognitions, attitudes and behaviours within close proximity to nature. Practical implications: The methodology for measuring visual and non-visual sensorial experiences of nature, and its application to children’s learning and leisure spaces within primary school architecture, could offer a tool for assessing current schools and evaluating future design proposals for new schools. Originality/value: The authors argue that the application of this method can support design decision-making for refurbishing schools at the micro level and for planning urban development involving proposals for new schools at the macro level.
25

Mansouri-Benssassi, Esma, and Juan Ye. "Synch-Graph: Multisensory Emotion Recognition Through Neural Synchrony via Graph Convolutional Networks." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 02 (April 3, 2020): 1351–58. http://dx.doi.org/10.1609/aaai.v34i02.5491.

Abstract:
Human emotions are essentially multisensory: emotional states are conveyed through multiple modalities such as facial expression, body language, and non-verbal and verbal signals. Multimodal or multisensory learning is therefore crucial for recognising emotions and interpreting social signals. Existing multisensory emotion recognition approaches focus on extracting features from each modality, while ignoring the importance of constant interaction and co-learning between modalities. In this paper, we present a novel bio-inspired approach based on neural synchrony in audio-visual multisensory integration in the brain, named Synch-Graph. We model multisensory interaction using spiking neural networks (SNN) and explore the use of Graph Convolutional Networks (GCN) to represent and learn neural synchrony patterns. We hypothesise that modelling interactions between modalities will improve the accuracy of emotion recognition. We have evaluated Synch-Graph on two state-of-the-art datasets and achieved overall accuracies of 98.3% and 96.82%, which are significantly higher than those of existing techniques.
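The GCN ingredient can be illustrated with the standard propagation rule H' = relu(D^(-1/2) (A + I) D^(-1/2) H W) of Kipf and Welling. This generic sketch is not the Synch-Graph model itself; the toy synchrony graph and dimensions are our own.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: add self-loops, symmetrically normalize the
    adjacency, aggregate neighbour features, then apply a ReLU."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(A_hat.sum(axis=1) ** -0.5)
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ H @ W, 0.0)

# Toy synchrony graph over four neuron populations with three features each,
# projected to two output features.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], float)
H_out = gcn_layer(A, rng.random((4, 3)), rng.random((3, 2)))
```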
26

Dupont, Sophie, Jérôme Aubin, and Lucie Ménard. "Study of the McGurk effect in 4 and 5-year-old French Canadian children." ZAS Papers in Linguistics 40 (January 1, 2005): 1–17. http://dx.doi.org/10.21248/zaspil.40.2005.254.

Abstract:
It has been shown that visual cues play a crucial role in the perception of vowels and consonants. Conflicting consonantal stimuli presented in the visual and auditory modalities can even result in the emergence of a third perceptual unit (the McGurk effect). From a developmental point of view, several studies report that newborns can associate the image of a face uttering a given vowel with the auditory signal corresponding to this vowel; visual cues are thus used by newborns. Despite the large number of studies carried out with adult speakers and newborns, very little work has been conducted with preschool-aged children. This contribution is aimed at describing the use of auditory and visual cues by 4 and 5-year-old French Canadian speakers, compared to adult speakers, in the identification of voiced consonants. Audiovisual recordings of a French Canadian speaker uttering the sequences [aba], [ada], [aga], [ava], [ibi], [idi], [igi], [ivi] were carried out. The acoustic and visual signals were extracted and analysed so that conflicting and non-conflicting stimuli between the two modalities were obtained. The resulting stimuli were presented as a perceptual test to eight 4 and 5-year-old French Canadian speakers and ten adults in three conditions: visual-only, auditory-only, and audiovisual. Results show that, even though visual cues have a significant effect on the identification of the stimuli for adults and children, children are less sensitive to visual cues in the audiovisual condition. Such results shed light on the role of multimodal perception in the emergence and refinement of the phonological system in children.
27

Freelance, Christopher B., Simon M. Tierney, Juanita Rodriguez, Devi M. Stuart-Fox, Bob B. M. Wong, and Mark A. Elgar. "The eyes have it: dim-light activity is associated with the morphology of eyes but not antennae across insect orders." Biological Journal of the Linnean Society 134, no. 2 (July 12, 2021): 303–15. http://dx.doi.org/10.1093/biolinnean/blab088.

Abstract:
The perception of cues and signals in visual, olfactory and auditory modalities underpins all animal interactions and provides crucial fitness-related information. Sensory organ morphology is under strong selection to optimize detection of salient cues and signals in a given signalling environment, the most well-studied example being selection on eye design in different photic environments. Many dim-light active species have larger compound eyes relative to body size, but little is known about differences in non-visual sensory organ morphology between diurnal and dim-light active insects. Here, we compare the micromorphology of the compound eyes (visual receptors) and antennae (olfactory and mechanical receptors) in representative pairs of day active and dim-light active species spanning multiple taxonomic orders of insects. We find that dim-light activity is associated with larger compound eye ommatidia and larger overall eye surface area across taxonomic orders but find no evidence that morphological adaptations that enhance the sensitivity of the eye in dim-light active insects are accompanied by morphological traits of the antennae that may increase sensitivity to olfactory, chemical or physical stimuli. This suggests that the ecology and natural history of species is a stronger driver of sensory organ morphology than is selection for complementary investment between sensory modalities.
28

Mas-Casadesús, Anna, and Elena Gherri. "Ignoring Irrelevant Information: Enhanced Intermodal Attention in Synaesthetes." Multisensory Research 30, no. 3-5 (2017): 253–77. http://dx.doi.org/10.1163/22134808-00002566.

Abstract:
Despite the fact that synaesthetes experience additional percepts during their inducer-concurrent associations that are often unrelated or irrelevant to their daily activities, they appear to be relatively unaffected by this potentially distracting information. This might suggest that synaesthetes are particularly good at ignoring irrelevant perceptual information coming from different sensory modalities. To investigate this hypothesis, the performance of a group of synaesthetes was compared to that of a matched non-synaesthete group in two different conflict tasks aimed at assessing participants’ abilities to ignore irrelevant information. In order to match the sensory modality of the task-irrelevant distractors (vision) with participants’ synaesthetic attentional filtering experience, we tested only synaesthetes experiencing at least one synaesthesia subtype triggering visual concurrents (e.g., grapheme–colour synaesthesia or sequence–space synaesthesia). Synaesthetes and controls performed a classic flanker task (FT) and a visuo-tactile cross-modal congruency task (CCT) in which they had to attend to tactile targets while ignoring visual distractors. While no differences were observed between synaesthetes and controls in the FT, synaesthetes showed reduced interference by the irrelevant distractors of the CCT. These findings provide the first direct evidence that synaesthetes might be more efficient than non-synaesthetes at dissociating conflicting information from different sensory modalities when the irrelevant modality correlates with their synaesthetic concurrent modality (here vision).
29

Macuch Silva, Vinicius, Judith Holler, Asli Ozyurek, and Seán G. Roberts. "Multimodality and the origin of a novel communication system in face-to-face interaction." Royal Society Open Science 7, no. 1 (January 2020): 182056. http://dx.doi.org/10.1098/rsos.182056.

Abstract:
Face-to-face communication is multimodal at its core: it consists of a combination of vocal and visual signalling. However, current evidence suggests that, in the absence of an established communication system, visual signalling, especially in the form of visible gesture, is a more powerful form of communication than vocalization and therefore likely to have played a primary role in the emergence of human language. This argument is based on experimental evidence of how vocal and visual modalities (i.e. gesture) are employed to communicate about familiar concepts when participants cannot use their existing languages. To investigate this further, we introduce an experiment where pairs of participants performed a referential communication task in which they described unfamiliar stimuli in order to reduce reliance on conventional signals. Visual and auditory stimuli were described in three conditions: using visible gestures only, using non-linguistic vocalizations only and given the option to use both (multimodal communication). The results suggest that even in the absence of conventional signals, gesture is a more powerful mode of communication compared with vocalization, but that there are also advantages to multimodality compared to using gesture alone. Participants with an option to produce multimodal signals had comparable accuracy to those using only gesture, but gained an efficiency advantage. The analysis of the interactions between participants showed that interactants developed novel communication systems for unfamiliar stimuli by deploying different modalities flexibly to suit their needs and by taking advantage of multimodality when required.
30

Bruni, Luis Emilio, and Sarune Baceviciute. "On the embedded cognition of non-verbal narratives." Sign Systems Studies 42, no. 2/3 (December 5, 2014): 359–75. http://dx.doi.org/10.12697/sss.2014.42.2-3.09.

Abstract:
Acknowledging that narratives are an important resource in human communication and cognition, the focus of this article is on the cognitive aspects of involvement with visual and auditory non-verbal narratives, particularly in relation to the newest immersive media and digital interactive representational technologies. We consider three relevant trends in narrative studies that have emerged in the 60 years of cognitive and digital revolution. The issue at hand could have implications for developmental psychology, pedagogics, cognitive science, cognitive psychology, ethology and evolutionary studies of language. In particular, it is of great importance for narratology in relation to interactive media and new representational technologies. Therefore we outline a research agenda for a bio-cognitive semiotic interdisciplinary investigation into how people understand, react to, and interact with narratives that are communicated through non-verbal modalities.
31

Kirnosova, Nadiia, and Yuliia Fedotova. "Chinese and Japanese Characters from the Perspective of Multimodal Studies." Athens Journal of Philology 8, no. 4 (September 9, 2021): 253–68. http://dx.doi.org/10.30958/ajp.8-4-1.

Abstract:
This article aims to demonstrate that a character can generate at least three different modalities simultaneously (visual, audial and vestibular) and influence a recipient in a deeper and more powerful way than a sign from a phonetic alphabet. To show this, we chose modern Chinese and Japanese characters as live signs and analyzed them functioning in texts with obvious utilitarian purposes, namely in advertisements. The main problem we were interested in during this research was the “information capacity” of a character. We find that any character exists in three dimensions simultaneously and generates three modalities at the same time. Its correspondence with morphemes opens two channels for encoding information: first, it brings a space for the audial modality through the acoustic form of a syllable, and then it opens a space for the visual modality through the graphical form of a character. The latter form implies a space for the vestibular modality because, as a “figure,” any character occupies its “ground” (a particular square area), which becomes a source of a sense of stability and symmetry, enriching linguistic messages with non-verbal information. Keywords: advertisement, character, information, mode, multimodality
32

Pike, Thomas W., Michael Ramsey, and Anna Wilkinson. "Environmentally induced changes to brain morphology predict cognitive performance." Philosophical Transactions of the Royal Society B: Biological Sciences 373, no. 1756 (August 13, 2018): 20170287. http://dx.doi.org/10.1098/rstb.2017.0287.

Abstract:
The relationship between the size and structure of a species' brain and its cognitive capacity has long interested scientists. Generally, this work relates interspecific variation in brain anatomy with performance on a variety of cognitive tasks. However, brains are known to show considerable short-term plasticity in response to a range of social, ecological and environmental factors. Despite this, we have a remarkably poor understanding of how this impacts on an animal's cognitive performance. Here, we non-invasively manipulated the relative size of brain regions associated with processing visual and chemical information in fish (the optic tectum and olfactory bulbs, respectively). We then tested performance in a cognitive task in which information from the two sensory modalities was in conflict. Although the fish could effectively use both visual and chemical information if presented in isolation, when they received cues from both modalities simultaneously, those with a relatively better developed optic tectum showed a greater reliance on visual information, while individuals with relatively better developed olfactory bulbs showed a greater reliance on chemical information. These results suggest that short-term changes in brain structure, possibly resulting from an attempt to minimize the costs of developing unnecessary but energetically expensive brain regions, may have marked effects on cognitive performance. This article is part of the theme issue ‘Causes and consequences of individual differences in cognitive abilities’.
33

Liu, Zheng-Wei, Chun-Li Chen, Xue-Hao Cui, and Pei-Quan Zhao. "Analysis of the etiologies, treatments and prognoses in children and adolescent vitreous hemorrhage." International Journal of Ophthalmology 14, no. 2 (February 18, 2021): 299–305. http://dx.doi.org/10.18240/ijo.2021.02.18.

Abstract:
AIM: To determine the etiologies, treatment modalities and visual outcomes of vitreous hemorrhage (VH) in patients from birth to 18 years of age. METHODS: A total of 262 eyes from 210 patients treated between January 2010 and September 2016 were included. All children underwent an appropriate ocular and systemic examination. Data collected included demographics, clinical manifestations, details of the ocular and systemic examination, management details, final fundus anatomy and visual acuity (VA). RESULTS: The most common etiologies were non-traumatic VH (64.89%), most often due to retinopathy of prematurity (ROP; 37.10%), while traffic accidents, involving 16 (21.00%) eyes, were the most common ocular trauma. Surgery, performed in 143 (54.58%) eyes, was the most common management modality. The mean baseline visual acuity in children and adolescents with traumatic VH was 2.77±0.21 logarithm of the minimal angle of resolution (logMAR), which improved significantly to 2.15±1.31 logMAR (P<0.05). CONCLUSION: VH in children and adolescents has a complicated and diverse etiology. ROP is the primary cause of non-traumatic VH, which is the most common etiology. Appropriate treatment of traumatic VH is associated with obvious improvement in visual acuity. The initial VA is one of the most important predictors of outcome.
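For readers unfamiliar with the scale: logMAR is the base-10 logarithm of the minimum angle of resolution, so decimal acuity is 10^(-logMAR) and higher logMAR means worse vision. A quick conversion of the means reported above (a generic formula, not taken from the paper):

```python
def logmar_to_decimal(logmar):
    """Decimal acuity from logMAR; 0.0 logMAR corresponds to 1.0 (20/20)."""
    return 10 ** (-logmar)

for lm in (2.77, 2.15):  # baseline and post-treatment group means from the abstract
    print(f"{lm} logMAR -> {logmar_to_decimal(lm):.4f} decimal acuity")
```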
34

Belz, Steven M., Gary S. Robinson, and John G. Casali. "Auditory Icons as Impending Collision System Warning Signals in Commercial Motor Vehicles." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 42, no. 15 (October 1998): 1127–31. http://dx.doi.org/10.1177/154193129804201515.

Abstract:
This simulator-based study examined the use of conventional auditory warnings (tonal, non-verbal sounds) and auditory icons (representational, non-verbal sounds), alone and in combination with a dash-mounted visual display to warn commercial motor vehicle operators of impending front-to-rear and side collision situations. Driver performance was measured in the simulated driving task via brake response time in the front-to-rear collision scenarios and via a count of accident occurrence in the side collision scenarios. For both front-to-rear and side collision scenarios, auditory icons elicited significantly improved driver performance over conventional auditory warnings. Driver performance improved when collision warning information was presented through multiple modalities.
35

Wandtner, Bernhard, Nadja Schömig, and Gerald Schmidt. "Effects of Non-Driving Related Task Modalities on Takeover Performance in Highly Automated Driving." Human Factors: The Journal of the Human Factors and Ergonomics Society 60, no. 6 (April 4, 2018): 870–81. http://dx.doi.org/10.1177/0018720818768199.

Abstract:
Objective: The aim of the study was to evaluate the impact of different non-driving related tasks (NDR tasks) on takeover performance in highly automated driving. Background: During highly automated driving, it is permitted to engage in NDR tasks temporarily. However, drivers must be able to take over control when reaching a system limit. There is evidence that the type of NDR task has an impact on takeover performance, but little is known about the specific task characteristics that account for performance decrements. Method: Thirty participants drove in a simulator using a highly automated driving system. Each participant faced five critical takeover situations. Based on assumptions of Wickens’s multiple resource theory, stimulus and response modalities of a prototypical NDR task were systematically manipulated. Additionally, in one experimental group, the task was locked out simultaneously with the takeover request. Results: Task modalities had significant effects on several measures of takeover performance. A visual-manual texting task degraded performance the most, particularly when performed handheld. In contrast, takeover performance with an auditory-vocal task was comparable to a baseline without any task. Task lockout was associated with faster hands-on-wheel times but not with altered brake response times. Conclusion: Results showed that NDR task modalities are relevant factors for takeover performance. An NDR task lockout was highly accepted by the drivers and showed moderate benefits for the first takeover reaction. Application: Knowledge about the impact of NDR task characteristics is an enabler for adaptive takeover concepts. In addition, it might help regulators to make decisions on allowed NDR tasks during automated driving.
36

McClanahan, Bill, and Nigel South. "‘All Knowledge Begins with the Senses’: Towards a Sensory Criminology." British Journal of Criminology 60, no. 1 (August 6, 2019): 3–23. http://dx.doi.org/10.1093/bjc/azz052.

Abstract:
Visual criminology has established itself as a site of criminological innovation. Its ascendance, though, highlights ways in which the ‘ocularcentrism’ of the social sciences is reproduced in criminology. We respond, arguing for attention to the totality of sensorial modalities. Outlining the possible contours of a criminology concerned with smell, taste, sound and touch, along with the visual, the paper describes moments in which the sensory intersects with various phenomena of crime, harm, justice and power. Noting the primacy of the sensorial in understanding environmental harm, we describe an explicitly sensory green criminology while also suggesting the ways that heightened criminological attention to the non-visual senses might uncover new sites and modes of knowledge and a more richly affective criminology.
APA, Harvard, Vancouver, ISO, and other styles
37

Grewal, Manjot K., Shruti Chandra, Sarega Gurudas, Alan Bird, Glen Jeffery, and Sobha Sivaprasad. "Exploratory Study on Visual Acuity and Patient-Perceived Visual Function in Patients with Subretinal Drusenoid Deposits." Journal of Clinical Medicine 9, no. 9 (September 1, 2020): 2832. http://dx.doi.org/10.3390/jcm9092832.

Full text
Abstract:
Purpose: To investigate the value of visual acuity and patient-perceived visual function test when subretinal drusenoid deposits (SDD) are incorporated into the classification of age-related macular degeneration (AMD). A total of 50 participants were recruited into the study in these groups: healthy ageing (n = 11), intermediate AMD (iAMD) with no SDD (n = 17), iAMD with SDD (n = 11) and non-foveal atrophic AMD (n = 11) confirmed by two retinal imaging modalities. Best-corrected visual acuity (BCVA) and low luminance visual acuity (LLVA) were measured and low luminance deficit (LLD) was calculated. Participants were also interviewed with the low luminance questionnaire (LLQ). Linear regression was used to assess function–function relations. Compared with healthy participants, BCVA and LLVA scores were significantly reduced in the atrophic AMD group (p < 0.0001 and p = 0.00016, respectively) and in patients with SDD (p = 0.028 and p = 0.045, respectively). Participants with atrophy also had reduced BCVA (p = 0.001) and LLVA (p = 0.009) compared with the iAMD no SDD group. However, there were no differences in visual function tests between healthy aging and iAMD without SDD and between iAMD with SDD and atrophic AMD groups. The LLD score did not differ between groups. BCVA and LLVA correlated well. The LLQ did not correlate with visual function tests. This study shows that LLD is not a marker of disease severity as assessed clinically. Although LLQ is a good marker for disease severity using the current AMD classification, it does not differentiate between eyes with and without SDD. Eyes with non-macular geographic atrophy and SDD had lower function than eyes with no SDD and healthy controls.
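For orientation, the low luminance deficit (LLD) referred to above is conventionally computed as the difference between the two acuity measurements; a minimal formulation, assuming both acuities are expressed on the same letter-score scale (the abstract does not specify the units):

```latex
\mathrm{LLD} = \mathrm{BCVA} - \mathrm{LLVA}
```

A larger LLD indicates a steeper drop in visual function under dim illumination.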
APA, Harvard, Vancouver, ISO, and other styles
38

Auton, Jaime C., Mark W. Wiggins, and Daniel Sturman. "The Relationship Between Visual and Auditory Cue Utilization." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 63, no. 1 (November 2019): 843–47. http://dx.doi.org/10.1177/1071181319631316.

Full text
Abstract:
Within high-risk operational environments, expertise has typically been associated with a greater capacity to extract and utilize task-relevant visual cues during situation assessment. However, a limitation of this literature is its exclusive focus on operators’ use of visual cues, even though cues from other modalities (such as auditory cues) are frequently engaged during this assessment process. Arguably, if the capacity for cue utilization is an underlying skill, those operators who have a greater capacity to use visual cues would also have developed a more nuanced repertoire of non-visual cues. Within the context of electricity distribution control, the current study recruited network operators (N = 89) from twelve Australian Distributed Network Service Providers. Using an online experimental platform, participants’ visual cue utilization was assessed using a situational judgement test (EXPERTise 2.0). Participants also completed the Auditory Readback Task, which assessed their capacity to utilize various auditory cues (namely, final rising intonation, fillers, and readback accuracy) when recognising non-understandings. The results showed a partial relationship between operator capacity for visual and auditory cue utilization. The outcomes of the current research have practical implications for the design of cue-based training interventions to increase the recognition of communication-related errors within distributed environments.
APA, Harvard, Vancouver, ISO, and other styles
39

Brunner, Rebecca M., and Juan M. Guayasamin. "Nocturnal visual displays and call description of the cascade specialist glassfrog Sachatamia orejuela." Behaviour 157, no. 14-15 (November 12, 2020): 1257–68. http://dx.doi.org/10.1163/1568539x-bja10048.

Full text
Abstract:
Although most male frogs call to attract females, vocalizations alone can be ineffective long-range signals in certain environments. To increase conspicuousness and counter the background noise generated by rushing water, a few frog species around the world have evolved visual communication modalities in addition to advertisement calls. These species belong to different families on different continents: a clear example of behavioural convergent evolution. Until now, long-distance visual signalling has not been recorded for any species in the glassfrog family (Centrolenidae). Sachatamia orejuela, an exceptionally camouflaged glassfrog species found within the spray zone of waterfalls, has remained poorly studied. Here, we document its advertisement call for the first time — the frequency of which is higher than perhaps any other glassfrog species, likely an evolutionary response to its disruptive acoustic space — as well as a sequence of non-antagonistic visual signals (foot-flagging, hand-waving, and head-bobbing) that we observed at night.
APA, Harvard, Vancouver, ISO, and other styles
40

Thelen, Antonia, and Micah M. Murray. "Heterogeneous auditory–visual integration: Effects of pitch, band-width and visual eccentricity." Seeing and Perceiving 25 (2012): 89. http://dx.doi.org/10.1163/187847612x647081.

Full text
Abstract:
The identification of monosynaptic connections between primary cortices in non-human primates has recently been complemented by observations of early-latency and low-level non-linear interactions in brain responses in humans, as well as observations of facilitative effects of multisensory stimuli on behavior/performance in both humans and monkeys. While there is some evidence in favor of causal links between early-latency interactions within low-level cortices and behavioral facilitation, it remains unknown if such effects are subserved by direct anatomical connections between primary cortices. In non-human primates, the above monosynaptic projections from primary auditory cortex terminate within peripheral visual field representations within primary visual cortex, suggestive of there being a potential bias for the integration of eccentric visual stimuli and pure tone (vs. broad-band) sounds. To date, behavioral effects in humans (and monkeys) have been observed after presenting (para)foveal stimuli with any of a range of auditory stimuli from pure tones to noise bursts. The present study aimed to identify any heterogeneity in the integration of auditory–visual stimuli. To this end, we employed a 3 × 3 within-subject design that varied the visual eccentricity of an annulus (2.5°, 5.7°, 8.9°) and the auditory pitch (250, 1000, 4000 Hz) of multisensory stimuli while subjects completed a simple detection task. We also varied the auditory bandwidth (pure tone vs. pink noise) across the blocks of trials that a subject completed. To ensure attention to both modalities, multisensory stimuli were equi-probable with both unisensory visual and unisensory auditory trials that themselves varied along the abovementioned dimensions. Median reaction times for each stimulus condition as well as the percentage gain/loss of each multisensory condition vs. the best constituent unisensory condition were measured. The preliminary results reveal that multisensory interactions (as measured from simple reaction times) are indeed heterogeneous across the tested dimensions and may provide a means for delimiting the anatomo-functional substrates of behaviorally relevant early-latency neural response interactions. Interestingly, preliminary results suggest selective interactions for visual stimuli when presented with broadband stimuli but not when presented with pure tones. More precisely, centrally presented visual stimuli show the greatest index of multisensory facilitation when coupled to a high pitch tone embedded in pink noise, while visual stimuli presented at approximately 5.7° of visual angle show the greatest slowing of reaction times.
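As a rough illustration of the facilitation index described above, and not code from the study itself, the percentage gain of a multisensory condition over its best constituent unisensory condition can be computed from median reaction times as follows; all reaction-time values are invented:

```python
import statistics

def multisensory_gain(rt_av, rt_a, rt_v):
    """Percentage gain of the audio-visual condition over the best
    (fastest) constituent unisensory condition, using median RTs."""
    best_unisensory = min(statistics.median(rt_a), statistics.median(rt_v))
    rt_multisensory = statistics.median(rt_av)
    # Positive values indicate multisensory facilitation (faster responses),
    # negative values a slowing relative to the best unisensory condition.
    return 100.0 * (best_unisensory - rt_multisensory) / best_unisensory

# Invented reaction times (ms) for one eccentricity-by-pitch condition:
print(multisensory_gain(rt_av=[310, 295, 305, 300],
                        rt_a=[350, 360, 340, 355],
                        rt_v=[330, 345, 335, 340]))
```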
APA, Harvard, Vancouver, ISO, and other styles
41

Duflot, Lesley-Ann, Rafael Reisenhofer, Brahim Tamadazte, Nicolas Andreff, and Alexandre Krupa. "Wavelet and shearlet-based image representations for visual servoing." International Journal of Robotics Research 38, no. 4 (May 8, 2018): 422–50. http://dx.doi.org/10.1177/0278364918769739.

Full text
Abstract:
A visual servoing scheme consists of a closed-loop control approach that uses visual information feedback to control the motion of a robotic system. Probably the most popular visual servoing method is image-based visual servoing (IBVS). This kind of method uses geometric visual features extracted from the image to design the control law. However, extracting, matching, and tracking geometric visual features over time significantly limits the versatility of visual servoing controllers in various industrial and medical applications, in particular for “low-structured” medical images, e.g., ultrasound and optical coherence tomography modalities. To overcome the limits of conventional IBVS, one can consider novel visual servoing paradigms known as “direct” or “featureless” approaches. This paper deals with the development of a new generation of direct visual servoing methods in which the signal control inputs are the coefficients of a multiscale image representation. In particular, we consider the use of multiscale image representations that are based on discrete wavelet and shearlet transforms. Up to now, one of the main obstacles to the investigation of multiscale image representations for visual servoing schemes was the issue of obtaining an analytical formulation of the interaction matrix that links the variation of wavelet and shearlet coefficients to the spatial velocity of the camera and the robot. In this paper, we derive four direct visual servoing controllers: two based on subsampled and non-subsampled wavelet coefficients, respectively, and two based on the coefficients of subsampled and non-subsampled discrete shearlet transforms, respectively. All proposed controllers were tested in both simulation and experimental scenarios (using a six-degree-of-freedom Cartesian robot in an eye-in-hand configuration). The objective of this paper is to provide an analysis of the respective strengths and weaknesses of wavelet- and shearlet-based visual servoing controllers.
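To make the “direct” paradigm concrete, the sketch below illustrates the idea under stated assumptions; it is not the authors' implementation, and the analytical interaction matrix that is the paper's contribution is replaced here by a crude finite-difference estimate. The `render(pose)` function is hypothetical, standing in for whatever produces a camera image at a given 6-DOF pose:

```python
import numpy as np
import pywt

def wavelet_features(image, wavelet="db2", level=2):
    """Stack all 2-D discrete wavelet coefficients of an image into one
    feature vector s, the control signal of the direct scheme."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    arr, _ = pywt.coeffs_to_array(coeffs)  # single array of all coefficients
    return arr.ravel()

def estimate_interaction_matrix(render, pose, eps=1e-4):
    """Finite-difference estimate of L = ds/dr at the current pose.
    `render(pose)` is a hypothetical image-formation function; the paper
    instead derives L analytically for wavelet and shearlet coefficients."""
    s0 = wavelet_features(render(pose))
    L = np.zeros((s0.size, pose.size))
    for i in range(pose.size):
        dp = np.zeros_like(pose)
        dp[i] = eps
        L[:, i] = (wavelet_features(render(pose + dp)) - s0) / eps
    return L

def servo_velocity(s, s_star, L, gain=0.5):
    """Classical velocity law applied to coefficient features:
    v = -lambda * pinv(L) @ (s - s*)."""
    return -gain * np.linalg.pinv(L) @ (s - s_star)
```

Iterating `servo_velocity` drives the coefficient vector s towards the desired s*, and hence the camera towards the desired pose, without any feature extraction or matching.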
APA, Harvard, Vancouver, ISO, and other styles
42

Jongbloed, Lyn E., John B. Collins, and Wayne Jones. "A Sensorimotor Integration Test Battery for CVA Clients: Preliminary Evidence of Reliability and Validity." Occupational Therapy Journal of Research 6, no. 3 (May 1986): 131–50. http://dx.doi.org/10.1177/153944928600600301.

Full text
Abstract:
A newly developed Sensorimotor Integration Test Battery (SMITB) for assessing sensorimotor integration deficits in cerebral vascular accident (CVA) clients consists of 16 scales adapted from Ranka and Chapparo. Alpha reliabilities obtained from 84 CVA clients ranged from fair (.45) to strong (.95) but averaged .82 for all tests combined. The scales were further examined under four definitions of validity: discriminant validity, construct (factorial) validity, clinical validity, and robustness against biases due to sex or age. The Hooper Test of Visual Organization, Finger Maze, and Finger Identification Tests discriminated among locations of various cerebral vascular insults; clinical validity measures were strongest for the Symbol Digit Modalities Test, Visual Attention Test, Motor Accuracy, and Imitation of Non-Habitual Postures. The test results showed little bias in terms of clients' sex or age. Four major factors were identified among the battery's individual tests: Sensorimotor Integration, Visual Processing, Tactile Discrimination, and a diagnosis factor that includes the three standardized tests of visual/spatial organization. The SMITB clearly extracts more information during its 90-minute administration setting than clinicians and therapists currently use.
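The “alpha reliabilities” reported above are presumably Cronbach's alpha; for readers unfamiliar with it, a minimal computation over an invented clients-by-items score matrix:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (clients x items) matrix of scale scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                         # number of items/scales
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Invented scores for 5 clients on 4 items:
print(cronbach_alpha([[3, 4, 3, 5],
                      [2, 2, 3, 2],
                      [5, 5, 4, 5],
                      [1, 2, 2, 1],
                      [4, 4, 5, 4]]))
```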
APA, Harvard, Vancouver, ISO, and other styles
43

Islam, Md Jahidul, MM Jalal Uddin, Md Shahadat Hossain, Md Ruhul Amin, Md Moshiur Rahman, Md Mahmudur Rahman Siddiqui, and AKM Salek. "Effect of intraarticular steroid injection in addition to physical modalities in osteoarthritis knee." Journal of Dhaka Medical College 23, no. 1 (March 26, 2015): 48–54. http://dx.doi.org/10.3329/jdmc.v23i1.22694.

Full text
Abstract:
Context: Osteoarthritis (OA) is the most common form of arthritis, accounting for about 30% of general physician visits. Intraarticular (IA) corticosteroid injections have been used for decades in clinical practice for pain relief and control of local inflammation in OA. In the present study, a long-acting intra-articular injection was combined with physical modalities for OA knee to assess the functional improvement and clinical outcome of the patient. Methods: It was a prospective interventional non-randomized clinical study conducted in the Department of Physical Medicine & Rehabilitation, Bangabandhu Sheikh Mujib Medical University (BSMMU), Dhaka, from October 2011 to March 2012. Fifty-four patients between 35 and 75 years, without consideration of gender, with a history of at least three months of knee pain and radiographic confirmation of primary osteoarthritis were selected purposively. They were then divided randomly into groups A and B, with 27 patients in each group. Group A received an NSAID (non-steroidal anti-inflammatory drug), i.e. aceclofenac 100 mg twice daily for 10 days + omeprazole 20 mg twice daily for 10 days + MWD (microwave diathermy) 20 minutes for 14 days + therapeutic exercise + ADL (activities of daily living), while group B received one 80 mg intraarticular triamcinolone acetonide injection followed by NSAID, i.e. aceclofenac 100 mg twice daily for 10 days + omeprazole 20 mg twice daily for 10 days + MWD 20 minutes for 14 days + therapeutic exercise + ADL. In both groups the patients were observed for six weeks. Results: The mean age of patients in groups A and B was 52.33±9.62 years and 52.29±9.67 years respectively. In group A, 9 (33.3%) were male and 18 (66.7%) were female. In group B, 10 (37.0%) were male and 17 (63.0%) were female. Mean pre-treatment visual analogue scale (VAS) scores in groups A and B were 6.22±1.60 and 7.15±1.56 respectively. Mean pre-treatment range of motion (ROM) in groups A and B was 117.33±13.05 and 112.37±19.01 respectively. Mean pre-treatment time taken to walk 50 feet in groups A and B was 18.22±2.39 and 18.81±2.13 minutes respectively. Mean Western Ontario and McMaster Universities (WOMAC) index in groups A and B was 60.85±15.86 and 67.33±16.33 respectively. After treatment, in both groups the visual analogue scale (VAS) score, the time taken to walk 50 feet and the Western Ontario and McMaster Universities (WOMAC) index gradually decreased, and the range of motion (ROM) gradually increased; these changes were statistically significant. However, the study was conducted with a small sample size in a single centre in Dhaka city, which may not be representative of the whole country.
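The abstract does not name the significance test used for the pre/post comparisons; one common choice for paired ordinal scores such as the VAS is the Wilcoxon signed-rank test. A minimal sketch with invented scores, purely to illustrate the analysis style:

```python
from scipy.stats import wilcoxon

# Invented pre- and post-treatment VAS scores for the same ten patients:
vas_pre = [7, 6, 8, 7, 6, 7, 8, 6, 7, 5]
vas_post = [4, 3, 5, 4, 4, 3, 5, 4, 4, 3]

stat, p = wilcoxon(vas_pre, vas_post)  # paired signed-rank test
print(f"W = {stat}, p = {p:.4f}")
```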
APA, Harvard, Vancouver, ISO, and other styles
44

Wenzel, A., E. H. Verdonschot, G. J. Truin, and K. G. Konig. "Accuracy of Visual Inspection, Fiber-optic Transillumination, and Various Radiographic Image Modalities for the Detection of Occlusal Caries in Extracted Non-cavitated Teeth." Journal of Dental Research 71, no. 12 (December 1992): 1934–37. http://dx.doi.org/10.1177/00220345920710121501.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Dmitrieva, E. S., and V. Ya Gelman. "Perception of Auditory and Visual Emotional Information in Primary School Age Children and its Impact on Their Academic Progress." Психологическая наука и образование 23, no. 5 (2018): 29–39. http://dx.doi.org/10.17759/pse.2018230504.

Full text
Abstract:
This work explored the connection between the characteristics of perception of non-verbal emotional information in two modalities of presentation, visual and auditory, and indicators of school achievement in 32 schoolchildren aged 8–9 years. We studied how the children recognised four basic emotions ("joy", "sadness", "anger", "fear") in facial expressions and intonation of speech. The characteristics of their perceptions were compared with their academic achievements in three school disciplines: Russian language, reading and mathematics. It is shown that there is a clear correlation between the child's school progress and acoustic perception of emotions, while no connection with visual perception was found. It was revealed that the features of the relationship between the effectiveness of perception of emotions and school performance differed in boys and girls and also depended on the specific school subject and the type of emotion. Unlike girls, boys showed an improvement in academic performance when the accuracy of their emotion recognition increased. There was no evidence of a link between successful learning and the preferred type of perception of emotional information (acoustic or visual) in primary school children.
APA, Harvard, Vancouver, ISO, and other styles
46

Aagten-Murphy, David, Giulia Cappagli, and David Burr. "Musical training generalises across modalities and reveals efficient and adaptive mechanisms for judging temporal intervals." Seeing and Perceiving 25 (2012): 13. http://dx.doi.org/10.1163/187847612x646361.

Full text
Abstract:
Expert musicians are able to accurately and consistently time their actions during a musical performance. We investigated how musical expertise influences the ability to reproduce auditory intervals and how this generalises to vision in a ‘ready-set-go’ paradigm. Subjects reproduced time intervals drawn from distributions varying in total length (176, 352 or 704 ms) or in the number of discrete intervals within the total length (3, 5, 11 or 21 discrete intervals). Overall, musicians performed more veridically than non-musicians, and all subjects reproduced auditory-defined intervals more accurately than visually-defined intervals. However, non-musicians, particularly with visual intervals, consistently exhibited a substantial and systematic regression towards the mean of the interval. When subjects judged intervals from distributions of longer total length they tended to exhibit more regression towards the mean, while the ability to discriminate between discrete intervals within the distribution had little influence on subject error. These results are consistent with a Bayesian model which minimizes reproduction errors by incorporating a central tendency prior weighted by the subject's own temporal precision relative to the current interval distribution (Cicchini et al., 2012; Jazayeri and Shadlen, 2010). Finally, a strong correlation was observed between the duration of formal musical training and total reproduction error in both modalities (accounting for 30% of the variance). Taken together, these results demonstrate that formal musical training improves temporal reproduction, and that this improvement transfers from audition to vision. They further demonstrate the flexibility of sensorimotor mechanisms in adapting to different task conditions to minimise temporal estimation errors.
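The Bayesian account invoked above can be written compactly. Under Gaussian assumptions, the optimal reproduction is a precision-weighted average of the noisy measurement and the mean of the prior over sampled intervals (notation ours, not the authors'):

```latex
\hat{t} = w\,\mu_{\text{prior}} + (1 - w)\,t_m,
\qquad
w = \frac{\sigma_m^2}{\sigma_m^2 + \sigma_{\text{prior}}^2}
```

Here t_m is the measured interval with variance σ_m², and μ_prior, σ_prior² describe the distribution of presented intervals. Noisier measurements (larger σ_m²) increase w and hence the regression towards the mean, which matches the stronger central tendency seen in non-musicians and in the visual modality.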
APA, Harvard, Vancouver, ISO, and other styles
47

Stecklow, Marcus Vinicius, Antonio Fernando Catelli Infantosi, and Maurício Cagy. "EEG changes during sequences of visual and kinesthetic motor imagery." Arquivos de Neuro-Psiquiatria 68, no. 4 (August 2010): 556–61. http://dx.doi.org/10.1590/s0004-282x2010000400015.

Full text
Abstract:
The evoked cerebral electric response when sequences of a complex motor imagery (MI) task are executed several times is still unclear. This work aims at investigating the existence of habituation in the cortical response, more specifically in the alpha band peak of parietal and occipital areas (10-20 international system electroencephalogram, EEG, protocol). The EEG signals were acquired during sequences of MI of the volleyball spike movement in kinesthetic and visual modalities and also at a control condition. Thirty right-handed male subjects (18 to 40 years) were assigned to either an 'athlete' or a 'non-athlete' group, both containing 15 volunteers. Paired Wilcoxon tests (with α=0.05) indicate that sequential MI of complex tasks promotes cortical changes, mainly in power in the vicinity of the alpha peak. This finding is more pronounced across the initial trials and also for the athletes during the kinesthetic motor imagery modality.
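As an illustration of the dependent measure, not of the study's actual pipeline, the alpha-band peak of a single parietal or occipital EEG channel can be located from a Welch power spectrum; the sampling rate and band limits below are assumptions:

```python
import numpy as np
from scipy.signal import welch

def alpha_peak(eeg, fs=250.0, band=(8.0, 13.0)):
    """Return (peak frequency, peak power) within the alpha band
    for one EEG channel, using a Welch power spectral density."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))  # 2-second windows
    mask = (freqs >= band[0]) & (freqs <= band[1])
    i = np.argmax(psd[mask])
    return freqs[mask][i], psd[mask][i]

# Synthetic demo: a 10 Hz rhythm buried in noise.
fs = 250.0
t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + np.random.randn(t.size)
print(alpha_peak(eeg, fs))
```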
APA, Harvard, Vancouver, ISO, and other styles
48

Schlegel, Peter, Sebastian Steinfartz, and Boris Bulog. "Non-visual sensory physiology and magnetic orientation in the Blind Cave Salamander, Proteus anguinus (and some other cave-dwelling urodele species). Review and new results on light-sensitivity and non-visual orientation in subterranean urodeles (Amphibia)." Animal Biology 59, no. 3 (2009): 351–84. http://dx.doi.org/10.1163/157075609x454971.

Full text
Abstract:
A review is given of several sensory systems that enable troglophile and troglobian urodele species to orient non-visually in their extreme hypogean habitat. A new sense was discovered allowing the animals to orient according to the Earth's magnetic field, which could serve as a basic and always available reference for general spatial orientation. Moreover, working with permanent magnetic field stimuli offers a very sensitive experimental method to discover the urodeles' thresholds for other sensory modalities such as light, sounds, and other stimuli, perhaps in competition or combination with the magnetic one. Proteus' audition as underwater hearing and light sensitivity due to its partly remaining sensory cells and/or skin sensitivity were studied. Excellent underwater hearing abilities had been demonstrated for Proteus with an acoustic behavioural method. The ability of sound pressure registration in Proteus is supposed to be attained by the tight anatomical junction between the ceiling of the oral cavity and the oval window. More generally, all non-visual sensory capabilities may facilitate certain behavioral strategies, compensating for missing visual orientation. Troglobians are more likely than others to possess and regularly use the sensorial opportunities of a magnetic sense for spatial orientation. Compared to their epigean relatives, cave animals may have retained phylogenetically older sensorial properties, transformed or improved them, or finally acquired new ones which enabled them to successfully survive in dark habitats. Neighbouring populations living on the surface did not necessarily take advantage of the highly evolved sensory systems and orientation strategies of the troglobian species and may have lost them. For example, Desmognathus ochrophaeus is partly adapted to cave life and exhibits good magnetic sensitivity, whereas D. monticula and D. quadrimaculatus are epigean and, although living in rather dark places, did not demonstrate magnetic sensitivity when tested with our method.
APA, Harvard, Vancouver, ISO, and other styles
49

Li, W., T. M. Lai, C. Bohon, S. K. Loo, D. McCurdy, M. Strober, S. Bookheimer, and J. Feusner. "Anorexia nervosa and body dysmorphic disorder are associated with abnormalities in processing visual information." Psychological Medicine 45, no. 10 (February 5, 2015): 2111–22. http://dx.doi.org/10.1017/s0033291715000045.

Full text
Abstract:
Background: Anorexia nervosa (AN) and body dysmorphic disorder (BDD) are characterized by distorted body image and are frequently co-morbid with each other, although their relationship remains little studied. While there is evidence of abnormalities in visual and visuospatial processing in both disorders, no study has directly compared the two. We used two complementary modalities – event-related potentials (ERPs) and functional magnetic resonance imaging (fMRI) – to test for abnormal activity associated with early visual signaling. Method: We acquired fMRI and ERP data in separate sessions from 15 unmedicated individuals in each of three groups (weight-restored AN, BDD, and healthy controls) while they viewed images of faces and houses of different spatial frequencies. We used joint independent component analyses to compare activity in visual systems. Results: AN and BDD groups demonstrated similar hypoactivity in early secondary visual processing regions and the dorsal visual stream when viewing low spatial frequency faces, linked to the N170 component, as well as in early secondary visual processing regions when viewing low spatial frequency houses, linked to the P100 component. Additionally, the BDD group exhibited hyperactivity in fusiform cortex when viewing high spatial frequency houses, linked to the N170 component. Greater activity in this component was associated with lower attractiveness ratings of faces. Conclusions: Results provide preliminary evidence of similar abnormal spatiotemporal activation in AN and BDD for configural/holistic information for appearance- and non-appearance-related stimuli. This suggests a common phenotype of abnormal early visual system functioning, which may contribute to perceptual distortions.
APA, Harvard, Vancouver, ISO, and other styles
50

Araya-Salas, Marcelo, Grace Smith-Vidaurre, Daniel J. Mennill, Paulina L. González-Gómez, James Cahill, and Timothy F. Wright. "Social group signatures in hummingbird displays provide evidence of co-occurrence of vocal and visual learning." Proceedings of the Royal Society B: Biological Sciences 286, no. 1903 (May 29, 2019): 20190666. http://dx.doi.org/10.1098/rspb.2019.0666.

Full text
Abstract:
Vocal learning, in which animals modify their vocalizations based on social experience, has evolved in several lineages of mammals and birds, including humans. Despite much attention, the question of how this key cognitive trait has evolved remains unanswered. The motor theory for the origin of vocal learning posits that neural centres specialized for vocal learning arose from adjacent areas in the brain devoted to general motor learning. One prediction of this hypothesis is that visual displays that rely on complex motor patterns may also be learned in taxa with vocal learning. While learning of both spoken and gestural languages is well documented in humans, the occurrence of learned visual displays has rarely been examined in non-human animals. We tested for geographical variation consistent with learning of visual displays in long-billed hermits (Phaethornis longirostris), a lek-mating hummingbird that, like humans, has both learned vocalizations and elaborate visual displays. We found lek-level signatures in both vocal parameters and visual display features, including display element proportions, sequence syntax and fine-scale parameters of elements. This variation was not associated with genetic differentiation between leks. In the absence of genetic differences, geographical variation in vocal signals at small scales is most parsimoniously attributed to learning, suggesting a significant role of social learning in visual display ontogeny. The co-occurrence of learning in vocal and visual displays would be consistent with a parallel evolution of these two signal modalities in this species.
APA, Harvard, Vancouver, ISO, and other styles