Dissertations / Theses on the topic 'Auditory spatial perception'


Consult the top 22 dissertations / theses for your research on the topic 'Auditory spatial perception.'


1

Keating, Peter. "Plasticity and integration of auditory spatial cues." Thesis, University of Oxford, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.561113.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Although there is extensive evidence that auditory spatial processing can adapt to changes in auditory spatial cues both in infancy and adulthood, the mechanisms underlying adaptation appear to differ across species. Whereas barn owls compensate for unilateral hearing loss throughout development by learning abnormal mappings between cue values and spatial position, adult mammals seem to adapt by ignoring the acoustical input available to the affected ear and learning to rely more on unaltered spatial cues. To investigate these differences further, ferrets were raised with a unilateral earplug and their ability to localize sounds was assessed. Although these animals did not fully compensate for the effects of an earplug, they performed considerably better than animals that experienced an earplug for the first time, indicating that adaptation had taken place. We subsequently found that juvenile-plugged (JP) ferrets learned to adjust both cue mappings and weights in response to changes in acoustical input, with the nature of these changes reflecting the expected reliability of different cues. Thus, the auditory system may be able to rapidly update the way in which individual cues are processed, as well as the way in which different cues are integrated, thereby enabling spatial cues to be processed in a context-specific way. In attempting to understand the mechanisms that guide plasticity of spatial hearing, previous studies have raised the possibility that changes in auditory spatial processing may be driven by mechanisms intrinsic to the auditory system. To address this possibility directly, we measured the sensitivity of human subjects to ITDs and ILDs following transient misalignment of these cues. We found that this induces a short-term recalibration that acts to compensate for the effects of cue misalignment. These changes occurred in the absence of error feedback, suggesting that mutual recalibration can occur between auditory spatial cues.
The nature of these changes, however, was consistent with models of cue integration, suggesting that plasticity and integration may be inextricably linked. Throughout the course of this work, it became clear that future investigations would benefit from the application of closed-field techniques to the ferret. For this reason, we developed and validated methods that enable stimuli to be presented to ferrets over earphones, and used these methods to assess ITD and ILD sensitivity in ferrets using a variety of different stimuli. We found that the Duplex theory is able to account for binaural spatial sensitivity in these animals, and that sensitivity is comparable with that found in humans, thereby confirming the ferret as an excellent model for understanding binaural spatial hearing.
2

Geeseman, Joseph W. "The influence of auditory cues on visual spatial perception." OpenSIUC, 2010. https://opensiuc.lib.siu.edu/theses/286.

Abstract:
Traditional psychophysical studies have been primarily unimodal experiments due to the ease with which a single sense can be isolated in a laboratory setting. This study, however, presents participants with auditory and visual stimuli to better understand the interaction of the two senses in visuospatial perception. Visual stimuli, presented as Gaussian-distributed blobs, moved laterally across a computer monitor to a central location and "bounced" back to their starting position. During this passage across the screen, a brief auditory "click" was presented via headphones. Participants were asked to respond to the bounce of the ball, and response latency was recorded. Response latency to the bounce position varied as a function of baseline (no sound) and the varying sound offset locations.
3

Griffiths, Shaaron S. "Spatial and temporal disparities in aurally aided visual search." Deakin University. School of Psychology, 2001. http://tux.lib.deakin.edu.au./adt-VDU/public/adt-VDU20061207.134032.

Abstract:
Research over the last decade has shown that auditorily cuing the location of visual targets reduces the time taken to locate and identify targets for both free-field and virtually presented sounds. The first study conducted for this thesis confirmed these findings over an extensive region of free-field space. However, the number of sound locations that are measured and stored in the data library of most 3-D audio spatial systems is limited, so that there is often a discrepancy in position between the cued and physical location of the target. Sampling limitations in the systems also produce temporal delays in which the stored data can be conveyed to operators. To investigate the effects of spatial and temporal disparities in audio cuing of visual search, and to provide evidence to alleviate concerns that psychological research lags behind the capabilities to design and implement synthetic interfaces, experiments were conducted to examine (a) the magnitude of spatial separation, and (b) the duration of temporal delay that intervened between auditory spatial cues and visual targets to alter response times to locate targets and discriminate their shape, relative to when the stimuli were spatially aligned, and temporally synchronised, respectively. Participants listened to free-field sound localisation cues that were presented with a single, highly visible target that could appear anywhere across 360° of azimuthal space on the vertical mid-line (spatial separation), or extended to 45° above and below the vertical mid-line (temporal delay). A vertical or horizontal spatial separation of 40° between the stimuli significantly increased response times, while separations of 30° or less did not reach significance. Response times were slowed at most target locations when auditory cues occurred 770 msecs prior to the appearance of targets, but not with similar durations of temporal delay (i.e., 440 msecs or less). 
When sounds followed the appearance of targets, the stimulus onset asynchrony that affected response times was dependent on target location, and ranged from 440 msecs at higher elevations and rearward of participants, to 1,100 msecs on the vertical mid-line. If targets appeared in the frontal field of view, no delay of acoustical stimulation affected performance. Finally, when conditions of spatial separation and temporal delay were combined, visual search times were degraded with a shorter stimulus onset asynchrony than when only the temporal relationship between the stimuli was varied, but responses to spatial separation were unaffected. The implications of the results for the development of synthetic audio spatial systems to aid visual search tasks were discussed.
4

Elias, Bartholomew. "Cross-modal facilitation of spatial frequency discriminations through auditory frequency cue presentations." Thesis, Georgia Institute of Technology, 1991. http://hdl.handle.net/1853/28611.

5

Best, Virginia Ann. "Spatial Hearing with Simultaneous Sound Sources: A Psychophysical Investigation." University of Sydney. Medicine, 2004. http://hdl.handle.net/2123/576.

Abstract:
This thesis provides an overview of work conducted to investigate human spatial hearing in situations involving multiple concurrent sound sources. Much is known about spatial hearing with single sound sources, including the acoustic cues to source location and the accuracy of localisation under different conditions. However, more recently interest has grown in the behaviour of listeners in more complex environments. Concurrent sound sources pose a particularly difficult problem for the auditory system, as their identities and locations must be extracted from a common set of sensory receptors and shared computational machinery. It is clear that humans have a rich perception of their auditory world, but just how concurrent sounds are processed, and how accurately, are issues that are poorly understood. This work attempts to fill a gap in our understanding by systematically examining spatial resolution with multiple sound sources. A series of psychophysical experiments was conducted on listeners with normal hearing to measure performance in spatial localisation and discrimination tasks involving more than one source. The general approach was to present sources that overlapped in both frequency and time in order to observe performance in the most challenging of situations. Furthermore, the role of two primary sets of location cues in concurrent source listening was probed by examining performance in different spatial dimensions. The binaural cues arise due to the separation of the two ears, and provide information about the lateral position of sound sources. The spectral cues result from location-dependent filtering by the head and pinnae, and allow vertical and front-rear auditory discrimination. Two sets of experiments are described that employed relatively simple broadband noise stimuli. In the first of these, two-point discrimination thresholds were measured using simultaneous noise bursts. 
It was found that the pair could be resolved only if a binaural difference was present; spectral cues did not appear to be sufficient. In the second set of experiments, the two stimuli were made distinguishable on the basis of their temporal envelopes, and the localisation of a designated target source was directly examined. Remarkably robust localisation was observed, despite the simultaneous masker, and both binaural and spectral cues appeared to be of use in this case. Small but persistent errors were observed, which in the lateral dimension represented a systematic shift away from the location of the masker. The errors can be explained by interference in the processing of the different location cues. Overall these experiments demonstrated that the spatial perception of concurrent sound sources is highly dependent on stimulus characteristics and configurations. This suggests that the underlying spatial representations are limited by the accuracy with which acoustic spatial cues can be extracted from a mixed signal. Three sets of experiments are then described that examined spatial performance with speech, a complex natural sound. The first measured how well speech is localised in isolation. This work demonstrated that speech contains high-frequency energy that is essential for accurate three-dimensional localisation. In the second set of experiments, spatial resolution for concurrent monosyllabic words was examined using similar approaches to those used for the concurrent noise experiments. It was found that resolution for concurrent speech stimuli was similar to resolution for concurrent noise stimuli. Importantly, listeners were limited in their ability to concurrently process the location-dependent spectral cues associated with two brief speech sources. In the final set of experiments, the role of spatial hearing was examined in a more relevant setting containing concurrent streams of sentence speech. 
It has long been known that binaural differences can aid segregation and enhance selective attention in such situations. The results presented here confirmed this finding and extended it to show that the spectral cues associated with different locations can also contribute. As a whole, this work provides an in-depth examination of spatial performance in concurrent source situations and delineates some of the limitations of this process. In general, spatial accuracy with concurrent sources is poorer than with single sound sources, as both binaural and spectral cues are subject to interference. Nonetheless, binaural cues are quite robust for representing concurrent source locations, and spectral cues can enhance spatial listening in many situations. The findings also highlight the intricate relationship that exists between spatial hearing, auditory object processing, and the allocation of attention in complex environments.
6

Jin, Craig T. "Spectral analysis and resolving spatial ambiguities in human sound localization." Connect to full text, 2001. http://hdl.handle.net/2123/1342.

Abstract:
Thesis (Ph. D.)--School of Electrical and Information Engineering, Faculty of Engineering, University of Sydney, 2001.
Title from title screen (viewed 13 January 2009). Submitted in fulfilment of the requirements for the degree of Doctor of Philosophy to the School of Electrical and Information Engineering, Faculty of Engineering. Includes bibliographical references. Also available in print form.
7

Nuckols, Richard. "Localization of Auditory Spatial Targets in Sighted and Blind Subjects." VCU Scholars Compass, 2013. http://scholarscompass.vcu.edu/etd/3286.

Abstract:
This research was designed to investigate the fundamental manner in which blind people use audible cues to attend to their surroundings. Knowledge of how blind people respond to external spatial stimuli is expected to assist in the development of better tools for helping people with visual disabilities navigate their environment. There was also interest in determining how blind people compare to sighted people in auditory localization tasks. The ability of sighted individuals, blindfolded individuals, and blind individuals to localize spatial auditory targets was assessed. An acoustic display board allowed the researcher to provide multiple sound presentations to the subjects. The subjects’ responses in localization tasks were measured using a combination of kinematic head tracking and eye tracking hardware. Data were collected and analyzed to determine the ability of the groups to localize spatial auditory targets. Significant differences were found among the three groups in spatial localization error and temporal patterns.
8

Euston, David Raymond. "From spectrum to space: the integration of frequency-specific intensity cues to produce auditory spatial receptive fields in the barn owl inferior colliculus." Eugene, Or.: University of Oregon Library System, 2000. http://libweb.uoregon.edu/UOTheses/2000/eustond00.pdf.

9

Euston, David Raymond. "From spectrum to space: the integration of frequency-specific intensity cues to produce auditory spatial receptive fields in the barn owl inferior colliculus." Thesis, University of Oregon, 2000. http://hdl.handle.net/1794/143.

Abstract:
Advisers: Terry Takahashi and Richard Marrocco. xiv, 152 p.
Neurons in the barn owl's inferior colliculus (IC) derive their spatial receptive fields (RF) from two auditory cues: interaural time difference (ITD) and interaural level difference (ILD). ITD serves to restrict a RF in azimuth but the precise role of ILD was, up to this point, unclear. Filtering by the ears and head ensures that each spatial location is associated with a unique combination of frequency-specific ILD values (i.e., an ILD spectrum). We isolated the effect of ILD spectra using virtual sound sources in which ITD was held fixed for all spatial locations while ILD spectra were allowed to vary normally. A cell's response to these stimuli reflects the contribution of ILD to spatial tuning, referred to as an “ILD-alone RF”. In a sample of 34 cells, individual ILD-alone RFs were distributed and amorphous, but consistently showed that the ILD spectrum is facilitatory at the cell's best location and inhibitory above and/or below. Prior results have suggested that an IC cell's spatial specificity is generated by summing inputs which are narrowly tuned to frequency and selective for both ILD and ITD. Based on this premise, we present a developmental model which, when trained solely on a cell's true spatial RF, reproduces both the cell's true RF and its ILD-alone RF. According to the model, the connectivity between a space-tuned IC cell and its frequency-specific inputs develops subject to two constraints: the cell must be excited by ILD spectra from the cell's best location and inhibited by spectra from locations above and below but along the vertical strip defined by the best ITD. To assess how frequency-specific inputs are integrated to form restricted spatial RFs, we measured the responses of 47 space-tuned IC cells to pure tones at varying ILDs and frequencies. ILD tuning varied with frequency. Further, pure-tone responses, summed according to the head-related filters, accounted for 56 percent of the variance in broadband ILD-alone RFs.
Modelling suggests that, with broadband sounds, cells behave as though they are linearly summing their inputs, but when tested with pure tones, non-linearities arise. This dissertation includes unpublished co-authored materials.
10

Cogné, Mélanie. "Influence de modulations sensorielles sur la navigation et la mémoire spatiale en réalité virtuelle : Processus cognitifs impliqués." Thesis, Bordeaux, 2017. http://www.theses.fr/2017BORD0704.

Abstract:
Moving toward a specific goal is a common activity of daily life. Varied cognitive abilities are involved in getting around, such as navigation, memory, and spatial orientation. Many patients with brain injury or a neurodegenerative disease present topographical difficulties that affect their autonomy in everyday life. Virtual reality tools make it possible to assess large-scale navigation and spatial memory, with good correlation between this assessment and one that would be carried out in a real environment. Virtual reality also allows stimuli to be added to the task at hand. These additional stimuli can be contextual, that is, related to the task to be performed in the virtual environment, or non-contextual, i.e. unrelated to the task. This thesis set out to evaluate the impact of auditory and visual stimuli on the navigation and spatial memory of patients with brain injury or a neurodegenerative disease, in virtual reality experiments. The first two parts of this thesis studied the effect of contextual or non-contextual auditory stimuli during a shopping task in the virtual supermarket VAP-S. The first part showed that contextual auditory stimuli, namely a sonar effect and the spoken name of the product, facilitated the spatial navigation of brain-injured patients engaged in this shopping task. The second part showed that non-contextual sounds with high cognitive or perceptual salience worsened the navigation performance of patients who had had a stroke. The next two parts of this thesis studied the effect of visual or auditory cueing in a spatial navigation task in a virtual district.
Thus, the third part of the thesis demonstrated that visual cues such as directional arrows or highlighted landmarks facilitated spatial navigation and some aspects of spatial memory in patients with mild cognitive impairment (MCI) or Alzheimer's disease. Finally, the fourth part showed that auditory cueing by beeps indicating the direction at each intersection improved the spatial navigation of right-brain-injured patients with contralateral visual and auditory neglect. These results suggest that auditory and visual stimuli could be used in rehabilitation programmes for patients with topographical difficulties, as well as in daily life through augmented reality, to facilitate their movements. The impact of the stimuli differs between healthy subjects and brain-injured patients, warranting a specific analysis of the probably distinct processes involved in cognitive deficits.
Navigating in a familiar or unfamiliar environment is a frequent challenge for human beings. Many patients with brain injury suffer from topographical difficulties, which affect their autonomy in daily life. Virtual reality tools enable the evaluation of large-scale spatial navigation and spatial memory, resembling a real environment. Virtual reality also makes it possible to add stimuli to the software. These stimuli can be contextual, that is to say linked to the task that participants have to accomplish in the virtual environment, or non-contextual, i.e. with no link to the required task. This thesis investigates whether visual or auditory stimuli influence spatial navigation and memory in virtual environments in patients with brain injury or a neurodegenerative disease. The first part of the thesis showed that contextual auditory stimuli, namely a sonar effect and the names of products on the shopping list, improved the spatial navigation of brain-injured patients during a shopping task in the virtual supermarket VAP-S. The second part of this thesis highlighted that non-contextual auditory stimuli with a high perceptual or cognitive salience decreased the spatial navigation performance of brain-injured patients during a shopping task in the VAP-S. The third part of this thesis showed that visual cues like directional arrows and salient landmarks improved spatial navigation and some aspects of spatial memory in patients with Alzheimer's disease or mild cognitive impairment during a navigation task in a virtual district. The last part of this thesis demonstrated that auditory cues, i.e. beeping sounds indicating the directions, improved spatial navigation in a virtual district for patients who had had a stroke with contra-lesional visual and auditory neglect. These results suggest that some visual and auditory stimuli could be helpful for spatial navigation and memory tasks in patients with brain injury or neurodegenerative disease.
It further offers new research avenues for neuro-rehabilitation, such as the use of augmented reality in real-life settings to support the navigational capabilities of these patients.
11

Best, Virginia Ann. "Spatial Hearing with Simultaneous Sound Sources: A Psychophysical Investigation." Thesis, The University of Sydney, 2004. http://hdl.handle.net/2123/576.

12

Fernández, Prieto Irene. "Development and neural bases of the spatial recoding of acoustic pitch." Doctoral thesis, Universitat de Barcelona, 2016. http://hdl.handle.net/10803/401340.

Abstract:
Our perceptual system tends to associate higher-pitched sounds with upper spatial positions and smaller visual sizes, and lower-pitched sounds with lower positions and bigger visual sizes. These processes, known as crossmodal correspondences or crossmodal associations, emerge consistently and almost universally. The cognitive, developmental, and neural mechanisms underlying these crossmodal correspondences were investigated in the present doctoral dissertation. In this work, I studied how crossmodal associations between auditory and visuospatial stimuli modulate cognitive processes in infancy and adulthood. In addition, I investigated impairments in auditory processing in populations with visuospatial disorders. The crossmodal associations between pitch and spatial elevation are robust in adults (see Spence, 2011), and can be observed in the ability of certain auditory stimuli to modulate visuospatial attention (e.g., rising sounds facilitate the detection of a visual stimulus in upper areas of external space). However, it seems that higher-pitched sounds or sounds with rising frequencies have stronger inherent spatial properties than lower-pitched or falling-frequency sounds. These spatial properties of sound could modulate the perceptual system and facilitate the detection of visual objects in upper positions of the visual field. We also found evidence of crossmodal correspondences between auditory and visuospatial dimensions in prelinguistic infants at the age of 6 months. We observed that 4-month-old infants have not yet developed the mechanisms that facilitate audiovisual associations. This pattern of results suggests that experience with the physical world and/or further maturation is needed to fully develop certain audiovisual crossmodal associations. Finally, in Study 3, I observed that patients with NLD, who show impairments in visuospatial skills, also show a deficit in pitch-related tasks.
This evidence suggests that auditory and visuospatial processes may share common mechanisms at some stages of perceptual processing; for example, an impairment in visuospatial skills could disrupt auditory processing.
Previous studies suggest the existence of crossmodal correspondences between pitch and visuospatial dimensions (for example, spatial elevation or visual size) (Gallace & Spence, 2006; Rusconi et al., 2006). Neuroimaging research has revealed activation of parietal areas (for example, the intraparietal sulcus, IPS) traditionally associated with spatial functions during pitch-related musical tasks (Foster & Zatorre, 2010a, 2010b). The aim of this doctoral thesis was to investigate the cognitive, developmental, and neural mechanisms underlying these crossmodal correspondences. The first part of the thesis studied the role of crossmodal correspondences in the modulation of attention. We explored how a specific dynamic sound (for example, a rising sound) exogenously modulates the spatial localisation of a visual stimulus in adults. Participants performed a task consisting of a modified version of the classic Posner paradigm, in which rising and falling sounds were used as auditory cues. The second part of the thesis analysed the development of the mechanisms involved in processing crossmodal correspondences. More specifically, it investigated whether these are present from very early in development or appear with increasing experience of the environment. To this end, the association between pitch and visual size was observed in 4- and 6-month-old infants using an audiovisual visual-preference paradigm.
Finally, in an attempt to establish the functional role of certain areas of the right parietal lobe in the association between pitch and spatial elevation, we analysed whether neurological syndromes associated with functional or structural alteration of the right parietal lobe (for example, nonverbal learning disorder, NLD) can affect performance on pitch-related tasks. To achieve our aims, we compared performance on visuospatial and auditory tasks between patients with NLD and a control group without neurological or psychiatric disorders. The results of the three studies that make up this thesis suggest: (1) that rising sounds modulate the perceptual system's reaction time to visual objects more strongly than falling sounds; (2) the existence of crossmodal associations between pitch and visual size in 6-month-old infants, but not in younger infants, suggesting that experience and/or maturation is necessary to fully develop this association; and (3) that functional or structural alteration of parietal areas has a negative impact on auditory processing tasks, for example, pitch discrimination. In conclusion, it was confirmed that pitch can modulate spatial attention. Second, crossmodal correspondences are controlled by basic mechanisms that are present from an early age. Finally, evidence was also found that the perception of complex auditory information may require brain areas whose alteration in neurological disorders affects spatial processing.
13

Scheperle, Rachel Anna. "Relationships among peripheral and central electrophysiological measures of spatial / spectral resolution and speech perception in cochlear implant users." Diss., University of Iowa, 2013. https://ir.uiowa.edu/etd/5055.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The ability to perceive speech is related to the listener's ability to differentiate among frequencies (i.e. spectral resolution). Cochlear implant users exhibit variable speech perception and spectral resolution abilities, which can be attributed at least in part to electrode interactions at the periphery (i.e. spatial resolution). However, electrophysiological measures of peripheral spatial resolution have not been found to correlate with speech perception. The purpose of this study was to systematically evaluate auditory processing from the periphery to the cortex using both simple and spectrally complex stimuli in order to better understand the underlying processes affecting spatial and spectral resolution and speech perception. Eleven adult cochlear implant users participated in this study. Peripheral spatial resolution was assessed using the electrically evoked compound action potential (ECAP) to measure channel interaction functions for thirteen probe electrodes. We evaluated central processing using the auditory change complex (ACC), a cortical response, elicited with both spatial (electrode pairs) and spectral (rippled noise) stimulus changes. Speech perception included a vowel-discrimination task and the BKB-SIN test of keyword recognition in noise. We varied the likelihood of electrode interactions within each participant by creating three experimental programs, or MAPs, using a subset of seven electrodes and varying the spacing between activated electrodes. Linear mixed model analysis was used to account for repeated measures within an individual, allowing for a within-subject interpretation. We also performed regression analysis to evaluate the relationships across participants. Both peripheral and central processing abilities contributed to the variability in performance observed across CI users. The spectral ACC was the strongest predictor of speech perception abilities across participants.
When spatial resolution was varied within a person, all electrophysiological measures were significantly correlated with each other and with speech perception. However, the ECAP measures were the best single predictor of speech perception for the within-subject analysis, followed by the spectral ACC. Our results indicate that electrophysiological measures of spatial and spectral resolution can provide valuable information about perception. All three of the electrophysiological measures used in this study, including the ECAP channel interaction functions, demonstrated potential for clinical utility.
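The across-participant regression analysis mentioned above can be illustrated with a minimal sketch. The `pearson_r` helper and all data values below are hypothetical, invented purely for demonstration; they are not values from the study:

```python
# Hypothetical illustration of an across-participant correlation between an
# electrophysiological measure (a made-up "spectral ACC amplitude") and a
# speech-perception score. All numbers are invented for demonstration.
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Eleven hypothetical CI users: spectral ACC amplitude (uV) vs. % words correct.
acc_amplitude = [1.2, 2.5, 0.8, 3.1, 1.9, 2.2, 0.5, 2.8, 1.5, 3.4, 2.0]
speech_score = [45, 70, 38, 82, 60, 66, 30, 75, 52, 88, 62]

r = pearson_r(acc_amplitude, speech_score)
print(f"r = {r:.3f}")  # a strong positive association in this toy data
```

With real data, the same coefficient (plus a significance test) would quantify how well the cortical measure predicts speech perception across participants.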
14

Stanley, Raymond M. "Toward adapting spatial audio displays for use with bone conduction the cancellation of bone-conducted and air-conducted sound waves /." Thesis, Available online, Georgia Institute of Technology, 2006, 2006. http://etd.gatech.edu/theses/available/etd-11022006-103809/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Bordeau, Camille. "Développement d’un dispositif de substitution sensorielle vision-vers-audition : étude des performances de localisation et comparaison de schémas d’encodage." Electronic Thesis or Diss., Bourgogne Franche-Comté, 2023. https://nuxeo.u-bourgogne.fr/nuxeo/site/esupversions/32b91892-b42f-4d42-bf10-0ad744828698.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Visual-to-auditory sensory substitution devices convert visual information into a soundscape so that the environment can be perceived through the auditory modality when the visual modality is impaired. They are a promising solution for improving the autonomy of visually impaired people during pedestrian travel. The main objective of this thesis was to determine and evaluate an encoding scheme for sensory substitution allowing three-dimensional spatial perception, by proposing familiarization and evaluation protocols in virtual environments of varying complexity. The first aim was to determine whether reproducing the acoustic cues of auditory spatial perception was more effective than using other acoustic cues involved in audio-visual interactions. The first study showed that pitch modulation in the encoding scheme partly compensated for the perceptual limits of spatialization along the elevation dimension. The second study showed that modifying the sound envelope could compensate for the compressed perception of distance. The second objective of this thesis was to determine to what extent the chosen encoding scheme preserved spatial perception abilities in a complex environment composed of several objects. The third study showed that the ability to segregate a complex visual scene through its associated soundscape depended on the specific spectral signatures of the objects composing it when pitch modulation is used as an acoustic cue in the encoding scheme.
This work has practical implications for the improvement of substitution devices, concerning, on the one hand, the possibility of compensating for spatial perceptual limits with non-spatial acoustic cues in the encoding scheme and, on the other hand, the need to reduce the flow of auditory information in order to preserve the ability to segregate the soundscape. Since the familiarization and evaluation protocols in virtual environments were developed to be suitable for the visually impaired population, this work highlights the potential of virtual environments for precisely assessing the ability to use substitution devices in a controlled and secure context.
Visual-to-auditory sensory substitution devices convert visual information into soundscapes so that the environment can be perceived through the auditory modality when the visual modality is impaired. They constitute a promising solution for improving the autonomy of visually impaired people when traveling on foot. The main objective of this thesis work was to determine an encoding scheme for sensory substitution allowing 3-dimensional spatial perception, by proposing familiarization and evaluation protocols in virtual environments of different complexities. The first aim was to determine whether the reproduction of acoustic cues for auditory spatial perception was more effective than the use of acoustic cues involved in audio-visual interactions. The first study demonstrated that pitch modulation in the encoding scheme could partly compensate for the perceptual limits of spatialization along the elevation dimension. The second study showed that modification of the sound envelope could partly compensate for the compressed perception of distance. The second objective was to determine to what extent the chosen encoding scheme preserved spatial perception abilities in a complex environment containing several objects. The third study demonstrated that the ability to segregate a complex visual scene through its soundscape depends on the specific spectral signatures of the objects composing it when pitch modulation is used as an acoustic cue in the encoding scheme. The work of this thesis has practical implications for the improvement of substitution devices concerning, on the one hand, the possibility of compensating for spatial perceptual limits with non-spatial acoustic cues in the encoding scheme and, on the other hand, the need to reduce the amount of auditory information to preserve the ability to segregate the soundscape.
Since the familiarization and evaluation protocols in virtual environments were developed to be suitable for the visually impaired population, the work of this thesis highlights the potential of virtual environments for precisely evaluating the ability to use sensory substitution devices in a controlled and secure context.
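As a rough sketch of the kind of encoding scheme discussed in this abstract (not the exact scheme evaluated in the thesis), a visual-to-auditory device might map an object's elevation to pitch, its azimuth to stereo panning, and its distance to sound level. All mappings and parameter values below are assumptions chosen for illustration:

```python
# Illustrative visual-to-auditory encoding sketch, in the spirit of the
# devices described above. The specific mappings and parameter values are
# hypothetical, not those evaluated in the thesis.
import math

def encode_object(azimuth_deg, elevation_deg, distance_m,
                  f_min=200.0, f_max=2000.0, ref_dist=1.0):
    """Map a 3-D object position to simple acoustic parameters.

    - elevation -> pitch (log-frequency axis; higher position = higher pitch)
    - azimuth   -> stereo pan (-1 = full left, +1 = full right)
    - distance  -> level in dB re. a 1 m reference (inverse-square law)
    """
    # Elevation in [-45, +45] deg mapped onto a logarithmic frequency axis.
    t = (elevation_deg + 45.0) / 90.0              # normalize to [0, 1]
    freq = f_min * (f_max / f_min) ** t
    # Azimuth in [-90, +90] deg mapped linearly to a pan position.
    pan = max(-1.0, min(1.0, azimuth_deg / 90.0))
    # Level falls off 6 dB per doubling of distance.
    level_db = -20.0 * math.log10(max(distance_m, 0.1) / ref_dist)
    return freq, pan, level_db

# A straight-ahead object at eye level, 1 m away:
print(encode_object(0.0, 0.0, 1.0))
```

Sweeping a virtual camera across a scene and sonifying each detected object this way yields the kind of soundscape whose segregation limits the third study examines.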
16

Bergqvist, Emil. "Auditory displays : A study in effectiveness between binaural and stereo audio to support interface navigation." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-10072.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis analyses whether a change in auditory feedback can improve the effectiveness of interaction with a non-visual system, or with a system used by individuals with visual impairment. Two prototypes were developed, one with binaural audio and the other with stereo audio. The interaction was evaluated in an experiment in which 22 participants, divided into two groups, performed a number of interaction tasks. A post-interview was conducted together with the experiment. The results of the experiment showed no great difference between binaural audio and stereo audio regarding the speed and accuracy of the interaction. The post-interviews, however, revealed interesting differences in the way participants visualized the virtual environment that affected the interaction, opening up interesting questions for future studies.
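The distinction between the two prototypes can be made concrete: stereo panning only changes the level in each channel, whereas binaural rendering also reproduces interaural time differences (ITDs). The sketch below uses the standard Woodworth spherical-head approximation for the ITD; the head radius is an assumed average value, and the code is illustrative rather than taken from the thesis:

```python
# Illustrative comparison of stereo panning (level-only cue) with a binaural
# interaural time difference. Woodworth's spherical-head approximation is a
# standard textbook formula; the head radius is an assumed value.
import math

SPEED_OF_SOUND = 343.0   # m/s in air at roughly 20 degrees C
HEAD_RADIUS = 0.0875     # m, assumed average head radius

def stereo_gains(azimuth_deg):
    """Constant-power stereo pan: only level differs between channels."""
    theta = math.radians((azimuth_deg + 90.0) / 2.0)  # map [-90, 90] -> [0, 90]
    return math.cos(theta), math.sin(theta)           # (left, right) gains

def woodworth_itd(azimuth_deg):
    """ITD in seconds for a source at the given azimuth (0 = straight ahead)."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

# A source at 90 degrees yields an ITD of roughly 0.65 ms, near the value
# commonly cited for human listeners; stereo panning carries no such delay.
print(f"ITD at 90 deg: {woodworth_itd(90.0) * 1e3:.2f} ms")
```

A binaural renderer would apply this delay (plus level and spectral cues) per ear, which is what gives it the potential advantage over plain stereo examined in the experiment.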
17

Hobeika, Lise. "Interplay between multisensory integration and social interaction in auditory space : towards an integrative neuroscience approach of proxemics." Thesis, Sorbonne Paris Cité, 2017. http://www.theses.fr/2017USPCB116.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Humans do not perceive space homogeneously: the brain codes the space near the body differently from far space. This distinction plays a key role in our social behavior: the space near the body, called peripersonal space (PPS), is thought to act as a protective zone around the body, within which the presence of another individual is perceived as a threat. PPS was initially described in social psychology and anthropology as a factor in human communication. It was later described in the monkey by neurophysiological studies as a space coded by multisensory neurons. These neurons discharge only in response to sensory events located within a limited distance of the monkey's body (whether tactile, visual or auditory); together, these multisensory neurons code the PPS all around the body. This dedicated coding of PPS is crucial for interacting with the external world, since it is within this space that actions to protect the body or to reach nearby objects are carried out. The multisensory coding of PPS during social interactions has so far been little studied. In this research, we conducted several studies to identify factors contributing to the permeability of PPS and to its adaptive aspects. A first study examined the lateral boundaries of PPS in isolated individuals, measuring how a dynamic sound source approaching the body interacts with the detection time of tactile stimulations. This study revealed differences in the size of PPS between the two hemispaces, which appear to be linked to handedness. A second study explored modulations of PPS in social contexts and showed that PPS is modified when individuals perform a task collaboratively.
The third study is a methodological investigation that aims to go beyond the limitations of the behavioral paradigms currently used to measure PPS. It proposes new directions for assessing how stimuli approaching the body are integrated as a function of their distance and of the multisensory context in which they are processed. Taken together, this work shows the value of studying multisensory integration around the body in 3D space to fully understand PPS, and the potential impact of social factors on low-level multisensory processes. Moreover, these studies underline the importance, for social neuroscience, of developing genuinely social experimental protocols involving several participants.
The space near the body, called peripersonal space (PPS), was originally studied in social psychology and anthropology as an important factor in interpersonal communication. It was later described by neurophysiological studies in monkeys as a space mapped with multisensory neurons. Those neurons discharge only when events occur near the body (be it tactile, visual or auditory information), delineating the space that individuals consider as belonging to them. The human brain also codes events that are near the body differently from those that are farther away. This dedicated brain function is critical for interacting satisfactorily with the external world, be it for defending oneself or for reaching objects of interest. However, little is known about how this function is impacted by real social interactions. In this work, we have conducted several studies aiming at understanding the factors that contribute to the permeability and adaptive aspects of PPS. A first study examined lateral PPS for individuals in isolation, by measuring reaction time to tactile stimuli when an irrelevant sound looms towards the body of the individual. It revealed an anisotropy of reaction time across hemispaces, which we could link to handedness. A second study explored the modulations of PPS in social contexts. It was found that minimal social instructions could influence the shape of peripersonal space, with a complex modification of behaviors in collaborative tasks that goes beyond the handedness effect. The third study is a methodological investigation attempting to go beyond the limitations of the behavioral methods measuring PPS, proposing a new direction for assessing how stimuli coming towards the body are integrated according to their distance and the multisensory context in which they are processed.
Taken together, our work emphasizes the importance of investigating multisensory integration in 3D space around the body to fully capture PPS mechanisms, and the potential impacts of social factors on low-level multisensory processes. Moreover, this research provides evidence that neurocognitive social investigations, in particular on space perception, benefit from going beyond the traditional isolated-individual protocols towards actual live social interactive paradigms.
18

Lidji, Pascale. "Musique et langage : spécificités, interactions et associations spatiales." Thèse, Universite Libre de Bruxelles, 2008. http://hdl.handle.net/1866/6347.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Lee, Pei-Jung, and 李佩蓉. "The auditory spatial perception of hearing impaired under noise exposure." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/63275674675376395841.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Master's thesis
Ming Chuan University
Master's Program, Department of Commodity Design
2011 (ROC year 100)
The philosopher Lorenz Oken observed that "the eye takes a person into the world; the ear brings the world into a person." Hearing is clearly an important channel through which humans receive external information, and it also shapes the ability to communicate. Hearing impairment inevitably affects learning and communication efficiency, and can also lead to a decline in quality of life and interpersonal relationships, causing inconvenience in daily life and safety concerns. How to design products that improve usability for the hearing impaired, and thereby enhance their well-being, is therefore an important issue. The objective of this study was to investigate the effects of sound-source orientation, distance, frequency and source cues on the spatial perception of the hearing impaired. The results of Experiment 1 indicated that sound frequency, sound-source distance, the position of the sound source relative to the better ear, and the individual participant all had significant effects on the tolerance for judging sound-source position, with two-way interactions of sound frequency × sound-source position and sound frequency × sound-source distance. Experiment 2 indicated that stimulus duration and individual participant significantly affected the change in hearing-threshold tolerance across frequencies before and after stimulation, and also revealed a two-way interaction of stimulus type × presentation time. The results can guide the design of products for the hearing impaired and enhance their well-being, for example in communications equipment, medical audiometry and calibration, broadcasting systems, alarms, doorbells, GPS navigation, mobile phones and parking sensors.
20

Morgenstern, Yaniv. "Broad spatial pooling with local detectors for grating detection revealed with classification image analysis /." 2006. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&res_dat=xri:pqdiss&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&rft_dat=xri:pqdiss:MR19699.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (M.A.)--York University, 2006. Graduate Programme in Psychology.
Typescript. Includes bibliographical references (leaves 89-93).
21

Hirschausen, David. "Body space interaction : sound as a spatial mnemonic." Master's thesis, 2007. http://hdl.handle.net/1885/149581.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Heron, James, N. W. Roach, James Vincent Michael Hanson, Paul V. McGraw, and David J. Whitaker. "Audiovisual time perception is spatially specific." 2012. http://hdl.handle.net/10454/6016.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Our sensory systems face a daily barrage of auditory and visual signals whose arrival times form a wide range of audiovisual asynchronies. These temporal relationships constitute an important metric for the nervous system when surmising which signals originate from common external events. Internal consistency is known to be aided by sensory adaptation: repeated exposure to consistent asynchrony brings perceived arrival times closer to simultaneity. However, given the diverse nature of our audiovisual environment, functionally useful adaptation would need to be constrained to signals that were generated together. In the current study, we investigate the role of two potential constraining factors: spatial and contextual correspondence. By employing an experimental design that allows independent control of both factors, we show that observers are able to simultaneously adapt to two opposing temporal relationships, provided they are segregated in space. No such recalibration was observed when spatial segregation was replaced by contextual stimulus features (in this case, pitch and spatial frequency). These effects provide support for dedicated asynchrony mechanisms that interact with spatially selective mechanisms early in visual and auditory sensory pathways.
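The spatially specific recalibration reported here can be caricatured with a toy model in which each location maintains its own point of subjective simultaneity (PSS) that drifts toward the asynchrony it is repeatedly exposed to. The learning rate and asynchrony values below are invented for illustration, not parameters from the study:

```python
# Toy model of spatially specific temporal recalibration: each location
# keeps its own point of subjective simultaneity (PSS, in ms) that drifts
# toward the locally adapted audiovisual asynchrony. The learning rate and
# asynchronies are invented for illustration.

def adapt(pss_by_location, exposures, rate=0.1):
    """One pass of adaptation.

    pss_by_location: dict mapping location -> current PSS in ms
    exposures: list of (location, audiovisual asynchrony in ms) pairs
    """
    for location, asynchrony in exposures:
        pss = pss_by_location[location]
        pss_by_location[location] = pss + rate * (asynchrony - pss)
    return pss_by_location

# Opposing asynchronies at two locations, interleaved: vision-leads
# (+100 ms) on the left, audition-leads (-100 ms) on the right.
pss = {"left": 0.0, "right": 0.0}
exposures = [("left", 100.0), ("right", -100.0)] * 50
adapt(pss, exposures)
# The two locations now hold opposite recalibrations simultaneously,
# mirroring the paper's key behavioral finding.
print(pss)
```

A single shared PSS could not capture this pattern; the per-location state is what models the spatially selective asynchrony mechanisms the authors propose.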

To the bibliography