To see the other types of publications on this topic, follow the link: Auditory perception.

Dissertations / Theses on the topic 'Auditory perception'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations / theses for your research on the topic 'Auditory perception.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
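The difference between the citation styles offered above comes down to how the same metadata fields (author, title, year, institution) are arranged. A minimal, illustrative sketch of that idea, using simplified versions of the APA, MLA, and Harvard thesis formats (the `cite` function and its field layout are our own illustration, not the site's actual generator):

```python
def cite(author, title, year, institution, style="APA"):
    """Format a thesis reference in a (simplified) citation style."""
    if style == "APA":
        # APA 7th-style thesis reference (simplified: no database/URL element)
        return f"{author} ({year}). {title} [Doctoral dissertation, {institution}]."
    if style == "MLA":
        # MLA 9th-style thesis reference (simplified)
        return f"{author}. {title}. {year}. {institution}, PhD dissertation."
    if style == "Harvard":
        # Harvard-style thesis reference (one common variant; Harvard has no single canon)
        return f"{author}, {year}. {title}. Thesis. {institution}."
    raise ValueError(f"unsupported style: {style}")
```

For example, `cite("Akeroyd, M. A.", "Auditory perception of temporal asymmetry", 1995, "University of Cambridge")` rearranges the same four fields that the MLA and Harvard branches would use.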

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Akeroyd, Michael Alexis. "Auditory perception of temporal asymmetry." Thesis, University of Cambridge, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.243023.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Lea, Andrew P. "Auditory modelling of vowel perception." Thesis, University of Nottingham, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.315235.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Talling, Janet C. "Porcine perception of auditory stimuli." Thesis, University of Edinburgh, 1996. http://hdl.handle.net/1842/13076.

Full text
Abstract:
Animals are adapted to live in fluctuating environments. Some stimuli to which they are exposed will be ignored, some will be avoided and others will be approached. Stimuli perceived as a threat or associated with a painful stimulation will tend to be avoided. Therefore, to understand more fully how an animal copes with a particular situation, e.g. transportation, its perception of all stimuli needs to be determined. The aim of the study reported in this thesis was to determine how auditory stimuli, to which pigs are exposed during production, are perceived by individual pigs. A field study was carried out to characterise the sounds to which pigs are exposed during production, and studies were made of pig responses to sound under experimental conditions. The sound pressure level in artificially ventilated fattening units was high (70 to 80 dB(Lin)), but relatively constant. In contrast, naturally ventilated units were quieter (60 to 70 dB(Lin)), but more variable. Sound pressure levels during transport were more than 88 dB(Lin) and highly variable. Similar levels were measured in articulated transporters and small livestock trailers. Sound pressure levels measured in abattoir lairages varied from 77 dB(Lin) to 89 dB(Lin). Equivalent sound pressure levels (Leq 20 min) of 97 dB(Lin) were measured in the stun pen of one abattoir that used electric stunning. Pigs' perception of mechanical sounds between 85 and 100 dB(Lin) was assessed. The onset of sound elicited increased activity and visual searching. Stronger responses were measured for louder sounds. Over a constant exposure period of 15 to 20 minutes, the responses observed decreased towards basal levels.
APA, Harvard, Vancouver, ISO, and other styles
4

Wilkie, Sonia. "Auditory manipulation of visual perception." Thesis, University of Western Sydney, 2008. http://handle.uws.edu.au:8081/1959.7/39802.

Full text
Abstract:
Psychological research on cross-modal auditory-visual perception has focused predominantly on the manipulation of sensory information by visual information. There are relatively few studies of the way auditory stimuli may affect other sensory information. The Sound-induced Illusory Flash is one illusory paradigm in which the auditory system biases visual information. However, little is known about this cross-modal illusion, and more research is needed into its structure, investigating the different conditions under which the Sound-induced Illusory Flash manifests and is enhanced or reduced. The research conducted for this thesis investigates the effect of new auditory stimulus variables on the Sound-induced Illusory Flash. The variables to be discussed concern the formation of a contrast in the auditory stimuli, with the contrast creating a rhythm that emphasises the discontinuous nature of the auditory stimuli, and therefore emphasises the illusory percept. The auditory stimulus contrasts include pitch separation with the octave interval, using the frequencies of 261.5 and 523 Hz; and spatial separation in the auditory stimuli, presenting the monophonic auditory stimuli binaurally so that individual tones alternate between the left and right channels. I furthered this concept of auditory stimulus separation biasing an illusory percept by investigating pitch and spatial presentation and localisation of the visual stimuli when multiple dots were presented. I also conducted analyses to determine whether factors other than the auditory stimuli biased the illusory percept. These included the use of non-illusory trials, determining whether their inclusion biased the illusory trial percept, and the impact of physical factors such as handedness, eye dominance, corrected vision, and musical experience on the illusory percept.
My ultimate aim is to develop the illusory effect as a basis for new intermedia techniques to create the perceptual synchronisation of sound with images. These would be perceived as visually spliced according to the rhythm of the music on the micro time scale.
APA, Harvard, Vancouver, ISO, and other styles
5

Wilkie, Sonia. "Auditory manipulation of visual perception." Thesis, University of Western Sydney, 2008. http://handle.uws.edu.au:8081/1959.7/39802.

Full text
Abstract:
Thesis (M.A. (Hons.))--University of Western Sydney, 2008.
Thesis accompanied by CD-ROM with demonstration of possible creative applications. A thesis presented to the University of Western Sydney, College of Arts, MARCS Auditory Laboratories, in fulfilment of the requirements for the degree of Master of Arts (Honours). Includes bibliographies. Thesis minus demonstration CD-ROM also available online at: http://handle.uws.edu.au:8081/1959.7/39849.
APA, Harvard, Vancouver, ISO, and other styles
6

Storms, Russell L. "Auditory-visual cross-modal perception phenomena." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1998. http://handle.dtic.mil/100.2/ADA355474.

Full text
Abstract:
Dissertation (Ph.D. in Computer Science)--Naval Postgraduate School, September 1998.
Dissertation supervisor: Michael J. Zyda. Includes bibliographical references (p. 207-222). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
7

Ash, Roisin L. "Perception of structure in auditory patterns." Thesis, University of Stirling, 1998. http://hdl.handle.net/1893/26669.

Full text
Abstract:
The present research utilised five tasks to investigate non-musicians' perception of phrase, rhythm, pitch and beat structure in unaccompanied Gaelic melodies and musical sequences. Perception of phrase structure was examined using: i) a segmentation task in which listeners segmented Gaelic melodies into a series of meaningful units and ii) a novel click localisation task whereby listeners indicated where they perceived a superimposed click in the melody had occurred. Listeners consistently segmented the melodies into units of 2.4 - 5.4 seconds. Clicks which were positioned before and after perceived boundaries (identified by segmentation) were perceptually migrated towards the boundary. These results suggest that listeners perceptually differentiate between phrasal groups in melodies (see Sloboda & Gregory, 1980; Stoffer, 1985, for similar results with musicians). Short term memory for rhythmic structure was examined using rhythm recall of computer generated sequences and Gaelic melodies. Computer generated rhythms with small tonal pitch intervals (1 - 4 semitones) were easier to recall than large atonal intervals (predominantly greater than 4 semitones). Recall of Gaelic melodies, containing repetitive rhythmic units, was better than recall of computer sequences. Pitch reversal of Gaelic melodies did not affect recall. Beat-tapping with three Gaelic melodies revealed that the majority of listeners established the underlying beat 1.5 - 3 seconds (5 - 6 notes) after the start of the melodies. Perception of meaning and content in two note melodic intervals and three Gaelic melodies was examined using an adjective pair two-alternative forced choice task. Responses to musical intervals showed evidence of perceptual similarity based mainly on interval size. Perceived information content in the melodies increased significantly by the fourth note.
The results suggest that the amounts of Gaelic melody which are: i) required to establish an underlying beat, ii) remembered after one hearing, and iii) perceptually grouped into a meaningful unit, include the unit of melody which is necessary to establish a basic meaning.
APA, Harvard, Vancouver, ISO, and other styles
8

Butcher, Andrew. "Free field auditory localization and perception." Thesis, Lethbridge, Alta. : University of Lethbridge, Dept. of Mathematics and Computer Science, c2011, 2011. http://hdl.handle.net/10133/3113.

Full text
Abstract:
We have designed a system suitable for auditory electroencephalographic (EEG) experiments, with the objective of enabling studies of auditory motion. This thesis details the perceptual cues involved in spatial auditory experiments, and compares a number of spatial panning algorithms while examining their suitability for this purpose. A behavioural experiment involving perception of static auditory objects was used in an attempt to differentiate these panning algorithms. This study was used to inform the choice of panner used in an auditory EEG experiment, which examined the effects of discontinuity in velocity and position on object perception. A new event related potential (ERP) component – the lateralized object related negativity (LORN) – was identified, and we consider its significance. libnetstation, a library for connecting with the NetStation (EEG) system, has been developed and released as open source software.
viii, 61 leaves : ill. ; 29 cm
APA, Harvard, Vancouver, ISO, and other styles
9

Dooley, Gary John. "The perception of auditory dynamic stimuli." Thesis, University of Cambridge, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.253843.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Merchel, Sebastian [Verfasser]. "Auditory-Tactile Music Perception / Sebastian Merchel." Aachen : Shaker, 2014. http://d-nb.info/106326569X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Bardolf, Lynnette Bosse. "Divided attention, perception, and auditory recall." [Gainesville, Fla.] : University of Florida, 2006. http://purl.fcla.edu/fcla/etd/UFE0014926.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

DEVALLEZ, Delphine. "Auditory perspective: perception, rendering, and applications." Doctoral thesis, Università degli Studi di Verona, 2009. http://hdl.handle.net/11562/337377.

Full text
Abstract:
In our appreciation of auditory environments, distance perception is as crucial as lateralization. Although research work has been carried out on distance perception, modern auditory displays do not yet take advantage of it to provide additional information on the spatial layout of sound sources and, as a consequence, enrich their content and quality. When designing a spatial auditory display, one must take into account the goal of the given application and the resources available in order to choose the optimal approach. In particular, rendering auditory perspective provides a hierarchical ordering of sound sources and allows the user's attention to be focused on the closest sound source. Besides, when visual data are no longer available, either because they are out of the visual field or the user is in the dark, or should be avoided to reduce the load of visual attention, auditory rendering must convey all the spatial information, including distance. The present research work aims at studying auditory depth (i.e. sound sources displayed straight ahead of the listener) in terms of perception, rendering and applications in human-computer interaction. First, an overview is given of the most important aspects of auditory distance perception. Investigations on depth perception are much more advanced in vision, since they have already found applications in computer graphics. It therefore seems natural to give the same information in the auditory domain to increase the degree of realism of the overall display. Depth perception may indeed be facilitated by combining both visual and auditory cues. Relevant results from past literature on audio-visual interaction effects are reported, and two experiments were carried out on the perception of audio-visual depth. In particular, the influence of auditory cues on the perceived visual layering in depth was investigated.
Results show that auditory intensity manipulation does not affect the perceived order in depth, which is most probably due to the lack of multisensory integration. Besides, the second experiment, which introduced a delay between the two auditory-visual stimuli, revealed an effect of the temporal order of the two visual stimuli. Among existing techniques for sound source spatialization along the depth dimension, a previous study proposed the modeling of a virtual pipe, based on the exaggeration of reverberation in such an environment. The design strategy follows a physics-based modeling approach and makes use of a 3D rectangular Digital Waveguide Mesh (DWM), which had already shown its ability to simulate complex, large-scale acoustical environments. The 3D DWM turned out to be too resource-consuming for real-time simulations of 3D environments of decent size. While downsampling may help in reducing the CPU processing load, a more efficient alternative is to use a model in 2D, consequently simulating a membrane. Although it sounds less natural than 3D simulations, the resulting bidimensional audio space presents similar properties, especially for depth rendering. The research work has also shown that virtual acoustics allows depth perception to be shaped and, in particular, the usual compression of distance estimates to be compensated for. A trapezoidal bidimensional DWM is proposed as a virtual environment able to provide a linear relationship between perceived and physical distance. Three listening tests were conducted to assess the linearity. They also gave rise to a new test procedure, derived from the MUSHRA test, which is suitable for direct comparison of multiple distances. In particular, it reduces the response variability in comparison with the direct magnitude estimation procedure. Real-time implementations of the rectangular 2D DWM have been realized as Max/MSP external objects.
The first external allows one or more static sound sources, located at different distances from the listener, to be rendered in depth, while the second external simulates one moving sound source along the depth dimension, i.e. an approaching/receding source. As an application of the first external, an audio-tactile interface for sound navigation has been proposed. The tactile interface includes a linear position sensor made of conductive material. The touch position on the ribbon is mapped onto the listening position on a rectangular virtual membrane, modeled by the 2D DWM and providing depth cues of four equally spaced sound sources. Furthermore, the knob of a MIDI controller controls the position of the mesh along the playlist, which allows a whole set of files to be browsed by moving the audio window resulting from the virtual membrane back and forth. Subjects involved in a user study succeeded in finding all the target files, and found the interface intuitive and entertaining. Furthermore, another demonstration of the audio-tactile interface was realized, using physics-based models of sounds. Everyday sounds of "frying", "knocking" and "liquid dripping" are used such that both sound creation and depth rendering are physics-based. It is believed that this ecological approach provides an intuitive interaction. Finally, "DepThrow" is an audio game, based on the use of the 2D DWM to render depth cues of a dynamic sound source. The game consists in throwing a virtual ball (modeled by a physics-based model of rolling sound) inside a virtual tube (modeled by a 2D DWM) which is open-ended and tilted. The goal is to make the ball roll as far as possible in the tube without letting it fall out at the far end. Demonstrated as a game, this prototype is also meant to be a tool for investigations on the perception of dynamic distance.
Preliminary results of a listening test on the perception of distance motion in the virtual tube showed that the duration of the ball's movement influences the estimation of the distance reached by the rolling ball.
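The rectangular 2D DWM at the heart of this abstract has a standard textbook formulation: each grid junction scatters the waves arriving on its four ports and passes the outgoing waves to its neighbours with a one-sample delay. The sketch below is that generic scheme, not code from the thesis; the grid size, boundary reflection coefficient, excitation and pick-up point are illustrative choices.

```python
import numpy as np

# Port order: 0 = north (i-1), 1 = south (i+1), 2 = east (j+1), 3 = west (j-1).

def dwm_simulate(h, w, steps, r=-0.95):
    """Impulse response of a 2D rectangular digital waveguide mesh.

    p_in[d, i, j] is the wave component arriving at junction (i, j)
    through port d. Each step scatters the incoming waves at every
    4-port junction, then hands the outgoing waves to the neighbours
    with a one-sample delay; edges reflect with coefficient r."""
    p_in = np.zeros((4, h, w))
    listen = (h // 2, w - 2)            # pick-up junction near the far end
    out = []
    for n in range(steps):
        if n == 0:
            p_in[:, h // 2, 1] += 0.25  # impulse fed into all ports of one junction
        vj = 0.5 * p_in.sum(axis=0)     # junction signal: (2/N) * sum, N = 4 ports
        p_out = vj[None] - p_in         # lossless scattering: out = junction - in
        nxt = np.zeros_like(p_in)
        # outgoing waves become the neighbours' incoming waves one step later
        nxt[1, :-1, :] = p_out[0, 1:, :]   # sent north, enters neighbour's south port
        nxt[0, 1:, :] = p_out[1, :-1, :]   # sent south, enters neighbour's north port
        nxt[3, :, 1:] = p_out[2, :, :-1]   # sent east, enters neighbour's west port
        nxt[2, :, :-1] = p_out[3, :, 1:]   # sent west, enters neighbour's east port
        # reflective boundaries (phase-inverting walls for r < 0)
        nxt[0, 0, :] = r * p_out[0, 0, :]
        nxt[1, -1, :] = r * p_out[1, -1, :]
        nxt[2, :, -1] = r * p_out[2, :, -1]
        nxt[3, :, 0] = r * p_out[3, :, 0]
        p_in = nxt
        out.append(vj[listen])
    return np.array(out)
```

The thesis's depth cue falls out of exactly this kind of simulation: a listening point farther from the excitation receives a later, weaker and more reverberant response, and a trapezoidal (rather than rectangular) mesh reshapes how that decay maps onto perceived distance.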
APA, Harvard, Vancouver, ISO, and other styles
13

Chen, Yuxiao. "Multimodal Perception of Auditoria: Influence of Auditory and Visual Factors on Preference." Thesis, The University of Sydney, 2022. https://hdl.handle.net/2123/29773.

Full text
Abstract:
The enjoyment of a music performance is a multisensory experience, in which the auditory and visual senses play the most important parts in conveying the content of the concert. This thesis investigates the effects of, and relationships between, various auditory and visual factors on subjective preference, with an emphasis on the rarely studied visual preference. The thesis includes four subjective evaluation experiments (all using a head-mounted virtual reality display and headphone audio playback, with 30 to 33 volunteers each) and one online survey (153 responses). This experimental method of virtual reality display and digital audio playback allows each factor to be individually controlled and tested, something never possible with traditional methods, while still providing a reasonable sense of space and realism. Auditory factors considered in the thesis include sound pressure level and reverberation time, while visual factors include interior design colour, distance from the stage, lateral angle from the concert hall mid-plane, vertical angle from stage level, and visual obstruction. The effects of the factors were studied using orthogonal control, and verified with realistic models and alternative methods with larger samples. Results include a prediction model that accounts for the effects and relationships of all investigated factors, and a practical tool for the design and evaluation of auditorium seating layouts.
APA, Harvard, Vancouver, ISO, and other styles
14

Delmotte, Varinthira Duangudom. "Computational auditory saliency." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/45888.

Full text
Abstract:
The objective of this dissertation research is to identify sounds that grab a listener's attention. These sounds that draw a person's attention are sounds that are considered salient. The focus here will be on investigating the role of saliency in the auditory attentional process. In order to identify these salient sounds, we have developed a computational auditory saliency model inspired by our understanding of the human auditory system and auditory perception. By identifying salient sounds we can obtain a better understanding of how sounds are processed by the auditory system, and in particular, the key features contributing to sound salience. Additionally, studying the salience of different auditory stimuli can lead to improvements in the performance of current computational models in several different areas, by making use of the information obtained about what stands out perceptually to observers in a particular scene. Auditory saliency also helps to rapidly sort the information present in a complex auditory scene. Since our resources are finite, not all information can be processed equally. We must, therefore, be able to quickly determine the importance of different objects in a scene. Additionally, an immediate response or decision may be required. In order to respond, the observer needs to know the key elements of the scene. The issue of saliency is closely related to many different areas, including scene analysis. The thesis provides a comprehensive look at auditory saliency. It explores the advantages and limitations of using auditory saliency models through different experiments and presents a general computational auditory saliency model that can be used for various applications.
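The dissertation does not spell out its model here, but computational auditory saliency models of this family typically work on a time-frequency representation and mark regions whose local energy stands out against their surroundings (a centre-surround contrast). A minimal generic sketch of that idea, ours rather than the author's, using only NumPy:

```python
import numpy as np

def spectrogram(x, nfft=256, hop=128):
    """Magnitude short-time spectrum from Hann-windowed frames, shape (freq, time)."""
    win = np.hanning(nfft)
    frames = np.array([x[i:i + nfft] * win
                       for i in range(0, len(x) - nfft, hop)])
    return np.abs(np.fft.rfft(frames, axis=1)).T

def saliency_map(spec, centre=3, surround=15):
    """Centre-surround contrast along time: a frame is salient when its
    local (centre) energy differs from the broader (surround) average."""
    def smooth(s, k):
        kern = np.ones(k) / k
        return np.array([np.convolve(row, kern, mode='same') for row in s])
    return np.abs(smooth(spec, centre) - smooth(spec, surround))
```

Fed a quiet steady tone with a loud burst in the middle, the map is near zero during the steady portion and peaks around the burst, which is the basic behaviour any auditory saliency model of this kind should reproduce.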
APA, Harvard, Vancouver, ISO, and other styles
15

STRYBEL, THOMAS ZIGMUNT. "AUDITORY APPARENT MOTION IN THE FREE FIELD." Diss., The University of Arizona, 1987. http://hdl.handle.net/10150/184100.

Full text
Abstract:
The purpose of this investigation was to examine the illusion of auditory apparent motion (AM), and compare it to the visual AM function. Visual explanations of this phenomenon rely on a two-process theory, with the spatial separation between the two stimuli determining which process is involved. A pilot experiment examined the role of spatial separation on auditory AM. Subjects were required to listen to a pair of 50 msec uncorrelated white noise sources, fed through two speakers, and separated in time by interstimulus onset intervals (ISOIs) ranging from 0 to 500 msec. The speakers were positioned at one of eleven different locations which varied both in their separation (0-160° azimuth) and distance from the listener (17-34 inches). The subjects classified their experience of the stimulus presentation into one of five response categories. In addition, they were required to report the direction (left or right) of the first-occurring stimulus. Neither the angular separation between the sound sources nor the distance of the sources from the subject had any effect on the range or midpoint of the ISOIs which produced the illusion of motion. In addition, the percentage of correct direction judgements was not affected by the location of the sound sources. The main experiment examined the possibility of perceiving auditory AM in the absence of binaural cues. Six listeners were employed in this experiment, using only three separations (10, 40 and 160°). Each subject was tested at all speaker positions, both with one ear occluded and with both ears open. The results of this experiment indicated that AM can be perceived under monaural listening conditions. Spatial separation did affect the illusion in this condition. As the separation between the sound sources increased, the percentage of motion reports decreased. The detection of direction of the motion was more difficult as the separation decreased in the monaural condition.
These results conflict with previous explanations of motion perception in the auditory modality, which rely exclusively on the presence of binaural spatial information. A two process theory of AM is also indicated, but the spatial separation does not determine which mechanism is being employed.
APA, Harvard, Vancouver, ISO, and other styles
16

King, Robert A. "Determinants of auditory display usage." Thesis, Georgia Institute of Technology, 1993. http://hdl.handle.net/1853/29422.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Schooneveldt, Gregory Paul. "Dynamic aspects of auditory masking." Thesis, University of Cambridge, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.304512.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Lee, Mark D. "Multi-channel auditory search : toward understanding control processes in polychotic auditory listening." Diss., Georgia Institute of Technology, 1996. http://hdl.handle.net/1853/29225.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Wright, James K. "Auditory object perception : counterpoint in a new context." Thesis, McGill University, 1986. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=65537.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Hutchison, Joanna Lynn. "Boundary extension in the auditory domain." Fort Worth, Tex. : Texas Christian University, 2007. http://etd.tcu.edu/etdfiles/available/etd-07232007-150552/unrestricted/Hutchison.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Anderson, Elizabeth. "Audiovisual speech perception with degraded auditory cues." Connect to resource, 2006. http://hdl.handle.net/1811/6532.

Full text
Abstract:
Thesis (Honors)--Ohio State University, 2006.
Title from first page of PDF file. Document formatted into pages: contains 35 p.; also includes graphics. Includes bibliographical references (p. 28-29). Available online via Ohio State University's Knowledge Bank.
APA, Harvard, Vancouver, ISO, and other styles
22

Motz, Benjamin A. "Expectations during the Perception of Auditory Rhythms." Thesis, Indiana University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10750871.

Full text
Abstract:

When someone hears regular, periodic sounds, such as drum beats, footsteps, or stressed syllables in speech, these individual stimuli tend to be grouped into a perceived rhythm. One of the hallmarks of rhythm perception is that the listener generates expectations for the timing of upcoming stimuli, which theorists have described as endogenous periodic modulations of attention around the time of anticipated sounds. By constructing an internal representation of a rhythm, perceptual processes can be augmented by proactively deploying attention at the expected moment of an upcoming stressed syllable, the next step in an observed stride, or during the stroke of a co-speech hand gesture. A hypothetical benefit of this anticipatory allocation of attention is that it might facilitate temporal integration across the senses, binding multisensory aspects of our experiences into a unified “now,” anchored by temporally-precise auditory expectations. The current dissertation examines this hypothesis, exploring the effects of auditory singletons, and auditory rhythms, on electrophysiological indices of perception and attention to a visual stimulus, using the flash-lag paradigm. An electroencephalography study was conducted, where sounds, either isolated or presented rhythmically, occurred in alignment with a task-relevant visual flash. Results suggest a novel dissociation between the multisensory effects of discrete and rhythmic sounds on visual event perception, as assessed by the N1 component of the event-related potential, and by oscillatory power in the beta (15–20 Hz) frequency range. This dissociation is discussed in the context of classic and contemporary research on rhythm perception, temporal orienting, and temporal binding across the senses, and contributes to a more refined understanding of rhythmically-deployed attention.

APA, Harvard, Vancouver, ISO, and other styles
23

Hollander, Ari J. "An exploration of virtual auditory shape perception /." Connect to this title online (HTML format) Connect to this title online (RTF format), 1994. http://www.hitl.washington.edu/publications/hollander/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Gonzalez, Daniel. "An Adaptation of an Auditory Perception Test." FIU Digital Commons, 2018. https://digitalcommons.fiu.edu/etd/3772.

Full text
Abstract:
The Auditory Perception Test for the Hearing Impaired, 3rd edition (APT/HI-3) was adapted into an auditory perception assessment tool for Spanish-speaking children called the Auditory Perception Test for the Hearing Impaired—Spanish (APT/HI-S). Test items from the APT/HI-S were then validated by three groups of Spanish-English bilinguals to determine if selected words were developmentally and linguistically appropriate for 3-year-old children. Survey results revealed that 37 out of 62 words were considered developmentally and grammatically appropriate. The APT/HI-S was then administered to two 3-year-old and two 5-year-old children, two with typical hearing and two with hearing loss. Results revealed that language proficiency played an integral role in the measurement of auditory perception skills. The children demonstrated better performance when tested in their dominant language, reinforcing the need to have a language-specific assessment tool to obtain a more accurate picture of auditory and speech perception skills in children.
APA, Harvard, Vancouver, ISO, and other styles
25

Keating, Peter. "Plasticity and integration of auditory spatial cues." Thesis, University of Oxford, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.561113.

Full text
Abstract:
Although there is extensive evidence that auditory spatial processing can adapt to changes in auditory spatial cues both in infancy and adulthood, the mechanisms underlying adaptation appear to differ across species. Whereas barn owls compensate for unilateral hearing loss throughout development by learning abnormal mappings between cue values and spatial position, adult mammals seem to adapt by ignoring the acoustical input available to the affected ear and learning to rely more on unaltered spatial cues. To investigate these differences further, ferrets were raised with a unilateral earplug and their ability to localize sounds was assessed. Although these animals did not fully compensate for the effects of an earplug, they performed considerably better than animals that experienced an earplug for the first time, indicating that adaptation had taken place. We subsequently found that juvenile-plugged (JP) ferrets learned to adjust both cue mappings and weights in response to changes in acoustical input, with the nature of these changes reflecting the expected reliability of different cues. Thus, the auditory system may be able to rapidly update the way in which individual cues are processed, as well as the way in which different cues are integrated, thereby enabling spatial cues to be processed in a context-specific way. In attempting to understand the mechanisms that guide plasticity of spatial hearing, previous studies have raised the possibility that changes in auditory spatial processing may be driven by mechanisms intrinsic to the auditory system. To address this possibility directly, we measured the sensitivity of human subjects to ITDs and ILDs following transient misalignment of these cues. We found that this induces a short-term recalibration that acts to compensate for the effects of cue misalignment. These changes occurred in the absence of error feedback, suggesting that mutual recalibration can occur between auditory spatial cues.
The nature of these changes, however, was consistent with models of cue integration, suggesting that plasticity and integration may be inextricably linked. Throughout the course of this work, it became clear that future investigations would benefit from the application of closed-field techniques to the ferret. For this reason, we developed and validated methods that enable stimuli to be presented to ferrets over earphones, and used these methods to assess ITD and ILD sensitivity in ferrets using a variety of different stimuli. We found that the Duplex theory is able to account for binaural spatial sensitivity in these animals, and that sensitivity is comparable with that found in humans, thereby confirming the ferret as an excellent model for understanding binaural spatial hearing.
APA, Harvard, Vancouver, ISO, and other styles
26

Leung, Johahn. "Auditory Motion in Motion." Thesis, The University of Sydney, 2016. http://hdl.handle.net/2123/15944.

Full text
Abstract:
This thesis describes a number of studies conducted to examine three different facets of horizontal motion processing in the auditory system: firstly, when a sound moved around a stationary listener (“source motion”); secondly, when subjects engaged in head rotations while sources remained stationary (“self motion”); and lastly, when subjects engaged in self motion during simultaneous source motion. Previous studies in the field have explored these issues separately, and much remains unknown. For “source motion”, a localisation-based “snapshot” psychophysical model remains the most commonly used narrative in describing this process, given the lack of clarity about the neural pathways underlying motion perception. However, it remains unclear whether (or how) such a framework can generalise to different stimulus conditions. For “self motion”, studies reported here have considered the sensory implications of head motion in the presence of a stationary sound, questioning how auditory spatial perception remains stable and exploring the perceptual benefits from dynamic localisation cues. Yet, the underlying interactions between audition and the head motor plant remain unclear, particularly at the faster head turn velocities. Lastly, there is a scarcity of studies probing how listeners perceive a moving source during simultaneous self motion, even though it encapsulates concepts in both self and source motion, providing a unique opportunity to help frame our understanding of the sensorimotor mechanisms involved. We addressed these questions with three psychophysical experiments, and proposed a leaky integrative framework as an alternative to the “snapshot” model.
APA, Harvard, Vancouver, ISO, and other styles
27

Ciocca, Valter. "Effects of auditory streaming upon duplex perception of speech." Thesis, McGill University, 1988. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=75866.

Full text
Abstract:
When a formant transition (isolated transition) and the remainder (base) of a synthesized syllable are presented to opposite ears, most subjects perceive two simultaneous sounds, a syllable and a nonspeech chirp. The isolated transition determines the identity of the syllable at one ear and, at the same time, is perceived as a chirp at the opposite ear. This phenomenon, called duplex perception, has been interpreted as the result of the independent operation of two perceptual modes, the phonetic and the auditory mode. In order to test this hypothesis, the isolated transition was preceded and followed by a series of identical transitions sent to the same ear. This streaming procedure weakened the contribution of the transition to the perceived phonetic identity of the syllable. This weakening effect could have been explained in terms of the habituation of a hypothetical phonetic feature detector sensitive to the repetition of identical transitions. For this reason, the same effect was replicated by capturing the isolated transition with others which were aligned on the same frequency-by-time trajectory as the isolated one. These findings are consistent with the idea that the integration of the transition with the base was affected by the operation of general-purpose auditory processes. This contrasts with the hypothesis that the phonetic mode integrated the dichotic stimuli independently of the auditory mode.
APA, Harvard, Vancouver, ISO, and other styles
28

Nisbet, Robert Stevenson. "Children's Matching of Melodies and Their Visual Representations." Thesis, Griffith University, 1998. http://hdl.handle.net/10072/367105.

Full text
Abstract:
The matching of melodies with their visual representations is predicated on the ability to relate changes of melodic pitch with changes of spatial position, usually in a vertical direction. Previous studies have investigated the matching process in terms of factors such as melodic tonality, contour complexity, presentation rate (in notes per second), modality (visual & auditory) and musical training. This investigation sought to answer a number of questions which arose from reflections on the results of such prior work - questions about musical training and the related notion of musical ability; questions about the role, if any, of mathematical ability, given the graphical nature of the visual materials; questions about the role of the type of visual materials; and questions about the strategies used in the matching process. The investigation was carried out with school children aged 10 and 11 years. The first three experiments examined the effects associated with the nature of the visual materials, along with the effects of ability factors (musical ability, mathematical ability, and simultaneous and successive cognitive processing ability). The last two experiments examined the strategies that children used in the auditory/visual matching process, and whether analytical or holistic processing took precedence during the matching process. Experiment 1 investigated cross-modal (visual-auditory, auditory-visual) and intramodal (visual-visual, auditory-auditory) matching of short melodies and line graphs, and showed that the matching process was influenced by visual/graphical factors as well as auditory/melodic factors in that matching with conventional format graphs (time on the horizontal axis) was superior to matching with non-conventional graphs (time on the vertical axis). It was also found that intramodal tasks were superior to cross-modal tasks, and within these categories, visual-first tasks were superior to auditory-first tasks.
This result was at variance with the claim in the literature (the contour abstraction hypothesis) that visual-first matching tasks were superior to auditory-first matching tasks across intramodal and cross-modal categories. Limited positive effects of musical ability and musical training were observed but a close relationship between the two factors was noted. A positive effect of mathematical ability was revealed also, and evidence relating to type of visual format pointed towards the effect being attributable to mathematics experience, rather than just mathematical ability. The effect of visual factors on the matching process was further investigated in Experiment 2 with the use of music notation. Again, it was demonstrated that the process was influenced by visual as well as auditory factors. Also, the modality effects of the first experiment were observed although performance levels with tasks involving visual materials indicated that matching melodies with music notation was more difficult than graphic notation for the children. Results for the visual-to-melody condition confirmed previous claims that the process of reading music is more complex than the cross-modal transfer of auditory and visual information. Musical ability and music experience were positive factors in the melody-to-visual condition. However, overall, the effects of ability factors were overshadowed by the effects of modality condition and complexity. The fact that musically experienced children did not outperform their inexperienced counterparts suggests that, generally speaking, children who learn music find the task of reading music notation a difficult exercise. The third experiment examined the matching of melodies and their visual representations with respect to abilities in simultaneous and successive cognitive processing. 
Simultaneous cognitive processing was a significant positive factor in the performance of tasks in the two visual-first modality conditions (visual-to-visual and visual-to-melody), whereas successive cognitive processing was a significant positive factor in all four modality conditions. The results indicate that simultaneous processing was involved in the ability to inter-relate features not only of the visual materials, but also of the short melodies in the case of low complexity examples. The effect of successive cognitive processing ability was attributed firstly to the processing of the notes of a melody as elements of a chain-like progression, and secondly to the consecutive presentation of the two stimuli to the children. Experiment 3 also confirmed the assertion made in consideration of the results of Experiments 1 and 2, that music notation is more complex visually than line graphs, and thus requires a higher level of simultaneous processing to abstract the significant perceptual and symbolic features. The features of the melodic and visual materials and their associated processing strategies were the major issues investigated in Experiments 4 and 5. Children's recognition of differences in the materials at the local and global levels was examined with respect to analytical and global processing, and presentation rate. It was found that global processing took precedence in this context, confirming Navon's (1977, 1981) global precedence hypothesis. Global information with respect to overall contour was accessed more easily and more quickly than local information in the form of interval sizes. Attention to these local and global properties was able to be manipulated by mode of instruction (as predicted from the results of Palmer, 1990), such that detection of local differences was reinforced by instructions to act analytically and hindered by instructions to act globally. 
Similarly, detection of global differences was reinforced by instructions to act holistically and hindered by instructions to act analytically, notably at the faster presentation rate. Decreasing the presentation rate led to a reduction of cohesion of local and global melodic information in terms of the children's perception of relative interval sizes. Although the children recognised global-change items reasonably well, they incorrectly reported more differences for global-change items compared to local-change items. The results from this investigation indicate that the form of the visual representation of musical melodies has a significant influence on matching-task performance levels, even for musically trained children. It appears that the more perceptual and the less abstract and symbolic a visual representation system is, the more easily children will be able to perform the melody/visual matching. It is clear that, in general, children find the task of reading music notation difficult and even those who have had two or three years of formal musical experience would not be able to rely on it in their music lessons, rehearsals or performances to any great extent. Reading music notation requires an ability to process symbolic as well as perceptual information, which, in turn, requires a high level of simultaneous cognitive processing ability. Reading notation also requires an ability to judge the size of musical intervals and to match their notated form with the aural interval. The research conducted in my project complements the work of Morrongiello and Roes (1990) in identifying the influence of visual factors (such as graphical format and system of music notation) as well as auditory factors in auditory/visual matching task performance. 
Although the results confirmed the existence of presentation-rate and modality effects established by Balch & Muscatelli (1986), the patterns of results from my project showed that the contour abstraction hypothesis does not necessarily hold in other auditory/visual contexts. It appears that orders of performance of the various modality conditions depend also on the nature and complexities of the auditory and visual materials as well as modality condition. Although positive effects of musical ability and mathematics ability on auditory/visual matching were demonstrated, it was clear that these effects could be attributed to the closely-associated notions of music experience and mathematics experience respectively. Nevertheless, the demonstrated positive effects of simultaneous and successive cognitive processing abilities in the various conditions of auditory/visual matching suggest that the ability factors are more generic than the disciplines, and lend support to the Luria model of cognitive processing (Das, Kirby & Jarman, 1979; Naglieri & Das, 1990). The investigation of further aspects of processing showed that (i) global processing took precedence over analytical processing, thus confirming Navon's (1977, 1981) global precedence hypothesis in the context of auditory/visual matching, and (ii) that attention to the local and global properties could be manipulated by instructions, as predicted from the work of Palmer (1990). A number of areas for further research arise from the results of this investigation. One is the role of instructions (global and local) in the matching of short melodies with music notation, focussing on the conditions which facilitate the recognition of local features such as inter-note intervals.
It has been shown firstly that music notation is more complex visually than line graphs, secondly that children have more difficulty processing local information such as interval details compared to global features such as overall shape, and thirdly that local instructions reinforce the processing of local information. One would expect that levels of performance at melody/notation matching tasks involving local differences would be lower than those reported for melody/graph matching tasks in Experiment 5, but would be more dependent on type of instruction. An extension of that research would be an investigation of the extent to which dimensions such as tempo, rhythm and timbre could be considered to be local/analytic or global/holistic. Following on from this, an investigation could be carried out on the role and possible benefits of instruction and extended practice in the matching of musical intervals (aural) and their notated forms. If found to be beneficial to music students in terms of reading music notation, such instruction and practice may exemplify those activities required in addressing the call in the music education literature (Walker, 1992) to develop the ability to integrate information gained from auditory and visual perception.
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Education
Arts, Education and Law
Full Text
APA, Harvard, Vancouver, ISO, and other styles
29

Ceballo, Charpentier Sebastian Arturo. "Causal manipulations of auditory perception and learning strategies in the mouse auditory cortex." Thesis, Sorbonne université, 2019. http://www.theses.fr/2019SORUS058.

Full text
Abstract:
Through our senses, the brain receives an enormous amount of information. This information needs to be filtered in order to extract the most salient features to guide our behavior. How the brain generates different percepts and how it drives behavior remain two major questions in modern neuroscience. To answer these questions, novel neural engineering approaches are now employed to map, model and finally generate artificial sensory perception with its learned or innate associated behavioral outcome. In this work, using a Go/noGo discrimination task combined with optogenetics to silence auditory cortex during ongoing behavior in mice, we have established that auditory cortex is dispensable for simple frequency discriminations, but necessary for solving a more challenging task. By combining different mapping techniques with light-sculpted optogenetics to activate precisely defined tonotopic fields in auditory cortex, we could elucidate the strategy that mice use to solve this hard task, revealing a delayed frequency discrimination mechanism. In parallel, observations about learning speed and sound-triggered activity in auditory cortex led us to study their interactions and causally test the role of cortical recruitment in associative learning, revealing it as a possible neurophysiological correlate of saliency.
APA, Harvard, Vancouver, ISO, and other styles
30

King, Lisa Charmayne. "Auditory ambience as an information display." Diss., Georgia Institute of Technology, 1994. http://hdl.handle.net/1853/28829.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Rogers, Wendy Laurel. "Cumulative effects in auditory stream segregation." Thesis, McGill University, 1991. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=70309.

Full text
Abstract:
Nine experiments were done to test three theories of auditory stream segregation and to investigate some conditions under which segregated tones re-integrate. In two-part trials, subjects (adults with normal hearing) first heard a segregation-inducing "Induction Sequence" whose effects upon an immediately subsequent "Test Sequence" were measured. The Test Sequence always had tones that alternated rhythmically between two frequencies. Rhythm and total duration of Induction Sequence tones were varied in the first two studies. Similarity of Induction and Test Sequences aided segregation whereas rhythmic predictability and longer tone durations in the Induction Sequence did not. Frequency alternation during the Induction Sequence was not necessary to induce segregation in the Test Sequence. The effects of sudden and gradual changes in lateralization, spatial location and sound level were investigated also. The data suggest that explaining segregation by peripheral processes is inadequate and that, once a distinct percept emerges from an auditory scene, properties derived from the percept (particularly changes) are fed back to control the ongoing analysis of that scene. A neural adaptation to stimuli with constant properties may form part of this analysis.
APA, Harvard, Vancouver, ISO, and other styles
32

Lau, Lai-yi Kitty. "Listeners' perception of stuttering in Cantonese." Click to view the E-thesis via HKUTO, 1994. http://sunzi.lib.hku.hk/hkuto/record/B36208942.

Full text
Abstract:
Thesis (B.Sc)--University of Hong Kong, 1994.
"A dissertation submitted in partial fulfilment of the requirements for the Bachelor of Science (Speech and Hearing Sciences), The University of Hong Kong, 29th April, 1994." Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
33

Huang, Tsan. "Language-specificity in auditory perception of Chinese tones." Connect to this title online, 2004. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1092856661.

Full text
Abstract:
Thesis (Ph. D.)--Ohio State University, 2004.
Title from first page of PDF file. Document formatted into pages; contains xix, 194 p.; also includes graphics. Includes bibliographical references (p. 183-194).
APA, Harvard, Vancouver, ISO, and other styles
34

Radeau, Monique. "Interaction audio-visuelle et modularité = Auditory-visual interaction and modularity." Doctoral thesis, Universite Libre de Bruxelles, 1991. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/212982.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Ver, Hulst Pamela. "Visual and auditory factors facilitating multimodal speech perception." Connect to resource, 2006. http://hdl.handle.net/1811/6629.

Full text
Abstract:
Thesis (Honors)--Ohio State University, 2006.
Title from first page of PDF file. Document formatted into pages: contains 35 p.; also includes graphics. Includes bibliographical references (p. 24-26). Available online via Ohio State University's Knowledge Bank.
APA, Harvard, Vancouver, ISO, and other styles
36

Lee, Catherine. "Perception of synchrony between auditory and visual stimuli." Thesis, University of Ottawa (Canada), 2002. http://hdl.handle.net/10393/6375.

Full text
Abstract:
The literature has fairly consistently reported a difference in how well humans perceive synchrony depending on the order of auditory and visual stimuli. When the auditory stimulus occurs first and the visual stimulus follows, subjects are more sensitive and so perceive asynchrony with a smaller time delay between the stimuli. On the other hand, when the auditory stimulus follows the visual one, subjects are more tolerant and perceive stimuli with larger time delays as synchronous. Thresholds of synchrony perception in these two conditions are thus asymmetrical. The present study attempts to test the Lewkowicz Model, which explains the asymmetrical thresholds as a result of arrival-time differences between auditory and visual stimuli to the brain, such that the visual stimulus takes longer to process than the auditory one. Reaction times to these stimuli were measured to determine the arrival-time difference and plotted against synchrony perception. On the basis of the Lewkowicz Model, we predicted that the reaction-time difference between the two stimuli would correlate with subjective synchrony. The results did not support the Lewkowicz Model. The expected tendency of 30--40 ms of subjective synchrony was not shown. The subjects took, on average, only 7.7 ms to detect asynchrony when the auditory stimulus followed the visual stimulus. That the subjects did not tolerate a greater temporal gap when the auditory stimulus followed, versus when it preceded, the visual stimulus was a result very different from the majority of previous studies. Different factors in perceiving synchrony are discussed in this paper, as well as the application of the research in telecommunications.
APA, Harvard, Vancouver, ISO, and other styles
37

Don, Audrey Jean. "Auditory pattern perception in children with Williams syndrome." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/NQ30287.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Vallejos, Elvira Pérez. "What duplex perception tells us about auditory organisation." Thesis, University of Liverpool, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.428233.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Seton, John Christopher. "A psychophysical investigation of auditory rhythmic beat perception." Thesis, University of York, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.329671.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Hoskin, Robert. "The effect of psychological stress on auditory perception." Thesis, University of Sheffield, 2014. http://etheses.whiterose.ac.uk/6193/.

Full text
Abstract:
Psychological stress appears to precede instances of auditory hallucinations in those vulnerable to them. This suggests that psychological stress acts on the auditory perceptual system in such a way as to encourage the generation of false percepts. This thesis investigated the impact of psychological stress on the perception of emotionally neutral sounds with the aim of identifying a potential mechanism to explain the influence of stress on the occurrence of auditory hallucinations. Two interconnected hypotheses, arising from the theory that stress reduces attentional control and therefore the ability to inhibit distracting information, were tested. An auditory signal detection task was created to test whether stress would reduce the ability of the auditory-perceptual mechanism to accurately detect signals. Instead of reducing discrimination ability, stress was found to bias responding towards reporting a signal in highly anxious individuals. A number of passive oddball tasks were designed to test the hypothesis that stress would increase the distraction caused by emotionally neutral sounds. Once again this hypothesis was largely refuted, with stress appearing to reduce, rather than increase, the impact of distracting auditory information on task performance. On the basis of these findings a revised model of how stress may encourage auditory hallucinations was proposed. This model suggests that, through a strengthening of selective attention, stress may mal-adaptively bias auditory perception towards misinterpreting internal signals as external. Further research proposals, designed to test the predictions of this model, are suggested.
APA, Harvard, Vancouver, ISO, and other styles
41

Vitela, Antonia David. "General Auditory Model of Adaptive Perception of Speech." Diss., The University of Arizona, 2012. http://hdl.handle.net/10150/265343.

Full text
Abstract:
One of the fundamental challenges for communication by speech is the variability in speech production/acoustics. Talkers vary in the size and shape of their vocal tract, in dialect, and in speaking mannerisms. These differences all impact the acoustic output. Despite this lack of invariance in the acoustic signal, listeners can correctly perceive the speech of many different talkers. This ability to adapt one's perception to the particular acoustic structure of a talker has been investigated for over fifty years. The prevailing explanation for this phenomenon is that listeners construct talker-specific representations that can serve as referents for subsequent speech sounds. Specifically, it is thought that listeners may either be creating mappings between acoustics and phonemes or extracting the vocal tract anatomy and shape for each individual talker. This research focuses on an alternative explanation. A separate line of work has demonstrated that much of the variance between talkers' productions can be captured in their neutral vocal tract shape (that is, the average shape of their vocal tract across multiple vowel productions). The current model tested is that listeners compute an average spectrum (long-term average spectrum, LTAS) of a talker's speech and use it as a referent. If this LTAS resembles the acoustic output of the neutral vocal tract shape, the neutral vowel, then it could accommodate some of the talker-based variability. The LTAS model results in four main hypotheses: 1) during carrier phrases, listeners compute an LTAS for the talker; 2) this LTAS resembles the spectrum of the neutral vowel; 3) listeners represent subsequent targets relative to this LTAS referent; 4) such a representation reduces talker-specific acoustic variability. The goal of this project was to further develop and test the predictions arising from these hypotheses. Results suggest that the LTAS model needs to be further investigated, as the simple model proposed does not explain the effects found across all studies.
APA, Harvard, Vancouver, ISO, and other styles
42

Elangovan, Saravanan, and Andrew Stuart. "Auditory Temporal Processing in the Perception of Voicing." Digital Commons @ East Tennessee State University, 2006. https://dc.etsu.edu/etsu-works/1559.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Linke, Annika Carola. "Feature processing in human audition : the role of auditory cortex in perception, short-term memory and imagery." Thesis, University of Cambridge, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.610253.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Cornew, Lauren A. "Emotion processing in the auditory modality: the time course and development of emotional prosody recognition." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2008. http://wwwlib.umi.com/cr/ucsd/fullcit?p3330854.

Full text
Abstract:
Thesis (Ph. D.)--University of California, San Diego, 2008.
Title from first page of PDF file (viewed December 11, 2008). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
45

Leibold, Lori J. "Informational masking in infancy /." Thesis, Connect to this title online; UW restricted, 2004. http://hdl.handle.net/1773/8191.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Vande, Kamp Mark E. "Auditory implicit association tests /." Thesis, Connect to this title online; UW restricted, 2002. http://hdl.handle.net/1773/9119.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Weaver, Lisa L. "Effects of sequential context on the perception of brief tones." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape11/PQDD_0026/NQ50281.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Duan, Mao Li. "The diagnosis and protection of the auditory peripheral system /." Stockholm, 1999. http://diss.kib.ki.se/1999/91-628-3315-4/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Ciocca, Valter. "Perceived continuity of steady-state and glided tones through a louder noise : evidence concerning a trajectory effect." Thesis, McGill University, 1985. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=63303.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Yim, Pui-kwan. "Random gap detection test: normative values for Hong Kong young adults." Click to view the E-thesis via HKU Scholars Hub, 2003. http://lookup.lib.hku.hk/lookup/bib/B38891037.

Full text
Abstract:
Thesis (B.Sc.)--University of Hong Kong, 2003.
"A dissertation submitted in partial fulfilment of the requirements for the Bachelor of Science (Speech and Hearing Sciences), The University of Hong Kong, April 30, 2003." Includes bibliographical references (p. 28-30). Also available in print.
APA, Harvard, Vancouver, ISO, and other styles