A ready-made bibliography on the topic "Auditory source width"

Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles

Select a source type:

See lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Auditory source width".

An "Add to bibliography" button is available next to every work in the bibliography. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a .pdf file and read its abstract online, when these are available in the record's metadata.

Journal articles on the topic "Auditory source width"

1

Becker, Jörg, Markus Sapp, and Frederik Görges. "New approach in measuring auditory source width". Journal of the Acoustical Society of America 105, no. 2 (February 1999): 1190. http://dx.doi.org/10.1121/1.425612.

Full text
Citation styles: APA, Harvard, Vancouver, ISO, etc.
2

Whitmer, William M., Bernhard U. Seeber, and Michael A. Akeroyd. "Apparent auditory source width insensitivity in older hearing-impaired individuals". Journal of the Acoustical Society of America 132, no. 1 (July 2012): 369–79. http://dx.doi.org/10.1121/1.4728200.

3

Morimoto, Masayuki, and Kazuhiro Iida. "A practical evaluation method of auditory source width in concert halls". Journal of the Acoustical Society of Japan (E) 16, no. 2 (1995): 59–69. http://dx.doi.org/10.1250/ast.16.59.

4

Whitmer, William M., Bernhard U. Seeber, and Michael A. Akeroyd. "The perception of apparent auditory source width in hearing-impaired adults". Journal of the Acoustical Society of America 135, no. 6 (June 2014): 3548–59. http://dx.doi.org/10.1121/1.4875575.

5

Morimoto, Masayuki, Haruki Setoyama, and Kazuhiro Iida. "Consistent physical measures of auditory source width for various frequency components of reflections". Journal of the Acoustical Society of America 100, no. 4 (October 1996): 2802. http://dx.doi.org/10.1121/1.416538.

6

Kim, Sungyoung, and Hidetaka Imamura. "An assessment of a spatial ear training program for perceived auditory source width". Journal of the Acoustical Society of America 142, no. 2 (August 2017): EL201–EL204. http://dx.doi.org/10.1121/1.4998185.

7

Morimoto, Masayuki, and Mariko Watanabe. "Directional dependence of the change of auditory source width by very short time-delay reflections". Journal of the Acoustical Society of America 103, no. 5 (May 1998): 2996–97. http://dx.doi.org/10.1121/1.421715.

8

Mason, Russell, Tim Brookes, and Francis Rumsey. "Evaluation of a model of auditory source width based on the interaural cross-correlation coefficient". Journal of the Acoustical Society of America 116, no. 4 (October 2004): 2475. http://dx.doi.org/10.1121/1.4784888.

9

Morimoto, M., K. Iida, and Y. Furue. "Relation between auditory source width in various sound fields and degree of interaural cross-correlation". Applied Acoustics 38, no. 2–4 (1993): 291–301. http://dx.doi.org/10.1016/0003-682x(93)90057-d.

10

Morimoto, Masayuki, and Kazuhiro Iida. "Appropriate frequency bandwidth in measuring interaural cross-correlation as a physical measure of auditory source width". Acoustical Science and Technology 26, no. 2 (2005): 179–84. http://dx.doi.org/10.1250/ast.26.179.

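Several of the journal articles above (for example, entries 9 and 10) treat the interaural cross-correlation coefficient (IACC) as a physical measure of auditory source width; perceived width generally grows as (1 - IACC) increases. As a rough illustration, the conventional definition, the peak of the normalized interaural cross-correlation over lags of roughly plus or minus 1 ms, can be sketched in NumPy. All function and variable names below are illustrative and not taken from any of the cited papers.

```python
import numpy as np

def iacc(left, right, fs, max_lag_ms=1.0):
    """Interaural cross-correlation coefficient: the peak of the
    normalized cross-correlation of the two ear signals over lags
    within +/- max_lag_ms (1 ms is the conventional window)."""
    max_lag = int(fs * max_lag_ms / 1000.0)
    # Normalization: square root of the product of the signal energies.
    norm = np.sqrt(np.sum(left**2) * np.sum(right**2))
    # Full cross-correlation, then restrict it to the lag window.
    full = np.correlate(left, right, mode="full")
    center = len(left) - 1  # zero-lag index for equal-length inputs
    window = full[center - max_lag : center + max_lag + 1]
    return np.max(np.abs(window)) / norm

fs = 48000
rng = np.random.default_rng(0)
s = rng.standard_normal(fs // 10)
print(iacc(s, s, fs))                              # identical ear signals: IACC = 1
print(iacc(s, rng.standard_normal(fs // 10), fs))  # independent noise: IACC near 0
```

In the terms used by Morimoto and Iida, a lower IACC (a larger 1 - IACC) corresponds to a wider auditory source.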

Doctoral dissertations on the topic "Auditory source width"

1

Durak, Nurcan. "Semantic Video Modeling And Retrieval With Visual, Auditory, Textual Sources". Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/12605438/index.pdf.

Abstract:
The studies on content-based video indexing and retrieval aim at accessing video content from different aspects more efficiently and effectively. Most of the studies have concentrated on the visual component of video content in modeling and retrieving the video content. Beside visual component, much valuable information is also carried in other media components, such as superimposed text, closed captions, audio, and speech that accompany the pictorial component. In this study, semantic content of video is modeled using visual, auditory, and textual components. In the visual domain, visual events, visual objects, and spatial characteristics of visual objects are extracted. In the auditory domain, auditory events and auditory objects are extracted. In textual domain, speech transcripts and visible texts are considered. With our proposed model, users can access video content from different aspects and get desired information more quickly. Beside multimodality, our model is constituted on semantic hierarchies that enable querying the video content at different semantic levels. There are sequence-scene hierarchies in visual domain, background-foreground hierarchies in auditory domain, and subject hierarchies in speech domain. Presented model has been implemented and multimodal content queries, hierarchical queries, fuzzy spatial queries, fuzzy regional queries, fuzzy spatio-temporal queries, and temporal queries have been applied on video content successfully.
2

Best, Virginia Ann. "Spatial Hearing with Simultaneous Sound Sources: A Psychophysical Investigation". University of Sydney. Medicine, 2004. http://hdl.handle.net/2123/576.

Abstract:
This thesis provides an overview of work conducted to investigate human spatial hearing in situations involving multiple concurrent sound sources. Much is known about spatial hearing with single sound sources, including the acoustic cues to source location and the accuracy of localisation under different conditions. However, more recently interest has grown in the behaviour of listeners in more complex environments. Concurrent sound sources pose a particularly difficult problem for the auditory system, as their identities and locations must be extracted from a common set of sensory receptors and shared computational machinery. It is clear that humans have a rich perception of their auditory world, but just how concurrent sounds are processed, and how accurately, are issues that are poorly understood. This work attempts to fill a gap in our understanding by systematically examining spatial resolution with multiple sound sources. A series of psychophysical experiments was conducted on listeners with normal hearing to measure performance in spatial localisation and discrimination tasks involving more than one source. The general approach was to present sources that overlapped in both frequency and time in order to observe performance in the most challenging of situations. Furthermore, the role of two primary sets of location cues in concurrent source listening was probed by examining performance in different spatial dimensions. The binaural cues arise due to the separation of the two ears, and provide information about the lateral position of sound sources. The spectral cues result from location-dependent filtering by the head and pinnae, and allow vertical and front-rear auditory discrimination. Two sets of experiments are described that employed relatively simple broadband noise stimuli. In the first of these, two-point discrimination thresholds were measured using simultaneous noise bursts. 
It was found that the pair could be resolved only if a binaural difference was present; spectral cues did not appear to be sufficient. In the second set of experiments, the two stimuli were made distinguishable on the basis of their temporal envelopes, and the localisation of a designated target source was directly examined. Remarkably robust localisation was observed, despite the simultaneous masker, and both binaural and spectral cues appeared to be of use in this case. Small but persistent errors were observed, which in the lateral dimension represented a systematic shift away from the location of the masker. The errors can be explained by interference in the processing of the different location cues. Overall these experiments demonstrated that the spatial perception of concurrent sound sources is highly dependent on stimulus characteristics and configurations. This suggests that the underlying spatial representations are limited by the accuracy with which acoustic spatial cues can be extracted from a mixed signal. Three sets of experiments are then described that examined spatial performance with speech, a complex natural sound. The first measured how well speech is localised in isolation. This work demonstrated that speech contains high-frequency energy that is essential for accurate three-dimensional localisation. In the second set of experiments, spatial resolution for concurrent monosyllabic words was examined using similar approaches to those used for the concurrent noise experiments. It was found that resolution for concurrent speech stimuli was similar to resolution for concurrent noise stimuli. Importantly, listeners were limited in their ability to concurrently process the location-dependent spectral cues associated with two brief speech sources. In the final set of experiments, the role of spatial hearing was examined in a more relevant setting containing concurrent streams of sentence speech. 
It has long been known that binaural differences can aid segregation and enhance selective attention in such situations. The results presented here confirmed this finding and extended it to show that the spectral cues associated with different locations can also contribute. As a whole, this work provides an in-depth examination of spatial performance in concurrent source situations and delineates some of the limitations of this process. In general, spatial accuracy with concurrent sources is poorer than with single sound sources, as both binaural and spectral cues are subject to interference. Nonetheless, binaural cues are quite robust for representing concurrent source locations, and spectral cues can enhance spatial listening in many situations. The findings also highlight the intricate relationship that exists between spatial hearing, auditory object processing, and the allocation of attention in complex environments.
3

Best, Virginia Ann. "Spatial Hearing with Simultaneous Sound Sources: A Psychophysical Investigation". Thesis, The University of Sydney, 2004. http://hdl.handle.net/2123/576.

4

Arthi, S. "Auditory Timbre and Spatialisation: Signal Analysis and Perception of Source Widening". Thesis, 2022. https://etd.iisc.ac.in/handle/2005/5988.

Abstract:
In this work, auditory perception of source widening is examined in the context of different source signal timbre. Perception of widening of source or auditory source width (ASW) arises in three cases: (i) In the presence of reverberation, which has been referred to as reverberant source width (RSW); (ii) Distributed sources such as an ensemble, where multiple sources are physically placed widely, referred as ensemble source width (ESW); and (iii) In hearing disabilities, where localisation is poor in the presence of interfering sources and hence a widened or diffused source width (DSW) is perceived. Though the physics of the problem is different in each of the above cases, we observe that the perception of source widening occurs in all the three cases. We also show analytically that in the case of localised, reverb and ensemble sources located about a particular direction, binaural cross-correlation has interesting properties: (i) for localised source, cross-correlation is energy compact, followed by (ii) reverb source and (iii) ESW has highly dispersed cross-correlation compared to localised and reverb sources with the same angle of arrival and degree of decorrelation as reverb sources. Traditionally, (1-IACC) has been used as a measure for RSW and in the literature this measure is used for ESW also. We propose a combination of timbre-independent phase-based angular measure for the physical extent of the sources, localising all or many individual sources using HRIR correlation functions and timbre dependent mean time-bandwidth energy (MTBE) measure for relative perceptual weighting to compare ensemble of different timbres. This analysis gives rise to possible applications in ensemble rendering and insights into improving hearing aids for hearing impaired listeners. Frequency sensitivity to change in IACC, and hence ASW, has been studied using binaural presentation of modulated sinusoids. 
In this work, we observe a similar sensitivity to ESW by presenting listeners with spatially wide sources using narrow band noise signals. We observe that frequency sensitivity of ASW and ESW are similar. We also study bandwidth sensitivity and observe that with increase in bandwidth, the perceived width of the ensemble increases. We simulate ensemble-like music signals of different spectro-temporal distribution to probe the timbre dependency of human perception. The listeners are asked to rate the ESW of the simulated distributed sources. Broadly, music signals can be classified as sustained instruments, partially sustained, partially transient and predominantly transient signals. Low frequency sustained instruments give rise to a wider percept than semi-sustained and transient signals. We also explore the difference between discrete and continuous spectral sources for spatialisation. We observe that continuous spectra do give rise to stable mono- tonic width perception with change in physical width. On the other hand, in the case of discrete spectra, we do not perceive a stable monotonic perception. We developed a MUSHRA like (Multiple stimulus hidden reference and anchor) listening methodology for estimating the accuracy of direction perception of the target signals with and without interference by normal listeners. We observe that the accuracy of direction perception of the target is high without interference. In the presence of interference, we observe that the perceived target direction is away from that of interference, thus increasing the perceived angular separation. This perceptual effect may be used in the design of binaural hearing-aids to enhance binaural perception of localised sources in the presence of interference. Overall, in this work, we study the perceptual interaction of signal timbre and spatialisation in the perception of ensemble source width. 
We study the sensitivity of several parameters like frequency, bandwidth, spectro-temporal energy distribution and role of fine AM-FM parameters.
5

Lin, Chi-Wen, and 林棋文. "Examination on the relationship between apparent source width and auditory evoked potential from the cerebral hemispheres". Thesis, 2012. http://ndltd.ncl.edu.tw/handle/06872761144752043108.

Abstract:
Master's thesis
Chaoyang University of Technology
Graduate Institute of Architecture and Urban Design
100
Morimoto (1989) proposed in a spatial impression study that listening envelopment (LEV) and apparent source width (ASW) were two essential components that determine the spatial sense of a concert hall. While the acoustic impression of ASW was usually composed of direct sound and first reflection (Morimoto, 1989), the LEV was formed by response element. The auditory path through which an acoustic signal from the stage was transmitted to the listener’s brain proposed by Ando (1985) demonstrated in detail how the central nervous system processes the nerve impulse formed in the auditory nerve ending. The characteristic response in the process during which the nerve processes the acoustic signals can be observed and summarized using the cerebral cortex brainwaves. By modifying the magnitude of interaural cross-correlation function (IACC) of the space, the study investigated the changes in different indoor ASW responses and slow vertex response (SVR) caused by apparent acoustic stimulation and compared the difference among these changes. The study also tried to construct a study method with an objective physiological acoustic design. According to the study result: 1. By modifying the IACC in the psychological experiment, quantitative psychological measurements of ASW were as follows: ASW(IACC=0.56) = 0.45 > ASW(IACC=0.68) = 0.03 > ASW(IACC=0.35) = -0.16 > ASW(IACC=0.81) = -0.32, demonstrating a non-linear relationship. 2. The comparison result between changes in brainwaves suggested that within the range from ASW(-0.32) to ASW(0.45), the difference in brainwave amplitude at A (P2-N2) decreased with the increased ASW; while the duration of N2 latency of the left hemisphere shortened with the increased ASW.
6

Chen, Zhuo. "Single Channel auditory source separation with neural network". Thesis, 2017. https://doi.org/10.7916/D8W09C8N.

Abstract:
Although distinguishing different sounds in noisy environment is a relative easy task for human, source separation has long been extremely difficult in audio signal processing. The problem is challenging for three reasons: the large variety of sound type, the abundant mixing conditions and the unclear mechanism to distinguish sources, especially for similar sounds. In recent years, the neural network based methods achieved impressive successes in various problems, including the speech enhancement, where the task is to separate the clean speech out of the noise mixture. However, the current deep learning based source separator does not perform well on real recorded noisy speech, and more importantly, is not applicable in a more general source separation scenario such as overlapped speech. In this thesis, we firstly propose extensions for the current mask learning network, for the problem of speech enhancement, to fix the scale mismatch problem which is usually occurred in real recording audio. We solve this problem by combining two additional restoration layers in the existing mask learning network. We also proposed a residual learning architecture for the speech enhancement, further improving the network generalization under different recording conditions. We evaluate the proposed speech enhancement models on CHiME 3 data. Without retraining the acoustic model, the best bi-direction LSTM with residue connections yields 25.13% relative WER reduction on real data and 34.03% WER on simulated data. Then we propose a novel neural network based model called “deep clustering” for more general source separation tasks. We train a deep network to assign contrastive embedding vectors to each time-frequency region of the spectrogram in order to implicitly predict the segmentation labels of the target spectrogram from the input mixtures. 
This yields a deep network-based analogue to spectral clustering, in that the embeddings form a low-rank pairwise affinity matrix that approximates the ideal affinity matrix, while enabling much faster performance. At test time, the clustering step “decodes” the segmentation implicit in the embeddings by optimizing K-means with respect to the unknown assignments. Experiments on single channel mixtures from multiple speakers show that a speaker-independent model trained on two-speaker and three speakers mixtures can improve signal quality for mixtures of held-out speakers by an average over 10dB. We then propose an extension for deep clustering named “deep attractor” network that allows the system to perform efficient end-to-end training. In the proposed model, attractor points for each source are firstly created the acoustic signals which pull together the time-frequency bins corresponding to each source by finding the centroids of the sources in the embedding space, which are subsequently used to determine the similarity of each bin in the mixture to each source. The network is then trained to minimize the reconstruction error of each source by optimizing the embeddings. We showed that this frame work can achieve even better results. Lastly, we introduce two applications of the proposed models, in singing voice separation and the smart hearing aid device. For the former, a multi-task architecture is proposed, which combines the deep clustering and the classification based network. And a new state of the art separation result was achieved, where the signal to noise ratio was improved by 11.1dB on music and 7.9dB on singing voice. In the application of smart hearing aid device, we combine the neural decoding with the separation network. The system firstly decodes the user’s attention, which is further used to guide the separator for the targeting source. 
Both objective study and subjective study show the proposed system can accurately decode the attention and significantly improve the user experience.
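Entry 6 above summarizes "deep clustering", in which a network assigns an embedding vector to every time-frequency bin and sources are recovered at test time by running K-means over those embeddings, turning cluster labels into binary spectrogram masks. The decoding step can be sketched independently of the network; the code below is an illustrative reconstruction under assumed shapes (one row per flattened T-F bin), not code from the thesis.

```python
import numpy as np

def kmeans_masks(embeddings, n_src, n_iter=50, seed=0):
    """Cluster T-F embedding vectors (one per row) into n_src groups
    and return one boolean mask per source over the flattened bins."""
    rng = np.random.default_rng(seed)
    # Initialize centroids from randomly chosen embedding rows.
    centers = embeddings[rng.choice(len(embeddings), n_src, replace=False)]
    for _ in range(n_iter):
        # Assign every bin to its nearest centroid.
        dists = np.linalg.norm(embeddings[:, None] - centers[None], axis=-1)
        labels = dists.argmin(axis=1)
        # Recompute each centroid as the mean of its cluster.
        for k in range(n_src):
            if np.any(labels == k):
                centers[k] = embeddings[labels == k].mean(axis=0)
    return [labels == k for k in range(n_src)]

# Toy check: two well-separated embedding clouds split into two clean masks.
rng = np.random.default_rng(1)
emb = np.vstack([rng.normal(0.0, 0.1, (100, 20)),
                 rng.normal(3.0, 0.1, (100, 20))])
masks = kmeans_masks(emb, 2)
print(sorted(int(m.sum()) for m in masks))
```

In the separator itself each mask would be applied to the mixture spectrogram to reconstruct one source; here the masks simply partition the bins.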
7

Chen, Shu-Mei, and 陳舒玫. "The source monitoring for emotional information in schizophrenia with auditory hallucination". Thesis, 2016. http://ndltd.ncl.edu.tw/handle/93849156010688455604.

Abstract:
Master's thesis
Chung Yuan Christian University
Graduate Institute of Psychology
104
Background and purpose. Frith has claimed that auditory hallucinations are due to the misattribution of one’s internal cognitive operations to external events. The present study attempted to extend previous research on source monitoring deficits in schizophrenia patients with hallucination. We hypothesized that patients would show a bias to attribute self-generated words to an external source, especially when the stimulus can trigger negative emotions. Furthermore, Brébion et al also found that schizophrenia patients with hallucinations were more prone to report that spoken items had been presented as pictures, compared with those without hallucination. This result is consistent with Frith’s theory and suggest that hallucinations are associated with confusion between imagined and perceived pictures. Methods. Twenty-six schizophrenia patients with auditory hallucination (AHs), Twenty-four schizophrenia patients without auditory hallucination (NAHs), and Twenty-two healthy subjects participated in Experiment 1. Participants completed internal-external task, in which participants and experimenter were instructed to verbally provide 15 semantic words (including positive/neutral/negative words). Then, the experimenter read aloud a word list containing 30 words the experimenter and participants had generated and 30 new words. The participants was required to distinguish each item from old or new, and identify the source as self-generated or experimenter generated. Twenty-one AHs, Twenty-two NAHs, and Twenty-two healthy subjects participated in Experiment 2. Participants completed an external-external source-monitoring task. Thirty items were produced by the computer, either presented as pictures, or as visual words. After that, the experimenter read aloud a word list including the former 30 produced target items and 30 new ones. The participants were required to distinguish each item from old or new, and identify the source as pictures items or sematic items. Results. 
The AHs reported significant higher emotional distress (i.e. depression and anxiety) than the other two groups. However, in the two kinds of the source memory task, there were no differences between the AHs, NAHs and the healthy groups in regard to memory accuracy and attributional bias. Even if adding the emotional terms to the tasks we did not discover the significantly increased number of source attributional bias. Discussion. The previous research has consistently shown that schizophrenia patients with auditory hallucination performed poorly on source memory task, compared with healthy ones. However, the present study did not observe source monitoring deficit in schizophrenia patients with AHs. The present study suggests that other symptoms (e.g., delusions) may also influence patients’ source monitoring performance. The further research could clarify that whether delusions have impact on attributional bias in schizophrenia patients with and without AHs.
8

Ying-Jia Huang, and 黃盈嘉. "A New Chip Design of Auditory Source Localization Based on AMDF Algorithm with Folding Architecture". Thesis, 2011. http://ndltd.ncl.edu.tw/handle/72496030097797412174.

9

Liu, Po-Ting Bertram, and 劉柏廷. "The effect of fully correlated sources with spatial extents on spatial filtering on the MEG data - A study of Auditory Steady-State Response". Thesis, 2019. http://ndltd.ncl.edu.tw/handle/xuef87.

Abstract:
Master's thesis
National Chiao Tung University
Master's Program in Sound and Music Innovative Technologies, College of Engineering
107
This thesis focus on the problem in source imaging of auditory steady-state responses in MEG signals. When an audio stimulus is simultaneously presented to the ears of a subject, the brain waves recorded from the subject often have fully correlated sources. Conventional spatial filters cannot accurately estimate correlated sources because it’s assumed that all sources are not cross-correlated. The method in this thesis is dual-core beamformer (DCBF). There are some papers discussing the limitation of DCBF, but the effect of spatial extent on the performance of DCBF remains unknown. The effects of noise types of background sources, and of spatial extents of correlated sources on DCBF localizers are investigated in this thesis. In results, localizer-NAI is better than localizer-K. When the standard deviations of spatial extents of correlated sources are less than 5 mm, localizer-NAI is not affected. But localizer-K only works well when the standard deviations of spatial extents of correlated sources are less than 1 mm. Furthermore, localizer-NAI has much smaller range of artifacts, which means localizer-NAI can suppress the estimation of other source locations than localizer-K.

Books on the topic "Auditory source width"

1

Minobrnauki, Rossiyskoy. Finance and Financial analysis. ru: INFRA-M Academic Publishing LLC., 2021. http://dx.doi.org/10.12737/1242227.

Abstract:
The textbook systematizes basic knowledge in the field of finance, financial analysis and financial management, presented in their direct relationship and significance from the point of view of evaluation, diagnosis, forecasting and monitoring of the continuity of the organization's activities. It includes seven chapters grouped into three sections. The first section is devoted to the theoretical foundations of the organization's financial management, stakeholders and sources of the organization's activities. The second section discusses the basics of financial analysis, providing knowledge of the main directions, information base and methods of financial analysis, as well as allowing them to be applied reasonably, calculate and evaluate analytical indicators, determine the impact of globalization processes, various macro-and microfactors on the financial condition of the organization. The third section contains the basics of financial management, providing an understanding of the essence of the financial mechanism of the organization and algorithms for justifying decisions in the field of financial management. It complies with the federal state educational standards of higher education of the latest generation and provides the formation of basic competencies in the field of finance, financial management and financial analysis. For bachelor's, specialist's and master's students studying in the field of Economics, the system of additional professional education, training centers for advanced training of auditors and other financial market specialists, as well as for individual preparation of applicants for qualification certification and passing qualification exams.
2

Enhancing communication skills of deaf & hard of hearing children in the mainstream. Clifton Park, NY: Thomson Delmar Learning, 2006.

3

Smith, Leslie S. Audition. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780199674923.003.0015.

Abstract:
Audition is the ability to sense and interpret pressure waves within a range of frequencies. The system tries to solve the what and where tasks: what is the sound source (interpretation), and where is it (location)? Auditory environments vary in the number and location of sound sources, their level and in the degree of reverberation, yet biological systems have robust techniques that work over a large range of conditions. We briefly review the auditory system, including the auditory brainstem and mid-brain major components, attempting to connect its structure with the problems to be solved: locating some sounds, and interpreting important ones. Systems using knowledge of animal auditory processing are discussed, including both CPU-based and Neuromorphic approaches, starting from the auditory filterbank, and including silicon cochleae: feature (auditory landmark) based systems are considered. The level of performance associated with animal auditory systems has not been achieved, and we discuss ways forward.
4

Tenney, James. The Several Dimensions of Pitch. Edited by Larry Polansky, Lauren Pratt, Robert Wannamaker, and Michael Winter. University of Illinois Press, 2017. http://dx.doi.org/10.5406/illinois/9780252038723.003.0017.

Abstract:
James Tenney explains the different mechanisms behind the simultaneous and consecutive relationships between pitches using ideas from evolution and neurocognition. He suggests that there are two different aspects of pitch perception and that one of those aspects can also be thought of as multidimensional. In considering such fundamental questions regarding the nature of auditory perception, Tenney refers to the evolution of hearing and considers two complementary if not contradictory abilities: to distinguish between or among sounds issuing from different sound sources, and to recognize when two or more sounds—though different—actually arise from a single sound source. The first mechanism is the basis for what Tenney calls the contour aspect of pitch perception. The other aspect of pitch perception has to do with the temporal ordering of the neural information. Tenney concludes by proposing a psychoacoustic explanation for contour formation based on the ear's temporal processing.
5

Toop, David. Sinister Resonance. The Continuum International Publishing Group, 2010. http://dx.doi.org/10.5040/9781501382864.

Abstract:
Sinister Resonance begins with the premise that sound is a haunting, a ghost, a presence whose location is ambiguous and whose existence is transitory. The intangibility of sound is uncanny – a phenomenal presence in the head, at its point of source and all around. The close listener is like a medium who draws out substance from that which is not entirely there. The history of listening must be constructed from the narratives of myth and fiction, ‘silent’ arts such as painting, the resonance of architecture, auditory artefacts and nature. In such contexts, sound often functions as a metaphor for mystical revelation, forbidden desires, formlessness, the unknown, and the unconscious. As if reading a map of hitherto unexplored territory, Sinister Resonance deciphers sounds and silences buried within the ghostly horrors of Arthur Machen, Shirley Jackson, Charles Dickens, M.R. James and Edgar Allan Poe, Dutch genre painting from Rembrandt to Vermeer, artists as diverse as Francis Bacon and Juan Munoz, and the writing of many modernist authors including Virginia Woolf, Samuel Beckett, and James Joyce.
6

Worthington, Sarah, and Sinéad Agnew. Sealy & Worthington's Text, Cases, and Materials in Company Law. 12th ed. Oxford University Press, 2022. http://dx.doi.org/10.1093/he/9780198830092.001.0001.

Abstract:
Sealy & Worthington’s Cases and Materials in Company Law clearly explains the fundamental structure of company law and provides a concise introduction to each different aspect of the subject. The materials are carefully selected and well supported by commentary so that the logic of the doctrinal or policy argument is unambiguously laid out. Notes and questions appear periodically throughout the text to provoke persistent analysis and debate, and to enable students to test their understanding of the issues as the topics unfold. This text covers a wide range of sources, and provides intelligent and thought-provoking commentary in a succinct format. It is invaluable to all those who need vital materials and expert observations on company law in one volume. This twelfth edition brings: improved chapter order and location of materials; the incorporation of changes necessitated by Brexit; complete updating of statutory, regulatory and case law materials, including by the Corporate Governance and Insolvency Act 2020 and the many changes and additions to corporate governance codes requiring ‘apply and explain’ and ‘comply or explain’ adherence; major rewriting of Chapter 3 (Corporate Activity and Legal Liability) in the light of significant Supreme Court cases; expansion of Chapter 6 (Corporate Governance) and Chapter 9 (Company Auditors), along with additional coverage of shareholder remedies (Chapter 8), including coverage of Sevilleja v Marex Financial Ltd (2020, SC) and new cases on statutory derivative actions; and additional coverage of insolvency issues.
7

Lee, James, James Mahshie, Mary June Moseley, and Susanne M. Scott. Enhancing Communication Skills of Deaf and Hard of Hearing Children in the Mainstream. Singular, 2005.


Book chapters on the topic "Auditory source width"

1

Altman, J. A., L. M. Kotelenko, and S. F. Vaitulevich. "Disorders of Sound Source Localization and Auditory Evoked Potentials in Patients with Temporal Epilepsy". In Acoustical Signal Processing in the Central Auditory System, 589–98. Boston, MA: Springer US, 1997. http://dx.doi.org/10.1007/978-1-4419-8712-9_55.

2

Whitmer, William M., Bernhard U. Seeber, and Michael A. Akeroyd. "Measuring the Apparent Width of Auditory Sources in Normal and Impaired Hearing". In Advances in Experimental Medicine and Biology, 303–10. New York, NY: Springer New York, 2013. http://dx.doi.org/10.1007/978-1-4614-1590-9_34.

3

Rogers, R. L., A. C. Papanicolaou, S. Baumann, C. Saydjari, and H. M. Eisenberg. "Nonstationary Dynamics of Sequential Magnetic Dipole Source Changes Associated with N100 Auditory Evoked Responses". In Advances in Biomagnetism, 105–8. Boston, MA: Springer US, 1989. http://dx.doi.org/10.1007/978-1-4613-0581-1_13.

4

Yao, Chiung. "Contribution of Precisely Apparent Source Width to Auditory Spaciousness". In Soundscape Semiotics - Localisation and Categorisation. InTech, 2014. http://dx.doi.org/10.5772/56616.

5

Huron, David. "Sources and Images". In Voice Leading. The MIT Press, 2016. http://dx.doi.org/10.7551/mitpress/9780262034852.003.0003.

Abstract:
An introduction to the perception of sound is given, with special emphasis on topics useful for understanding the organization of music. The chapter covers essential concepts in acoustics and auditory perception, including basic auditory anatomy and physiology. Core concepts are defined such as vibrational mode, pure tone, complex tone, partial, harmonic, cochlea, basilar membrane, resolved partial, auditory image, auditory stream, acoustic scene, auditory scene, and auditory scene analysis.
6

Hari, Riitta, and Aina Puce. "Auditory Responses". In MEG - EEG Primer, edited by Riitta Hari and Aina Puce, 260—C13P77. 2nd ed. Oxford University Press, New York, 2023. http://dx.doi.org/10.1093/med/9780197542187.003.0013.

Abstract:
Abstract This chapter—devoted to MEG/EEG recordings of the human auditory system—begins by briefly outlining the multiple types of auditory responses that can be recorded from the human brainstem and cerebral cortex. After introducing important experimental variables, such as hearing threshold and stimulus parameters that will affect neurophysiological responses, the chapter continues with a description of experimental setups and typical waveforms and sources of auditory brainstem responses, middle-latency auditory-evoked responses, long-latency auditory-evoked responses, and auditory steady-state responses. Frequency tagging, by stimulating the two ears with steady-state stimuli of slightly different repetition rates, is then introduced as a means to explore binaural interaction. The chapter also discusses some historical controversies of the generation of long-latency auditory evoked responses.
7

Ortiz de Gortari, Angelica B., and Mark D. Griffiths. "Auditory Experiences in Game Transfer Phenomena". In Gamification, 1329–45. IGI Global, 2015. http://dx.doi.org/10.4018/978-1-4666-8200-9.ch067.

Abstract:
This study investigated gamers' auditory experiences as after-effects of playing. This was done by classifying, quantifying, and analysing 192 experiences from 155 gamers collected from online videogame forums. The gamers' experiences were classified as: (i) involuntary auditory imagery (e.g., hearing the music, sounds or voices from the game), (ii) inner speech (e.g., completing phrases in the mind), (iii) auditory misperceptions (e.g., confusing real-life sounds with videogame sounds), and (iv) multisensorial auditory experiences (e.g., hearing music while involuntarily moving the fingers). Gamers heard auditory cues from the game in their heads, in their ears, but also coming from external sources. Occasionally, the vividness of the sound evoked thoughts and emotions that resulted in behaviours and coping strategies. The psychosocial implications of the gamers' auditory experiences are discussed. This study contributes to the understanding of the effects of auditory features in videogames, and to the phenomenology of non-volitional auditory experiences.
8

Bautista Calero del Castillo, Juan, Alberto Guillén Martínez, and Francisco García Purriños. "Precocious Auditory Evoked Potential Recording with Free-Field Stimulus". In Human Auditory System - Function and Disorders [Working Title]. IntechOpen, 2022. http://dx.doi.org/10.5772/intechopen.102569.

Abstract:
The aim of this study is to determine the thresholds of normality in the recording of precocious auditory evoked potentials with free-field stimulation and to compare them with conventional stimulation with insertion headphones. For this purpose, we have carried out a case series study of children with normal hearing stimulated with insertion headphones, who underwent Auditory Brainstem Response (ABR) and Auditory Steady-State Response (ASSR) with free-field stimuli. Fifty-four ears with normal criteria of children between 6 months and 24 months of age were assessed. The latencies found with free-field stimulation in ABR were significantly longer than the latencies with insert earphone stimulation (p<0.05), and no differences were found in the inter-latencies. No significant differences were found in the thresholds of the ASSR response. We conclude that the ABR thresholds obtained in the free-field correspond to the delay due to the distance of the sound source to the eardrum and, therefore, are superimposable, being applicable to patients where it is not possible to stimulate with insert phones.
9

van Zanten, Gijsbert, Huib Versnel, Nathan van der Stoep, Wiepke Koopmans, and Alex Hoetink. "Short-Latency Evoked Potentials of the Human Auditory System". In Human Auditory System - Function and Disorders [Working Title]. IntechOpen, 2022. http://dx.doi.org/10.5772/intechopen.102039.

Abstract:
Auditory Brainstem Responses (ABR) are short-latency electric potentials from the auditory nervous system that can be evoked by presenting transient acoustic stimuli to the ear. Sources of the ABR are the auditory nerve and brainstem auditory nuclei. Clinical application of ABRs includes identification of the site of lesion in retrocochlear hearing loss, establishing functional integrity of the auditory nerve, and objective audiometry. Recording of ABR requires a measurement setup with a high-quality amplifier with adequate filtering and low skin-electrode impedance to reduce non-physiological interference. Furthermore, signal averaging and artifact rejection are essential tools for obtaining a good signal-to-noise ratio. Comparing latencies for different peaks at different stimulus intensities allows the determination of hearing threshold, location of the site of lesion, and establishment of neural integrity. Audiological assessment of infants who are referred after failing hearing screening relies on accurate estimation of hearing thresholds. Frequency-specific ABR using tone-burst stimuli is a clinically feasible method for this. Appropriate correction factors should be applied to estimate the hearing threshold from the ABR threshold. Whenever possible, obtained thresholds should be confirmed with behavioral testing. The Binaural Interaction Component of the ABR provides important information regarding binaural processing in the brainstem.
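The role of signal averaging and artifact rejection described above can be sketched in a few lines. This is an illustrative fragment rather than clinical code, and the ±20 µV rejection threshold is an assumed placeholder:

```python
import numpy as np

def average_abr(sweeps, reject_uv=20.0):
    """Average single-trial ABR sweeps after peak-amplitude artifact rejection.

    sweeps    : (n_sweeps, n_samples) array of epochs in microvolts
    reject_uv : sweeps whose absolute peak exceeds this value are discarded
    """
    keep = np.max(np.abs(sweeps), axis=1) <= reject_uv
    accepted = sweeps[keep]
    if accepted.shape[0] == 0:
        raise ValueError("all sweeps rejected; check electrode impedance")
    # Averaging N accepted sweeps improves SNR by roughly sqrt(N), since the
    # stimulus-locked response adds coherently while background noise does not.
    return accepted.mean(axis=0), int(keep.sum())

# Toy example: 100 noisy sweeps, 5 contaminated by large artifacts
rng = np.random.default_rng(0)
sweeps = 0.5 * rng.standard_normal((100, 256))
sweeps[:5] += 100.0
avg, n_accepted = average_abr(sweeps)
```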
10

Craig, Tom K. J., and Mar Rus-Calafell. "AVATAR therapy". In Psychotic Disorders, edited by Elyn R. Saks, 565–72. Oxford University Press, 2020. http://dx.doi.org/10.1093/med/9780190653279.003.0063.

Abstract:
AVATAR therapy is a newly developed treatment for auditory verbal hallucinations (AVH) that uses virtual reality technology to allow a three-way interaction between therapist, participant, and the entity the person believes is the source of their distressing auditory verbal hallucinations with the aim of reducing the perceived power and hostility of the persecutory voices. Two controlled trials, one a preliminary proof of concept and the second a larger powered clinical trial comparing AVATAR therapy with a supportive counseling control intervention, have demonstrated effectiveness in terms of reduced frequency, omnipotence, and associated distress of targeted AVH. This chapter reviews the present evidence and speculates on possible mechanism of action and future developments.

Conference abstracts on the topic "Auditory source width"

1

Arthi, S., K. R. Adhithya, and T. V. Sreenivas. "Perceptual evaluation of simulated auditory source width expansion". In 2017 Twenty-third National Conference on Communications (NCC). IEEE, 2017. http://dx.doi.org/10.1109/ncc.2017.8077113.

2

Băcilă, Bogdan Ioan, and Hyunkook Lee. "Subjective Elicitation Of Listener-Perspective-Dependent Spatial Attributes in a Reverberant Room, using the Repertory Grid Technique". In ICAD 2019: The 25th International Conference on Auditory Display. Newcastle upon Tyne, United Kingdom: Department of Computer and Information Sciences, Northumbria University, 2019. http://dx.doi.org/10.21785/icad2019.073.

Abstract:
Spatial impression is a widely researched topic in concert hall acoustics and spatial audio display. In order to provide the listener with plausible spatial impression in virtual and augmented reality applications, especially in the 6 Degrees of Freedom (6DOF) context, it is first important to understand how humans perceive various acoustical cues from different listening perspectives in a real space. This paper presents a fundamental subjective study conducted on the perception of spatial impression for multiple listener positions and orientations. An in-situ elicitation test was carried out using the repertory grid technique in a reverberant concert hall. Cluster analysis revealed a number of conventional spatial attributes such as source width, environmental width and envelopment. However, reverb directionality and echo perception were also found to be salient spatial properties associated with changes in the listener’s position and head orientation.
3

Balan, Oana, Alin Moldoveanu, Florica Moldoveanu, and Ionut Negoi. "THE ROLE OF PERCEPTUAL FEEDBACK TRAINING ON SOUND LOCALIZATION ACCURACY IN AUDIO EXPERIMENTS". In eLSE 2015. Carol I National Defence University Publishing House, 2015. http://dx.doi.org/10.12753/2066-026x-15-074.

Abstract:
The purpose of this paper is to present and compare the main techniques and methodological approaches aimed to improve audio discrimination in sound localization experiments. As reviewed from the literature, perceptual feedback training procedures provide listeners with the proper auditory/visual feedback concerning the correct sound source location. This type of feedback training can significantly improve localization accuracy, as well as the listener's ability to discern between the sound sources situated in the front and rear hemifield, reducing the incidence of reversal errors (a very common situation in which the listener perceives a sound coming from the front as emerging from the back and vice versa). For instance, in a training session, the listener indicates the perceived location of the sound source and receives immediate visual and audio feedback consisting in the presentation of the correct direction of the audio signal and instant playback of continuous streams of the corresponding sound source location. These results are long-term, suggesting that even short periods of training can enhance audio localization performance under 3D sound conditions, using non-individualized Head Related Transfer Functions (in the case of virtual auditory displays). In this paper we focus on the examination and investigation of the role of perceptual feedback training under controlled conditions in both free-field (using loudspeakers as the main audio rendering channel) and in virtual auditory displays that employ headphone-presented sound. The results obtained from this study will build the basis of a methodology aimed to improve 3D sound localization in the case of the visually impaired people.
4

Wühle, Tom, and M. Ercan Altinsoy. "Investigation of auditory events with projected sound sources". In 173rd Meeting of Acoustical Society of America and 8th Forum Acusticum. Acoustical Society of America, 2017. http://dx.doi.org/10.1121/2.0000577.

5

Gao, Fenglin, and Fei Xu. "The Sound Nest project: mobile Application design for auditory cognitive training of stressed people in the post epidemic era." In 14th International Conference on Applied Human Factors and Ergonomics (AHFE 2023). AHFE International, 2023. http://dx.doi.org/10.54941/ahfe1003440.

Abstract:
Under the COVID-19 epidemic and the economic downturn, people's stress is increasing daily, and many people in the workplace are troubled by the distraction caused by excessive pressure. This paper studies effective methods for stressed people to improve their concentration in the post-pandemic era. It also introduces the design and development process of an Android-based mobile app, "Sound Nest." With auditory cognitive training as the core, it provides nonmusical auditory training for stressed people in three aspects: sound frequency, spatial orientation, and rhythm. SteamAudioVR is used to build a 3D virtual sound field which, guided by visual interface orientation, helps improve concentration. The software has three training modules: based on the auditory attention training (AT) of Adrian Wells, it constructs single-source orientation training and multi-source tracking modules; based on the binaural auditory beats training of James D. Lane, it builds a binaural beat training module. The 3D virtual sound field orientation is constructed based on the ASMR Map of Emma L. Barratt. The application was sample-tested on stressed users, received good feedback, and has initially entered clinical validation.
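The binaural-beat module described in this abstract relies on presenting slightly detuned tones to the two ears; a minimal sketch of such a stereo buffer follows. The carrier frequency, beat rate, and sample rate are illustrative assumptions, not the app's actual parameters:

```python
import numpy as np

def binaural_beat(carrier_hz=220.0, beat_hz=8.0, dur_s=1.0, fs=44100):
    """Stereo buffer with tones detuned by beat_hz between the ears.

    The listener perceives an amplitude fluctuation ("beat") at beat_hz
    even though neither ear receives a physically beating signal.
    """
    t = np.arange(int(dur_s * fs)) / fs
    left = np.sin(2 * np.pi * carrier_hz * t)
    right = np.sin(2 * np.pi * (carrier_hz + beat_hz) * t)
    return np.stack([left, right], axis=1)  # shape (samples, 2)
```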
6

Cui, Hongyan, Xiaobo Xie, Shengpu Xu, Huifang Yan, Li Feng, and Yong Hu. "Source analysis of bimodal event-related potentials with auditory-visual stimuli". In 2013 6th International IEEE/EMBS Conference on Neural Engineering (NER). IEEE, 2013. http://dx.doi.org/10.1109/ner.2013.6695877.

7

Phillips, Sean, and Andrés Cabrera. "Sonification Workstation". In ICAD 2019: The 25th International Conference on Auditory Display. Newcastle upon Tyne, United Kingdom: Department of Computer and Information Sciences, Northumbria University, 2019. http://dx.doi.org/10.21785/icad2019.056.

Abstract:
Sonification Workstation is an open-source application for general sonification tasks, designed with ease-of-use and wide applicability in mind. Intended to foster adoption of sonification across disciplines, and increase experimentation with sonification by non-specialists, Sonification Workstation distills tasks useful in sonification and encapsulates them in a single software environment. The novel interface combines familiar modes of navigation from Digital Audio Workstations with a highly simplified patcher interface for creating the sonification scheme. Further, the software associates methods of sonification with the data they sonify in session files, which will make sharing and reproducing sonifications easier. It is posited that facilitating experimentation by non-specialists will increase the potential growth of sonification into fresh territory, encourage discussion of sonification techniques and uses, and create a larger pool of ideas to draw from in advancing the field of sonification. Source code is available at https://github.com/Cherdyakov/sonification-workstation. Binaries for macOS and Windows, as well as sample content, are available at http://sonificationworkstation.org.
8

Jette, Christopher, and James H. J. Buchholz. "Fluor Sonescense: A Sonification of the Visualization of Brass Instrument Tones". In The 24th International Conference on Auditory Display. Arlington, Virginia: The International Community for Auditory Display, 2018. http://dx.doi.org/10.21785/icad2018.002.

Abstract:
This paper is a discussion of the composition Fluor Sonescence, which combines trombone, electronics and video. The trombone and electronics are a mediated sonification of the video component. The video is a high framerate capture of the air motions produced by sound emanating from a brass instrument. This video material is translated into sound and serves as the final video component. The paper begins with a description of the data collection process and an overview of the compositional components. This is followed by a detailed description of the composition of the three components of Fluor Sonescence, while a discussion of the technical and aesthetic concerns is interwoven throughout. There is a discussion of the relationship of Fluor Sonescence to earlier works of the composer and the capture method for source material. The paper is an overview of a specific sonification project that is part of a larger trajectory of work. Please see https://vimeo.com/255790972/ to hear and view Fluor Sonescence.
9

Kawai, Kaoru, and Kenji Muto. "Effect of Visibility of Auditory Stimulus Location on Ventriloquism Effect using AR-Head-Mounted Display". In 13th International Conference on Applied Human Factors and Ergonomics (AHFE 2022). AHFE International, 2022. http://dx.doi.org/10.54941/ahfe1002089.

Abstract:
Virtual reality (VR) and augmented reality (AR) games using head-mounted displays (HMDs) have become increasingly popular. These games can present wider visual stimuli than TVs or handheld consoles, and the location of auditory stimuli is presented at the same location as the visual stimuli. We therefore propose presenting visual stimuli at the location of auditory stimuli, rather than presenting auditory stimuli at the location of visual stimuli. When visual stimuli are matched to auditory stimuli, it is necessary to clarify how far the locations of the sound source and the visual stimuli can be shifted apart. Thus, we examined varying degrees of spatial disparity between auditory and visual stimuli to determine whether they are still perceived as originating from the same location. The ventriloquism effect is known as a cross-modal interaction between the locations of auditory and visual stimuli. Many researchers have investigated the ventriloquism effect; however, there is no research on how the visibility of the loudspeaker playing a sound affects it. In this study, we aim to clarify the effect of loudspeaker visibility on the ventriloquism effect. For this purpose, we conducted two experiments to determine whether auditory and visual stimuli are perceived as originating from the same location under varying degrees of spatial disparity, and measured the corresponding angles. One was an AR-condition experiment in which measurements were made with the loudspeaker visible, whereas the other was a VR-condition experiment in which the loudspeaker was not visible. The experimental results showed that the discrimination threshold of angle was larger when the loudspeaker was visible (AR condition) than when it was not (VR condition), indicating that the ventriloquism effect is stronger when the loudspeaker is visible.
10

Huang, Mincong (Jerry), Samuel Chabot, and Jonas Braasch. "Panoptic Reconstruction of Immersive Virtual Soundscapes Using Human-Scale Panoramic Imagery with Visual Recognition". In ICAD 2021: The 26th International Conference on Auditory Display. icad.org: International Community for Auditory Display, 2021. http://dx.doi.org/10.21785/icad2021.043.

Abstract:
This work, situated at Rensselaer’s Collaborative-Research Augmented Immersive Virtual Environment Laboratory (CRAIVE-Lab), uses panoramic image datasets for spatial audio display. A system is developed for the room-centered immersive virtual reality facility to analyze panoramic images on a segment-by-segment basis, using pre-trained neural network models for semantic segmentation and object detection, thereby generating audio objects with respective spatial locations. These audio objects are then mapped with a series of synthetic and recorded audio datasets and populated within a spatial audio environment as virtual sound sources. The resulting audiovisual outcomes are then displayed using the facility’s human-scale panoramic display, as well as the 128-channel loudspeaker array for wave field synthesis (WFS). Performance evaluation indicates effectiveness for real-time enhancements, with potentials for large-scale expansion and rapid deployment in dynamic immersive virtual environments.
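The pipeline above turns objects detected in a panoramic image into spatially located virtual sound sources. A rough sketch of the geometric part is given below; the label-to-sound mapping, the fixed source distance, and all names are illustrative assumptions, not the CRAIVE-Lab system's actual code:

```python
import math

# Hypothetical mapping from recognized labels to audio assets
SOUND_BANK = {"person": "crowd_murmur.wav", "car": "traffic.wav"}

def object_to_source(label, x_center_px, image_width_px, distance_m=3.0):
    """Map a detection in a 360-degree panorama to a virtual sound source.

    The horizontal pixel position is converted to an azimuth in [-pi, pi),
    which together with an assumed distance yields an (x, y) position for
    the spatial audio renderer.
    """
    azimuth = 2.0 * math.pi * (x_center_px / image_width_px) - math.pi
    pos = (distance_m * math.cos(azimuth), distance_m * math.sin(azimuth))
    return SOUND_BANK.get(label, "ambient.wav"), azimuth, pos
```

For example, a detection centered at the middle of the panorama maps to azimuth 0, directly ahead of the listener.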

Reports on the topic "Auditory source width"

1

Liu, Cong, Xing Wang, Rao Chen, and Jie Zhang. Meta-analyses of the Effects of Virtual Reality Training on Balance, Gross Motor Function and Daily Living Ability in Children with Cerebral Palsy. INPLASY - International Platform of Registered Systematic Review and Meta-analysis Protocols, April 2022. http://dx.doi.org/10.37766/inplasy2022.4.0137.

Abstract:
Review question / Objective: Cerebral palsy (CP) is a non-progressive, persistent syndrome occurring in the brain of the fetus or infant [1]. The prevalence of CP is 0.2% worldwide, and it can be 20-30 times higher in preterm or low-birth-weight newborns. There are about 6 million children with CP in China, and the number is increasing at a rate of 45,000 per year. Virtual reality (VR) refers to a computer-generated virtual environment that can be interacted with. VR can engage the visual, auditory, tactile, and kinesthetic senses of children with CP so that they actively participate in rehabilitation exercise. Information sources: Two researchers searched 5 databases: Pubmed (N=82), Embase (N=191), The Cochrane Library (N=147), Web of Science (N=359) and CNKI (N=11).
2

Yatsymirska, Mariya. SOCIAL EXPRESSION IN MULTIMEDIA TEXTS. Ivan Franko National University of Lviv, February 2021. http://dx.doi.org/10.30970/vjo.2021.49.11072.

Abstract:
The article investigates functional techniques of extralinguistic expression in multimedia texts; the effectiveness of figurative expressions as a reaction to modern events in Ukraine and their influence on the formation of public opinion is shown. Publications of journalists, broadcasts of media resonators, experts, public figures, politicians, readers are analyzed. The language of the media plays a key role in shaping the worldview of the young political elite in the first place. The essence of each statement is a focused thought that reacts to events in the world or in one’s own country. The most popular platform for mass information and social interaction is, first of all, network journalism, which is characterized by mobility and unlimited time and space. Authors have complete freedom to express their views in direct language, including their own word formation. Phonetic, lexical, phraseological and stylistic means of speech create expression of the text. A figurative word, a good aphorism or proverb, a paraphrased expression, etc. enhance the effectiveness of a multimedia text. This is especially important for headlines that simultaneously inform and influence the views of millions of readers. Given the wide range of issues raised by the Internet as a medium, research in this area is interdisciplinary. The science of information, combining language and social communication, is at the forefront of global interactions. The Internet is an effective source of knowledge and a forum for free thought. Nonlinear texts (hypertexts) – «branching texts or texts that perform actions on request», multimedia texts change the principles of information collection, storage and dissemination, involving billions of readers in the discussion of global issues. Mastering the word is not an easy task if the author of the publication is not well-read, is not deep in the topic, does not know the psychology of the audience for which he writes. 
Therefore, the study of media broadcasting is an important component of the professional training of future journalists. The functions of the language of the media require the authors to make the right statements and convincing arguments in the text. Journalism education is not only knowledge of imperative and dispositive norms, but also apodictic ones. In practice, this means that there are rules in media creativity that are based on logical necessity. Apodicticity is the first sign of impressive language on the platform of print or electronic media. Social expression is a combination of creative abilities and linguistic competencies that a journalist realizes in his activity. Creative self-expression is realized in a set of many important factors in the media: the choice of topic, convincing arguments, logical presentation of ideas and deep philological education. Linguistic art, in contrast to painting, music, sculpture, accumulates all visual, auditory, tactile and empathic sensations in a universal sign – the word. The choice of the word for the reproduction of sensory and semantic meanings, its competent use in the appropriate context distinguishes the journalist-intellectual from other participants in forums, round tables, analytical or entertainment programs. Expressive speech in the media is a product of the intellect (ability to think) of all those who write on socio-political or economic topics. In the same plane with him – intelligence (awareness, prudence), the first sign of which (according to Ivan Ogienko) is a good knowledge of the language. Intellectual language is an important means of organizing a journalistic text. It, on the one hand, logically conveys the author’s thoughts, and on the other – encourages the reader to reflect and comprehend what is read. The richness of language is accumulated through continuous self-education and interesting communication. 
Studies of social expression as an important factor influencing the formation of public consciousness should open up new facets of rational and emotional media broadcasting; to trace physical and psychological reactions to communicative mimicry in the media. Speech mimicry as one of the methods of disguise is increasingly becoming a dangerous factor in manipulating the media. Mimicry is an unprincipled adaptation to the surrounding social conditions; one of the most famous examples of an animal characterized by mimicry (change of protective color and shape) is a chameleon. In a figurative sense, chameleons are called adaptive journalists. Observations show that mimicry in politics is to some extent a kind of game that, like every game, is always conditional and artificial.