Academic literature on the topic 'Stereophonic Audio'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Stereophonic Audio.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Stereophonic Audio"

1

Brungart, Douglas S. "Simplified analog virtual externalization for stereophonic audio." Journal of the Acoustical Society of America 105, no. 2 (1999): 582. http://dx.doi.org/10.1121/1.426976.

2

Morris, Robert Edward. "Centralizing of a spatially expanded stereophonic audio image." Journal of the Acoustical Society of America 116, no. 3 (2004): 1322. http://dx.doi.org/10.1121/1.1809894.

3

Noll, Peter, and Davis Pan. "ISO/MPEG Audio Coding." International Journal of High Speed Electronics and Systems 8, no. 1 (March 1997): 69–118. http://dx.doi.org/10.1142/s0129156497000044.

Abstract:
The Moving Pictures Expert Group within the International Organization of Standardization (ISO/MPEG) has developed, and is presently developing, a series of audiovisual standards. Its audio coding standard MPEG Phase 1 is the first international standard in the field of high quality digital audio compression and has been applied in many areas, both for consumer and professional audio. Typical application areas for digital audio are in the fields of audio production, program distribution and exchange, digital sound broadcasting, digital storage, and various multimedia applications. This paper will describe in some detail the main features of MPEG Phase 1 coders. As a logical further step in digital audio a multichannel audio standard MPEG Phase 2 is being standardized to provide an improved stereophonic image for audio-only applications including teleconferencing and for improved television systems. The status of this standardization process will be covered briefly.
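
The joint-stereo ideas touched on in this abstract can be made concrete with a small sketch. The code below is not the MPEG filterbank or bitstream format; it only illustrates the intensity-stereo principle (keeping true stereo in the low bands and, above a chosen boundary, transmitting one combined signal plus coarse level cues), using a plain FFT on a single block. The function names, the 1024-sample block, and the cutoff_bin value are illustrative assumptions.

```python
import numpy as np

def intensity_stereo_encode(left, right, cutoff_bin):
    """Toy intensity-stereo coding of one block: below `cutoff_bin` the two
    channels are kept separately; above it only their sum is kept, together
    with coarse per-bin level cues used to re-pan the sum when decoding.
    Illustration of the principle only, not MPEG-conformant."""
    L = np.fft.rfft(left)
    R = np.fft.rfft(right)
    lo_L, lo_R = L[:cutoff_bin], R[:cutoff_bin]        # transmitted as true stereo
    hi_sum = L[cutoff_bin:] + R[cutoff_bin:]           # single combined channel
    eps = 1e-12
    cue_L = np.abs(L[cutoff_bin:]) / (np.abs(L[cutoff_bin:]) + np.abs(R[cutoff_bin:]) + eps)
    return lo_L, lo_R, hi_sum, cue_L

def intensity_stereo_decode(lo_L, lo_R, hi_sum, cue_L, n_samples):
    """Rebuild a stereo block from the toy intensity-stereo representation."""
    L = np.concatenate([lo_L, hi_sum * cue_L])
    R = np.concatenate([lo_R, hi_sum * (1.0 - cue_L)])
    return np.fft.irfft(L, n_samples), np.fft.irfft(R, n_samples)

# Example: a 1024-sample block containing a tone panned mostly to the left.
n = 1024
t = np.arange(n) / 44100.0
src = np.sin(2 * np.pi * 5000.0 * t)
left, right = 0.9 * src, 0.3 * src
dec_l, dec_r = intensity_stereo_decode(*intensity_stereo_encode(left, right, 64), n_samples=n)
print("left/right level ratio after decoding:", np.abs(dec_l).mean() / np.abs(dec_r).mean())
```
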
4

Aarts, Ronaldus M. "Signal processing circuit including a signal combining circuit stereophonic audio reproduction system including the signal processing circuit and an audio-visual reproduction system including the stereophonic audio reproduction system." Journal of the Acoustical Society of America 105, no. 2 (1999): 582. http://dx.doi.org/10.1121/1.426975.

5

Fîciu, Ionuț-Dorinel, Cristian-Lucian Stanciu, Constantin Paleologu, and Jacob Benesty. "Low-Complexity Data-Reuse RLS Algorithm for Stereophonic Acoustic Echo Cancellation." Applied Sciences 13, no. 4 (February 9, 2023): 2227. http://dx.doi.org/10.3390/app13042227.

Abstract:
Stereophonic audio devices employ two loudspeakers and two microphones in order to create a bidirectional sound effect. In this context, the stereophonic acoustic echo cancellation (SAEC) setup requires the estimation of four echo paths, each one corresponding to a loudspeaker-to-microphone pair. The widely linear (WL) model was proposed in recent literature in order to simplify the handling of the SAEC mathematical model. Moreover, low complexity recursive least-squares (RLS) adaptive algorithms were developed within the WL framework and successfully tested for SAEC scenarios. This paper proposes to apply a data-reuse (DR) approach for the combination between the RLS algorithm and the dichotomous coordinate descent (DCD) iterative method. The resulting WL-DR-RLS-DCD algorithm inherits the convergence properties of the RLS family and requires an amount of mathematical operations attractive for practical implementations. Simulation results show that the DR approach improves the tracking capabilities of the RLS-DCD algorithm, with an acceptable surplus in terms of computational workload.
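
To make the four-path structure described in this abstract concrete, here is a minimal sketch of an adaptive echo canceller for one microphone in a stereophonic setup (two such instances cover all four loudspeaker-to-microphone paths). A plain NLMS update is used as a simple stand-in for the widely linear data-reuse RLS-DCD algorithm of the paper, which is not reproduced here; the filter length, step size, and toy simulation are illustrative assumptions.

```python
import numpy as np

class StereoEchoCanceller:
    """Jointly estimates the two echo paths (left and right loudspeaker to
    one microphone) over a stacked input vector, using a plain NLMS update
    as a stand-in for the paper's WL-DR-RLS-DCD algorithm."""

    def __init__(self, taps, mu=0.5, eps=1e-6):
        self.w = np.zeros(2 * taps)      # [left-path taps | right-path taps]
        self.xl = np.zeros(taps)         # recent left loudspeaker samples, newest first
        self.xr = np.zeros(taps)         # recent right loudspeaker samples, newest first
        self.mu, self.eps = mu, eps

    def step(self, x_left, x_right, mic):
        self.xl = np.concatenate(([x_left], self.xl[:-1]))
        self.xr = np.concatenate(([x_right], self.xr[:-1]))
        x = np.concatenate((self.xl, self.xr))
        error = mic - self.w @ x                      # echo-cancelled microphone sample
        self.w += self.mu * error * x / (x @ x + self.eps)
        return error

# Toy run: noise-like far-end signals and two short simulated echo paths.
rng = np.random.default_rng(0)
n, taps = 20000, 32
x_l, x_r = rng.standard_normal(n), rng.standard_normal(n)
h_l, h_r = 0.1 * rng.standard_normal(taps), 0.1 * rng.standard_normal(taps)
mic = (np.convolve(x_l, h_l)[:n] + np.convolve(x_r, h_r)[:n]
       + 0.001 * rng.standard_normal(n))              # echo at one microphone plus noise
aec = StereoEchoCanceller(taps)
residual = np.array([aec.step(x_l[i], x_r[i], mic[i]) for i in range(n)])
print("residual echo energy over the last 1000 samples:", np.mean(residual[-1000:] ** 2))
```

Note that when the two loudspeaker signals are strongly correlated, the classic non-uniqueness problem of stereophonic echo cancellation appears; the independent noise signals in this toy run sidestep it.
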
6

Niwa, Kenta, Takanori Nishino, and Kazuya Takeda. "Selective Listening Point Audio Based on Blind Signal Separation and Stereophonic Technology." IEICE Transactions on Information and Systems E92-D, no. 3 (2009): 469–76. http://dx.doi.org/10.1587/transinf.e92.d.469.

7

Jeon, Se-Woon, Young-Cheol Park, and Dae Hee Youn. "Auditory Distance Rendering Based on ICPD Control for Stereophonic 3D Audio System." IEEE Signal Processing Letters 22, no. 5 (May 2015): 529–33. http://dx.doi.org/10.1109/lsp.2014.2363455.

8

Hanneton, Sylvain, Malika Auvray, and Barthélemy Durette. "The Vibe: A Versatile Vision-to-Audition Sensory Substitution Device." Applied Bionics and Biomechanics 7, no. 4 (2010): 269–76. http://dx.doi.org/10.1155/2010/282341.

Abstract:
We describe a sensory substitution scheme that converts a video stream into an audio stream in real-time. It was initially developed as a research tool for studying human ability to learn new ways of perceiving the world: the Vibe can give us the ability to learn a kind of ‘vision’ by audition. It converts a video stream into a continuous stereophonic audio signal that conveys information coded from the video stream. The conversion from the video stream to the audio stream uses a kind of retina with receptive fields. Each receptive field controls a sound source and the user listens to a sound that is a mixture of all these sound sources. Compared to other existing vision-to-audition sensory substitution devices, the Vibe is highly versatile in particular because it uses a set of configurable units working in parallel. In order to demonstrate the validity and interest of this method of vision to audition conversion, we give the results of an experiment involving a pointing task to targets memorised through visual perception or through their auditory conversion by the Vibe. This article is also an opportunity to precisely draw the general specifications of this scheme in order to prepare its implementation on an autonomous/mobile hardware.
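
A minimal sketch of the general receptive-field scheme described in this abstract: the image is divided into a grid of fields, each field drives one sound source, and all sources are mixed into a stereophonic signal. The particular mapping used here (brightness to amplitude, vertical position to frequency, horizontal position to left/right balance), the grid size, and the function name are illustrative assumptions, not the Vibe's actual coding.

```python
import numpy as np

def image_to_stereo(image, duration=1.0, fs=16000, grid=(8, 8),
                    f_low=200.0, f_high=4000.0):
    """Convert a grayscale image (2-D array with values in [0, 1]) into a
    short stereophonic signal: each receptive field of the grid controls one
    sinusoidal source whose amplitude follows the field's mean brightness,
    whose frequency follows its vertical position, and whose left/right
    balance follows its horizontal position."""
    rows, cols = grid
    h, w = image.shape
    t = np.arange(int(duration * fs)) / fs
    left, right = np.zeros_like(t), np.zeros_like(t)
    for r in range(rows):
        for c in range(cols):
            patch = image[r * h // rows:(r + 1) * h // rows,
                          c * w // cols:(c + 1) * w // cols]
            amp = patch.mean()
            freq = f_high - (f_high - f_low) * r / max(rows - 1, 1)  # top rows -> high pitch
            pan = c / max(cols - 1, 1)                               # 0 = far left, 1 = far right
            tone = amp * np.sin(2 * np.pi * freq * t)
            left += (1.0 - pan) * tone
            right += pan * tone
    peak = max(np.abs(left).max(), np.abs(right).max(), 1e-9)        # avoid clipping
    return np.stack([left, right]) / peak

# Example: a bright blob in the upper-left corner of an otherwise dark scene.
img = np.zeros((64, 64))
img[4:16, 4:16] = 1.0
stereo = image_to_stereo(img)
print(stereo.shape, "left channel louder:", np.abs(stereo[0]).mean() > np.abs(stereo[1]).mean())
```
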
9

Komiyama, Setsu. "Special Edition Recent Audio Technique in Sound Field Reproduction. Signal Processing Technology. 3D Stereophonic Imaging." Journal of the Institute of Television Engineers of Japan 46, no. 9 (1992): 1076–79. http://dx.doi.org/10.3169/itej1978.46.1076.

10

Kobayashi, Wataru. "Method for localizing sound image of reproducing sound of audio signals for stereophonic reproduction outside speakers." Journal of the Acoustical Society of America 117, no. 6 (2005): 3357. http://dx.doi.org/10.1121/1.1948274.


Dissertations / Theses on the topic "Stereophonic Audio"

1

Sofianos, Stratis. "Singing voice extraction from stereophonic recordings." Thesis, University of Hertfordshire, 2013. http://hdl.handle.net/2299/10054.

Abstract:
Singing voice separation (SVS) can be defined as the process of extracting the vocal element from a given song recording. The impetus for research in this area is mainly that of facilitating certain important applications of music information retrieval (MIR) such as lyrics recognition, singer identification, and melody extraction. To date, the research in the field of SVS has been relatively limited, and mainly focused on the extraction of vocals from monophonic sources. The general approach in this scenario has been one of considering SVS as a blind source separation (BSS) problem. Given the inherent diversity of music, such an approach is motivated by the quest for a generic solution. However, it does not allow the exploitation of prior information regarding the way in which commercial music is produced. To this end, investigations are conducted into effective methods for unsupervised separation of singing voice from stereophonic studio recordings. The work involves extensive literature review of existing methods that relate to SVS, as well as commercial approaches. Following the identification of shortcomings of the conventional methods, two novel approaches are developed for the purpose of SVS. These approaches, termed SEMANICS and SEMANTICS, draw their motivation from statistical as well as spectral properties of the target signal and focus on the separation of voice in the frequency domain. In addition, a third method, named Hybrid SEMANTICS, is introduced that addresses time-, as well as frequency-domain separation. As there is a lack of a concrete standardised music database that includes a large number of songs, a dataset is created using conventional stereophonic mixing methods. Using this database, and based on widely adopted objective metrics, the effectiveness of the proposed methods has been evaluated through thorough experimental investigations.
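
The thesis above exploits stereo production conventions rather than treating separation as fully blind. The sketch below is not SEMANICS, SEMANTICS, or Hybrid SEMANTICS; it only illustrates the underlying idea of frequency-domain separation guided by such conventions, by keeping time-frequency bins that are centre-panned (as lead vocals usually are) and attenuating the rest. The frame length, hop size, and similarity threshold are illustrative choices.

```python
import numpy as np

def extract_centre(stereo, frame=2048, hop=512, threshold=0.2):
    """Crude centre-channel extraction from a stereo signal of shape (2, n):
    for each STFT bin, a left/right similarity measure is computed; bins that
    are strongly centre-panned are kept, the others are heavily attenuated."""
    left, right = stereo
    window = np.hanning(frame)
    n = left.shape[0]
    out, norm = np.zeros(n), np.zeros(n)
    for start in range(0, n - frame, hop):
        L = np.fft.rfft(window * left[start:start + frame])
        R = np.fft.rfft(window * right[start:start + frame])
        # Similarity is 1 when a bin is identical in both channels (centre-panned).
        sim = 2 * np.abs(L * np.conj(R)) / (np.abs(L) ** 2 + np.abs(R) ** 2 + 1e-12)
        mask = np.where(sim > 1.0 - threshold, 1.0, 0.1)
        centre = 0.5 * (L + R) * mask
        out[start:start + frame] += window * np.fft.irfft(centre, frame)
        norm[start:start + frame] += window ** 2
    return out / np.maximum(norm, 1e-9)          # overlap-add normalisation

# Example: a "vocal" panned to the centre plus an instrument panned hard left.
fs, n = 22050, 22050
t = np.arange(n) / fs
vocal = np.sin(2 * np.pi * 440 * t)
guitar = np.sin(2 * np.pi * 196 * t)
stereo = np.stack([vocal + guitar, vocal])       # the guitar appears only in the left channel
estimate = extract_centre(stereo)
print("correlation with the clean vocal:", round(np.corrcoef(estimate, vocal)[0, 1], 3))
```
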
2

Takeda, Kazuya, Takanori Nishino, and Kenta Niwa. "Selective Listening Point Audio Based on Blind Signal Separation and Stereophonic Technology." Institute of Electronics, Information and Communication Engineers, 2009. http://hdl.handle.net/2237/15055.

3

Blum, Konrad. "Evaluating the applications of spatial audio in telephony." Thesis, Stellenbosch : University of Stellenbosch, 2010. http://hdl.handle.net/10019.1/4376.

Abstract:
Thesis (MScEng (Electrical and Electronic Engineering))--University of Stellenbosch, 2010.
ENGLISH ABSTRACT: Telephony has developed substantially over the years, but the fundamental auditory model of mixing all the audio from different sources together into a single monaural stream has not changed since the telephone was first invented. Monaural audio is very difficult to follow in a multiple-source situation such as a conference call. Sound originating from a specific point in space will travel along a slightly different path to each ear. Although we are not consciously aware of it, our brain processes these spatial cues to help us to locate sounds in space. It is this spatial information that allows us to focus our attention and listen to a single speaker in an environment where many different sources may be active at the same time; a phenomenon known as the "cocktail party effect". It is possible to reproduce these spatial cues in a sound recording, using Head-Related Transfer Functions (HRTFs) to allow a listener to experience localised audio, even when sound is reproduced through a headset. In this thesis, spatial audio is implemented in a telephony application as well as in a virtual world. Experiments were conducted which demonstrated that spatial audio increases the intelligibility of speech in a multiple-source environment and aids active speaker identification. Resource usage measurements show that these benefits are, however, not without a cost. In conclusion, spatial audio was shown to be an improvement over the monaural audio model traditionally implemented in telephony.
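
A minimal sketch of the core idea in the abstract above: give each conference participant a distinct apparent direction before mixing, so the listener can exploit the cocktail party effect. The HRTF filtering used in the thesis is replaced here by crude interaural time and level differences, which is our own simplification; the azimuths, head radius, and gain model are illustrative assumptions.

```python
import numpy as np

def spatialize(mono, azimuth_deg, fs=16000, head_radius=0.0875, c=343.0):
    """Place a mono voice at an azimuth (degrees, 0 = front, +90 = right)
    using coarse interaural time and level differences instead of full HRTFs."""
    az = np.deg2rad(azimuth_deg)
    itd = head_radius / c * (abs(az) + abs(np.sin(az)))   # Woodworth-style ITD estimate
    delay = int(round(itd * fs))
    far_gain = 10 ** (-6.0 * abs(np.sin(az)) / 20)        # up to about 6 dB level difference
    delayed = np.concatenate([np.zeros(delay), mono])[:len(mono)]
    if azimuth_deg >= 0:           # source on the right: the left ear is the far ear
        return np.stack([far_gain * delayed, mono])
    return np.stack([mono, far_gain * delayed])

def mix_conference(voices, azimuths, fs=16000):
    """Spatialize each participant at a distinct azimuth and sum into one stereo
    stream; the monaural alternative would simply sum the voices directly."""
    out = sum(spatialize(v, az, fs) for v, az in zip(voices, azimuths))
    return out / np.max(np.abs(out))

# Example: three noise-like 'talkers' placed left, centre and right.
fs, n = 16000, 16000
rng = np.random.default_rng(1)
talkers = [0.1 * rng.standard_normal(n) for _ in range(3)]
stereo = mix_conference(talkers, azimuths=[-60, 0, 60], fs=fs)
print(stereo.shape)
```
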
4

Presti, G. "Signal Transformations for Improving Information Representation, Feature Extraction and Source Separation." Doctoral thesis, Università degli Studi di Milano, 2017. http://hdl.handle.net/2434/470676.

Abstract:
This thesis is about new methods of signal representation in time-frequency domain, so that required information is rendered as explicit dimensions in a new space. In particular two transformations are presented: Bivariate Mixture Space and Spectro-Temporal Structure-Field. The former transform aims at highlighting latent components of a bivariate signal based on the behaviour of each frequency base (e.g. for source separation purposes), whereas the latter aims at folding neighbourhood information of each point of a R^2 function into a vector, so as to describe some topological properties of the function. In the audio signal processing domain, the Bivariate Mixture Space can be interpreted as a way to investigate the stereophonic space for source separation and Music Information Retrieval tasks, whereas the Spectro-Temporal Structure-Field can be used to inspect spectro-temporal dimension (segregate pitched vs. percussive sounds or track pitch modulations). These transformations are investigated and tested against state-of-the-art techniques in fields such as source separation, information retrieval and data visualization. In the field of sound and music computing, these techniques aim at improving the frequency domain representation of signals such that the exploration of the spectrum can be achieved also in alternative spaces like the stereophonic panorama or a virtual percussive vs. pitched dimension.
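
As a rough companion to the abstract above: one elementary way to investigate the stereophonic panorama per frequency component is to assign every time-frequency bin a panning index and accumulate an energy-weighted histogram over it. The sketch below does only this; it is not the Bivariate Mixture Space or the Spectro-Temporal Structure-Field transform, and the index definition, frame size, and bin count are illustrative assumptions.

```python
import numpy as np

def panning_histogram(stereo, frame=1024, hop=512, bins=21):
    """Map each time-frequency bin of a stereo signal to a panning index in
    [-1, 1] (-1 = hard left, 0 = centre, +1 = hard right) and accumulate an
    energy-weighted histogram over the panorama; peaks hint at how many
    distinctly panned components the mixture contains."""
    left, right = stereo
    window = np.hanning(frame)
    hist = np.zeros(bins)
    edges = np.linspace(-1.0, 1.0, bins + 1)
    for start in range(0, left.shape[0] - frame, hop):
        L = np.abs(np.fft.rfft(window * left[start:start + frame]))
        R = np.abs(np.fft.rfft(window * right[start:start + frame]))
        energy = L ** 2 + R ** 2
        index = (R - L) / (L + R + 1e-12)                     # per-bin panning position
        which = np.clip(np.digitize(index, edges) - 1, 0, bins - 1)
        np.add.at(hist, which, energy)
    return edges, hist / max(hist.sum(), 1e-12)

# Example: two tones, one panned towards the left, one towards the right.
fs, n = 22050, 22050
t = np.arange(n) / fs
a, b = np.sin(2 * np.pi * 330 * t), np.sin(2 * np.pi * 660 * t)
stereo = np.stack([0.9 * a + 0.2 * b, 0.2 * a + 0.9 * b])
edges, hist = panning_histogram(stereo)
print("most energetic panorama bins:", np.argsort(hist)[-2:])
```
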
5

Caringer, Kelly Heath. "To Produce and Persist: A Dialectical Investigation of Purpose in Commercial Stereophony." OpenSIUC, 2017. https://opensiuc.lib.siu.edu/dissertations/1343.

Abstract:
This dissertation seeks to identify the purposive force that determines the form and function of commercial stereophony in capitalist society, and the ways in which this force affects the productive and consumptive activities of stereophonic practitioners and listening audiences. Employing dialectical materialism, I examine three social processes that either historically established or continue to influence the mediative potential of stereophonic sound: the invention and industrial standardization of the stereophonic apparatus, the professionalization of stereophonic practitioners, and the social construction of stereophonic listeners as a mass consuming audience. These interrelated studies reveal perceived economic necessity as the dominant causal force that governs all stereophonic processes and practices under the capitalist economic system. Informed by my chapter findings, which complicate Karl Marx’s materialist base and superstructure schema – a coarse conceptual abstraction of capitalist production, I construct a more refined and flexible schematic diagram that offers a distinctive bird’s eye view of the universal interplay between capitalists, producers and consumers. This novel conceptual schematic depicts productive forces and productive relations as coterminous expressions of the dual-purpose of capitalism: to produce surplus-value for accumulation by capitalists, and to do so in perpetuity.

Books on the topic "Stereophonic Audio"

1

Billboard's complete book of audio. New York: Billboard Books, 1991.

2

Self, Douglas. Self on audio. Oxford: Newnes, 2000.

3

Self, Douglas. Self on audio. 2nd ed. Amsterdam: Elsevier Newnes, 2006.

4

Howard W. Sams & Co., ed. Complete guide to audio. Indianapolis, Ind.: Prompt Publications, 1998.

5

Spatial audio. Oxford: Focal Press, 2001.

6

Dearborn, Laura. Good sound: An uncomplicated guide to choosing and using audio equipment. New York: Quill/W. Morrow, 1987.

7

How to design and install high performance car stereo. North Branch, MN: Cartech, 1996.

8

Pettitt, Joe. How to design and install high-performance car stereo. North Branch, MN: Cartech, 2003.

9

The home theater companion: Buying, installing, and using today's audio-visual equipment. New York: Schirmer Books, 1997.

10

Practical home theater: A guide to video and audio systems. 2nd ed. New York: Quiet River Press, 2013.


Book chapters on the topic "Stereophonic Audio"

1

Zhou, Hang, Xudong Xu, Dahua Lin, Xiaogang Wang, and Ziwei Liu. "Sep-Stereo: Visually Guided Stereophonic Audio Generation by Associating Source Separation." In Computer Vision – ECCV 2020, 52–69. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58610-2_4.

2

Arberet, Simon, Rémi Gribonval, and Frédéric Bimbot. "A Robust Method to Count and Locate Audio Sources in a Stereophonic Linear Instantaneous Mixture." In Independent Component Analysis and Blind Signal Separation, 536–43. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11679363_67.

3

Goldsmith, Mike. "5. Electronic sound." In Sound: A Very Short Introduction, 69–81. Oxford University Press, 2015. http://dx.doi.org/10.1093/actrade/9780198708445.003.0005.

Abstract:
‘Electronic sound’ explains the conversion of sound to electricity through the development of microphones, amplifiers, and loudspeakers. There are several types of microphones, but only three types in common use: the dynamic or moving coil microphone, the condenser microphone, and the piezoelectric microphone. Directionality is also important; an omnidirectional microphone is equally sensitive to sound from any direction, and is required to capture a full soundscape, whereas a unidirectional microphone picks up sound from one direction only—ideal for picking up speech or song in noisy environments. The development of sound storage from stereophonic phonographs, to analogue cassette tapes, compact discs, and now MP3 audio files is also described.
4

Watkinson, John. "Microphones, loudspeakers and stereophony." In Audio for Television, 23–56. Routledge, 1997. http://dx.doi.org/10.4324/9780080926735-2.


Conference papers on the topic "Stereophonic Audio"

1

van der Waal, R. G., and R. N. J. Veldhuis. "Subband coding of stereophonic digital audio signals." In [Proceedings] ICASSP 91: 1991 International Conference on Acoustics, Speech, and Signal Processing. IEEE, 1991. http://dx.doi.org/10.1109/icassp.1991.151053.

2

Kimura, Masaru, and Atsushi Hotta. "Improvements in stereophonic sound images of lossy compression audio." In 2013 IEEE 2nd Global Conference on Consumer Electronics (GCCE). IEEE, 2013. http://dx.doi.org/10.1109/gcce.2013.6664935.

3

Cho, Namgook, Jaeyoun Cho, Jaewon Lee, and Yongje Kim. "Stereophonic acoustic echo cancellation using spatial decorrelation." In 2011 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA). IEEE, 2011. http://dx.doi.org/10.1109/aspaa.2011.6082281.

4

Mohan Sondhi, M., and D. R. Morgan. "Acoustic Echo Cancellation for Stereophonic Teleconferencing." In Final Program and Paper Summaries 1991 IEEE ASSP Workshop on Applications of Signal Processing to Audio and Acoustics. IEEE, 1991. http://dx.doi.org/10.1109/aspaa.1991.634135.

5

Wung, Jason, Ted S. Wada, Mehrez Souden, and Biing-Hwang Fred Juang. "On the misalignment of stereophonic acoustic echo cancellation with decorrelation by resampling." In 2013 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA). IEEE, 2013. http://dx.doi.org/10.1109/waspaa.2013.6701811.

6

Megias, D., J. Herrera-Joancomarti, and J. Minguillon. "A robust frequency domain audio watermarking scheme for monophonic and stereophonic PCM formats." In Proceedings. 30th Euromicro Conference, 2004. IEEE, 2004. http://dx.doi.org/10.1109/eurmic.2004.1333402.

7

Arberet, Simon, Remi Gribonval, and Frederic Bimbot. "A Robust Method to Count and Locate Audio Sources in a Stereophonic Linear Anechoic Mixture." In 2007 IEEE International Conference on Acoustics, Speech and Signal Processing - ICASSP '07. IEEE, 2007. http://dx.doi.org/10.1109/icassp.2007.366787.

8

Balan, Oana, Alin Moldoveanu, Florica Moldoveanu, Ionut Negoi, and Alex Butean. "COMPARATIVE RESEARCH ON SOUND LOCALIZATION ACCURACY IN THE FREE-FIELD AND VIRTUAL AUDITORY DISPLAYS." In eLSE 2015. Carol I National Defence University Publishing House, 2015. http://dx.doi.org/10.12753/2066-026x-15-079.

Abstract:
The following paper aims to present a comparative study on the audio localization accuracy (directional judgment, absolute spatial perception, and the rate of front-back confusions, a situation in which the listener perceives the sound coming from the front as coming from the rear and vice versa) in both free-field and virtual sound source conditions. Sound localization experiments in the free-field rely on the use of loudspeakers for delivering the auditory information to the listener. On the other hand, virtual auditory displays are based on 3D sounds (resulting from the filtering of a particular sound with the Head Related Transfer Function corresponding to the direction of the sound source in space) that are rendered to the listener through a pair of stereophonic headphones. 3D sounds are used in a wide range of applications, as they can simulate the perception of an external sound source in real-world hearing conditions and generally increase situational awareness. Nonetheless, they can introduce several localization errors (caused primarily by the use of non-individualized Head Related Transfer Functions), such as poor performance in the median plane (for vertical localization) and an increase in the rate of front-back confusions, especially for the directions of 0 degrees (to the front region) and 180 degrees (to the rear). As a result, we intend to include in our research a comprehensive psychophysical evaluation, interpretation and analysis of the accuracy of free-field and headphone-presented stimuli in order to bring to light the audio localization particularities that differentiate audio discrimination performance under the two presented conditions.
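
The evaluation described in this abstract hinges on separating ordinary angular error from front-back confusions. The sketch below shows one common way such trials are scored: a response counts as a confusion when its mirror image across the frontal plane is closer to the target, and confused responses are folded back before the angular error is computed. This scoring convention and the example angles are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def score_localization(target_az, response_az):
    """Score azimuth-localization trials (degrees; 0 = front, 90 = right,
    180 = rear). Returns the front-back confusion rate and the mean absolute
    angular error after folding confused responses across the frontal plane."""
    target = np.asarray(target_az, dtype=float)
    response = np.asarray(response_az, dtype=float)
    mirrored = (180.0 - response) % 360.0        # front-back mirror of each response

    def angular_distance(a, b):
        return np.abs((a - b + 180.0) % 360.0 - 180.0)

    direct = angular_distance(target, response)
    flipped = angular_distance(target, mirrored)
    confused = flipped < direct                  # the mirrored response matches better
    folded = np.where(confused, mirrored, response)
    return confused.mean(), angular_distance(target, folded).mean()

# Example: four trials; the second one is a front-back confusion.
targets = [0, 30, 150, 210]
responses = [5, 150, 140, 200]
rate, mean_error = score_localization(targets, responses)
print(f"front-back confusion rate: {rate:.2f}, mean folded error: {mean_error:.1f} deg")
```
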