Ready-made bibliography on the topic "Multichannel audio"
Table of contents
See the lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Multichannel audio".
Journal articles on the topic "Multichannel audio"
Ono, Kazuho. "2. Multichannel Audio". Journal of the Institute of Image Information and Television Engineers 68, no. 8 (2014): 604–7. http://dx.doi.org/10.3169/itej.68.604.
Holbrook, Kyle A., and Michael J. Yacavone. "Multichannel audio reproduction system". Journal of the Acoustical Society of America 82, no. 2 (August 1987): 728. http://dx.doi.org/10.1121/1.395373.
Emmett, John. "Metering for Multichannel Audio". SMPTE Journal 110, no. 8 (August 2001): 532–36. http://dx.doi.org/10.5594/j17765.
Zhu, Qiushi, Jie Zhang, Yu Gu, Yuchen Hu, and Lirong Dai. "Multichannel AV-wav2vec2: A Framework for Learning Multichannel Multi-Modal Speech Representation". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 17 (March 24, 2024): 19768–76. http://dx.doi.org/10.1609/aaai.v38i17.29951.
Martyniuk, Tetiana, Maksym Mykytiuk, and Mykola Zaitsev. "FEATURES OF ANALYSIS OF MULTICHANNEL AUDIO SIGNALS". ГРААЛЬ НАУКИ, no. 2-3 (April 9, 2021): 302–5. http://dx.doi.org/10.36074/grail-of-science.02.04.2021.061.
Gao, Xue Fei, Guo Yang, Jing Wang, Xiang Xie, and Jing Ming Kuang. "A Backward Compatible Multichannel Audio Compression Method". Advanced Materials Research 756-759 (September 2013): 977–81. http://dx.doi.org/10.4028/www.scientific.net/amr.756-759.977.
Gunawan, Teddy Surya, and Mira Kartiwi. "Performance Evaluation of Multichannel Audio Compression". Indonesian Journal of Electrical Engineering and Computer Science 10, no. 1 (April 1, 2018): 146. http://dx.doi.org/10.11591/ijeecs.v10.i1.pp146-153.
Dong, Yingjun, Neil G. MacLaren, Yiding Cao, Francis J. Yammarino, Shelley D. Dionne, Michael D. Mumford, Shane Connelly, Hiroki Sayama, and Gregory A. Ruark. "Utterance Clustering Using Stereo Audio Channels". Computational Intelligence and Neuroscience 2021 (September 25, 2021): 1–8. http://dx.doi.org/10.1155/2021/6151651.
Fujimori, Kazuki, Bisser Raytchev, Kazufumi Kaneda, Yasufumi Yamada, Yu Teshima, Emyo Fujioka, Shizuko Hiryu, and Toru Tamaki. "Localization of Flying Bats from Multichannel Audio Signals by Estimating Location Map with Convolutional Neural Networks". Journal of Robotics and Mechatronics 33, no. 3 (June 20, 2021): 515–25. http://dx.doi.org/10.20965/jrm.2021.p0515.
Hotho, Gerard, Lars F. Villemoes, and Jeroen Breebaart. "A Backward-Compatible Multichannel Audio Codec". IEEE Transactions on Audio, Speech, and Language Processing 16, no. 1 (January 2008): 83–93. http://dx.doi.org/10.1109/tasl.2007.910768.
Doctoral dissertations on the topic "Multichannel audio"
Romoli, Laura. "Advanced application for multichannel teleconferencing audio systems". Doctoral thesis, Università Politecnica delle Marche, 2011. http://hdl.handle.net/11566/242000.
Nowadays, there is great interest in multimedia teleconferencing systems as a consequence of the increasing requirement for efficient communications and the development of advanced digital signal processing techniques. A teleconferencing system should provide a realistic representation of visual and sound fields, allowing natural communication among participants anywhere in the world, as if they were all in the same room. In this context, many systems have been developed, ranging from PC-based applications intended for single-user communication up to complex systems equipped with large video screens that display the remote room as if it were a continuation of the local room. In teleconferencing systems, the undesired echo due to coupling between the loudspeaker and the microphone can be reduced using an acoustic echo canceller (AEC). In the presence of more than one participant, multichannel systems have to be taken into consideration for speaker localization. More realistic performance can already be obtained with stereophonic systems, since listeners receive spatial information that helps to identify the speaker position. However, more adaptive filters have to be used, and the linear relationship between the two channels generated from the same source brings additional problems: the solution of the adaptive algorithm is not unique and depends on the speaker position in the transmission room, which is not stationary, causing possible convergence problems. Moreover, the choice of the adaptive algorithm becomes extremely important, because performance depends on the condition number of the input signal, which is very high in the multichannel scenario. In this thesis, novel contributions to stereophonic acoustic echo cancellation are given, based on the "missing fundamental" phenomenon.
The novelty of the solutions lies in the large interchannel coherence reduction obtained without affecting speech quality and stereo perception. Moreover, a solution for improving the convergence speed of adaptive filters is discussed, based on a variable step-size method: the approach is applied to stereophonic acoustic echo cancellation but can, in fact, be used with generic adaptive algorithms. At the same time, there has been increasing interest in the design of systems providing a reproduction of sound as realistic as possible, so that the listener, immersed in a virtual audio scene and surrounded by a large number of loudspeakers, does not notice that it has been produced artificially. Conventional systems are designed to obtain the optimal acoustic sensation in a particular position of the listening environment, the so-called sweet spot. Furthermore, it is impossible to achieve correct source localization with a limited number of loudspeakers. Hence, several research efforts have been devoted to the optimization of these systems, focusing on new recording and reproduction techniques, namely Wave Field Analysis (WFA) and Wave Field Synthesis (WFS). The former is a sound-field recording technique based on microphone arrays, while the latter allows sound-field synthesis through loudspeaker arrays. To use these techniques in real-world applications (e.g., teleconferencing systems, cinemas, home theatres), it is necessary to apply multichannel digital signal processing algorithms already developed for traditional systems. This led to the introduction of Wave Domain Adaptive Filtering (WDAF), a spatio-temporal generalization of the Fast Least Mean Squares adaptive algorithm, allowing a considerable reduction of the computational complexity. Efficient solutions for real-time implementation, as well as possible phase approximations of the driving functions used to manage the loudspeakers, are discussed in this thesis.
Furthermore, a Weighted-Overlap-Add-based (WOLA-based) approach to WDAF and a WFS-based digital pointing of line arrays are presented: the objective of these studies is to apply these concepts in real scenarios, such as a teleconferencing system. Indeed, the aforementioned immersive audio reproduction techniques can be exploited to enhance the performance of life-sized teleconferencing systems, combining temporal and spatial requirements. Furthermore, audio rendering algorithms are needed to improve the perceived audio quality and to make the listening environment more pleasant by taking into account specific features of the environment. More specifically, equalization represents a powerful tool for dealing with frequency-response irregularities: an equalizer can compensate for loudspeaker placement and listening-room characteristics, and it can be applied in a teleconferencing system to make the communication as natural as possible. The evaluation of a multipoint equalizer and of a mixed-phase solution with a suitably designed room group delay are discussed in this work.
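The echo-cancellation setting described in this abstract is usually attacked with normalized adaptive filters such as NLMS. The following is a purely illustrative single-channel NLMS echo-canceller sketch in Python/NumPy, not the thesis's missing-fundamental or variable-step-size methods; the function name and parameter values are invented for this example:

```python
import numpy as np

def nlms_echo_canceller(far_end, mic, filter_len=64, mu=0.5, eps=1e-8):
    """Reduce the echo of `far_end` present in `mic` using an NLMS
    adaptive filter; returns the error (echo-reduced) signal.
    Illustrative sketch only: names and defaults are hypothetical."""
    w = np.zeros(filter_len)                 # adaptive filter weights
    err = np.zeros(len(mic))
    x_pad = np.concatenate([np.zeros(filter_len - 1), far_end])
    for n in range(len(mic)):
        x = x_pad[n:n + filter_len][::-1]    # most recent sample first
        y = w @ x                            # echo estimate
        e = mic[n] - y                       # residual after cancellation
        w += (mu / (x @ x + eps)) * e * x    # normalized weight update
        err[n] = e
    return err
```

A stereophonic canceller would run two such filters in parallel, one per loudspeaker channel; the non-uniqueness and convergence issues the abstract describes arise precisely from the coherence between those two reference channels.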
De Sena, Enzo. "Analysis, design and implementation of multichannel audio systems". Thesis, King's College London (University of London), 2013. https://kclpure.kcl.ac.uk/portal/en/theses/analysis-design-and-implementation-of-multichannel-audio-systems(2667506b-f58e-44f1-858a-bcb67d341720).html.
Daniel, Adrien. "Spatial Auditory Blurring and Applications to Multichannel Audio Coding". PhD thesis, Université Pierre et Marie Curie - Paris VI, 2011. http://tel.archives-ouvertes.fr/tel-00623670.
George, Sunish. "Objective models for predicting selected multichannel audio quality attributes". Thesis, University of Surrey, 2009. http://epubs.surrey.ac.uk/844426/.
Martí Guerola, Amparo. "Multichannel audio processing for speaker localization, separation and enhancement". Doctoral thesis, Universitat Politècnica de València, 2013. http://hdl.handle.net/10251/33101.
Pełny tekst źródłaMartí Guerola, A. (2013). Multichannel audio processing for speaker localization, separation and enhancement [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/33101
TESIS
Belloch Rodríguez, José Antonio. "PERFORMANCE IMPROVEMENT OF MULTICHANNEL AUDIO BY GRAPHICS PROCESSING UNITS". Doctoral thesis, Universitat Politècnica de València, 2014. http://hdl.handle.net/10251/40651.
Parry, Robert Mitchell. "Separation and Analysis of Multichannel Signals". Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/19743.
Wille, Joachim Olsen. "Performance of a Multichannel Audio Correction System Outside the Sweetspot: Further Investigations of the Trinnov Optimizer". Thesis, Norwegian University of Science and Technology, Department of Electronics and Telecommunications, 2008. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-8911.
This report is a continuation of the student project "Evaluation of Trinnov Optimizer audio reproduction system". It further investigates the properties and function of the Trinnov Optimizer, a correction system for audio reproduction systems. During the student project, measurements were performed in an anechoic lab to provide information on the functionality and abilities of the Trinnov Optimizer. Massive amounts of data were recorded, and these have also been the foundation of this report. The new work consists in interpreting those results with Matlab. The Optimizer by Trinnov [9] is a standalone system for reproduction of audio over a single- or multiple-loudspeaker setup. It is designed to correct frequency and phase response, in addition to correcting loudspeaker placements and cancelling simple early reflections in a multiple-loudspeaker setup. The purpose of investigating this issue further was to understand more about the sound field produced around the listening position, and to give more detailed results on the changes in the sound field after correction. The importance of correcting the system not only in the listening position but also in the surrounding area is obvious, because there is often more than one listener. This report gives further insight through physical measurements, rather than subjective statements, on the performance of a room and loudspeaker correction device. WinMLS has been used to measure the system with single- and multiple-microphone setups. Some results from the earlier student project are also included in this report to verify the measurement methods and to show the correspondence between the different measuring systems. Therefore, some of the data have been compared to the Trinnov Optimizer's own measurements and appear similar in this report. Some errors found in the initial report, concerning the results of the phase response measurements, have also been corrected.
Multiple loudspeakers in a 5.0 setup were measured with 5 microphones on a rotating boom, in order to measure the sound pressure over an area around the listening position. This allowed the effect of simple reflection cancellation, and the ability to generate virtual sources, to be investigated. For the specific cases investigated in this report, the Optimizer showed the following:
- Frequency and phase response will in every situation be optimized to the extent of the Optimizer's algorithms.
- Every case shows improvement in the frequency and phase response over the whole measured area.
- Direct frontal reflections were deconvolved up to 300 Hz over the whole measured area, with a radius of 56 cm.
- A reflection from the side was deconvolved roughly up to 200 Hz for microphones 1 through 3, up to a radius of 31.25 cm, and up to 100 Hz for microphones 4 and 5.
- The ability to create virtual sources corresponds fairly well to the theoretical expectations.
The video sequences that were developed give an interesting new angle on the problems that were investigated. Rather than examining plots from different angles, which is difficult and time consuming, the videos offer an intuitive perspective that illuminates the same issues as the commonly presented frequency and phase response measurements.
Sekiguchi, Kouhei. "A Unified Statistical Approach to Fast and Robust Multichannel Speech Separation and Dereverberation". Doctoral thesis, Kyoto University, 2021. http://hdl.handle.net/2433/263770.
Gaultier, Clément. "Conception et évaluation de modèles parcimonieux et d'algorithmes pour la résolution de problèmes inverses en audio". Thesis, Rennes 1, 2019. http://www.theses.fr/2019REN1S009/document.
Today's challenges in the context of audio and acoustic signal processing inverse problems are multiform. Addressing these problems often requires appropriate additional signal models because of their inherent ill-posedness. This work focuses on designing and evaluating audio reconstruction algorithms. It shows how various sparse models (analysis, synthesis, plain, structured or "social") are particularly suited to single- or multichannel audio signal reconstruction. The core of this work notably identifies the limits of state-of-the-art evaluation practice for audio declipping and proposes a rigorous large-scale evaluation protocol to determine the most appropriate methods depending on the context (music or speech, moderately or highly degraded signals). Experimental results demonstrate substantial quality improvements for some newly considered testing configurations. We also show the computational efficiency of the different methods and considerable speed improvements. Additionally, a part of this work is dedicated to the sound source localization problem, which we address with a "virtually supervised" machine learning technique. Experiments with this method show promising results on distance and direction-of-arrival estimation.
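The declipping task evaluated in this thesis can be illustrated with a deliberately crude baseline (the thesis itself studies sparse analysis/synthesis models, not this scheme): detect saturated samples and re-estimate each clipped run from a local parabola fitted to the surrounding reliable samples. The function name and the `context` parameter are invented for this sketch:

```python
import numpy as np

def declip(x, clip_level, context=4):
    """Re-estimate clipped samples by fitting a local parabola to the
    reliable samples flanking each clipped run. Toy baseline only,
    not a sparse-model declipper."""
    x = np.asarray(x, dtype=float)
    y = x.copy()
    clipped = np.abs(x) >= clip_level
    n = len(x)
    i = 0
    while i < n:
        if clipped[i]:
            j = i
            while j < n and clipped[j]:
                j += 1                       # [i, j) is one clipped run
            lo, hi = max(0, i - context), min(n, j + context)
            t = np.concatenate([np.arange(lo, i), np.arange(j, hi)])
            if len(t) >= 3:                  # need 3 points for a parabola
                coeffs = np.polyfit(t, x[t], 2)
                y[i:j] = np.polyval(coeffs, np.arange(i, j))
            i = j
        else:
            i += 1
    return y
```

Unlike this naive fit, sparse-model declippers also constrain the reconstructed samples to lie beyond the clipping level, which is one reason they perform better on heavily degraded signals.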
Books on the topic "Multichannel audio"
Audio Engineering Society. AES recommended practice for digital audio engineering: Serial multichannel audio digital interface (MADI). New York: Audio Engineering Society, 1991.
Grimm, Simon. Directivity Based Multichannel Audio Signal Processing For Microphones in Noisy Acoustic Environments. Wiesbaden: Springer Fachmedien Wiesbaden, 2019. http://dx.doi.org/10.1007/978-3-658-25152-9.
Meares, D. J. Evaluations of high quality, multichannel audio codecs carried out on behalf of ISO/IEC MPEG. London: British Broadcasting Corporation Research and Development Department, 1995.
Kyriakakis, Chris, Dai Tracy Yang, and C. C. Jay Kuo. High-Fidelity Multichannel Audio Coding. Hindawi, 2004.
Grimm, Simon. Directivity Based Multichannel Audio Signal Processing For Microphones in Noisy Acoustic Environments. Springer Vieweg, 2019.
Théberge, Paul, Kyle Devine, and Tom Everrett. Living Stereo: Histories and Cultures of Multichannel Sound. Bloomsbury Academic & Professional, 2015.
Théberge, Paul, Kyle Devine, and Tom Everrett. Living stereo: Histories and cultures of multichannel sound. 2015.
Kyriakakis, Chris, Dai Tracy Yang, and C.-C. Jay Kuo. High-Fidelity Multichannel Audio Coding (EURASIP Book Series on Signal Processing & Communications). 2nd ed. Hindawi Publishing Corporation, 2006.
Yang, Dai Tracy. High-Fidelity Multichannel Audio Coding (EURASIP Book Series on Signal Processing and Communications, Vol. 1). Hindawi Publishing Corporation, 2004.
Huff, W. A. Kelly. Regulating the Future. Praeger, 2001. http://dx.doi.org/10.5040/9798216006695.
Book chapters on the topic "Multichannel audio"
Toole, Floyd E. "Multichannel Audio". In Sound Reproduction, 397–432. 3rd ed. New York; London: Routledge, 2017. http://dx.doi.org/10.4324/9781315686424-15.
Markovich-Golan, Shmulik, Walter Kellermann, and Sharon Gannot. "Multichannel Parameter Estimation". In Audio Source Separation and Speech Enhancement, 219–34. Chichester, UK: John Wiley & Sons Ltd, 2018. http://dx.doi.org/10.1002/9781119279860.ch11.
Kameoka, Hirokazu, Hiroshi Sawada, and Takuya Higuchi. "General Formulation of Multichannel Extensions of NMF Variants". In Audio Source Separation, 95–124. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-73031-8_5.
Nugraha, Aditya Arie, Antoine Liutkus, and Emmanuel Vincent. "Deep Neural Network Based Multichannel Audio Source Separation". In Audio Source Separation, 157–85. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-73031-8_7.
Mandel, Michael I., Shoko Araki, and Tomohiro Nakatani. "Multichannel Clustering and Classification Approaches". In Audio Source Separation and Speech Enhancement, 235–61. Chichester, UK: John Wiley & Sons Ltd, 2018. http://dx.doi.org/10.1002/9781119279860.ch12.
Ozerov, Alexey, and Hirokazu Kameoka. "Gaussian Model Based Multichannel Separation". In Audio Source Separation and Speech Enhancement, 289–315. Chichester, UK: John Wiley & Sons Ltd, 2018. http://dx.doi.org/10.1002/9781119279860.ch14.
Ozerov, Alexey, Cédric Févotte, and Emmanuel Vincent. "An Introduction to Multichannel NMF for Audio Source Separation". In Audio Source Separation, 73–94. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-73031-8_4.
Kornatowski, Eugeniusz. "Monitoring of the Multichannel Audio Signal". In Computational Collective Intelligence. Technologies and Applications, 298–306. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-16732-4_32.
Ito, Nobutaka, Shoko Araki, and Tomohiro Nakatani. "Recent Advances in Multichannel Source Separation and Denoising Based on Source Sparseness". In Audio Source Separation, 279–300. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-73031-8_11.
Pertilä, Pasi, Alessio Brutti, Piergiorgio Svaizer, and Maurizio Omologo. "Multichannel Source Activity Detection, Localization, and Tracking". In Audio Source Separation and Speech Enhancement, 47–64. Chichester, UK: John Wiley & Sons Ltd, 2018. http://dx.doi.org/10.1002/9781119279860.ch4.
Pełny tekst źródłaStreszczenia konferencji na temat "Multichannel audio"
Ozerov, Alexey, Cagdas Bilen, and Patrick Perez. "Multichannel audio declipping". In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2016. http://dx.doi.org/10.1109/icassp.2016.7471757.
Boltze, Thomas, and Leon van de Kerkhof. "MPEG Multichannel Audio in DVB". In SMPTE Australia Conference. IEEE, 1999. http://dx.doi.org/10.5594/m001173.
Langer, Henrik, and Robert Manzke. "Embedded Multichannel Linux Audiosystem for Musical Applications". In AM '17: Audio Mostly 2017. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3123514.3123523.
Leglaive, Simon, Umut Simsekli, Antoine Liutkus, Roland Badeau, and Gael Richard. "Alpha-stable multichannel audio source separation". In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2017. http://dx.doi.org/10.1109/icassp.2017.7952221.
Lyman, Steve. "Contribution and Distribution of Multichannel Audio". In SMPTE Australia Conference. IEEE, 1999. http://dx.doi.org/10.5594/m001174.
Reiss, Joshua D. "Intelligent systems for mixing multichannel audio". In 2011 17th International Conference on Digital Signal Processing (DSP). IEEE, 2011. http://dx.doi.org/10.1109/icdsp.2011.6004988.
Yang, Dai, Hongmei Ai, Christos Kyriakakis, and C. C. Jay Kuo. "Embedded high-quality multichannel audio coding". In Photonics West 2001 - Electronic Imaging, edited by Sethuraman Panchanathan, V. Michael Bove, Jr. and Subramania I. Sudharsanan. SPIE, 2001. http://dx.doi.org/10.1117/12.420793.
Thomas, Mark R. P., Nikolay D. Gaubitch, Jon Gudnason, and Patrick A. Naylor. "A Practical Multichannel Dereverberation Algorithm using Multichannel DYPSA and Spatiotemporal Averaging". In 2007 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics. IEEE, 2007. http://dx.doi.org/10.1109/aspaa.2007.4392983.
Arteaga, Daniel, and Jordi Pons. "Multichannel-based Learning for Audio Object Extraction". In ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2021. http://dx.doi.org/10.1109/icassp39728.2021.9414585.
Wohlmayr, Michael, and Marián Képesi. "Joint position-pitch extraction from multichannel audio". In Interspeech 2007. ISCA, 2007. http://dx.doi.org/10.21437/interspeech.2007-454.