Journal articles on the topic "Multichannel audio"

Follow this link to see other types of publications on the topic: Multichannel audio.

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic "Multichannel audio".

Next to each source in the list of references there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication in PDF format and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Ono, Kazuho. "2. Multichannel Audio". Journal of the Institute of Image Information and Television Engineers 68, no. 8 (2014): 604–7. http://dx.doi.org/10.3169/itej.68.604.

2

Holbrook, Kyle A., and Michael J. Yacavone. "Multichannel audio reproduction system". Journal of the Acoustical Society of America 82, no. 2 (August 1987): 728. http://dx.doi.org/10.1121/1.395373.

3

Emmett, John. "Metering for Multichannel Audio". SMPTE Journal 110, no. 8 (August 2001): 532–36. http://dx.doi.org/10.5594/j17765.

4

Zhu, Qiushi, Jie Zhang, Yu Gu, Yuchen Hu, and Lirong Dai. "Multichannel AV-wav2vec2: A Framework for Learning Multichannel Multi-Modal Speech Representation". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 17 (March 24, 2024): 19768–76. http://dx.doi.org/10.1609/aaai.v38i17.29951.

Abstract
Self-supervised speech pre-training methods have developed rapidly in recent years and have proven very effective for many near-field single-channel speech tasks. However, far-field multichannel speech processing suffers from the scarcity of labeled multichannel data and complex ambient noises. The efficacy of self-supervised learning for far-field multichannel and multi-modal speech processing has not been well explored. Considering that visual information helps to improve speech recognition performance in noisy scenes, in this work we propose the multichannel multi-modal speech self-supervised learning framework AV-wav2vec2, which utilizes video and multichannel audio data as inputs. First, we propose a multi-path structure to process multi-channel audio streams and a visual stream in parallel, with intra- and inter-channel contrastive losses as training targets to fully exploit the rich information in multi-channel speech data. Second, based on contrastive learning, we use additional single-channel audio data, which is trained jointly to improve the performance of the multichannel multi-modal representation. Finally, we use a Chinese multichannel multi-modal dataset recorded in real scenarios to validate the effectiveness of the proposed method on audio-visual speech recognition (AVSR), automatic speech recognition (ASR), visual speech recognition (VSR) and audio-visual speaker diarization (AVSD) tasks.
5

Martyniuk, Tetiana, Maksym Mykytiuk, and Mykola Zaitsev. "Features of Analysis of Multichannel Audio Signals". ГРААЛЬ НАУКИ, no. 2-3 (April 9, 2021): 302–5. http://dx.doi.org/10.36074/grail-of-science.02.04.2021.061.

Abstract
The rapid growth of audio content has created the need for tools to analyze and control the quality of audio signals using software and hardware modules. The fastest-growing area is software and programming languages. Today, the Python programming language has the broadest processing and visualization capabilities for working with sound. When developing programs for computational signal analysis, it provides an optimal balance of high- and low-level programming functions. Compared to Matlab or other similar solutions, Python is free and allows standalone applications to be created without the need for large, permanently installed files and a virtual environment.
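As a concrete illustration of the kind of Python-based audio analysis referred to above, here is a minimal sketch using NumPy and SciPy; the file name and the chosen statistics are assumptions for illustration and are not taken from the paper.

    # Minimal per-channel analysis of a multichannel WAV file (illustrative only).
    import numpy as np
    from scipy.io import wavfile

    rate, data = wavfile.read("recording.wav")   # hypothetical file; shape (samples,) or (samples, channels)
    if data.ndim == 1:
        data = data[:, np.newaxis]               # treat mono as a single-channel array
    data = data.astype(np.float64)

    for ch in range(data.shape[1]):
        x = data[:, ch]
        rms = np.sqrt(np.mean(x ** 2))                        # per-channel RMS level
        freqs = np.fft.rfftfreq(len(x), 1 / rate)
        peak_hz = freqs[np.argmax(np.abs(np.fft.rfft(x)))]    # dominant spectral component
        print(f"channel {ch}: RMS = {rms:.1f}, spectral peak = {peak_hz:.1f} Hz")
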
6

Gao, Xue Fei, Guo Yang, Jing Wang, Xiang Xie, and Jing Ming Kuang. "A Backward Compatible Multichannel Audio Compression Method". Advanced Materials Research 756-759 (September 2013): 977–81. http://dx.doi.org/10.4028/www.scientific.net/amr.756-759.977.

Abstract
This paper proposes a backward-compatible multichannel audio codec based on downmix and upmix operations. The codec represents a multichannel audio input signal with a downmixed mono signal and spatial parametric data. The encoding method consists of three parts: spatial-temporal analysis of the audio signal, compression of the multi-channel audio into mono audio, and encoding of the mono signal. The proposed codec combines high audio quality with a low parameter coding rate, and the method is simpler and more effective than conventional methods. With this method, it is possible to transmit or store multi-channel audio signals as mono audio signals.
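The abstract above follows the usual parametric pattern: transmit a mono downmix plus low-rate spatial side information. The sketch below illustrates that idea only; it is not the authors' codec, and the choice of per-channel level differences as the spatial parameters is an assumption made for illustration.

    # Illustrative downmix/upmix with level-difference side information (not the paper's codec).
    import numpy as np

    def encode_frame(frame):
        """frame: (channels, samples) array for one analysis frame."""
        mono = frame.mean(axis=0)                                   # simple mono downmix
        eps = 1e-12
        ref_energy = np.sum(mono ** 2) + eps
        # One level parameter per channel, in dB relative to the downmix (the side information).
        cld_db = 10 * np.log10((np.sum(frame ** 2, axis=1) + eps) / ref_energy)
        return mono, cld_db

    def decode_frame(mono, cld_db):
        gains = np.sqrt(10 ** (cld_db / 10))                        # level differences back to gains
        return gains[:, None] * mono[None, :]                       # crude upmix: scaled copies of the mono signal

    mono, params = encode_frame(np.random.randn(6, 1024))           # e.g. one frame of a 5.1 signal
    print(mono.shape, params.shape, decode_frame(mono, params).shape)
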
7

Gunawan, Teddy Surya, and Mira Kartiwi. "Performance Evaluation of Multichannel Audio Compression". Indonesian Journal of Electrical Engineering and Computer Science 10, no. 1 (April 1, 2018): 146. http://dx.doi.org/10.11591/ijeecs.v10.i1.pp146-153.

Abstract
In recent years, multichannel audio systems have been widely used in modern sound devices, as they can provide a more realistic and engaging experience for the listener. This paper focuses on the performance evaluation of three lossy codecs (AAC, Ogg Vorbis, and Opus) and three lossless codecs (FLAC, TrueAudio, and WavPack) for multichannel audio signals, including stereo, 5.1, and 7.1 channel configurations. Experiments were conducted on the same three audio files with different channel configurations. The performance of each encoder was evaluated based on its encoding time (averaged over 100 runs), data reduction, and audio quality. There is usually a trade-off between the three metrics, so to simplify the evaluation a new integrated performance metric was proposed that combines all three. Using the new measure, FLAC was found to be the best lossless codec, while Ogg Vorbis or Opus was found to be the best lossy codec, depending on the channel configuration. This result can be used to determine the proper audio format for multichannel audio systems.
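The paper combines encoding time, data reduction, and audio quality into a single integrated metric, but its formula is not reproduced in this listing. The sketch below is therefore only a hypothetical weighted combination of the three quantities, not the metric proposed by the authors.

    # Hypothetical combined figure of merit; the paper's actual metric is not reproduced here.
    def combined_score(encode_time_s, compression_ratio, quality_0_to_5,
                       w_time=1.0, w_size=1.0, w_quality=1.0):
        time_term = 1.0 / (1.0 + encode_time_s)         # lower encoding time is better
        size_term = 1.0 - 1.0 / compression_ratio       # fraction of data removed
        quality_term = quality_0_to_5 / 5.0             # normalized listening score
        total_weight = w_time + w_size + w_quality
        return (w_time * time_term + w_size * size_term + w_quality * quality_term) / total_weight

    # Example: 2.5 s encoding time, 1.6:1 compression, quality 4.2 out of 5.
    print(round(combined_score(2.5, 1.6, 4.2), 3))
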
8

Dong, Yingjun, Neil G. MacLaren, Yiding Cao, Francis J. Yammarino, Shelley D. Dionne, Michael D. Mumford, Shane Connelly, Hiroki Sayama, and Gregory A. Ruark. "Utterance Clustering Using Stereo Audio Channels". Computational Intelligence and Neuroscience 2021 (September 25, 2021): 1–8. http://dx.doi.org/10.1155/2021/6151651.

Abstract
Utterance clustering is one of the actively researched topics in audio signal processing and machine learning. This study aims to improve the performance of utterance clustering by processing multichannel (stereo) audio signals. Processed audio signals were generated by combining left- and right-channel audio signals in a few different ways and then by extracting the embedded features (also called d-vectors) from those processed audio signals. This study applied the Gaussian mixture model for supervised utterance clustering. In the training phase, a parameter-sharing Gaussian mixture model was obtained to train the model for each speaker. In the testing phase, the speaker with the maximum likelihood was selected as the detected speaker. Results of experiments with real audio recordings of multiperson discussion sessions showed that the proposed method that used multichannel audio signals achieved significantly better performance than a conventional method with mono-audio signals in more complicated conditions.
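To make the channel-combination and model-selection steps described above concrete, here is a toy sketch with scikit-learn; the embeddings are random placeholders standing in for real d-vectors, and none of the settings come from the paper.

    # Toy sketch: combine stereo channels, then pick the speaker whose GMM scores highest.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def combine_stereo(left, right, mode="mean"):
        if mode == "mean":
            return 0.5 * (left + right)                # mix the two channels down to one signal
        return np.concatenate([left, right])           # alternative: keep both channels end to end

    rng = np.random.default_rng(0)
    # Placeholder "d-vectors"; in practice they come from a speaker-embedding network.
    train = {spk: rng.normal(loc=i, scale=0.5, size=(200, 16)) for i, spk in enumerate(["A", "B"])}

    models = {spk: GaussianMixture(n_components=2, random_state=0).fit(X) for spk, X in train.items()}

    test_vec = rng.normal(loc=1.0, scale=0.5, size=(1, 16))
    detected = max(models, key=lambda spk: models[spk].score(test_vec))
    print("detected speaker:", detected)
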
9

Fujimori, Kazuki, Bisser Raytchev, Kazufumi Kaneda, Yasufumi Yamada, Yu Teshima, Emyo Fujioka, Shizuko Hiryu, and Toru Tamaki. "Localization of Flying Bats from Multichannel Audio Signals by Estimating Location Map with Convolutional Neural Networks". Journal of Robotics and Mechatronics 33, no. 3 (June 20, 2021): 515–25. http://dx.doi.org/10.20965/jrm.2021.p0515.

Abstract
We propose a method that uses ultrasound audio signals from a multichannel microphone array to estimate the positions of flying bats. The proposed model uses a deep convolutional neural network that takes multichannel signals as input and outputs the probability maps of the locations of bats. We present experimental results using two ultrasound audio clips of different bat species and show numerical simulations with synthetically generated sounds.
10

Hotho, Gerard, Lars F. Villemoes, and Jeroen Breebaart. "A Backward-Compatible Multichannel Audio Codec". IEEE Transactions on Audio, Speech, and Language Processing 16, no. 1 (January 2008): 83–93. http://dx.doi.org/10.1109/tasl.2007.910768.

11

Chen, Ling, Wei Wang, and Cheng Jiang. "Research on Embedded Multichannel Audio Conversion Module". Journal of Physics: Conference Series 2625, no. 1 (October 1, 2023): 012075. http://dx.doi.org/10.1088/1742-6596/2625/1/012075.

Abstract
With the rapid development of information technology and the wide use of audio signal processing in underwater acoustic signal processing, acoustic audio signal acquisition, conversion, and transmission technology plays an important role. To enhance signal acquisition and conversion with high reliability, this paper designs an embedded multi-channel audio conversion module. The module achieves multi-channel, multi-sample-rate operation with synchronous analog-to-digital/digital-to-analog conversion (ADC/DAC) and is additionally equipped with dual network redundancy. The module uses a chip integrating an ARM core and programmable FPGA logic as the main controller. The underlying driver and application code ensure reliable operation of the DAC and ADC chips and stable network transmission. Experimental verification shows that the audio conversion module performs well in multi-channel, multi-sample-rate synchronous ADC/DAC operation, with crosstalk between channels below -50 dB. The dual network redundancy design and the use of an RS232 serial port ensure the fidelity, reliability, and reprogrammability of the audio conversion function, so the design can be used in a wide range of scenarios.
12

Sobirin, Muhammad, and Ikhwana Elfitri. "Perancangan dan Analisis Kinerja Pengkodean Audio Multichannel Dengan Metode Closed Loop". Jurnal Nasional Teknik Elektro 3, no. 2 (September 1, 2014): 157–66. http://dx.doi.org/10.20449/jnte.v3i2.80.

13

Hidri Adel, Meddeb Souad, Abdulqadir Alaqeeli, and Amiri Hamid. "Beamforming Techniques for Multichannel audio Signal Separation". International Journal of Digital Content Technology and its Applications 6, no. 20 (November 30, 2012): 659–67. http://dx.doi.org/10.4156/jdcta.vol6.issue20.72.

14

Wrigley, S. N., G. J. Brown, V. Wan, and S. Renals. "Speech and crosstalk detection in multichannel audio". IEEE Transactions on Speech and Audio Processing 13, no. 1 (January 2005): 84–91. http://dx.doi.org/10.1109/tsa.2004.838531.

15

Gulsrud, Timothy. "Acoustical design of multichannel audio listening environments". Journal of the Acoustical Society of America 123, no. 5 (May 2008): 3202. http://dx.doi.org/10.1121/1.2933360.

16

Sarensen, J. A. "High-Fidelity Multichannel Audio Coding [Book Review]". IEEE Signal Processing Magazine 22, no. 5 (September 2005): 150–53. http://dx.doi.org/10.1109/msp.2005.1511837.

17

Yao, Shu-Nung, and Chang-Wei Huang. "Autonomous Technology for 2.1 Channel Audio Systems". Electronics 11, no. 3 (January 23, 2022): 339. http://dx.doi.org/10.3390/electronics11030339.

Abstract
During the COVID-19 pandemic, smart home requirements have shifted toward entertainment at home. The purpose of this research project was therefore to develop a robotic audio system for home automation. High-end audio systems normally refer to multichannel home theaters. Although multichannel audio systems enable people to enjoy surround sound as they do at the cinema, stereo audio systems have been popularly used since the 1980s. The major shortcoming of a stereo audio system is its narrow listening area. If listeners are out of the area, the system has difficulty providing a stable sound field. This is because of the head-shadow effect blocking the high-frequency sound. The proposed system, by integrating computer vision and robotics, can track the head movement of a user and adjust the directions of loudspeakers, thereby helping the sound wave travel through the air. Unlike previous studies, in which only a diminutive scenario was built, in this work, the idea was applied to a commercial 2.1 audio system, and listening tests were conducted. The theory and the simulation coincide with the experimental results. The approximate rate of audio quality improvement is 31%. The experimental results are encouraging, especially for high-pitched music.
18

Zhu, Yunxi, Wenyao Ma, Zheng Kuang, Ming Wu, and Jun Yang. "Optimal audio beam pattern synthesis for an enhanced parametric array loudspeaker". Journal of the Acoustical Society of America 154, no. 5 (November 1, 2023): 3210–22. http://dx.doi.org/10.1121/10.0022415.

Abstract
A parametric array loudspeaker (PAL) generates highly directional audible sound in air with a small aperture compared to a conventional loudspeaker. In indoor applications, however, the long propagation distance of a PAL causes reflections, which disturb the reproduction of narrow audio beams. Moreover, sound distortion appears along the off-axis direction due to the frequency dependence of the beam width. This study proposed an optimal audio beam pattern synthesis for a PAL based on convex optimization, which can design the audio beam of a PAL with an optimal solution. The proposed method overcomes these limitations when applied to a length-limited PAL for audio spot control and to a multichannel PAL array for a constant-beam-width audio beam. For a length-limited PAL, the method restricts the audio spot to a smaller region and weakens the sound leakage along the off-axis direction; for a multichannel PAL array, it also achieves a constant beam width near the radiator axis. Simulations and experiments verify the effectiveness of the proposed method, which will enhance the performance of a PAL in scenarios where control of the audio beam is required.
19

Lee, Dongheon, and Jung-Woo Choi. "Inter-channel Conv-TasNet for source-agnostic multichannel audio enhancement". INTER-NOISE and NOISE-CON Congress and Conference Proceedings 265, no. 5 (February 1, 2023): 2068–75. http://dx.doi.org/10.3397/in_2022_0297.

Abstract
Deep neural network (DNN) models for the audio enhancement task have been developed in various ways. Most of them rely on the source-dependent characteristics, such as temporal or spectral characteristics of speeches, to suppress noises embedded in measured signals. Only a few studies have attempted to exploit the spatial information embedded in multichannel data. In this work, we propose a DNN architecture that fully exploits inter-channel relations to realize source-agnostic audio enhancement. The proposed model is based on the fully convolutional time-domain audio separation network (Conv-TasNet) but extended to extract and learn spatial features from multichannel input signals. The use of spatial information is facilitated by separating each convolutional layer into dedicated inter-channel 1x1 Conv blocks and 2D spectro-temporal Conv blocks. The performance of the proposed model is verified through the training and test with heterogeneous datasets including speech and other audio datasets, which demonstrates that the enriched spatial information from the proposed architecture enables the versatile audio enhancement in a source-agnostic way.
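The key architectural idea above, separating each layer into inter-channel 1x1 convolutions and 2D spectro-temporal convolutions, can be sketched in a few lines of PyTorch. The block below only illustrates that split; the sizes and the surrounding Conv-TasNet structure are assumptions, not the authors' configuration.

    # Illustrative layer split: a 1x1 conv mixes microphone channels, a 3x3 conv works on the
    # (feature, time) plane. All dimensions are assumptions.
    import torch
    import torch.nn as nn

    class InterChannelBlock(nn.Module):
        def __init__(self, n_mics=4):
            super().__init__()
            self.inter_channel = nn.Conv2d(n_mics, n_mics, kernel_size=1)                # across channels
            self.spectro_temporal = nn.Conv2d(n_mics, n_mics, kernel_size=3, padding=1)  # within channel maps
            self.act = nn.PReLU()

        def forward(self, x):                 # x: (batch, n_mics, features, time)
            x = self.act(self.inter_channel(x))
            return self.act(self.spectro_temporal(x))

    out = InterChannelBlock()(torch.randn(2, 4, 64, 100))
    print(out.shape)                          # torch.Size([2, 4, 64, 100])
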
20

Mattos, Tiago F., and Bennett M. Brooks. "Comparison of recording studio control room operational response measurements for single, stereo, and immersive audio monitor configurations". Journal of the Acoustical Society of America 152, no. 4 (October 2022): A104. http://dx.doi.org/10.1121/10.0015691.

Abstract
Audio monitoring in recording studio control rooms has evolved continuously since the beginning of multitrack systems. In recent years, control rooms have adapted to using multiple audio monitors (loudspeakers) needed for the immersive audio experience. The primary technical recommendations for determining the acoustical quality of a control room are given in EBU-TECH-3276 and ITU BS.1116. The results of measuring the Operational Room Response Curve (ORRC) can differ significantly for only one audio monitor operating compared to the two monitors required for stereo. For multichannel immersive audio configurations, there can also be significant differences in the measured ORRC. A recent immersive system, known as Dolby Atmos, follows the Dolby Laboratories recommendations which generally comply with the EBU/ITU specifications. The goal of this research is to analyze and compare the measurements of the ORRC per the EBU/ITU/Dolby standards for various audio monitor configurations in a control room. These include individual monitors operating alone, two monitors in a stereo configuration operating at the same time, and combinations of monitors in the multichannel immersive audio system. The impact of the coupled system of room acoustics and multiple loudspeakers on the decision quality for the studio user will be addressed.
21

Smith, William P. "Method of decoding two-channel matrix encoded audio to reconstruct multichannel audio". Journal of the Acoustical Society of America 120, no. 2 (2006): 573. http://dx.doi.org/10.1121/1.2336656.

22

Li, Zhen, and Qian Yi Yang. "The Research of Dynamic Sound of Multichannel System Based on Matlab". Applied Mechanics and Materials 602-605 (August 2014): 2569–71. http://dx.doi.org/10.4028/www.scientific.net/amm.602-605.2569.

Abstract
With the emergence and development of 3D video technology, audiences have higher expectations of the sound while watching 3D video, so we studied dynamic sound effects under the existing multichannel audio standard. In this paper, an audio recording of a sea wave was processed with a Gaussian function in MATLAB and divided into several parts saved as separate audio files. Each audio file was then sent to its own channel to simulate a stereo sound field. Test results showed that the effect will provide an excellent experience for the audience.
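A rough Python equivalent of the processing described above, assuming a synthetic tone in place of the sea-wave recording and an arbitrary number of output channels:

    # Rough sketch: shape a signal with a Gaussian envelope, then split it across channels.
    import numpy as np
    from scipy.signal.windows import gaussian

    rate = 44100
    t = np.arange(4 * rate) / rate
    wave = np.sin(2 * np.pi * 220 * t)                    # stand-in for the sea-wave recording

    envelope = gaussian(len(wave), std=len(wave) / 6)     # Gaussian amplitude envelope
    shaped = wave * envelope

    n_channels = 6
    segments = np.array_split(shaped, n_channels)         # one segment per loudspeaker channel
    for i, seg in enumerate(segments):
        print(f"channel {i}: {len(seg)} samples, peak {np.max(np.abs(seg)):.3f}")
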
23

Pulkki, V., and T. Hirvonen. "Localization of virtual sources in multichannel audio reproduction". IEEE Transactions on Speech and Audio Processing 13, no. 1 (January 2005): 105–19. http://dx.doi.org/10.1109/tsa.2004.838533.

24

Faller, C. "Parametric multichannel audio coding: synthesis of coherence cues". IEEE Transactions on Audio, Speech and Language Processing 14, no. 1 (January 2006): 299–310. http://dx.doi.org/10.1109/tsa.2005.854105.

25

Nugraha, Aditya Arie, Antoine Liutkus, and Emmanuel Vincent. "Multichannel Audio Source Separation With Deep Neural Networks". IEEE/ACM Transactions on Audio, Speech, and Language Processing 24, no. 9 (September 2016): 1652–64. http://dx.doi.org/10.1109/taslp.2016.2580946.

26

Leglaive, Simon, Roland Badeau, and Gael Richard. "Multichannel Audio Source Separation With Probabilistic Reverberation Priors". IEEE/ACM Transactions on Audio, Speech, and Language Processing 24, no. 12 (December 2016): 2453–65. http://dx.doi.org/10.1109/taslp.2016.2614140.

27

Elfitri, I., Banu Günel, and A. M. Kondoz. "Multichannel Audio Coding Based on Analysis by Synthesis". Proceedings of the IEEE 99, no. 4 (April 2011): 657–70. http://dx.doi.org/10.1109/jproc.2010.2102310.

28

Pulkki, V., and M. Karjalainen. "Multichannel audio rendering using amplitude panning [DSP Applications]". IEEE Signal Processing Magazine 25, no. 3 (May 2008): 118–22. http://dx.doi.org/10.1109/msp.2008.918025.

29

Miron, Marius, Julio J. Carabias-Orti, Juan J. Bosch, Emilia Gómez, and Jordi Janer. "Score-Informed Source Separation for Multichannel Orchestral Recordings". Journal of Electrical and Computer Engineering 2016 (2016): 1–19. http://dx.doi.org/10.1155/2016/8363507.

Abstract
This paper proposes a system for score-informed audio source separation for multichannel orchestral recordings. The orchestral music repertoire relies on the existence of scores. Thus, a reliable separation requires a good alignment of the score with the audio of the performance. To that end, automatic score alignment methods are reliable when allowing a tolerance window around the actual onset and offset. Moreover, several factors increase the difficulty of our task: a highly reverberant image, large ensembles having rich polyphony, and a large variety of instruments recorded within a distant-microphone setup. To solve these problems, we design context-specific methods such as the refinement of score-following output in order to obtain a more precise alignment. Moreover, we extend a close-microphone separation framework to deal with the distant-microphone orchestral recordings. Then, we propose the first open evaluation dataset in this musical context, including annotations of the notes played by multiple instruments from an orchestral ensemble. The evaluation aims at analyzing the interactions of important parts of the separation framework on the quality of separation. Results show that we are able to align the original score with the audio of the performance and separate the sources corresponding to the instrument sections.
30

Dewi Nurdiyah, Eko Mulyanto Yuniarno, Yoyon Kusnendar Suprapto, and Mauridhi Hery Purnomo. "IRAWNET: A Method for Transcribing Indonesian Classical Music Notes Directly from Multichannel Raw Audio". EMITTER International Journal of Engineering Technology 11, no. 2 (December 22, 2023): 246–64. http://dx.doi.org/10.24003/emitter.v11i2.827.

Abstract
A challenging task when developing real-time Automatic Music Transcription (AMT) methods is directly leveraging inputs from multichannel raw audio without any handcrafted signal transformation and feature extraction steps. The crucial problems are that raw audio contains only an amplitude at each timestamp, and that the signals of the left and right channels have different amplitude intensities and onset times. This study addresses these issues by proposing the IRawNet method with fused feature layers to merge the differing amplitudes of multichannel raw audio. IRawNet aims to transcribe Indonesian classical music notes and was validated on a Gamelan music dataset. The Synthetic Minority Oversampling Technique (SMOTE) was used to overcome the class imbalance of the dataset. Under various experimental scenarios, the performance effects of oversampled data, hyperparameter tuning, and fused feature layers are analyzed. Furthermore, the performance of the proposed method was compared with a Temporal Convolutional Network (TCN), Deep WaveNet, and the monochannel IRawNet. The results show that the proposed method achieves superior results on nearly all metrics, with an accuracy of 0.871, AUC of 0.988, precision of 0.927, recall of 0.896, and F1 score of 0.896.
31

Qiao, Yue, Léo Guadagnin, and Edgar Choueiri. "Isolation performance metrics for personal sound zone reproduction systems". JASA Express Letters 2, no. 10 (October 2022): 104801. http://dx.doi.org/10.1121/10.0014604.

Abstract
Two isolation performance metrics, inter-zone isolation (IZI) and inter-program isolation (IPI), are introduced for evaluating personal sound zone (PSZ) systems. Compared to the commonly used acoustic contrast metric, IZI and IPI are generalized for multichannel audio and quantify the isolation of sound zones and of audio programs, respectively. The two metrics are shown to be generally non-interchangeable and suitable for different scenarios, such as generating dark zones (IZI) or minimizing audio-on-audio interference (IPI). Furthermore, two examples with free-field simulations are presented and demonstrate the applications of IZI and IPI in evaluating PSZ performance in different rendering modes and PSZ robustness.
32

Lee, Tae-Jin, Jae-Hyoun Yoo, Jeong-Il Seo, Kyeong-Ok Kang, and Whan-Woo Kim. "Multichannel Audio Reproduction Technology based on 10.2ch for UHDTV". Journal of Broadcast Engineering 17, no. 5 (September 30, 2012): 827–37. http://dx.doi.org/10.5909/jbe.2012.17.5.827.

33

Dai Yang, Hongmei Ai, C. Kyriakakis, and C. C. J. Kuo. "High-fidelity multichannel audio coding with Karhunen-Loeve transform". IEEE Transactions on Speech and Audio Processing 11, no. 4 (July 2003): 365–80. http://dx.doi.org/10.1109/tsa.2003.814375.

34

Olson, Bruce C. "Using 3-D modeling to design multichannel audio systems". Journal of the Acoustical Society of America 113, no. 4 (April 2003): 2201. http://dx.doi.org/10.1121/1.4780191.

35

Ye, Qinghua, Hefei Yang, and Xiaodong Li. "A simplified crosstalk cancellation method for multichannel audio equalization". Journal of the Acoustical Society of America 131, no. 4 (April 2012): 3218. http://dx.doi.org/10.1121/1.4708000.

36

Bharitkar, Sunil, Grant Davidson, Louis Fielder, and Poppy Crum. "Tutorial on Critical Listening of Multichannel Audio Codec Performance". SMPTE Motion Imaging Journal 121, no. 8 (November 2012): 30–45. http://dx.doi.org/10.5594/j18246xy.

37

Bayram, Ilker. "A Multichannel Audio Denoising Formulation Based on Spectral Sparsity". IEEE/ACM Transactions on Audio, Speech, and Language Processing 23, no. 12 (December 2015): 2272–85. http://dx.doi.org/10.1109/taslp.2015.2479042.

38

Pagès, Guilhem, Roberto Longo, Laurent Simon, and Manuel Melon. "Online adaptive identification of multichannel systems for audio applications". Journal of the Acoustical Society of America 155, no. 1 (January 1, 2024): 229–40. http://dx.doi.org/10.1121/10.0024149.

Abstract
Impulse response (IR) estimation of multi-input acoustic systems is a prerequisite for many audio applications. In this paper, an adaptive identification approach based on the Autostep algorithm is extended to the simultaneous estimation of room IRs for multiple-input single-output linear time-invariant systems without any a priori information. The proposed algorithm is initially evaluated in a simulated room with several sound sources active at the same time, and an experimental validation is then presented for a semi-anechoic chamber and an arbitrary room. Special attention is dedicated to the algorithm's convergence behavior under different meta-parameter settings. Results are finally compared with the normalized version of the least mean squares algorithm.
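For context, the normalized least-mean-squares baseline mentioned at the end of the abstract can be written as a short identification loop; the filter length, step size, and simulated system below are illustrative, and this is not the Autostep algorithm itself.

    # Minimal NLMS system-identification sketch (illustrative parameters only).
    import numpy as np

    rng = np.random.default_rng(1)
    true_ir = rng.normal(size=32)                      # unknown impulse response to identify
    x = rng.normal(size=20000)                         # excitation signal
    d = np.convolve(x, true_ir)[:len(x)]               # observed system output

    L, mu, eps = 32, 0.5, 1e-8
    w = np.zeros(L)
    for n in range(L, len(x)):
        x_vec = x[n - L + 1:n + 1][::-1]               # most recent L samples, newest first
        e = d[n] - w @ x_vec                           # a-priori error
        w += mu * e * x_vec / (x_vec @ x_vec + eps)    # normalized LMS update

    print("coefficient error:", np.linalg.norm(w - true_ir))
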
39

Noll, Peter, and Davis Pan. "ISO/MPEG Audio Coding". International Journal of High Speed Electronics and Systems 08, no. 01 (March 1997): 69–118. http://dx.doi.org/10.1142/s0129156497000044.

Abstract
The Moving Pictures Expert Group within the International Organization of Standardization (ISO/MPEG) has developed, and is presently developing, a series of audiovisual standards. Its audio coding standard MPEG Phase 1 is the first international standard in the field of high quality digital audio compression and has been applied in many areas, both for consumer and professional audio. Typical application areas for digital audio are in the fields of audio production, program distribution and exchange, digital sound broadcasting, digital storage, and various multimedia applications. This paper will describe in some detail the main features of MPEG Phase 1 coders. As a logical further step in digital audio a multichannel audio standard MPEG Phase 2 is being standardized to provide an improved stereophonic image for audio-only applications including teleconferencing and for improved television systems. The status of this standardization process will be covered briefly.
40

Lee, Yong Ju, Jeongil Seo, Seungkwon Beack, Daeyoung Jang, Kyeongok Kang, Jinwoong Kim, and Jin Woo Hong. "Design and Development of T-DMB Multichannel Audio Service System Based on Spatial Audio Coding". ETRI Journal 31, no. 4 (August 5, 2009): 365–75. http://dx.doi.org/10.4218/etrij.09.0108.0557.

41

Houge, Benjamin, and Jutta Friedrichs. "Food Opera: A New Genre for Audio-gustatory Expression". Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment 9, no. 5 (June 30, 2021): 59–63. http://dx.doi.org/10.1609/aiide.v9i5.12652.

Abstract
“Food opera” is the term that the authors have applied to a new genre of audio-gustatory experience, in which a multi-course meal is paired with real-time, algorithmically generated music, deployed over a massively multichannel sound system. This paper presents an overview of the system used to deploy the sonic component of these events, while also exploring the history and creative potential of this unique multisensory format.
42

Sánchez-Hevia, Héctor A., Roberto Gil-Pita, and Manuel Rosa-Zurera. "Efficient multichannel detection of impulsive audio events for wireless networks". Applied Acoustics 179 (August 2021): 108005. http://dx.doi.org/10.1016/j.apacoust.2021.108005.

43

Vernony, Steve, and Tony Spath. "Carrying Multichannel Audio in a Stereo Production and Distribution Infrastructure". SMPTE Journal 111, no. 2 (February 2002): 97–102. http://dx.doi.org/10.5594/j16393.

44

Hong, Jin-Woo, Dae-Young Jang, and Seong-Han Kim. "Multichannel audio signal compression and quality assessment for AV communications". Journal of the Acoustical Society of America 103, no. 5 (May 1998): 3027. http://dx.doi.org/10.1121/1.422552.

45

Kendall, Gary S. "Spatial Perception and Cognition in Multichannel Audio for Electroacoustic Music". Organised Sound 15, no. 03 (October 25, 2010): 228–38. http://dx.doi.org/10.1017/s1355771810000336.

46

Mihashi, Tadashi, Tomoya Takatani, Shigeki Miyabe, Yoshimitsu Mori, Hiroshi Saruwatari, and Kiyohiro Shikano. "Compressive coding for multichannel audio signals using independent component analysis". Journal of the Acoustical Society of America 120, no. 5 (November 2006): 3219. http://dx.doi.org/10.1121/1.4788173.

47

Rao, Dan. "Analysis on decorrelation effect of audio signal in multichannel reproduction". Journal of the Acoustical Society of America 141, no. 5 (May 2017): 3902. http://dx.doi.org/10.1121/1.4988783.

48

George, S., S. Zielinski, and F. Rumsey. "Feature Extraction for the Prediction of Multichannel Spatial Audio Fidelity". IEEE Transactions on Audio, Speech and Language Processing 14, no. 6 (November 2006): 1994–2005. http://dx.doi.org/10.1109/tasl.2006.883248.

49

Lee, Seokjin, Sang Ha Park, and Koeng-Mo Sung. "Beamspace-Domain Multichannel Nonnegative Matrix Factorization for Audio Source Separation". IEEE Signal Processing Letters 19, no. 1 (January 2012): 43–46. http://dx.doi.org/10.1109/lsp.2011.2173192.

50

Battisti, Luca, Angelo Farina, and Antonella Bevilacqua. "Implementation of non-equal-partition multi-channel convolver". INTER-NOISE and NOISE-CON Congress and Conference Proceedings 265, no. 6 (February 1, 2023): 1570–81. http://dx.doi.org/10.3397/in_2022_0220.

Abstract
Convolution has become a widely exploited signal operation thanks to its many applications in digital signal processing. In the realm of audio processing, convolution has the particular meaning of imposing a spectral and/or temporal structure onto a sound. These structures are entirely given by the signal with which the sound is convolved, called the Impulse Response (IR). Such signals contain a kind of acoustic footprint that can be transferred to another sound, which consequently acquires the same acoustic characteristics. With a multichannel approach, convolution takes on a further meaning and a wider field of application. Indeed, it is exploited in modern spatial sound techniques such as Ambisonics, which require matrix processing of the signals involved. Ambisonics recordings, for example, are made with special coincident multi-capsule microphone arrays whose signals can be converted to the standard Ambisonics format by a multichannel convolver. A similar concept applies to the mixing stage of audio production, where direction-based audio objects must be converted to the Ambisonics format to be reproduced over the corresponding loudspeaker setups. The aim of this work is to analyse an existing multichannel convolver algorithm and evaluate its efficiency. Moreover, the handling of the matrix of filters has shown weaknesses when new matrices are assembled. The proposed solution offers a convenient way to deal with the matrices and improves the efficiency of the algorithm.
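For orientation, the matrix-of-filters operation that such a convolver implements can be sketched as a single unpartitioned FFT convolution in Python; a real-time convolver would instead use (non-)uniformly partitioned blocks for low latency, which is the aspect the paper addresses.

    # Naive matrix convolver: output[j] = sum over i of input[i] convolved with ir[i][j].
    import numpy as np

    def matrix_convolve(inputs, ir_matrix):
        """inputs: (n_in, n_samples); ir_matrix: (n_in, n_out, ir_len)."""
        n_in, n_samples = inputs.shape
        _, n_out, ir_len = ir_matrix.shape
        n_fft = 1 << int(np.ceil(np.log2(n_samples + ir_len - 1)))
        X = np.fft.rfft(inputs, n_fft)                   # (n_in, bins)
        H = np.fft.rfft(ir_matrix, n_fft)                # (n_in, n_out, bins)
        Y = np.einsum("ib,iob->ob", X, H)                # route every input through every filter
        return np.fft.irfft(Y, n_fft)[:, :n_samples + ir_len - 1]

    out = matrix_convolve(np.random.randn(4, 48000), np.random.randn(4, 2, 512))
    print(out.shape)                                     # (2, 48511)
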