Journal articles on the topic "Multichannel audio"


Consult the top 50 journal articles on the topic "Multichannel audio".


1

Ono, Kazuho. "2.Multichannel Audio". Journal of the Institute of Image Information and Television Engineers 68, no. 8 (2014): 604–7. http://dx.doi.org/10.3169/itej.68.604.

2

Holbrook, Kyle A., and Michael J. Yacavone. "Multichannel audio reproduction system". Journal of the Acoustical Society of America 82, no. 2 (August 1987): 728. http://dx.doi.org/10.1121/1.395373.

3

Emmett, John. "Metering for Multichannel Audio". SMPTE Journal 110, no. 8 (August 2001): 532–36. http://dx.doi.org/10.5594/j17765.

4

Zhu, Qiushi, Jie Zhang, Yu Gu, Yuchen Hu and Lirong Dai. "Multichannel AV-wav2vec2: A Framework for Learning Multichannel Multi-Modal Speech Representation". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 17 (March 24, 2024): 19768–76. http://dx.doi.org/10.1609/aaai.v38i17.29951.

Abstract:
Self-supervised speech pre-training methods have developed rapidly in recent years and have proven very effective for many near-field single-channel speech tasks. However, far-field multichannel speech processing suffers from the scarcity of labeled multichannel data and complex ambient noise. The efficacy of self-supervised learning for far-field multichannel and multi-modal speech processing has not been well explored. Considering that visual information helps to improve speech recognition performance in noisy scenes, in this work we propose AV-wav2vec2, a multichannel multi-modal speech self-supervised learning framework that takes video and multichannel audio data as inputs. First, we propose a multi-path structure that processes multichannel audio streams and a visual stream in parallel, with intra- and inter-channel contrastive losses as training targets to fully exploit the rich information in multichannel speech data. Second, based on contrastive learning, we use additional single-channel audio data, trained jointly, to improve the performance of the multichannel multi-modal representation. Finally, we use a Chinese multichannel multi-modal dataset recorded in real scenarios to validate the effectiveness of the proposed method on audio-visual speech recognition (AVSR), automatic speech recognition (ASR), visual speech recognition (VSR) and audio-visual speaker diarization (AVSD) tasks.
5

Martyniuk, Tetiana, Maksym Mykytiuk and Mykola Zaitsev. "FEATURES OF ANALYSIS OF MULTICHANNEL AUDIO SIGNALS". ГРААЛЬ НАУКИ, no. 2-3 (April 9, 2021): 302–5. http://dx.doi.org/10.36074/grail-of-science.02.04.2021.061.

Abstract:
The rapid growth of audio content has created a need for tools to analyze and control the quality of audio signals using software and hardware modules. The fastest-growing area is software and programming languages. The Python programming language today offers the most practical and visual capabilities for working with sound. When developing programs for computational signal analysis, it provides an optimal balance of high- and low-level programming functions. Compared to MATLAB or other similar solutions, Python is free and allows standalone applications to be created without the need for large, permanently installed files and a virtual environment.
6

Gao, Xue Fei, Guo Yang, Jing Wang, Xiang Xie and Jing Ming Kuang. "A Backward Compatible Multichannel Audio Compression Method". Advanced Materials Research 756-759 (September 2013): 977–81. http://dx.doi.org/10.4028/www.scientific.net/amr.756-759.977.

Abstract:
This paper proposes a backward-compatible multichannel audio codec based on downmix and upmix operations. The codec represents a multichannel audio input signal with a downmixed mono signal and spatial parametric data. The encoding method consists of three parts: spatial-temporal analysis of the audio signal, compression of the multichannel audio into mono audio, and encoding of the mono signal. The proposed codec combines high audio quality with a low parameter coding rate, and the method is simpler and more effective than conventional methods. With this method, it is possible to transmit or store multichannel audio signals as mono audio signals.
7

Gunawan, Teddy Surya, and Mira Kartiwi. "Performance Evaluation of Multichannel Audio Compression". Indonesian Journal of Electrical Engineering and Computer Science 10, no. 1 (April 1, 2018): 146. http://dx.doi.org/10.11591/ijeecs.v10.i1.pp146-153.

Abstract:
In recent years, multichannel audio systems have become widely used in modern sound devices, as they can provide a more realistic and engaging experience for the listener. This paper focuses on the performance evaluation of three lossy codecs (AAC, Ogg Vorbis, and Opus) and three lossless codecs (FLAC, TrueAudio, and WavPack) for multichannel audio signals, including stereo, 5.1, and 7.1 channel configurations. Experiments were conducted on the same three audio files with different channel configurations. The performance of each encoder was evaluated based on its encoding time (averaged over 100 runs), data reduction, and audio quality. There is usually a trade-off between these three metrics. To simplify the evaluation, a new integrated performance metric was proposed that combines all three. Using the new measure, FLAC was found to be the best lossless codec, while Ogg Vorbis and Opus were found to be the best lossy codecs, depending on the channel configuration. This result could be used to determine the proper audio format for multichannel audio systems.
8

Dong, Yingjun, Neil G. MacLaren, Yiding Cao, Francis J. Yammarino, Shelley D. Dionne, Michael D. Mumford, Shane Connelly, Hiroki Sayama and Gregory A. Ruark. "Utterance Clustering Using Stereo Audio Channels". Computational Intelligence and Neuroscience 2021 (September 25, 2021): 1–8. http://dx.doi.org/10.1155/2021/6151651.

Abstract:
Utterance clustering is one of the actively researched topics in audio signal processing and machine learning. This study aims to improve the performance of utterance clustering by processing multichannel (stereo) audio signals. Processed audio signals were generated by combining left- and right-channel audio signals in a few different ways and then by extracting the embedded features (also called d-vectors) from those processed audio signals. This study applied the Gaussian mixture model for supervised utterance clustering. In the training phase, a parameter-sharing Gaussian mixture model was obtained to train the model for each speaker. In the testing phase, the speaker with the maximum likelihood was selected as the detected speaker. Results of experiments with real audio recordings of multiperson discussion sessions showed that the proposed method that used multichannel audio signals achieved significantly better performance than a conventional method with mono-audio signals in more complicated conditions.
9

Fujimori, Kazuki, Bisser Raytchev, Kazufumi Kaneda, Yasufumi Yamada, Yu Teshima, Emyo Fujioka, Shizuko Hiryu and Toru Tamaki. "Localization of Flying Bats from Multichannel Audio Signals by Estimating Location Map with Convolutional Neural Networks". Journal of Robotics and Mechatronics 33, no. 3 (June 20, 2021): 515–25. http://dx.doi.org/10.20965/jrm.2021.p0515.

Abstract:
We propose a method that uses ultrasound audio signals from a multichannel microphone array to estimate the positions of flying bats. The proposed model uses a deep convolutional neural network that takes multichannel signals as input and outputs the probability maps of the locations of bats. We present experimental results using two ultrasound audio clips of different bat species and show numerical simulations with synthetically generated sounds.
10

Hotho, Gerard, Lars F. Villemoes and Jeroen Breebaart. "A Backward-Compatible Multichannel Audio Codec". IEEE Transactions on Audio, Speech, and Language Processing 16, no. 1 (January 2008): 83–93. http://dx.doi.org/10.1109/tasl.2007.910768.

11

Chen, Ling, Wei Wang and Cheng Jiang. "Research on Embedded Multichannel Audio Conversion Module". Journal of Physics: Conference Series 2625, no. 1 (October 1, 2023): 012075. http://dx.doi.org/10.1088/1742-6596/2625/1/012075.

Abstract:
With the rapid development of information technology and the wide use of audio signal processing in underwater acoustic signal processing, acoustic audio signal acquisition, conversion, and transmission technologies play an important role. To enhance signal acquisition and conversion capability with high reliability, this paper designs an embedded multichannel audio conversion module. The module achieves multichannel, multi-sample-rate synchronous analog-to-digital/digital-to-analog conversion (ADC/DAC) and is additionally equipped with dual network redundancy. The module uses a chip integrating an ARM core and programmable FPGA logic as the main control chip. The low-level driver and application code ensure reliable operation of the DAC and ADC chips and stable network transmission. Experimental verification shows that the audio conversion module performs well in multichannel, multi-sample-rate synchronous ADC/DAC operation, with inter-channel crosstalk below -50 dB. The dual network redundancy design and the use of an RS232 serial port ensure high fidelity, reliability, and reprogrammability of the audio conversion function, showing that the design can be used in a wide range of scenarios.
12

Sobirin, Muhammad, and Ikhwana Elfitri. "Perancangan dan Analisis Kinerja Pengkodean Audio Multichannel Dengan Metode Closed Loop". Jurnal Nasional Teknik Elektro 3, no. 2 (September 1, 2014): 157–66. http://dx.doi.org/10.20449/jnte.v3i2.80.

13

Hidri Adel, Meddeb Souad, Abdulqadir Alaqeeli and Amiri Hamid. "Beamforming Techniques for Multichannel audio Signal Separation". International Journal of Digital Content Technology and its Applications 6, no. 20 (November 30, 2012): 659–67. http://dx.doi.org/10.4156/jdcta.vol6.issue20.72.

14

Wrigley, S. N., G. J. Brown, V. Wan and S. Renals. "Speech and crosstalk detection in multichannel audio". IEEE Transactions on Speech and Audio Processing 13, no. 1 (January 2005): 84–91. http://dx.doi.org/10.1109/tsa.2004.838531.

15

Gulsrud, Timothy. "Acoustical design of multichannel audio listening environments". Journal of the Acoustical Society of America 123, no. 5 (May 2008): 3202. http://dx.doi.org/10.1121/1.2933360.

16

Sarensen, J. A. "High-Fidelity Multichannel Audio Coding [Book Review]". IEEE Signal Processing Magazine 22, no. 5 (September 2005): 150–53. http://dx.doi.org/10.1109/msp.2005.1511837.

17

Yao, Shu-Nung, and Chang-Wei Huang. "Autonomous Technology for 2.1 Channel Audio Systems". Electronics 11, no. 3 (January 23, 2022): 339. http://dx.doi.org/10.3390/electronics11030339.

Abstract:
During the COVID-19 pandemic, smart home requirements have shifted toward entertainment at home. The purpose of this research project was therefore to develop a robotic audio system for home automation. High-end audio systems normally refer to multichannel home theaters. Although multichannel audio systems enable people to enjoy surround sound as they do at the cinema, stereo audio systems have been popularly used since the 1980s. The major shortcoming of a stereo audio system is its narrow listening area. If listeners are out of the area, the system has difficulty providing a stable sound field. This is because of the head-shadow effect blocking the high-frequency sound. The proposed system, by integrating computer vision and robotics, can track the head movement of a user and adjust the directions of loudspeakers, thereby helping the sound wave travel through the air. Unlike previous studies, in which only a diminutive scenario was built, in this work, the idea was applied to a commercial 2.1 audio system, and listening tests were conducted. The theory and the simulation coincide with the experimental results. The approximate rate of audio quality improvement is 31%. The experimental results are encouraging, especially for high-pitched music.
18

Zhu, Yunxi, Wenyao Ma, Zheng Kuang, Ming Wu and Jun Yang. "Optimal audio beam pattern synthesis for an enhanced parametric array loudspeaker". Journal of the Acoustical Society of America 154, no. 5 (November 1, 2023): 3210–22. http://dx.doi.org/10.1121/10.0022415.

Abstract:
A parametric array loudspeaker (PAL) generates highly directional audible sound in air with a small aperture compared to a conventional loudspeaker. However, in indoor applications, the long propagation distance of a PAL causes reflections, which disturb the reproduction of narrow audio beams. Moreover, sound distortion appears along the off-axis direction due to the frequency dependence of the beam width. This study proposes an optimal audio beam pattern synthesis for a PAL based on convex optimization, which can design the audio beam of a PAL with an optimal solution. The proposed method overcomes these limitations when applied to a length-limited PAL for audio spot control and to a multichannel PAL array for a constant-beam-width audio beam. In a length-limited PAL, the proposed method restricts the audio spot to a smaller region and weakens sound leakage along the off-axis direction, whereas in a multichannel PAL array, it also achieves a constant beam width near the radiator axis. Simulations and experiments verify the effectiveness of the proposed method, which will enhance the performance of a PAL in scenarios where control of the audio beam is required.
19

Lee, Dongheon, and Jung-Woo Choi. "Inter-channel Conv-TasNet for source-agnostic multichannel audio enhancement". INTER-NOISE and NOISE-CON Congress and Conference Proceedings 265, no. 5 (February 1, 2023): 2068–75. http://dx.doi.org/10.3397/in_2022_0297.

Abstract:
Deep neural network (DNN) models for the audio enhancement task have been developed in various ways. Most of them rely on the source-dependent characteristics, such as temporal or spectral characteristics of speeches, to suppress noises embedded in measured signals. Only a few studies have attempted to exploit the spatial information embedded in multichannel data. In this work, we propose a DNN architecture that fully exploits inter-channel relations to realize source-agnostic audio enhancement. The proposed model is based on the fully convolutional time-domain audio separation network (Conv-TasNet) but extended to extract and learn spatial features from multichannel input signals. The use of spatial information is facilitated by separating each convolutional layer into dedicated inter-channel 1x1 Conv blocks and 2D spectro-temporal Conv blocks. The performance of the proposed model is verified through the training and test with heterogeneous datasets including speech and other audio datasets, which demonstrates that the enriched spatial information from the proposed architecture enables the versatile audio enhancement in a source-agnostic way.
20

Mattos, Tiago F., and Bennett M. Brooks. "Comparison of recording studio control room operational response measurements for single, stereo, and immersive audio monitor configurations". Journal of the Acoustical Society of America 152, no. 4 (October 2022): A104. http://dx.doi.org/10.1121/10.0015691.

Abstract:
Audio monitoring in recording studio control rooms has evolved continuously since the beginning of multitrack systems. In recent years, control rooms have adapted to using multiple audio monitors (loudspeakers) needed for the immersive audio experience. The primary technical recommendations for determining the acoustical quality of a control room are given in EBU-TECH-3276 and ITU BS.1116. The results of measuring the Operational Room Response Curve (ORRC) can differ significantly for only one audio monitor operating compared to the two monitors required for stereo. For multichannel immersive audio configurations, there can also be significant differences in the measured ORRC. A recent immersive system, known as Dolby Atmos, follows the Dolby Laboratories recommendations which generally comply with the EBU/ITU specifications. The goal of this research is to analyze and compare the measurements of the ORRC per the EBU/ITU/Dolby standards for various audio monitor configurations in a control room. These include individual monitors operating alone, two monitors in a stereo configuration operating at the same time, and combinations of monitors in the multichannel immersive audio system. The impact of the coupled system of room acoustics and multiple loudspeakers on the decision quality for the studio user will be addressed.
21

Smith, William P. "Method of decoding two-channel matrix encoded audio to reconstruct multichannel audio". Journal of the Acoustical Society of America 120, no. 2 (2006): 573. http://dx.doi.org/10.1121/1.2336656.

22

Li, Zhen, and Qian Yi Yang. "The Research of Dynamic Sound of Multichannel System Based on Matlab". Applied Mechanics and Materials 602-605 (August 2014): 2569–71. http://dx.doi.org/10.4028/www.scientific.net/amm.602-605.2569.

Abstract:
With the emergence and development of 3D video technology, audiences have higher expectations of the sound effects while watching 3D video. We therefore studied dynamic sound effects under the existing multichannel audio standard. In this paper, an audio file of a sea wave was processed with a Gaussian function in MATLAB and divided into several parts saved as separate audio files. Each audio file was then sent to a channel to simulate a stereo sound field. Test results showed that the effect will provide a perfect experience for audiences.
23

Pulkki, V., and T. Hirvonen. "Localization of virtual sources in multichannel audio reproduction". IEEE Transactions on Speech and Audio Processing 13, no. 1 (January 2005): 105–19. http://dx.doi.org/10.1109/tsa.2004.838533.

24

Faller, C. "Parametric multichannel audio coding: synthesis of coherence cues". IEEE Transactions on Audio, Speech and Language Processing 14, no. 1 (January 2006): 299–310. http://dx.doi.org/10.1109/tsa.2005.854105.

25

Nugraha, Aditya Arie, Antoine Liutkus and Emmanuel Vincent. "Multichannel Audio Source Separation With Deep Neural Networks". IEEE/ACM Transactions on Audio, Speech, and Language Processing 24, no. 9 (September 2016): 1652–64. http://dx.doi.org/10.1109/taslp.2016.2580946.

26

Leglaive, Simon, Roland Badeau and Gael Richard. "Multichannel Audio Source Separation With Probabilistic Reverberation Priors". IEEE/ACM Transactions on Audio, Speech, and Language Processing 24, no. 12 (December 2016): 2453–65. http://dx.doi.org/10.1109/taslp.2016.2614140.

27

Elfitri, I., Banu Günel and A. M. Kondoz. "Multichannel Audio Coding Based on Analysis by Synthesis". Proceedings of the IEEE 99, no. 4 (April 2011): 657–70. http://dx.doi.org/10.1109/jproc.2010.2102310.

28

Pulkki, V., and M. Karjalainen. "Multichannel audio rendering using amplitude panning [DSP Applications]". IEEE Signal Processing Magazine 25, no. 3 (May 2008): 118–22. http://dx.doi.org/10.1109/msp.2008.918025.

29

Miron, Marius, Julio J. Carabias-Orti, Juan J. Bosch, Emilia Gómez and Jordi Janer. "Score-Informed Source Separation for Multichannel Orchestral Recordings". Journal of Electrical and Computer Engineering 2016 (2016): 1–19. http://dx.doi.org/10.1155/2016/8363507.

Abstract:
This paper proposes a system for score-informed audio source separation for multichannel orchestral recordings. The orchestral music repertoire relies on the existence of scores; thus, a reliable separation requires a good alignment of the score with the audio of the performance. To that end, automatic score alignment methods are reliable when allowing a tolerance window around the actual onset and offset. Moreover, several factors increase the difficulty of our task: a highly reverberant environment, large ensembles with rich polyphony, and a large variety of instruments recorded with a distant-microphone setup. To solve these problems, we design context-specific methods, such as refining the score-following output to obtain a more precise alignment. Moreover, we extend a close-microphone separation framework to deal with distant-microphone orchestral recordings. We then propose the first open evaluation dataset in this musical context, including annotations of the notes played by multiple instruments of an orchestral ensemble. The evaluation analyzes how important parts of the separation framework affect the quality of separation. Results show that we are able to align the original score with the audio of the performance and to separate the sources corresponding to the instrument sections.
30

Dewi Nurdiyah, Eko Mulyanto Yuniarno, Yoyon Kusnendar Suprapto and Mauridhi Hery Purnomo. "IRAWNET: A Method for Transcribing Indonesian Classical Music Notes Directly from Multichannel Raw Audio". EMITTER International Journal of Engineering Technology 11, no. 2 (December 22, 2023): 246–64. http://dx.doi.org/10.24003/emitter.v11i2.827.

Abstract:
A challenging task when developing real-time Automatic Music Transcription (AMT) methods is directly leveraging inputs from multichannel raw audio without any handcrafted signal transformation and feature extraction steps. The crucial problems are that raw audio contains only an amplitude at each timestamp, and that the signals of the left and right channels have different amplitude intensities and onset times. This study addresses these issues by proposing the IRawNet method, with fused feature layers to merge the different amplitudes from multichannel raw audio. IRawNet aims to transcribe Indonesian classical music notes and was validated on a Gamelan music dataset. The Synthetic Minority Oversampling Technique (SMOTE) was used to overcome the class imbalance of the Gamelan music dataset. Under various experimental scenarios, the performance effects of oversampled data, hyperparameter tuning, and fused feature layers are analyzed. Furthermore, the performance of the proposed method was compared with a Temporal Convolutional Network (TCN), Deep WaveNet, and the monochannel IRawNet. The results show that the proposed method achieves superior results on nearly all performance metrics, with an accuracy of 0.871, AUC of 0.988, precision of 0.927, recall of 0.896, and F1 score of 0.896.
31

Qiao, Yue, Léo Guadagnin and Edgar Choueiri. "Isolation performance metrics for personal sound zone reproduction systems". JASA Express Letters 2, no. 10 (October 2022): 104801. http://dx.doi.org/10.1121/10.0014604.

Abstract:
Two isolation performance metrics, inter-zone isolation (IZI) and inter-program isolation (IPI), are introduced for evaluating personal sound zone (PSZ) systems. Compared to the commonly used acoustic contrast metric, IZI and IPI are generalized for multichannel audio and quantify the isolation of sound zones and of audio programs, respectively. The two metrics are shown to be generally non-interchangeable and suitable for different scenarios, such as generating dark zones (IZI) or minimizing audio-on-audio interference (IPI). Furthermore, two examples with free-field simulations are presented and demonstrate the applications of IZI and IPI in evaluating PSZ performance in different rendering modes and PSZ robustness.
32

Lee, Tae-Jin, Jae-Hyoun Yoo, Jeong-Il Seo, Kyeong-Ok Kang and Whan-Woo Kim. "Multichannel Audio Reproduction Technology based on 10.2ch for UHDTV". Journal of Broadcast Engineering 17, no. 5 (September 30, 2012): 827–37. http://dx.doi.org/10.5909/jbe.2012.17.5.827.

33

Dai Yang, Hongmei Ai, C. Kyriakakis and C. C. J. Kuo. "High-fidelity multichannel audio coding with Karhunen-Loeve transform". IEEE Transactions on Speech and Audio Processing 11, no. 4 (July 2003): 365–80. http://dx.doi.org/10.1109/tsa.2003.814375.

34

Olson, Bruce C. "Using 3‐D modeling to design multichannel audio systems". Journal of the Acoustical Society of America 113, no. 4 (April 2003): 2201. http://dx.doi.org/10.1121/1.4780191.

35

Ye, Qinghua, Hefei Yang and Xiaodong Li. "A simplified crosstalk cancellation method for multichannel audio equalization". Journal of the Acoustical Society of America 131, no. 4 (April 2012): 3218. http://dx.doi.org/10.1121/1.4708000.

36

Bharitkar, Sunil, Grant Davidson, Louis Fielder and Poppy Crum. "Tutorial on Critical Listening of Multichannel Audio Codec Performance". SMPTE Motion Imaging Journal 121, no. 8 (November 2012): 30–45. http://dx.doi.org/10.5594/j18246xy.

37

Bayram, Ilker. "A Multichannel Audio Denoising Formulation Based on Spectral Sparsity". IEEE/ACM Transactions on Audio, Speech, and Language Processing 23, no. 12 (December 2015): 2272–85. http://dx.doi.org/10.1109/taslp.2015.2479042.

38

Pagès, Guilhem, Roberto Longo, Laurent Simon and Manuel Melon. "Online adaptive identification of multichannel systems for audio applications". Journal of the Acoustical Society of America 155, no. 1 (January 1, 2024): 229–40. http://dx.doi.org/10.1121/10.0024149.

Abstract:
Impulse response (IR) estimation of multi-input acoustic systems is a prerequisite for many audio applications. In this paper, an adaptive identification approach based on the Autostep algorithm is extended to the simultaneous estimation of room IRs for multiple-input single-output linear time-invariant systems without any a priori information. The proposed algorithm is first evaluated in a simulated room with several sound sources active at the same time. An experimental validation is then presented for a semi-anechoic chamber and an arbitrary room. Special attention is paid to the algorithm's convergence behavior under different meta-parameter settings. Results are finally compared with the normalized version of the least-mean-squares algorithm.
39

Noll, Peter, and Davis Pan. "ISO/MPEG Audio Coding". International Journal of High Speed Electronics and Systems 08, no. 01 (March 1997): 69–118. http://dx.doi.org/10.1142/s0129156497000044.

Abstract:
The Moving Pictures Expert Group within the International Organization of Standardization (ISO/MPEG) has developed, and is presently developing, a series of audiovisual standards. Its audio coding standard MPEG Phase 1 is the first international standard in the field of high quality digital audio compression and has been applied in many areas, both for consumer and professional audio. Typical application areas for digital audio are in the fields of audio production, program distribution and exchange, digital sound broadcasting, digital storage, and various multimedia applications. This paper will describe in some detail the main features of MPEG Phase 1 coders. As a logical further step in digital audio a multichannel audio standard MPEG Phase 2 is being standardized to provide an improved stereophonic image for audio-only applications including teleconferencing and for improved television systems. The status of this standardization process will be covered briefly.
40

Lee, Yong Ju, Jeongil Seo, Seungkwon Beack, Daeyoung Jang, Kyeongok Kang, Jinwoong Kim and Jin Woo Hong. "Design and Development of T-DMB Multichannel Audio Service System Based on Spatial Audio Coding". ETRI Journal 31, no. 4 (August 5, 2009): 365–75. http://dx.doi.org/10.4218/etrij.09.0108.0557.

41

Houge, Benjamin, and Jutta Friedrichs. "Food Opera: A New Genre for Audio-gustatory Expression". Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment 9, no. 5 (June 30, 2021): 59–63. http://dx.doi.org/10.1609/aiide.v9i5.12652.

Abstract:
“Food opera” is the term that the authors have applied to a new genre of audio-gustatory experience, in which a multi-course meal is paired with real-time, algorithmically generated music, deployed over a massively multichannel sound system. This paper presents an overview of the system used to deploy the sonic component of these events, while also exploring the history and creative potential of this unique multisensory format.
42

Sánchez-Hevia, Héctor A., Roberto Gil-Pita and Manuel Rosa-Zurera. "Efficient multichannel detection of impulsive audio events for wireless networks". Applied Acoustics 179 (August 2021): 108005. http://dx.doi.org/10.1016/j.apacoust.2021.108005.

43

Vernony, Steve, and Tony Spath. "Carrying Multichannel Audio in a Stereo Production and Distribution Infrastructure". SMPTE Journal 111, no. 2 (February 2002): 97–102. http://dx.doi.org/10.5594/j16393.

44

Hong, Jin‐Woo, Dae‐Young Jang and Seong‐Han Kim. "Multichannel audio signal compression and quality assessment for AV communications". Journal of the Acoustical Society of America 103, no. 5 (May 1998): 3027. http://dx.doi.org/10.1121/1.422552.

45

Kendall, Gary S. "Spatial Perception and Cognition in Multichannel Audio for Electroacoustic Music". Organised Sound 15, no. 3 (October 25, 2010): 228–38. http://dx.doi.org/10.1017/s1355771810000336.

46

Mihashi, Tadashi, Tomoya Takatani, Shigeki Miyabe, Yoshimitsu Mori, Hiroshi Saruwatari and Kiyohiro Shikano. "Compressive coding for multichannel audio signals using independent component analysis". Journal of the Acoustical Society of America 120, no. 5 (November 2006): 3219. http://dx.doi.org/10.1121/1.4788173.

47

Rao, Dan. "Analysis on decorrelation effect of audio signal in multichannel reproduction". Journal of the Acoustical Society of America 141, no. 5 (May 2017): 3902. http://dx.doi.org/10.1121/1.4988783.

48

George, S., S. Zielinski and F. Rumsey. "Feature Extraction for the Prediction of Multichannel Spatial Audio Fidelity". IEEE Transactions on Audio, Speech and Language Processing 14, no. 6 (November 2006): 1994–2005. http://dx.doi.org/10.1109/tasl.2006.883248.

49

Lee, Seokjin, Sang Ha Park and Koeng-Mo Sung. "Beamspace-Domain Multichannel Nonnegative Matrix Factorization for Audio Source Separation". IEEE Signal Processing Letters 19, no. 1 (January 2012): 43–46. http://dx.doi.org/10.1109/lsp.2011.2173192.

50

Battisti, Luca, Angelo Farina and Antonella Bevilacqua. "Implementation of non-equal-partition multi-channel convolver". INTER-NOISE and NOISE-CON Congress and Conference Proceedings 265, no. 6 (February 1, 2023): 1570–81. http://dx.doi.org/10.3397/in_2022_0220.

Abstract:
Convolution has become a widely used signal operation thanks to its many applications in digital signal processing. In the realm of audio processing, convolution has the particular meaning of imposing a spectral and/or temporal structure onto a sound. These structures are completely determined by the signal with which a sound is convolved, called the Impulse Response (IR). These signals contain a kind of acoustic footprint that can be transferred entirely to another sound, which thereby acquires the same acoustic characteristics. With a multichannel approach, convolution takes on an even broader meaning and a wider field of application. Indeed, it is used in modern spatial sound techniques such as Ambisonics, which require matrix operations on the involved signals. Ambisonics recordings, for example, are made with special coincident multi-capsule microphone arrays, whose signals can be converted to the standard Ambisonics format by a multichannel convolver. A similar concept applies to the mixing stage of audio production, where direction-based audio objects must be converted to the Ambisonics format to be reproduced on the corresponding speaker setups. The aim of this work is to analyze an existing multichannel convolver algorithm and evaluate its efficiency. Moreover, the handling of the matrix of filters has shown weaknesses when assembling new matrices. The proposed solution offers a convenient way to deal with matrices and improves the efficiency of the algorithm.