A selection of scholarly literature on the topic "Vocoder"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles

Consult the lists of relevant articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "Vocoder".

Next to each work in the list of references you will find an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, and others.

You can also download the full text of a publication in .pdf format and read its abstract online, provided that the corresponding data are available in the metadata.

Journal articles on the topic "Vocoder"

1

Karoui, Chadlia, Chris James, Pascal Barone, David Bakhos, Mathieu Marx, and Olivier Macherey. "Searching for the Sound of a Cochlear Implant: Evaluation of Different Vocoder Parameters by Cochlear Implant Users With Single-Sided Deafness." Trends in Hearing 23 (January 2019): 233121651986602. http://dx.doi.org/10.1177/2331216519866029.

Abstract:
Cochlear implantation in subjects with single-sided deafness (SSD) offers a unique opportunity to directly compare the percepts evoked by a cochlear implant (CI) with those evoked acoustically. Here, nine SSD-CI users performed a forced-choice task evaluating the similarity of speech processed by their CI with speech processed by several vocoders presented to their healthy ear. In each trial, subjects heard two intervals: their CI followed by a certain vocoder in Interval 1 and their CI followed by a different vocoder in Interval 2. The vocoders differed either (i) in carrier type—(sinusoidal [SINE], bandfiltered noise [NOISE], and pulse-spreading harmonic complex [PSHC]) or (ii) in frequency mismatch between the analysis and synthesis frequency ranges—(no mismatch, and two frequency-mismatched conditions of 2 and 4 equivalent rectangular bandwidths [ERBs]). Subjects had to state in which of the two intervals the CI and vocoder sounds were more similar. Despite a large intersubject variability, the PSHC vocoder was judged significantly more similar to the CI than SINE or NOISE vocoders. Furthermore, the No-mismatch and 2-ERB mismatch vocoders were judged significantly more similar to the CI than the 4-ERB mismatch vocoder. The mismatch data were also interpreted by comparing spiral ganglion characteristic frequencies with electrode contact positions determined from postoperative computed tomography scans. Only one subject demonstrated a pattern of preference consistent with adaptation to the CI sound processor frequency-to-electrode allocation table and two subjects showed possible partial adaptation. Those subjects with adaptation patterns presented overall small and consistent frequency mismatches across their electrode arrays.
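To make the two vocoder manipulations in this abstract concrete (carrier type, and an analysis/synthesis frequency mismatch expressed in ERBs), here is a minimal illustrative sketch in Python, assuming NumPy and SciPy. It is not the study's implementation: channel counts, filter orders, and cutoffs are arbitrary placeholders, and the PSHC carrier is omitted because it cannot be reproduced in a few lines.

```python
# Toy channel vocoder with a selectable sine or noise carrier, plus an ERB-scale
# shift of the synthesis bands to mimic an analysis/synthesis frequency mismatch.
# Illustrative only; band edges must stay below fs / 2.
import numpy as np
from scipy.signal import butter, sosfiltfilt


def hz_to_erb(f):
    """Glasberg & Moore ERB-number (Cam) scale."""
    return 21.4 * np.log10(4.37 * f / 1000.0 + 1.0)


def erb_to_hz(e):
    return (10.0 ** (e / 21.4) - 1.0) * 1000.0 / 4.37


def channel_vocoder(x, fs, n_channels=8, f_lo=100.0, f_hi=7000.0,
                    carrier="noise", mismatch_erb=0.0, env_cutoff=160.0):
    """Analyze x in ERB-spaced bands, extract envelopes, and re-impose them on
    sine or noise carriers whose bands are shifted by `mismatch_erb` ERBs."""
    ana_edges = erb_to_hz(np.linspace(hz_to_erb(f_lo), hz_to_erb(f_hi), n_channels + 1))
    syn_edges = erb_to_hz(hz_to_erb(ana_edges) + mismatch_erb)   # frequency mismatch
    env_sos = butter(2, env_cutoff, btype="lowpass", fs=fs, output="sos")
    noise = np.random.default_rng(0).standard_normal(len(x))
    t = np.arange(len(x)) / fs
    y = np.zeros(len(x))
    for k in range(n_channels):
        ana_sos = butter(4, [ana_edges[k], ana_edges[k + 1]],
                         btype="bandpass", fs=fs, output="sos")
        env = np.clip(sosfiltfilt(env_sos, np.abs(sosfiltfilt(ana_sos, x))), 0.0, None)
        if carrier == "sine":                      # SINE vocoder: tone at band centre
            fc = np.sqrt(syn_edges[k] * syn_edges[k + 1])
            c = np.sin(2.0 * np.pi * fc * t)
        else:                                      # NOISE vocoder: band-limited noise
            syn_sos = butter(4, [syn_edges[k], syn_edges[k + 1]],
                             btype="bandpass", fs=fs, output="sos")
            c = sosfiltfilt(syn_sos, noise)
        y += env * c
    return y / (np.max(np.abs(y)) + 1e-12)
```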
2

Roebel, Axel, and Frederik Bous. "Neural Vocoding for Singing and Speaking Voices with the Multi-Band Excited WaveNet." Information 13, no. 3 (February 23, 2022): 103. http://dx.doi.org/10.3390/info13030103.

Abstract:
The use of the mel spectrogram as a signal parameterization for voice generation is quite recent and linked to the development of neural vocoders. These are deep neural networks that allow reconstructing high-quality speech from a given mel spectrogram. While initially developed for speech synthesis, now neural vocoders have also been studied in the context of voice attribute manipulation, opening new means for voice processing in audio production. However, to be able to apply neural vocoders in real-world applications, two problems need to be addressed: (1) To support use in professional audio workstations, the computational complexity should be small, (2) the vocoder needs to support a large variety of speakers, differences in voice qualities, and a wide range of intensities potentially encountered during audio production. In this context, the present study will provide a detailed description of the Multi-band Excited WaveNet, a fully convolutional neural vocoder built around signal processing blocks. It will evaluate the performance of the vocoder when trained on a variety of multi-speaker and multi-singer databases, including an experimental evaluation of the neural vocoder trained on speech and singing voices. Addressing the problem of intensity variation, the study will introduce a new adaptive signal normalization scheme that allows for robust compensation for dynamic and static gain variations. Evaluations are performed using objective measures and a number of perceptual tests including different neural vocoder algorithms known from the literature. The results confirm that the proposed vocoder compares favorably to the state-of-the-art in its capacity to generalize to unseen voices and voice qualities. The remaining challenges will be discussed.
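For readers unfamiliar with the mel-spectrogram parameterization that neural vocoders such as the one described here invert, the sketch below shows how such a representation is typically computed. It assumes the librosa library; the file name speech.wav and all analysis settings are placeholders, not the paper's configuration.

```python
# Illustrative sketch: compute the kind of mel spectrogram a neural vocoder
# is trained to turn back into a waveform.
import librosa
import numpy as np

y, sr = librosa.load("speech.wav", sr=22050)        # hypothetical input file
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=1024,
                                     hop_length=256, n_mels=80)
log_mel = librosa.power_to_db(mel)                   # log-compressed mel spectrogram
log_mel = log_mel - np.max(log_mel)                  # crude static gain normalization
print(log_mel.shape)                                 # (n_mels, n_frames)
```

A neural vocoder is then trained to map this representation back to a waveform; the adaptive normalization scheme proposed in the paper would additionally rescale it to compensate for static and dynamic gain variation.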
3

Ausili, Sebastian A., Bradford Backus, Martijn J. H. Agterberg, A. John van Opstal, and Marc M. van Wanrooij. "Sound Localization in Real-Time Vocoded Cochlear-Implant Simulations With Normal-Hearing Listeners." Trends in Hearing 23 (January 2019): 233121651984733. http://dx.doi.org/10.1177/2331216519847332.

Abstract:
Bilateral cochlear-implant (CI) users and single-sided deaf listeners with a CI are less effective at localizing sounds than normal-hearing (NH) listeners. This performance gap is due to the degradation of binaural and monaural sound localization cues, caused by a combination of device-related and patient-related issues. In this study, we targeted the device-related issues by measuring sound localization performance of 11 NH listeners, listening to free-field stimuli processed by a real-time CI vocoder. The use of a real-time vocoder is a new approach, which enables testing in a free-field environment. For the NH listening condition, all listeners accurately and precisely localized sounds according to a linear stimulus–response relationship with an optimal gain and a minimal bias both in the azimuth and in the elevation directions. In contrast, when listening with bilateral real-time vocoders, listeners tended to orient either to the left or to the right in azimuth and were unable to determine sound source elevation. When listening with an NH ear and a unilateral vocoder, localization was impoverished on the vocoder side but improved toward the NH side. Localization performance was also reflected by systematic variations in reaction times across listening conditions. We conclude that perturbation of interaural temporal cues, reduction of interaural level cues, and removal of spectral pinna cues by the vocoder impairs sound localization. Listeners seem to ignore cues that were made unreliable by the vocoder, leading to acute reweighting of available localization cues. We discuss how current CI processors prevent CI users from localizing sounds in everyday environments.
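As a rough illustration of the binaural cues this study describes as degraded by the vocoder, the sketch below estimates an interaural time difference (via cross-correlation) and an interaural level difference (via an RMS ratio) from a pair of ear signals. This is a generic textbook estimate assuming NumPy, not the analysis used in the paper; left, right, and fs are hypothetical inputs.

```python
import numpy as np

def itd_ild(left, right, fs):
    """Estimate ITD (seconds) and ILD (dB) from two ear signals."""
    xc = np.correlate(left, right, mode="full")
    lags = np.arange(-(len(right) - 1), len(left))
    itd_s = lags[np.argmax(xc)] / fs                  # itd_s > 0: right ear leads
    rms = lambda s: np.sqrt(np.mean(s ** 2) + 1e-20)
    ild_db = 20.0 * np.log10(rms(left) / rms(right))  # ild_db > 0: left ear louder
    return itd_s, ild_db
```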
4

Wess, Jessica M., and Joshua G. W. Bernstein. "The Effect of Nonlinear Amplitude Growth on the Speech Perception Benefits Provided by a Single-Sided Vocoder." Journal of Speech, Language, and Hearing Research 62, no. 3 (March 25, 2019): 745–57. http://dx.doi.org/10.1044/2018_jslhr-h-18-0001.

Abstract:
Purpose: For listeners with single-sided deafness, a cochlear implant (CI) can improve speech understanding by giving the listener access to the ear with the better target-to-masker ratio (TMR; head shadow) or by providing interaural difference cues to facilitate the perceptual separation of concurrent talkers (squelch). CI simulations presented to listeners with normal hearing examined how these benefits could be affected by interaural differences in loudness growth in a speech-on-speech masking task. Method: Experiment 1 examined a target–masker spatial configuration where the vocoded ear had a poorer TMR than the nonvocoded ear. Experiment 2 examined the reverse configuration. Generic head-related transfer functions simulated free-field listening. Compression or expansion was applied independently to each vocoder channel (power-law exponents: 0.25, 0.5, 1, 1.5, or 2). Results: Compression reduced the benefit provided by the vocoder ear in both experiments. There was some evidence that expansion increased squelch in Experiment 1 but reduced the benefit in Experiment 2 where the vocoder ear provided a combination of head-shadow and squelch benefits. Conclusions: The effects of compression and expansion are interpreted in terms of envelope distortion and changes in the vocoded-ear TMR (for head shadow) or changes in perceived target–masker spatial separation (for squelch). The compression parameter is a candidate for clinical optimization to improve single-sided deafness CI outcomes.
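The channel-wise compression and expansion studied here amount to raising each vocoder channel's temporal envelope to a power. Below is a minimal sketch, assuming NumPy/SciPy and a hypothetical band-limited channel signal; the envelope-extraction method and cutoff are illustrative choices, not the study's.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def warp_envelope(channel, fs, exponent, env_cutoff=50.0):
    """Apply power-law compression (exponent < 1) or expansion (exponent > 1)
    to one vocoder channel's envelope while keeping its fine structure."""
    env = np.abs(hilbert(channel))                        # Hilbert envelope
    env = sosfiltfilt(butter(2, env_cutoff, btype="lowpass", fs=fs, output="sos"), env)
    env = np.clip(env, 1e-9, None)
    fine = channel / env                                  # temporal fine structure
    warped = env ** exponent
    warped *= np.sqrt(np.mean(env ** 2) / np.mean(warped ** 2))  # keep overall level
    return warped * fine
```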
5

Bosen, Adam K., and Michael F. Barry. "Serial Recall Predicts Vocoded Sentence Recognition Across Spectral Resolutions." Journal of Speech, Language, and Hearing Research 63, no. 4 (April 27, 2020): 1282–98. http://dx.doi.org/10.1044/2020_jslhr-19-00319.

Abstract:
Purpose: The goal of this study was to determine how various aspects of cognition predict speech recognition ability across different levels of speech vocoding within a single group of listeners. Method: We tested the ability of young adults (N = 32) with normal hearing to recognize Perceptually Robust English Sentence Test Open-set (PRESTO) sentences that were degraded with a vocoder to produce different levels of spectral resolution (16, eight, and four carrier channels). Participants also completed tests of cognition (fluid intelligence, short-term memory, and attention), which were used as predictors of sentence recognition. Sentence recognition was compared across vocoder conditions, predictors were correlated with individual differences in sentence recognition, and the relationships between predictors were characterized. Results: PRESTO sentence recognition performance declined with a decreasing number of vocoder channels, with no evident floor or ceiling performance in any condition. Individual ability to recognize PRESTO sentences was consistent relative to the group across vocoder conditions. Short-term memory, as measured with serial recall, was a moderate predictor of sentence recognition (ρ = 0.65). Serial recall performance was constant across vocoder conditions when measured with a digit span task. Fluid intelligence was marginally correlated with serial recall, but not sentence recognition. Attentional measures had no discernible relationship to sentence recognition and a marginal relationship with serial recall. Conclusions: Verbal serial recall is a substantial predictor of vocoded sentence recognition, and this predictive relationship is independent of spectral resolution. In populations that show variable speech recognition outcomes, such as listeners with cochlear implants, it should be possible to account for the independent effects of spectral resolution and verbal serial recall in their speech recognition ability. Supplemental Material: https://doi.org/10.23641/asha.12021051
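The predictor-outcome relationship reported here (for example, ρ = 0.65 between serial recall and sentence recognition) is a Spearman rank correlation, which can be computed as sketched below. The arrays are made-up placeholder scores, not data from the study.

```python
import numpy as np
from scipy.stats import spearmanr

serial_recall = np.array([52, 61, 47, 70, 58, 66, 43, 75])   # hypothetical % correct
sentence_rec  = np.array([48, 64, 40, 72, 55, 69, 39, 71])   # hypothetical % correct
rho, p = spearmanr(serial_recall, sentence_rec)               # rank correlation
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```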
6

Shi, Yong Peng. "Research and Implementation of MELP Algorithm Based on TMS320VC5509A." Advanced Materials Research 934 (May 2014): 239–44. http://dx.doi.org/10.4028/www.scientific.net/amr.934.239.

Abstract:
An MELP vocoder is designed based on the DSP TMS320VC5509A in this article. First, the MELP algorithm is described; then the modeling approach and the DSP-based realization process are presented. Finally, a functional simulation of the encoding and decoding system is completed. The experimental results show that the synthesized signals fit well with the original ones and that the quality of the speech obtained from the vocoder is good.
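The abstract reports that the synthesized signals "fit well with the original ones". One common, generic way to quantify such agreement is segmental SNR, sketched below under the assumption of time-aligned NumPy waveforms; the paper does not state that this particular metric was used.

```python
import numpy as np

def segmental_snr(orig, synth, frame=240, eps=1e-12):
    """Mean per-frame SNR (dB) between an original and a synthesized waveform."""
    n = min(len(orig), len(synth)) // frame * frame
    o = orig[:n].reshape(-1, frame)
    e = o - synth[:n].reshape(-1, frame)
    snr = 10.0 * np.log10((np.sum(o ** 2, axis=1) + eps) /
                          (np.sum(e ** 2, axis=1) + eps))
    return float(np.mean(np.clip(snr, -10.0, 35.0)))   # clamp extreme frames
```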
7

Clark, Graeme, and Peter J. Blamey. "Electrotactile vocoder." Journal of the Acoustical Society of America 90, no. 5 (November 1991): 2880. http://dx.doi.org/10.1121/1.401830.

8

Goupell, Matthew J., Garrison T. Draves, and Ruth Y. Litovsky. "Recognition of vocoded words and sentences in quiet and multi-talker babble with children and adults." PLOS ONE 15, no. 12 (December 29, 2020): e0244632. http://dx.doi.org/10.1371/journal.pone.0244632.

Abstract:
A vocoder is used to simulate cochlear-implant sound processing in normal-hearing listeners. Typically, there is rapid improvement in vocoded speech recognition, but it is unclear if the improvement rate differs across age groups and speech materials. Children (8–10 years) and young adults (18–26 years) were trained and tested over 2 days (4 hours) on recognition of eight-channel noise-vocoded words and sentences, in quiet and in the presence of multi-talker babble at signal-to-noise ratios of 0, +5, and +10 dB. Children achieved poorer performance than adults in all conditions, for both word and sentence recognition. With training, vocoded speech recognition improvement rates were not significantly different between children and adults, suggesting that improvement in learning how to process speech cues degraded via vocoding is absent of developmental differences across these age groups and types of speech materials. Furthermore, this result confirms that the acutely measured age difference in vocoded speech recognition persists after extended training.
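The listening conditions here mix vocoded speech with multi-talker babble at fixed signal-to-noise ratios (0, +5, and +10 dB). Below is a minimal sketch of that mixing step, assuming NumPy; speech and babble are hypothetical same-rate waveforms.

```python
import numpy as np

def mix_at_snr(speech, babble, snr_db):
    """Scale the babble masker so the speech-to-babble ratio equals snr_db."""
    babble = babble[:len(speech)]
    p_speech = np.mean(speech ** 2)
    p_babble = np.mean(babble ** 2) + 1e-20
    gain = np.sqrt(p_speech / (p_babble * 10.0 ** (snr_db / 10.0)))
    return speech + gain * babble
```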
9

Ding, Yuntao, Rangzhuoma Cai, and Baojia Gong. "Tibetan speech synthesis based on an improved neural network." MATEC Web of Conferences 336 (2021): 06012. http://dx.doi.org/10.1051/matecconf/202133606012.

Abstract:
Nowadays, Tibetan speech synthesis based on neural networks has become the mainstream synthesis method. Among these systems, the Griffin-Lim vocoder is widely used in Tibetan speech synthesis because of its relative simplicity. Addressing the low fidelity of the Griffin-Lim vocoder, this paper uses a WaveNet vocoder instead of Griffin-Lim for Tibetan speech synthesis. The paper first uses convolution operations and an attention mechanism to extract sequence features, then uses a linear projection and a feature amplification module to predict the mel spectrogram, and finally uses the WaveNet vocoder to synthesize the speech waveform. Experimental data show that our model achieves better performance in Tibetan speech synthesis.
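As context for the Griffin-Lim versus WaveNet comparison in this abstract, the sketch below shows the Griffin-Lim style baseline: inverting a mel spectrogram back to audio by iterative phase estimation. It assumes librosa; the file name and analysis settings are placeholders, and a WaveNet vocoder would instead generate the waveform directly from the mel spectrogram.

```python
import librosa

y, sr = librosa.load("tibetan_utt.wav", sr=22050)     # hypothetical utterance
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=1024,
                                     hop_length=256, n_mels=80)
# Griffin-Lim phase reconstruction from the mel spectrogram (the baseline
# that a neural vocoder such as WaveNet replaces).
y_hat = librosa.feature.inverse.mel_to_audio(mel, sr=sr, n_fft=1024,
                                             hop_length=256, n_iter=60)
```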
10

Eng, Erica, Can Xu, Sarah Medina, Fan-Yin Cheng, René Gifford, and Spencer Smith. "Objective discrimination of bimodal speech using the frequency following response: A machine learning approach." Journal of the Acoustical Society of America 152, no. 4 (October 2022): A91. http://dx.doi.org/10.1121/10.0015651.

Abstract:
Bimodal hearing, which combines a cochlear implant (CI) with a contralateral hearing aid, provides significant speech recognition benefits relative to a monaural CI. Factors predicting bimodal benefit remain poorly understood but may involve extracting fundamental frequency and/or formant information from the non-implanted ear. This study investigated whether neural responses (frequency following responses, FFRs) to simulated bimodal signals can be (1) accurately classified using machine learning and (2) used to predict perceptual bimodal benefit. We hypothesized that FFR classification accuracy would improve with increasing acoustic bandwidth due to greater fundamental and formant frequency access. Three vowels (/e/, /i/, and /ʊ/) with identical fundamental frequencies were manipulated to create five bimodal simulations (vocoder in right ear, lowpass filtered in left ear): Vocoder-only, Vocoder +125 Hz, Vocoder +250 Hz, Vocoder +500 Hz, and Vocoder +750 Hz. Perceptual performance on the BKB-SIN test was also measured using the same five configurations. FFR classification accuracy improved with increasing bimodal acoustic bandwidth. Furthermore, FFR bimodal benefit predicted behavioral bimodal benefit. These results indicate that the FFR may be useful in objectively verifying and tuning bimodal configurations.
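The simulated bimodal conditions described here pair a vocoded ear with a low-pass filtered ear at cutoffs from 125 to 750 Hz. Below is a minimal sketch of constructing one such stimulus, assuming NumPy/SciPy; vocode stands for any vocoder function (for example, the channel-vocoder sketch shown earlier) and is a hypothetical placeholder, as is the filter order.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bimodal_condition(vowel, fs, cutoff_hz, vocode):
    """Left ear: low-pass filtered acoustic signal; right ear: vocoded signal."""
    lp = butter(6, cutoff_hz, btype="lowpass", fs=fs, output="sos")
    left = sosfiltfilt(lp, vowel)            # residual acoustic-hearing ear
    right = vocode(vowel, fs)                # CI-simulated ear
    return np.stack([left, right], axis=0)   # 2 x N stereo stimulus
```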

Dissertations on the topic "Vocoder"

1

LeBlanc, Wilfrid P. "An advanced speech coder based on a rate-distortion theory framework." Dissertation, Department of Electrical Engineering, Carleton University, Ottawa, 1988.

2

Griffin, Daniel W. "Multi-band excitation vocoder." Thesis, Massachusetts Institute of Technology, 1987. http://hdl.handle.net/1721.1/14803.

3

Martins, José Antônio. "Vocoder LPC com quantização vetorial." [s.n.], 1991. http://repositorio.unicamp.br/jspui/handle/REPOSIP/261389.

Abstract:
Advisor: Fabio Violaro. Master's dissertation, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica (degree: Mestre em Engenharia Elétrica, Eletrônica e Comunicações).
This work describes the principles of the LPC vocoder and presents the methods used to compute its parameters. It also reports the results of simulations of LPC vocoders using scalar quantization, vector quantization, and interpolation of the quantized parameters. An unquantized LPC vocoder was designed first and served as the reference for evaluating the quantized vocoders. Scalar quantization of the log-area-ratio coefficients yielded a vocoder operating at 2200 bit/s while ensuring good quality and high intelligibility of the synthesized speech. With vector quantization, good performance was obtained at rates on the order of 1000 bit/s. These rates were reduced by 50% through linear interpolation, transmitting only the parameters of the odd-numbered frames. This produced vocoders operating at around 500 bit/s, whose synthesized speech was degraded relative to the previous systems but still retained good intelligibility.
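To illustrate the parameters this kind of LPC vocoder quantizes and transmits, the sketch below derives reflection coefficients with a Levinson-Durbin recursion, converts them to log-area ratios, and applies uniform scalar quantization. It assumes NumPy and is only a generic illustration; the thesis's bit allocations, frame rates, and vector-quantization codebooks are not reproduced.

```python
import numpy as np

def reflection_coeffs(frame, order=10):
    """Levinson-Durbin on the frame's autocorrelation; returns k[0..order-1]."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:len(frame) + order]
    a = np.zeros(order + 1); a[0] = 1.0
    k = np.zeros(order)
    err = r[0] + 1e-12
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k_i = -acc / err
        a_prev = a.copy()
        for j in range(1, i):
            a[j] = a_prev[j] + k_i * a_prev[i - j]
        a[i] = k_i
        k[i - 1] = k_i
        err *= 1.0 - k_i ** 2
    return k

def to_lar(k):
    """Log-area ratios (one common convention); well behaved for |k| < 1."""
    k = np.clip(k, -0.9999, 0.9999)
    return np.log((1.0 + k) / (1.0 - k))

def quantize(lar, bits=5, lo=-4.0, hi=4.0):
    """Uniform scalar quantization of each LAR to `bits` bits."""
    levels = 2 ** bits
    idx = np.round((np.clip(lar, lo, hi) - lo) / (hi - lo) * (levels - 1))
    return lo + idx * (hi - lo) / (levels - 1)
```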
4

Hudson, Nicholaus D. W. "The self-excited vocoder for mobile telephony." Thesis, University of Bath, 1992. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.760629.

5

Moore, James Thomas. "A mixed excitation vocoder with fuzzy logic classifier." Thesis, Monterey, California. Naval Postgraduate School, 1992. http://hdl.handle.net/10945/23960.

6

Foley, Jeffrey J. (Jeffrey Joseph). "Digital implementation of a frequency-lowering channel vocoder." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/38798.

Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1996.
Includes bibliographical references (p. 58-59).
by Jeffrey J. Foley.
M.Eng.
7

Carr, Raymond C. "Improvements to a pitch-synchronous linear predictive coding (LPC) vocoder." Thesis, University of Ottawa (Canada), 1989. http://hdl.handle.net/10393/5954.

8

Yeh, Ernest Nanjung 1975. "Advanced Vocoder Idle Slot Exploitation for TIA IS-136 standard." Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/47580.

Abstract:
Thesis (S.B. and M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998.
Includes bibliographical references (p. 55).
by Ernest Nanjung Yeh.
S.B. and M.Eng.
9

Manjunath, Sharath. "Implementation of a variable rate vocoder and its performance analysis." Thesis, This resource online, 1994. http://scholar.lib.vt.edu/theses/available/etd-06102009-063255/.

10

Iyengar, Vasu. "A low delay 16 kbit/sec coder for speech signals /." Thesis, McGill University, 1987. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=63799.


Books on the topic "Vocoder"

1

Surphlis, David S. Multi-band excitation vocoder. [s.l: The Author], 1996.

2

Redding, Christopher. Voice quality assessment of vocoders in tandem configuration. [Washington, D.C.]: U.S. Dept. of Commerce, National Telecommunications and Information Administration, 2001.

3

Moore, James Thomas. A mixed excitation vocoder with fuzzy logic classifier. Monterey, Calif: Naval Postgraduate School, 1992.

4

Papamichalis, Panos E. Practical approaches to speech coding. Englewood Cliffs, N.J: Prentice-Hall, 1987.

5

Tompkins, Dave. How to wreck a nice beach: The vocoder from World War II to hip-hop : the machine speaks. Brooklyn, NY: Melville House, 2010.

6

Tompkins, Dave. How to wreck a nice beach: The vocoder from World War II to hip-hop : the machine speaks. Brooklyn, NY: Melville House, 2011.

7

Fundamentals of voice-quality engineering in wireless networks. Cambridge, UK: Cambridge University Press, 2007.

8

How to wreck a nice beach: The vocoder from World War II to hip-hop : the machine speaks. Chicago: Stop Smiling/Melville House Pub., 2010.

9

Ramamurthy, Karthikeyan N. MATLAB software for the code excited linear prediction algorithm: The Federal Standard, 1016. San Rafael, CA: Morgan & Claypool Publishers, 2010.

10

Carroll, Angela, and Charles Moore. Vocoder. Petite Ivy Press, 2021.


Book chapters on the topic "Vocoder"

1

Weik, Martin H. "vocoder." In Computer Science and Communications Dictionary, 1900. Boston, MA: Springer US, 2000. http://dx.doi.org/10.1007/1-4020-0613-6_20884.

2

Chung, Jae H., and Ronald W. Schafer. "Vector Excitation Homomorphic Vocoder." In Advances in Speech Coding, 235–44. Boston, MA: Springer US, 1991. http://dx.doi.org/10.1007/978-1-4615-3266-8_23.

3

Gerstlauer, Andreas. "Design of a GSM Vocoder." In System Design, 175–92. Boston, MA: Springer US, 2001. http://dx.doi.org/10.1007/978-1-4615-1481-7_3.

4

Pirkle, Will C. "FFT Processing: The Phase Vocoder." In Designing Audio Effect Plugins in C++, 564–613. 2nd ed. New York, NY: Routledge, 2019. http://dx.doi.org/10.4324/9780429490248-20.

5

Vít, Jakub, Zdeněk Hanzlíček, and Jindřich Matoušek. "Czech Speech Synthesis with Generative Neural Vocoder." In Text, Speech, and Dialogue, 307–15. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-27947-9_26.

6

Vondra, Martin, and Robert Vích. "Speech Emotion Modification Using a Cepstral Vocoder." In Development of Multimodal Interfaces: Active Listening and Synchrony, 280–85. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-12397-9_23.

7

Loizou, Philipos C. "Speech Processing in Vocoder-Centric Cochlear Implants." In Cochlear and Brainstem Implants, 109–43. Basel: S. KARGER AG, 2006. http://dx.doi.org/10.1159/000094648.

8

Lu, Tangle, and Xiaoqun Zhao. "An MELP Vocoder Based on UVS and MVF." In Machine Learning and Intelligent Communications, 44–52. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-52730-7_5.

9

Mandeel, Ali Raheem, Mohammed Salah Al-Radhi, and Tamás Gábor Csapó. "Speaker Adaptation with Continuous Vocoder-Based DNN-TTS." In Speech and Computer, 407–16. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-87802-3_37.

10

Saranya, A., and N. Sripriya. "LPC VOCODER Using Instants of Significant Excitation and Pole Focusing." In Advances in Parallel Distributed Computing, 180–90. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-24037-9_18.


Conference papers on the topic "Vocoder"

1

Liu, Zhijun, Kuan Chen, and Kai Yu. "Neural Homomorphic Vocoder." In Interspeech 2020. ISCA: ISCA, 2020. http://dx.doi.org/10.21437/interspeech.2020-3188.

2

Ribeiro, Carlos M., Isabel M. Trancoso, and Diamantino A. Caseiro. "Phonetic vocoder assessment." In 6th International Conference on Spoken Language Processing (ICSLP 2000). ISCA: ISCA, 2000. http://dx.doi.org/10.21437/icslp.2000-663.

3

Barbany, Oriol, Antonio Bonafonte, and Santiago Pascual. "Multi-Speaker Neural Vocoder." In IberSPEECH 2018. ISCA: ISCA, 2018. http://dx.doi.org/10.21437/iberspeech.2018-7.

4

Tamamori, Akira, Tomoki Hayashi, Kazuhiro Kobayashi, Kazuya Takeda, and Tomoki Toda. "Speaker-Dependent WaveNet Vocoder." In Interspeech 2017. ISCA: ISCA, 2017. http://dx.doi.org/10.21437/interspeech.2017-314.

5

Prusa, Zdenek, and Nicki Holighaus. "Phase vocoder done right." In 2017 25th European Signal Processing Conference (EUSIPCO). IEEE, 2017. http://dx.doi.org/10.23919/eusipco.2017.8081353.

6

Cohen, Aaron E., Yvette T. Lee, and David A. Heide. "Vocoder susceptibility to baseband Trojans." In 2017 IEEE 8th Annual Ubiquitous Computing, Electronics and Mobile Communication Conference (UEMCON). IEEE, 2017. http://dx.doi.org/10.1109/uemcon.2017.8249068.

7

Obranovich, Charles R., John M. Golusky, Robert D. Preuss, Darren R. Fabbri, Daniel R. Cruthirds, Erin M. Aylward, James A. Freebersyser, and Stephen R. Kolek. "300 bps noise robust vocoder." In MILCOM 2010 - 2010 IEEE Military Communications Conference. IEEE, 2010. http://dx.doi.org/10.1109/milcom.2010.5680311.

8

Moreno, Asunción, José A. R. Fonollosa, and Josep Vidal. "Vocoder design based on HOS." In 3rd European Conference on Speech Communication and Technology (Eurospeech 1993). ISCA: ISCA, 1993. http://dx.doi.org/10.21437/eurospeech.1993-25.

9

Beskrovnyi, Ivan, Aleksandr Ivchenko, Pavel Kononyuk, Liubov Antufrieva, and Alexander Dvorkovich. "Low-Speed Vocoder with Noise Filtration." In 2019 International Conference on Engineering and Telecommunication (EnT). IEEE, 2019. http://dx.doi.org/10.1109/ent47717.2019.9030582.

10

Martino, Joseph Di. "Speech synthesis using phase vocoder techniques." In 5th European Conference on Speech Communication and Technology (Eurospeech 1997). ISCA: ISCA, 1997. http://dx.doi.org/10.21437/eurospeech.1997-467.


Reports of organizations on the topic "Vocoder"

1

Heide, David A., Aaron E. Cohen, Yvette T. Lee, and Thomas M. Moran. Universal Vocoder Using Variable Data Rate Vocoding. Fort Belvoir, VA: Defense Technical Information Center, June 2013. http://dx.doi.org/10.21236/ada588068.

2

Mack, M. A., and B. Gold. The Intelligibility of Non-Vocoded and Vocoded Semantically Anomalous Sentences. Fort Belvoir, VA: Defense Technical Information Center, July 1985. http://dx.doi.org/10.21236/ada160401.

3

Li, A. RTP Payload Format for Enhanced Variable Rate Codecs (EVRC) and Selectable Mode Vocoders (SMV). RFC Editor, July 2003. http://dx.doi.org/10.17487/rfc3558.

4

Mack, M., J. Tierney, and M. E. Boyle. The Intelligibility of Natural and LPC-Vocoded Words and Sentences Presented to Native and Non-Native Speakers of English. Fort Belvoir, VA: Defense Technical Information Center, July 1990. http://dx.doi.org/10.21236/ada226180.
