Academic literature on the topic 'Vocoder'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Vocoder.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Vocoder"

1

Cychosz, Margaret, Matthew B. Winn, and Matthew J. Goupell. "How to vocode: Using channel vocoders for cochlear-implant research." Journal of the Acoustical Society of America 155, no. 4 (April 1, 2024): 2407–37. http://dx.doi.org/10.1121/10.0025274.

Full text
Abstract:
The channel vocoder has become a useful tool to understand the impact of specific forms of auditory degradation—particularly the spectral and temporal degradation that reflect cochlear-implant processing. Vocoders have many parameters that allow researchers to answer questions about cochlear-implant processing in ways that overcome some logistical complications of controlling for factors in individual cochlear implant users. However, there is such a large variety in the implementation of vocoders that the term “vocoder” is not specific enough to describe the signal processing used in these experiments. Misunderstanding vocoder parameters can result in experimental confounds or unexpected stimulus distortions. This paper highlights the signal processing parameters that should be specified when describing vocoder construction. The paper also provides guidance on how to determine vocoder parameters within perception experiments, given the experimenter's goals and research questions, to avoid common signal processing mistakes. Throughout, we will assume that experimenters are interested in vocoders with the specific goal of better understanding cochlear implants.
APA, Harvard, Vancouver, ISO, and other styles
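As a companion to the abstract above, here is a minimal noise-carrier channel vocoder sketch in Python (NumPy/SciPy). It is an illustration only, not the authors' reference implementation; the channel count, band edges, filter orders, and envelope cutoff are assumed example values of exactly the kind of parameters the paper says should be reported. Swapping the noise carrier for a sine at each channel's center frequency would turn it into a sine-carrier vocoder.

# Minimal channel vocoder sketch; all parameter values are assumptions for illustration.
# Assumes fs is well above 2 * f_hi (e.g., 22050 Hz or more).
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def channel_vocode(x, fs, n_channels=8, f_lo=100.0, f_hi=8000.0, env_cutoff=50.0):
    """Split x into log-spaced bands, extract envelopes, re-impose them on noise carriers."""
    edges = np.logspace(np.log10(f_lo), np.log10(f_hi), n_channels + 1)  # analysis band edges
    env_sos = butter(2, env_cutoff / (fs / 2), btype="low", output="sos")
    out = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band", output="sos")
        band = sosfiltfilt(band_sos, x)                            # analysis filtering
        env = sosfiltfilt(env_sos, np.abs(hilbert(band)))          # smoothed Hilbert envelope
        carrier = sosfiltfilt(band_sos, np.random.randn(len(x)))   # band-limited noise carrier
        out += np.maximum(env, 0.0) * carrier                      # synthesis: envelope times carrier
    return out / (np.max(np.abs(out)) + 1e-12)                     # crude level normalization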
2

Karoui, Chadlia, Chris James, Pascal Barone, David Bakhos, Mathieu Marx, and Olivier Macherey. "Searching for the Sound of a Cochlear Implant: Evaluation of Different Vocoder Parameters by Cochlear Implant Users With Single-Sided Deafness." Trends in Hearing 23 (January 2019): 233121651986602. http://dx.doi.org/10.1177/2331216519866029.

Full text
Abstract:
Cochlear implantation in subjects with single-sided deafness (SSD) offers a unique opportunity to directly compare the percepts evoked by a cochlear implant (CI) with those evoked acoustically. Here, nine SSD-CI users performed a forced-choice task evaluating the similarity of speech processed by their CI with speech processed by several vocoders presented to their healthy ear. In each trial, subjects heard two intervals: their CI followed by a certain vocoder in Interval 1 and their CI followed by a different vocoder in Interval 2. The vocoders differed either (i) in carrier type (sinusoidal [SINE], bandfiltered noise [NOISE], and pulse-spreading harmonic complex [PSHC]) or (ii) in frequency mismatch between the analysis and synthesis frequency ranges (no mismatch, and two frequency-mismatched conditions of 2 and 4 equivalent rectangular bandwidths [ERBs]). Subjects had to state in which of the two intervals the CI and vocoder sounds were more similar. Despite a large intersubject variability, the PSHC vocoder was judged significantly more similar to the CI than the SINE or NOISE vocoders. Furthermore, the No-mismatch and 2-ERB mismatch vocoders were judged significantly more similar to the CI than the 4-ERB mismatch vocoder. The mismatch data were also interpreted by comparing spiral ganglion characteristic frequencies with electrode contact positions determined from postoperative computed tomography scans. Only one subject demonstrated a pattern of preference consistent with adaptation to the CI sound processor frequency-to-electrode allocation table and two subjects showed possible partial adaptation. Those subjects with adaptation patterns presented overall small and consistent frequency mismatches across their electrode arrays.
APA, Harvard, Vancouver, ISO, and other styles
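The frequency-mismatch conditions above are specified in equivalent rectangular bandwidths (ERBs). A small sketch, assuming example analysis band edges and the Glasberg and Moore ERB-number scale, shows how synthesis band edges can be shifted by 0, 2, or 4 ERBs relative to the analysis bands (the shift direction and the edge values are assumptions for illustration):

import numpy as np

def hz_to_erb_number(f_hz):
    # Glasberg & Moore (1990) ERB-number scale
    return 21.4 * np.log10(4.37 * f_hz / 1000.0 + 1.0)

def erb_number_to_hz(erb):
    return (10.0 ** (erb / 21.4) - 1.0) * 1000.0 / 4.37

analysis_edges = np.array([200.0, 500.0, 1000.0, 2000.0, 4000.0, 8000.0])  # assumed example edges
for shift_erb in (0, 2, 4):  # the no-mismatch, 2-ERB, and 4-ERB conditions
    synthesis_edges = erb_number_to_hz(hz_to_erb_number(analysis_edges) + shift_erb)
    print(shift_erb, np.round(synthesis_edges, 1))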
3

Harding, Eleanor, Etienne Gaudrain, Imke Hrycyk, Robert Harris, Barbara Tillmann, Bert Maat, Rolien Free, and Deniz Başkent. "Arousal but not valence: Music emotion categorization in normal hearing and cochlear implanted participants." Journal of the Acoustical Society of America 153, no. 3_supplement (March 1, 2023): A287. http://dx.doi.org/10.1121/10.0018868.

Full text
Abstract:
Perceiving acoustic cues that convey music emotion is challenging for cochlear implant (CI) users. Emotional arousal (stimulating/relaxing) can be conveyed by temporal cues such as tempo, while emotional valence (positive/negative) can be conveyed by spectral information salient to pitch and harmony. It is, however, unclear to what extent other temporal and spectral features convey emotional arousal and valence in music, respectively. In 23 normal-hearing participants, we varied the quality of temporal and spectral content using vocoders during a music emotion categorization task—musical excerpts conveyed joy (high arousal high valence), fear (high arousal low valence), serenity (low arousal high valence), and sorrow (low arousal low valence). Vocoder carriers (sinewave/noise) primarily modulated temporal information, and filter orders (low/high) primarily modulated spectral information. Improving the temporal content (using sinewave carriers) and the spectral content (using a high filter order) both improved categorization. Vocoder results were compared to data from 25 CI users performing the same task with non-vocoded musical excerpts. The CI user data showed a similar pattern of errors as observed for the vocoded conditions in normal-hearing participants, suggesting that increasing the quality of temporal information, and not only spectral details, could prove beneficial for CI users’ music emotion perception.
APA, Harvard, Vancouver, ISO, and other styles
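The filter-order manipulation above works because low-order channel filters have shallow skirts (more overlap between channels, hence more spectral smearing), while high-order filters have steep skirts. A short sketch with assumed band edges and orders makes the difference in out-of-band attenuation concrete:

import numpy as np
from scipy.signal import butter, sosfreqz

fs = 16000.0
band = (500.0, 1000.0)      # one assumed example channel
for order in (2, 8):        # "low" versus "high" prototype filter order (bandpass doubles it)
    sos = butter(order, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band", output="sos")
    w, h = sosfreqz(sos, worN=4096, fs=fs)
    idx = np.argmin(np.abs(w - 2.0 * band[1]))   # one octave above the upper band edge
    print(f"order {order}: {20 * np.log10(np.abs(h[idx]) + 1e-12):.1f} dB at {w[idx]:.0f} Hz")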
4

Roebel, Axel, and Frederik Bous. "Neural Vocoding for Singing and Speaking Voices with the Multi-Band Excited WaveNet." Information 13, no. 3 (February 23, 2022): 103. http://dx.doi.org/10.3390/info13030103.

Full text
Abstract:
The use of the mel spectrogram as a signal parameterization for voice generation is quite recent and linked to the development of neural vocoders. These are deep neural networks that allow reconstructing high-quality speech from a given mel spectrogram. While initially developed for speech synthesis, now neural vocoders have also been studied in the context of voice attribute manipulation, opening new means for voice processing in audio production. However, to be able to apply neural vocoders in real-world applications, two problems need to be addressed: (1) To support use in professional audio workstations, the computational complexity should be small, (2) the vocoder needs to support a large variety of speakers, differences in voice qualities, and a wide range of intensities potentially encountered during audio production. In this context, the present study will provide a detailed description of the Multi-band Excited WaveNet, a fully convolutional neural vocoder built around signal processing blocks. It will evaluate the performance of the vocoder when trained on a variety of multi-speaker and multi-singer databases, including an experimental evaluation of the neural vocoder trained on speech and singing voices. Addressing the problem of intensity variation, the study will introduce a new adaptive signal normalization scheme that allows for robust compensation for dynamic and static gain variations. Evaluations are performed using objective measures and a number of perceptual tests including different neural vocoder algorithms known from the literature. The results confirm that the proposed vocoder compares favorably to the state-of-the-art in its capacity to generalize to unseen voices and voice qualities. The remaining challenges will be discussed.
APA, Harvard, Vancouver, ISO, and other styles
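The adaptive normalization scheme mentioned above is specific to the Multi-band Excited WaveNet and is described in the article itself. As a loosely related illustration of the general idea only, compensating static and slowly varying gain at the vocoder input while keeping the gain track for resynthesis, a generic sketch might look like this; the frame-level estimate and smoothing length are assumptions, not the authors' method:

import numpy as np

def normalize_log_mel(log_mel, smooth_frames=31):
    """log_mel: array of shape (frames, mel_bins) in dB.
    Removes a slowly varying per-frame gain and returns it so it can be re-applied later."""
    frame_level = log_mel.mean(axis=1)                           # crude per-frame level (dB)
    kernel = np.ones(smooth_frames) / smooth_frames
    gain_track = np.convolve(frame_level, kernel, mode="same")   # slowly varying gain estimate
    return log_mel - gain_track[:, None], gain_track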
5

Ausili, Sebastian A., Bradford Backus, Martijn J. H. Agterberg, A. John van Opstal, and Marc M. van Wanrooij. "Sound Localization in Real-Time Vocoded Cochlear-Implant Simulations With Normal-Hearing Listeners." Trends in Hearing 23 (January 2019): 233121651984733. http://dx.doi.org/10.1177/2331216519847332.

Full text
Abstract:
Bilateral cochlear-implant (CI) users and single-sided deaf listeners with a CI are less effective at localizing sounds than normal-hearing (NH) listeners. This performance gap is due to the degradation of binaural and monaural sound localization cues, caused by a combination of device-related and patient-related issues. In this study, we targeted the device-related issues by measuring sound localization performance of 11 NH listeners, listening to free-field stimuli processed by a real-time CI vocoder. The use of a real-time vocoder is a new approach, which enables testing in a free-field environment. For the NH listening condition, all listeners accurately and precisely localized sounds according to a linear stimulus–response relationship with an optimal gain and a minimal bias both in the azimuth and in the elevation directions. In contrast, when listening with bilateral real-time vocoders, listeners tended to orient either to the left or to the right in azimuth and were unable to determine sound source elevation. When listening with an NH ear and a unilateral vocoder, localization was impoverished on the vocoder side but improved toward the NH side. Localization performance was also reflected by systematic variations in reaction times across listening conditions. We conclude that perturbation of interaural temporal cues, reduction of interaural level cues, and removal of spectral pinna cues by the vocoder impairs sound localization. Listeners seem to ignore cues that were made unreliable by the vocoder, leading to acute reweighting of available localization cues. We discuss how current CI processors prevent CI users from localizing sounds in everyday environments.
APA, Harvard, Vancouver, ISO, and other styles
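The binaural cues the vocoder perturbs are the interaural time difference (ITD) and the interaural level difference (ILD). A short, illustrative way to estimate both from a stereo recording (not the analysis used in the study) is:

import numpy as np

def itd_ild(left, right, fs):
    """Estimate ITD via the lag of maximum cross-correlation and ILD via an RMS ratio."""
    n = len(left)
    lag = np.argmax(np.correlate(left, right, mode="full")) - (n - 1)  # lag in samples
    itd_ms = 1000.0 * lag / fs               # sign follows np.correlate's convention
    rms = lambda s: np.sqrt(np.mean(s ** 2) + 1e-20)
    ild_db = 20.0 * np.log10(rms(left) / rms(right))
    return itd_ms, ild_db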
6

Wess, Jessica M., and Joshua G. W. Bernstein. "The Effect of Nonlinear Amplitude Growth on the Speech Perception Benefits Provided by a Single-Sided Vocoder." Journal of Speech, Language, and Hearing Research 62, no. 3 (March 25, 2019): 745–57. http://dx.doi.org/10.1044/2018_jslhr-h-18-0001.

Full text
Abstract:
Purpose: For listeners with single-sided deafness, a cochlear implant (CI) can improve speech understanding by giving the listener access to the ear with the better target-to-masker ratio (TMR; head shadow) or by providing interaural difference cues to facilitate the perceptual separation of concurrent talkers (squelch). CI simulations presented to listeners with normal hearing examined how these benefits could be affected by interaural differences in loudness growth in a speech-on-speech masking task. Method: Experiment 1 examined a target–masker spatial configuration where the vocoded ear had a poorer TMR than the nonvocoded ear. Experiment 2 examined the reverse configuration. Generic head-related transfer functions simulated free-field listening. Compression or expansion was applied independently to each vocoder channel (power-law exponents: 0.25, 0.5, 1, 1.5, or 2). Results: Compression reduced the benefit provided by the vocoder ear in both experiments. There was some evidence that expansion increased squelch in Experiment 1 but reduced the benefit in Experiment 2 where the vocoder ear provided a combination of head-shadow and squelch benefits. Conclusions: The effects of compression and expansion are interpreted in terms of envelope distortion and changes in the vocoded-ear TMR (for head shadow) or changes in perceived target–masker spatial separation (for squelch). The compression parameter is a candidate for clinical optimization to improve single-sided deafness CI outcomes.
APA, Harvard, Vancouver, ISO, and other styles
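The compression/expansion manipulation above maps each channel envelope through a power law. A minimal sketch follows (the reference level used for normalization is an assumption):

import numpy as np

def power_law_envelope(env, exponent, ref=1.0):
    """env_out = ref * (env / ref) ** exponent; exponents < 1 compress the envelope's
    dynamic range, 1 leaves it unchanged, and > 1 expands it."""
    return ref * (np.maximum(env, 0.0) / ref) ** exponent

for p in (0.25, 0.5, 1.0, 1.5, 2.0):   # the five exponents used in the study
    print(p, power_law_envelope(np.array([0.1, 0.5, 1.0]), p))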
7

Bosen, Adam K., and Michael F. Barry. "Serial Recall Predicts Vocoded Sentence Recognition Across Spectral Resolutions." Journal of Speech, Language, and Hearing Research 63, no. 4 (April 27, 2020): 1282–98. http://dx.doi.org/10.1044/2020_jslhr-19-00319.

Full text
Abstract:
Purpose: The goal of this study was to determine how various aspects of cognition predict speech recognition ability across different levels of speech vocoding within a single group of listeners. Method: We tested the ability of young adults (N = 32) with normal hearing to recognize Perceptually Robust English Sentence Test Open-set (PRESTO) sentences that were degraded with a vocoder to produce different levels of spectral resolution (16, eight, and four carrier channels). Participants also completed tests of cognition (fluid intelligence, short-term memory, and attention), which were used as predictors of sentence recognition. Sentence recognition was compared across vocoder conditions, predictors were correlated with individual differences in sentence recognition, and the relationships between predictors were characterized. Results: PRESTO sentence recognition performance declined with a decreasing number of vocoder channels, with no evident floor or ceiling performance in any condition. Individual ability to recognize PRESTO sentences was consistent relative to the group across vocoder conditions. Short-term memory, as measured with serial recall, was a moderate predictor of sentence recognition (ρ = 0.65). Serial recall performance was constant across vocoder conditions when measured with a digit span task. Fluid intelligence was marginally correlated with serial recall, but not sentence recognition. Attentional measures had no discernible relationship to sentence recognition and a marginal relationship with serial recall. Conclusions: Verbal serial recall is a substantial predictor of vocoded sentence recognition, and this predictive relationship is independent of spectral resolution. In populations that show variable speech recognition outcomes, such as listeners with cochlear implants, it should be possible to account for the independent effects of spectral resolution and verbal serial recall in their speech recognition ability. Supplemental Material: https://doi.org/10.23641/asha.12021051
APA, Harvard, Vancouver, ISO, and other styles
8

Yang, Jing, Jenna Barrett, Zhigang Yin, and Li Xu. "Recognition of foreign-accented vocoded speech by native English listeners." Acta Acustica 7 (2023): 43. http://dx.doi.org/10.1051/aacus/2023038.

Full text
Abstract:
This study examined how talker accentedness affects the recognition of noise-vocoded speech by native English listeners and how contextual information interplays with talker accentedness during this process. The listeners included 20 native English-speaking, normal-hearing adults aged between 19 and 23 years old. The stimuli were English Hearing in Noise Test (HINT) and Revised Speech Perception in Noise (R-SPIN) sentences produced by four native Mandarin talkers (two males and two females) who learned English as a second language. Two talkers (one in each sex) had a mild foreign accent and the other two had a moderate foreign accent. A six-channel noise vocoder was used to process the stimulus sentences. The vocoder-processed and unprocessed sentences were presented to the listeners. The results revealed that talkers’ foreign accents introduced additional detrimental effects besides spectral degradation and that the negative effect was exacerbated as the foreign accent became stronger. While the contextual information provided a beneficial role in recognizing mildly accented vocoded speech, the magnitude of contextual benefit decreased as the talkers’ accentedness increased. These findings revealed the joint influence of talker variability and sentence context on the perception of degraded speech.
APA, Harvard, Vancouver, ISO, and other styles
9

Bak, Taejun, Junmo Lee, Hanbin Bae, Jinhyeok Yang, Jae-Sung Bae, and Young-Sun Joo. "Avocodo: Generative Adversarial Network for Artifact-Free Vocoder." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 11 (June 26, 2023): 12562–70. http://dx.doi.org/10.1609/aaai.v37i11.26479.

Full text
Abstract:
Neural vocoders based on the generative adversarial neural network (GAN) have been widely used due to their fast inference speed and lightweight networks while generating high-quality speech waveforms. Since the perceptually important speech components are primarily concentrated in the low-frequency bands, most GAN-based vocoders perform multi-scale analysis that evaluates downsampled speech waveforms. This multi-scale analysis helps the generator improve speech intelligibility. However, in preliminary experiments, we discovered that the multi-scale analysis which focuses on the low-frequency bands causes unintended artifacts, e.g., aliasing and imaging artifacts, which degrade the synthesized speech waveform quality. Therefore, in this paper, we investigate the relationship between these artifacts and GAN-based vocoders and propose a GAN-based vocoder, called Avocodo, that allows the synthesis of high-fidelity speech with reduced artifacts. We introduce two kinds of discriminators to evaluate speech waveforms in various perspectives: a collaborative multi-band discriminator and a sub-band discriminator. We also utilize a pseudo quadrature mirror filter bank to obtain downsampled multi-band speech waveforms while avoiding aliasing. According to experimental results, Avocodo outperforms baseline GAN-based vocoders, both objectively and subjectively, while reproducing speech with fewer artifacts.
APA, Harvard, Vancouver, ISO, and other styles
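The aliasing problem the paper targets comes from evaluating plainly subsampled waveforms. This small demonstration (not Avocodo's pseudo-QMF analysis) shows a tone above the new Nyquist frequency folding down to a spurious low frequency under naive subsampling, while filtered decimation suppresses it:

import numpy as np
from scipy.signal import decimate

fs, q = 16000, 4
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 3500 * t)   # 3.5 kHz tone, above the new Nyquist of 2 kHz

naive = x[::q]                     # plain subsampling: the tone aliases down to 500 Hz
filtered = decimate(x, q)          # anti-aliasing low-pass first: the tone is strongly attenuated

for name, y in (("naive", naive), ("decimate", filtered)):
    spec = np.abs(np.fft.rfft(y))
    peak_hz = np.argmax(spec) * (fs / q) / len(y)
    print(f"{name}: spectral peak near {peak_hz:.0f} Hz, magnitude {spec.max():.2f}")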
10

Shi, Yong Peng. "Research and Implementation of MELP Algorithm Based on TMS320VC5509A." Advanced Materials Research 934 (May 2014): 239–44. http://dx.doi.org/10.4028/www.scientific.net/amr.934.239.

Full text
Abstract:
A MELP vocoder is designed based on the DSP TMS320VC5509A in this article. First, the MELP algorithm is described; then the modeling approach and the process of realizing it on the DSP are presented. Finally, a functional simulation of the encoding and decoding system is completed, and the experimental results show that the synthesized signals fit the original ones well and that the quality of the speech obtained from the vocoder is good.
APA, Harvard, Vancouver, ISO, and other styles
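MELP (mixed-excitation linear prediction) builds on standard LPC analysis. As general background rather than anything from the paper's DSP implementation, a minimal autocorrelation-method LPC analysis using the Levinson-Durbin recursion looks like this (frame length, window, and order are assumed examples):

import numpy as np

def lpc(frame, order=10):
    """Autocorrelation-method LPC; returns coefficients a (a[0] = 1) and the residual energy."""
    x = frame * np.hamming(len(frame))
    r = np.correlate(x, x, mode="full")[len(x) - 1 : len(x) + order]   # r[0..order]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        k = -(r[i] + np.dot(a[1:i], r[i - 1:0:-1])) / err   # reflection coefficient
        a[1:i] += k * a[i - 1:0:-1]                         # update lower-order coefficients
        a[i] = k
        err *= 1.0 - k * k
    return a, err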

Dissertations / Theses on the topic "Vocoder"

1

LeBlanc, Wilfrid P. (Wilfrid Paul). "An advanced speech coder based on a rate-distortion theory framework." Dissertation (Electrical Engineering), Carleton University, Ottawa, 1988.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Griffin, Daniel W. "Multi-band excitation vocoder." Thesis, Massachusetts Institute of Technology, 1987. http://hdl.handle.net/1721.1/14803.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Martins, José Antônio. "Vocoder LPC com quantização vetorial." [s.n.], 1991. http://repositorio.unicamp.br/jspui/handle/REPOSIP/261389.

Full text
Abstract:
Advisor: Fabio Violaro
Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica
This work describes the principles of the LPC vocoder and the methods used to compute its parameters. It also presents results from simulations of LPC vocoders using scalar quantization, vector quantization, and interpolation of the quantized parameters. An unquantized LPC vocoder was designed first and served as the reference for evaluating the quantized vocoders. Scalar quantization of the log-area-ratio coefficients yielded a vocoder at 2200 bit/s with good quality and high intelligibility of the synthesized speech. Vector quantization gave good performance at rates on the order of 1000 bit/s. These rates were reduced by 50% through linear interpolation, transmitting only the parameters of the odd-numbered frames. Vocoders at rates around 500 bit/s were thus obtained, with synthesized speech that is degraded relative to the previous systems but still ensures good intelligibility.
Master's degree
Electronics and Communications
Master in Electrical Engineering
APA, Harvard, Vancouver, ISO, and other styles
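Two of the ideas in the abstract above, coding the LPC parameters as log-area ratios (LARs) and halving the bit rate by transmitting only odd-numbered frames and interpolating the rest, can be sketched as follows (the LAR sign convention and the simple averaging are assumptions for illustration):

import numpy as np

def reflection_to_lar(k):
    """Log-area ratios from reflection coefficients (requires |k| < 1)."""
    return np.log((1.0 + k) / (1.0 - k))

def lar_to_reflection(lar):
    return np.tanh(lar / 2.0)

def interpolate_skipped_frames(params_odd):
    """Rebuild a full parameter track from transmitted (odd-numbered) frames:
    each skipped frame is the average of its two transmitted neighbours."""
    full = []
    for cur, nxt in zip(params_odd[:-1], params_odd[1:]):
        full.append(cur)
        full.append(0.5 * (cur + nxt))   # linearly interpolated even frame
    full.append(params_odd[-1])
    return np.array(full)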
4

Hudson, Nicholaus D. W. "The self-excited vocoder for mobile telephony." Thesis, University of Bath, 1992. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.760629.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Moore, James Thomas. "A mixed excitation vocoder with fuzzy logic classifier." Thesis, Monterey, California. Naval Postgraduate School, 1992. http://hdl.handle.net/10945/23960.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Foley, Jeffrey J. (Jeffrey Joseph). "Digital implementation of a frequency-lowering channel vocoder." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/38798.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1996.
Includes bibliographical references (p. 58-59).
by Jeffrey J. Foley.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
7

Carr, Raymond C. "Improvements to a pitch-synchronous linear predictive coding (LPC) vocoder." Thesis, University of Ottawa (Canada), 1989. http://hdl.handle.net/10393/5954.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Yeh, Ernest Nanjung 1975. "Advanced Vocoder Idle Slot Exploitation for TIA IS-136 standard." Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/47580.

Full text
Abstract:
Thesis (S.B. and M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998.
Includes bibliographical references (p. 55).
by Ernest Nanjung Yeh.
S.B.and M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
9

Manjunath, Sharath. "Implementation of a variable rate vocoder and its performance analysis." Thesis, This resource online, 1994. http://scholar.lib.vt.edu/theses/available/etd-06102009-063255/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Iyengar, Vasu. "A low delay 16 kbit/sec coder for speech signals /." Thesis, McGill University, 1987. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=63799.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Vocoder"

1

Surphlis, David S. Multi-band excitation vocoder. [s.l: The Author], 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Redding, Christopher. Voice quality assessment of vocoders in tandem configuration. [Washington, D.C.]: U.S. Dept. of Commerce, National Telecommunications and Information Administration, 2001.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

DeMinco, N., Jeanne Lindner, United States. National Telecommunications and Information Administration, and Institute for Telecommunication Sciences, eds. Voice quality assessment of vocoders in tandem configuration. [Washington, D.C.]: U.S. Dept. of Commerce, National Telecommunications and Information Administration, 2001.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Moore, James Thomas. A mixed excitation vocoder with fuzzy logic classifier. Monterey, Calif: Naval Postgraduate School, 1992.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Papamichalis, Panos E. Practical approaches to speech coding. Englewood Cliffs, N.J: Prentice-Hall, 1987.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Tompkins, Dave. How to wreck a nice beach: The vocoder from World War II to hip-hop : the machine speaks. Brooklyn, NY: Melville House, 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Tompkins, Dave. How to wreck a nice beach: The vocoder from World War II to hip-hop : the machine speaks. Brooklyn, NY: Melville House, 2010.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Ramamurthy, Karthikeyan N. MATLAB software for the code excited linear prediction algorithm: The Federal Standard, 1016. San Rafael, Calif. (1537 Fourth Street, San Rafael, CA 94901 USA): Morgan & Claypool Publishers, 2010.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Tubach, Jean-Pierre, Louis-Jean Boë, and Calliope (Association), eds. La Parole et son traitement automatique. Paris: Masson, 1989.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Carroll, Angela, and Charles Moore. Vocoder. Petite Ivy Press, 2021.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Vocoder"

1

Weik, Martin H. "vocoder." In Computer Science and Communications Dictionary, 1900. Boston, MA: Springer US, 2000. http://dx.doi.org/10.1007/1-4020-0613-6_20884.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Drymonitis, Alexandros. "Phase Vocoder Techniques." In The Python Audio Cookbook, 100–108. London: Focal Press, 2023. http://dx.doi.org/10.4324/9781003386964-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Chung, Jae H., and Ronald W. Schafer. "Vector Excitation Homomorphic Vocoder." In Advances in Speech Coding, 235–44. Boston, MA: Springer US, 1991. http://dx.doi.org/10.1007/978-1-4615-3266-8_23.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Gerstlauer, Andreas. "Design of a GSM Vocoder." In System Design, 175–92. Boston, MA: Springer US, 2001. http://dx.doi.org/10.1007/978-1-4615-1481-7_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Pirkle, Will C. "FFT Processing: The Phase Vocoder." In Designing Audio Effect Plugins in C++, 564–613. 2nd ed. New York, NY: Routledge, 2019. http://dx.doi.org/10.4324/9780429490248-20.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Vít, Jakub, Zdeněk Hanzlíček, and Jindřich Matoušek. "Czech Speech Synthesis with Generative Neural Vocoder." In Text, Speech, and Dialogue, 307–15. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-27947-9_26.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Vondra, Martin, and Robert Vích. "Speech Emotion Modification Using a Cepstral Vocoder." In Development of Multimodal Interfaces: Active Listening and Synchrony, 280–85. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-12397-9_23.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Loizou, Philipos C. "Speech Processing in Vocoder-Centric Cochlear Implants." In Cochlear and Brainstem Implants, 109–43. Basel: S. KARGER AG, 2006. http://dx.doi.org/10.1159/000094648.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Sun, Chengzhe, Ehab AlBadawy, Timothy F. Davison, Sarah R. Robinson, Ming-Ching Chang, and Siwei Lyu. "Using Vocoder Artifacts For Audio Deepfakes Detection." In Adversarial Multimedia Forensics, 263–82. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-49803-9_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Lu, Tangle, and Xiaoqun Zhao. "An MELP Vocoder Based on UVS and MVF." In Machine Learning and Intelligent Communications, 44–52. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-52730-7_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Vocoder"

1

Liu, Zhijun, Kuan Chen, and Kai Yu. "Neural Homomorphic Vocoder." In Interspeech 2020. ISCA: ISCA, 2020. http://dx.doi.org/10.21437/interspeech.2020-3188.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ribeiro, Carlos M., Isabel M. Trancoso, and Diamantino A. Caseiro. "Phonetic vocoder assessment." In 6th International Conference on Spoken Language Processing (ICSLP 2000). ISCA: ISCA, 2000. http://dx.doi.org/10.21437/icslp.2000-663.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Barbany, Oriol, Antonio Bonafonte, and Santiago Pascual. "Multi-Speaker Neural Vocoder." In IberSPEECH 2018. ISCA: ISCA, 2018. http://dx.doi.org/10.21437/iberspeech.2018-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Tamamori, Akira, Tomoki Hayashi, Kazuhiro Kobayashi, Kazuya Takeda, and Tomoki Toda. "Speaker-Dependent WaveNet Vocoder." In Interspeech 2017. ISCA: ISCA, 2017. http://dx.doi.org/10.21437/interspeech.2017-314.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Prusa, Zdenek, and Nicki Holighaus. "Phase vocoder done right." In 2017 25th European Signal Processing Conference (EUSIPCO). IEEE, 2017. http://dx.doi.org/10.23919/eusipco.2017.8081353.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Cohen, Aaron E., Yvette T. Lee, and David A. Heide. "Vocoder susceptibility to baseband Trojans." In 2017 IEEE 8th Annual Ubiquitous Computing, Electronics and Mobile Communication Conference (UEMCON). IEEE, 2017. http://dx.doi.org/10.1109/uemcon.2017.8249068.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Obranovich, Charles R., John M. Golusky, Robert D. Preuss, Darren R. Fabbri, Daniel R. Cruthirds, Erin M. Aylward, James A. Freebersyser, and Stephen R. Kolek. "300 bps noise robust vocoder." In MILCOM 2010 - 2010 IEEE Military Communications Conference. IEEE, 2010. http://dx.doi.org/10.1109/milcom.2010.5680311.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Moreno, Asunción, José A. R. Fonollosa, and Josep Vidal. "Vocoder design based on HOS." In 3rd European Conference on Speech Communication and Technology (Eurospeech 1993). ISCA: ISCA, 1993. http://dx.doi.org/10.21437/eurospeech.1993-25.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Beskrovnyi, Ivan, Aleksandr Ivchenko, Pavel Kononyuk, Liubov Antufrieva, and Alexander Dvorkovich. "Low-Speed Vocoder with Noise Filtration." In 2019 International Conference on Engineering and Telecommunication (EnT). IEEE, 2019. http://dx.doi.org/10.1109/ent47717.2019.9030582.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Martino, Joseph Di. "Speech synthesis using phase vocoder techniques." In 5th European Conference on Speech Communication and Technology (Eurospeech 1997). ISCA: ISCA, 1997. http://dx.doi.org/10.21437/eurospeech.1997-467.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Vocoder"

1

Heide, David A., Aaron E. Cohen, Yvette T. Lee, and Thomas M. Moran. Universal Vocoder Using Variable Data Rate Vocoding. Fort Belvoir, VA: Defense Technical Information Center, June 2013. http://dx.doi.org/10.21236/ada588068.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Mack, M. A., and B. Gold. The Intelligibility of Non-Vocoded and Vocoded Semantically Anomalous Sentences. Fort Belvoir, VA: Defense Technical Information Center, July 1985. http://dx.doi.org/10.21236/ada160401.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Li, A. RTP Payload Format for Enhanced Variable Rate Codecs (EVRC) and Selectable Mode Vocoders (SMV). RFC Editor, July 2003. http://dx.doi.org/10.17487/rfc3558.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Mack, M., J. Tierney, and M. E. Boyle. The Intelligibility of Natural and LPC-Vocoded Words and Sentences Presented to Native and Non-Native Speakers of English. Fort Belvoir, VA: Defense Technical Information Center, July 1990. http://dx.doi.org/10.21236/ada226180.

Full text
APA, Harvard, Vancouver, ISO, and other styles