A ready-made bibliography on "Speech synthesis"

Create accurate references in APA, MLA, Chicago, Harvard, and many other styles.

See the lists of current articles, books, dissertations, conference papers, and other scholarly sources on "Speech synthesis".

Next to every work in the bibliography there is an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a publication as a .pdf file and read its abstract online, whenever the relevant metadata are available.

Journal articles on the topic "Speech synthesis"

1. Hirose, Yoshifumi. "Speech synthesis apparatus and speech synthesis method". Journal of the Acoustical Society of America 128, no. 1 (2010): 515. http://dx.doi.org/10.1121/1.3472332.

2. Murthy, Savitha, and Dinkar Sitaram. "Low Resource Kannada Speech Recognition using Lattice Rescoring and Speech Synthesis". Indian Journal Of Science And Technology 16, no. 4 (January 29, 2023): 282–91. http://dx.doi.org/10.17485/ijst/v16i4.2371.

3. Silverman, Kim E. A. "Speech synthesis". Journal of the Acoustical Society of America 90, no. 6 (December 1991): 3391. http://dx.doi.org/10.1121/1.401356.

4. Takagi, Tohru. "Speech Synthesis". Journal of the Institute of Television Engineers of Japan 46, no. 2 (1992): 163–71. http://dx.doi.org/10.3169/itej1978.46.163.

5. Kuusisto, Finn. "Speech synthesis". XRDS: Crossroads, The ACM Magazine for Students 21, no. 1 (October 14, 2014): 63. http://dx.doi.org/10.1145/2667637.

6. Kamai, Takahiro, and Yumiko Kato. "Speech Synthesis Method And Speech Synthesizer". Journal of the Acoustical Society of America 129, no. 4 (2011): 2356. http://dx.doi.org/10.1121/1.3582212.

7. Kagoshima, Takehiko, and Masami Akamine. "Speech synthesis method and speech synthesizer". Journal of the Acoustical Society of America 125, no. 6 (2009): 4108. http://dx.doi.org/10.1121/1.3155494.

8. Suckle, Leonard I. "Speech synthesis system". Journal of the Acoustical Society of America 84, no. 4 (October 1988): 1580. http://dx.doi.org/10.1121/1.397209.

9. Sharman, Richard Anthony. "Speech synthesis system". Journal of the Acoustical Society of America 103, no. 6 (June 1998): 3136. http://dx.doi.org/10.1121/1.423023.

10. Kagoshima, Takehiko, and Masami Akamine. "Speech synthesis method". Journal of the Acoustical Society of America 124, no. 5 (2008): 2678. http://dx.doi.org/10.1121/1.3020583.

Doctoral dissertations on the topic "Speech synthesis"

1. Donovan, R. E. "Trainable speech synthesis". Thesis, University of Cambridge, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.598598.

Abstract:
This thesis is concerned with the synthesis of speech using trainable systems. The research it describes was conducted with two principal aims: to build a hidden Markov model (HMM) based speech synthesis system which could synthesise very high quality speech; and to ensure that all the parameters used by the system were obtained through training. The motivation behind the first of these aims was to determine if the HMM techniques which have been applied so successfully in recent years to the problem of automatic speech recognition could achieve a similar level of success in the field of speech synthesis. The motivation behind the second aim was to construct a system that would be very flexible with respect to changing voices, or even languages. A synthesis system was developed which used the clustered states of a set of decision-tree state-clustered HMMs as its synthesis units. The synthesis parameters for each clustered state were obtained completely automatically through training on a one hour single-speaker continuous-speech database. During synthesis the required utterance, specified as a string of words of known phonetic pronunciation, was generated as a sequence of these clustered states. Initially, each clustered state was associated with a single linear prediction (LP) vector, and LP synthesis used to generate the sequence of vectors corresponding to the state sequence required. Numerous shortcomings were identified in this system, and these were addressed through improvements to its transcription, clustering, and segmentation capabilities. The LP synthesis scheme was replaced by a TD-PSOLA scheme which synthesised speech by concatenating waveform segments selected to represent each clustered state.
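The abstract above ends with TD-PSOLA resynthesis, in which speech is rebuilt by concatenating windowed waveform segments via overlap-add. The following is a minimal illustrative sketch of that overlap-add idea, not the thesis's implementation: real TD-PSOLA places windows pitch-synchronously, whereas this toy uses a fixed hop and a Hann window for simplicity.

```python
import numpy as np

def overlap_add(segments, hop):
    """Concatenate equal-length waveform segments by windowed overlap-add.

    Each segment is Hann-windowed and summed into the output buffer at a
    constant hop. Pitch-synchronous placement (the "P" in PSOLA) is omitted
    to keep the sketch short.
    """
    seg_len = len(segments[0])
    out = np.zeros(hop * (len(segments) - 1) + seg_len)
    window = np.hanning(seg_len)
    for i, seg in enumerate(segments):
        out[i * hop : i * hop + seg_len] += window * seg
    return out

# Two constant segments overlap-added at a half-segment hop
segs = [np.ones(8), np.ones(8)]
y = overlap_add(segs, hop=4)
print(len(y))  # 12
```

With a hop of half the window length, the Hann windows partially compensate each other in the overlap region; unit-selection systems of the kind described above apply the same mechanism to segments chosen from a recorded database.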
2. Greenwood, Andrew Richard. "Articulatory speech synthesis". Thesis, University of Liverpool, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.386773.

3. Tsukanova, Anastasiia. "Articulatory speech synthesis". Electronic Thesis or Diss., Université de Lorraine, 2019. http://www.theses.fr/2019LORR0166.

Abstract:
The thesis is set in the domain of articulatory speech synthesis and consists of three major parts: the first two are dedicated to the development of two articulatory speech synthesizers and the third addresses how we can relate them to each other. The first approach results from a rule-based approach to articulatory speech synthesis that aimed to have a comprehensive control over the articulators (the jaw, the tongue, the lips, the velum, the larynx and the epiglottis). This approach used a dataset of static mid-sagittal magnetic resonance imaging (MRI) captures showing blocked articulation of French vowels and a set of consonant-vowel syllables; that dataset was encoded with a PCA-based vocal tract model. Then the system comprised several components: using the recorded articulatory configurations to drive a rule-based articulatory speech synthesizer as a source of target positions to attain (which is the main contribution of this first part); adjusting the obtained vocal tract shapes from the phonetic perspective; running an acoustic simulation unit to obtain the sound. The results of this synthesis were evaluated visually, acoustically and perceptually, and the problems encountered were broken down by their origin: the dataset, its modeling, the algorithm for managing the vocal tract shapes, their translation to the area functions, and the acoustic simulation. We concluded that, among our test examples, the articulatory strategies for vowels and stops are most correct, followed by those of nasals and fricatives. The second explored approach started off a baseline deep feed-forward neural network-based speech synthesizer trained with the standard recipe of Merlin on the audio recorded during real-time MRI (RT-MRI) acquisitions: denoised (and yet containing a considerable amount of noise of the MRI machine) speech in French and force-aligned state labels encoding phonetic and linguistic information. 
This synthesizer was augmented with eight parameters representing articulatory information---the lips opening and protrusion, the distance between the tongue and the velum, the velum and the pharyngeal wall and the tongue and the pharyngeal wall---that were automatically extracted from the captures and aligned with the audio signal and the linguistic specification. The jointly synthesized speech and articulatory sequences were evaluated objectively with dynamic time warping (DTW) distance, mean mel-cepstrum distortion (MCD), BAP (band aperiodicity prediction error), and three measures for F0: RMSE (root mean square error), CORR (correlation coefficient) and V/UV (frame-level voiced/unvoiced error). The consistency of articulatory parameters with the phonetic label was analyzed as well. I concluded that the generated articulatory parameter sequences matched the original ones acceptably closely, despite struggling more at attaining a contact between the articulators, and that the addition of articulatory parameters did not hinder the original acoustic model. The two approaches above are linked through the use of two different kinds of MRI speech data. This motivated a search for such coarticulation-aware targets as those that we had in the static case to be present or absent in the real-time data. To compare static and real-time MRI captures, the measures of structural similarity, Earth mover's distance, and SIFT were utilized; having analyzed these measures for validity and consistency, I qualitatively and quantitatively studied their temporal behavior, interpreted it and analyzed the identified similarities. I concluded that SIFT and structural similarity did capture some articulatory information and that their behavior, overall, validated the static MRI dataset. [...]
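The F0 measures named in this abstract (RMSE, CORR, and frame-level V/UV error) can be sketched as follows. The array convention (0 Hz marking unvoiced frames) and the function name are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def f0_metrics(ref_f0, gen_f0):
    """Frame-level F0 evaluation: RMSE, correlation, and V/UV error.

    ref_f0, gen_f0: arrays of F0 values in Hz, with 0.0 marking
    unvoiced frames. RMSE and correlation are computed only over
    frames voiced in both sequences.
    """
    ref_v = ref_f0 > 0
    gen_v = gen_f0 > 0
    vuv_error = np.mean(ref_v != gen_v)   # voiced/unvoiced disagreement rate
    both = ref_v & gen_v                  # frames voiced in both sequences
    rmse = np.sqrt(np.mean((ref_f0[both] - gen_f0[both]) ** 2))
    corr = np.corrcoef(ref_f0[both], gen_f0[both])[0, 1]
    return rmse, corr, vuv_error

ref = np.array([100.0, 110.0, 0.0, 120.0])
gen = np.array([102.0, 108.0, 0.0, 0.0])
rmse, corr, vuv = f0_metrics(ref, gen)
print(round(vuv, 2))  # 0.25
```

Restricting RMSE and correlation to mutually voiced frames is the usual convention in statistical parametric synthesis evaluation, since F0 is undefined on unvoiced frames.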
4. Sun, Felix (Felix W.). "Speech Representation Models for Speech Synthesis and Multimodal Speech Recognition". Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/106378.

Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 59-63).
The field of speech recognition has seen steady advances over the last two decades, leading to the accurate, real-time recognition systems available on mobile phones today. In this thesis, I apply speech modeling techniques developed for recognition to two other speech problems: speech synthesis and multimodal speech recognition with images. In both problems, there is a need to learn a relationship between speech sounds and another source of information. For speech synthesis, I show that using a neural network acoustic model results in a synthesizer that is more tolerant of noisy training data than previous work. For multimodal recognition, I show how information from images can be effectively integrated into the recognition search framework, resulting in improved accuracy when image data is available.
by Felix Sun.
M. Eng.
5. Morton, K. "Speech production and synthesis". Thesis, University of Essex, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.377930.

6. Jin, Yi-Xuan. "A HIGH SPEED DIGITAL IMPLEMENTATION OF LPC SPEECH SYNTHESIZER USING THE TMS320". Thesis, The University of Arizona, 1985. http://hdl.handle.net/10150/275309.

7. Wong, Chun-ho Eddy. "Reliability of rating synthesized hypernasal speech signals in connected speech and vowels". Click to view the E-thesis via HKU Scholars Hub, 2007. http://lookup.lib.hku.hk/lookup/bib/B4200617X.

Abstract:
Thesis (B.Sc)--University of Hong Kong, 2007.
"A dissertation submitted in partial fulfilment of the requirements for the Bachelor of Science (Speech and Hearing Sciences), The University of Hong Kong, June 30, 2007." Includes bibliographical references (p. 28-30). Also available in print.
8. Peng, Antai. "Speech expression modeling and synthesis". Diss., Georgia Institute of Technology, 1996. http://hdl.handle.net/1853/13560.

9. Brierton, Richard A. "Variable frame-rate speech synthesis". Thesis, University of Liverpool, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.357363.

10. Klompje, Gideon. "A parametric monophone speech synthesis system". Thesis, Link to online version, 2006. http://hdl.handle.net/10019/561.

Books on the topic "Speech synthesis"

1. Keller, Eric, and European Cooperation in the Field of Scientific and Technical Research (Organization). COST 258, eds. Improvements in speech synthesis: COST 258: the naturalness of synthetic speech. Chichester, West Sussex: J. Wiley, 2002.

2. Holmes, J. N. Speech synthesis and recognition. 2nd ed. New York: Taylor & Francis, 2001.

3. Keller, E., G. Bailly, A. Monaghan, J. Terken, and M. Huckvale, eds. Improvements in Speech Synthesis. Chichester, UK: John Wiley & Sons, Ltd, 2001. http://dx.doi.org/10.1002/0470845945.

4. van Santen, Jan P. H., Joseph P. Olive, Richard W. Sproat, and Julia Hirschberg, eds. Progress in Speech Synthesis. New York, NY: Springer New York, 1997. http://dx.doi.org/10.1007/978-1-4612-1894-4.

5. Holmes, Wendy, ed. Speech synthesis and recognition. 2nd ed. London: Taylor & Francis, 2002.

6. Taylor, Paul. Text-to-speech synthesis. Cambridge, UK: Cambridge University Press, 2009.

7. Kleijn, W. B., and K. K. Paliwal, eds. Speech coding and synthesis. Amsterdam: Elsevier, 1995.

8. International Resource Development, Inc., ed. Speech recognition & voice synthesis. Norwalk, Conn., U.S.A. (6 Prowitt St., Norwalk 06855): International Resource Development, 1985.

9. Van Santen, Jan P. H., ed. Progress in speech synthesis. New York: Springer, 1997.

Book chapters on the topic "Speech synthesis"

1. Scully, Celia. "Articulatory Synthesis". In Speech Production and Speech Modelling, 151–86. Dordrecht: Springer Netherlands, 1990. http://dx.doi.org/10.1007/978-94-009-2037-8_7.

2. Schroeder, Manfred R. "Speech Synthesis". In Computer Speech, 85–90. Berlin, Heidelberg: Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/978-3-662-03861-1_5.

3. Schroeder, Manfred R. "Speech Synthesis". In Computer Speech, 129–34. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-662-06384-2_6.

4. Owens, F. J. "Speech Synthesis". In Signal Processing of Speech, 88–121. London: Macmillan Education UK, 1993. http://dx.doi.org/10.1007/978-1-349-22599-6_5.

5. Dutoit, Thierry, and Baris Bozkurt. "Speech Synthesis". In Handbook of Signal Processing in Acoustics, 557–85. New York, NY: Springer New York, 2008. http://dx.doi.org/10.1007/978-0-387-30441-0_30.

6. Sinha, Priyabrata. "Speech Synthesis". In Speech Processing in Embedded Systems, 157–64. Boston, MA: Springer US, 2009. http://dx.doi.org/10.1007/978-0-387-75581-6_11.

7. Hinterleitner, Florian. "Speech Synthesis". In Quality of Synthetic Speech, 5–18. Singapore: Springer Singapore, 2017. http://dx.doi.org/10.1007/978-981-10-3734-4_2.

8. Kurematsu, Akira, and Tsuyoshi Morimoto. "Speech Synthesis". In Automatic Speech Translation, 71–85. London: CRC Press, 2023. http://dx.doi.org/10.1201/9780429333385-4.

9. Beckman, Mary E. "Speech Models and Speech Synthesis". In Progress in Speech Synthesis, 185–209. New York, NY: Springer New York, 1997. http://dx.doi.org/10.1007/978-1-4612-1894-4_15.

10. Suendermann, David, Harald Höge, and Alan Black. "Challenges in Speech Synthesis". In Speech Technology, 19–32. New York, NY: Springer US, 2010. http://dx.doi.org/10.1007/978-0-387-73819-2_2.

Conference papers on the topic "Speech synthesis"

1. Taylor, P. "Speech synthesis". In IEE Colloquium Speech and Language Engineering - State of the Art. IEE, 1998. http://dx.doi.org/10.1049/ic:19980957.

2. BREEN, AP. "SPEECH SYNTHESIS". In Autumn Conference 1998. Institute of Acoustics, 2024. http://dx.doi.org/10.25144/18952.

3. Louw, Johannes A., Daniel R. van Niekerk, and Georg I. Schlünz. "Introducing the Speect speech synthesis platform". In The Blizzard Challenge 2010. ISCA: ISCA, 2010. http://dx.doi.org/10.21437/blizzard.2010-4.

4. Huckvale, Mark. "Speech synthesis, speech simulation and speech science". In 7th International Conference on Spoken Language Processing (ICSLP 2002). ISCA: ISCA, 2002. http://dx.doi.org/10.21437/icslp.2002-388.

5. Karlsson, Inger, and Lennart Neovius. "Speech synthesis experiments with the glove synthesiser". In 3rd European Conference on Speech Communication and Technology (Eurospeech 1993). ISCA: ISCA, 1993. http://dx.doi.org/10.21437/eurospeech.1993-213.

6. Valentini-Botinhao, Cassia, Xin Wang, Shinji Takaki, and Junichi Yamagishi. "Investigating RNN-based speech enhancement methods for noise-robust Text-to-Speech". In 9th ISCA Speech Synthesis Workshop. ISCA, 2016. http://dx.doi.org/10.21437/ssw.2016-24.

7. Álvarez, David, Santiago Pascual, and Antonio Bonafonte. "Problem-Agnostic Speech Embeddings for Multi-Speaker Text-to-Speech with SampleRNN". In 10th ISCA Speech Synthesis Workshop. ISCA: ISCA, 2019. http://dx.doi.org/10.21437/ssw.2019-7.

8. Sagisaka, Yoshinori, Takumi Yamashita, and Yoko Kokenawa. "Speech synthesis with attitude". In Speech Prosody 2004. ISCA: ISCA, 2004. http://dx.doi.org/10.21437/speechprosody.2004-91.

9. Krishna, Gautam, Co Tran, Yan Han, Mason Carnahan, and Ahmed H. Tewfik. "Speech Synthesis Using EEG". In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2020. http://dx.doi.org/10.1109/icassp40776.2020.9053340.

10. Black, Alan W., Heiga Zen, and Keiichi Tokuda. "Statistical Parametric Speech Synthesis". In 2007 IEEE International Conference on Acoustics, Speech and Signal Processing - ICASSP '07. IEEE, 2007. http://dx.doi.org/10.1109/icassp.2007.367298.

Reports on the topic "Speech synthesis"

1. Greenberg, Steven. Speech Synthesis Using Perceptually Motivated Features. Fort Belvoir, VA: Defense Technical Information Center, January 2012. http://dx.doi.org/10.21236/ada567193.

2. Ore, Brian M. Speech Recognition, Articulatory Feature Detection, and Speech Synthesis in Multiple Languages. Fort Belvoir, VA: Defense Technical Information Center, November 2009. http://dx.doi.org/10.21236/ada519140.

3. Johnson, W. L., Shrikanth Narayanan, Richard Whitney, Rajat Das, and Catherine LaBore. Limited Domain Synthesis of Expressive Military Speech for Animated Characters. Fort Belvoir, VA: Defense Technical Information Center, January 2002. http://dx.doi.org/10.21236/ada459392.

4. Johnson, W. L., S. Narayanan, R. Whitney, R. Das, M. Bulut, and C. LaBore. Limited Domain Synthesis of Expressive Military Speech for Animated Characters. Fort Belvoir, VA: Defense Technical Information Center, January 2002. http://dx.doi.org/10.21236/ada459395.

5. Gordon, Jane. Use of synthetic speech in tests of speech discrimination. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.5327.

6. Castan, Diego, Md Rahman, Sarah Bakst, Chris Cobo-Kroenke, Mitchell McLaren, Martin Graciarena, and Aaron Lawson. Speaker-targeted Synthetic Speech Detection. Office of Scientific and Technical Information (OSTI), February 2022. http://dx.doi.org/10.2172/1844063.

7. Mathew, Jijo K. Speed Enforcement in Work Zones and Synthesis on Cost-Benefit Assessment of Installing Speed Enforcement Cameras on INDOT Road Network. Purdue University, 2023. http://dx.doi.org/10.5703/1288284317639.

Abstract:
Work zone safety is a high priority for transportation agencies across the United States. High speeds in construction zones are a well-documented risk factor that increases the frequency and severity of crashes. It is therefore important to understand the extent and severity of high-speed vehicles in and around construction work zones. This study uses CV trajectory data to evaluate the impact of several work zone speed compliance measures, such as posted speed limit signs, radar-based speed feedback displays, and automated speed enforcement on controlling speeds inside the work zone. This study also presents several methodologies to characterize both the spatial and temporal effects of these control measures on driver behavior and vehicle speeds across the work zones.
8. Kostova, Maya. Synthesis of PSA Inhibitors as SPECT- and PET-Based Imaging Agents for Prostate Cancer. Fort Belvoir, VA: Defense Technical Information Center, June 2011. http://dx.doi.org/10.21236/ada548605.

9. Kabalka, G. W. Boron in nuclear medicine: New synthetic approaches to PET and SPECT. Office of Scientific and Technical Information (OSTI), September 1992. http://dx.doi.org/10.2172/7199090.

10. Kabalka, G. W. Boron in nuclear medicine: New synthetic approaches to PET, SPECT, and BNCT agents. Office of Scientific and Technical Information (OSTI), October 1989. http://dx.doi.org/10.2172/5516333.
We offer discounts on all premium plans for authors whose works are included in thematic literature collections. Contact us to get a unique promo code!
