Academic literature on the topic 'Speech synthesis'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Speech synthesis.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Speech synthesis"

1. Hirose, Yoshifumi. "Speech synthesis apparatus and speech synthesis method." Journal of the Acoustical Society of America 128, no. 1 (2010): 515. http://dx.doi.org/10.1121/1.3472332.

2. Murthy, Savitha, and Dinkar Sitaram. "Low Resource Kannada Speech Recognition Using Lattice Rescoring and Speech Synthesis." Indian Journal of Science and Technology 16, no. 4 (January 29, 2023): 282–91. http://dx.doi.org/10.17485/ijst/v16i4.2371.

3. Silverman, Kim E. A. "Speech synthesis." Journal of the Acoustical Society of America 90, no. 6 (December 1991): 3391. http://dx.doi.org/10.1121/1.401356.

4. Takagi, Tohru. "Speech Synthesis." Journal of the Institute of Television Engineers of Japan 46, no. 2 (1992): 163–71. http://dx.doi.org/10.3169/itej1978.46.163.

5. Kuusisto, Finn. "Speech synthesis." XRDS: Crossroads, The ACM Magazine for Students 21, no. 1 (October 14, 2014): 63. http://dx.doi.org/10.1145/2667637.

6. Kamai, Takahiro, and Yumiko Kato. "Speech Synthesis Method and Speech Synthesizer." Journal of the Acoustical Society of America 129, no. 4 (2011): 2356. http://dx.doi.org/10.1121/1.3582212.

7. Kagoshima, Takehiko, and Masami Akamine. "Speech synthesis method and speech synthesizer." Journal of the Acoustical Society of America 125, no. 6 (2009): 4108. http://dx.doi.org/10.1121/1.3155494.

8. Suckle, Leonard I. "Speech synthesis system." Journal of the Acoustical Society of America 84, no. 4 (October 1988): 1580. http://dx.doi.org/10.1121/1.397209.

9. Sharman, Richard Anthony. "Speech synthesis system." Journal of the Acoustical Society of America 103, no. 6 (June 1998): 3136. http://dx.doi.org/10.1121/1.423023.

10. Kagoshima, Takehiko, and Masami Akamine. "Speech synthesis method." Journal of the Acoustical Society of America 124, no. 5 (2008): 2678. http://dx.doi.org/10.1121/1.3020583.

Dissertations / Theses on the topic "Speech synthesis"

1. Donovan, R. E. "Trainable speech synthesis." Thesis, University of Cambridge, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.598598.
Abstract:
This thesis is concerned with the synthesis of speech using trainable systems. The research it describes was conducted with two principal aims: to build a hidden Markov model (HMM) based speech synthesis system which could synthesise very high quality speech; and to ensure that all the parameters used by the system were obtained through training. The motivation behind the first of these aims was to determine if the HMM techniques which have been applied so successfully in recent years to the problem of automatic speech recognition could achieve a similar level of success in the field of speech synthesis. The motivation behind the second aim was to construct a system that would be very flexible with respect to changing voices, or even languages. A synthesis system was developed which used the clustered states of a set of decision-tree state-clustered HMMs as its synthesis units. The synthesis parameters for each clustered state were obtained completely automatically through training on a one hour single-speaker continuous-speech database. During synthesis the required utterance, specified as a string of words of known phonetic pronunciation, was generated as a sequence of these clustered states. Initially, each clustered state was associated with a single linear prediction (LP) vector, and LP synthesis used to generate the sequence of vectors corresponding to the state sequence required. Numerous shortcomings were identified in this system, and these were addressed through improvements to its transcription, clustering, and segmentation capabilities. The LP synthesis scheme was replaced by a TD-PSOLA scheme which synthesised speech by concatenating waveform segments selected to represent each clustered state.
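The linear prediction (LP) synthesis scheme the abstract describes, one LP vector per clustered state, later replaced by TD-PSOLA, amounts to running an all-pole filter over an excitation signal. As a minimal illustrative sketch only (not Donovan's actual implementation; the function name and conventions are assumptions):

```python
def lp_synthesize(excitation, lp_coeffs):
    """All-pole LP synthesis: y[n] = e[n] + sum_k a[k] * y[n - k].

    excitation: source signal (e.g. a pulse train for voiced speech)
    lp_coeffs:  predictor coefficients a[1..p], here standing in for
                the single LP vector associated with one clustered state
    """
    out = []
    for n, e in enumerate(excitation):
        y = e
        for k, a in enumerate(lp_coeffs, start=1):
            if n - k >= 0:
                y += a * out[n - k]  # feed back previous output samples
        out.append(y)
    return out
```

In a state-based synthesizer, a filter like this would be run segment by segment with each state's coefficient vector, which is what makes one LP vector per clustered state a workable, if limited, representation.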
2. Greenwood, Andrew Richard. "Articulatory speech synthesis." Thesis, University of Liverpool, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.386773.
3. Tsukanova, Anastasiia. "Articulatory speech synthesis." Electronic Thesis or Diss., Université de Lorraine, 2019. http://www.theses.fr/2019LORR0166.
Abstract:
The thesis is set in the domain of articulatory speech synthesis and consists of three major parts: the first two are dedicated to the development of two articulatory speech synthesizers, and the third addresses how the two can be related to each other. The first synthesizer results from a rule-based approach that aimed to provide comprehensive control over the articulators (the jaw, the tongue, the lips, the velum, the larynx and the epiglottis). This approach used a dataset of static mid-sagittal magnetic resonance imaging (MRI) captures showing blocked articulation of French vowels and a set of consonant-vowel syllables; that dataset was encoded with a PCA-based vocal tract model. The system then comprised several components: using the recorded articulatory configurations to drive a rule-based articulatory speech synthesizer as a source of target positions to attain (the main contribution of this first part); adjusting the obtained vocal tract shapes from the phonetic perspective; and running an acoustic simulation unit to obtain the sound. The results of this synthesis were evaluated visually, acoustically and perceptually, and the problems encountered were broken down by their origin: the dataset, its modeling, the algorithm for managing the vocal tract shapes, their translation to the area functions, and the acoustic simulation. We concluded that, among our test examples, the articulatory strategies for vowels and stops were most correct, followed by those of nasals and fricatives. The second approach started from a baseline deep feed-forward neural network speech synthesizer trained with the standard recipe of Merlin on the audio recorded during real-time MRI (RT-MRI) acquisitions: denoised speech in French (still containing a considerable amount of noise from the MRI machine) and force-aligned state labels encoding phonetic and linguistic information.
This synthesizer was augmented with eight parameters representing articulatory information (the lip opening and protrusion, and the distances between the tongue and the velum, between the velum and the pharyngeal wall, and between the tongue and the pharyngeal wall) that were automatically extracted from the captures and aligned with the audio signal and the linguistic specification. The jointly synthesized speech and articulatory sequences were evaluated objectively with dynamic time warping (DTW) distance, mean mel-cepstrum distortion (MCD), BAP (band aperiodicity prediction error), and three measures for F0: RMSE (root mean square error), CORR (correlation coefficient) and V/UV (frame-level voiced/unvoiced error). The consistency of the articulatory parameters with the phonetic labels was analyzed as well. I concluded that the generated articulatory parameter sequences matched the original ones acceptably closely, despite struggling more at attaining contact between the articulators, and that the addition of articulatory parameters did not hinder the original acoustic model. The two approaches above are linked through the use of two different kinds of MRI speech data. This motivated a search of the real-time data for the kind of coarticulation-aware targets that were available in the static case. To compare static and real-time MRI captures, the measures of structural similarity, Earth mover's distance, and SIFT were utilized; having analyzed these measures for validity and consistency, I qualitatively and quantitatively studied their temporal behavior, interpreted it and analyzed the identified similarities. I concluded that SIFT and structural similarity did capture some articulatory information and that their behavior, overall, validated the static MRI dataset. [...]
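The three F0 measures named in the abstract (RMSE, CORR, V/UV) are standard objective scores in parametric synthesis evaluation. A minimal sketch of how they are commonly computed, assuming the usual convention that 0 Hz marks an unvoiced frame (the function name and details here are illustrative, not taken from the thesis):

```python
import math

def f0_measures(ref, gen):
    """RMSE, Pearson correlation, and V/UV error between two F0 tracks.

    ref, gen: per-frame F0 values in Hz; 0.0 marks an unvoiced frame
    (a common convention; the exact setup in the thesis may differ).
    """
    assert len(ref) == len(gen) and len(ref) > 0
    # RMSE and correlation are computed over frames voiced in BOTH tracks
    both = [(r, g) for r, g in zip(ref, gen) if r > 0 and g > 0]
    rmse = math.sqrt(sum((r - g) ** 2 for r, g in both) / len(both))
    mr = sum(r for r, _ in both) / len(both)
    mg = sum(g for _, g in both) / len(both)
    num = sum((r - mr) * (g - mg) for r, g in both)
    den = math.sqrt(sum((r - mr) ** 2 for r, _ in both) *
                    sum((g - mg) ** 2 for _, g in both))
    corr = num / den
    # V/UV error: fraction of frames whose voicing decision disagrees
    vuv = sum((r > 0) != (g > 0) for r, g in zip(ref, gen)) / len(ref)
    return rmse, corr, vuv
```

Restricting RMSE and CORR to frames voiced in both tracks is what makes the separate V/UV score necessary: it is the only one of the three that penalizes voicing-decision mistakes.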
4. Sun, Felix (Felix W.). "Speech Representation Models for Speech Synthesis and Multimodal Speech Recognition." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/106378.
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 59-63).
The field of speech recognition has seen steady advances over the last two decades, leading to the accurate, real-time recognition systems available on mobile phones today. In this thesis, I apply speech modeling techniques developed for recognition to two other speech problems: speech synthesis and multimodal speech recognition with images. In both problems, there is a need to learn a relationship between speech sounds and another source of information. For speech synthesis, I show that using a neural network acoustic model results in a synthesizer that is more tolerant of noisy training data than previous work. For multimodal recognition, I show how information from images can be effectively integrated into the recognition search framework, resulting in improved accuracy when image data is available.
5. Morton, K. "Speech production and synthesis." Thesis, University of Essex, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.377930.
6. Jin, Yi-Xuan. "A High Speed Digital Implementation of LPC Speech Synthesizer Using the TMS320." Thesis, The University of Arizona, 1985. http://hdl.handle.net/10150/275309.
7. Wong, Chun-ho Eddy. "Reliability of rating synthesized hypernasal speech signals in connected speech and vowels." Thesis, The University of Hong Kong, 2007. http://lookup.lib.hku.hk/lookup/bib/B4200617X.
Abstract:
Thesis (B.Sc)--University of Hong Kong, 2007.
"A dissertation submitted in partial fulfilment of the requirements for the Bachelor of Science (Speech and Hearing Sciences), The University of Hong Kong, June 30, 2007." Includes bibliographical references (p. 28-30). Also available in print.
8. Peng, Antai. "Speech expression modeling and synthesis." Diss., Georgia Institute of Technology, 1996. http://hdl.handle.net/1853/13560.
9. Brierton, Richard A. "Variable frame-rate speech synthesis." Thesis, University of Liverpool, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.357363.

10. Klompje, Gideon. "A parametric monophone speech synthesis system." Thesis, 2006. http://hdl.handle.net/10019/561.

Books on the topic "Speech synthesis"

1. Keller, Eric, and COST 258 (European Cooperation in the Field of Scientific and Technical Research), eds. Improvements in Speech Synthesis: COST 258: The Naturalness of Synthetic Speech. Chichester, West Sussex: J. Wiley, 2002.

2. Holmes, J. N. Speech synthesis and recognition. 2nd ed. New York: Taylor & Francis, 2001.

3. Keller, E., G. Bailly, A. Monaghan, J. Terken, and M. Huckvale, eds. Improvements in Speech Synthesis. Chichester, UK: John Wiley & Sons, Ltd, 2001. http://dx.doi.org/10.1002/0470845945.

4. van Santen, Jan P. H., Joseph P. Olive, Richard W. Sproat, and Julia Hirschberg, eds. Progress in Speech Synthesis. New York, NY: Springer New York, 1997. http://dx.doi.org/10.1007/978-1-4612-1894-4.

5. Holmes, Wendy, ed. Speech synthesis and recognition. 2nd ed. London: Taylor & Francis, 2002.

6. Taylor, Paul. Text-to-speech synthesis. Cambridge, UK: Cambridge University Press, 2009.

7. Kleijn, W. B., and K. K. Paliwal, eds. Speech coding and synthesis. Amsterdam: Elsevier, 1995.

8. International Resource Development, Inc., ed. Speech recognition & voice synthesis. Norwalk, Conn.: International Resource Development, 1985.

9. Van Santen, Jan P. H., ed. Progress in speech synthesis. New York: Springer, 1997.

Book chapters on the topic "Speech synthesis"

1. Scully, Celia. "Articulatory Synthesis." In Speech Production and Speech Modelling, 151–86. Dordrecht: Springer Netherlands, 1990. http://dx.doi.org/10.1007/978-94-009-2037-8_7.

2. Schroeder, Manfred R. "Speech Synthesis." In Computer Speech, 85–90. Berlin, Heidelberg: Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/978-3-662-03861-1_5.

3. Schroeder, Manfred R. "Speech Synthesis." In Computer Speech, 129–34. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-662-06384-2_6.

4. Owens, F. J. "Speech Synthesis." In Signal Processing of Speech, 88–121. London: Macmillan Education UK, 1993. http://dx.doi.org/10.1007/978-1-349-22599-6_5.

5. Dutoit, Thierry, and Baris Bozkurt. "Speech Synthesis." In Handbook of Signal Processing in Acoustics, 557–85. New York, NY: Springer New York, 2008. http://dx.doi.org/10.1007/978-0-387-30441-0_30.

6. Sinha, Priyabrata. "Speech Synthesis." In Speech Processing in Embedded Systems, 157–64. Boston, MA: Springer US, 2009. http://dx.doi.org/10.1007/978-0-387-75581-6_11.

7. Hinterleitner, Florian. "Speech Synthesis." In Quality of Synthetic Speech, 5–18. Singapore: Springer Singapore, 2017. http://dx.doi.org/10.1007/978-981-10-3734-4_2.

8. Kurematsu, Akira, and Tsuyoshi Morimoto. "Speech Synthesis." In Automatic Speech Translation, 71–85. London: CRC Press, 2023. http://dx.doi.org/10.1201/9780429333385-4.

9. Beckman, Mary E. "Speech Models and Speech Synthesis." In Progress in Speech Synthesis, 185–209. New York, NY: Springer New York, 1997. http://dx.doi.org/10.1007/978-1-4612-1894-4_15.

10. Suendermann, David, Harald Höge, and Alan Black. "Challenges in Speech Synthesis." In Speech Technology, 19–32. New York, NY: Springer US, 2010. http://dx.doi.org/10.1007/978-0-387-73819-2_2.

Conference papers on the topic "Speech synthesis"

1. Taylor, P. "Speech synthesis." In IEE Colloquium Speech and Language Engineering - State of the Art. IEE, 1998. http://dx.doi.org/10.1049/ic:19980957.

2. Breen, A. P. "Speech Synthesis." In Autumn Conference 1998. Institute of Acoustics, 2024. http://dx.doi.org/10.25144/18952.

3. Louw, Johannes A., Daniel R. van Niekerk, and Georg I. Schlünz. "Introducing the Speect speech synthesis platform." In The Blizzard Challenge 2010. ISCA, 2010. http://dx.doi.org/10.21437/blizzard.2010-4.

4. Huckvale, Mark. "Speech synthesis, speech simulation and speech science." In 7th International Conference on Spoken Language Processing (ICSLP 2002). ISCA, 2002. http://dx.doi.org/10.21437/icslp.2002-388.

5. Karlsson, Inger, and Lennart Neovius. "Speech synthesis experiments with the glove synthesiser." In 3rd European Conference on Speech Communication and Technology (Eurospeech 1993). ISCA, 1993. http://dx.doi.org/10.21437/eurospeech.1993-213.

6. Valentini-Botinhao, Cassia, Xin Wang, Shinji Takaki, and Junichi Yamagishi. "Investigating RNN-based speech enhancement methods for noise-robust Text-to-Speech." In 9th ISCA Speech Synthesis Workshop. ISCA, 2016. http://dx.doi.org/10.21437/ssw.2016-24.

7. Álvarez, David, Santiago Pascual, and Antonio Bonafonte. "Problem-Agnostic Speech Embeddings for Multi-Speaker Text-to-Speech with SampleRNN." In 10th ISCA Speech Synthesis Workshop. ISCA, 2019. http://dx.doi.org/10.21437/ssw.2019-7.

8. Sagisaka, Yoshinori, Takumi Yamashita, and Yoko Kokenawa. "Speech synthesis with attitude." In Speech Prosody 2004. ISCA, 2004. http://dx.doi.org/10.21437/speechprosody.2004-91.

9. Krishna, Gautam, Co Tran, Yan Han, Mason Carnahan, and Ahmed H. Tewfik. "Speech Synthesis Using EEG." In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2020. http://dx.doi.org/10.1109/icassp40776.2020.9053340.

10. Black, Alan W., Heiga Zen, and Keiichi Tokuda. "Statistical Parametric Speech Synthesis." In 2007 IEEE International Conference on Acoustics, Speech and Signal Processing - ICASSP '07. IEEE, 2007. http://dx.doi.org/10.1109/icassp.2007.367298.

Reports on the topic "Speech synthesis"

1. Greenberg, Steven. Speech Synthesis Using Perceptually Motivated Features. Fort Belvoir, VA: Defense Technical Information Center, January 2012. http://dx.doi.org/10.21236/ada567193.

2. Ore, Brian M. Speech Recognition, Articulatory Feature Detection, and Speech Synthesis in Multiple Languages. Fort Belvoir, VA: Defense Technical Information Center, November 2009. http://dx.doi.org/10.21236/ada519140.

3. Johnson, W. L., Shrikanth Narayanan, Richard Whitney, Rajat Das, and Catherine LaBore. Limited Domain Synthesis of Expressive Military Speech for Animated Characters. Fort Belvoir, VA: Defense Technical Information Center, January 2002. http://dx.doi.org/10.21236/ada459392.

4. Johnson, W. L., S. Narayanan, R. Whitney, R. Das, M. Bulut, and C. LaBore. Limited Domain Synthesis of Expressive Military Speech for Animated Characters. Fort Belvoir, VA: Defense Technical Information Center, January 2002. http://dx.doi.org/10.21236/ada459395.

5. Gordon, Jane. Use of synthetic speech in tests of speech discrimination. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.5327.

6. Castan, Diego, Md Rahman, Sarah Bakst, Chris Cobo-Kroenke, Mitchell McLaren, Martin Graciarena, and Aaron Lawson. Speaker-Targeted Synthetic Speech Detection. Office of Scientific and Technical Information (OSTI), February 2022. http://dx.doi.org/10.2172/1844063.

7. Mathew, Jijo K. Speed Enforcement in Work Zones and Synthesis on Cost-Benefit Assessment of Installing Speed Enforcement Cameras on INDOT Road Network. Purdue University, 2023. http://dx.doi.org/10.5703/1288284317639.
Abstract:
Work zone safety is a high priority for transportation agencies across the United States. High speeds in construction zones are a well-documented risk factor that increases the frequency and severity of crashes. It is therefore important to understand the extent and severity of high-speed vehicles in and around construction work zones. This study uses CV trajectory data to evaluate the impact of several work zone speed compliance measures, such as posted speed limit signs, radar-based speed feedback displays, and automated speed enforcement on controlling speeds inside the work zone. This study also presents several methodologies to characterize both the spatial and temporal effects of these control measures on driver behavior and vehicle speeds across the work zones.
8. Kostova, Maya. Synthesis of PSA Inhibitors as SPECT- and PET-Based Imaging Agents for Prostate Cancer. Fort Belvoir, VA: Defense Technical Information Center, June 2011. http://dx.doi.org/10.21236/ada548605.

9. Kabalka, G. W. Boron in nuclear medicine: New synthetic approaches to PET and SPECT. Office of Scientific and Technical Information (OSTI), September 1992. http://dx.doi.org/10.2172/7199090.

10. Kabalka, G. W. Boron in nuclear medicine: New synthetic approaches to PET, SPECT, and BNCT agents. Office of Scientific and Technical Information (OSTI), October 1989. http://dx.doi.org/10.2172/5516333.