Academic literature on the topic 'Spoken language'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Spoken language.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Spoken language"

1

Jerger, James. "Spoken Words versus Spoken Language." Journal of the American Academy of Audiology 17, no. 7 (July 2006): i–ii. http://dx.doi.org/10.1055/s-0040-1715680.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Winters, Margaret E., and Paul Meara. "Spoken Language." Modern Language Journal 72, no. 2 (1988): 220. http://dx.doi.org/10.2307/328250.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Cienki, Alan. "Spoken language usage events." Language and Cognition 7, no. 4 (November 2, 2015): 499–514. http://dx.doi.org/10.1017/langcog.2015.20.

Full text
Abstract:
As an explicitly usage-based model of language structure (Barlow & Kemmer, 2000), cognitive grammar draws on the notion of ‘usage events’ of language as the starting point from which linguistic units are schematized by language users. To be true to this claim for spoken languages, phenomena such as non-lexical sounds, intonation patterns, and certain uses of gesture should be taken into account to the degree to which they constitute the phonological pole of signs, paired in entrenched ways with conceptual content. Following through on this view of usage events also means realizing the gradable nature of signs. In addition, taking linguistic meaning as consisting of not only conceptual content but also a particular way of construing that content (Langacker, 2008, p. 43), we find that the forms of expression mentioned above play a prominent role in highlighting the ways in which speakers construe what they are talking about, in terms of different degrees of specificity, focusing, prominence, and perspective. Viewed in this way, usage events of spoken language are quite different in nature from those of written language, a point which highlights the need for differentiated accounts of the grammar of these two forms of expression taken by many languages.
APA, Harvard, Vancouver, ISO, and other styles
4

Siniscalchi, Sabato Marco, Jeremy Reed, Torbjørn Svendsen, and Chin-Hui Lee. "Universal attribute characterization of spoken languages for automatic spoken language recognition." Computer Speech & Language 27, no. 1 (January 2013): 209–27. http://dx.doi.org/10.1016/j.csl.2012.05.001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Makhoul, J., F. Jelinek, L. Rabiner, C. Weinstein, and V. Zue. "Spoken Language Systems." Annual Review of Computer Science 4, no. 1 (June 1990): 481–501. http://dx.doi.org/10.1146/annurev.cs.04.060190.002405.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Yule, George. "The Spoken Language." Annual Review of Applied Linguistics 10 (March 1989): 163–72. http://dx.doi.org/10.1017/s0267190500001276.

Full text
Abstract:
The investigation of aspects of the spoken language from a pedagogical perspective in recent years has tended, with a few exceptions, to be indirect and typically subordinate to considerations of other topics such as acquisition processes, cognitive constraints on learning, cross-cultural factors, and many others. At the same time, there has been a broad movement in language teaching away from organizing courses in terms of discrete skills such as speaking or listening and towards more holistic or integrated classroom experiences for learners. There is no reason to suspect that these trends will be reversed in the early 1990s and, with the exception of those specifically involved in remediation, language teachers will be less likely to find themselves being prompted to “teach the spoken language” than to “create learner-centered, acquisition-rich environments” which will have listening and speaking activities as incidental processes rather than as objectives. While acknowledging this trend, I would like to survey, albeit selectively, some of the areas where speaking and listening activities relevant to the classroom have been the subject of recent investigation and evaluate some of the claims concerning what might be beneficial or not. In the three sections which follow, I shall review current thinking on: 1) the spoken language as a formal system, focusing on pronunciation, 2) the spoken language as a medium of information transfer (that is, in its transactional function), and 3) the spoken language as a medium of interpersonal exchange (that is, in its interactional function).
APA, Harvard, Vancouver, ISO, and other styles
7

Walker, Marilyn A., and Owen C. Rambow. "Spoken language generation." Computer Speech & Language 16, no. 3-4 (July 2002): 273–81. http://dx.doi.org/10.1016/s0885-2308(02)00029-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Grotjahn, Rüdiger. "Testing spoken language." System 16, no. 3 (January 1988): 393–94. http://dx.doi.org/10.1016/0346-251x(88)90084-x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Wang, Ye-Yi, Li Deng, and A. Acero. "Spoken language understanding." IEEE Signal Processing Magazine 22, no. 5 (September 2005): 16–31. http://dx.doi.org/10.1109/msp.2005.1511821.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

De Mori, R., F. Bechet, D. Hakkani-Tur, M. McTear, G. Riccardi, and G. Tur. "Spoken language understanding." IEEE Signal Processing Magazine 25, no. 3 (May 2008): 50–58. http://dx.doi.org/10.1109/msp.2008.918413.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Spoken language"

1

Ryu, Koichiro, and Shigeki Matsubara. "Simultaneous Spoken Language Translation." Intelligent Media Integration Nagoya University / COE, 2006. http://hdl.handle.net/2237/10466.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Jones, J. M. "Iconicity and spoken language." Thesis, University College London (University of London), 2017. http://discovery.ucl.ac.uk/1559788/.

Full text
Abstract:
Contrary to longstanding assumptions about the arbitrariness of language, recent work has highlighted how much iconicity – i.e. non-arbitrariness – exists in language, in the form of not only onomatopoeia (bang, splash, meow), but also sound-symbolism, signed vocabulary, and (in a paralinguistic channel) mimetic gesture. But is this iconicity ornamental, or does it represent a systematic feature of language important in language acquisition, processing, and evolution? Scholars have begun to address this question, and this thesis adds to that effort, focusing on spoken language (including gesture). After introducing iconicity and reviewing the literature in the introduction, Chapter 2 reviews sound-shape iconicity (the “kiki-bouba” effect), and presents a norming study that verifies the phonetic parameters of the effect, suggesting that it likely involves multiple mechanisms. Chapter 3 shows that sound-shape iconicity helps participants learn in a model of vocabulary acquisition (cross-situational learning) by disambiguating reference. Variations on this experiment show that the round association may be marginally stronger than the spiky, but only barely, suggesting that representations of lip shape may be partly but not entirely responsible for the effect. Chapter 4 models language change using the iterated learning paradigm. It shows that iconicity (both sound-shape and motion) emerges from an arbitrary initial language over ten ‘generations’ of speakers. I argue this shows that psychological biases introduce systematic pressure towards iconicity over language change, and that moreover spoken iconicity can help bootstrap a system of communication. Chapter 5 shifts to children and gesture, attempting to answer whether children can take meaning from iconic action gestures. Results here were null, but definitive conclusions must await new experiments with higher statistical power. The conclusion sums up my findings and their significance, and points towards crucial research for the future.
APA, Harvard, Vancouver, ISO, and other styles
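The cross-situational learning paradigm mentioned in the abstract above can be illustrated with a toy simulation. The sketch below is purely hypothetical (it is not code from the thesis, and all names and parameters are invented); it shows how word–referent mappings that are ambiguous within any single exposure can nevertheless be recovered by accumulating co-occurrence statistics across exposures.

```python
# Toy cross-situational learner (illustrative only, not from the thesis):
# each trial presents several words alongside several candidate referents,
# and the learner accumulates word-referent co-occurrence counts across trials.
import random
from collections import defaultdict

def simulate(n_words=8, n_trials=60, scene_size=3, seed=0):
    rng = random.Random(seed)
    words = [f"w{i}" for i in range(n_words)]
    referents = [f"r{i}" for i in range(n_words)]      # true mapping: w_i -> r_i
    counts = defaultdict(lambda: defaultdict(int))

    for _ in range(n_trials):
        targets = rng.sample(range(n_words), scene_size)
        spoken = [words[i] for i in targets]
        # The scene contains the true referents, but within a single trial the
        # learner cannot tell which word goes with which object.
        scene = [referents[i] for i in targets]
        for w in spoken:
            for r in scene:
                counts[w][r] += 1

    # Guess each word's referent as its most frequently co-occurring object.
    guesses = {w: max(counts[w], key=counts[w].get) for w in words if counts[w]}
    correct = sum(guesses.get(words[i]) == referents[i] for i in range(n_words))
    return correct / n_words

if __name__ == "__main__":
    print(f"accuracy after learning: {simulate():.2f}")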
3

Dinarelli, Marco. "Spoken Language Understanding: from Spoken Utterances to Semantic Structures." Doctoral thesis, Università degli studi di Trento, 2010. https://hdl.handle.net/11572/367830.

Full text
Abstract:
In the past two decades there have been several projects on Spoken Language Understanding (SLU). In the early nineties the DARPA ATIS project aimed at providing a natural language interface to a travel information database. Following the ATIS project, the DARPA Communicator project aimed at building a spoken dialog system automatically providing information on flights and travel reservations. These two projects defined a first generation of conversational systems. In the late nineties the "How may I help you" project from AT&T, with Large Vocabulary Continuous Speech Recognition (LVCSR) and mixed-initiative spoken interfaces, started the second generation of conversational systems, which were later improved by integrating approaches based on machine learning techniques. The European-funded project LUNA aims at starting the third generation of spoken language interfaces. In the context of this project we have acquired the first Italian corpus of spontaneous speech from real users engaged in a problem-solving task, as opposed to previous projects. The corpus contains transcriptions and annotations based on a new multilevel protocol studied specifically for the goal of the LUNA project. The task of Spoken Language Understanding is the extraction of the meaning structure from spoken utterances in conversational systems. For this purpose, two main statistical learning paradigms have been proposed in the last decades: generative and discriminative models. The former are robust to over-fitting and less affected by noise, but they cannot easily integrate complex structures (e.g. trees). In contrast, the latter can easily integrate very complex features that can capture arbitrarily long-distance dependencies; on the other hand, they tend to over-fit the training data and so are less robust to annotation errors in the data needed to learn the model. This work presents an exhaustive study of Spoken Language Understanding models, putting particular focus on structural features used in a Joint Generative and Discriminative learning framework. This combines the strengths of both approaches while training segmentation and labeling models for SLU. Its main characteristic is the use of Kernel Methods to encode structured features in Support Vector Machines, which in turn re-rank the hypotheses produced by a first-step SLU module based either on Stochastic Finite State Transducers or Conditional Random Fields. Joint models based on transducers are also amenable to decoding word lattices generated by large-vocabulary speech recognizers. We show the benefit of our approach with comparative experiments among generative, discriminative and joint models on some of the most representative corpora of SLU, for a total of four corpora in four different languages: the ATIS corpus (English), the MEDIA corpus (French) and the LUNA Italian and Polish corpora. These also represent three different kinds of domain applications, i.e. informational, transactional and problem-solving domains. The results, although depending on the task and to some extent on the baseline of the first-step model, show that joint models improve on the state of the art in most cases, especially when a small training set is available.
APA, Harvard, Vancouver, ISO, and other styles
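The re-ranking architecture described in the abstract above can be illustrated with a deliberately simplified sketch. The code below is not the thesis' kernel-based system; it is a toy perceptron re-ranker over made-up feature vectors, intended only to show what "re-ranking the hypotheses produced by a first-step SLU module" means in practice. All feature dimensions and data are invented for the demonstration.

```python
# Minimal discriminative re-ranker (a sketch, not the thesis' kernel-based
# system): a perceptron that learns to score n-best SLU hypotheses so that
# the oracle-best hypothesis in each list is ranked first.
import numpy as np

def train_reranker(nbest_lists, oracle_idx, n_feats, epochs=10):
    """nbest_lists: list of (n_hyp, n_feats) arrays of hypothesis features.
    oracle_idx: index of the best (lowest-error) hypothesis in each list."""
    w = np.zeros(n_feats)
    for _ in range(epochs):
        for feats, gold in zip(nbest_lists, oracle_idx):
            pred = int(np.argmax(feats @ w))
            if pred != gold:                     # standard perceptron update
                w += feats[gold] - feats[pred]
    return w

def rerank(w, feats):
    return int(np.argmax(feats @ w))             # pick the highest-scoring hypothesis

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Fake data: 50 utterances, 5 hypotheses each, 12 features per hypothesis
    # (e.g. first-pass score, concept counts, simple structural cues).
    lists = [rng.normal(size=(5, 12)) for _ in range(50)]
    hidden = rng.normal(size=12)                 # unknown "true" scoring weights
    oracle = [int(np.argmax(f @ hidden)) for f in lists]
    w = train_reranker(lists, oracle, n_feats=12)
    acc = np.mean([rerank(w, f) == g for f, g in zip(lists, oracle)])
    print(f"re-ranking accuracy on training lists: {acc:.2f}")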
4

Dinarelli, Marco. "Spoken Language Understanding: from Spoken Utterances to Semantic Structures." Doctoral thesis, University of Trento, 2010. http://eprints-phd.biblio.unitn.it/280/1/PhD-Thesis-Dinarelli.pdf.

Full text
Abstract:
In the past two decades there have been several projects on Spoken Language Understanding (SLU). In the early nineties the DARPA ATIS project aimed at providing a natural language interface to a travel information database. Following the ATIS project, the DARPA Communicator project aimed at building a spoken dialog system automatically providing information on flights and travel reservations. These two projects defined a first generation of conversational systems. In the late nineties the "How may I help you" project from AT&T, with Large Vocabulary Continuous Speech Recognition (LVCSR) and mixed-initiative spoken interfaces, started the second generation of conversational systems, which were later improved by integrating approaches based on machine learning techniques. The European-funded project LUNA aims at starting the third generation of spoken language interfaces. In the context of this project we have acquired the first Italian corpus of spontaneous speech from real users engaged in a problem-solving task, as opposed to previous projects. The corpus contains transcriptions and annotations based on a new multilevel protocol studied specifically for the goal of the LUNA project. The task of Spoken Language Understanding is the extraction of the meaning structure from spoken utterances in conversational systems. For this purpose, two main statistical learning paradigms have been proposed in the last decades: generative and discriminative models. The former are robust to over-fitting and less affected by noise, but they cannot easily integrate complex structures (e.g. trees). In contrast, the latter can easily integrate very complex features that can capture arbitrarily long-distance dependencies; on the other hand, they tend to over-fit the training data and so are less robust to annotation errors in the data needed to learn the model. This work presents an exhaustive study of Spoken Language Understanding models, putting particular focus on structural features used in a Joint Generative and Discriminative learning framework. This combines the strengths of both approaches while training segmentation and labeling models for SLU. Its main characteristic is the use of Kernel Methods to encode structured features in Support Vector Machines, which in turn re-rank the hypotheses produced by a first-step SLU module based either on Stochastic Finite State Transducers or Conditional Random Fields. Joint models based on transducers are also amenable to decoding word lattices generated by large-vocabulary speech recognizers. We show the benefit of our approach with comparative experiments among generative, discriminative and joint models on some of the most representative corpora of SLU, for a total of four corpora in four different languages: the ATIS corpus (English), the MEDIA corpus (French) and the LUNA Italian and Polish corpora. These also represent three different kinds of domain applications, i.e. informational, transactional and problem-solving domains. The results, although depending on the task and to some extent on the baseline of the first-step model, show that joint models improve on the state of the art in most cases, especially when a small training set is available.
APA, Harvard, Vancouver, ISO, and other styles
5

Melander, Linda. "Language attitudes : Evaluational Reactions to Spoken Language." Thesis, Högskolan Dalarna, Engelska, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:du-2282.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Harwath, David F. (David Frank). "Learning spoken language through vision." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/118081.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.
Humans learn language at an early age by simply observing the world around them. Why can't computers do the same? Conventional automatic speech recognition systems have a long history and have recently made great strides thanks to the revival of deep neural networks. However, their reliance on highly supervised (and therefore expensive) training paradigms has restricted their application to the major languages of the world, accounting for a small fraction of the more than 7,000 human languages spoken worldwide. This thesis introduces datasets, models, and methodologies for grounding continuous speech signals at the raw waveform level to natural image scenes. The context and constraint provided by the visual information enables our models to efficiently learn linguistic units, such as words, along with their visual semantics. For example, our models are able to recognize instances of the spoken word "water" within spoken captions and associate them with image regions containing bodies of water. Further, we demonstrate that our models are capable of learning cross-lingual semantics by using the visual space as an interlingua to perform speech-to-speech retrieval between English and Hindi. In all cases, this learning is done without linguistic transcriptions or conventional speech recognition - yet we show that our methods achieve retrieval scores close to what is possible when transcriptions are available. This offers a promising new direction for speech processing that only requires speakers to provide narrations of what they see.
APA, Harvard, Vancouver, ISO, and other styles
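The grounding approach summarized in the abstract above pairs an audio encoder with an image encoder in a shared embedding space so that matching spoken captions and images score higher than mismatched ones. The sketch below is a hypothetical, heavily simplified illustration of that general idea (a two-branch embedder trained with a margin-based ranking loss); it is not the thesis' actual model, and every layer size and input dimension is an assumption made for demonstration only.

```python
# Illustrative sketch (not the thesis code): a two-branch embedding model that
# maps spoken captions and images into a shared space, trained with a
# triplet-style ranking loss so matching pairs score higher than mismatches.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpeechImageEmbedder(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        # Audio branch: 1-D convolutions over a spectrogram-like input.
        self.audio = nn.Sequential(
            nn.Conv1d(40, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(64, dim),
        )
        # Image branch: in practice a pretrained CNN; here a toy MLP over
        # precomputed image feature vectors.
        self.image = nn.Sequential(nn.Linear(2048, 256), nn.ReLU(), nn.Linear(256, dim))

    def forward(self, spec, img_feats):
        a = F.normalize(self.audio(spec), dim=-1)
        v = F.normalize(self.image(img_feats), dim=-1)
        return a, v

def ranking_loss(a, v, margin=0.2):
    sims = a @ v.t()                           # similarity of every caption to every image
    pos = sims.diag().unsqueeze(1)             # matched pairs lie on the diagonal
    loss = (margin + sims - pos).clamp(min=0)  # push mismatches below matches by a margin
    mask = 1.0 - torch.eye(sims.size(0))       # ignore the diagonal (true pairs)
    return (loss * mask).mean()

if __name__ == "__main__":
    model = SpeechImageEmbedder()
    spec = torch.randn(8, 40, 300)             # 8 spoken captions: 40 mel bands x 300 frames
    imgs = torch.randn(8, 2048)                # 8 matching image feature vectors
    a, v = model(spec, imgs)
    print(ranking_loss(a, v).item())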
7

Lainio, Jarmo. "Spoken Finnish in urban Sweden." Uppsala: Centre for Multiethnic Research, 1989. http://catalogue.bnf.fr/ark:/12148/cb35513801d.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Kanda, Naoyuki. "Open-ended Spoken Language Technology: Studies on Spoken Dialogue Systems and Spoken Document Retrieval Systems." 京都大学 (Kyoto University), 2014. http://hdl.handle.net/2433/188874.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Intilisano, Antonio Rosario. "Spoken dialog systems: from automatic speech recognition to spoken language understanding." Doctoral thesis, Università di Catania, 2016. http://hdl.handle.net/10761/3920.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Zámečník, Jiří. "Disfluency prediction in natural spoken language." Supervised by Christian Mair and John A. Nerbonne. Freiburg: Universität, 2019. http://d-nb.info/1238517714/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Spoken language"

1

Mariani, Joseph, ed. Spoken Language Processing. London, UK: ISTE, 2009. http://dx.doi.org/10.1002/9780470611180.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Tur, Gokhan, and Renato De Mori, eds. Spoken Language Understanding. Chichester, UK: John Wiley & Sons, Ltd, 2011. http://dx.doi.org/10.1002/9781119992691.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Nakagawa, Seiichi, Michio Okada, and Tatsuya Kawahara, eds. Spoken language systems. Tokyo: Ohmsha, Ltd., 2005.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Mariani, Joseph, ed. Spoken language processing. Hoboken, NJ: John Wiley and Sons, 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Garrison, Mary, Arpad P. Orbán, and Marco Mostert, eds. Spoken and Written Language. Turnhout: Brepols Publishers, 2013. http://dx.doi.org/10.1484/m.usml-eb.6.09070802050003050007070005.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Jorden, Eleanor Harz. Japanese: The spoken language. New Haven: Yale University Press, 1990.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Huo, Qiang, Bin Ma, Eng-Siong Chng, and Haizhou Li, eds. Chinese Spoken Language Processing. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11939993.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Rayner, Manny, ed. The spoken language translator. Cambridge: Cambridge University Press, 2000.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Noda, Mari, ed. Japanese, the spoken language. New Haven: Yale University Press, 1987.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Noda, Mari, ed. Japanese: The spoken language. New Haven, CT: Yale U.P., 1990.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Spoken language"

1

Borwick, Caroline N. "Spoken Language." In Dyslexia in Practice, 31–55. Boston, MA: Springer US, 2000. http://dx.doi.org/10.1007/978-1-4615-4169-1_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Lull, James. "Spoken Language." In Evolutionary Communication, 81–117. New York, NY: Routledge, 2019. http://dx.doi.org/10.4324/9780429456879-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Juffs, Alan. "Spoken Language." In Aspects of Language Development in an Intensive English Program, 170–93. Routledge Studies in Applied Linguistics. New York: Routledge, 2020. http://dx.doi.org/10.4324/9781315170190-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Ehsani, Farzad, Robert Frederking, Manny Rayner, and Pierrette Bouillon. "Spoken Language Translation." In Speech Technology, 167–93. New York, NY: Springer US, 2010. http://dx.doi.org/10.1007/978-0-387-73819-2_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Harper, Mary P., and Michael Maxwell. "Spoken Language Characterization." In Springer Handbook of Speech Processing, 797–810. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-49127-9_40.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

McTear, Michael, Zoraida Callejas, and David Griol. "Spoken Language Understanding." In The Conversational Interface, 161–85. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-32967-3_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Matuszek, Cynthia. "Grounding Spoken Language." In Sound and Robotics, 76–98. Boca Raton: Chapman and Hall/CRC, 2023. http://dx.doi.org/10.1201/9781003320470-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Paaß, Gerhard, and Dirk Hecker. "Understanding Spoken Language." In Artificial Intelligence, 239–79. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-50605-5_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Thorne, Sara. "Spoken English." In Mastering Advanced English Language, 193–228. London: Macmillan Education UK, 1997. http://dx.doi.org/10.1007/978-1-349-13645-2_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Rosset, Sophie, Olivier Galibert, and Lori Lamel. "Spoken Question Answering." In Spoken Language Understanding, 147–70. Chichester, UK: John Wiley & Sons, Ltd, 2011. http://dx.doi.org/10.1002/9781119992691.ch6.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Spoken language"

1

De Sisto, Mirella, Vincent Vandeghinste, Caro Brosens, Myriam Vermeerbergen, and Dimitar Shterionov. "XSL-HoReCo and GoSt-ParC-Sign: Two New Signed Language - Written Language Parallel Corpora." In CLARIN Annual Conference 2023. Linköping University Electronic Press, 2024. http://dx.doi.org/10.3384/ecp210002.

Full text
Abstract:
Developments in language technology targeting signed languages are lagging behind in comparison to the advances related to what is available for so-called spoken languages. This is partly due to the scarcity of good quality signed language data, including good quality parallel corpora of signed and spoken languages. This paper introduces two parallel corpora which aim at reducing the gap between signed and spoken-only language technology: The XSL Hotel Review Corpus (XSL-HoReCo) and the Gold Standard Parallel Corpus of Signed and Spoken Language (GoSt-ParC-Sign). Both corpora are available through the CLARIN infrastructure.
APA, Harvard, Vancouver, ISO, and other styles
2

Moore, Roger K. "Spoken language technology." In the 38th Annual Meeting. Morristown, NJ, USA: Association for Computational Linguistics, 2000. http://dx.doi.org/10.3115/1075218.1075221.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Makhoul, John. "Spoken language systems." In the workshop. Morristown, NJ, USA: Association for Computational Linguistics, 1989. http://dx.doi.org/10.3115/1075434.1075506.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Makhoul, John. "Spoken language systems." In the workshop. Morristown, NJ, USA: Association for Computational Linguistics, 1991. http://dx.doi.org/10.3115/112405.1138644.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Makhoul, John. "Spoken language systems." In the workshop. Morristown, NJ, USA: Association for Computational Linguistics, 1990. http://dx.doi.org/10.3115/116580.1138592.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Lunati, Jean-Michel, and Alexander I. Rudnicky. "Spoken language interfaces." In the SIGCHI conference. New York, New York, USA: ACM Press, 1991. http://dx.doi.org/10.1145/108844.108999.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Hayes, Philip J., Alexander G. Hauptmann, Jaime G. Carbonell, and Masaru Tomita. "Parsing spoken language." In the 11th conference. Morristown, NJ, USA: Association for Computational Linguistics, 1986. http://dx.doi.org/10.3115/991365.991537.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Reitmaier, Thomas, Dani Kalarikalayil Raju, Ondrej Klejch, Electra Wallington, Nina Markl, Jennifer Pearson, Matt Jones, Peter Bell, and Simon Robinson. "Cultivating Spoken Language Technologies for Unwritten Languages." In CHI '24: CHI Conference on Human Factors in Computing Systems. New York, NY, USA: ACM, 2024. http://dx.doi.org/10.1145/3613904.3642026.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Gris, Lucas Rafael Stefanel, and Arnaldo Candido Junior. "Automatic Spoken Language Identification using Convolutional Neural Networks." In Congresso Latino-Americano de Software Livre e Tecnologias Abertas. Sociedade Brasileira de Computação - SBC, 2020. http://dx.doi.org/10.5753/latinoware.2020.18603.

Full text
Abstract:
Automatic Spoken Language Identification systems classify the spoken language automatically and can be used in many tasks, for example, to support Automatic Speech Recognition or Video Recommendation systems. In this work, we propose an automatic language identification model obtained through a Convolutional Neural Network trained over audio spectrograms of the Portuguese, English and Spanish languages. The training audio was obtained from audiobooks and from various corpora for speech recognition systems, and was segmented into instances of five seconds each. We addressed the limitation of having few speakers in our dataset with simple data augmentation techniques, such as speed and pitch changes applied to the original instances, to increase the size of the dataset. The proposed model was optimized with a random hyperparameter search, which produced a final model able to identify the proposed languages with 83% accuracy on new, unseen test data drawn from different audio sources.
APA, Harvard, Vancouver, ISO, and other styles
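As a rough illustration of the kind of model described in the abstract above, the sketch below defines a small convolutional network over mel-spectrogram inputs for three-way language classification. It is a hypothetical minimal example, not the authors' implementation; the layer sizes, input dimensions, and class count are assumptions made only for demonstration.

```python
# Hypothetical sketch (not the authors' code): a small CNN that classifies
# short audio clips into Portuguese, English, or Spanish from
# mel-spectrogram inputs, in the spirit of the approach described above.
import torch
import torch.nn as nn

class LangIdCNN(nn.Module):
    def __init__(self, n_languages: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),            # pool over time and frequency
        )
        self.classifier = nn.Linear(64, n_languages)

    def forward(self, spectrogram: torch.Tensor) -> torch.Tensor:
        # spectrogram: (batch, 1, n_mels, n_frames), e.g. five-second clips
        x = self.features(spectrogram).flatten(1)
        return self.classifier(x)               # raw logits, one per language

if __name__ == "__main__":
    model = LangIdCNN()
    dummy = torch.randn(4, 1, 64, 500)          # 4 fake clips: 64 mel bands x 500 frames
    print(model(dummy).shape)                   # torch.Size([4, 3])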
10

Rudnicky, Alexander I., Michelle Sakamoto, and Joseph H. Polifroni. "Evaluating spoken language interaction." In the workshop. Morristown, NJ, USA: Association for Computational Linguistics, 1989. http://dx.doi.org/10.3115/1075434.1075459.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Spoken language"

1

Boisen, Sean, Yen-Lu Chow, Andrew Haas, Robert Ingria, Salim Roukos, and David Stallard. The BBN Spoken Language System. Fort Belvoir, VA: Defense Technical Information Center, January 1989. http://dx.doi.org/10.21236/ada457481.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Acero, Alejandro, and Richard M. Stern. Towards Environment-Independent Spoken Language Systems. Fort Belvoir, VA: Defense Technical Information Center, January 1990. http://dx.doi.org/10.21236/ada457727.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Schwartz, R., L. Nguyen, F. Kubala, G. Chou, G. Zavaliagkos, and J. Makhoul. On Using Written Language Training Data for Spoken Language Modeling. Fort Belvoir, VA: Defense Technical Information Center, January 1994. http://dx.doi.org/10.21236/ada460657.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Lunati, Jean-Michel, and Alexander I. Rudnicky. The Design of a Spoken Language Interface. Fort Belvoir, VA: Defense Technical Information Center, January 1990. http://dx.doi.org/10.21236/ada457799.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Henry, Paula P., Timothy J. Mermagen, and Tomasz R. Letowski. An Evaluation of a Spoken Language Interface. Fort Belvoir, VA: Defense Technical Information Center, April 2005. http://dx.doi.org/10.21236/ada432271.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Makhoul, J., and M. Bates. Usable, Real-Time, Interactive Spoken Language Systems. Fort Belvoir, VA: Defense Technical Information Center, September 1994. http://dx.doi.org/10.21236/ada286349.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Makhoul, John, and Madeleine Bates. Usable, Real-Time, Interactive Spoken Language Systems. Fort Belvoir, VA: Defense Technical Information Center, September 1992. http://dx.doi.org/10.21236/ada257998.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Zahorian, Stephen. Open-Source Multi-Language Audio Database for Spoken Language Processing Applications. Fort Belvoir, VA: Defense Technical Information Center, December 2012. http://dx.doi.org/10.21236/ada571008.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Hirschman, Lynette, Stephanie Seneff, David Goodine, and Michael Phillips. Integrating Syntax and Semantics into Spoken Language Understanding. Fort Belvoir, VA: Defense Technical Information Center, January 1991. http://dx.doi.org/10.21236/ada460560.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Bates, Madeleine, Dan Ellard, Pat Peterson, and Varda Shaked. Using Spoken Language to Facilitate Military Transportation Planning. Fort Belvoir, VA: Defense Technical Information Center, January 1991. http://dx.doi.org/10.21236/ada460640.

Full text
APA, Harvard, Vancouver, ISO, and other styles