Journal articles on the topic 'Distant speech'

To see the other types of publications on this topic, follow the link: Distant speech.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Distant speech.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Sorokin, V. N. "Distant Speech Detection." Acoustical Physics 69, no. 4 (August 2023): 565–73. http://dx.doi.org/10.1134/s1063771023600250.

2

Ashwini, Jaya Kumar, and Ramaswamy Kumaraswamy. "Single-Channel Speech Enhancement Techniques for Distant Speech Recognition." Journal of Intelligent Systems 22, no. 2 (June 1, 2013): 81–93. http://dx.doi.org/10.1515/jisys-2012-0051.

Abstract:
This article presents an overview of single-channel dereverberation methods suitable for distant speech recognition (DSR) applications. The dereverberation methods are classified mainly by the domain in which the speech signal captured by a distant microphone is enhanced. Many single-channel speech enhancement methods focus on either denoising or dereverberating the distorted speech signal; very few consider both noise and reverberation effects. Such methods are discussed under a multistage approach in this article. The article concludes with the hypothesis that methods that do not require an a priori reverberation impulse response are preferable under varying environmental conditions for DSR applications such as intelligent home and office environments, humanoid robots, and automobiles, rather than methods that require an a priori reverberation impulse response.
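One classic member of the single-channel denoising family surveyed here is spectral subtraction. The Python sketch below is a generic, minimal illustration of that idea only; it is not code from the article, and the frame size, noise-estimation window, and spectral floor are illustrative assumptions.

import numpy as np
from scipy.signal import stft, istft

def spectral_subtraction(noisy, fs, noise_seconds=0.5, nperseg=512, floor=0.05):
    # Estimate the noise magnitude from the leading (assumed noise-only) frames,
    # subtract it from every frame, and resynthesise with the noisy phase.
    _, _, Z = stft(noisy, fs=fs, nperseg=nperseg)
    mag, phase = np.abs(Z), np.angle(Z)
    noise_frames = int(noise_seconds * fs / (nperseg // 2)) + 1
    noise_mag = mag[:, :noise_frames].mean(axis=1, keepdims=True)
    clean_mag = np.maximum(mag - noise_mag, floor * mag)  # spectral floor limits musical noise
    _, enhanced = istft(clean_mag * np.exp(1j * phase), fs=fs, nperseg=nperseg)
    return enhanced

# Usage on a synthetic noisy signal (16 kHz, 3 s).
fs = 16000
rng = np.random.default_rng(0)
clean = np.sin(2 * np.pi * 220 * np.arange(3 * fs) / fs)
enhanced = spectral_subtraction(clean + 0.3 * rng.standard_normal(3 * fs), fs)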
3

Guerrero Flores, Cristina, Georgina Tryfou, and Maurizio Omologo. "Cepstral distance based channel selection for distant speech recognition." Computer Speech & Language 47 (January 2018): 314–32. http://dx.doi.org/10.1016/j.csl.2017.08.003.

4

Nematollahi, Mohammad Ali, and S. A. R. Al-Haddad. "Distant Speaker Recognition: An Overview." International Journal of Humanoid Robotics 13, no. 02 (May 25, 2016): 1550032. http://dx.doi.org/10.1142/s0219843615500322.

Abstract:
A distant speaker recognition (DSR) system assumes that the microphones are far away from the speaker’s mouth, and the position of the microphones can vary. Furthermore, various challenges and limitations in terms of coloration, ambient noise, and reverberation can make recognizing the speaker difficult. Although applying speech enhancement techniques can attenuate speech distortion components, it may also remove speaker-specific information and increase processing time in real-time applications. Currently, many efforts are being made to develop DSR into commercially viable systems. In this paper, state-of-the-art techniques in DSR, such as robust feature extraction, feature normalization, robust speaker modeling, model compensation, dereverberation, and score normalization, are discussed as ways to overcome the speech degradation components, i.e., reverberation and ambient noise. Performance results on DSR show that as the speaker-to-microphone distance increases, recognition rates decrease and the equal error rate (EER) increases. Finally, the paper concludes that applying robust features and robust speaker models that vary less with distance can improve DSR performance.
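The equal error rate (EER) mentioned above is the operating point at which the false acceptance and false rejection rates coincide. A minimal sketch of how it can be computed from verification scores is shown below; the scores are invented for illustration and this is not code or data from the paper.

import numpy as np

def equal_error_rate(genuine, impostor):
    # Sweep thresholds over all observed scores and return the point where false
    # acceptance (impostor >= threshold) meets false rejection (genuine < threshold).
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])
    frr = np.array([(genuine < t).mean() for t in thresholds])
    idx = np.argmin(np.abs(far - frr))
    return (far[idx] + frr[idx]) / 2.0, thresholds[idx]

genuine_scores = np.array([0.90, 0.80, 0.75, 0.60, 0.55])   # same-speaker trials
impostor_scores = np.array([0.50, 0.40, 0.35, 0.30, 0.20])  # different-speaker trials
eer, threshold = equal_error_rate(genuine_scores, impostor_scores)
print(f"EER = {eer:.3f} at threshold {threshold:.2f}")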
5

Swietojanski, Pawel, Arnab Ghoshal, and Steve Renals. "Convolutional Neural Networks for Distant Speech Recognition." IEEE Signal Processing Letters 21, no. 9 (September 2014): 1120–24. http://dx.doi.org/10.1109/lsp.2014.2325781.

6

Pardede, Hilman F., Asri R. Yuliani, and Rika Sustika. "Convolutional Neural Network and Feature Transformation for Distant Speech Recognition." International Journal of Electrical and Computer Engineering (IJECE) 8, no. 6 (December 1, 2018): 5381. http://dx.doi.org/10.11591/ijece.v8i6.pp5381-5388.

Abstract:
In many applications, speech recognition must operate in conditions where there is some distance between the speakers and the microphones. This is called distant speech recognition (DSR). In this condition, speech recognition must deal with reverberation. Nowadays, deep learning technologies are becoming the main technologies for speech recognition, and a Deep Neural Network (DNN) in hybrid with a Hidden Markov Model (HMM) is the commonly used architecture. However, this system is still not robust against reverberation. Previous studies used Convolutional Neural Networks (CNN), a variant of neural networks, to improve the robustness of speech recognition against noise. CNNs have a pooling property that finds local correlations between neighboring dimensions in the features; with this property, a CNN can be used for feature learning that emphasizes information in neighboring frames. In this study we use a CNN to deal with reverberation. We also propose applying feature transformation techniques, linear discriminant analysis (LDA) and maximum likelihood linear transformation (MLLT), to mel frequency cepstral coefficients (MFCC) before feeding them to the CNN. We argue that transforming the features could produce more discriminative features for the CNN and hence improve the robustness of speech recognition against reverberation. Our evaluations on the Meeting Recorder Digits (MRD) subset of the Aurora-5 database confirm that the LDA and MLLT transformations improve the robustness of speech recognition, giving a 20% relative error reduction compared to a standard DNN-based speech recognizer using the same number of hidden layers.
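To make the kind of architecture described above concrete, here is a toy PyTorch sketch of a CNN acoustic model operating on windows of MFCC frames, with pooling applied along the feature axis. The context length, layer sizes, and number of output HMM states are assumptions for illustration rather than the configuration used in the paper, and the LDA/MLLT transforms are assumed to have been applied to the features beforehand.

import torch
import torch.nn as nn

class ToyCNNAcousticModel(nn.Module):
    def __init__(self, n_mfcc=13, context=11, n_targets=2000):
        super().__init__()
        # Treat a (context frames x MFCC coefficients) window as a one-channel image.
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(kernel_size=(1, 2)),   # pool along the feature axis only
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * context * (n_mfcc // 2), 512), nn.ReLU(),
            nn.Linear(512, n_targets),          # posteriors over HMM states
        )

    def forward(self, x):                       # x: (batch, 1, context, n_mfcc)
        return self.classifier(self.features(x))

model = ToyCNNAcousticModel()
dummy = torch.randn(8, 1, 11, 13)               # 8 windows of 11 frames x 13 coefficients
print(model(dummy).shape)                       # torch.Size([8, 2000])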
7

Nakano, Alberto Yoshihiro, Seiichi Nakagawa, and Kazumasa Yamamoto. "Distant Speech Recognition Using a Microphone Array Network." IEICE Transactions on Information and Systems E93-D, no. 9 (2010): 2451–62. http://dx.doi.org/10.1587/transinf.e93.d.2451.

8

Ravanelli, Mirco, and Maurizio Omologo. "Automatic context window composition for distant speech recognition." Speech Communication 101 (July 2018): 34–44. http://dx.doi.org/10.1016/j.specom.2018.05.001.

9

Matassoni, Marco, Maurizio Omologo, Diego Giuliani, and Piergiorgio Svaizer. "Hidden Markov model training with contaminated speech material for distant-talking speech recognition." Computer Speech & Language 16, no. 2 (April 2002): 205–23. http://dx.doi.org/10.1006/csla.2002.0191.

10

Mustafaev, M. Sh, V. A. Vissarionov, E. M. Tarchokova, and S. A. Dyshekova. "Basics of complex rehabilitation of patients with speech disorders after uranoplasty." Medical alphabet, no. 3 (June 12, 2020): 40–42. http://dx.doi.org/10.33667/2078-5631-2020-3-40-42.

Abstract:
The result of the study is a theoretical overview of technologies for eliminating cleft palate and of their long-term results for the formation of correct speech. The process of speech formation, and the influence on it of the technical features of primary and reconstructive uranoplasty using a pharyngeal flap, was analyzed.
11

Rodomagoulakis, Isidoros, and Petros Maragos. "Improved Frequency Modulation Features for Multichannel Distant Speech Recognition." IEEE Journal of Selected Topics in Signal Processing 13, no. 4 (August 2019): 841–49. http://dx.doi.org/10.1109/jstsp.2019.2923372.

12

Вотінцева, М., Т. Клименко, and І. Суїма. "Development of Speech Competences in Terms of Distant Learning." Journal “Ukrainian sense”, no. 2 (December 1, 2023): 109–17. http://dx.doi.org/10.15421/462222.

Abstract:
Background. The formation of students’ abilities and skills concerning speech competence, such as speaking, in the conditions of online learning. The purpose. Traditional education has many means to correct various issues that arise in the learning process. The conditions of distance learning comprise two important areas, technological and methodological-didactic, which cannot be separated from each other. The purpose of this article is to analyze and describe the various elements of modern online learning, both in terms of technology and in terms of methodological and didactic means. This goal involves solving the following tasks: studying the methodological and language-didactic means that promote the development of speaking competence in foreign language classes during distance learning, and exploring the benefits of different online means. Methods. Descriptive method. Results. On the one hand, properly formed communication skills, thinking, directed motivation, as well as the topic and content of tasks, play an important role in the development of speech competencies. On the other hand, the combination of synchronous and asynchronous means allows the learning process to be implemented on the internet in full. Discussion. The study describes a system of tools and methods for learning a foreign language, as well as stages of learning to speak. This allows some elements of the didactic base to be summarized. Knowing the stages of preparation for different types of communication, as well as the focus, situational or communicative, one can use a variety of methods and exercises to achieve free communication. Focused assistance during study is possible when some aspects of psychological development, personal knowledge of a particular student, and the communicative situation are taken into account; it can give students confidence and bring more satisfaction with the results. It is shown that, with the help of modern means of synchronous and asynchronous communication, the distance learning system is able to work in a constant, normal rhythm. Thus, the distance learning process is carried out using a combination of synchronous and asynchronous means, maintains flexibility and convenience, and improves the quality and efficiency of both methods of communication. The application and combination of traditional knowledge and modern technologies make successful distance learning possible in teaching a foreign language. The further development of our research lies in the field of the linguodidactic description of means of developing listening competence.
13

Van Vliet, Dennis. "Familiar lessons from a distant land." Hearing Journal 63, no. 8 (August 2010): 52. http://dx.doi.org/10.1097/01.hj.0000387932.90475.ec.

14

Cornell, Samuele, Maurizio Omologo, Stefano Squartini, and Emmanuel Vincent. "Overlapped Speech Detection and speaker counting using distant microphone arrays." Computer Speech & Language 72 (March 2022): 101306. http://dx.doi.org/10.1016/j.csl.2021.101306.

15

Bryant, Gregory A., Pierre Liénard, and H. Clark Barrett. "Recognizing infant-directed speech across distant cultures: Evidence from Africa." Journal of Evolutionary Psychology 10, no. 2 (June 2012): 47–59. http://dx.doi.org/10.1556/jep.10.2012.2.1.

16

Khoubrouy, Soudeh A., and John H. L. Hansen. "Microphone Array Processing Strategies for Distant-Based Automatic Speech Recognition." IEEE Signal Processing Letters 23, no. 10 (October 2016): 1344–48. http://dx.doi.org/10.1109/lsp.2016.2592683.

17

Himawan, Ivan, Petr Motlicek, David Imseng, and Sridha Sridharan. "Feature mapping using far-field microphones for distant speech recognition." Speech Communication 83 (October 2016): 1–9. http://dx.doi.org/10.1016/j.specom.2016.07.003.

18

Takiguchi, T., S. Nakamura, and K. Shikano. "HMM-separation-based speech recognition for a distant moving speaker." IEEE Transactions on Speech and Audio Processing 9, no. 2 (2001): 127–40. http://dx.doi.org/10.1109/89.902279.

19

Oo, Zeyan, Longbiao Wang, Khomdet Phapatanaburi, Masahiro Iwahashi, Seiichi Nakagawa, and Jianwu Dang. "Phase and reverberation aware DNN for distant-talking speech enhancement." Multimedia Tools and Applications 77, no. 14 (February 20, 2018): 18865–80. http://dx.doi.org/10.1007/s11042-018-5686-1.

20

Ely, Richard, and Allyssa McCabe. "Remembered voices." Journal of Child Language 20, no. 3 (October 1993): 671–96. http://dx.doi.org/10.1017/s0305000900008539.

Abstract:
The speech children spontaneously quote was examined in two studies. In Study 1, a corpus of personal narratives from 96 children aged 4;0 to 9;0 was analysed; Study 2 investigated reported speech in 25 younger children aged 1;2 to 5;2 interacting with their parents. In both studies, the frequency of reported speech increased with age. Direct quotation was more common than indirect or summarized quotation at all ages. In Study 1, children quoted themselves more frequently than any other speaker, and their mothers more frequently than their fathers. Directives were the most commonly reported speech act from the distant past in both older (Study 1) and younger (Study 2) children. In Study 1, girls used reported speech more frequently than did boys, and their quotations were more direct in form than were those of boys.
21

Basystiuk, Oleh, and Nataliia Melnykova. "Multimodal Speech Recognition Based on Audio and Text Data." Herald of Khmelnytskyi National University. Technical sciences 313, no. 5 (October 27, 2022): 22–25. http://dx.doi.org/10.31891/2307-5732-2022-313-5-22-25.

Abstract:
Systems for machine translation of texts from one language to another simulate the work of a human translator. Their performance depends on the ability to understand the grammar rules of the language. In translation, the basic units are not individual words but word combinations or phraseological units that express different concepts; only by using them can more complex ideas be expressed in the translated text. The main feature of machine translation is that the input and output have different lengths, and recurrent neural networks provide the ability to work with inputs and outputs of different lengths. A recurrent neural network (RNN) is a class of artificial neural network that has connections between nodes; here, a connection refers to a connection from a more distant node to a less distant node. The presence of such connections allows the RNN to remember and reproduce the entire sequence of reactions to one stimulus. From the point of view of programming, such networks are analogous to cyclic execution, and from the point of view of the system, they are equivalent to a state machine. RNNs are commonly used to process word sequences in natural language processing, where a hidden Markov model (HMM) and an N-gram language model have traditionally been used to process a sequence of words. Deep learning has completely changed the approach to machine translation: researchers in the deep learning field have created simple machine-learning solutions that outperform the best expert systems. This paper reviews the main features of machine translation based on recurrent neural networks. The advantages of RNN-based systems using the sequence-to-sequence model over statistical translation systems are also highlighted. Two machine translation systems based on the sequence-to-sequence model were constructed using the Keras and PyTorch machine learning libraries; based on the obtained results, the libraries were analyzed and their performance compared.
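As a concrete illustration of the sequence-to-sequence idea discussed above, here is a minimal GRU encoder-decoder skeleton in PyTorch (one of the two libraries the article compares). The vocabulary sizes, embedding and hidden dimensions, and the use of teacher forcing are placeholder assumptions, not the authors' configuration.

import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, src_vocab=5000, tgt_vocab=5000, emb=128, hidden=256):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb)
        self.encoder = nn.GRU(emb, hidden, batch_first=True)
        self.decoder = nn.GRU(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, tgt_vocab)

    def forward(self, src_ids, tgt_ids):
        _, h = self.encoder(self.src_emb(src_ids))            # encode source into a state
        dec_out, _ = self.decoder(self.tgt_emb(tgt_ids), h)   # decode with teacher forcing
        return self.out(dec_out)                              # (batch, tgt_len, tgt_vocab)

model = Seq2Seq()
src = torch.randint(0, 5000, (4, 12))   # 4 source sentences, 12 tokens each
tgt = torch.randint(0, 5000, (4, 15))   # 4 target sentences, 15 tokens each
print(model(src, tgt).shape)            # torch.Size([4, 15, 5000])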
22

Chernyak, Bronya R., Ann R. Bradlow, Joseph Keshet, and Matthew Goldrick. "A perceptual similarity space for speech based on self-supervised speech representations." Journal of the Acoustical Society of America 155, no. 6 (June 1, 2024): 3915–29. http://dx.doi.org/10.1121/10.0026358.

Abstract:
Speech recognition by both humans and machines frequently fails in non-optimal yet common situations. For example, word recognition error rates for second-language (L2) speech can be high, especially under conditions involving background noise. At the same time, both human and machine speech recognition sometimes shows remarkable robustness against signal- and noise-related degradation. Which acoustic features of speech explain this substantial variation in intelligibility? Current approaches align speech to text to extract a small set of pre-defined spectro-temporal properties from specific sounds in particular words. However, variation in these properties leaves much cross-talker variation in intelligibility unexplained. We examine an alternative approach utilizing a perceptual similarity space acquired using self-supervised learning. This approach encodes distinctions between speech samples without requiring pre-defined acoustic features or speech-to-text alignment. We show that L2 English speech samples are less tightly clustered in the space than L1 samples reflecting variability in English proficiency among L2 talkers. Critically, distances in this similarity space are perceptually meaningful: L1 English listeners have lower recognition accuracy for L2 speakers whose speech is more distant in the space from L1 speech. These results indicate that perceptual similarity may form the basis for an entirely new speech and language analysis approach.
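The distance computation underlying the abstract's claim can be sketched very simply: given fixed-size embeddings of speech samples (assumed to have been extracted beforehand with some self-supervised model), measure how far each L2 sample lies from the cloud of L1 samples. The following is entirely illustrative, with random stand-in embeddings rather than the authors' representations or data.

import numpy as np

rng = np.random.default_rng(1)
l1_embeddings = rng.normal(0.0, 1.0, size=(200, 768))  # stand-in L1 sample embeddings
l2_embeddings = rng.normal(0.5, 1.5, size=(50, 768))   # stand-in L2 sample embeddings

l1_centroid = l1_embeddings.mean(axis=0)
dist_to_l1 = np.linalg.norm(l2_embeddings - l1_centroid, axis=1)

# Under the reported finding, samples lying farther from L1 speech would be the
# ones that L1 listeners recognize less accurately.
print("mean L2-to-L1 distance:", dist_to_l1.mean())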
23

Roeber, A. G., and Larry D. Eldridge. "A Distant Heritage: The Growth of Free Speech in Early America." American Historical Review 100, no. 2 (April 1995): 577. http://dx.doi.org/10.2307/2169143.

24

Anderson, David A., and Larry D. Eldridge. "A Distant Heritage: The Growth of Free Speech in Early America." Journal of American History 81, no. 4 (March 1995): 1672. http://dx.doi.org/10.2307/2081666.

25

Greiner, Jim, and Larry D. Eldridge. "A Distant Heritage: The Growth of Free Speech in Early America." Michigan Law Review 92, no. 6 (May 1994): 1693. http://dx.doi.org/10.2307/1289602.

26

Sterling, David L. "A Distant Heritage: The Growth of Free Speech in Early America." History: Reviews of New Books 23, no. 1 (July 1994): 16. http://dx.doi.org/10.1080/03612759.1994.9950880.

27

Kakino, Naoto, Takahiro Hukumori, Masanori Morise, and Takanobu Nishiura. "Distant-talking speech enhancement based on spectrum restoring with phoneme labels." Journal of the Acoustical Society of America 131, no. 4 (April 2012): 3446. http://dx.doi.org/10.1121/1.4708984.

28

Pertilä, Pasi, and Joonas Nikunen. "Distant speech separation using predicted time–frequency masks from spatial features." Speech Communication 68 (April 2015): 97–106. http://dx.doi.org/10.1016/j.specom.2015.01.006.

29

Smith, Jeffery A., and Larry D. Eldridge. "A Distant Heritage: The Growth of Free Speech in Early America." William and Mary Quarterly 51, no. 3 (July 1994): 557. http://dx.doi.org/10.2307/2947448.

30

Zwisler, Joshua James, and César Alejandro Cuellar Cedano. "Sociolinguistic proximity in animal-directed speech." Lenguaje 48, no. 2 (July 1, 2020): 354–68. http://dx.doi.org/10.25100/lenguaje.v48i2.7484.

Abstract:
This article explores how sociolinguistic proximity i.e. different varieties of socially close relationships enacted through speech interaction, is formed with animals in Ibagué, Colombia. It is common to hear that people speak with pets using ‘baby-talk’ or as friends. However, there are a range of registers/stances available to construct different social relationships through speech. Data regarding talk with pets and non-pet domestic animals from a self-report survey with a sample of 500 in the regional Colombian city of Ibagué was analysed using an experimental scale of sociolinguistic proximity devised by the authors. The results show that a variety of different relationships are created in speech with both pets and non-pets and that these relationships range from socially close to distant. Factors such as gender, education and owning a pet all affect the sociolinguistic proximity enacted through linguistic interaction with animals, with gender being the most influential of the variables.
31

Wesarg, Thomas, Susan Arndt, Konstantin Wiebe, Frauke Schmid, Annika Huber, Hans E. Mülder, Roland Laszig, Antje Aschendorff, and Iva Speck. "Speech Recognition in Noise in Single-Sided Deaf Cochlear Implant Recipients Using Digital Remote Wireless Microphone Technology." Journal of the American Academy of Audiology 30, no. 07 (July 2019): 607–18. http://dx.doi.org/10.3766/jaaa.17131.

Abstract:
Previous research in cochlear implant (CI) recipients with bilateral severe-to-profound sensorineural hearing loss showed improvements in speech recognition in noise using remote wireless microphone systems. However, to our knowledge, no previous studies have addressed the benefit of these systems in CI recipients with single-sided deafness. The objective of this study was to evaluate the potential improvement in speech recognition in noise for distant speakers in single-sided deaf (SSD) CI recipients obtained using the digital remote wireless microphone system, Roger. In addition, we evaluated the potential benefit in normal hearing (NH) participants gained by applying this system. Speech recognition in noise for a distant speaker in different conditions with and without Roger was evaluated with a two-way repeated-measures design in each group, SSD CI recipients, and NH participants. Post hoc analyses were conducted using pairwise comparison t-tests with Bonferroni correction. Eleven adult SSD participants aided with CIs and eleven adult NH participants were included in this study. All participants were assessed in 15 test conditions (5 listening conditions × 3 noise levels) each. The listening conditions for SSD CI recipients included the following: (I) only NH ear and CI turned off, (II) NH ear and CI (turned on), (III) NH ear and CI with Roger 14, (IV) NH ear with Roger Focus and CI, and (V) NH ear with Roger Focus and CI with Roger 14. For the NH participants, five corresponding listening conditions were chosen: (I) only better ear and weaker ear masked, (II) both ears, (III) better ear and weaker ear with Roger Focus, (IV) better ear with Roger Focus and weaker ear, and (V) both ears with Roger Focus. The speech level was fixed at 65 dB(A) at 1 meter from the speech-presenting loudspeaker, yielding a speech level of 56.5 dB(A) at the recipient's head. Noise levels were 55, 65, and 75 dB(A). Digitally altered noise recorded in school classrooms was used as competing noise. Speech recognition was measured in percent correct using the Oldenburg sentence test. In SSD CI recipients, a significant improvement in speech recognition was found for all listening conditions with Roger (III, IV, and V) versus all no-Roger conditions (I and II) at the higher noise levels (65 and 75 dB[A]). NH participants significantly benefited from the application of Roger in noise for higher levels, too. In both groups, no significant difference was detected between any of the different listening conditions at 55 dB(A) competing noise. There was also no significant difference between any of the Roger conditions III, IV, and V across all noise levels. The application of the advanced remote wireless microphone system, Roger, in SSD CI recipients provided significant benefits in speech recognition for distant speakers at higher noise levels. In NH participants, the application of Roger also produced a significant benefit in speech recognition in noise.
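The post hoc analysis described above (pairwise paired t-tests with a Bonferroni correction across listening conditions) can be sketched as follows. The condition names and scores are synthetic placeholders, not the study's data.

from itertools import combinations
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(2)
n_subjects = 11
conditions = {                                   # percent-correct scores per participant
    "I (NH ear only)": rng.normal(40, 8, n_subjects),
    "II (NH ear + CI)": rng.normal(45, 8, n_subjects),
    "III (CI with Roger 14)": rng.normal(70, 8, n_subjects),
}

pairs = list(combinations(conditions, 2))
alpha_corrected = 0.05 / len(pairs)              # Bonferroni-corrected significance level
for a, b in pairs:
    t, p = ttest_rel(conditions[a], conditions[b])
    print(f"{a} vs {b}: t={t:.2f}, p={p:.4f}, significant={p < alpha_corrected}")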
32

Rhody, Lisa Marie. "Beyond Darwinian Distance: Situating Distant Reading in a Feminist Ut Pictura Poesis Tradition." PMLA/Publications of the Modern Language Association of America 132, no. 3 (May 2017): 659–67. http://dx.doi.org/10.1632/pmla.2017.132.3.659.

Abstract:
The challenge facing “distant reading” has less to do with Franco Moretti's assertion that we must learn “how not to read” than with his implication that looking should take the place of reading. Not reading is the dirty open secret of all literary critics-there will always be that book (or those books) that you should have read, have not read, and probably won't read. Moretti is not endorsing a disinterest in reading either, like that reported in the 2004 National Endowment for the Arts' Reading at Risk, which notes that less than half the adult public in the United States read a work of literature in 2002 (3). In his “little pact with the devil” that substitutes patterns of devices, themes, tropes, styles, and parts of speech for thousands or millions of texts at a time, the devil is the image: trees, networks, and maps-spatial rather than verbal forms representing a textual corpus that disappears from view. In what follows, I consider Distant Reading as participating in the ut pictura poesis tradition-that is, the Western tradition of viewing poetry and painting as sister arts-to explain how ingrained our resistances are to Moretti's formalist approach. I turn to more recent interart examples to suggest interpretive alternatives to formalism for distant-reading methods.
33

Calandruccio, Lauren, Susanne Brouwer, Kristin J. Van Engen, Sumitrajit Dhar, and Ann R. Bradlow. "Masking Release Due to Linguistic and Phonetic Dissimilarity Between the Target and Masker Speech." American Journal of Audiology 22, no. 1 (June 2013): 157–64. http://dx.doi.org/10.1044/1059-0889(2013/12-0072).

Abstract:
Purpose: To investigate masking release for speech maskers for linguistically and phonetically close (English and Dutch) and distant (English and Mandarin) language pairs. Method: Thirty-two monolingual speakers of English with normal audiometric thresholds participated in the study. Data are reported for an English sentence recognition task in English and for Dutch and Mandarin competing speech maskers (Experiment 1) and noise maskers (Experiment 2) that were matched either to the long-term average speech spectra or to the temporal modulations of the speech maskers from Experiment 1. Results: Listener performance increased as the target-to-masker linguistic distance increased (English-in-English < English-in-Dutch < English-in-Mandarin). Conclusion: Spectral differences between maskers can account for some, but not all, of the variation in performance between maskers; however, temporal differences did not seem to play a significant role.
34

Bloom, Sara. "New technology can link hearing care providers with distant patients." Hearing Journal 52, no. 7 (July 1999): 21–22. http://dx.doi.org/10.1097/00025572-199907000-00003.

35

Szczygielski, Krzysztof. "Some remarks on Benito Mussolini’s speech Roma antica sul mare." Gdańskie Studia Prawnicze, no. 3(43)/2019 (November 4, 2019): 177–88. http://dx.doi.org/10.26881/gsp.2019.3.14.

Abstract:
The purpose of this article is to analyse Benito Mussolini's speech titled Roma antica sul mare, given on 5 October 1926 in Perugia. The fascist leader referred on many occasions to issues associated with the sea: making Italy the main power on the Mediterranean Sea was a very important element of the Italian leader's policy. His recollection of glorious events from the distant past was an excellent way to create a vision of the restoration of the Roman Empire.
36

Haristiani, Nuria, and Asti Sopiyanti. "Analisis Kontrastif Tindak Tutur Meminta Maaf Dalam Bahasa Jepang Dan Bahasa Sunda." Jurnal Lingua Idea 10, no. 2 (December 26, 2019): 131. http://dx.doi.org/10.20884/1.jli.2019.10.2.2159.

Abstract:
In everyday life, when someone has done wrong, that person usually performs an apology speech act to show his or her responsibility or remorse. However, the apology speech act may be carried out using different strategies, influenced by several factors such as cultural background, social values, social status, gender, or even the depth of remorse felt by the wrongdoer. This study aims to determine the level of awareness of apology by Japanese Native Speakers (JNS) and Sundanese Native Speakers (SNS) in an apology situation. The apology speech act strategies used in the same situation toward five interlocutors, namely 1) distant lecturer (DT), 2) close lecturer (DA), 3) distant senior (KT), 4) close senior (KA), and 5) friend (T), were also examined. A Likert scale questionnaire was used to find out the level of awareness of apology, while a Discourse Completion Test (DCT) was conducted to examine the apology speech act strategies used by the seventy-four (74) JNS and seventy-eight (78) SNS who participated in the data collection of this study. The results show that the awareness of apology of both JNS and SNS differed according to the interlocutor. As for the apology strategies used, both JNS and SNS mainly used the expression of apology, acknowledgment of responsibility, and offer of compensation. However, there is one striking difference in apologizing strategy between JNS and SNS: SNS frequently used address terms, while JNS barely used address terms with their interlocutors.
37

Кириллова, Наталья Николаевна. "Studying the Oral Genres of I. Andronikov as a Method of Rhetorical Education in Distant Learning." Вестник Тверского государственного университета. Серия: Филология, no. 3(70) (September 21, 2021): 205–12. http://dx.doi.org/10.26456/vtfilol/2021.3.205.

Abstract:
The article addresses the effectiveness of educational mechanisms in the context of distant learning. To overcome communication barriers, it proposes the method of studying exemplars of oratorical speech, taking as an example students' study of I. Andronikov's oral performance "First Time on the Stage" («Первый раз на эстраде»). The article demonstrates the effectiveness of this technique in mastering rhetorical skills.
38

Ren, Bo, Longbiao Wang, Liang Lu, Yuma Ueda, and Atsuhiko Kai. "Combination of bottleneck feature extraction and dereverberation for distant-talking speech recognition." Multimedia Tools and Applications 75, no. 9 (September 3, 2015): 5093–108. http://dx.doi.org/10.1007/s11042-015-2849-1.

39

Ferreira, Alberto E. A., and Diogo Alarcão. "Real-time blind source separation system with applications to distant speech recognition." Applied Acoustics 113 (December 2016): 170–84. http://dx.doi.org/10.1016/j.apacoust.2016.06.024.

40

Araki, Shoko, Masakiyo Fujimoto, Takuya Yoshioka, Marc Delcroix, Miquel Espi, and Tomohiro Nakatani. "Deep Learning Based Distant-talking Speech Processing in Real-world Sound Environments." NTT Technical Review 13, no. 11 (November 2015): 19–24. http://dx.doi.org/10.53829/ntr201511fa4.

41

Lolja, Saimir A. "The Preservation of Albanian Tongue (Shqip) Since the Beginning." Advances in Language and Literary Studies 10, no. 6 (December 31, 2019): 152. http://dx.doi.org/10.7575/aiac.alls.v.10n.6p.152.

Abstract:
In the beginning, humans had a tongue (gjuhën, Shqip). Then, they could or couldn’t let go of the tongue (len…gjuhën, Shqip). Albanian natural tongue (Shqip) implies the use of the tongue in the mouth for articulating (shqiptoj, Shqip) words. The eternity of Shqip (Speech) is in its words that are wordy clauses, phrases, parts of or short sentences. The Speech (Shqip) and other lan…guages (len…gjuhët, Shqip) carry these kinds of wordy clauses to prove the permanency of Shqip. The Speech (Shqip) had local and schooled forms in distant antiquity. Therefore, various types of writing appear now. Since the schooled style was in general use and carried later in the papers and books of lan…guages (len…gjuhëve, Shqip), it has been preserved unchanged. Its pieces, no matter how small are, every time get easily read and quickly understood using contemporary Shqip. The Speech (Shqip) was the first stratum in the Euro-Mediterranean area. It was the Speech (Shqip) of Nephilims (Nëfillimëve). The Shqip of today can be used to read and understand both words in other idioms and ancient writings.
42

Denbaum-Restrepo, Nofiya, and Falcon Restrepo-Ramos. "A Sociolinguistic Examination of the Dual Usted in Medellín, Colombia: Evidence from Semi-spontaneous Speech and Implicit Language Attitudes." Hispania 107, no. 1 (March 2024): 87–105. http://dx.doi.org/10.1353/hpn.2024.a921463.

Abstract:
The system of second person singular forms of address (2PS) in Medellín, Colombia is tripartite, consisting of tú, vos, and usted, while also including the existence of a dual usted. The current study compares usage of the intimate usted versus the distant usted with data from an oral discourse completion task (DCT) while also investigating listeners' implicit language attitudes toward the usage of usted utilizing the matched guise technique. A sample of speakers from Medellín (N=72) stratified by age, sex, and education level completed an oral DCT, and a subset (N=38) also completed a matched guise task. For usage, it was found that intimate usted occurred 36.7% of the time with distant usted occurring 63.3% of the time. A multivariate mixed effects regression selected all fixed effects as significant. Specifically, intimate usted was favored by commands, statements, indirect commands, male interlocutors, same age interlocutors, male speakers, younger speakers, and speakers with high school education. These findings show that although the default usted usage is with distant interlocutors, the intimate usted is extremely vital, especially between men and with younger speakers. The current study also demonstrates the importance of comparing usage data and attitudinal data.
43

Zou, Yuexian, Zhaoyi Liu, and Christian Ritz. "Enhancing Target Speech Based on Nonlinear Soft Masking Using a Single Acoustic Vector Sensor." Applied Sciences 8, no. 9 (August 23, 2018): 1436. http://dx.doi.org/10.3390/app8091436.

Abstract:
Enhancing speech captured by distant microphones is a challenging task. In this study, we investigate the multichannel signal properties of the single acoustic vector sensor (AVS) to obtain the inter-sensor data ratio (ISDR) model in the time-frequency (TF) domain. Then, the monotone functions describing the relationship between the ISDRs and the direction of arrival (DOA) of the target speaker are derived. For the target speech enhancement (SE) task, the DOA of the target speaker is given, and the ISDRs are calculated. Hence, the TF components dominated by the target speech are extracted with high probability using the established monotone functions, and then, a nonlinear soft mask of the target speech is generated. As a result, a masking-based speech enhancement method is developed, which is termed the AVS-SMASK method. Extensive experiments with simulated data and recorded data have been carried out to validate the effectiveness of our proposed AVS-SMASK method in terms of suppressing spatial speech interferences and reducing the adverse impact of the additive background noise while maintaining less speech distortion. Moreover, our AVS-SMASK method is computationally inexpensive, and the AVS is of a small physical size. These merits are favorable to many applications, such as robot auditory systems.
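The masking step at the heart of this approach can be illustrated generically: multiply the mixture's STFT by a soft mask in [0, 1] and resynthesise. In the sketch below the mask is a random placeholder, because the paper derives it from ISDR and DOA statistics of the acoustic vector sensor that are not reproduced here; only the surrounding mechanics are shown.

import numpy as np
from scipy.signal import stft, istft

fs = 16000
rng = np.random.default_rng(3)
mixture = rng.standard_normal(2 * fs)            # stand-in 2-second mixture signal

_, _, Z = stft(mixture, fs=fs, nperseg=512)
soft_mask = rng.uniform(0.0, 1.0, size=Z.shape)  # placeholder for the ISDR-based soft mask
_, target_estimate = istft(soft_mask * Z, fs=fs, nperseg=512)
print(target_estimate.shape)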
44

Pyne, Yvette, Yik Ming Wong, Haishuo Fang, and Edwin Simpson. "Analysis of ‘One in a Million’ primary care consultation conversations using natural language processing." BMJ Health & Care Informatics Online 30, no. 1 (April 2023): e100659. http://dx.doi.org/10.1136/bmjhci-2022-100659.

Abstract:
Background: Modern patient electronic health records form a core part of primary care; they contain both clinical codes and free text entered by the clinician. Natural language processing (NLP) could be employed to generate these records through ‘listening’ to a consultation conversation. Objectives: This study develops and assesses several text classifiers for identifying clinical codes for primary care consultations based on the doctor–patient conversation. We evaluate the possibility of training classifiers using medical code descriptions, and the benefits of processing transcribed speech from patients as well as doctors. The study also highlights steps for improving future classifiers. Methods: Using verbatim transcripts of 239 primary care consultation conversations (the ‘One in a Million’ dataset) and novel additional datasets for distant supervision, we trained NLP classifiers (naïve Bayes, support vector machine, nearest centroid, a conventional BERT classifier and few-shot BERT approaches) to identify the International Classification of Primary Care-2 clinical codes associated with each consultation. Results: Of all models tested, a fine-tuned BERT classifier was the best performer. Distant supervision improved the model’s performance (F1 score over 16 classes) from 0.45 with conventional supervision with 191 labelled transcripts to 0.51. Incorporating patients’ speech in addition to clinician’s speech increased the BERT classifier’s performance from 0.45 to 0.55 F1 (p=0.01, paired bootstrap test). Conclusions: Our findings demonstrate that NLP classifiers can be trained to identify clinical area(s) being discussed in a primary care consultation from audio transcriptions; this could represent an important step towards a smart digital assistant in the consultation room.
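One of the simpler baselines the study compares (a linear classifier over bag-of-words-style features) can be sketched with scikit-learn as below. The example transcripts and labels are invented placeholders, not the 'One in a Million' data, and the label set is illustrative rather than the ICPC-2 codes used in the paper.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

train_texts = [
    "patient reports a persistent cough and some wheeze",
    "knee pain after running, worse on stairs",
    "feeling low in mood and not sleeping well",
    "chest feels tight when walking uphill",
]
train_labels = ["respiratory", "musculoskeletal", "psychological", "respiratory"]

# TF-IDF features feeding a linear support vector machine.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(train_texts, train_labels)
print(clf.predict(["short of breath with a cough for two weeks"]))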
45

Namazie, Ali, Sassan Alavi, Thomas C. Calcaterra, Elliot Abemayor, and Keith E. Blackwell. "Adenoid Cystic Carcinoma of the Base of the Tongue." Annals of Otology, Rhinology & Laryngology 110, no. 3 (March 2001): 248–53. http://dx.doi.org/10.1177/000348940111000308.

Abstract:
A retrospective review of 14 patients with adenoid cystic carcinoma of the tongue treated between 1955 and 1997 was performed. Treatment consisted of surgery (n = 2), radiotherapy (n = 2), chemotherapy (n = 1), or combination therapy (n = 9). The 2-, 5-, and 10-year absolute survival rates were 92%, 79%, and 63%, respectively. Seventy-five percent of the patients who died of cancer succumbed to distant metastases. However, long-term survival was common despite a high incidence of local and distant recurrence. The presence of positive surgical margins, the incidence of regional metastases, the incidence of perineural invasion, the initial stage of disease, and the eventual development of locoregional recurrence and distant metastases did not significantly alter the survival rate. Surgical extirpation combined with postoperative radiotherapy is advocated for the treatment of adenoid cystic carcinoma of the tongue. Given the indolent nature of this disease process, surgery should be directed toward conservation of speech and swallowing function.
46

Borges, Mariana. "The Distant Shores of Freedom e as contradições do lugar de fala do oprimido." Alea: Estudos Neolatinos 24, no. 2 (August 2022): 320–26. http://dx.doi.org/10.1590/1517-106x/202224218.

47

Holt, Yolanda F., and Tessa Bent. "Socio-ethnic expectations of race in speech perception." Journal of the Acoustical Society of America 151, no. 4 (April 2022): A98. http://dx.doi.org/10.1121/10.0010777.

Abstract:
The social construct of race, as a group of individuals to be visually or auditorily discriminated between, from, or against, is a human construction, much like speech itself. This talk will share the analyses of a speech production experiment and a speech perception experiment designed to evaluate how familiar and unfamiliar listeners parse the auditory spectral information contained in words to assign a speech token to a racial category. Familiar North Carolina listeners (n = 28) and unfamiliar Indiana listeners (n = 44) heard ten hVd words produced by three male and three female Black and White talkers from two geographically distant dialect regions in east and west North Carolina (n = 24). Listener groups assigned most speech tokens to the correct racial category with greater than chance accuracy. Statistically significant differences were noted in listener use of socio-ethnically aligned vowels: the mid and low front vowels as produced in the words hid, heyd, and head, the back vowel as produced in whod, and the diphthongs as produced in hoyd, hide, and howed. Results will be discussed in the context of socio-ethnic talker group membership and patterns of both listener accuracy and listener error in assigning talkers to racial groups.
48

Cai, Chengkai, Kenta Iwai, and Takanobu Nishiura. "Speech Enhancement Based on Two-Stage Processing with Deep Neural Network for Laser Doppler Vibrometer." Applied Sciences 13, no. 3 (February 2, 2023): 1958. http://dx.doi.org/10.3390/app13031958.

Abstract:
The development of distant-talk measurement systems has been attracting attention since they can be applied to many situations such as security and disaster relief. One such system that uses a device called a laser Doppler vibrometer (LDV) to acquire sound by measuring an object’s vibration caused by the sound source has been proposed. Different from traditional microphones, an LDV can pick up the target sound from a distance even in a noisy environment. However, the acquired sounds are greatly distorted due to the object’s shape and frequency response. Due to the particularity of the degradation of observed speech, conventional methods cannot be effectively applied to LDVs. We propose two speech enhancement methods that are based on two-stage processing with deep neural networks for LDVs. With the first proposed method, the amplitude spectrum of the observed speech is first restored. The phase difference between the observed and clean speech is then estimated using the restored amplitude spectrum. With the other proposed method, the low-frequency components of the observed speech are first restored. The high-frequency components are then estimated by the restored low-frequency components. The evaluation results indicate that they improved the observed speech in sound quality, deterioration degree, and intelligibility.
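The first proposed method's two-stage structure (restore the amplitude spectrum, then estimate a phase correction from it) can be sketched as a pair of chained networks. The layer sizes below are arbitrary placeholders and the networks are untrained; this is a structural sketch, not the authors' architecture.

import torch
import torch.nn as nn

n_bins = 257                                     # e.g. one frame of a 512-point FFT

stage1_amplitude = nn.Sequential(                # observed amplitude -> restored amplitude
    nn.Linear(n_bins, 512), nn.ReLU(), nn.Linear(512, n_bins), nn.ReLU())
stage2_phase = nn.Sequential(                    # restored amplitude -> phase difference
    nn.Linear(n_bins, 512), nn.ReLU(), nn.Linear(512, n_bins))

observed_amp = torch.rand(16, n_bins)            # 16 frames of LDV amplitude spectra
restored_amp = stage1_amplitude(observed_amp)
phase_diff = stage2_phase(restored_amp)
# The enhanced complex spectrum would combine restored_amp with the observed phase
# shifted by phase_diff before an inverse STFT (omitted here).
print(restored_amp.shape, phase_diff.shape)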
49

Wang, Longbiao, Norihide Kitaoka, and Seiichi Nakagawa. "Distant-Talking Speech Recognition Based on Spectral Subtraction by Multi-Channel LMS Algorithm." IEICE Transactions on Information and Systems E94-D, no. 3 (2011): 659–67. http://dx.doi.org/10.1587/transinf.e94.d.659.

50

Pan, Xiang, Yue Bao, Yiting Zhu, Huangyu Dai, and Jiangfan Zhang. "Deconvolved Conventional Beamforming and Adaptive Cubature Kalman Filter Based Distant Speech Perception System." IEEE Access 8 (2020): 187948–58. http://dx.doi.org/10.1109/access.2020.3030814.

