Journal articles on the topic "Vocal recognition"


Consult the 50 best scholarly journal articles on the topic "Vocal recognition".

Next to each work in the list there is an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Guo, Taiyang, Zhi Zhu, Shunsuke Kidani, and Masashi Unoki. "Contribution of Common Modulation Spectral Features to Vocal-Emotion Recognition of Noise-Vocoded Speech in Noisy Reverberant Environments." Applied Sciences 12, no. 19 (October 4, 2022): 9979. http://dx.doi.org/10.3390/app12199979.

Abstract:
In one study on vocal emotion recognition using noise-vocoded speech (NVS), the high similarities between modulation spectral features (MSFs) and the results of vocal-emotion-recognition experiments indicated that MSFs contribute to vocal emotion recognition in a clean environment (with no noise and no reverberation). Other studies also clarified that vocal emotion recognition using NVS is not affected by noisy reverberant environments (signal-to-noise ratio is greater than 10 dB and reverberation time is less than 1.0 s). However, the contribution of MSFs to vocal emotion recognition in noisy reverberant environments is still unclear. We aimed to clarify whether MSFs can be used to explain the vocal-emotion-recognition results in noisy reverberant environments. We analyzed the results of vocal-emotion-recognition experiments and used an auditory-based modulation filterbank to calculate the modulation spectrograms of NVS. We then extracted ten MSFs as higher-order statistics of modulation spectrograms. As shown from the relationship between MSFs and vocal-emotion-recognition results, except for extremely high noisy reverberant environments, there were high similarities between MSFs and the vocal emotion recognition results in noisy reverberant environments, which indicates that MSFs can be used to explain such results in noisy reverberant environments. We also found that there are two common MSFs (MSKTk (modulation spectral kurtosis) and MSTLk (modulation spectral tilt)) that contribute to vocal emotion recognition in all daily environments.
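
As a rough, self-contained illustration of the kind of feature pipeline this abstract describes, the sketch below computes a crude modulation spectrum from a signal's temporal envelope and derives two MSF-style statistics from it (a kurtosis and a tilt). The auditory-based modulation filterbank, band layout, and exact feature definitions used by the authors are not reproduced here; the function names and parameters are illustrative assumptions only.

```python
# Simplified sketch of two modulation-spectral-feature style statistics.
# Not the authors' implementation: no auditory filterbank is used here.
import numpy as np
from scipy.signal import hilbert
from scipy.stats import kurtosis

def modulation_spectrum(signal, sr, max_mod_hz=32.0):
    """Magnitude spectrum of the temporal envelope (a crude modulation spectrum)."""
    envelope = np.abs(hilbert(signal))           # temporal amplitude envelope
    envelope = envelope - envelope.mean()        # remove DC before the FFT
    spectrum = np.abs(np.fft.rfft(envelope))
    freqs = np.fft.rfftfreq(len(envelope), d=1.0 / sr)
    keep = freqs <= max_mod_hz                   # modulation rates of interest
    return freqs[keep], spectrum[keep]

def modulation_spectral_kurtosis(spectrum):
    """MSKT-like feature: peakedness of the modulation magnitude values."""
    return kurtosis(spectrum)

def modulation_spectral_tilt(freqs, spectrum):
    """MSTL-like feature: slope of the log-magnitude modulation spectrum."""
    valid = freqs > 0
    log_f = np.log2(freqs[valid])
    mag_db = 20.0 * np.log10(spectrum[valid] + 1e-12)
    slope, _ = np.polyfit(log_f, mag_db, 1)      # first-order fit; slope = tilt
    return slope

if __name__ == "__main__":
    sr = 16000
    t = np.arange(0, 1.0, 1.0 / sr)
    # toy "speech": a carrier amplitude-modulated at 4 Hz
    x = np.sin(2 * np.pi * 220 * t) * (1.0 + 0.5 * np.sin(2 * np.pi * 4 * t))
    f, m = modulation_spectrum(x, sr)
    print(modulation_spectral_kurtosis(m), modulation_spectral_tilt(f, m))
```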
2

Sorokin, V. N., and I. S. Makarov. "Gender recognition from vocal source." Acoustical Physics 54, no. 4 (July 2008): 571–78. http://dx.doi.org/10.1134/s1063771008040192.

3

Huang, Chunyuan. "Vocal Music Teaching Pharyngeal Training Method Based on Audio Extraction by Big Data Analysis." Wireless Communications and Mobile Computing 2022 (May 6, 2022): 1–11. http://dx.doi.org/10.1155/2022/4572904.

Abstract:
In the process of vocal music learning, incorrect vocalization methods and excessive use of voice have brought many problems to the voice and accumulated a lot of inflammation, so that the level of vocal music learning stagnated or even declined. How to find a way to improve yourself without damaging your voice has become a problem that we have been pursuing. Therefore, it is of great practical significance for vocal music teaching in normal universities to conduct in-depth research and discussion on “pharyngeal singing.” Based on audio extraction, this paper studies the vocal music teaching pharyngeal training method. Different methods of vocal music teaching pharyngeal training have different times. When the recognition amount is 3, the average recognition time of vocal music teaching pharyngeal training based on data mining is 0.010 seconds, the average recognition time of vocal music teaching pharyngeal training based on Internet of Things is 0.011 seconds, and the average recognition time of vocal music teaching pharyngeal training based on audio extraction is 0.006 seconds. The recognition time of the audio extraction method is much shorter than that of the other two traditional methods, because the audio extraction method can perform segmented training according to the changing trend of physical characteristics of notes, effectively extract the characteristics of vocal music teaching pharyngeal training, and shorten the recognition time. The learning of “pharyngeal singing” in vocal music teaching based on audio extraction is different from general vocal music training. It has its unique theory, concept, law, and sound image. In order to “liberate your voice,” it adopts large-capacity and large-scale training methods.
4

Mo, Wenwen, and Yuan Yuan. "Design of Interactive Vocal Guidance and Artistic Psychological Intervention System Based on Emotion Recognition." Occupational Therapy International 2022 (June 17, 2022): 1–9. http://dx.doi.org/10.1155/2022/1079097.

Abstract:
The research on artistic psychological intervention to judge emotional fluctuations by extracting emotional features from interactive vocal signals has become a research topic with great potential for development. Based on the interactive vocal music instruction theory of emotion recognition, this paper studies the design of artistic psychological intervention system. This paper uses the vocal music emotion recognition algorithm to first train the interactive recognition network, in which the input is a row vector composed of different vocal music characteristics, and finally recognizes the vocal music of different emotional categories, which solves the problem of low data coupling in the artistic psychological intervention system. Among them, the vocal music emotion recognition experiment based on the interactive recognition network is mainly carried out from six aspects: the number of iterative training, the vocal music instruction rate, the number of emotion recognition signal nodes in the artistic psychological intervention layer, the number of sample sets, different feature combinations, and the number of emotion types. The input data of the system is a training class learning video, and actions and expressions need to be recognized before scoring. In the simulation process, before the completion of the sample indicators is unbalanced, the R language statistical analysis tool is used to balance the existing unbalanced data based on the artificial data synthesis method, and 279 uniformly classified samples are obtained. The 279 ∗ 7 dataset was used for statistical identification of the participants. The experimental results show that under the guidance of four different interactive vocal music, the vocal emotion recognition rate is between 65.85%-91.00%, which promotes the intervention of music therapy on artistic psychological intervention.
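
The balancing step mentioned in this abstract (synthesising artificial samples so that the emotion classes are equally represented) was carried out in R by the authors; the snippet below is a hedged Python analogue that uses SMOTE as a stand-in synthesis method and a generic MLP as the recognition network, with made-up placeholder features rather than the study's 279 x 7 dataset.

```python
# Hypothetical Python analogue of class balancing + emotion classification.
import numpy as np
from imblearn.over_sampling import SMOTE           # pip install imbalanced-learn
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_features = 7                                     # one row of vocal features per sample
X = rng.normal(size=(300, n_features))             # placeholder feature rows
y = np.repeat([0, 1, 2, 3, 4, 5], [120, 75, 45, 30, 18, 12])   # imbalanced emotion labels

# Synthesise artificial minority-class samples until all classes are balanced.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)

X_tr, X_te, y_tr, y_te = train_test_split(X_bal, y_bal, test_size=0.2,
                                          stratify=y_bal, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print("emotion-recognition accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```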
5

Bryant, Gregory, and H. Clark Barrett. "Vocal Emotion Recognition Across Disparate Cultures." Journal of Cognition and Culture 8, no. 1-2 (2008): 135–48. http://dx.doi.org/10.1163/156770908x289242.

Abstract:
There exists substantial cultural variation in how emotions are expressed, but there is also considerable evidence for universal properties in facial and vocal affective expressions. This is the first empirical effort examining the perception of vocal emotional expressions across cultures with little common exposure to sources of emotion stimuli, such as mass media. Shuar hunter-horticulturalists from Amazonian Ecuador were able to reliably identify happy, angry, fearful and sad vocalizations produced by American native English speakers by matching emotional spoken utterances to emotional expressions portrayed in pictured faces. The Shuar performed similarly to English speakers who heard the same utterances in a content-filtered condition. These data support the hypothesis that vocal emotional expressions of basic affective categories manifest themselves in similar ways across quite disparate cultures.
6

Masapollo, Matthew, Linda Polka, Lucie Menard, and Athena Vouloumanos. "Infant recognition of infant vocal signals." Journal of the Acoustical Society of America 133, no. 5 (May 2013): 3334. http://dx.doi.org/10.1121/1.4805602.

7

Konev, Anton, Evgeny Kostyuchenko, and Alexey Yakimuk. "The program complex for vocal recognition." Journal of Physics: Conference Series 803 (January 2017): 012077. http://dx.doi.org/10.1088/1742-6596/803/1/012077.

8

Sorokin, V. N., A. A. Tananykin, and V. G. Trunov. "Speaker recognition using vocal source model." Pattern Recognition and Image Analysis 24, no. 1 (March 2014): 156–73. http://dx.doi.org/10.1134/s1054661814010179.

9

Sorokin, V. N. "Vocal Source Contribution to Speaker Recognition." Pattern Recognition and Image Analysis 28, no. 3 (July 2018): 546–56. http://dx.doi.org/10.1134/s1054661818030197.

10

Houde, Robert A., and James M. Hillenbrand. "Vocal tract normalization for vowel recognition." Journal of the Acoustical Society of America 121, no. 5 (May 2007): 3189. http://dx.doi.org/10.1121/1.4782401.

11

Johnson, William F. "Recognition of Emotion From Vocal Cues." Archives of General Psychiatry 43, no. 3 (March 1, 1986): 280. http://dx.doi.org/10.1001/archpsyc.1986.01800030098011.

12

Wang, Ning, P. C. Ching, Nengheng Zheng, and Tan Lee. "Robust Speaker Recognition Using Denoised Vocal Source and Vocal Tract Features." IEEE Transactions on Audio, Speech, and Language Processing 19, no. 1 (January 2011): 196–205. http://dx.doi.org/10.1109/tasl.2010.2045800.

13

Ekberg, Mattias, Josefine Andin, Stefan Stenfelt, and Örjan Dahlström. "Effects of mild-to-moderate sensorineural hearing loss and signal amplification on vocal emotion recognition in middle-aged–older individuals." PLOS ONE 17, no. 1 (January 7, 2022): e0261354. http://dx.doi.org/10.1371/journal.pone.0261354.

Abstract:
Previous research has shown deficits in vocal emotion recognition in sub-populations of individuals with hearing loss, making this a high-priority research topic. However, previous research has only examined vocal emotion recognition using verbal material, in which emotions are expressed through emotional prosody. There is evidence that older individuals with hearing loss suffer from deficits in general prosody recognition, not specific to emotional prosody. No study has examined the recognition of non-verbal vocalizations, which constitute another important source for the vocal communication of emotions. It might be the case that individuals with hearing loss have specific difficulties in recognizing emotions expressed through prosody in speech, but not non-verbal vocalizations. We aim to examine whether vocal emotion recognition difficulties in middle-aged to older individuals with mild-to-moderate sensorineural hearing loss are better explained by deficits in vocal emotion recognition specifically, or by deficits in prosody recognition generally, by including both sentences and non-verbal expressions. Furthermore, some of the studies which have concluded that individuals with mild-to-moderate hearing loss have deficits in vocal emotion recognition ability have also found that the use of hearing aids does not improve recognition accuracy in this group. We aim to examine the effects of linear amplification and audibility on the recognition of different emotions expressed both verbally and non-verbally. Besides examining accuracy for different emotions, we will also look at patterns of confusion (which specific emotions are mistaken for which other specific emotions, and at which rates) during both amplified and non-amplified listening, and we will analyze all material acoustically and relate the acoustic content to performance. Together these analyses will provide clues to the effects of amplification on the perception of different emotions. For these purposes, a total of 70 middle-aged to older individuals, half with mild-to-moderate hearing loss and half with normal hearing, will perform a computerized forced-choice vocal emotion recognition task with and without amplification.
14

Guo, Tingjun. "Application of Internet of Things Technology in Vocal Music Teaching Recording Equipment Assisted by Machine Learning." Wireless Communications and Mobile Computing 2022 (April 25, 2022): 1–10. http://dx.doi.org/10.1155/2022/2091387.

Abstract:
Vocal music teaching is a professional, technical, and practical subject. It is also an important part of music and art education and has certain social and educational significance. In vocal music teaching, recording equipment is an essential teaching tool. It plays a pivotal role in vocal music teaching. In recent years, with the rapid development of China’s social economy, education has also achieved great development, and it has also brought development opportunities to vocal music education. More and more advanced technologies and equipment have appeared to assist the smooth progress of vocal music teaching, especially recording equipment. This paper is aimed at studying the application of IoT technology in vocal music teaching recording equipment assisted by machine learning. Combined with machine learning and Internet of Things technology, the experiment of recognition effect of vocal music teaching recording equipment was carried out. It designs an end-to-end vocal music teaching recording device recognition model based on the Internet of Things technology assisted by machine learning. The experimental results show that the use of this model improves the recording recognition accuracy of vocal music teaching recording equipment by 20%.
16

Vitousek, Maren N., James S. Adelman, Nathan C. Gregory, and James J. H. St Clair. "Heterospecific alarm call recognition in a non-vocal reptile." Biology Letters 3, no. 6 (October 2, 2007): 632–34. http://dx.doi.org/10.1098/rsbl.2007.0443.

Abstract:
The ability to recognize and respond to the alarm calls of heterospecifics has previously been described only in species with vocal communication. Here we provide evidence that a non-vocal reptile, the Galápagos marine iguana ( Amblyrhynchus cristatus ), can eavesdrop on the alarm call of the Galápagos mockingbird ( Nesomimus parvulus ) and respond with anti-predator behaviour. Eavesdropping on complex heterospecific communications demonstrates a remarkable degree of auditory discrimination in a non-vocal species.
17

Pittman, Andrea L., and Terry L. Wiley. "Recognition of Speech Produced in Noise." Journal of Speech, Language, and Hearing Research 44, no. 3 (June 2001): 487–96. http://dx.doi.org/10.1044/1092-4388(2001/038).

Abstract:
A two-part study examined recognition of speech produced in quiet and in noise by normal hearing adults. In Part I 5 women produced 50 sentences consisting of an ambiguous carrier phrase followed by a unique target word. These sentences were spoken in three environments: quiet, wide band noise (WBN), and meaningful multi-talker babble (MMB). The WBN and MMB competitors were presented through insert earphones at 80 dB SPL. For each talker, the mean vocal level, long-term average speech spectra, and mean word duration were calculated for the 50 target words produced in each speaking environment. Compared to quiet, the vocal levels produced in WBN and MMB increased an average of 14.5 dB. The increase in vocal level was characterized by increased spectral energy in the high frequencies. Word duration also increased an average of 77 ms in WBN and MMB relative to the quiet condition. In Part II, the sentences produced by one of the 5 talkers were presented to 30 adults in the presence of multi-talker babble under two conditions. Recognition was evaluated for each condition. In the first condition, the sentences produced in quiet and in noise were presented at equal signal-to-noise ratios (SNR-E). This served to remove the vocal level differences between the speech samples. In the second condition, the vocal level differences were preserved (SNR-P). For the SNR-E condition, recognition of the speech produced in WBN and MMB was on average 15% higher than that for the speech produced in quiet. For the SNR-P condition, recognition increased an average of 69% for these same speech samples relative to speech produced in quiet. In general, correlational analyses failed to show a direct relation between the acoustic properties measured in Part I and the recognition measures in Part II.
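
For readers who want to reproduce the equalised-SNR idea behind the SNR-E condition, the following minimal sketch rescales a noise masker so that a speech sample is mixed at a chosen signal-to-noise ratio; the signals and target SNR are placeholders, not the study's stimuli.

```python
# Minimal sketch: mix speech and a masker at a requested SNR (in dB),
# removing vocal-level differences between recordings in the process.
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Return speech + noise with the noise scaled to the requested SNR (dB)."""
    noise = noise[: len(speech)]                       # trim masker to utterance length
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    target_p_noise = p_speech / (10.0 ** (snr_db / 10.0))
    return speech + noise * np.sqrt(target_p_noise / p_noise)

if __name__ == "__main__":
    sr = 16000
    rng = np.random.default_rng(1)
    speech = np.sin(2 * np.pi * 150 * np.arange(sr) / sr)   # stand-in utterance
    noise = rng.normal(size=2 * sr)                          # stand-in wide-band noise
    mixture = mix_at_snr(speech, noise, snr_db=5.0)
    print(mixture.shape)
```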
18

Zupan, Barbra, Duncan Babbage, Dawn Neumann, and Barry Willer. "Sex Differences in Emotion Recognition and Emotional Inferencing Following Severe Traumatic Brain Injury." Brain Impairment 18, no. 1 (October 21, 2016): 36–48. http://dx.doi.org/10.1017/brimp.2016.22.

Abstract:
The primary objective of the current study was to determine if men and women with traumatic brain injury (TBI) differ in their emotion recognition and emotional inferencing abilities. In addition to overall accuracy, we explored whether differences were contingent upon the target emotion for each task, or upon high- and low-intensity facial and vocal emotion expressions. A total of 160 participants (116 men) with severe TBI completed three tasks – a task measuring facial emotion recognition (DANVA-Faces), vocal emotion recognition (DANVA-Voices) and one measuring emotional inferencing (emotional inference from stories test (EIST)). Results showed that women with TBI were significantly more accurate in their recognition of vocal emotion expressions and also for emotional inferencing. Further analyses of task performance showed that women were significantly better than men at recognising fearful facial expressions and also facial emotion expressions high in intensity. Women also displayed increased response accuracy for sad vocal expressions and low-intensity vocal emotion expressions. Analysis of the EIST task showed that women were more accurate than men at emotional inferencing in sad and fearful stories. A similar proportion of women and men with TBI were impaired (≥ 2 SDs when compared to normative means) at facial emotion perception, χ2 = 1.45, p = 0.228, but a larger proportion of men was impaired at vocal emotion recognition, χ2 = 7.13, p = 0.008, and emotional inferencing, χ2 = 7.51, p = 0.006.
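
The impairment comparisons reported at the end of this abstract are standard chi-square tests on 2 x 2 contingency tables; a hedged example with invented counts (not the study's data) is shown below.

```python
# Chi-square test on a 2 x 2 table of impaired vs. non-impaired counts by sex.
# The numbers are hypothetical and only illustrate the form of the analysis.
from scipy.stats import chi2_contingency

# rows: men, women; columns: impaired, not impaired
table = [[40, 76],
         [ 8, 36]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```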
19

Bodner, E., and R. Aharoni. "Improving happiness recognition in human voice among people suffering from social anxiety." European Psychiatry 26, S2 (March 2011): 1302. http://dx.doi.org/10.1016/s0924-9338(11)73007-3.

Abstract:
Introduction: Previous studies have shown that in comparison to healthy people, patients with social anxiety (SA) identify fewer emotions of happiness in vocal expressions. Objectives: (1) To repeat previous studies on emotion recognition in SA patients; (2) to examine the effect of training on emotion recognition in SA patients. Aims: (1) To examine the effect of training in emotion recognition of non-verbally vocal musical improvisations on the ability of SA patients to identify happiness in verbal vocal spoken language; (2) to create a preliminary procedure for improving this ability in SA patients. Methods: 41 SA patients and 39 healthy controls aged 24–40 were examined. SA diagnosis was conducted according to the norms of the Liebowitz (1987) questionnaire. Half underwent an intervention that focused them on happiness recognition in non-verbally vocal musical improvisations, and half did not. The four groups were then compared on level of precision of emotion recognition (happiness, fear, anger, sadness and surprise) in spoken language sentences. Results: A multivariate analysis of variance showed that SA patients identified significantly fewer emotions of happiness in vocal expressions than healthy controls. SA patients who were trained demonstrated a similar precision level of happiness recognition in woman-spoken language sentences as healthy controls (ps < 0.05). Conclusions: Our findings demonstrate that short exposure (20 min) to non-verbally vocal musical improvisations immediately improves the ability of SA patients to recognize happiness in spoken language. Future studies can refine our procedure and examine its impact on SA patients over longer time periods (e.g., months).
20

Minter, M. E., R. P. Hobson, and L. Pring. "Recognition of Vocally Expressed Emotion by Congenitally Blind Children." Journal of Visual Impairment & Blindness 85, no. 10 (December 1991): 411–15. http://dx.doi.org/10.1177/0145482x9108501007.

Abstract:
Eight congenitally blind children, individually matched with eight sighted children, were tested for their ability to identify vocal expressions of emotion and the sounds of a range of non-emotional objects. They had specific difficulty recognizing emotions according to vocal qualities.
21

Sauter, Disa A., Charlotte Panattoni, and Francesca Happé. "Children's recognition of emotions from vocal cues." British Journal of Developmental Psychology 31, no. 1 (June 6, 2012): 97–113. http://dx.doi.org/10.1111/j.2044-835x.2012.02081.x.

22

Lin, I.-Fan, Takashi Yamada, Yoko Komine, Nobumasa Kato, Masaharu Kato, and Makio Kashino. "Vocal Identity Recognition in Autism Spectrum Disorder." PLOS ONE 10, no. 6 (June 12, 2015): e0129451. http://dx.doi.org/10.1371/journal.pone.0129451.

23

Kurniawan, Sri, Adam J. Sporka, and Susumu Harada. "Vocal interaction: beyond traditional automatic speech recognition." Universal Access in the Information Society 8, no. 2 (August 7, 2008): 63–64. http://dx.doi.org/10.1007/s10209-008-0134-z.

24

Mualem, Orit, and Michal Lavidor. "Music education intervention improves vocal emotion recognition." International Journal of Music Education 33, no. 4 (May 7, 2015): 413–25. http://dx.doi.org/10.1177/0255761415584292.

25

D, Pravena, Dhivya S, and Durga Devi A. "Pathological Voice Recognition for Vocal Fold Disease." International Journal of Computer Applications 47, no. 13 (June 30, 2012): 31–37. http://dx.doi.org/10.5120/7250-0314.

26

Skriver, Christian P. "Airborne Message Entry by Voice Recognition." Proceedings of the Human Factors Society Annual Meeting 31, no. 4 (September 1987): 424–27. http://dx.doi.org/10.1177/154193128703100409.

Abstract:
This report presents the results of an experiment that measured performance in a simulated ASW message entry task with two modes of data input—vocal and manual. The subjects (Ss) were 12 Naval enlisted men. The independent variable was message data entry mode—vocal or manual. The dependent variables were: time to enter 20 lines of text, data entry errors that were corrected by the Ss, and errors that remained undetected. All Ss were trained to use the voice recognition system with a 100 word vocabulary set. The task was for the S to read one line of message text from a display and then re-enter the text below the displayed text via either voice recognizer or keyboard until 20 lines of text had been entered. Keyboard entry was found to be slightly faster (11%) than voice recognition input. While the number of initial errors (corrected) in the vocal input mode was over three times greater than the number for manual input, the remaining input errors (uncorrected) were about the same.
27

He, Zhuo. "Vocal Music Recognition Based on Deep Convolution Neural Network." Scientific Programming 2022 (February 2, 2022): 1–10. http://dx.doi.org/10.1155/2022/7905992.

Abstract:
In order to achieve fast and accurate music technique recognition and enhancement for vocal music teaching, the paper proposed a music recognition method based on a combination of migration (transfer) learning and a CNN (convolutional neural network). Firstly, the most standard timbre vocal music is preprocessed by panning, flipping, rotating, and scaling, and then manually classified by vocal technique features such as breathing method, articulation method, pronunciation method, and pitch-region training. Then, based on the migration learning method, the weight parameters obtained from the convolutional model trained on the sound dataset are migrated to the sound recognition task; the convolutional and pooling layers of the convolutional model are used as feature extraction layers, while the top layer is redesigned as a global average pooling layer and a Softmax output layer, and some of the convolutional layers are frozen during training. The experimental results show that the average test accuracy of the model is 86%, the training time is about half that of the original model, and the model size is only 74.2 M. The F1 values of the model are 0.88, 0.80, 0.83, and 0.85 for the four aspects of breathing method, exhalation method, articulation method, and pitch-region training. The experimental results show that the method is efficient, effective, and transferable for voice and vocal music teaching research.
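
A minimal sketch of the transfer-learning arrangement this abstract outlines — a pretrained convolutional base with most layers frozen, topped by global average pooling and a softmax classifier — is given below; the backbone, input size, and class count are assumptions rather than the authors' exact configuration.

```python
# Hedged transfer-learning sketch: pretrained conv base as feature extractor,
# most layers frozen, new global-average-pooling + softmax top.
import tensorflow as tf

NUM_CLASSES = 4  # e.g. breathing, articulation, pronunciation, pitch-region training

base = tf.keras.applications.MobileNetV2(
    include_top=False, weights="imagenet", input_shape=(128, 128, 3))
for layer in base.layers[:-20]:      # freeze most of the convolutional layers
    layer.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),            # redesigned top layer
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(spectrogram_batches, epochs=20)  # spectrogram_batches: user-supplied dataset
```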
28

Briefer, Elodie F., Monica Padilla de la Torre, and Alan G. McElligott. "Mother goats do not forget their kids’ calls." Proceedings of the Royal Society B: Biological Sciences 279, no. 1743 (June 20, 2012): 3749–55. http://dx.doi.org/10.1098/rspb.2012.0986.

Abstract:
Parent–offspring recognition is crucial for offspring survival. At long distances, this recognition is mainly based on vocalizations. Because of maturation-related changes to the structure of vocalizations, parents have to learn successive call versions produced by their offspring throughout ontogeny in order to maintain recognition. However, because of the difficulties involved in following the same individuals over years, it is not clear how long this vocal memory persists. Here, we investigated long-term vocal recognition in goats. We tested responses of mothers to their kids’ calls 7–13 months after weaning. We then compared mothers’ responses to calls of their previous kids with their responses to the same calls at five weeks postpartum. Subjects tended to respond more to their own kids at five weeks postpartum than 11–17 months later, but displayed stronger responses to their previous kids than to familiar kids from other females. Acoustic analyses showed that it is unlikely that mothers were responding to their previous kids simply because they confounded them with the new kids they were currently nursing. Therefore, our results provide evidence for strong, long-term vocal memory capacity in goats. The persistence of offspring vocal recognition beyond weaning could have important roles in kin social relationships and inbreeding avoidance.
29

Zhang, Xiaoyan. "Research on Modeling of Vocal State Duration Based on Spectrogram Analysis." E3S Web of Conferences 236 (2021): 04043. http://dx.doi.org/10.1051/e3sconf/202123604043.

Abstract:
In the early stage of vocal music education, students generally do not understand the structure of the human body, and have doubts about how to pronounce their voices scientifically. However, with the continuous development of computers, computer technology has become more and more developed, and computer processing speed has been greatly increased, which provides favorable conditions for the development of the application of vocal spectrum analysis technology in vocal music teaching. In this paper, we first study the GMM-SVM and DBN, and combine them to extract the deep Gaussian super vector DGS, and further construct the feature DGCS on the basis of DGS; then we study the convolutional neural network (CNN), which has achieved great success in the image recognition task in recent years, and design a CNN model to extract the deep fusion features of vocal music. The experimental simulations show that the CNN fusion-based speaker recognition system achieves very good results in terms of recognition rate.
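
As a simplified, hedged illustration of the GMM-SVM idea this abstract builds on, the sketch below represents each utterance by a stacked-means GMM supervector and classifies speakers with a linear SVM; the deep Gaussian supervector (DGS/DGCS) and CNN fusion stages described by the author are not reproduced, and all data are synthetic placeholders.

```python
# Simplified GMM-supervector + SVM speaker-recognition pipeline (toy data).
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def gmm_supervector(frames, n_components=4, seed=0):
    """Fit a small GMM to an utterance's feature frames and stack its means."""
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type="diag", random_state=seed)
    gmm.fit(frames)
    return gmm.means_.ravel()

rng = np.random.default_rng(0)
# toy corpus: 20 utterances from 2 speakers, 200 frames x 13 features each
X, y = [], []
for speaker in (0, 1):
    for _ in range(10):
        frames = rng.normal(loc=speaker, size=(200, 13))
        X.append(gmm_supervector(frames))
        y.append(speaker)
X, y = np.array(X), np.array(y)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          stratify=y, random_state=0)
clf = SVC(kernel="linear").fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```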
30

Globerson, Eitan, Noam Amir, Ofer Golan, Liat Kishon-Rabin, and Michal Lavidor. "Psychoacoustic abilities as predictors of vocal emotion recognition." Attention, Perception, & Psychophysics 75, no. 8 (July 27, 2013): 1799–810. http://dx.doi.org/10.3758/s13414-013-0518-x.

31

Nuechterlein, Gary L., and Deborah Buitron. "Vocal Advertising and Sex Recognition in Eared Grebes." Condor 94, no. 4 (November 1992): 937–43. http://dx.doi.org/10.2307/1369290.

32

Anderson, R. C. "Recognition and Treatment of Paradoxical Vocal Fold Disorder." Psychosomatic Medicine 61, no. 1 (1999): 98. http://dx.doi.org/10.1097/00006842-199901000-00083.

33

Cho, Kwan Hyun. "Method and Apparatus for Vocal-Cord Signal Recognition." Journal of the Acoustical Society of America 129, no. 3 (2011): 1672. http://dx.doi.org/10.1121/1.3573348.

34

Pell, Marc D., and Sonja A. Kotz. "On the Time Course of Vocal Emotion Recognition." PLoS ONE 6, no. 11 (November 7, 2011): e27256. http://dx.doi.org/10.1371/journal.pone.0027256.

35

Townsend, Simon W., Colin Allen, and Marta B. Manser. "A simple test of vocal individual recognition in wild meerkats." Biology Letters 8, no. 2 (October 12, 2011): 179–82. http://dx.doi.org/10.1098/rsbl.2011.0844.

Abstract:
Individual recognition is thought to be a crucial ability facilitating the evolution of animal societies. Given its central importance, much research has addressed the extent of this capacity across the animal kingdom. Recognition of individuals vocally has received particular attention due, in part, to the insights it provides regarding the cognitive processes that underlie this skill. While much work has focused on vocal individual recognition in primates, there is currently very little data showing comparable skills in non-primate mammals under natural conditions. This may be because non-primate mammal societies do not provide obvious contexts in which vocal individual recognition can be rigorously tested. We addressed this gap in understanding by designing an experimental paradigm to test for individual recognition in meerkats ( Suricata suricatta ) without having to rely on naturally occurring social contexts. Results suggest that when confronted with a physically impossible scenario—the presence of the same conspecific meerkat in two different places—subjects responded more strongly than during the control, a physically possible setup. We argue that this provides the first clear evidence for vocal individual recognition in wild non-primate mammals and hope that this novel experimental design will allow more systematic cross-species comparisons of individual recognition under natural settings.
36

Franco, Fabia, Marcia Chew, and Joel Simon Swaine. "Preschoolers’ attribution of affect to music: A comparison between vocal and instrumental performance." Psychology of Music 45, no. 1 (August 6, 2016): 131–49. http://dx.doi.org/10.1177/0305735616652954.

Abstract:
Research has shown inconsistent results concerning the ability of young children to identify musical emotion. This study explores the influence of the type of musical performance (vocal vs. instrumental) on children’s affect identification. Using an independent-group design, novel child-directed music was presented in three conditions: instrumental, vocal-only, and song (instrumental plus vocals) to 3- to 6-year-olds previously screened for language development ( N = 76). A forced-choice task was used in which children chose a face expressing the emotion matching each musical track. All performance conditions comprised “happy” (major mode/fast tempo) and “sad” (minor mode/slow tempo) tracks. Nonsense syllables rather than words were used in the vocals in order to avoid the influence of lyrics on children’s decisions. The results showed that even the younger children were able to correctly identify the intended emotion in music, although “happy” music was more readily recognised and recognition appeared facilitated in the instrumental condition. Performance condition interacted with gender.
37

Schall, Sonja, Stefan J. Kiebel, Burkhard Maess, and Katharina von Kriegstein. "Voice Identity Recognition: Functional Division of the Right STS and Its Behavioral Relevance." Journal of Cognitive Neuroscience 27, no. 2 (February 2015): 280–91. http://dx.doi.org/10.1162/jocn_a_00707.

Abstract:
The human voice is the primary carrier of speech but also a fingerprint for person identity. Previous neuroimaging studies have revealed that speech and identity recognition is accomplished by partially different neural pathways, despite the perceptual unity of the vocal sound. Importantly, the right STS has been implicated in voice processing, with different contributions of its posterior and anterior parts. However, the time point at which vocal and speech processing diverge is currently unknown. Also, the exact role of the right STS during voice processing is so far unclear because its behavioral relevance has not yet been established. Here, we used the high temporal resolution of magnetoencephalography and a speech task control to pinpoint transient behavioral correlates: we found, at 200 msec after stimulus onset, that activity in right anterior STS predicted behavioral voice recognition performance. At the same time point, the posterior right STS showed increased activity during voice identity recognition in contrast to speech recognition whereas the left mid STS showed the reverse pattern. In contrast to the highly speech-sensitive left STS, the current results highlight the right STS as a key area for voice identity recognition and show that its anatomical-functional division emerges around 200 msec after stimulus onset. We suggest that this time point marks the speech-independent processing of vocal sounds in the posterior STS and their successful mapping to vocal identities in the anterior STS.
38

Leedale, Amy E., Robert F. Lachlan, Elva J. H. Robinson, and Ben J. Hatchwell. "Helping decisions and kin recognition in long-tailed tits: is call similarity used to direct help towards kin?" Philosophical Transactions of the Royal Society B: Biological Sciences 375, no. 1802 (May 18, 2020): 20190565. http://dx.doi.org/10.1098/rstb.2019.0565.

Abstract:
Most cooperative breeders live in discrete family groups, but in a minority, breeding populations comprise extended social networks of conspecifics that vary in relatedness. Selection for effective kin recognition may be expected for more related individuals in such kin neighbourhoods to maximize indirect fitness. Using a long-term social pedigree, molecular genetics, field observations and acoustic analyses, we examine how vocal similarity affects helping decisions in the long-tailed tit Aegithalos caudatus . Long-tailed tits are cooperative breeders in which help is typically redirected by males that have failed in their own breeding attempts towards the offspring of male relatives living within kin neighbourhoods. We identify a positive correlation between call similarity and kinship, suggesting that vocal cues offer a plausible mechanism for kin discrimination. Furthermore, we show that failed breeders choose to help males with calls more similar to their own. However, although helpers fine-tune their provisioning rates according to how closely related they are to recipients, their effort was not correlated with their vocal similarity to helped breeders. We conclude that although vocalizations are an important part of the recognition system of long-tailed tits, discrimination is likely to be based on prior association and may involve a combination of vocal and non-vocal cues. This article is part of the theme issue ‘Signal detection theory in recognition systems: from evolving models to experimental tests’.
39

Chen, Fang, and Cristiano Masi. "Effect of Noise on Automatic Speech Recognition System Error Rate." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 44, no. 37 (July 2000): 606–9. http://dx.doi.org/10.1177/154193120004403716.

Abstract:
Many studies have indicated that stress and workload can affect the recognition accuracy of a speech recognition system. This can include noise, vibration, G-force, information overload, vocal quality in noise, vocal quality under psychological stress, concurrent task performance, and vocal fatigue. Commercially available speech recognition systems have not yet reached a design that can recognize natural human speech perfectly. The military application of automatic speech recognition systems has been studied in a wide range of settings. Verbex’ Voice Master was recommended in its instruction book as especially well suited for use in a noisy environment. This system was selected as a candidate system for use in cockpits. Before implementing it in the cockpit, its strengths and weaknesses for special utterances need to be tested in a laboratory environment. The purpose of the study was to investigate the effects of noise on recognition accuracy in dual-task performance. The experiment was carried out in a noise-insulated room. The Verbex’ Voice Master speech recognition system was installed into the computer. Eleven male Swedish students were the subjects. Two noise levels were set up with a combination of mental workload and physical workload. The results showed that without noise and mental workload, the recognition accuracy could be as good as 99.4%. With noise and mental workload, the recognition accuracy could be reduced to 95%. The results indicated that noise had significant effects on the computer error, while mental workload had significant effects on both subject error and computer error.
40

Teixeira, João Paulo, Nuno Alves, and Paula Odete Fernandes. "Vocal Acoustic Analysis." International Journal of E-Health and Medical Communications 11, no. 1 (January 2020): 37–51. http://dx.doi.org/10.4018/ijehmc.2020010103.

Abstract:
Vocal acoustic analysis is becoming a useful tool for the classification and recognition of laryngological pathologies. This technique enables a non-invasive and low-cost assessment of voice disorders, allowing a more efficient, fast, and objective diagnosis. In this work, ANN and SVM were experimented on to classify between dysphonic/control and vocal cord paralysis/control. A vector was made up of 4 jitter parameters, 4 shimmer parameters, and a harmonic to noise ratio (HNR), determined from 3 different vowels at 3 different tones, with a total of 81 features. Variable selection and dimension reduction techniques such as hierarchical clustering, multilinear regression analysis and principal component analysis (PCA) was applied. The classification between dysphonic and control was made with an accuracy of 100% for female and male groups with ANN and SVM. For the classification between vocal cords paralysis and control an accuracy of 78,9% was achieved for female group with SVM, and 81,8% for the male group with ANN.
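
The classification stage this abstract describes — an 81-dimensional jitter/shimmer/HNR vector reduced with PCA and classified with an SVM or ANN — can be sketched as below; the acoustic measurements themselves and the authors' exact model settings are outside this illustration, and the data here are placeholders.

```python
# Hedged sketch of PCA + SVM classification of per-subject acoustic parameters.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_subjects, n_features = 60, 81                  # 9 measures x 3 vowels x 3 tones
X = rng.normal(size=(n_subjects, n_features))    # placeholder jitter/shimmer/HNR values
y = rng.integers(0, 2, size=n_subjects)          # 0 = control, 1 = pathological

model = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
scores = cross_val_score(model, X, y, cv=5)
print("cross-validated accuracy:", scores.mean())
```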
41

Weiss, Michael W., E. Glenn Schellenberg, and Sandra E. Trehub. "Generality of the Memory Advantage for Vocal Melodies." Music Perception 34, no. 3 (February 1, 2017): 313–18. http://dx.doi.org/10.1525/mp.2017.34.3.313.

Abstract:
Children and adults, with or without music training, exhibit better memory for vocal melodies (without lyrics) than for instrumental melodies (Weiss, Schellenberg, Trehub, & Dawber, 2015; Weiss, Trehub, & Schellenberg, 2012; Weiss, Trehub, Schellenberg, & Habashi, 2016; Weiss, Vanzella, Schellenberg, & Trehub, 2015). In the present study, we compared adults’ memory for vocal and instrumental melodies, as before, but with two additional singers, one female (same pitch level as the original female) and one male (7 semitones lower). In an exposure phase, 90 participants (M = 4.1 years training, SD = 3.9) rated their liking of 24 melodies—6 each in voice, piano, banjo, and marimba. After a short break, they heard the same melodies plus 24 timbre-matched foils (6 per timbre) and rated their recognition of each melody. Recognition was better for vocal melodies than for melodies in every other timbre, replicating previous findings. Importantly, the memory advantage was comparable across voices, despite the fact that liking ratings for vocal melodies differed by singer. Our results provide support for the notion that the vocal advantage in memory for melodies is independent of the idiosyncrasies of specific singers or of vocal attractiveness, arising instead from enhanced processing of a biologically significant timbre.
42

Beveridge, Scott, and Don Knox. "Popular music and the role of vocal melody in perceived emotion." Psychology of Music 46, no. 3 (June 30, 2017): 411–23. http://dx.doi.org/10.1177/0305735617713834.

Abstract:
The voice plays a crucial role in expressing emotion in popular music. However, the importance of the voice in this context has not been systematically assessed. This study investigates the emotional effect of vocal features in popular music. In particular, it focuses on nonverbal characteristics, including vocal melody and rhythm. To determine the efficacy of these features, they are used to construct a computational Music Emotion Recognition (MER) system. The system is based on the circumplex model that expresses emotion in terms of arousal and valence. Two independent studies were used to develop the system. The first study established models for predicting arousal and valence based on a range of acoustical and nonverbal vocal features. The second study was used for independent validation of these models. Results show that features describing rhythmic qualities of the vocal line produce emotion models with a high level of generalizability. In particular these models reliably predict emotional valence, a well-known issue in existing Music Emotion Recognition systems.
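
A toy version of the circumplex-style modelling described here, predicting continuous valence and arousal from a handful of vocal rhythm/melody descriptors with ordinary regression, is sketched below; the feature names and data are placeholders, not the study's feature set or models.

```python
# Hypothetical arousal/valence regression from vocal-line descriptors.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n_songs = 200
# e.g. note rate, note-duration variance, pitch range, syncopation, ... per song
X = rng.normal(size=(n_songs, 6))
valence = X @ np.array([0.6, -0.3, 0.2, 0.4, 0.1, 0.0]) + rng.normal(0, 0.3, n_songs)
arousal = X @ np.array([0.2, 0.5, -0.1, 0.3, 0.0, 0.4]) + rng.normal(0, 0.3, n_songs)

X_tr, X_te, v_tr, v_te, a_tr, a_te = train_test_split(
    X, valence, arousal, test_size=0.25, random_state=0)
valence_model = Ridge().fit(X_tr, v_tr)
arousal_model = Ridge().fit(X_tr, a_tr)
print("valence R^2:", r2_score(v_te, valence_model.predict(X_te)))
print("arousal R^2:", r2_score(a_te, arousal_model.predict(X_te)))
```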
43

Trinh, Dang-Linh, Minh-Cong Vo, Soo-Hyung Kim, Hyung-Jeong Yang, and Guee-Sang Lee. "Self-Relation Attention and Temporal Awareness for Emotion Recognition via Vocal Burst." Sensors 23, no. 1 (December 24, 2022): 200. http://dx.doi.org/10.3390/s23010200.

Abstract:
Speech emotion recognition (SER) is one of the most exciting topics many researchers have recently been involved in. Although much research has been conducted recently on this topic, emotion recognition via non-verbal speech (known as the vocal burst) is still sparse. The vocal burst is concise and has meaningless content, which is harder to deal with than verbal speech. Therefore, in this paper, we proposed a self-relation attention and temporal awareness (SRA-TA) module to tackle this problem with vocal bursts, which could capture the dependency in a long-term period and focus on the salient parts of the audio signal as well. Our proposed method contains three main stages. Firstly, the latent features are extracted using a self-supervised learning model from the raw audio signal and its Mel-spectrogram. After the SRA-TA module is utilized to capture the valuable information from latent features, all features are concatenated and fed into ten individual fully-connected layers to predict the scores of 10 emotions. Our proposed method achieves a mean concordance correlation coefficient (CCC) of 0.7295 on the test set, which achieves the first ranking of the high-dimensional emotion task in the 2022 ACII Affective Vocal Burst Workshop & Challenge.
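
The concordance correlation coefficient (CCC) used to score this challenge entry has a standard closed form; the helper below implements that definition (it is not code from the paper).

```python
# Lin's concordance correlation coefficient between annotated and predicted scores.
import numpy as np

def concordance_cc(y_true, y_pred):
    """CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))^2)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mean_t, mean_p = y_true.mean(), y_pred.mean()
    var_t, var_p = y_true.var(), y_pred.var()
    covariance = np.mean((y_true - mean_t) * (y_pred - mean_p))
    return 2.0 * covariance / (var_t + var_p + (mean_t - mean_p) ** 2)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    truth = rng.uniform(0, 1, 100)                 # e.g. annotated intensity of one emotion
    prediction = truth + rng.normal(0, 0.1, 100)
    print(round(concordance_cc(truth, prediction), 4))
```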
44

Babaoğlu, Gizem, Başak Yazgan, Pınar Erturk, Etienne Gaudrain, Laura Rachman, Leanne Nagels, Stefan Launer, et al. "Vocal emotion recognition by native Turkish children with normal hearing and with hearing aids." Journal of the Acoustical Society of America 151, no. 4 (April 2022): A278. http://dx.doi.org/10.1121/10.0011335.

Abstract:
Development of vocal emotion recognition in children with normal hearing takes many years before reaching adult-like levels. In children with hearing loss, decreased audibility and potential loss of sensitivity to relevant acoustic cues may additionally affect vocal emotion perception. Hearing aids (HAs) are traditionally optimized for speech understanding, and it is not clear how children with HAs are performing in perceiving vocal emotions. In this study, we investigated vocal emotion recognition in native Turkish normal hearing children (NHC, age range: 5–18 years), normal hearing adults (NHA, age range: 18–45 years), and children with HAs (HAC, age range: 5–18 years), using pseudo-speech sentences expressed in one of the three emotions, happy, sad, or angry (Geneva Multimodal Emotion Portrayal (GEMEP) Corpus by Banziger and Scherer, 2010; EmoHI Test by Nagels et al., 2021). Visual inspection of the preliminary data suggests that performance increases with increasing age for NHC and that in general, HAC have lower recognition scores compared to NHC. Further analyses will be presented, along with acoustical analysis of the stimuli and an exploration of effects of HA settings. In addition, for cross-language comparison, these data will be compared to previously collected data with the same paradigm in children from the UK and the Netherlands.
45

Kachouri, Abdennaceur, Tarak Hdiji, Zied Sakka, and Mounir Samet. "Contribution to the Vocal Print Recognition in Arabic Language." Journal of Applied Sciences 7, no. 18 (September 1, 2007): 2560–67. http://dx.doi.org/10.3923/jas.2007.2560.2567.

46

Steen, Kim Arild, Ole Roland Therkildsen, Henrik Karstoft, and Ole Green. "A Vocal-Based Analytical Method for Goose Behaviour Recognition." Sensors 12, no. 3 (March 21, 2012): 3773–88. http://dx.doi.org/10.3390/s120303773.

47

Akçay, Çağlar, Rose J. Swift, Veronica A. Reed, and Janis L. Dickinson. "Vocal kin recognition in kin neighborhoods of western bluebirds." Behavioral Ecology 24, no. 4 (2013): 898–905. http://dx.doi.org/10.1093/beheco/art018.

48

Policht, Richard, Vlastimil Hart, Denis Goncharov, Peter Surový, Vladimír Hanzal, Jaroslav Červený, and Hynek Burda. "Vocal recognition of a nest-predator in black grouse." PeerJ 7 (March 15, 2019): e6533. http://dx.doi.org/10.7717/peerj.6533.

Abstract:
Corvids count among the important predators of bird nests. They are vocal animals and one can expect that birds threatened by their predation, such as black grouse, are sensitive to and recognize their calls. Within the framework of field studies, we noticed that adult black grouse were alerted by raven calls during periods outside the breeding season. Since black grouse are large, extremely precocial birds, this reaction can hardly be explained by sensitization specifically to the threat of nest predation by ravens. This surprising observation prompted us to study the phenomenon more systematically. According to our knowledge, the response of birds to corvid vocalization has been studied in altricial birds only. We tested whether the black grouse distinguishes and responds specifically to playback calls of the common raven. Black grouse recognized raven calls and were alerted, displaying typical neck stretching, followed by head scanning, and eventual escape. Surprisingly, males tended to react faster and exhibited a longer duration of vigilance behavior compared to females. Although raven calls are recognized by adult black grouse out of the nesting period, they are not directly endangered by the raven. We speculate that the responsiveness of adult grouse to raven calls might be explained as a learned response in juveniles from nesting hens that is then preserved in adults, or by a known association between the raven and the red fox. In that case, calls of the raven would be rather interpreted as a warning signal of probable proximity of the red fox.
49

Toivonen, Maikku, and Pia Rämä. "N400 during recognition of voice identity and vocal affect." NeuroReport 20, no. 14 (September 2009): 1245–49. http://dx.doi.org/10.1097/wnr.0b013e32832ff26f.

50

Gentner, Timothy Q. "Temporal scales of auditory objects underlying birdsong vocal recognition." Journal of the Acoustical Society of America 124, no. 2 (August 2008): 1350–59. http://dx.doi.org/10.1121/1.2945705.