Journal articles on the topic 'Vocalization'

To see the other types of publications on this topic, follow the link: Vocalization.


Consult the top 50 journal articles for your research on the topic 'Vocalization.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

DiMattina, Christopher, and Xiaoqin Wang. "Virtual Vocalization Stimuli for Investigating Neural Representations of Species-Specific Vocalizations." Journal of Neurophysiology 95, no. 2 (February 2006): 1244–62. http://dx.doi.org/10.1152/jn.00818.2005.

Abstract:
Most studies investigating neural representations of species-specific vocalizations in non-human primates and other species have involved studying neural responses to vocalization tokens. One limitation of such approaches is the difficulty in determining which acoustical features of vocalizations evoke neural responses. Traditionally used filtering techniques are often inadequate in manipulating features of complex vocalizations. Furthermore, the use of vocalization tokens cannot fully account for intrinsic stochastic variations of vocalizations that are crucial in understanding the neural codes for categorizing and discriminating vocalizations differing along multiple feature dimensions. In this work, we have taken a rigorous and novel approach to the study of species-specific vocalization processing by creating parametric “virtual vocalization” models of major call types produced by the common marmoset ( Callithrix jacchus). The main findings are as follows. 1) Acoustical parameters were measured from a database of the four major call types of the common marmoset. This database was obtained from eight different individuals, and for each individual, we typically obtained hundreds of samples of each major call type. 2) These feature measurements were employed to parameterize models defining representative virtual vocalizations of each call type for each of the eight animals as well as an overall species-representative virtual vocalization averaged across individuals for each call type. 3) Using the same feature-measurement that was applied to the vocalization samples, we measured acoustical features of the virtual vocalizations, including features not explicitly modeled and found the virtual vocalizations to be statistically representative of the callers and call types. 4) The accuracy of the virtual vocalizations was further confirmed by comparing neural responses to real and synthetic virtual vocalizations recorded from awake marmoset auditory cortex. We found a strong agreement between the responses to token vocalizations and their synthetic counterparts. 5) We demonstrated how these virtual vocalization stimuli could be employed to precisely and quantitatively define the notion of vocalization “selectivity” by using stimuli with parameter values both within and outside the naturally occurring ranges. We also showed the potential of the virtual vocalization stimuli in studying issues related to vocalization categorizations by morphing between different call types and individual callers.
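The morphing procedure mentioned in point 5, interpolating between parametric call models, can be sketched as linear interpolation in a feature space. This is only an illustration; the call names are real marmoset call types, but the feature choices and values are invented and are not the authors' synthesis model:

```python
import numpy as np

# Hypothetical averaged parameter vectors for two marmoset call types
# (start frequency in Hz, frequency-modulation depth in Hz, duration in s).
# The values are illustrative assumptions, not measured data.
twitter = np.array([7000.0, 2500.0, 0.05])
trill   = np.array([6000.0, 1000.0, 0.50])

def morph(call_a, call_b, alpha):
    """Linearly interpolate between two call parameter vectors.
    alpha = 0 returns call_a, alpha = 1 returns call_b."""
    return (1.0 - alpha) * call_a + alpha * call_b

# A morph continuum between the two call types; each parameter vector
# could then drive a synthesizer to produce a virtual vocalization.
for alpha in np.linspace(0.0, 1.0, 5):
    print(round(alpha, 2), morph(twitter, trill, alpha))
```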
2

Yustian, Indra, Dedek Kurniawan, Zahrial Effendi, Doni Setiawan, Enggar Patriono, Laila Hanum, and Arum Setiawan. "Vocalization of Western Tarsier (Cephalopachus bancanus Horsfield, 1821) in Bangka Island, Indonesia." Journal of Tropical Biodiversity and Biotechnology 6, no. 3 (September 15, 2021): 65526. http://dx.doi.org/10.22146/jtbb.65526.

Abstract:
Every tarsier species shows different vocalization behaviour. Cephalopachus bancanus, a tarsier species listed as Vulnerable on the IUCN Red List, remains poorly documented with respect to its vocalization. This research was designed to explore the species' vocalization in the vicinity of Petaling Village, District of Bangka, Bangka Island, Indonesia. Tarsier vocalizations inside temporary enclosures were recorded using a handy recorder and analysed with the bioacoustics software Audacity 2.3.3 and Raven Pro 1.6.1. We described seven vocalization types with different functions and spectrogram patterns. One type of vocalization, the squeak, is produced only by infants. Two types of vocalizations (whistle and cheeps) were produced by both infants and adults, and four vocalization types were produced by adults only. These vocalization types are audible within the range of human hearing. Some types of vocalizations have peak frequencies at the ultrasonic level, i.e. the agonistic scream, alarm call, distress call, and hysteresis.
3

Kim, Ho, and Seunghee Ha. "Relation between Early Vocalizations and Words." Communication Sciences & Disorders 27, no. 1 (March 31, 2022): 1–13. http://dx.doi.org/10.12963/csd.22877.

Abstract:
Objectives: This study investigated the relationship between the phonological characteristics of early vocalizations at 6-8 and 12-14 months and words at 18-20 months. Additionally, we aimed to identify which phonological characteristics of early vocalization can predict speech and language development at 18-20 months. Methods: Vocalizations were collected using Language ENvironment Analysis (LENA) from 14 children at 6-8, 12-14, and 18-20 months. Vocalizations were classified as precanonical or canonical. Words were separated from the full set of vocalizations at 18-20 months. Consonant inventories and phonological structures were analyzed in early vocalizations and words. Multiple regression analysis was performed to investigate whether the rate of canonical vocalizations, the size of the consonant inventory, and the number of phonological structures in early vocalization are predictive of the consonant inventory size and the number of different words at 18-20 months. Results: Consonant inventories and phonological structures in words at 18-20 months consisted of inventories that had been produced in early vocalization at 6-8 and 12-14 months. The ratio of canonical vocalizations at 6-8 months predicted the consonant inventory size and the number of different words. The consonant inventory size at 12-14 months also predicted the consonant inventory of words at 18-20 months. Conclusion: This study confirms that the phonological development of early vocalization is closely related to later speech-language development, and that speech-language evaluation based on the phonological characteristics of early vocalization can provide a basis for early diagnosis and intervention in infants and toddlers.
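A minimal sketch of the kind of multiple regression described in the Methods, predicting later vocabulary from earlier vocalization measures. The column names and values below are hypothetical and are not the study's data:

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical per-child measures; names and numbers are invented.
df = pd.DataFrame({
    "canonical_ratio_6_8":   [0.05, 0.12, 0.20, 0.08, 0.15, 0.18],
    "consonant_inv_12_14":   [3, 5, 8, 4, 6, 7],
    "different_words_18_20": [10, 25, 60, 18, 40, 55],
})

# Ordinary least squares multiple regression: do the earlier vocalization
# measures predict the number of different words at 18-20 months?
X = sm.add_constant(df[["canonical_ratio_6_8", "consonant_inv_12_14"]])
y = df["different_words_18_20"]
model = sm.OLS(y, X).fit()
print(model.summary())
```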
4

Eliades, Steven J., and Xiaoqin Wang. "Sensory-Motor Interaction in the Primate Auditory Cortex During Self-Initiated Vocalizations." Journal of Neurophysiology 89, no. 4 (April 1, 2003): 2194–207. http://dx.doi.org/10.1152/jn.00627.2002.

Abstract:
Little is known about sensory-motor interaction in the auditory cortex of primates at the level of single neurons and its role in supporting vocal communication. The present study investigated single-unit activities in the auditory cortex of a vocal primate, the common marmoset ( Callithrix jacchus), during self-initiated vocalizations. We found that 1) self-initiated vocalizations resulted in suppression of neural discharges in a majority of auditory cortical neurons. The vocalization-induced inhibition suppressed both spontaneous and stimulus-driven discharges. Suppressed units responded poorly to external acoustic stimuli during vocalization. 2) Vocalization-induced suppression began several hundred milliseconds prior to the onset of vocalization. 3) The suppression of cortical discharges reduced neural firings to below the rates expected from a unit's rate-level function, adjusted for known subcortical attenuation, and therefore was likely not entirely caused by subcortical attenuation mechanisms. 4) A smaller population of auditory cortical neurons showed increased discharges during self-initiated vocalizations. This vocalization-related excitation began after the onset of vocalization and is likely the result of acoustic feedback. Units showing this excitation responded nearly normally to external stimuli during vocalization. Based on these findings, we propose that the suppression of auditory cortical neurons, possibly originating from cortical vocal production centers, acts to increase the dynamic range of cortical responses to vocalization feedback for self monitoring. The excitatory responses, on the other hand, likely play a role in maintaining hearing sensitivity to the external acoustic environment during vocalization.
5

Heijmans, Shai. "About the 'Unreliability' of the Vocalization of Western Targum-Manuscripts." Aramaic Studies 9, no. 2 (2011): 279–89. http://dx.doi.org/10.1163/147783511x619854.

Abstract:
The main argument for the unreliability of the Tiberian vocalization in Targum manuscripts of western origin is the inconsistency with which the vocalization signs are applied. The author argues that in certain manuscripts this inconsistency is the result of a non-Tiberian vocalization system which uses the Tiberian vocalization signs, the so-called Palestino-Tiberian vocalization system. A passage from an Ashkenazic Targumic manuscript with Palestino-Tiberian vocalization is examined and its 'inconsistencies' are explained in light of similar vocalizations in manuscripts of Rabbinic Hebrew. The author suggests that manuscripts with Palestino-Tiberian vocalization may reflect the pronunciation tradition of Palestinian Aramaic of Late Antiquity.
6

Eliades, Steven J., and Xiaoqin Wang. "Comparison of auditory-vocal interactions across multiple types of vocalizations in marmoset auditory cortex." Journal of Neurophysiology 109, no. 6 (March 15, 2013): 1638–57. http://dx.doi.org/10.1152/jn.00698.2012.

Abstract:
Auditory-vocal interaction, the modulation of auditory sensory responses during vocal production, is an important but poorly understood neurophysiological phenomenon in nonhuman primates. This sensory-motor processing has important behavioral implications for self-monitoring during vocal production as well as feedback-mediated vocal control for both animals and humans. Previous studies in marmosets have shown that a large portion of neurons in the auditory cortex are suppressed during self-produced vocalization but have primarily focused on a single type of isolation vocalization. The present study expands previous analyses to compare auditory-vocal interaction of cortical responses between different types of vocalizations. We recorded neurons from the auditory cortex of unrestrained marmoset monkeys with implanted electrode arrays and showed that auditory-vocal interactions generalize across vocalization types. We found the following: 1) Vocal suppression and excitation are a general phenomenon, occurring for all four major vocalization types. 2) Within individual neurons, suppression was the more general response, occurring for multiple vocalization types, while excitation tended to be more specific to a single vocalization type. 3) A subset of neurons changed their responses between different types of vocalization, most often from strong suppression or excitation for one vocalization to unresponsive for another, and only rarely from suppression to excitation. 4) Differences in neural responses between vocalization types were weakly correlated with passive response properties, measured by playbacks of acoustic stimuli including recorded vocalizations. These results indicate that vocalization-induced modulation of the auditory cortex is a general phenomenon applicable to all vocalization types, but variations within individual neurons suggest possible vocalization-specific coding.
7

Lancaster, W. C., O. W. Henson, and A. W. Keating. "Respiratory muscle activity in relation to vocalization in flying bats." Journal of Experimental Biology 198, no. 1 (January 1, 1995): 175–91. http://dx.doi.org/10.1242/jeb.198.1.175.

Abstract:
The structure of the thoracic and abdominal walls of Pteronotus parnellii (Microchiroptera: Mormoopidae) was described with respect to their function in respiration and vocalization. We monitored electromyographic activity of respiratory and flight muscles in relation to echolocative vocalization. In flight, signals were telemetered with a small FM transmitter modified to summate the low-frequency myopotentials with biosonar signals from a ceramic-crystal microphone. Recordings were also made from the same bats confined to a small cage. Vocalizations were used as the parameter by which all muscle activities were correlated. A discrete burst of activity in the lateral abdominal wall muscles accompanied each vocalization. Diaphragmatic myopotentials occurred between groups of calls and did not coincide with activity of the abdominal wall or with vocalizations. Flight muscles were not active in resting bats. During flight, vocalizations and the abdominal muscle activity that accompanied them coincided with myopotentials of the pectoralis and serratus ventralis muscles. We propose that contractions of the lateral abdominal wall provide the primary power for the production of intense biosonar vocalization in flying and in stationary bats. In flight, synchronization of vocalization with activity of the pectoralis and serratus ventralis jointly contribute to the pressurization of the thoraco-abdominal cavity. This utilization of pressure that is normally generated in flight facilitates respiration and allows for the production of intense vocalizations with little additional energetic expenditure.
8

McCathren, Rebecca B., Paul J. Yoder, and Steven F. Warren. "The Relationship Between Prelinguistic Vocalization and Later Expressive Vocabulary in Young Children With Developmental Delay." Journal of Speech, Language, and Hearing Research 42, no. 4 (August 1999): 915–24. http://dx.doi.org/10.1044/jslhr.4204.915.

Abstract:
This study tested the relationship between prelinguistic vocalization and expressive vocabulary 1 year later in young children with mild to moderate developmental delays. Three vocalization variables were tested: rate of all vocalization, rate of vocalizations with consonants, and rate of vocalizations used interactively. The 58 toddlers in the study were 17–34 months old, not sensory impaired, and had Bayley Mental Development Indices (Bayley, 1969; Bayley, 1993) from 35–85. In addition, the children had fewer than 3 words in their expressive vocabularies and during classroom observation each showed at least one instance of intentional prelinguistic communication before testing. Selected sections of the Communication and Symbolic Behavior Scales procedures (CSBS; Wetherby & Prizant, 1993) were administered at the beginning and at the end of the study. The vocal measures were obtained in the initial CSBS session. One measure of expressive vocabulary was obtained in the CSBS session at the end of the study. In addition, expressive vocabulary was measured in a nonstructured play session at the end of the study. We predicted that rate of vocalization, rate of vocalizations with consonants, and rate of vocalizations used interactively would all be positively related to later expressive vocabulary. The results confirmed the predictions.
9

Czyżowski, Piotr, Sławomir Beeger, Mariusz Wójcik, Dorota Jarmoszczuk, Mirosław Karpiński, and Marian Flis. "Analysis of the Territorial Vocalization of the Pheasants Phasianus colchicus." Animals 12, no. 22 (November 19, 2022): 3209. http://dx.doi.org/10.3390/ani12223209.

Abstract:
The aim of the study was to assess the impact of the duration of the mating season and the time of day on the parameters of pheasant vocalization (duration of vocalization, frequency of the sound wave, intervals between vocalizations). In the study, pheasant vocalization recorded in the morning (6:00–8:00) and in the afternoon (16:00–18:00) between April and June 2020 was analyzed. In total, the research material consisted of 258 separate vocalizations. After recognition of the individual songs of each bird, frequency-time indicators were collected from the samples to perform statistical analysis of the recorded sounds. The duration of the first syllable [s], the duration of the second syllable [s], the duration of the pause between the syllables [s], the intervals between successive vocalizations [min], and the peak frequency of syllables I and II [Hz] were specified for each song. The duration of the syllables and the pauses between the syllables and vocalizations were determined through evaluation of spectrograms. The peak amplitude frequencies of the syllables were determined via time-frequency STFT analysis. Statistically significant differences in the distributions of the values of all variables between the analyzed months were demonstrated. The longest duration of total vocalization and the shortest time between vocalizations were recorded in May. This month is therefore characterized by the highest rate and longest duration of vocalization, which is related to the peak of the reproductive period. The time of day was found to exert a significant effect on all variables except the duration of syllable II. The duration of vocalization was significantly shorter in the morning, which indicates that the cocks are more active at this time of day in the study area. The highest peak amplitude frequencies of both syllables were recorded in April, but they decreased in the subsequent months of observation. The time of day was also shown to have an impact on the peak amplitude frequencies, which had the highest values in the morning.
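A rough sketch of how a syllable's peak amplitude frequency can be read off a time-frequency STFT analysis like the one described above; the file name and analysis parameters are assumptions, not the authors' settings:

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import stft

# Hypothetical recording containing one pheasant syllable.
fs, syllable = wavfile.read("syllable_I.wav")
if syllable.ndim > 1:                 # keep a single channel if stereo
    syllable = syllable[:, 0]
syllable = syllable.astype(float)

# Short-time Fourier transform of the syllable.
f, t, Z = stft(syllable, fs=fs, nperseg=1024)

# Peak amplitude frequency: the frequency bin holding the largest
# magnitude anywhere in the syllable's spectrogram.
mag = np.abs(Z)
peak_bin = np.unravel_index(np.argmax(mag), mag.shape)[0]
print(f"Peak frequency: {f[peak_bin]:.1f} Hz")
```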
10

Feng, Min, Mengyao Zhai, Juncai Xu, Ning Ding, Nana Qiu, Huan Shao, Peiying Jin, and Xiaoyan Ke. "Towards-Person Vocalization Effect on Screening for Autism Spectrum Disorders in the Context of Frustration." Brain Sciences 11, no. 12 (December 16, 2021): 1651. http://dx.doi.org/10.3390/brainsci11121651.

Abstract:
The purpose of this study was to investigate the vocalization characteristics of infants with autism spectrum disorder (ASD) in the context of frustration. Vocalization duration and frequency were measured in 48 infants with ASD and 65 infants with typical development (TD), who were followed up 24 months later for subsequent diagnosis. Typical vocalizations of infants with ASD, such as speech-like vocalizations, nonspeech vocalizations, vocalizations towards a person, and non-social vocalizations, were retrospectively analyzed. The results showed that, compared with the TD group, infants with ASD produced fewer typical vocalizations and fewer vocalizations with characteristics associated with social intention during the still-face period, and that these characteristics were closely related to the clinical symptoms of ASD; among these, vocalizations towards a person accompanied by social intention had discriminative efficacy.
11

Clemmons, Janine R. "Development of a Selective Response To an Adult Vocalization in Nestling Black-Capped Chickadees." Behaviour 132, no. 1-2 (1995): 1–20. http://dx.doi.org/10.1163/156853995x00252.

Abstract:
There are many studies on how songbirds develop song production, but few on how songbirds develop appropriate responses to conspecific vocalizations. The black-capped chickadee, Parus atricapillus, produces a vocalization, the 'squawk', that stimulates gaping in nestlings during feeding. To determine whether nestlings gape selectively at the squawk, playbacks of several conspecific vocalizations plus a heterospecific vocalization were presented to nestlings within natural nests. A preference for the squawk did not appear until day 2-3 and then steadily increased, until by day 11-13, nestlings gaped only at the squawk. To determine whether there are constraints on which vocalization can develop as the gaping stimulus, newly-hatched nestlings were reinforced with food for gaping either at the squawk or the faint feebee, the two most common adult vocalizations at the nesting site. Regardless of reinforcement, nestlings gaped most frequently at the squawk. In addition, after the first few days posthatch, nestlings became as responsive to a third, unreinforced, heterospecific vocalization as to the squawk. The responsiveness to the heterospecific vocalization coincided with the expanding range of auditory sensitivity that occurs at the same age during passerine development. Thus, while field observations show that nestlings gape mostly to the squawk relative to other parental vocalizations, experimental evidence indicates that there is not an exclusive link between the signal (squawk) and its response (gaping), especially during the first week posthatch when parents use the signal most frequently. Rather, an effectively selective response may be achieved redundantly by a variety of factors. Possible factors that are discussed include matching acoustic structure to nestling perceptual biases and the behavior
12

Raposo, Marcos A., and Elizabeth Höfling. "Overestimation of vocal characters in Suboscine taxonomy (Aves: Passeriformes: Tyranni): causes and implications." Lundiana: International Journal of Biodiversity 4, no. 1 (December 2, 2022): 35–42. http://dx.doi.org/10.35699/2675-5327.2003.21833.

Abstract:
The difference in treatment of vocal features in Oscines and Suboscines passerine birds characterizes a large portion of the current studies on their taxonomy. In the former taxon, vocalization is supposed to be molded by learning, and consequently is not regarded as taxonomically informative. In the latter, a strong emphasis is given to vocalization because it supposedly reflects the genetic structure of populations. This paper reviews the various assumptions related to this difference in treatment, including the overestimation of the vocal characters in suboscine alpha taxonomy due to the alleged importance of vocalization under the framework of the species mate recognition system. The innate origin of suboscine vocalizations remains to be rigorously demonstrated and the use of vocalization as “super-characters” is prejudicial to bird taxonomy. Despite the possibility of being learned, vocalization should also be used in the taxonomic studies of oscine passerines. Keywords: Vocalization, Oscines, Suboscines, Birds, Passeriformes
13

Calvert, Wendy, and Ian Stirling. "Winter Distribution of Ringed Seals (Phoca hispida) in the Barrow Strait Area, Northwest Territories, Determined by Underwater Vocalizations." Canadian Journal of Fisheries and Aquatic Sciences 42, no. 7 (July 1, 1985): 1238–43. http://dx.doi.org/10.1139/f85-153.

Abstract:
In order to assess underice distribution of ringed seals (Phoca hispida) in winter, we made recordings from 23 to 30 April 1982 at 32 sites chosen to represent different habitats in the High Arctic. By regressing the vocalization rate at each site against variables for habitat quality, we found that sites in smooth interisland channels had significantly more vocalizations than sites in bays, and sites with frequent human activity had vocalization rates similar to the overall average. Although differences in vocalization rates correlated with some measured and estimated habitat variables, there was too much overlap between sites for vocalization rate alone to be useful in separating suitable and unsuitable pupping habitat. Recordings made at one site over 4 d showed a diel cycle in which vocalization rate was highest from about 08:30 to 16:30 and lowest at night.
14

Job, Damon A., Daryl J. Boness, and John M. Francis. "Individual variation in nursing vocalizations of Hawaiian monk seal pups, Monachus schauinslandi (Phocidae, Pinnipedia), and lack of maternal recognition." Canadian Journal of Zoology 73, no. 5 (May 1, 1995): 975–83. http://dx.doi.org/10.1139/z95-114.

Abstract:
Vocalizations of individual Hawaiian monk seal pups, Monachus schauinslandi, do not have unique attributes that enable females to recognize their own offspring. Despite low aggregation density during pupping, aggressive encounters are common between females with pups. Fostering is prevalent and may reflect confusion over the identity of pups following aggressive encounters between females. All pup vocalizations were simple in structure and contained true harmonics. The coefficients of variation revealed considerable variance in vocalization structure within pups. Controlling for age, multivariate analyses of variance revealed significant differences among pups in vocalization attributes. Significant developmental changes occurred in vocalization structure for some pups but not for others. Discriminant function analysis suggested that it would be difficult for females to distinguish between the vocalizations of pups. The results of experiments conducted in the field showed that females did not discriminate between filial and alien pups by voice. In addition, females tended not to foster pups that had vocalizations similar to those of their own offspring. Thus, females seem to be unable to recognize their pups by voice. The apparent lack of vocal recognition of pups may contribute to the high frequency of fostering in this species.
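The discriminant function analysis mentioned above can be sketched with a linear discriminant classifier over call attributes; the data below are simulated purely for illustration and are not the study's measurements:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Simulated acoustic attributes: rows are individual calls, columns are
# call features (e.g., duration, fundamental frequency, harmonic spacing).
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 4))           # 60 calls, 4 attributes
pup_id = np.repeat(np.arange(6), 10)   # 6 pups, 10 calls each

# Cross-validated classification of pup identity from call attributes.
# Accuracy near chance would suggest, as in the study, that calls carry
# little individual signature.
lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, pup_id, cv=5)
print("Mean classification accuracy:", scores.mean())
```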
15

Perlman, Marcus, and Ashley A. Cain. "Iconicity in vocalization, comparisons with gesture, and implications for theories on the evolution of language." Gesture 14, no. 3 (December 31, 2014): 320–50. http://dx.doi.org/10.1075/gest.14.3.03per.

Abstract:
Scholars have often reasoned that vocalizations are extremely limited in their potential for iconic expression, especially in comparison to manual gestures (e.g., Armstrong & Wilcox, 2007; Tomasello, 2008). As evidence for an alternative view, we first review the growing body of research related to iconicity in vocalizations, including experimental work on sound symbolism, cross-linguistic studies documenting iconicity in the grammars and lexicons of languages, and experimental studies that examine iconicity in the production of speech and vocalizations. We then report an experiment in which participants created vocalizations to communicate 60 different meanings, including 30 antonymic pairs. The vocalizations were measured along several acoustic properties, and these properties were compared between antonyms. Participants were highly consistent in the kinds of sounds they produced for the majority of meanings, supporting the hypothesis that vocalization has considerable potential for iconicity. In light of these findings, we present a comparison between vocalization and manual gesture, and examine the detailed ways in which each modality can function in the iconic expression of particular kinds of meanings. We further discuss the role of iconic vocalizations and gesture in the evolution of language since our divergence from the great apes. In conclusion, we suggest that human communication is best understood as an ensemble of kinesis and vocalization, not just speech, in which expression in both modalities spans the range from arbitrary to iconic.
16

Van Parijs, Sofie M., and Kit M. Kovacs. "In-air and underwater vocalizations of eastern Canadian harbour seals, Phoca vitulina." Canadian Journal of Zoology 80, no. 7 (July 1, 2002): 1173–79. http://dx.doi.org/10.1139/z02-088.

Abstract:
Harbour seals, Phoca vitulina, have long been thought to be one of the least vocal pinniped species both in air and under water. However, recent studies have shown that males use underwater vocalizations intensively during the mating season. In air, harbour seals are still thought to be relatively silent. In this study we describe the vocal repertoire of Eastern Canadian harbour seals during the breeding season. Harbour seals from this area produced seven vocalization types in air and one vocalization type under water. In-air vocalizations are predominantly used by adult males during agonistic interactions. Other sex and age classes also vocalize, but less frequently. Nearest neighbour responses to in-air vocalizations were primarily agonistic when any age or sex class vocalized. In this study, seals produced an underwater roar vocalization closely resembling that produced by adult males during the mating season at other sites. Eastern Canadian harbour seals appear to be considerably more vocal when hauled out than is the norm for this species at other sites around the world.
17

West, R. A., and C. R. Larson. "Neurons of the anterior mesial cortex related to faciovocal activity in the awake monkey." Journal of Neurophysiology 74, no. 5 (November 1, 1995): 1856–69. http://dx.doi.org/10.1152/jn.1995.74.5.1856.

Abstract:
1. The anterior mesial cortex, including the cingulate region, is thought to be involved in the voluntary control of vocalization. Previous recording studies have demonstrated that anterior mesial neurons discharge before conditioned and spontaneous vocalizations, but questions remain regarding the location and functional properties of these neurons. The present study was performed to provide a more complete description of the location and discharge properties of anterior mesial neurons involved in faciovocal behaviors. 2. Single-unit activity was recorded from neurons in the anterior mesial cortex of monkeys during performance of self-paced vocalizations and jaw openings. Cells were also tested for responsiveness to acoustic stimulation, and attempts were made to elicit vocalization through stimulation of the cortex surrounding related cells. Discharge properties of the cells were statistically analyzed, and correlation analysis was performed between measure of cell discharge and vocalization. 3. A total of 145 neurons were observed to modulate their discharge in association with vocalization or jaw opening. Four general classes of neurons were observed: neurons related only to vocalization, neurons related only to jaw opening, neurons related to both vocalization and jaw opening, and neurons related to other oromotor activities such as lip movements or reinforcement consumption. 4. Vocalization-related cells typically discharged tonically at a low frequency (mean 22 Hz), and many instances of long-lead activity (lead time > 500 ms) were noted. No neurons responded to acoustic stimulation, and electrical stimulation failed to elicit vocalization. Neural activity was not correlated with any measure of vocalization. 5. Neurons related to faciovocal behavior were located in the anterior cingulate sulcus and adjacent cortex of the mesial wall at a level just rostral to the genu of the arcuate sulcus. This region roughly corresponds to the rostral cingulate motor area and is located caudal to the traditionally described cingulate vocalization region. 6. In the present study we demonstrate the existence of an additional region in the medial wall that is involved in a variety of faciovocal behaviors such as vocalization, jaw opening, lip movements, and reinforcement consumption. The neurons do not appear to be strongly coupled to the execution of these acts. These results suggest that the activity of neurons in the anterior mesial cortex may relate to faciovocal behavior in a more global way than the activity of neurons in other motor areas.
18

Dilley, Laura, and Derek Houston. "Accuracy of the Language Environment Analysis (LENA) speech processing system in identifying communicative vocalizations of young children and adults." Journal of the Acoustical Society of America 150, no. 4 (October 2021): A358. http://dx.doi.org/10.1121/10.0008584.

Abstract:
The Language Environment Analysis (LENA) system is an automated audio processing system widely used for characterizing language behaviors of children and adults for clinical and basic research. While a number of studies have assessed LENA’s reliability, its accuracy at identifying and counting speech communicative events is still not well-characterized under a range of naturalistic conditions. In two studies, we examined accuracy of LENA's speech vocalization classifications, relative to human gold standard coding for audio events, as well as word and speech vocalization counts for adults and child utterances, respectively. We found that the weighted average of accurate classification of 100-msec frames by LENA for child speech, adult female speech, and adult male speech was 57%, 61%, and 57%, respectively. Further, an analysis of LENA’s ability to accurately discriminate frames of speech vocalizations from a “key child”—a child wearing the LENA device—from speech vocalizations of other child and adult talkers and sound sources showed that LENA correctly detected key child speech vocalization frames only 41% of the time. We are currently extending this research to examine the accuracy of LENA’s child vocalization count (CVC) and conversational turn count (CTC) measures.
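The frame-level accuracy figures reported here can be illustrated by comparing LENA's automatic labels against human gold-standard codes for 100-msec frames and averaging across speaker classes. The labels and frames below are invented for illustration only:

```python
import numpy as np

# Hypothetical 100-msec frame labels (CHN = key child, FAN/MAN = female/male
# adult, SIL = silence): LENA output versus human gold-standard coding.
lena  = np.array(["CHN", "FAN", "SIL", "CHN", "MAN", "CHN", "FAN", "MAN"])
human = np.array(["CHN", "FAN", "FAN", "CHN", "MAN", "FAN", "CHN", "MAN"])

def class_accuracy(label):
    """Fraction of human-coded frames of one class that LENA labeled the same."""
    mask = human == label
    return np.mean(lena[mask] == label)

classes = ["CHN", "FAN", "MAN"]
accs   = np.array([class_accuracy(c) for c in classes])
counts = np.array([np.sum(human == c) for c in classes], dtype=float)

# Average accuracy across speaker classes, weighted by frame counts.
print("Per-class accuracy:", dict(zip(classes, accs.round(2))))
print("Weighted average accuracy:", np.average(accs, weights=counts))
```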
19

Wang, X., M. M. Merzenich, R. Beitel, and C. E. Schreiner. "Representation of a species-specific vocalization in the primary auditory cortex of the common marmoset: temporal and spectral characteristics." Journal of Neurophysiology 74, no. 6 (December 1, 1995): 2685–706. http://dx.doi.org/10.1152/jn.1995.74.6.2685.

Abstract:
1. The temporal and spectral characteristics of neural representations of a behaviorally important species-specific vocalization were studied in neuronal populations of the primary auditory cortex (A1) of barbiturate-anesthetized adult common marmosets (Callithrix jacchus), using both natural and synthetic vocalizations. The natural vocalizations used in electrophysiological experiments were recorded from the animals under study or from their conspecifics. These calls were frequently produced in vocal exchanges between members of our marmoset colony and are part of the well-defined and highly stereotyped vocal repertoire of this species. 2. The spectrotemporal discharge pattern of spatially distributed neuron populations in cortical field A1 was found to be correlated with the spectrotemporal acoustic pattern of a complex natural vocalization. However, the A1 discharge pattern was not a faithful replication of the acoustic parameters of a vocalization stimulus, but had been transformed into a more abstract representation than that in the auditory periphery. 3. Subpopulations of A1 neurons were found to respond selectively to natural vocalizations as compared with synthetic variations that had the same spectral but different temporal characteristics. A subpopulation responding selectively to a given monkey's call shared some but not all of its neuronal memberships with other individual-call-specific neuronal subpopulations. 4. In the time domain, responses of individual A1 units were phase-locked to the envelope of a portion of a complex vocalization, which was centered around a unit's characteristic frequency (CF). As a whole, discharges of A1 neuronal populations were phase-locked to discrete stimulus events but not to their rapidly changing spectral contents. The consequence was a reduction in temporal complexity and an increase in cross-population response synchronization. 5. In the frequency domain, major features of the stimulus spectrum were reflected in rate-CF profiles. The spectral features of a natural call were equally or more strongly represented by a subpopulation of A1 neurons that responded selectively to that call as compared with the entire responding A1 population. 6. Neuronal responses to a complex call were distributed very widely across cortical field A1. At the same time, the responses evoked by a vocalization scattered in discrete cortical patches were strongly synchronized to stimulus events and to each other. As a result, at any given time during the course of a vocalization, a coherent representation of the integrated spectrotemporal characteristics of a particular vocalization was present in a specific neuronal population. 7. These results suggest that the representation of behaviorally important and spectrotemporally complex species-specific vocalizations in A1 is 1) temporally integrated and 2) spectrally distributed in nature, and that the representation is carried by spatially dispersed and synchronized cortical cell assemblies that correspond to each individual's vocalizations in a specific and abstracted way.
20

DeVeney, Shari L., Anastasia Kyvelidou, and Paris Mather. "A home-based longitudinal study of vocalization behaviors across infants at low and elevated risk of autism." Autism & Developmental Language Impairments 6 (January 2021): 239694152110576. http://dx.doi.org/10.1177/23969415211057658.

Abstract:
Background and Aims: The purpose of this exploratory study was to expand existing literature on prelinguistic vocalizations by reporting results of the first home-based longitudinal study examining a wide variety of behaviors and characteristics, including early vocalizations, across infants at low and elevated risk of autism spectrum disorder (ASD). The study of vocalizations and vocalization changes across early developmental periods shows promise in reflecting early clinically significant differences across infants at low and elevated risk of ASD. Observations of early vocalizations and their differences during infancy could provide a reliable and essential component of an early developmental profile that would lower the average diagnostic age for ASD. However, studies employing observation of vocalization behaviors have been limited and often conducted in laboratory settings, reducing the external generalization of the findings. Methods: The present study was conducted to determine the consistency of previous findings with longitudinal data collected in home environments. Infants in the present study represented elevated risk from two etiological backgrounds, (a) infants born prematurely and with low birth weight and (b) infants who had an older sibling diagnosed with ASD. All data were collected in the infants' homes and compared with data collected from infants with low likelihood of ASD. The study included 44 participants (31 in the low-risk sample, 13 in the high-risk sample) with vocalization behaviors observed at 6 and 12 months through 20-min semi-structured play interactions with caregivers. Observations were video-recorded and later coded for speech and non-speech vocalizations. Results: Differences in the 6-month vocalization behaviors were not statistically significant across risk levels of ASD. By 12 months, however, risk group differences were evident in the total number of vocalizations overall, with specific differences across groups representing moderate to large, clinically relevant effects. Infants at low risk of ASD demonstrated significantly greater developmental change between 6 and 12 months than did the infants at high risk. Data were also reviewed for differences across high-risk group etiologies. Conclusions: The present study was unique and innovative in a number of ways as the first home-based longitudinal study examining infant vocal behaviors across low and high risk of ASD. Many of the present study findings were consistent with previous cross-sectional investigations of infants at elevated risk for ASD, indicating support for further home-based longitudinal study in this area. Findings also indicated some preliminary subgroup differences between high-risk etiologies of ASD. Vocalization differences across high-risk groups had not been previously addressed in the literature. Implications: Vocalization differences are notable by 12 months of age between infants at low and elevated risk of ASD, and infants at high risk demonstrated reduced developmental changes between 6 and 12 months compared to the infants at low risk. Observation of early infant vocalization behaviors may reasonably occur in the home, providing early childhood professionals and researchers with empirical support for data collection of child-caregiver interactions in this setting. Potential differences across high-risk etiologies warrant further investigation.
21

Seidl, Amanda, Alejandrina Cristia, Melanie Soderstrom, Eon-Suk Ko, Emily A. Abel, Ashleigh Kellerman, and A. J. Schwichtenberg. "Infant–Mother Acoustic–Prosodic Alignment and Developmental Risk." Journal of Speech, Language, and Hearing Research 61, no. 6 (June 19, 2018): 1369–80. http://dx.doi.org/10.1044/2018_jslhr-s-17-0287.

Abstract:
Purpose One promising early marker for autism and other communicative and language disorders is early infant speech production. Here we used daylong recordings of high- and low-risk infant–mother dyads to examine whether acoustic–prosodic alignment as well as two automated measures of infant vocalization are related to developmental risk status indexed via familial risk and developmental progress at 36 months of age. Method Automated analyses of the acoustics of daylong real-world interactions were used to examine whether pitch characteristics of one vocalization by the mother or the child predicted those of the vocalization response by the other speaker and whether other features of infants' speech in daylong recordings were associated with developmental risk status or outcomes. Results Low-risk and high-risk dyads did not differ in the level of acoustic–prosodic alignment, which was overall not significant. Further analyses revealed that acoustic–prosodic alignment did not predict infants' later developmental progress, which was, however, associated with two automated measures of infant vocalizations (daily vocalizations and conversational turns). Conclusions Although further research is needed, these findings suggest that automated measures of vocalizations drawn from daylong recordings are a possible early identification tool for later developmental progress/concerns. Supplemental Material https://osf.io/cdn3v/
22

Stirling, Ian, Wendy Calvert, and Cheryl Spencer. "Evidence of stereotyped underwater vocalizations of male Atlantic walruses (Odobenus rosmarus rosmarus)." Canadian Journal of Zoology 65, no. 9 (September 1, 1987): 2311–21. http://dx.doi.org/10.1139/z87-348.

Abstract:
Adult male Atlantic walruses (Odobenus rosmarus rosmarus) vocalize extensively underwater during the breeding season. The individual calls are composed of one or more short repetitious pulses which may vary individually in the number, pattern, and rate at which they are given. Individual male walruses give repeated stereotyped vocalization cycles totalling several hundred pulses each for up to several hours at a time, both while the whole body is submerged and between breaths with the head submerged while at the surface. We analyzed the vocalization cycles of a sample of different walruses, and sound spectrograms of particular calls from within those cycles, to test the hypothesis that the stereotyped vocalizations of individuals are unique and recognizable. In our sample, the pulse patterns of particular calls given by individual walruses in a series of vocalization cycles were nearly identical but were consistently different from the same call given by other animals. One call, the diving vocalization of a recognizable male, was identical in two different years.
23

Romanski, Lizabeth M., Bruno B. Averbeck, and Mark Diltz. "Neural Representation of Vocalizations in the Primate Ventrolateral Prefrontal Cortex." Journal of Neurophysiology 93, no. 2 (February 2005): 734–47. http://dx.doi.org/10.1152/jn.00675.2004.

Abstract:
In this study, we examined the role of the ventrolateral prefrontal cortex in encoding communication stimuli. Specifically, we recorded single-unit responses from the ventrolateral prefrontal cortex (vlPFC) in awake behaving rhesus macaques in response to species-specific vocalizations. We determined the selectivity of vlPFC cells for 10 types of rhesus vocalizations and also asked what types of vocalizations cluster together in the neuronal response. The data from the present study demonstrate that vlPFC auditory neurons respond to a variety of species-specific vocalizations from a previously characterized library. Most vlPFC neurons responded to two to five vocalizations, while a small percentage of cells responded either selectively to a particular vocalization type or nonselectively to most auditory stimuli tested. Use of information theoretic approaches to examine vocalization tuning indicates that, on average, vlPFC neurons encode information about one or two vocalizations. Further analysis of the types of vocalizations that vlPFC cells typically respond to, using hierarchical cluster analysis, suggests that the responses of vlPFC cells to multiple vocalizations are not based strictly on the call's function or meaning but may be due to other features including acoustic morphology. These data are consistent with a role for the primate vlPFC in assessing distinctive acoustic features.
24

Cao, Dandan, Hong Zhou, Wei Wei, Miaowen Lei, Shibin Yuan, Dunwu Qi, and Zejun Zhang. "Vocal repertoire of adult captive red pandas (Ailurus fulgens)." Animal Biology 66, no. 2 (2016): 145–55. http://dx.doi.org/10.1163/15707563-00002493.

Abstract:
Vocal signals are a common communication tool used to recognize different individuals, advertise fertile phases or discriminate amongst potential mates. Therefore, a thorough understanding of vocal repertoires forms the basis for investigating the role of acoustic signaling in the sexual and social behavior of any animal. Red pandas (Ailurus fulgens) are classified as a vulnerable species and have declined by as much as 40% over the past 50 years in China. Adult red pandas are known to call frequently during mating and aggressive encounters; however, no quantitative description of their vocalizations has been attempted. Here, the vocal repertoire of captive red pandas was investigated. Acoustical and statistical analyses indicated seven vocalization types during the breeding season: “growl”, “bark”, “squeal”, “bleat”, “hoot”, “grunt” and “twitter”; the spectrogram for each vocalization type was extracted. The type of vocalizations produced varied with behavioral state and implies different functional contexts. Future studies are needed to uncover the functions of red panda vocalizations in individual recognition, sexual selection and social interaction.
25

Scheerer, Nichole E., Anupreet K. Tumber, and Jeffery A. Jones. "Attentional demands modulate sensorimotor learning induced by persistent exposure to changes in auditory feedback." Journal of Neurophysiology 115, no. 2 (February 1, 2016): 826–32. http://dx.doi.org/10.1152/jn.00799.2015.

Abstract:
Hearing one's own voice is important for regulating ongoing speech and for mapping speech sounds onto articulator movements. However, it is currently unknown whether attention mediates changes in the relationship between motor commands and their acoustic output, which are necessary as growth and aging inevitably cause changes to the vocal tract. In this study, participants produced vocalizations while they heard their vocal pitch persistently shifted downward one semitone in both single- and dual-task conditions. During the single-task condition, participants vocalized while passively viewing a visual stream. During the dual-task condition, participants vocalized while also monitoring a visual stream for target letters, forcing participants to divide their attention. Participants' vocal pitch was measured across each vocalization, to index the extent to which their ongoing vocalization was modified as a result of the deviant auditory feedback. Smaller compensatory responses were recorded during the dual-task condition, suggesting that divided attention interfered with the use of auditory feedback for the regulation of ongoing vocalizations. Participants' vocal pitch was also measured at the beginning of each vocalization, before auditory feedback was available, to assess the extent to which the deviant auditory feedback was used to modify subsequent speech motor commands. Smaller changes in vocal pitch at vocalization onset were recorded during the dual-task condition, suggesting that divided attention diminished sensorimotor learning. Together, the results of this study suggest that attention is required for the speech motor control system to make optimal use of auditory feedback for the regulation and planning of speech motor commands.
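For orientation, shifting auditory feedback down by one semitone means dividing the fundamental frequency by 2^(1/12), i.e. a change of -100 cents; compensation is usually expressed in cents relative to baseline. A small sketch of that arithmetic, with hypothetical F0 values:

```python
import numpy as np

SEMITONE = 2 ** (1 / 12)        # frequency ratio of one equal-tempered semitone

def shift_down_one_semitone(f0_hz):
    """F0 heard by the participant when feedback is shifted down one semitone."""
    return f0_hz / SEMITONE

def cents(f_produced, f_baseline):
    """Pitch difference in cents between produced F0 and baseline F0."""
    return 1200 * np.log2(f_produced / f_baseline)

baseline_f0 = 220.0             # hypothetical baseline voice F0 in Hz
heard = shift_down_one_semitone(baseline_f0)
print(f"Feedback heard at {heard:.1f} Hz (-100 cents)")

# A compensatory response raises produced F0 above baseline; a produced F0
# of 223 Hz, for example, corresponds to roughly +23 cents of compensation.
print(f"Compensation: {cents(223.0, baseline_f0):.1f} cents")
```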
26

Ramsdell-Hudock, Heather L., Anne S. Warlaumont, Lindsey E. Foss, and Candice Perry. "Classification of Infant Vocalizations by Untrained Listeners." Journal of Speech, Language, and Hearing Research 62, no. 9 (September 20, 2019): 3265–75. http://dx.doi.org/10.1044/2019_jslhr-s-18-0494.

Abstract:
Purpose To better enable communication among researchers, clinicians, and caregivers, we aimed to assess how untrained listeners classify early infant vocalization types in comparison to terms currently used by researchers and clinicians. Method Listeners were caregivers with no prior formal education in speech and language development. A 1st group of listeners reported on clinician/researcher-classified vowel, squeal, growl, raspberry, whisper, laugh, and cry vocalizations obtained from archived video/audio recordings of 10 infants from 4 through 12 months of age. A list of commonly used terms was generated based on listener responses and the standard research terminology. A 2nd group of listeners was presented with the same vocalizations and asked to select terms from the list that they thought best described the sounds. Results Classifications of the vocalizations by listeners largely overlapped with published categorical descriptors and yielded additional insight into alternate terms commonly used. The biggest discrepancies were found for the vowel category. Conclusion Prior research has shown that caregivers are accurate in identifying canonical babbling, a major prelinguistic vocalization milestone occurring at about 6–7 months of age. This indicates that caregivers are also well attuned to even earlier emerging vocalization types. This supports the value of continuing basic and clinical research on the vocal types infants produce in the 1st months of life and on their potential diagnostic utility, and may also help improve communication between speech-language pathologists and families.
27

Trösch, Cuzol, Parias, Calandreau, Nowak, and Lansade. "Horses Categorize Human Emotions Cross-Modally Based on Facial Expression and Non-Verbal Vocalizations." Animals 9, no. 11 (October 24, 2019): 862. http://dx.doi.org/10.3390/ani9110862.

Abstract:
Over the last few years, an increasing number of studies have aimed to gain more insight into the field of animal emotions. In particular, it is of interest to determine whether animals can cross-modally categorize the emotions of others. For domestic animals that share a close relationship with humans, we might wonder whether this cross-modal recognition of emotions extends to humans, as well. In this study, we tested whether horses could recognize human emotions and attribute the emotional valence of visual (facial expression) and vocal (non-verbal vocalization) stimuli to the same perceptual category. Two animated pictures of different facial expressions (anger and joy) were simultaneously presented to the horses, while a speaker played an emotional human non-verbal vocalization matching one of the two facial expressions. Horses looked at the picture that was incongruent with the vocalization more, probably because they were intrigued by the paradoxical combination. Moreover, horses reacted in accordance with the valence of the vocalization, both behaviorally and physiologically (heart rate). These results show that horses can cross-modally recognize human emotions and react emotionally to the emotional states of humans, assessed by non-verbal vocalizations.
28

Burchardt, Lara S., Philipp Norton, Oliver Behr, Constance Scharff, and Mirjam Knörnschild. "General isochronous rhythm in echolocation calls and social vocalizations of the bat Saccopteryx bilineata." Royal Society Open Science 6, no. 1 (January 2019): 181076. http://dx.doi.org/10.1098/rsos.181076.

Abstract:
Rhythm is an essential component of human speech and music but very little is known about its evolutionary origin and its distribution in animal vocalizations. We found a regular rhythm in three multisyllabic vocalization types (echolocation call sequences, male territorial songs and pup isolation calls) of the neotropical bat Saccopteryx bilineata . The intervals between element onsets were used to fit the rhythm for each individual. For echolocation call sequences, we expected rhythm frequencies around 6–24 Hz, corresponding to the wingbeat in S. bilineata which is strongly coupled to echolocation calls during flight. Surprisingly, we found rhythm frequencies between 6 and 24 Hz not only for echolocation sequences but also for social vocalizations, e.g. male territorial songs and pup isolation calls, which were emitted while bats were stationary. Fourier analysis of element onsets confirmed an isochronous rhythm across individuals and vocalization types. We speculate that attentional tuning to the rhythms of echolocation calls on the receivers' side might make the production of equally steady rhythmic social vocalizations beneficial.
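A sketch of the rhythm-fitting idea, estimating a rate from element onset intervals and cross-checking it with a Fourier analysis of the onsets; the onset times are simulated around 12 Hz purely for illustration and are not the study's data:

```python
import numpy as np

# Simulated element onset times (s): an isochronous 12 Hz sequence with jitter.
rng = np.random.default_rng(1)
onsets = 0.05 + np.arange(24) / 12 + rng.normal(0, 0.002, 24)

# Fit the rhythm from inter-onset intervals: the reciprocal of the mean
# interval gives the rhythm frequency in Hz.
ioi = np.diff(onsets)
print("Rhythm from inter-onset intervals:", round(1 / ioi.mean(), 2), "Hz")

# Cross-check with a Fourier analysis of the onsets, represented as an
# impulse train sampled at 1 kHz; the dominant peak in a plausible rhythm
# band estimates the rhythm frequency.
fs = 1000
train = np.zeros(int(onsets.max() * fs) + 1)
train[(onsets * fs).astype(int)] = 1.0
spectrum = np.abs(np.fft.rfft(train - train.mean()))
freqs = np.fft.rfftfreq(train.size, d=1 / fs)
band = (freqs >= 1) & (freqs <= 20)
print("Rhythm from Fourier peak:", round(freqs[band][np.argmax(spectrum[band])], 2), "Hz")
```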
29

Kittelberger, J. Matthew, Bruce R. Land, and Andrew H. Bass. "Midbrain Periaqueductal Gray and Vocal Patterning in a Teleost Fish." Journal of Neurophysiology 96, no. 1 (July 2006): 71–85. http://dx.doi.org/10.1152/jn.00067.2006.

Abstract:
Midbrain structures, including the periaqueductal gray (PAG), are essential nodes in vertebrate motor circuits controlling a broad range of behaviors, from locomotion to complex social behaviors such as vocalization. Few single-unit recording studies, so far all in mammals, have investigated the PAG's role in the temporal patterning of these behaviors. Midshipman fish use vocalization to signal social intent in territorial and courtship interactions. Evidence has implicated a region of their midbrain, located in a similar position as the mammalian PAG, in call production. Here, extracellular single-unit recordings of PAG neuronal activity were made during forebrain-evoked fictive vocalizations that mimic natural call types and reflect the rhythmic output of a known hindbrain–spinal pattern generator. The activity patterns of vocally active PAG neurons were mostly correlated with features related to fictive call initiation. However, spike trains in a subset of neurons predicted the duration of vocal output. Duration is the primary feature distinguishing call types used in different social contexts and these cells may play a role in directly establishing this temporal dimension of vocalization. Reversible, lidocaine inactivation experiments demonstrated the necessity of the midshipman PAG for fictive vocalization, whereas tract-tracing studies revealed the PAG's connectivity to vocal motor centers in the fore- and hindbrain comparable to that in mammals. Together, these data support the hypotheses that the midbrain PAG of teleosts plays an essential role in vocalization and is convergent in both its functional and structural organization to the PAG of mammals.
30

Peters, Gustav, and Barbara A. Tonkin-Leyhausen. "The Tempo and Mode of Evolution of Acoustic Communication Signals of Felids." Evolution of Communication 2, no. 2 (December 31, 1998): 233–48. http://dx.doi.org/10.1075/eoc.2.2.05pet.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Based on the molecular clock model of evolution, molecular phylogenies represent reconstructions of the evolutionary process with a time scale. From these, inferences can be drawn about the evolution of other characters, including behaviour patterns. Mapping particular vocalization types in the Felidae (cats) on a published molecular phylogeny of this mammal family reveals that the distribution of these behavioural characters is fully congruent with it. Thence a time frame for the evolution of these vocalizations can be inferred, indicating large differences in their evolutionary age. Phylogenetic stasis for several million years in particular vocalization types refutes the hypothesis that behavioural characters are generally more susceptible to evolutionary change than morphological ones.
31

LEWIS, N. J., and J. F. HURNIK. "AN APPROACH RESPONSE OF PIGLETS TO THE SOW'S NURSING VOCALIZATIONS." Canadian Journal of Animal Science 66, no. 2 (June 1, 1986): 537–39. http://dx.doi.org/10.4141/cjas86-056.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The approach response of piglets to recorded sow nursing vocalizations was tested in a T-maze. Sixty-nine percent of piglets 1–14 d of age approached the vocalizations. Neither age nor experience nor hunger affected the strength of this response. An approach response to sow vocalizations may be important for the contiguity of the litter, and for the elicitation of the approach response of piglets in nursings initiated by the sow. Key words: Behavior, approach for nursing, piglets, vocalization
32

Zhang, S. P., R. Bandler, and P. J. Davis. "Brain stem integration of vocalization: role of the nucleus retroambigualis." Journal of Neurophysiology 74, no. 6 (December 1, 1995): 2500–2512. http://dx.doi.org/10.1152/jn.1995.74.6.2500.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
1. The descending pathways that mediate the periaqueductal gray (PAG)-evoked coordination of respiratory, laryngeal, and orofacial activity for vocalization have yet to be delineated. Two hypotheses have been offered. One theory is that this activity is mediated by a diffuse descending projection to parvocellular reticular interneurons, adjacent to the relevant laryngeal and orofacial motoneuronal pools. The second hypothesis is that the motor activity for vocalization is integrated via a projection from the PAG to a caudal medullary column of neurons, the nucleus retroambigualis (NRA). These hypotheses were tested with the use of a series of medullary transections combined with PAG stimulation. Transections that eliminated, in a series of caudal-to-rostral steps, the NRA, also eliminated the PAG-evoked cricothyroid and most of the thyroarytenoid laryngeal motor activity. These results indicate that the final common pathway for much of the laryngeal activity in PAG-evoked vocalization includes an initial synapse in the caudal medulla, presumably in the NRA. 2. The electromyographic changes evoked by microinjection of D,L-homocysteic acid (DLH) in the NRA of the unanesthetized, precollicular decerebrate cat were analyzed in order to delineate the NRA contribution to the coordinated respiratory, laryngeal, and oral muscle changes in vocalization. A total of 415 DLH injection sites were located at or caudal to the level of the obex. Vocalization was evoked at 46 of these sites, which were all confined to a restricted region of the ventrolateral medulla 1-3 mm caudal to the obex. This region corresponded to the rostral half of the NRA and the immediately adjacent medullary tegmentum. 3. In all experiments evidence was obtained that variable muscle activation, rather than functional and integrated muscle patterns, was represented within the NRA. Vocalization evoked by DLH microinjection in the NRA was usually associated with excitation of the cricothyroid, thyroarytenoid, external oblique, internal oblique, internal intercostal, and diaphragm muscles that occurred in a different manner from site to site. That is, injection at sites separated by 0.3-0.5 mm evoked quite different responses. 4. NRA-evoked vocalization was compared with PAG-evoked vocalization using small injections (1.5-4.5 nl) into each region. As well, larger microinjections (15-120 nl) into NRA were made for comparison with previous results from the PAG using similar doses. Within the PAG, stereotyped and relatively "fixed" patterns of muscle activity are represented, whereas within the NRA there was no representation of specific muscle patterns, but rather a partial topographic separation of "premotor neurons" regulating different muscles. Correspondingly, stereotyped vocalizations were never evoked from the NRA. Further, most NRA-evoked vocalizations were unusual in quality and would not be identified generally as feline. 5. Evidence was obtained for a separation of pathways from the PAG regulating sound production and orofacial modulation of that sound. In contrast to the results from the PAG, excitation of NRA neurons rarely evoked activity in the oral muscles (genioglossus or anterior belly of digastric) or orofacial modulation of sound production. 6. Our findings suggest that the NRA serves as an important substrate for the generation of respiratory pressure and laryngeal adduction, two essential aspects not only of vocalization but also of several behaviors involving Valsalva maneuvers, such as coughing, vomiting, and defecation.
33

Oller, D. Kimbrough, Rebecca E. Eilers, Dale H. Bull, and Arlene Earley Carney. "Prespeech Vocalizations of a Deaf Infant." Journal of Speech, Language, and Hearing Research 28, no. 1 (March 1985): 47–63. http://dx.doi.org/10.1044/jshr.2801.47.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
A comparative study of the speech-like vocalizations of a deaf infant and a group of 11 hearing infants was conducted in order to examine the role of auditory experience in the development of the phonological and metaphonological capacity. Results indicated that from 8 to 13 months of age, the deaf subject differed strikingly from hearing infants of comparable age. She produced no repetitive canonical babbling, whereas all the hearing infants produced many canonical syllables. The topography of the deaf infant's vocalizations resembled that of 4–6-month-old (i.e., Expansion stage) hearing infants. Detailed comparisons of the proportion of production of various metaphonologically defined categories by the deaf infant and Expansion stage hearing infants demonstrated many similarities in vocalization, although possible differences were noted. It is concluded that hearing impairment notably affects vocalization development by the end of the first year of life, if not earlier. Spectrographic displays illustrate the categories of infant sounds produced by the deaf and hearing infants.
34

Boë, Louis-Jean, Thomas R. Sawallis, Joël Fagot, Pierre Badin, Guillaume Barbier, Guillaume Captier, Lucie Ménard, Jean-Louis Heim, and Jean-Luc Schwartz. "Which way to the dawn of speech?: Reanalyzing half a century of debates and data in light of speech science." Science Advances 5, no. 12 (December 2019): eaaw3916. http://dx.doi.org/10.1126/sciadv.aaw3916.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Recent articles on primate articulatory abilities are revolutionary regarding speech emergence, a crucial aspect of language evolution, by revealing a human-like system of proto-vowels in nonhuman primates and implicitly throughout our hominid ancestry. This article presents both a schematic history and the state of the art in primate vocalization research and its importance for speech emergence. Recent speech research advances allow more incisive comparison of phylogeny and ontogeny and also an illuminating reinterpretation of vintage primate vocalization data. This review produces three major findings. First, even among primates, laryngeal descent is not uniquely human. Second, laryngeal descent is not required to produce contrasting formant patterns in vocalizations. Third, living nonhuman primates produce vocalizations with contrasting formant patterns. Thus, evidence now overwhelmingly refutes the long-standing laryngeal descent theory, which pushes back “the dawn of speech” beyond ~200 ka ago to over ~20 Ma ago, a difference of two orders of magnitude.
35

Snowdon, Charles T., and David Teie. "Affective responses in tamarins elicited by species-specific music." Biology Letters 6, no. 1 (September 2, 2009): 30–32. http://dx.doi.org/10.1098/rsbl.2009.0593.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Theories of music evolution agree that human music has an affective influence on listeners. Tests of non-humans provided little evidence of preferences for human music. However, prosodic features of speech (‘motherese’) influence affective behaviour of non-verbal infants as well as domestic animals, suggesting that features of music can influence the behaviour of non-human species. We incorporated acoustical characteristics of tamarin affiliation vocalizations and tamarin threat vocalizations into corresponding pieces of music. We compared music composed for tamarins with that composed for humans. Tamarins were generally indifferent to playbacks of human music, but responded with increased arousal to tamarin threat vocalization based music, and with decreased activity and increased calm behaviour to tamarin affective vocalization based music. Affective components in human music may have evolutionary origins in the structure of calls of non-human animals. In addition, animal signals may have evolved to manage the behaviour of listeners by influencing their affective state.
36

Butt, Carrie A. "Vocalization." JAMA 303, no. 6 (February 10, 2010): 486. http://dx.doi.org/10.1001/jama.2010.64.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Zina, Juliana, and Célio F. B. Haddad. "Reproductive activity and vocalizations of Leptodactylus labyrinthicus (Anura: Leptodactylidae) in southeastern Brazil." Biota Neotropica 5, no. 2 (2005): 119–29. http://dx.doi.org/10.1590/s1676-06032005000300008.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Vocalizations and reproductive activity of two Leptodactylus labyrinthicus populations were studied from Jun/2001 to Feb/2003 in the State of São Paulo, Brazil. Observations began at dusk and ended around 2300 h. Occasionally individuals were monitored throughout the night. Data on reproductive period, calling sites, adult snout-vent length (SVL), oviposition sites, and oviposition period were collected. Leptodactylus labyrinthicus had an extended breeding period associated mainly with rainfall. Males called from the edge of temporary or permanent ponds, began vocalization activity at dusk, and finished around 2300 or 2400 h. During the peak of the vocalization period (Dec–Jan), calling activity could extend up to 0400 or 0500 h. Three types of vocalizations associated with reproduction were recorded: advertisement call, territorial call, and courtship call. The advertisement call was the most common vocalization. Males and females showed no sexual dimorphism in SVL. However, the males of one population were significantly larger than those of the other population studied. This fact could be explained by frog-hunting in one of the areas, which could wipe out the larger males of the population. Foam nests were recorded mainly in Oct-Nov 2001/2002 in depressions at the edge of temporary ponds, always protected by vegetation. A mean of 6.5% of the eggs present in the foam were fertilized and the other 93.5% are possibly used as a food source by the tadpoles. Mean diameter of the foam nest was 25.4 cm and mean height was 11.4 cm.
38

Dermi, Devi Fauzia, Agung Sedayu, and Ratna Komala. "VARIASI POLA VOKALISASI PADA TAKSONOMI ANAK JENIS ELANG-ULAR (Spilornis cheela) DI PKEK, GARUT, JAWA BARAT." BIOMA 13, no. 2 (February 27, 2018): 1–9. http://dx.doi.org/10.21009/bioma13(2).1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The crested serpent eagle (Spilornis cheela) is a bird of prey with a distinctive, unique vocalization. Several studies have reported differences in vocalization at the subspecies level, and vocalization is often used to study its role in defining subspecies taxonomically. This research aimed to determine the role of variation in vocalization patterns in the taxonomy of serpent eagle subspecies. The research was conducted from May to September 2017 at the Kamojang Eagle Conservation Center. The method was descriptive, with a continuous sampling technique. The samples were adult eagles from three serpent eagle subspecies. Observation locations were determined by purposive sampling, with listening posts at distances of roughly 5 to 30 meters. Data were collected from 7 a.m. to 5 p.m. and analyzed using sound analysis software. The parameters measured were fundamental frequency, maximum frequency (MaxF), minimum frequency (MinF), and duration. Differences between subspecies were analyzed using Kruskal-Wallis and Mann-Whitney U tests in SPSS 17.0. The results show that vocalizations can complement morphological data in the taxonomy of these subspecies: every vocalization parameter differed significantly between Spilornis cheela malayensis and Spilornis cheela natunensis, and MaxF differed significantly between S. c. malayensis and S. c. bido and between S. c. natunensis and S. c. bido.
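A minimal sketch of that kind of nonparametric comparison in Python is shown below; the MaxF values are invented placeholders, not data from the cited study.

```python
from scipy.stats import kruskal, mannwhitneyu

# Hypothetical maximum-frequency (MaxF, in kHz) measurements per subspecies.
maxf = {
    "malayensis": [2.1, 2.3, 2.2, 2.4, 2.2],
    "natunensis": [1.7, 1.8, 1.6, 1.9, 1.8],
    "bido":       [2.0, 1.9, 2.1, 2.0, 2.2],
}

# Omnibus test across the three subspecies.
h, p = kruskal(*maxf.values())
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.3f}")

# Pairwise Mann-Whitney U follow-ups (no multiple-comparison correction shown).
names = list(maxf)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        u, p = mannwhitneyu(maxf[names[i]], maxf[names[j]])
        print(f"{names[i]} vs {names[j]}: U = {u:.1f}, p = {p:.3f}")
```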
39

Lynch, Kathleen S., Matthew I. M. Louder, and Mark E. Hauber. "Species-Specific Auditory Forebrain Responses to Non-Learned Vocalizations in Juvenile Blackbirds." Brain, Behavior and Evolution 91, no. 4 (2018): 193–200. http://dx.doi.org/10.1159/000489115.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Species recognition mediates the association of individuals with conspecifics. Learned cues often facilitate species recognition via early social experience with parents and siblings. Yet, in some songbirds, the production of species-typical vocalizations develops in the absence of early social experiences. Here, we investigate the auditory-evoked neural responses of juvenile red-winged blackbirds (Agelaius phoeniceus), a nonparasitic (parental) species within the Icterid family and contrast these results with a closely related Icterid parasitic species that lacks parental care, the brown-headed cowbird (Molothrus ater). We demonstrate that immediate early gene (IEG) activity in the caudomedial mesopallium (CMM) is selectively evoked in response to conspecific non-learned vocalizations in comparison to 2 types of heterospecific non-learned vocalizations, independent of the acoustic similarity patterns between the playback stimuli. This pattern, however, was not detected in the caudomedial nidopallium (NCM). Because the red-winged blackbird is a parental species, the conspecific non-learned vocalization is presumably a familiar sound to the juvenile red-winged blackbird, whereas the heterospecific non-learned vocalizations are novel. We contrast results reported here with our recent demonstration of selective IEG induction in response to non-learned conspecific vocalizations in juvenile parasitic brown-headed cowbirds, in which conspecific non-learned vocalizations are presumably novel. In this case, selective IEG induction from conspecific non-learned vocalization occurred within NCM but not within CMM. By comparing closely related species with stark differences in the early exposure to conspecifics, we demonstrate that CMM and NCM respond to familiar vs. novel non-learned vocalizations in a manner that parallel previously reported regional responses to learned vocalizations such as conspecific songs.
40

Gourévitch, Boris, and Jos J. Eggermont. "Spatial Representation of Neural Responses to Natural and Altered Conspecific Vocalizations in Cat Auditory Cortex." Journal of Neurophysiology 97, no. 1 (January 2007): 144–58. http://dx.doi.org/10.1152/jn.00807.2006.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This study shows the neural representation of cat vocalizations, natural and altered with respect to carrier and envelope, as well as time-reversed, in four different areas of the auditory cortex. Multiunit activity recorded in primary auditory cortex (AI) of anesthetized cats mainly occurred at onsets (<200-ms latency) and at subsequent major peaks of the vocalization envelope and was significantly inhibited during the stationary course of the stimuli. The first 200 ms of processing appears crucial for discrimination of a vocalization in AI. The dorsal and ventral parts of AI appear to have different roles in coding vocalizations. The dorsal part potentially discriminated carrier-altered meows, whereas the ventral part showed differences primarily in its response to natural and time-reversed meows. In the posterior auditory field, the different temporal response types of neurons, as determined by their poststimulus time histograms, showed discrimination for carrier alterations in the meow. Sustained firing neurons in the posterior ectosylvian gyrus (EP) could discriminate, among others, by neural synchrony, temporal envelope alterations of the meow, and time reversion thereof. These findings suggest an important role of EP in the detection of information conveyed by the alterations of vocalizations. Discrimination of the neural responses to different alterations of vocalizations could be based on either firing rate, type of temporal response, or neural synchrony, suggesting that all these are likely simultaneously used in processing of natural and altered conspecific vocalizations.
41

PEREIRA, ERICA M., IRENILZA DE A. NÄÄS, and RODRIGO G. GARCIA. "Vocalization of broilers can be used to identify their sex and genetic strain." Engenharia Agrícola 35, no. 2 (April 2015): 192–96. http://dx.doi.org/10.1590/1809-4430-eng.agric.v35n2p192-196/2015.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In order to reach higher broiler performance, farmers aim to reduce losses. One way to do this is by rearing sexed broilers, as males and females perform differently due to their physiological differences. Birds from different genetic strains also perform differently at the same age. Considering that sexed flocks may present higher performance, this study aimed to identify the sex of one-day-old chicks from their vocalizations. This research also investigated the possibility of identifying the genetic strain from vocalization attributes. A total of 120 chicks were used, half from the Cobb® genetic strain and half from the Ross® genetic strain. In each group, 30 were male and 30 female, previously separated by sex at the hatchery using their secondary physiological characteristics. Vocalizations were recorded for two minutes inside a semi-anechoic chamber using a unidirectional microphone connected to the audio input of a digital recorder. Acoustic characteristics of the sounds were analyzed by calculating the fundamental frequency (pitch), the sound intensity, the first formant, and the second formant. Results indicated that the sex of the chicks could be identified by the second formant, and the genetic strain was detected by both the second formant and the pitch.
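The sketch below illustrates how pitch and formant measures of this kind might be extracted in Python, assuming the praat-parselmouth package and a hypothetical recording chick_call.wav; the settings are illustrative, not those of the cited study.

```python
import parselmouth  # praat-parselmouth package

snd = parselmouth.Sound("chick_call.wav")  # hypothetical recording

# Fundamental frequency (pitch): mean over voiced frames.
pitch = snd.to_pitch()
f0 = pitch.selected_array["frequency"]
mean_f0 = f0[f0 > 0].mean()

# Sound intensity: mean level in dB.
mean_db = snd.to_intensity().values.mean()

# First and second formants at the temporal midpoint (Burg method).
formants = snd.to_formant_burg()
t_mid = snd.duration / 2
f1 = formants.get_value_at_time(1, t_mid)
f2 = formants.get_value_at_time(2, t_mid)

print(f"F0 = {mean_f0:.0f} Hz, intensity = {mean_db:.1f} dB, F1 = {f1:.0f} Hz, F2 = {f2:.0f} Hz")
```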
42

D'Odorico, Laura, and Fabia Franco. "Selective production of vocalization types in different communication contexts." Journal of Child Language 18, no. 3 (October 1991): 475–99. http://dx.doi.org/10.1017/s0305000900011211.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This study investigates the production of vocalization in adult–infant–toy interactions from 0;4 to 0;11. The hypothesis is that vocalizations are selectively uttered in relationship to their production context. Five infants (two girls, three boys) were intensively studied. Non-segmental acoustic features of vocalizations in four communicative contexts were analysed in relation to the individual infants, in order to reveal individual differences. The data were submitted to discriminant function analysis. Results show that (a) different patterns of non-segmental features characterize sounds produced in different contexts; (b) both inter-subject differences and intra-subject consistency are observed; (c) ‘selective production’ disappears after 0;9. These results are discussed in relationship to sound–meaning development.
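A minimal sketch of a discriminant function analysis of this kind in Python, using scikit-learn on invented placeholder features (not the study's data):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)

# Hypothetical non-segmental acoustic features (e.g., F0 mean, F0 range, duration)
# for 80 vocalizations produced in four communicative contexts (coded 0-3).
features = rng.random((80, 3))
contexts = rng.integers(0, 4, size=80)

# Fit the discriminant functions and check how well contexts are separated.
lda = LinearDiscriminantAnalysis()
lda.fit(features, contexts)
print(f"Classification accuracy on the training data: {lda.score(features, contexts):.2f}")
```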
43

Bordenave, Diane, and Lorraine McCune. "Grunt Vocalizations in Children With Disabilities: Relationships With Assessed Cognition and Language." Journal of Speech, Language, and Hearing Research 64, no. 11 (November 8, 2021): 4138–48. http://dx.doi.org/10.1044/2021_jslhr-21-00202.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Purpose The purpose of this study was to investigate the relationship of grunt vocalizations to cognitive and expressive language status in children with disabilities. Children with typical development produce communicative grunts at the onset of referential word production and comprehension at 14–16 months of age and continue to use this vocalization for communication as they develop language. Method All grunt vocalizations produced by 26 children with disabilities (mental age: 3–56 months; communicative age: 47–69 months) were identified from video-recorded seminaturalistic play sessions. Grunts were identified as accompanying effort or attention or as communicative bids. Participants were grouped as prelinguistic, emergent, language delay, and language competent based on standardized assessments of cognitive and language level. The Mann–Whitney U test (1947) compared groups to determine the relationships between grunt production and cognitive and language status. Results As hypothesized, participants in the language delay group produced significantly more communicative grunts than those in the language competent group (W = 39, p = .028). The children with a cognitive and language level lower than 9 months (prelinguistic group) failed to produce communicative grunts. Conclusions The results document grunt production in children with disabilities in the same contexts as typical children and support the hypothesized relationship between assessed cognition and language and communicative grunt production. These results require replication. This vocalization, if recognized in treatment, may unlock verbal communication in many nonverbal children with disabilities. Future longitudinal research should include controlled intervention to determine the potential effectiveness of building broader communicative skills on this simple vocalization.
44

Furuyama, Takafumi, Takafumi Shigeyama, Munenori Ono, Sachiko Yamaki, Kohta I. Kobayasi, Nobuo Kato, and Ryo Yamamoto. "Vocalization during agonistic encounter in Mongolian gerbils: Impact of sexual experience." PLOS ONE 17, no. 8 (August 2, 2022): e0272402. http://dx.doi.org/10.1371/journal.pone.0272402.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Behaviors and vocalizations associated with aggression are essential for animals to survive, reproduce, and organize social hierarchies. Mongolian gerbils (Meriones unguiculatus) are highly aggressive and frequently emit calls. We took advantage of these features to study the relationship between vocalizations and aggressive behaviors in virgin and sexually experienced male and female Mongolian gerbils through the same-sex resident-intruder test. Both sexes of resident gerbils exhibited aggressive responses toward intruders. Multiparous females exhibited the most aggressive responses among the four groups. We also confirmed two groups of vocalizations during the encounters: high-frequency (>24.6 kHz) and low-frequency (<24.6 kHz). At the timing of high-frequency vocalizations observed during the tests, the vast majority (96.2%) of the behavioral interactions were non-agonistic. By contrast, at the timing of low-frequency vocalizations, around half (45%) of the behavioral interactions were agonistic. Low-frequency vocalizations were observed mainly during encounters in which multiparous females were involved. These results suggest that high- and low-frequency vocalizations relate to non-agonistic and agonistic interactions, respectively. In addition to affecting aggressive behavior, sexual experience also affects vocalization during encounters. These findings provide new insights into the modulatory effects of sex and sexual experience on vocalizations during agonistic encounters.
45

Cleator, Holly J., and Ian Stirling. "Winter Distribution of Bearded Seals (Erignathus barbatus) in the Penny Strait Area, Northwest Territories, as Determined by Underwater Vocalizations." Canadian Journal of Fisheries and Aquatic Sciences 47, no. 6 (June 1, 1990): 1071–76. http://dx.doi.org/10.1139/f90-123.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Vocalization surveys conducted in Penny Strait, Northwest Territories, indicated that before ice break-up, bearded seals (Erignathus barbatus) preferred regions of less stable ice where break-up occurred early and avoided stable, landfast ice or areas heavily used by walruses (Odobenus rosmarus). Water depth did not appear to influence distribution. Numbers of calls increased between mid-April and early June, probably because of an increase in rate of calling by individual seals. Vocalization surveys can be used to separate preferred habitats from unsuitable ones. Using a single hydrophone and our current understanding of bearded seal vocal behaviour, it is not possible to determine the absolute number of bearded seals at or near a site using vocalizations. However, it is possible to measure the relative abundance of seals for spatial and temporal comparisons.
46

McDaniel, Jena, Paul Yoder, Annette Estes, and Sally J. Rogers. "Predicting Expressive Language From Early Vocalizations in Young Children With Autism Spectrum Disorder: Which Vocal Measure Is Best?" Journal of Speech, Language, and Hearing Research 63, no. 5 (May 22, 2020): 1509–20. http://dx.doi.org/10.1044/2020_jslhr-19-00281.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Purpose This study was designed to test the incremental validity of more expensive vocal development variables relative to less expensive variables for predicting later expressive language in children with autism spectrum disorder (ASD). We devote particular attention to the added value of coding the quality of vocalizations over the quantity of vocalizations because coding quality adds expense to the coding process. We are also interested in the added value of more costly human-coded vocal variables relative to those generated through automated analyses. Method Eighty-seven children with ASD aged 13–30 months at study initiation participated. For quantity of vocalizations, we derived one variable from human coding of brief communication samples and one from an automated process for daylong naturalistic audio samples. For quality of vocalizations, we derived four human-coded variables and one automated variable. A composite expressive language measure was derived at study entry, and 6 and 12 months later. The 12 months–centered intercept of a simple linear growth trajectory was used to quantify later expressive language. Results When statistically controlling for human-coded or automated quantity of vocalization variables, human-coded quality of vocalization variables exhibited incremental validity for predicting later expressive language skills. Human-coded vocal variables also predicted later expressive language skills when controlling for the analogous automated vocal variables. Conclusion In sum, these findings support devoting resources to human coding of the quality of vocalizations from communication samples to predict later expressive language skills in young children with ASD despite the greater costs of deriving these variables. Supplemental Material https://doi.org/10.23641/asha.12276458
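One common way to test incremental validity of this kind is to compare nested regression models. The sketch below does this in Python with statsmodels; the variable names and values are assumptions for illustration, not the study's actual measures.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: quantity and quality of vocalizations plus a later language score.
df = pd.DataFrame({
    "vocal_quantity": [10, 25, 14, 30, 22, 18, 27, 12, 20, 16],
    "vocal_quality":  [0.2, 0.6, 0.3, 0.8, 0.5, 0.4, 0.7, 0.2, 0.5, 0.3],
    "language_12mo":  [45, 70, 50, 85, 66, 58, 78, 48, 64, 55],
})

# Base model: quantity only.
base = smf.ols("language_12mo ~ vocal_quantity", data=df).fit()
# Full model: quantity plus quality.
full = smf.ols("language_12mo ~ vocal_quantity + vocal_quality", data=df).fit()

# Incremental validity: does adding quality improve prediction beyond quantity?
f_stat, p_value, _ = full.compare_f_test(base)
print(f"R² gain: {full.rsquared - base.rsquared:.3f}, F = {f_stat:.2f}, p = {p_value:.3f}")
```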
47

Ma, Weiyi, Anna Fiveash, and William Forde Thompson. "Spontaneous emergence of language-like and music-like vocalizations from an artificial protolanguage." Semiotica 2019, no. 229 (July 26, 2019): 1–23. http://dx.doi.org/10.1515/sem-2018-0139.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
AbstractHow did human vocalizations come to acquire meaning in the evolution of our species? Charles Darwin proposed that language and music originated from a common emotional signal system based on the imitation and modification of sounds in nature. This protolanguage is thought to have diverged into two separate systems, with speech prioritizing referential functionality and music prioritizing emotional functionality. However, there has never been an attempt to empirically evaluate the hypothesis that a single communication system can split into two functionally distinct systems that are characterized by music- and languagelike properties. Here, we demonstrate that when referential and emotional functions are introduced into an artificial communication system, that system will diverge into vocalization forms with speech- and music-like properties, respectively. Participants heard novel vocalizations as part of a learning task. Half referred to physical entities and half functioned to communicate emotional states. Participants then reproduced each sound with the defined communicative intention in mind. Each recorded vocalization was used as the input for another participant in a serial reproduction paradigm, and this procedure was iterated to create 15 chains of five participants each. Referential vocalizations were rated as more speech-like, whereas emotional vocalizations were rated as more music-like, and this association was observed cross-culturally. In addition, a stable separation of the acoustic profiles of referential and emotional vocalizations emerged, with some attributes diverging immediately and others diverging gradually across iterations. The findings align with Darwin’s hypothesis and provide insight into the roles of biological and cultural evolution in the divergence of language and music.
48

Richards, Jeffrey A., Dongxin Xu, Jill Gilkerson, Umit Yapanel, Sharmistha Gray, and Terrance Paul. "Automated Assessment of Child Vocalization Development Using LENA." Journal of Speech, Language, and Hearing Research 60, no. 7 (July 12, 2017): 2047–63. http://dx.doi.org/10.1044/2017_jslhr-l-16-0157.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Purpose To produce a novel, efficient measure of children's expressive vocal development on the basis of automatic vocalization assessment (AVA), child vocalizations were automatically identified and extracted from audio recordings using Language Environment Analysis (LENA) System technology. Method Assessment was based on full-day audio recordings collected in a child's unrestricted, natural language environment. AVA estimates were derived using automatic speech recognition modeling techniques to categorize and quantify the sounds in child vocalizations (e.g., protophones and phonemes). These were expressed as phone and biphone frequencies, reduced to principal components, and input to age-based multiple linear regression models to predict independently collected criterion expressive language scores. From these models, we generated vocal development AVA estimates as age-standardized scores and development age estimates. Results AVA estimates demonstrated strong statistical reliability and validity when compared with standard criterion expressive language assessments. Conclusions Automated analysis of child vocalizations extracted from full-day recordings in natural settings offers a novel and efficient means to assess children's expressive vocal development. More research remains to identify specific mechanisms of operation.
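A minimal sketch of that modeling pipeline (phone/biphone frequencies reduced to principal components, then regressed on a criterion language score) using scikit-learn; the array shapes and random data are assumptions, not LENA's actual features or model.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Hypothetical inputs: 200 children x 500 phone/biphone frequency features,
# plus a criterion expressive-language score for each child.
phone_freqs = rng.random((200, 500))
language_score = rng.normal(100, 15, size=200)

# Reduce the features to principal components, then regress the criterion on them.
model = make_pipeline(PCA(n_components=20), LinearRegression())
model.fit(phone_freqs, language_score)

# Predicted scores could then be age-standardized against a normative sample.
predicted = model.predict(phone_freqs)
print(predicted[:5].round(1))
```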
49

Binos, Paris, Elena Theodorou, Thekla Elriz, and Kostas Konstantopoulos. "Effectiveness of Aural-Oral Approach Based on Volubility of a Deaf Child with Late-Mapping Bilateral Cochlear Implants." Audiology Research 11, no. 3 (August 5, 2021): 373–83. http://dx.doi.org/10.3390/audiolres11030035.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Background: The purpose of this study was to investigate the effectiveness of aural-oral habilitation (AO) over traditional speech-language therapy, based on the number of vocalizations (volubility) of a deaf child with late-mapping bilateral cochlear implants, using sequential measurements. Methods: The spontaneous productions during child interactions were analyzed. The child (CY, 7;0 years old), with a mean unaided pure-tone average (PTA) hearing loss >80 dB HL, was assessed using an assessment battery. The study design consisted of two phases: (a) baseline (end of speech therapy) and (b) end of AO treatment. Protophones were analyzed acoustically using PRAAT software. Results: One-way repeated-measures ANOVAs were conducted within and between phases. The analyses revealed a significant effect of phase on the vocalization outcome (F = 9.4, df = 1, p = 0.035). Post hoc analyses revealed a significant difference in the mean number of disyllable vocalizations under the AO approach (p = 0.05). The mean number of vocalizations was calculated for each protophone type, but no other significant difference was measured. Conclusions: The AO approach proved effective as measured through volubility. The outcome of this study is indicative and is a starting point for broader research.
50

Szymańska, Justyna, Maciej Trojan, Anna Jakucińska, Katarzyna Wejchert, Maciej Kapusta, and Julia Sikorska. "Brain Functional Asymmetry of Chimpanzees (Pan troglodytes): the Example of Auditory Laterality." Polish Psychological Bulletin 48, no. 1 (March 1, 2017): 87–92. http://dx.doi.org/10.1515/ppb-2017-0011.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The aim of this study was to verify whether chimpanzees (Pan troglodytes) demonstrate auditory laterality during the orientation reaction, and which hemisphere is responsible for processing emotional stimuli and which for species-specific vocalizations. The study involved nine chimpanzees from the Warsaw Municipal Zoological Garden. They were tested individually in their bedrooms. Chimpanzees approached a tube filled with food, located in the centre of the cage. Randomly selected sounds were played from the speakers when the subject was focused on getting food. Individual reactions were observed and outcomes reported. Four types of sound were used: thunderstorm, dog barking, chimpanzee vocalization, and a zookeeper’s voice. To test whether chimpanzees demonstrate auditory laterality we used a single-sample chi-square test. The existence of auditory laterality was confirmed. The sound of the storm caused the orientation reaction to the left, while chimpanzee vocalization caused orientation to the right. On this basis we can conclude that among chimpanzees, arousing stimuli are processed by the right hemisphere, and species-specific vocalizations by the left. However, the set of stimuli was limited, so the study did not unequivocally resolve this issue.
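A minimal sketch of a single-sample chi-square test of this kind in Python; the left/right orientation counts are invented for illustration, not the study's data.

```python
from scipy.stats import chisquare

# Hypothetical counts of head turns toward one sound type:
# 14 orientations to the left vs. 4 to the right.
observed = [14, 4]

# Null hypothesis: no lateral bias, i.e. an expected 50/50 split.
chi2, p = chisquare(observed)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```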
