To view the other types of publications on this topic, follow this link: Sound.

Journal articles on the topic "Sound"

Consult the top 50 journal articles for research on the topic "Sound".

Next to every work in the list of references there is an "Add to bibliography" option. Use it, and a bibliographic reference for the chosen work will be generated automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its online annotation, provided the relevant parameters are available in the metadata.

Browse journal articles from a wide range of disciplines and compile your bibliography correctly.

1

Mini, Darshana Sreedhar. „‘Un-sound’ Sounds“. Music, Sound, and the Moving Image 13, Nr. 1 (Juli 2019): 3–30. http://dx.doi.org/10.3828/msmi.2019.2.

2

Regina, Frilia Shantika. „BUNYI SERTAAN PADA PELAFALAN PENYANYI YURA YUNITA: PEMANFAATAN KAJIAN FONETIK SEBAGAI BAHAN AJAR MATA KULIAH FONOLOGI“. Semantik 9, Nr. 2 (14.09.2020): 77–84. http://dx.doi.org/10.22460/semantik.v9i2.p77-84.

Annotation:
One type of phonetic study concerns accompanying sounds. Accompanying sounds in daily life often go unrecognized, because they are produced unintentionally by the speaker. Studying speech sounds can improve students' ability to listen for and identify this type of phonetic phenomenon. The research method used is a qualitative method with a descriptive-analysis design. The data in this study were divided into primary data in the form of songs sung by the singer Yura Yunita and secondary data in the form of books, journals, and articles. Data analysis was carried out by collecting all the field notes, reducing the data in the form of sounds, presenting them as the patterns that emerged, and drawing conclusions. Based on the analysis of the two songs “Malam Sunyi” and “Harus Bahagia”, thirteen accompanying sounds were found. These thirteen accompanying sounds fall into the following types: labialization, palatalization, retroflexion, glottalization, aspiration, and nasalization. Thus, the songs sung by Yura Yunita can be used as alternative teaching material for phonology courses, especially for the material on accompanying sounds.
3

Méchoulan, Eric, und David F. Bell. „Are Sounds Sound? For an Enthusiastic Study of Sound Studies“. SubStance 49, Nr. 2 (2020): 3–29. http://dx.doi.org/10.1353/sub.2020.0007.

4

Isodarus, Praptomo Baryadi. „Facilitating Sounds in Indonesian“. Journal of Language and Literature 18, Nr. 2 (12.09.2018): 102–10. http://dx.doi.org/10.24071/joll.v18i2.1566.

Annotation:
This article presents the research result of facilitating sounds in Indonesian. Facilitating sound is a sound which facilitates the pronunciation of a sound sequence in a word. Based on the data analysis, the facilitating sounds in Indonesian are [?], [y], [w], [?], [m], [n], [?], [?] and [??]. Sound [?] facilitates the consonant cluster pronunciation in a word. Sound [y] facilitates the pronunciation of the sound sequences [ia] and [aia] among syllables and morphemes. Sound [w] facilitates the pronunciation of sound sequence [ua] among syllables and morphemes and the sound sequence of [oa] and [aua] among morphemes. Sound [?] facilitates the sound sequence [aa] among syllables and morphemes and the sound sequence [oa] among syllables. Sound [m] facilitates the pronunciation of nasal sound sequence [N] in prefixes me(N) or pe(N)- whose morpheme base begins with sounds [b, p, f, v]. Sound [n] facilitates the pronunciation of sound sequences [d] and [t] in the beginning of the morpheme base. Sound [?] facilitates the pronunciation of sound sequence [N] in prefixes me(N) or pe(N)- whose morpheme base begins with the vowels [a, i, u, e, ?, ?, o, ?], [g], [h] and [k]. Sound [?] facilitates the pronunciation of sound sequence [N] in prefixes me(N) or pe(N)- whose morpheme base begins with sounds of [j, c, s]. Sound [??] facilitates the pronunciation of words which are formed by prefixes me(N) or pe(N)- with one syllable morpheme base.Keywords: facilitating sound, phonology, Indonesian
5

paine, garth. „endangered sounds: a sound project“. Organised Sound 10, Nr. 2 (August 2005): 149–62. http://dx.doi.org/10.1017/s1355771805000804.

Annotation:
endangered sounds is a project that focuses on the exploration of sound marks (trade-marked sounds). the initial stage of this project was funded by arts victoria, and comprised legal searches that resulted in the listings of sound marks registered in australasia and the united states of america. this list was published on the internet with a call for volunteers to collect samples of the listed sounds internationally. the volunteer was sent a specimen tube with label and cap, and asked to collect the sound by placing the specimen tube close to the source (thereby capturing the air through which the sound travelled), securing the cap and then completing the label, documenting the time, place and nature of the sound (sound mark reg. no., sound mark description, time of capture, date of capture, location, etc.). these specimen tubes were collected and displayed in chemistry racks in the exhibition in the biennale of electronic arts, perth in 2004, illustrating the frequency and diversity of the environment into which these ‘private’, protected sounds have been released. the exhibition project consisted of:
(1) a web portal listing all the sound marks listed in australasia and the usa, and negotiations are underway to expand that to include the eu.
(2) a collection of sound marks in specimen tubes with caps and labels gathered internationally by people who volunteered to collect samples of sound marks in their environment.
(3) a number of glass vacuum desiccator vessels containing a small loudspeaker and sound reproduction chip suspended in a vacuum, reproducing sound marks in the vacuum, notionally breaking the law, but as sound does not travel in a vacuum the gallery visitor hears no sound – what then is the jurisdiction of the sound mark?
(4) a card index register of lost and deceased sounds.
this project questions the legitimacy of privatising and protecting sounds that are released at random in public spaces. if i own a multi-million dollar penthouse in a city, and work night shifts, i have no recourse against the loud harley davidson or australian football league (afl) siren that wakes me from my precious sleep – both sounds are privately protected, making their recording, reproduction and broadcast illegal. while there are legal mechanisms for protection against repeat offenders, and many of us are committed to a culturally conditioned moral obligation re sound dispersion, there are no legal limits – i can call the police, but the football siren is already within legal standards and still permeates the private domain of city dwellings. the noise abatement legislation is only applicable to regular breaches of the law, and takes some time to sort out, but it does not apply to singular occurrences which, although within legislated limits, still disturb. additionally, the laws are based on amplitude and do not really address the issue of propagation. the ownership of the sound is not addressed in these legislative mechanisms – it should be; if the sound is an emblem of corporate identity, we should be able to choose not to be exposed to it, in the same way that we can place a ‘no junk mail’ sign on our letter boxes. acknowledgement of the private domain is sacrosanct in other areas of legislation, in fact heavily policed, but not addressed in discussions of the acoustic environment beyond amplitude limitations.
6

Dudschig, Carolin, Ian Grant Mackenzie, Jessica Strozyk, Barbara Kaup und Hartmut Leuthold. „The Sounds of Sentences: Differentiating the Influence of Physical Sound, Sound Imagery, and Linguistically Implied Sounds on Physical Sound Processing“. Cognitive, Affective, & Behavioral Neuroscience 16, Nr. 5 (29.07.2016): 940–61. http://dx.doi.org/10.3758/s13415-016-0444-1.

7

Yu, Boya, Jie Bai, Linjie Wen und Yuying Chai. „Psychophysiological Impacts of Traffic Sounds in Urban Green Spaces“. Forests 13, Nr. 6 (19.06.2022): 960. http://dx.doi.org/10.3390/f13060960.

Annotation:
The goal of this study is to investigate the psychophysiological effects of traffic sounds in urban green spaces. In a laboratory experiment, psychological and physiological responses to four traffic sounds were measured, including road, conventional train, high-speed train, and tram. The findings demonstrated that traffic sounds had significant detrimental psychological and physiological effects. In terms of psychological responses, the peak sound level outperformed the equivalent sound level in determining the psychological impact of traffic sounds. The physiological effects of traffic sounds were shown to be significantly influenced by sound type and sound level. The physiological response to the high-speed train sound differed significantly from the other three traffic sounds. The physiological effects of road traffic sounds were found to be unrelated to the sound level. On the contrary, as for the railway sounds, the change in sound level was observed to have a significant impact on the participants’ physiological indicators.
8

Imamori, Kanta, Atsuya Yoshiga und Junji Yoshida. „Sound quality evaluation for luxury refrigerator door closing sound“. INTER-NOISE and NOISE-CON Congress and Conference Proceedings 263, Nr. 5 (01.08.2021): 1845–54. http://dx.doi.org/10.3397/in-2021-1968.

Annotation:
In this study, we carried out subjective evaluation tests employing 19 refrigerator door-closing sounds to quantify the luxury feeling. By applying factor analysis to the subjective evaluation results, the sound quality of the refrigerator door-closing sound was found to be expressed by two factors: overall loudness and the pitch of the sound. Subsequently, a luxury-feeling evaluation model was obtained through multiple regression analysis. As a result, the luxury feeling of the door-closing sound was evaluated as high when the sound was softer and had a lower pitch. We then prepared several luxury door-closing sounds according to the obtained evaluation model through filter processing and conducted subjective evaluation tests again to verify the model. The results show that the sound with increased amplitude in the low-frequency band below 100 Hz, which the evaluation model rated as highly luxurious, was indeed rated best among the presented sounds in the subjective test. The luxury sound quality evaluation method was thus confirmed to be useful for quantifying and estimating the sound quality of refrigerator door-closing sounds.
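The two-stage analysis summarized above, factor analysis of the subjective ratings followed by multiple regression onto a luxury score, can be prototyped with scikit-learn. The sketch below only illustrates that workflow under assumed data shapes (random placeholder ratings, two latent factors); it is not the authors' data or procedure.

```python
# Sketch: factor analysis + multiple regression on subjective ratings.
# `ratings` and `luxury` are hypothetical placeholders, not the paper's data.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
ratings = rng.normal(size=(19, 8))   # 19 door-closing sounds, 8 rating scales
luxury = rng.normal(size=19)         # averaged luxury-feeling score per sound

# Stage 1: reduce the rating scales to two latent factors
# (interpreted in the paper as overall loudness and pitch).
fa = FactorAnalysis(n_components=2, random_state=0)
factor_scores = fa.fit_transform(ratings)            # shape (19, 2)

# Stage 2: regress the luxury rating on the factor scores.
model = LinearRegression().fit(factor_scores, luxury)
print("coefficients:", model.coef_, "intercept:", model.intercept_)
print("R^2:", model.score(factor_scores, luxury))
```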
9

Yu, Boya, Linjie Wen, Jie Bai und Yuying Chai. „Effect of Road and Railway Sound on Psychological and Physiological Responses in an Office Environment“. Buildings 12, Nr. 1 (22.12.2021): 6. http://dx.doi.org/10.3390/buildings12010006.

Annotation:
The present study aims to explore the psychophysiological impact of different traffic sounds in office spaces. In this experiment, 30 subjects were recruited and exposed to different traffic sounds in a virtual reality (VR) office scene. The road traffic sound and three railway sounds (conventional train, high-speed train, and tram) with three sound levels (45, 55, and 65 dB) were used as the acoustic stimuli. Physiological responses, electrodermal activity (EDA) and heart rate (HR) were monitored throughout the experiment. Psychological evaluations under each acoustic stimulus were also measured using scales within the VR system. The results showed that both the psychological and the physiological responses were significantly affected by the traffic sounds. As for psychological responses, considerable adverse effects of traffic sounds were observed, which constantly increased with the increase in the sound level. The peak sound level was found to have a better performance than the equivalent sound level in the assessment of the psychological impact of traffic sounds. As for the physiological responses, significant effects of both the acoustic factors (sound type and sound level) and the non-acoustic factors (gender and exposure time) were observed. The relationship between sound level and physiological parameters varied among different sound groups. The variation in sound level hardly affected the participants’ HR and EDA when exposed to the conventional train and tram sounds. In contrast, HR and EDA were significantly affected by the levels of road traffic sound and high-speed train sound. Through a correlation analysis, a relatively weak correlation between the psychological evaluations and HR was found.
10

Oszczapinska, Urszula, Bridget Nance, Seojun Jang und Laurie M. Heller. „Typical sound level in environmental sound representations“. Journal of the Acoustical Society of America 153, Nr. 3_supplement (01.03.2023): A162. http://dx.doi.org/10.1121/10.0018517.

Annotation:
Although the sound level reaching a listener’s ear depends upon the sound source level and the environment, a stable source level can be perceived (McDermott et al., 2021). Nonetheless, variation in sound level can disrupt recognition in a short-term old/new task (Susini et al., 2019). We asked whether there is evidence of long-term memory of the typical level of everyday sounds. First, we found that listeners can report the level at which they typically hear a sound. Next, we compared sound judgements over headphones (ESC-50 dataset) across two conditions: (1) “typical”: levels set to produce the loudness experienced as “typical” for each sound (as determined by pilot studies); and (2) “equal”: levels at 70 dB SPL. Recognition, familiarity, and pleasantness were judged. There was no significant difference in recognition accuracy between level conditions and no interaction with whether sounds were louder or softer than their typical levels. In addition, recognition increased as sound familiarity increased, but this did not interact with level condition. Furthermore, consistent with past findings, sound pleasantness decreased as loudness increased, but this effect did not depend upon the condition. [Work supported by REAM.]
11

Endo, Hiroshi, Hidekazu Kaneko, Shuichi Ino und Waka Fujisaki. „An Attempt to Improve Food/Sound Congruity Using an Electromyogram Pseudo-Chewing Sound Presentation System“. Journal of Advanced Computational Intelligence and Intelligent Informatics 21, Nr. 2 (15.03.2017): 342–49. http://dx.doi.org/10.20965/jaciii.2017.p0342.

Annotation:
Improving the texture of foods provided during nursing care is necessary to improve the appetite of elderly individuals. We developed a system to vary perceived food texture using pseudo-chewing sounds generated from electromyogram (EMG) signals. However, this previous system could not provide chewing sounds that were sufficiently congruous with foods. Because food/sound combinations that seem unnatural cause individuals to feel uncomfortable with pseudo-chewing sounds, food/sound congruity is important. This research aims to improve the derivation and presentation of pseudo-chewing sounds so as to be able to provide various kinds of chewing sounds. The developed system adjusts the volume of pseudo-chewing sounds that are stored in a digital audio player based on the amplitude of the EMG signal envelope. Using this system, food/sound congruity was examined with two kinds of softened Japanese pickles. Six kinds of pseudo-chewing sounds were tested (noisy chewing sound, EMG chewing sound, and four kinds of actual chewing sounds: rice cracker, cookie, and two kinds of Japanese pickles). Participants reported that food/sound combinations were unnatural with the noisy and EMG chewing sounds, whereas the combinations felt more natural with the pseudo-chewing sounds of Japanese pickles. We concluded that the newly developed system could effectively reduce the unnatural feeling of food/sound incongruity.
12

Fontana, Bill. „The Relocation of Ambient Sound: Urban Sound Sculpture“. Leonardo 41, Nr. 2 (April 2008): 154–58. http://dx.doi.org/10.1162/leon.2008.41.2.154.

Annotation:
The author describes his sound sculptures which explore how various instances of sound possess musical form. He explains the sculptural qualities of sound and the aesthetic act of arranging sound into art. Detailed descriptions of three recent works illustrate how relocating sounds from one environment to another redefines them, giving them new acoustic meanings.
13

SAMUELS, DAVID. „Sound.:Sound“. Journal of Linguistic Anthropology 14, Nr. 2 (Dezember 2004): 304–5. http://dx.doi.org/10.1525/jlin.2004.14.2.304.

14

ZHAO, Huanqi, Kean CHEN, Liang YAN, Bing ZHOU, Jiangong ZHANG, Jun ZHANG, Yunyun DENG, Han LI und Hao LI. „Suppression and analysis on annoyance of motor vehicle noise using water sound injection“. Xibei Gongye Daxue Xuebao/Journal of Northwestern Polytechnical University 40, Nr. 3 (Juni 2022): 560–67. http://dx.doi.org/10.1051/jnwpu/20224030560.

Annotation:
Unlike traditional noise control approaches based on reducing sound energy, this study focuses on suppressing the annoyance of motor vehicle noise through audio injection. First, eight motor vehicle noises were selected as target sounds and three different controllable sounds were overlapped with them at different signal-to-noise ratios. Subjective evaluation experiments on the absolute threshold and on the annoyance of the combined sounds were then carried out to obtain the absolute threshold range and the zone in which a "destructive effect" on combined-noise annoyance exists. Next, the factors influencing the destructive effect were analyzed. The study found that the identifiability of the target sound and of the controllable sound is important for selecting the controllable sound. For most target sounds with high identifiability, a controllable sound with high identifiability and a high SNR should be selected; for most target sounds with low identifiability, a controllable sound with low identifiability and a low SNR should be selected. Moreover, after the optimal controllable sound is added, the decrease in annoyance for target sounds with high identifiability is smaller than for target sounds with low identifiability. The results also provide guidance for research on audio-injection effects and their physical mechanisms, and for controllable-sound selection and optimization design.
15

Popper, Arthur N., und Robin D. Calfee. „Sound and sturgeon: Bioacoustics and anthropogenic sound“. Journal of the Acoustical Society of America 154, Nr. 4 (01.10.2023): 2021–35. http://dx.doi.org/10.1121/10.0021166.

Annotation:
Sturgeons are basal bony fishes, most species of which are considered threatened and/or endangered. Like all fishes, sturgeons use hearing to learn about their environment and perhaps communicate with conspecifics, as in mating. Thus, anything that impacts the ability of sturgeon to hear biologically important sounds could impact fitness and survival of individuals and populations. There is growing concern that the sounds produced by human activities (anthropogenic sound), such as from shipping, commercial barge navigation on rivers, offshore windfarms, and oil and gas exploration, could impact hearing by aquatic organisms. Thus, it is critical to understand how sturgeon hear, what they hear, and how they use sound. Such data are needed to set regulatory criteria for anthropogenic sound to protect these animals. However, very little is known about sturgeon behavioral responses to sound and their use of sound. To help understand the issues related to sturgeon and anthropogenic sound, this review first examines what is known about sturgeon bioacoustics. It then considers the potential effects of anthropogenic sound on sturgeon and, finally identifies areas of research that could substantially improve knowledge of sturgeon bioacoustics and effects of anthropogenic sound. Filling these gaps will help regulators establish appropriate protection for sturgeon.
16

Chot, Mathiang, und Huiming Zhang. „Spatial separation between two sounds affects the timing of action potentials elicited by the sounds in the rat's auditory midbrain neurons“. Journal of the Acoustical Society of America 154, Nr. 4_supplement (01.10.2023): A237. http://dx.doi.org/10.1121/10.0023397.

Annotation:
Timing of action potentials (i.e., spikes) elicited by sounds is used by auditory neurons to encode and process acoustic information. In the presence of multiple sounds, the timing of sound-driven spikes depends on the temporal, spectral, and spatial relationships among the sounds. We used two tone bursts with different frequencies to form a train of stimuli presented in a random order at a constant rate. Such a train was used to mimic either two competing sounds that occurred with the same (50%) probability, or a novel sound (i.e., a low-probability oddball sound) interleaved with a frequently occurring background sound (i.e., a high-probability standard sound). We used the rat as an animal model to study how the spatial relationship between two sounds affected the timing of spikes elicited by the sounds in individual neurons in the auditory midbrain. Results indicate that a lower probability of sound presentation led to higher temporal precision of the first spike elicited by the sound, and that this timing could be affected by a spatial separation between the two sounds. These results are important for understanding the neural mechanisms responsible for hearing in a natural acoustic environment.
17

Min, Dongki, Buhm Park und Junhong Park. „Artificial Engine Sound Synthesis Method for Modification of the Acoustic Characteristics of Electric Vehicles“. Shock and Vibration 2018 (2018): 1–8. http://dx.doi.org/10.1155/2018/5209207.

Annotation:
Sound radiation from electric motor-driven vehicles is negligibly small compared to sound radiation from internal combustion engine automobiles. When running on a local road, an artificial sound is required as a warning signal for the safety of pedestrians. In this study, an engine sound was synthesized by combining artificial mechanical and combustion sounds. The mechanical sounds were made by summing harmonic components representing sounds from rotating engine cranks. The harmonic components, including not only magnitude but also phase due to frequency, were obtained by the numerical integration method. The combustion noise was simulated by random sounds with similar spectral characteristics to the measured value and its amplitude was synchronized by the rotating speed. Important parameters essential for the synthesized sound to be evaluated as radiation from actual engines were proposed. This approach enabled playing of sounds for arbitrary engines. The synthesized engine sounds were evaluated for recognizability of vehicle approach and sound impression through auditory experiments.
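As a rough illustration of the synthesis approach described above (a harmonic series tied to the crank rotation plus rotation-synchronized noise for the combustion component), the sketch below generates a few seconds of an engine-like signal with NumPy. It is a simplified toy, not the authors' method: the number of harmonics, their amplitudes and phases, the firing rate, and the noise level are all assumptions.

```python
# Toy engine-sound synthesis: crank-rotation harmonics plus firing-rate noise.
# All parameter values are illustrative assumptions.
import numpy as np

fs = 44100                        # sample rate [Hz]
t = np.arange(int(fs * 3.0)) / fs
rpm = 1500.0                      # constant engine speed, for simplicity
f0 = rpm / 60.0                   # crank rotation frequency [Hz]

# "Mechanical" component: harmonic series with decaying amplitudes and fixed phases.
mech = sum((1.0 / n) * np.sin(2 * np.pi * n * f0 * t + 0.3 * n) for n in range(1, 13))

# "Combustion" component: noise whose amplitude is synchronized with the firing
# rate (2 x f0 assumed here, as for a four-cylinder four-stroke engine).
noise = np.random.default_rng(0).normal(size=t.size)
envelope = 0.5 * (1.0 + np.sin(2 * np.pi * 2.0 * f0 * t))
comb = 0.3 * noise * envelope

engine = mech + comb
engine /= np.max(np.abs(engine))  # normalize to [-1, 1] for playback
# e.g. scipy.io.wavfile.write("engine.wav", fs, (engine * 32767).astype(np.int16))
```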
18

Sarbaini, Albarra. „ظاهرة تدريس الأصوات العربية“. An Nabighoh Jurnal Pendidikan dan Pembelajaran Bahasa Arab 18, Nr. 2 (08.03.2017): 211. http://dx.doi.org/10.32332/an-nabighoh.v18i2.335.

Annotation:
A language is made up of particular sound units, and from these units millions upon millions of words are formed in diverse situations. Each language is a genuine treasury of sounds selected from all the possible sounds a human can produce, and this treasury may differ from the sound treasuries of other languages. The Arabic sound denoted by "ض", for example, is not found in other languages. The substance of a sound also differs from one language to another, and this difference is precisely where the problems of teaching sounds begin. The number of letters in the Arabic alphabet is not much different from that of other languages, but Arabic has long vowel sounds in addition to the regularly occurring short vowels. The same holds for the sounds of madd, syiddah, and ghunnah, whose variations eventually give rise to the sound that receives emphasis within a word (nabr) and to the tone or melody of a sentence (tanghim), according to the strength and intensity of the meaning behind the words concerned. Key Words: Language Elements, Sounds of language
19

Yu, Boya, und Yuying Chai. „Psychophysiological responses to traffic noises in urban green spaces“. INTER-NOISE and NOISE-CON Congress and Conference Proceedings 265, Nr. 4 (01.02.2023): 3864–71. http://dx.doi.org/10.3397/in_2022_0548.

Annotation:
The present study aims to explore the psychophysiological impact of different traffic sounds in urban green spaces. In the experiment, 30 subjects were recruited and exposed to different traffic sounds in a virtual reality (VR) scene. The road traffic sound and three railway sounds (conventional train, high-speed train, and tram) at three sound levels (45, 55, and 65 dB) were used as the acoustic stimuli. Physiological responses, electrodermal activity (EDA) and heart rate (HR), were monitored throughout the experiment. Psychological evaluations under each acoustic stimulus were also measured using scales within the VR system. The results showed that both the psychological and the physiological responses were significantly affected by the traffic sounds. As for psychological responses, considerable adverse effects of traffic sounds were observed, which increased steadily with the sound level. The peak sound level was found to perform better than the equivalent sound level in assessing the psychological impact of traffic sounds. As for the physiological responses, significant effects of both the acoustic factors (sound type and sound level) and the non-acoustic factors (gender and exposure time) were observed. The physiological effect of high-speed train noise was significantly different from those of the other three traffic noises. The relationship between sound level and physiological parameters varied among the sound groups. The variation in sound level hardly affected the participants' HR and EDA when they were exposed to the road traffic noise; on the contrary, the physiological responses were significantly affected by the sound level of rail traffic noise. A correlation analysis found no linear correlation between the psychological evaluations and HR.
20

Harusawa, Koki, Yumi Inamura, Masaaki Hiroe, Hideyuki Hasegawa, Kentaro Nakamura und Mari Ueda. „Measurement of very high frequency (VHF) sound in our daily experiences“. INTER-NOISE and NOISE-CON Congress and Conference Proceedings 263, Nr. 2 (01.08.2021): 4275–82. http://dx.doi.org/10.3397/in-2021-2647.

Annotation:
Recently, it has frequently been reported that very high frequency (VHF) sounds are emitted by everyday items such as home electric appliances. Although we measured VHF sounds from home electric appliances in our previous study, the origins of such VHF sounds have not yet been identified. In the present study, we tried to identify the VHF sound source in each home electric appliance using a "sound camera", which visualizes the spatial distribution of sound intensity using a microphone array. The sound camera visualized the location of the sound source at frequencies from 2 to 52 kHz with a field of view of 63 degrees. It showed that the VHF sounds were emitted from the power source of an LED light, the ventilation duct of an electric fan, and the body of an IH cooker. Their frequency characteristics depended on the sound source, i.e., combinations of pure tones for the LED light and a distribution over a wide frequency range for the electric fan.
21

Langkjær, Birger. „Making fictions sound real - On film sound, perceptual realism and genre“. MedieKultur: Journal of media and communication research 26, Nr. 48 (17.05.2010): 13. http://dx.doi.org/10.7146/mediekultur.v26i48.2115.

Annotation:
This article examines the role that sound plays in making fictions perceptually real to film audiences, whether these fictions are realist or non-realist in content and narrative form. I will argue that some aspects of film sound practices and the kind of experiences they trigger are related to basic rules of human perception, whereas others are more properly explained in relation to how aesthetic devices, including sound, are used to characterise the fiction and thereby make it perceptually real to its audience. Finally, I will argue that not all genres can be defined by a simple taxonomy of sounds. Apart from an account of the kinds of sounds that typically appear in a specific genre, a genre analysis of sound may also benefit from a functionalist approach that focuses on how sounds can make both realist and non-realist aspects of genres sound real to audiences.
22

Haehn, Luise, Sabine J. Schlittmeier und Christian Böffel. „Exploring the Impact of Ambient and Character Sounds on Player Experience in Video Games“. Applied Sciences 14, Nr. 2 (09.01.2024): 583. http://dx.doi.org/10.3390/app14020583.

Annotation:
Elaborate sound design, including background music, ambient sounds (sounds describing the game world), and character sounds (sounds generated by the character’s actions), plays a pivotal role in modern video games. However, the influence of these different types of sound on the player’s experience has not been extensively researched. This study examines the influence of these sound types on immersion, avatar identification, fun, and perceived competence. In two experiments, participants played League of Legends under four different sound conditions. The first experiment (N1 = 32) revealed a non-significant trend in the effect of character sounds on avatar identification. Ambient sounds, however, were limited because the task restricted participants’ movement across the game map. Consequently, we adapted the task to allow for a wider variety of ambient sounds in the second experiment (N2 = 32). Here, a significant impact of character sounds on immersion, avatar identification, and fun was observed, as well as an interaction effect of character sounds and ambient sounds on fun. Furthermore, we observed a trend, though not statistically significant, suggesting that ambient sounds may influence the player’s sense of flow. These findings underline the distinct effects of different sound types, and we discuss implications for the design of sound in video games.
23

Yudha, Feizal Mandala, Nurizzati Nurizzati und Yenni Hayati. „UNSUR BUNYI DALAM BUKU KUMPULAN PUISI TIDAK ADA NEW YORK HARI INI KARYA M. AAN MANSYUR“. Jurnal Bahasa dan Sastra 6, Nr. 3 (15.02.2019): 276. http://dx.doi.org/10.24036/81037210.

Annotation:
This study aimed to describe and analyze the form and use of sound in the poetry collection Tidak Ada New York Hari Ini by M. Aan Mansyur. To achieve these objectives, the following theory is used: (1) the nature of poetry, (2) the theory of structuralism, (3) the elements that build a poem, (4) sound elements in poetry, (5) alliteration and assonance, (6) cacophony and euphony, (7) anaphora and epiphora, and (8) the function of sound in poetry. Based on the analysis of the poetry collection Tidak Ada New York Hari Ini by M. Aan Mansyur, 14 instances of alliteration, 24 of assonance, 17 of cacophony, 23 of euphony, 10 of anaphora, and 2 of epiphora were found. There are 4 functions of the use of sound elements, namely: (1) expressive energy, (2) giving suggestions to the reader, (3) a means of musicality, and (4) providing an atmosphere or special impression. Keywords: form, exploitation of sound, anthology of poems, Tidak Ada New York Hari Ini, M. Aan Mansyur, sound.
24

Koyama, Yumi, Jun Toyotani, Makoto Morinaga, Hyojin Lee und Yasushi Shimizu. „On a recording method for ambient sounds with a confidential speech“. INTER-NOISE and NOISE-CON Congress and Conference Proceedings 268, Nr. 4 (30.11.2023): 4163–67. http://dx.doi.org/10.3397/in_2023_0590.

Annotation:
There are many reports that environmental noise in healthcare facilities affects patients and healthcare workers both physically and psychologically. To study the sound environment in hospital wards, it is necessary to analyze live sounds (conversations, sounds of daily life, alarms, etc.) generated by human activities. Recording in hospital wards, as in all healthcare facilities, requires the protection of personal information. Sound recorded in healthcare facilities may include conversations that people do not want to hear. Therefore, it is necessary to develop a recording method that solves confidentiality issues. We have already confirmed that by fragmenting recorded sounds at short time intervals, the time-averaged sound pressure level of 1/3 octave band frequency of the sounds does not change. This time we developed a new PC application that fragments and then stores the data without storing original sound data. We analyzed the time-averaged sound pressure level of 1/3 octave band frequency of fragmented speech and alarm sounds using this new application. The results showed that the time-averaged sound pressure levels of fragmented and non-fragmented sounds were almost the same.
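The method above relies on the fact that a time-averaged (equivalent) level is insensitive to cutting a recording into short fragments and discarding their order, which is what removes intelligible speech content. The sketch below demonstrates that invariance on a synthetic signal; it is only an illustration of the averaging property, not the authors' PC application, and the 50 ms fragment length is an assumed value.

```python
# Demonstration: time-averaged level is unchanged when a signal is cut into
# short fragments and shuffled (temporal order, i.e. speech content, is lost).
import numpy as np

fs = 16000
rng = np.random.default_rng(1)
x = rng.normal(size=fs * 10)                    # 10 s stand-in for a ward recording

def leq(signal):
    """Time-averaged level in dB relative to an arbitrary reference."""
    return 10 * np.log10(np.mean(signal ** 2))

frag_len = int(0.05 * fs)                       # 50 ms fragments (assumed length)
n = (x.size // frag_len) * frag_len
fragments = x[:n].reshape(-1, frag_len).copy()
rng.shuffle(fragments)                          # destroy temporal order in place
shuffled = fragments.reshape(-1)

print(f"original   Leq: {leq(x[:n]):.2f} dB")
print(f"fragmented Leq: {leq(shuffled):.2f} dB")  # identical: averaging ignores order
```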
25

Tanigawa, Risako, Kohei Yatabe und Yasuhiro Oikawa. „High-speed optical imaging and spatio-temporal analysis of sound sources of edge tone phenomena“. INTER-NOISE and NOISE-CON Congress and Conference Proceedings 265, Nr. 3 (01.02.2023): 4286–91. http://dx.doi.org/10.3397/in_2022_0613.

Annotation:
Aerodynamic sounds are one of the noises of high-speed trains, automobiles, and wind turbines. To understand the characteristics of those noises, measuring sound sources is important. In general, microphones are used for measuring aerodynamic sounds. However, measuring the sound fields inside flow fields is difficult for microphones because they disturb flows. Thus, optical measurement methods have been applied to visualize aerodynamic sounds. The optical method can measure the sound fields without installing devices inside measurement fields. Therefore, it can capture the sound around sources. In this paper, we performed visualization and spatio-temporal analysis of sound sources of edge tones using parallel phase-shifting interferometry (PPSI). We experimentally confirmed the difference in pressure fluctuations near the sound source depending on the frequency of the edge tones.
26

Takada, Masayuki, und Kanji Goto. „Auditory impression of amplitude-modulated vehicle horn sounds and their detectability in noisy environment“. INTER-NOISE and NOISE-CON Congress and Conference Proceedings 268, Nr. 4 (30.11.2023): 4874–81. http://dx.doi.org/10.3397/in_2023_0693.

Annotation:
Vehicle horn sounds, due to their high sound pressure levels, induce negative psychological reactions in listeners, especially pedestrians. In order to reduce such negative effects, a horn sound with a lower sound pressure level is desired. At the same time, a horn sound should be easily perceived by drivers, considering its use on busy roads. The amplitude-modulated horn sound may help to solve these problems. Therefore, we synthesized amplitude-modulated horn sounds, and conducted psychoacoustical experiments to investigate the relationship between the acoustic characteristics and the perceived quality of these sounds. The results showed that stimuli with shallower modulation depths were less unpleasant than those with deeper modulation depths, and stimuli with modulation frequencies below 40 Hz suppressed auditory unpleasantness more than those with higher modulation frequencies. To confirm their effectiveness for detection in noisy conditions, stimuli with different signal-to-noise ratios were created by combining the amplitude-modulated horn sound and road traffic noise. The amplitude-modulated horn sound in road traffic noise was detected even in conditions where the signal-to-noise ratio was 3 dB lower than when the horn sound was not amplitude-modulated. The results indicate that amplitude-modulated horn sounds have the potential to improve the sound environment around roads.
27

Imai, Mutsumi, und Sotaro Kita. „The sound symbolism bootstrapping hypothesis for language acquisition and language evolution“. Philosophical Transactions of the Royal Society B: Biological Sciences 369, Nr. 1651 (19.09.2014): 20130298. http://dx.doi.org/10.1098/rstb.2013.0298.

Annotation:
Sound symbolism is a non-arbitrary relationship between speech sounds and meaning. We review evidence that, contrary to the traditional view in linguistics, sound symbolism is an important design feature of language, which affects online processing of language, and most importantly, language acquisition. We propose the sound symbolism bootstrapping hypothesis , claiming that (i) pre-verbal infants are sensitive to sound symbolism, due to a biologically endowed ability to map and integrate multi-modal input, (ii) sound symbolism helps infants gain referential insight for speech sounds, (iii) sound symbolism helps infants and toddlers associate speech sounds with their referents to establish a lexical representation and (iv) sound symbolism helps toddlers learn words by allowing them to focus on referents embedded in a complex scene, alleviating Quine's problem. We further explore the possibility that sound symbolism is deeply related to language evolution, drawing the parallel between historical development of language across generations and ontogenetic development within individuals. Finally, we suggest that sound symbolism bootstrapping is a part of a more general phenomenon of bootstrapping by means of iconic representations, drawing on similarities and close behavioural links between sound symbolism and speech-accompanying iconic gesture.
28

Hernawati, Heni. „Analisis Persepsi terhadap Bunyi Frikatif Bahasa Jepang [s, z, ɕ, ʑ] pada Pembelajar Bahasa Jepang yang Berbahasa Ibu Bahasa Jawa“. Japanese Research on Linguistics, Literature, and Culture 1, Nr. 1 (27.11.2018): 16–27. http://dx.doi.org/10.33633/jr.v1i1.2141.

Annotation:
Learners of Japanese, especially those who speak Javanese as their mother tongue, have difficulty distinguishing the Japanese fricative sounds [s, z, ɕ, ʑ]. This research therefore aims to find the factors that cause misperception of the Japanese fricatives among learners whose mother tongue is Javanese. The research was conducted with 16 respondents, who answered a listening test of 24 meaningless words built from the sounds [s, z, ɕ, ʑ] combined with the vowels /a/, /u/, /o/, presented as multiple-choice questions with four options. Based on the results, it can be concluded that the major factor causing the misperception is the absence of the sounds [s, z, ɕ, ʑ] in the Javanese phonemic system, which affects the learners' listening competence. The sound [ɕ] is identified as [s], and the sound [z] is often identified as [Ɉ]. In addition, the sound [ʑ] is also replaced by [Ɉ] and its allophone [Ɉh], and learners are often confused and hear this sound as [ɕ]. Keywords: analysis of perception, fricative, Javanese, Japanese, Indonesian native speakers
29

Yuan, J., X. Cao, D. Wang, J. Chen und S. Wang. „Research on Bus Interior Sound Quality Based on Masking Effects“. Fluctuation and Noise Letters 17, Nr. 04 (14.09.2018): 1850037. http://dx.doi.org/10.1142/s0219477518500372.

Annotation:
Masking effect is a very common psychoacoustic phenomenon, which occurs when there is a suitable sound that masks the original sound. In this paper, we will discuss bus interior sound quality based on the masking effects and the appropriate masking sound selection to mask the original sounds inside a bus. We developed three subjective evaluation indexes which are noisiness, acceptability and anxiety. These were selected to reflect passengers’ feelings more accurately when they are subject to the masking sound. To analyze the bus interior sound quality with various masking sounds, the subjective–objective synthesis evaluation model was constructed using fuzzy mathematics. According to the study, the appropriate masking sound can mask the bus interior noise and optimize the bus interior sound quality.
30

Alteyp, Osman Alteyp Alwasila. „Elision of the Lateral Sound Sun Laam in Definite Article in Arabic (AL)“. Theory and Practice in Language Studies 10, Nr. 8 (01.08.2020): 873. http://dx.doi.org/10.17507/tpls.1008.04.

Annotation:
This study investigates: what kind of sound change in the lateral sound (sun laam) before the coronal sound of Arabic(/∫/, /ð/, /ð /, /ṣ/, /s/, /d/, /d/, /n/, /ẓ/, /z/, /Ѳ/, /t/, /t /, and /r/).; the extent to which the coronal and the vowel sound cause the elision of the lateral sound and whether the elision of sun laam is the main indicator of geminate the coronal sound. The sample of the study is a list of Arabic words containing the coronal sound of Arabic initially and preceded by a definite article. The significance of this study shows the benefit of describing and analyzing the distinctive features of the immediate sounds within continuant speech for finding out what exactly causes changes in a phoneme in such speech. A descriptive analytic approach is used to describe the distinctive features of the sun laam and the coronal sounds, as well as to analyze the linguistic environment (the sound pattern including the definite article /ال/ /al/ before the coronal sound).The most important results are: the sun laam is completely elided before the coronal sounds. The elision of Sun Laam and the intensity of the vowel sound shape the geminate of the coronal sound.
31

Bae, Myung-Jin. „Concepts of Sound Control System Using Absolute Sound Level“. Journal Of The Acoustical Society Of Korea 33, Nr. 1 (2014): 60. http://dx.doi.org/10.7776/ask.2014.33.1.060.

32

Spector, Ferrinne, und Daphne Maurer. „Early Sound Symbolism for Vowel Sounds“. i-Perception 4, Nr. 4 (Januar 2013): 239–41. http://dx.doi.org/10.1068/i0535.

33

Tachibana, Hideki, und Hiroo Yano. „Sound intensity measurement for impulsive sounds“. Journal of the Acoustical Society of America 84, S1 (November 1988): S33. http://dx.doi.org/10.1121/1.2026270.

34

Kasumyan, A. O. „Sounds and sound production in fishes“. Journal of Ichthyology 48, Nr. 11 (Dezember 2008): 981–1030. http://dx.doi.org/10.1134/s0032945208110039.

35

Kidd, Gary R., und Charles S. Watson. „Sound quality judgments of everyday sounds“. Journal of the Acoustical Society of America 106, Nr. 4 (Oktober 1999): 2267. http://dx.doi.org/10.1121/1.427740.

36

KUNO, Kazuhiro. „Design of sound. (II). Familiar sounds.“ Journal of Environmental Conservation Engineering 25, Nr. 5 (1996): 306–11. http://dx.doi.org/10.5956/jriet.25.306.

37

Wolfe, Virginia, Cynthia Presley und Jennifer Mesaris. „The Importance of Sound Identification Training in Phonological Intervention“. American Journal of Speech-Language Pathology 12, Nr. 3 (August 2003): 282–88. http://dx.doi.org/10.1044/1058-0360(2003/074).

Annotation:
Little is known about the relevance of sound identification training in phonological intervention. Some treatment approaches incorporate sound identification training; others do not. The purpose of the present study was to compare articulatory improvement following treatment with and without sound identification training. Nine preschool children with severe phonological disorders were randomly assigned to 2 groups for the treatment of stimulable sound errors: (a) mixed training with concurrent production and sound identification training and (b) production-only training. Articulatory improvement was evaluated as a function of treatment type and pretraining sound identification scores. No overall difference was found between the 2 treatment types except for sounds that had been poorly identified. Articulatory errors with low identification scores made greater progress after receiving mixed training with both production and sound identification training. For error sounds receiving production training, significant relationships were found between both pre- and posttraining identification scores and articulatory improvement, suggesting (a) that perception of error sounds prior to treatment may affect degree of improvement and (b) that production training may improve perception of error sounds. Different views exist with regard to the targeting of stimulable error sounds for treatment. Results of the present study suggest that sound identification in addition to stimulability may be an important consideration in target selection as well as treatment mode.
38

Itskov, Pavel M., Ekaterina Vinnik, Christian Honey, Jan Schnupp und Mathew E. Diamond. „Sound sensitivity of neurons in rat hippocampus during performance of a sound-guided task“. Journal of Neurophysiology 107, Nr. 7 (01.04.2012): 1822–34. http://dx.doi.org/10.1152/jn.00404.2011.

Annotation:
To investigate how hippocampal neurons encode sound stimuli, and the conjunction of sound stimuli with the animal's position in space, we recorded from neurons in the CA1 region of hippocampus in rats while they performed a sound discrimination task. Four different sounds were used, two associated with water reward on the right side of the animal and the other two with water reward on the left side. This allowed us to separate neuronal activity related to sound identity from activity related to response direction. To test the effect of spatial context on sound coding, we trained rats to carry out the task on two identical testing platforms at different locations in the same room. Twenty-one percent of the recorded neurons exhibited sensitivity to sound identity, as quantified by the difference in firing rate for the two sounds associated with the same response direction. Sensitivity to sound identity was often observed on only one of the two testing platforms, indicating an effect of spatial context on sensory responses. Forty-three percent of the neurons were sensitive to response direction, and the probability that any one neuron was sensitive to response direction was statistically independent from its sensitivity to sound identity. There was no significant coding for sound identity when the rats heard the same sounds outside the behavioral task. These results suggest that CA1 neurons encode sound stimuli, but only when those sounds are associated with actions.
39

Vani, Giovani Gio, und Dwinta Arswendah. „Penerapan Diagetic Sound Pada Film Dokudrama “Bondar Hatabosi”“. ROLLING 6, Nr. 2 (08.11.2023): 135. http://dx.doi.org/10.19184/rolling.v6i2.43685.

Annotation:
The artwork "The Role of Sound in the Bondar Hatabosi Docudrama Film" aims to create a docudrama film that provides a reflection on how the importance of natural and environmental sounds in the form of culture in Luat Hatabosi village is especially able to shape the emotions of the audience who hear it. The creation of this film is in the docudrama genre because it tells a story based on a true story from historical re-enactment with elements of dramatization. For this reason, the use of real sound is very appropriate to the genre of this film, namely docudrama. The object of creating this work of art is the sounds of nature and the environment in the form of culture in Luat Hatabosi village so that it can play an important role in the docudrama film "Bondar Hatabosi", such as forming emotions in the audience. The audience's emotions in question, such as positive vibrations, relaxation, calm, and so on, are caused by the nature of natural sounds which are able to stimulate the brain in controlling human emotions. The concept for creating this work focuses on the diegetic sound aspect. Diegetic sound itself is all forms of sound contained in the story world of the film. Diegetic sound in films combines it into two forms, namely onscreen sound and offscreen sound. The sound on the screen is defined as all the sounds produced by the actors in the story and the objects in the frame. Meanwhile, offscreen sound is defined as all sound produced from outside the frame.
40

Skurativskyi, Vadym, und Valerii Andriienko. „Specifics of Sound Effects in Feature Films“. Bulletin of Kyiv National University of Culture and Arts. Series in Audiovisual Art and Production 6, Nr. 2 (20.10.2023): 204–12. http://dx.doi.org/10.31866/2617-2674.6.2.2023.289307.

Annotation:
The purpose of the article is to analyze the specifics of sound effects used in feature films, to identify methods of using various variations of sounds in cinema, to clarify the process of creating sound effects and to trace the development of historical trends in the field of creating sounds for cinema, as well as to predict the specifics of sound effects development in feature films in the future. Research Methodology. The following methods were used: theoretical – to study sound effects in feature films, to study various scientific and practical recommendations for working with sound; empirical – to use the acquired knowledge through personal experience; analytical – to search for the interaction of sound with a person; analysis and synthesis, as well as the method of observation. Scientific novelty. For the first time, the specifics of sound effects in feature films have been studied using a retrospective view of the phenomenality of the object and its further progression, and the specifics of sound effects used in feature films have been analyzed. The study reveals specific aspects of the transformation of sound and sound effects through the use of different variations of sounds that can be used in feature films in the future. Conclusions. The article analyses the specifics of sound effects used in fiction films. The usage specificity of different variations of sounds has been determined, and the peculiarities of the sound effects creation process have been revealed using the method of observation and analytical conclusion. A unique role in the study was played by identifying the historical aspects of sound effects usage in feature films, which became possible by tracing the development of historical trends and directing the vector of the research field to the projection of the future.
41

Matsuda, Hiroshi, und Nobuo Machida. „Psychological responses to amplitude-modulated low-frequency sound“. INTER-NOISE and NOISE-CON Congress and Conference Proceedings 268, Nr. 6 (30.11.2023): 2838–45. http://dx.doi.org/10.3397/in_2023_0413.

Annotation:
Amplitude-modulated low-frequency sound is a sound in which a pure tone with a carrier frequency lower than 100 Hz serves as the carrier wave and the sound pressure level fluctuates continuously over time. This study measured psychological responses to amplitude-modulated low-frequency sounds using psychological questionnaires. We measured three sensations peculiar to low-frequency sounds, namely annoyance, vibratory sensation, and oppressive sensation caused by amplitude-modulated low-frequency sounds, using a seven-grade rating scale. In addition, we measured the loudness and the fluctuating sensation caused by amplitude-modulated low-frequency sounds using a magnitude estimation method. This paper discusses the relationship between the psychological response quantities and the physical quantities that constitute the amplitude-modulated low-frequency sound, as well as the dose-response relationship when exposed to amplitude-modulated low-frequency sound above the sensation threshold.
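The stimulus class described above, a pure-tone carrier below 100 Hz whose level fluctuates over time, is a standard amplitude-modulated signal and is easy to generate for illustration. The sketch below builds one such waveform with NumPy; the carrier frequency, modulation frequency, and modulation depth are example values, not the ones used in the experiments.

```python
# Example amplitude-modulated low-frequency tone: carrier below 100 Hz,
# sinusoidal level fluctuation. Parameter values are illustrative only.
import numpy as np

fs = 8000                 # sample rate [Hz]
t = np.arange(int(fs * 10)) / fs
fc = 40.0                 # carrier frequency [Hz], below 100 Hz
fm = 2.0                  # modulation frequency [Hz]
m = 0.8                   # modulation depth (0..1)

am = (1.0 + m * np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)
am /= (1.0 + m)           # keep the peak amplitude at 1
```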
42

Gautama Simanjuntak, Juniarto, Mega Putri Amelya, Fitri Nuraeni und Rika Raffiudin. „Keragaman Suara Tonggeret dan Jangkrik di Taman Nasional Gunung Gede Pangrango“. Jurnal Sumberdaya Hayati 6, Nr. 1 (03.12.2020): 20–25. http://dx.doi.org/10.29244/jsdh.6.1.20-25.

Annotation:
Indonesia is a country of high biodiversity and offers abundant bioacoustic material, but no bioacoustic data have been collected and archived for reference. Bioacoustics is the study of frequency range, sound amplitude and intensity, sound fluctuation, and sound patterns. It is very useful for population estimation and species determination. This insect bioacoustics research was carried out at Gunung Gede Pangrango National Park and aims to analyse the variety of sound frequencies of cicadas and crickets. The methods used were recording the sounds, editing and analysing the recordings with the Praat and Raven Lite 2.0 software, and analysing the environment. The sound analysis determined the maximum, minimum, and average frequencies. The results of the sound analysis were compared with the Singing Insects of North America (SINA) database. The environmental analysis covered temperature, air humidity, and light intensity. There are nine cicada sound recordings and twenty-four cricket sound recordings. The cicada sound is characteristically high (9,168.2 Hz) and the cricket sound characteristically low (3,311.80 Hz). Comparison with the SINA database shows that the cicada's sound resembles Tibicen marginalis and the cricket's sound resembles Grylodes sigillatus.
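The frequency measures reported above (maximum, minimum, and average frequency of a call) can be approximated from a recording with a short-time spectrum analysis. The sketch below shows one way to do this with SciPy; it is not the Praat/Raven Lite workflow used in the study, and the file name and the threshold for deciding which frames count are assumptions.

```python
# Rough frequency statistics of a recording from a spectrogram peak track.
# Not the Praat/Raven Lite procedure; file name and -40 dB cutoff are assumptions.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

fs, x = wavfile.read("cicada.wav")           # hypothetical recording
x = x.astype(float)
if x.ndim > 1:                               # mix a stereo recording to mono
    x = x.mean(axis=1)

f, t, sxx = spectrogram(x, fs=fs, nperseg=2048)
peak_idx = sxx.argmax(axis=0)                # dominant frequency bin per frame
peak_freq = f[peak_idx]
peak_power = sxx.max(axis=0)

# Keep only frames whose peak lies within 40 dB of the strongest frame,
# so silent or background-only frames do not distort the statistics.
keep = peak_power > peak_power.max() * 1e-4
track = peak_freq[keep]

print("maximum frequency:", track.max(), "Hz")
print("minimum frequency:", track.min(), "Hz")
print("average frequency:", track.mean(), "Hz")
```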
43

Zhang, Lu. „Design of Heart Sound Analyzer“. Advanced Materials Research 1042 (Oktober 2014): 131–34. http://dx.doi.org/10.4028/www.scientific.net/amr.1042.131.

Der volle Inhalt der Quelle
Annotation:
Heart sounds carry important physiological and pathological information, so information about a patient's condition can be obtained by detecting their heart sounds. In the hardware of the system, the heart sound sensor HKY06B is used to acquire the heart sound signal, and the DSP chip TMS320VC5416 is used to process it. De-noising based on the wavelet transform and the Hilbert-Huang transform (HHT), among other techniques, is applied during processing. The system comprises five steps: acquisition, de-noising, segmentation, feature extraction, and finally classification of the heart sounds.
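The de-noising step can be illustrated with a generic wavelet soft-thresholding routine. This is a minimal sketch using PyWavelets, with an assumed wavelet and decomposition level, and not the exact algorithm implemented on the TMS320VC5416.

```python
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db6", level=5):
    """Soft-threshold wavelet de-noising of a 1-D heart sound signal."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Noise level estimated from the finest-scale detail coefficients,
    # combined with the universal threshold.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(len(signal)))
    denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(signal)]
```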
APA, Harvard, Vancouver, ISO und andere Zitierweisen
44

Petersen, Manuel, Mesud Zaimovic und Albert Albers. „Evaluating emotionalizing effects of active sound designs“. INTER-NOISE and NOISE-CON Congress and Conference Proceedings 268, Nr. 4 (30.11.2023): 4689–700. http://dx.doi.org/10.3397/in_2023_0666.

Der volle Inhalt der Quelle
Annotation:
Numerous results from musicological research and traffic psychology show that emotions can be altered by certain sets of pitches or sound characteristics, and that emotions in turn influence our driving behaviour. Nevertheless, there is no research on how different artificial vehicle sounds could influence driving behaviour via emotions. We want to create an active sound design that can alter the driver's emotions to increase traffic safety in certain driving scenarios. To evaluate which harmonic compositions and sound characteristics could be used as stimuli, we first extracted sound characteristics from music pieces and sounds with a proven emotionalizing effect, and generated new sounds based on those characteristics that could also work in the context of an active sound design for electric vehicles. With these sounds, we conducted a subject study with 45 participants to evaluate whether people perceive the intended emotions and to determine the best sounds to use in future subject studies on their impact on drivers' behaviour. This paper describes the extracted sounds, their characteristics, and their psychoacoustic properties. Furthermore, it discusses the correlation between the sound properties and the subjects' perception of and emotional response to them.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
45

Yonemura, Miki, Shinichi Sakamoto, Hideo Tomizawa, Yasuhiro Ishiwata, Shiniji Nakazawa, Yuko Arai, Masayoshi Hamaguchi und Akihisa Takahashi. „Experimental studies on the effect of ceiling materials on the sound environment of underpass concourse using the station simulator“. INTER-NOISE and NOISE-CON Congress and Conference Proceedings 268, Nr. 8 (30.11.2023): 558–66. http://dx.doi.org/10.3397/in_2023_0093.

Der volle Inhalt der Quelle
Annotation:
Sound environments in railway stations are often noisy because of sound-reflective materials and many noise sources. Especially in underpass concourses, structure-borne sounds are radiated from the ceiling due to vibrations when trains pass by. This results in a high noise level, which often interferes with listening to the announcements. In this study, to examine interior materials that effectively reduce such noise in railway stations, the following measurements and laboratory experiments were carried out: 1) Field measurements of vibrations of the ceiling and environmental sound in stations in Tokyo. 2) Reproduction of vibrations and sounds in the station simulator (full-scale mock-up of a station building), under the four different types of ceilings. 3) Auditory tests to evaluate the noisiness of train passing sounds and environmental sounds, using reproduced sounds in the station simulator as test sounds. As a result, the ceiling with damping material was effective for structure-borne sound, while the ceiling with porous sound-absorbing material was effective for air-borne sound. This result suggests that it is important to consider the placement of ceiling materials according to the noise situations.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
46

van Erp-van der Kooij, Elaine, Lois F. de Graaf, Dennis A. de Kruijff, Daphne Pellegrom, Renilda de Rooij, Nian I. T. Welters und Jeroen van Poppel. „Using Sound Location to Monitor Farrowing in Sows“. Animals 13, Nr. 22 (16.11.2023): 3538. http://dx.doi.org/10.3390/ani13223538.

Der volle Inhalt der Quelle
Annotation:
Precision Livestock Farming systems can help pig farmers prevent health and welfare issues around farrowing. Five sows were monitored in two field studies. A Sorama L642V sound camera, visualising sound sources as coloured spots using a 64-microphone array, and a Bascom XD10-4 security camera with a built-in microphone were used in a farrowing unit. Firstly, sound spots were compared with audible sounds, using the Observer XT (Noldus Information Technology), analysing video data at normal speed. This gave many false positives, including visible sound spots without audible sounds. In total, 23 of 50 piglet births were visible, but none were audible. The sow’s behaviour changed when farrowing started. One piglet was silently crushed. Secondly, data were analysed at a 10-fold slower speed when comparing sound spots with audible sounds and sow behaviour. This improved results, but accuracy and specificity were still low. When combining audible sound with visible sow behaviour and comparing sound spots with combined sound and behaviour, the accuracy was 91.2%, the error was 8.8%, the sensitivity was 99.6%, and the specificity was 69.7%. We conclude that sound cameras are promising tools, detecting sound more accurately than the human ear. There is potential to use sound cameras to detect the onset of farrowing, but more research is needed to detect piglet births or crushing.
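The reported accuracy, error, sensitivity, and specificity follow the standard confusion-matrix definitions. The short sketch below computes them from hypothetical detection counts chosen only to illustrate the formulas; they are not the study's raw data.

```python
def detection_metrics(tp, fp, tn, fn):
    """Standard detection metrics from confusion-matrix counts."""
    total = tp + fp + tn + fn
    accuracy = (tp + tn) / total
    error = 1.0 - accuracy
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    return accuracy, error, sensitivity, specificity

# Hypothetical counts for illustration only.
print(detection_metrics(tp=250, fp=30, tn=69, fn=1))
```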
APA, Harvard, Vancouver, ISO und andere Zitierweisen
47

Kim, Hyun-Don, Kazunori Komatani, Tetsuya Ogata und Hiroshi G. Okuno. „Binaural Active Audition for Humanoid Robots to Localise Speech over Entire Azimuth Range“. Applied Bionics and Biomechanics 6, Nr. 3-4 (2009): 355–67. http://dx.doi.org/10.1155/2009/817874.

Der volle Inhalt der Quelle
Annotation:
We applied motion theory to robot audition to improve its otherwise inadequate performance. Motions are critical for overcoming the ambiguity and sparseness of the information obtained by two microphones. To realise this, we first designed a sound source localisation system integrating cross-power spectrum phase (CSP) analysis and an EM algorithm. The CSP of sound signals obtained with only two microphones was used to localise the sound source without having to measure impulse response data. The expectation-maximisation (EM) algorithm helped the system cope with several moving sound sources and reduce localisation errors. We then proposed a way of constructing a database of moving sounds to evaluate binaural sound source localisation. We evaluated our sound localisation method using artificial moving sounds and confirmed that it could effectively localise moving sounds slower than 1.125 rad/s. Consequently, we solved the problem of distinguishing whether sounds came from the front or the rear by rotating and/or tipping the robot's head, which was equipped with only two microphones. Our system was applied to a humanoid robot called SIG2, and we confirmed its ability to localise sounds over the entire azimuth range, with success rates for sound localisation in the front and rear areas of 97.6% and 75.6%, respectively.
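Cross-power spectrum phase analysis estimates the time delay of arrival between the two microphones from the phase of their cross-power spectrum. The following is a minimal Python sketch of that step alone; the paper additionally uses an EM algorithm and head motion, which are not shown here.

```python
import numpy as np

def csp_delay(x_left, x_right, fs):
    """Estimate the inter-microphone time delay via cross-power
    spectrum phase (CSP, also known as GCC-PHAT) analysis."""
    n = len(x_left) + len(x_right) - 1
    X = np.fft.rfft(x_left, n)
    Y = np.fft.rfft(x_right, n)
    cross = X * np.conj(Y)
    # Normalising by the magnitude keeps only the phase information.
    corr = np.fft.irfft(cross / (np.abs(cross) + 1e-12), n)
    corr = np.roll(corr, len(x_right) - 1)        # centre the zero lag
    lag = np.argmax(np.abs(corr)) - (len(x_right) - 1)
    return lag / fs   # delay in seconds; its sign indicates the source side
```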
APA, Harvard, Vancouver, ISO und andere Zitierweisen
48

Rudi Karma, Samsuddin Samsuddin und Sulpina Sulpina. „Struktur Produksi Bunyi Bahasa Anak Usia 2 Tahun (Studi Kasus Ilyas)“. GERAM 11, Nr. 1 (20.06.2023): 92–100. http://dx.doi.org/10.25299/geram.2023.vol11(1).12381.

Der volle Inhalt der Quelle
Annotation:
The structure of sound production in the language of 2-year-old children is an interesting issue in language research, because children's language production at this age is imperfect yet meaningful. In their language production, some sound omissions and changes occur. This research aims to describe the structure of sound production in 2-year-old children. It is field research conducted using a qualitative descriptive method. The findings indicate that the structure of sound production involves sound omissions and sound changes. Sound omissions occur in consonant sounds such as [r], [m], [n], [ŋ], [t], [k], and [l]. Sound changes occur in sounds such as [t→c], [s→c], [au→o], and [ai→e]. These omissions and changes occur consistently.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
49

Klett, Joseph. „Sound on Sound“. Sociological Theory 32, Nr. 2 (Juni 2014): 147–61. http://dx.doi.org/10.1177/0735275114536896.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
50

Park, Jihun. „Sound Propagation and Reconstruction Algorithm Based on Geometry“. International Journal of Online and Biomedical Engineering (iJOE) 15, Nr. 13 (30.09.2019): 86. http://dx.doi.org/10.3991/ijoe.v15i13.11212.

Der volle Inhalt der Quelle
Annotation:
This paper presents a method for simulating sound propagation and reconstruction for virtual reality applications. The algorithm developed in this paper is based on a ray-based sound theory. Given a 3-dimensional geometry and sound sources as inputs, sound effects can be computed over all boundary surfaces. In this paper, we present two approaches to computing the sound field: the first, called forward tracing, traces sounds emanating from the sound sources, while the second, called geometry-based computation, computes the possible propagation routes between sources and receivers. We compare the two approaches and propose a geometry-based sound computation method for outdoor simulation, which is computationally more efficient than forward sound tracing. The physical environment affects the sound propagation simulation through its impulse response. When a sound source waveform is convolved with the numerically computed impulse response in time, a synthetic sound is generated. This technique can easily be generalised to synthesize realistic stereo sounds for virtual reality applications. At the same time, the simulation result can be visualised using VRML.
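The convolution step described above can be sketched in a few lines. This assumes the impulse response has already been produced by the propagation simulation and simply convolves it with a dry source waveform; one impulse response per channel would give a stereo result.

```python
from scipy.signal import fftconvolve

def synthesize(source_waveform, impulse_response):
    """Convolve a dry source waveform with a numerically computed
    impulse response to obtain the sound heard at the receiver."""
    return fftconvolve(source_waveform, impulse_response)

# For stereo output, apply one impulse response per ear/channel, e.g.:
# left  = synthesize(dry, ir_left)
# right = synthesize(dry, ir_right)
```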
APA, Harvard, Vancouver, ISO und andere Zitierweisen
