Journal articles on the topic "Sound data"

Follow this link to see other types of publications on this topic: Sound data.

Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles.

Consult the top 50 journal articles on the topic "Sound data".

An "Add to bibliography" button is available next to every work in the list. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a .pdf file and read its abstract online, whenever such details are available in the metadata.

Browse journal articles from many different disciplines and compile your bibliography correctly.

1

Nara, Shigetoshi, Naoya Abe, Masato Wada and Jousuke Kuroiwa. "A Novel Method of Sound Data Description by Means of Cellular Automata and Its Application to Data Compression". International Journal of Bifurcation and Chaos 09, no. 06 (June 1999): 1211–17. http://dx.doi.org/10.1142/s0218127499000869.

Abstract:
A novel method of binary data description using cellular automata is proposed. As an actual example, several trials are made to describe vocal and musical sound data digitized in a standard data format. The reproduced sounds are evaluated by "actual listening", by calculating "the signal to noise ratio" with respect to the original sound data, and by comparing "the Fourier power spectra" with those of the original sounds. The results show that this method is quite effective and provides a new means of data compression applicable in any field of digital data recording or transferring.
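
For orientation, the two objective checks named in the abstract, the signal-to-noise ratio of the reproduced signal with respect to the original and the comparison of Fourier power spectra, can be sketched as follows (a minimal illustration in Python, not the authors' code):

```python
import numpy as np

def snr_db(original: np.ndarray, reproduced: np.ndarray) -> float:
    """SNR in dB, treating (original - reproduced) as the noise."""
    noise = original - reproduced
    return 10.0 * np.log10(np.sum(original**2) / np.sum(noise**2))

def power_spectrum(x: np.ndarray) -> np.ndarray:
    """Fourier power spectrum |X(f)|^2 of a 1-D signal."""
    return np.abs(np.fft.rfft(x))**2

# Toy check: a sine wave "reproduced" with a small additive error.
t = np.linspace(0, 1, 8000, endpoint=False)
original = np.sin(2 * np.pi * 440 * t)
reproduced = original + 0.01 * np.random.randn(t.size)
print(snr_db(original, reproduced))  # about 37 dB for this toy example
spectral_gap = np.abs(power_spectrum(original) - power_spectrum(reproduced)).max()
```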
2

Bakir, Çiğdem. "Compressing English Speech Data with Hybrid Methods without Data Loss". International Journal of Applied Mathematics Electronics and Computers 10, no. 3 (September 30, 2022): 68–75. http://dx.doi.org/10.18100/ijamec.1166951.

Abstract:
Understanding the mechanism of speech formation is of great importance for successful coding of the speech signal. Speech coding is used in various applications, from authenticating audio files to linking a speech recording to its data acquisition device (e.g. a microphone), and it is vital for the acquisition, analysis and evaluation of sound in forensic investigations of criminal events. For collecting, processing, analysing, extracting and evaluating speech or sounds recorded as audio files, which play an important role in crime detection, the audio must be compressed without data loss. Since much voice-changing software is available today, correctly interpreting recorded speech files is essential for establishing their originality: an unintelligible recording may require signal processing, noise removal and filtering to make the speech comprehensible; it must be determined whether the recording has been manipulated by additions or deletions and whether it is original; and coded sounds must be decoded and the decoded sounds transcribed. This study first reviews what sound coding is, its purposes and areas of use, and how sound coding is classified by features and techniques. Speech coding was then performed on English audio data, a real dataset of approximately 100,000 voice recordings, using waveform, vocoder and hybrid methods, and the success of each method was measured on the system we built. Hybrid models gave more successful results than the others. The results obtained will serve as a basis for our future work.
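
As a minimal illustration of the lossless requirement the abstract emphasises (a toy sketch, not the authors' coder), the following delta-encodes a waveform with a one-step predictor and reconstructs it exactly; practical lossless codecs add entropy coding on top of such residuals:

```python
import numpy as np

def encode(x: np.ndarray) -> np.ndarray:
    return np.diff(x, prepend=0)   # residual of a one-step predictor

def decode(residual: np.ndarray) -> np.ndarray:
    return np.cumsum(residual)     # exact inverse: no data loss

x = np.array([3, 5, 4, 4, 7], dtype=np.int32)  # toy waveform samples
assert np.array_equal(decode(encode(x)), x)    # round-trip is bit-exact
```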
3

Kim, Eunbeen, Jaeuk Moon, Jonghwa Shim and Eenjun Hwang. "DualDiscWaveGAN-Based Data Augmentation Scheme for Animal Sound Classification". Sensors 23, no. 4 (February 10, 2023): 2024. http://dx.doi.org/10.3390/s23042024.

Abstract:
Animal sound classification (ASC) refers to the automatic identification of animal categories by sound, and is useful for monitoring rare or elusive wildlife. Thus far, deep-learning-based models have shown good performance in ASC when training data is sufficient, but suffer from severe performance degradation if not. Recently, generative adversarial networks (GANs) have shown the potential to solve this problem by generating virtual data. However, in a multi-class environment, existing GAN-based methods need to construct separate generative models for each class. Additionally, they only consider the waveform or spectrogram of sound, resulting in poor quality of the generated sound. To overcome these shortcomings, we propose a two-step sound augmentation scheme using a class-conditional GAN. First, common features are learned from all classes of animal sounds, and multiple classes of animal sounds are generated based on the features that consider both waveforms and spectrograms using class-conditional GAN. Second, we select data from the generated data based on the confidence of the pretrained ASC model to improve classification performance. Through experiments, we show that the proposed method improves the accuracy of the basic ASC model by up to 18.3%, which corresponds to a performance improvement of 13.4% compared to the second-best augmentation method.
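
A hedged sketch of the scheme's second step as the abstract describes it: keep only generated samples that a pretrained ASC model assigns to the intended class with high confidence. The scikit-learn-style `predict_proba` interface and the 0.9 threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np

def select_confident(generated: np.ndarray, target_labels: np.ndarray,
                     asc_model, threshold: float = 0.9):
    """Filter GAN-generated samples by the pretrained classifier's confidence."""
    probs = asc_model.predict_proba(generated)             # shape (N, num_classes)
    conf = probs[np.arange(len(target_labels)), target_labels]
    keep = conf >= threshold                               # confident samples only
    return generated[keep], target_labels[keep]
```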
4

Han, Hong-Su. "Sound data interpolating circuit". Journal of the Acoustical Society of America 100, no. 2 (1996): 692. http://dx.doi.org/10.1121/1.416225.

5

Son, Myoung-Jin, and Seok-Pil Lee. "COVID-19 Diagnosis from Crowdsourced Cough Sound Data". Applied Sciences 12, no. 4 (February 9, 2022): 1795. http://dx.doi.org/10.3390/app12041795.

Abstract:
The highly contagious and rapidly mutating COVID-19 virus is affecting individuals worldwide. A rapid and large-scale method for COVID-19 testing is needed to prevent infection. Cough testing using AI has been shown to be potentially valuable. In this paper, we propose a COVID-19 diagnostic method based on an AI cough test. We used only crowdsourced cough sound data to distinguish between the cough sound of COVID-19-positive people and that of healthy people. First, we used the COUGHVID cough database to segment only the cough sound from the original cough data. An effective audio feature set was then extracted from the segmented cough sounds. A deep learning model was trained on the extracted feature set. The COVID-19 diagnostic system constructed using this method had a sensitivity of 93% and a specificity of 94%, and achieved better results than models trained by other existing methods.
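
For reference, the reported sensitivity and specificity follow the standard definitions and can be computed from binary predictions as below (a minimal sketch, not the authors' evaluation code):

```python
import numpy as np

def sensitivity_specificity(y_true: np.ndarray, y_pred: np.ndarray):
    """y_true/y_pred: 1 = COVID-19-positive cough, 0 = healthy cough."""
    tp = np.sum((y_true == 1) & (y_pred == 1))   # true positives
    tn = np.sum((y_true == 0) & (y_pred == 0))   # true negatives
    fp = np.sum((y_true == 0) & (y_pred == 1))   # false positives
    fn = np.sum((y_true == 1) & (y_pred == 0))   # false negatives
    return tp / (tp + fn), tn / (tn + fp)        # sensitivity, specificity
```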
6

Yamamoto, Takaharu. "Digital sound data storing device". Journal of the Acoustical Society of America 93, no. 1 (January 1993): 596. http://dx.doi.org/10.1121/1.405540.

7

Osborne, D. W. "Sound and data on DBS". Electronics and Power 31, no. 6 (1985): 449. http://dx.doi.org/10.1049/ep.1985.0283.

8

Kaper, H. G., E. Wiebel and S. Tipei. "Data sonification and sound visualization". Computing in Science & Engineering 1, no. 4 (1999): 48–58. http://dx.doi.org/10.1109/5992.774840.

9

Suhartini, Endang, Murdianto Murdianto and Nanik Setyowati. "OPTIMALISASI PELAYANAN BINA KOMUNIKASI MELALUI PROGRAM PERSEPSI BUNYI DAN IRAMA (BKPBI), UNTUK ANAK YANG BERKEBUTUHAN KUSUS TUNARUNGGU DI SDLB NEGERI JENANGAN PONOROGO". BASICA: Journal of Arts and Science in Primary Education 1, no. 1 (May 15, 2021): 58–71. http://dx.doi.org/10.37680/basica.v1i1.777.

Abstract:
Children with special needs have characteristics that differ from those of typical children. Deaf children in particular have hearing impairments, either total or with residual hearing, and their communication difficulties require supporting services. SDLB Negeri Jenangan Ponorogo addresses this with a communication development programme based on the Sound and Rhythm Perception Program (BKPBI). This study discusses the stages of the service, the strategies for implementing the learning, and the results of the sound and rhythm perception programme at the school. The research uses a qualitative methodology with a case-study approach. The data are words and actions, and the data sources are the principal and teachers at SDLB Negeri Jenangan; data were collected through interviews, observation and documentation, and analysed by data reduction, data presentation, and drawing conclusions. The analysis shows that the stages of the communication service are sound detection, sound discrimination, sound identification and sound comprehension; the learning implementation strategy uses review, overview, presentation, exercise and summary, in both classical and individual models. As a result of the programme, the deaf children are able to recognise sounds and respond readily to them, including background noises, the nature of sounds and self-created sounds, up to recognising types of musical instruments, and are able to identify sounds and detect the direction of a sound.
10

Roelandt, N., P. Aumond and L. Moisan. "Crowdsourced Acoustic Open Data Analysis with FOSS4G Tools". International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-4/W1-2022 (August 6, 2022): 387–93. http://dx.doi.org/10.5194/isprs-archives-xlviii-4-w1-2022-387-2022.

Abstract:
Abstract. NoiseCapture is an Android application developed by Gustave Eiffel University and the French National Centre for Scientific Research as the central element of a participatory approach to environmental noise mapping. The application is open source, and all of its data are freely available. This study presents the results of a first exploratory analysis of three years of data collection through the lens of sound sources. The analysis is based only on the tags given by users, not on the sound spectrum of the measurements, which will be studied at a later stage. The first results are encouraging: we observed well-known temporal sound source dynamics in the dataset, such as road sound dynamics related to commuting, and bird songs. We also found correlations between wind and rain tags and the corresponding measurements by the national meteorological service. The context of the study, the Free and Open Source Software tools and techniques used, and the benefits of literate programming are presented.
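
A sketch of the kind of tag-versus-weather check the abstract reports, assuming hypothetical file and column names (`noisecapture_tracks.csv`, `recorded_at`, `tags`, `meteo_daily.csv`, `rainfall_mm`) rather than the actual NoiseCapture schema:

```python
import pandas as pd

# Count per-day measurements tagged "rain" (hypothetical schema).
tracks = pd.read_csv("noisecapture_tracks.csv", parse_dates=["recorded_at"])
tracks["day"] = tracks["recorded_at"].dt.normalize()
rain_tags = tracks.groupby("day")["tags"].apply(
    lambda s: s.str.contains("rain", na=False).sum())

# Compare with daily rainfall from the meteorological service.
weather = pd.read_csv("meteo_daily.csv", parse_dates=["day"]).set_index("day")
print(rain_tags.corr(weather["rainfall_mm"]))   # Pearson correlation
```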
11

Kim, Yunbin, Jaewon Sa, Yongwha Chung, Daihee Park and Sungju Lee. "Resource-Efficient Pet Dog Sound Events Classification Using LSTM-FCN Based on Time-Series Data". Sensors 18, no. 11 (November 18, 2018): 4019. http://dx.doi.org/10.3390/s18114019.

Abstract:
The use of IoT (Internet of Things) technology for the management of pet dogs left alone at home is increasing. This includes tasks such as automatic feeding, operation of play equipment, and location detection. Classification of the vocalizations of pet dogs using information from a sound sensor is an important method to analyze the behavior or emotions of dogs that are left alone. These sounds should be acquired by attaching the IoT sound sensor to the dog, and then classifying the sound events (e.g., barking, growling, howling, and whining). However, sound sensors tend to transmit large amounts of data and consume considerable amounts of power, which presents issues in the case of resource-constrained IoT sensor devices. In this paper, we propose a way to classify pet dog sound events and improve resource efficiency without significant degradation of accuracy. To achieve this, we only acquire the intensity data of sounds by using a relatively resource-efficient noise sensor. This presents issues as well, since it is difficult to achieve sufficient classification accuracy using only intensity data due to the loss of information from the sound events. To address this problem and avoid significant degradation of classification accuracy, we apply long short-term memory-fully convolutional network (LSTM-FCN), which is a deep learning method, to analyze time-series data, and exploit bicubic interpolation. Based on experimental results, the proposed method based on noise sensors (i.e., Shapelet and LSTM-FCN for time-series) was found to improve energy efficiency by 10 times without significant degradation of accuracy compared to typical methods based on sound sensors (i.e., mel-frequency cepstrum coefficient (MFCC), spectrogram, and mel-spectrum for feature extraction, and support vector machine (SVM) and k-nearest neighbor (K-NN) for classification).
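
A minimal sketch of an LSTM-FCN classifier for 1-D intensity time series, assuming Keras; the layer sizes follow the common LSTM-FCN pattern rather than the paper's exact configuration, and the bicubic upsampling of the intensity data is omitted:

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_lstm_fcn(timesteps: int, num_classes: int) -> tf.keras.Model:
    inp = layers.Input(shape=(timesteps, 1))
    # Recurrent branch: a small LSTM with heavy dropout.
    x = layers.LSTM(8)(inp)
    x = layers.Dropout(0.8)(x)
    # Fully convolutional branch: stacked Conv1D blocks, globally pooled.
    y = layers.Conv1D(128, 8, padding="same", activation="relu")(inp)
    y = layers.Conv1D(256, 5, padding="same", activation="relu")(y)
    y = layers.Conv1D(128, 3, padding="same", activation="relu")(y)
    y = layers.GlobalAveragePooling1D()(y)
    # Concatenate both views and classify sound events (bark, growl, ...).
    out = layers.Dense(num_classes, activation="softmax")(
        layers.concatenate([x, y]))
    return tf.keras.Model(inp, out)

model = build_lstm_fcn(timesteps=128, num_classes=4)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```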
12

Vardaxis, Nikolaos-Georgios, and Delphine Bard. "Review of acoustic comfort evaluation in dwellings: Part III—airborne sound data associated with subjective responses in laboratory tests". Building Acoustics 25, no. 4 (August 21, 2018): 289–305. http://dx.doi.org/10.1177/1351010x18788685.

Abstract:
Acoustic comfort has been used in engineering to refer to conditions of low noise levels or annoyance, while current standardized methods for airborne and impact sound reduction are used to assess acoustic comfort in dwellings. However, the results and descriptors acquired from acoustic measurements do not represent the human perception of sound or comfort levels. This article is a review of laboratory studies concerning airborne sound in dwellings. Specifically, this review presents studies that approach acoustic comfort via the association of objective and subjective data in laboratory listening tests, combining airborne sound acoustic data, and subjective ratings. The presented studies are tabulated and evaluated using Bradford Hill’s criteria. Many of them attempt to predict subjective noise annoyance and find the best single number quantity for that reason. The results indicate that subjective response to airborne sound is complicated and varies according to different sound stimuli. It can be associated sufficiently with airborne sound in general but different descriptors relate best to music sounds or speech stimuli. The inclusion of low frequencies down to 50 Hz in the measurements seems to weaken the association of self-reported responses to airborne sound types except for the cases of music stimuli.
13

Aiello, Luca Maria, Rossano Schifanella, Daniele Quercia and Francesco Aletta. "Chatty maps: constructing sound maps of urban areas from social media data". Royal Society Open Science 3, no. 3 (March 2016): 150690. http://dx.doi.org/10.1098/rsos.150690.

Abstract:
Urban sound has a huge influence over how we perceive places. Yet, city planning is concerned mainly with noise, simply because annoying sounds come to the attention of city officials in the form of complaints, whereas general urban sounds do not come to the attention as they cannot be easily captured at city scale. To capture both unpleasant and pleasant sounds, we applied a new methodology that relies on tagging information of georeferenced pictures to the cities of London and Barcelona. To begin with, we compiled the first urban sound dictionary and compared it with the one produced by collating insights from the literature: ours was experimentally more valid (if correlated with official noise pollution levels) and offered a wider geographical coverage. From picture tags, we then studied the relationship between soundscapes and emotions. We learned that streets with music sounds were associated with strong emotions of joy or sadness, whereas those with human sounds were associated with joy or surprise. Finally, we studied the relationship between soundscapes and people's perceptions and, in so doing, we were able to map which areas are chaotic, monotonous, calm and exciting. Those insights promise to inform the creation of restorative experiences in our increasingly urbanized world.
14

Isodarus, Praptomo Baryadi. "Facilitating Sounds in Indonesian". Journal of Language and Literature 18, no. 2 (September 12, 2018): 102–10. http://dx.doi.org/10.24071/joll.v18i2.1566.

Abstract:
This article presents the research results on facilitating sounds in Indonesian. A facilitating sound is a sound which eases the pronunciation of a sound sequence in a word. Based on the data analysis, the facilitating sounds in Indonesian are [ə], [y], [w], [ʔ], [m], [n], [ŋ], [ɲ] and [ŋə]. Sound [ə] facilitates the pronunciation of consonant clusters in a word. Sound [y] facilitates the pronunciation of the sound sequences [ia] and [aia] across syllables and morphemes. Sound [w] facilitates the pronunciation of the sound sequence [ua] across syllables and morphemes, and of the sequences [oa] and [aua] across morphemes. Sound [ʔ] facilitates the sound sequence [aa] across syllables and morphemes, and the sequence [oa] across syllables. Sound [m] facilitates the pronunciation of the nasal [N] in the prefixes me(N)- or pe(N)- when the base morpheme begins with one of the sounds [b, p, f, v]. Sound [n] facilitates its pronunciation when the base morpheme begins with [d] or [t]. Sound [ŋ] facilitates it when the base morpheme begins with one of the vowels [a, i, u, e, ɛ, ə, o, ɔ] or with [g], [h] or [k]. Sound [ɲ] facilitates it when the base morpheme begins with one of the sounds [j, c, s]. Sound [ŋə] facilitates the pronunciation of words formed with the prefixes me(N)- or pe(N)- on a one-syllable base morpheme. Keywords: facilitating sound, phonology, Indonesian
15

Dick, Frederic, Ayse Pinar Saygin, Gaspare Galati, Sabrina Pitzalis, Simone Bentrovato, Simona D'Amico, Stephen Wilson, Elizabeth Bates and Luigi Pizzamiglio. "What is Involved and What is Necessary for Complex Linguistic and Nonlinguistic Auditory Processing: Evidence from Functional Magnetic Resonance Imaging and Lesion Data". Journal of Cognitive Neuroscience 19, no. 5 (May 2007): 799–816. http://dx.doi.org/10.1162/jocn.2007.19.5.799.

Abstract:
We used functional magnetic resonance imaging (fMRI) in conjunction with a voxel-based approach to lesion symptom mapping to quantitatively evaluate the similarities and differences between brain areas involved in language and environmental sound comprehension. In general, we found that language and environmental sounds recruit highly overlapping cortical regions, with cross-domain differences being graded rather than absolute. Within language-based regions of interest, we found that in the left hemisphere, language and environmental sound stimuli evoked very similar volumes of activation, whereas in the right hemisphere, there was greater activation for environmental sound stimuli. Finally, lesion symptom maps of aphasic patients based on environmental sounds or linguistic deficits [Saygin, A. P., Dick, F., Wilson, S. W., Dronkers, N. F., & Bates, E. Shared neural resources for processing language and environmental sounds: Evidence from aphasia. Brain, 126, 928–945, 2003] were generally predictive of the extent of blood oxygenation level dependent fMRI activation across these regions for sounds and linguistic stimuli in young healthy subjects.
16

Lee, Jang Hyung, Sun Young Kyung, Pyung Chun Oh, Kwang Gi Kim and Dong Jin Shin. "Heart Sound Classification Using Multi Modal Data Representation and Deep Learning". Journal of Medical Imaging and Health Informatics 10, no. 3 (March 1, 2020): 537–43. http://dx.doi.org/10.1166/jmihi.2020.2987.

Abstract:
Heart anomalies are an important class of medical conditions from personal, public health and social perspectives, so accurate and timely diagnosis is important. The heartbeat features two well-known amplitude peaks, termed S1 and S2. Some sound classification models rely on segmented sound intervals referenced to the locations of detected S1 and S2 peaks, which are often missing due to physiological causes and/or artifacts of the sound sampling process. The constituent and combined models we propose are free from segmentation, which makes them more robust and advantageous from a reliability standpoint. An intuitive phonocardiogram representation with a relatively simple deep learning architecture was found to be effective for classifying normal and abnormal heart sounds, and a frequency-spectrum-based deep learning network also produced competitive classification results. When the classification models were merged into one via an SVM, performance improved further. The SVM classification model, comprising two time-domain submodels and a frequency-domain submodel, produced 0.9175 sensitivity, 0.8886 specificity and 0.9012 accuracy.
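
A hedged sketch of the fusion step: per-recording scores from two time-domain submodels and one frequency-domain submodel feed an SVM that makes the final normal/abnormal decision. The random scores and labels below are placeholders, not the paper's data or models.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
p_time1, p_time2, p_freq = rng.random((3, 100))  # placeholder submodel scores
labels = rng.integers(0, 2, 100)                 # placeholder: 0 normal, 1 abnormal

features = np.stack([p_time1, p_time2, p_freq], axis=1)  # one row per recording
fusion = SVC(kernel="rbf").fit(features, labels)         # SVM merges the submodels
predictions = fusion.predict(features)
```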
17

Kihara, Yoshiro. "ROM circuit for reducing sound data". Journal of the Acoustical Society of America 94, no. 1 (July 1993): 612. http://dx.doi.org/10.1121/1.407016.

18

Ballatore, Andrea, David Gordon and Alexander P. Boone. "Sonifying data uncertainty with sound dimensions". Cartography and Geographic Information Science 46, no. 5 (August 15, 2018): 385–400. http://dx.doi.org/10.1080/15230406.2018.1495103.

19

Rika Kustina. "ONOMATOPE BAHASA DEVAYAN". Jurnal Metamorfosa 8, no. 1 (January 31, 2020): 112–22. http://dx.doi.org/10.46244/metamorfosa.v8i1.348.

Abstract:
Onomatopoeia is the naming of objects or actions by imitating sounds. Such imitation covers not only animal, human, natural or otherwise audible sounds, but also sounds that describe moving objects, collisions, and human feelings or emotions. In this study, onomatopoeia is treated as the result of an imitation of sound (more or less resembling the original) that is nevertheless arbitrary. The study aims to describe Devayan onomatopoeia for natural sounds, animal sounds and human sounds, using a qualitative descriptive approach. The data sources are seven community leaders of Simeulue, native speakers of the Devayan language in Simeulue Cut, and the data were obtained from sounds imitated by these speakers, collected through interview, recording, listening and note-taking techniques. The results show about 26 imitations of natural sounds, including the sound of a waterfall, Druuhhmm!, the sound of strong wind, Ffeooff!, and the sound of thunder, geudamdum!. About 25 animal sounds were found, such as the sound of a buffalo, ongng…a..k!, a rooster at dawn, ku.ku..ut…!, and a cat, meauu!. Finally, about 19 imitations of human sounds were found, including a sneeze, hacyhihh!, a cough, huk..uhuk!, and the cry when a finger is cut by a knife, auch!. The word forms in the imitated-sound data include morpheme compounding to indicate repetition, lengthening of a sound to indicate activities and states that last a long time, and condensation of sounds, marked with small letters, to indicate something fast. Speakers use these imitations in a variety of conditions. Keywords: onomatopoeia, language, Devayan
20

Saldanha, Jane, Shaunak Chakraborty, Shruti Patil, Ketan Kotecha, Satish Kumar and Anand Nayyar. "Data augmentation using Variational Autoencoders for improvement of respiratory disease classification". PLOS ONE 17, no. 8 (August 12, 2022): e0266467. http://dx.doi.org/10.1371/journal.pone.0266467.

Abstract:
Computerized auscultation of lung sounds is gaining importance today with the availability of lung sound recordings and their potential to overcome the limitations of traditional diagnosis methods for respiratory diseases. The publicly available ICBHI respiratory sounds database is severely imbalanced, making it difficult for a deep learning model to generalize and provide reliable results. This work aims to synthesize respiratory sounds of various categories using variants of Variational Autoencoders, namely Multilayer Perceptron VAE (MLP-VAE), Convolutional VAE (CVAE) and Conditional VAE, and to compare the influence of augmenting the imbalanced dataset on the performance of various lung sound classification models. We evaluated the quality of the synthetic respiratory sounds using metrics such as Fréchet Audio Distance (FAD), cross-correlation and Mel cepstral distortion. Our results showed that MLP-VAE achieved an average FAD of 12.42 over all classes, whereas Convolutional VAE and Conditional VAE achieved average FADs of 11.58 and 11.64 over all classes, respectively. A significant improvement in the classification performance metrics was observed upon augmenting the imbalanced dataset for certain minority classes, and a marginal improvement for the other classes. Hence, our work shows that deep learning-based lung sound classification models are not only a promising alternative to traditional methods but can also achieve a significant performance boost upon augmenting an imbalanced training set.
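
For orientation, the Fréchet distance underlying FAD compares Gaussian statistics of embeddings of real and synthetic audio; a minimal sketch, assuming embeddings from a pretrained audio model (such as VGGish) are already available:

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(emb_real: np.ndarray, emb_fake: np.ndarray) -> float:
    """Both inputs: (num_clips, embedding_dim) arrays of audio embeddings."""
    mu1, mu2 = emb_real.mean(axis=0), emb_fake.mean(axis=0)
    s1 = np.cov(emb_real, rowvar=False)
    s2 = np.cov(emb_fake, rowvar=False)
    covmean = sqrtm(s1 @ s2).real    # matrix square root; drop round-off imaginaries
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(s1 + s2 - 2 * covmean))
```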
21

Keikhosrokiani, Pantea, A. Bhanupriya Naidu A/P Anathan, Suzi Iryanti Fadilah, Selvakumar Manickam and Zuoyong Li. "Heartbeat sound classification using a hybrid adaptive neuro-fuzzy inferences system (ANFIS) and artificial bee colony". DIGITAL HEALTH 9 (January 2023): 205520762211507. http://dx.doi.org/10.1177/20552076221150741.

Abstract:
Cardiovascular disease is one of the main causes of death worldwide, and it can be diagnosed by listening with a stethoscope for murmurs in the heartbeat. A murmur in the lub-dub cycle indicates an abnormality in the heart. However, detecting murmurs with a stethoscope requires long training before a physician can recognise the sound reliably, and existing studies show that young physicians find such heart-sound detection difficult. Computerised methods and data analytics for detecting and classifying heartbeat sounds improve the overall quality of sound detection. Many studies have worked on classifying heartbeat sounds, but they lack a method with high accuracy. This research therefore classifies heartbeat sounds using a novel Adaptive Neuro-Fuzzy Inference System (ANFIS) optimised by an artificial bee colony (ABC). The data are cleaned and pre-processed, and MFCCs are extracted from the heartbeat sounds. The proposed ABC-ANFIS is then run on the pre-processed heartbeat sounds and its accuracy is calculated. The results indicate that the proposed ABC-ANFIS model achieved 93% accuracy for the murmur class, higher than ANFIS, PSO-ANFIS, SVM, KSTM, KNN and other existing approaches. This study can thus assist physicians in classifying heartbeat sounds to detect cardiovascular disease in its early stages.
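
A sketch of the MFCC preprocessing step named in the abstract, assuming librosa; the file name and the choice of 13 coefficients are illustrative defaults, not values from the paper:

```python
import librosa
import numpy as np

y, sr = librosa.load("heartbeat.wav", sr=None)      # hypothetical recording
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # shape (13, num_frames)
feature_vector = mfcc.mean(axis=1)                  # fixed-length input for ANFIS
```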
22

Lee, Sheen-Woo, Sang Hoon Lee, Zhen Cheng and Woon Seung Yeo. "Auditory Display of Fluorescence Image Data in an In Vivo Tumor Model". Diagnostics 12, no. 7 (July 16, 2022): 1728. http://dx.doi.org/10.3390/diagnostics12071728.

Abstract:
Objectives: This research aims to apply an auditory display for tumor imaging using fluorescence data, discuss its feasibility for in vivo tumor evaluation, and check its potential for assisting enhanced cancer perception. Methods: Xenografted mice underwent fluorescence imaging after an injection of cy5.5-glucose. Spectral information from the raw data was parametrized to emphasize the near-infrared fluorescence information, and the resulting parameters were mapped to control a sound synthesis engine in order to provide the auditory display. Drag–click maneuvers using in-house data navigation software generated sound from regions of interest (ROIs) in vivo. Results: Four different representations of the auditory display were acquired per ROI: (1) audio spectrum, (2) waveform, (3) numerical signal-to-noise ratio (SNR), and (4) the sound itself. SNRs were compared for statistical analysis. Compared with the no-tumor area, the tumor area produced sounds with a heterogeneous spectrum and waveform, and featured a higher SNR as well (3.63 ± 8.41 vs. 0.42 ± 0.085, p < 0.05). Sound from the tumor was perceived by the naked ear as high-timbred and unpleasant. Conclusions: By accentuating the specific tumor spectrum, an auditory display of fluorescence imaging data can generate sound which helps the listener detect and discriminate small tumorous conditions in living animals. Despite some practical limitations, it can aid in the translation of fluorescent images by facilitating information transfer to the clinician in in vivo tumor imaging.
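
A minimal sketch of parameter-mapping sonification in the spirit of the abstract: normalized data values from an ROI drive the pitch and loudness of a sine synthesizer. The mapping ranges are illustrative assumptions, not the study's synthesis engine.

```python
import numpy as np

def sonify(values: np.ndarray, sr: int = 44100, dur: float = 0.1) -> np.ndarray:
    """Map each value in [0, 1] to a short tone; return the concatenated audio."""
    tones = []
    for v in values:
        freq = 220.0 + 1760.0 * v    # stronger signal -> higher pitch
        t = np.arange(int(sr * dur)) / sr
        tones.append(v * np.sin(2 * np.pi * freq * t))  # and louder amplitude
    return np.concatenate(tones)

audio = sonify(np.array([0.1, 0.8, 0.9, 0.2]))  # toy ROI intensity profile
```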
23

Al Aziiz, Arief Nur Rahman, and Muhammad Ridwan. "Phonological Variation of Pasar Kliwon Arabic Dialect Surakarta". PRASASTI: Journal of Linguistics 4, no. 1 (May 11, 2019): 1. http://dx.doi.org/10.20961/prasasti.v4i1.4146.

Abstract:
This article studies sound variations and sound changes in the Arabic dialect of Pasar Kliwon. Data were gathered using the observation (simak) and conversation (cakap) methods, with recording (rekam) and note-taking (catat) techniques, based on a question list of 120 Swadesh vocabulary items. Data analysis used the padan method and relied on the informant's speech organs, following sound change theory according to Crowley (1992) and Muslich (2012). Vowel sounds in the Pasar Kliwon Arabic dialect divide into two kinds: short and long. There are twenty-seven consonant sounds, divided into seven kinds: plosive, fricative, affricate, liquid, voiced, voiceless and velarised. The semi-vowel variations are wawu and yā’. Vowel sound changes divide into four kinds: lenition, anaptyxis, apocope and metathesis. Consonant sound changes divide into four kinds: lenition, anaptyxis, apocope and syncope. The diphthong sound change is monophthongisation.
24

Milind Gaikwad, Shrutika, Vedanti Arvind Patil and Aparna Nandurka. "Performance of normal hearing preschool children on audiometric Ling's six sound test". Journal of Otolaryngology-ENT Research 11, no. 6 (2019): 239–43. http://dx.doi.org/10.15406/joentr.2019.11.00442.

Abstract:
Introduction: Ling's six sound test is a quick and simple test to ascertain access to the speech sounds essential for developing optimal listening and speaking skills. However, most of the normative data available for Ling's six sound test are adult-based, so there is a need to develop normative data for subsequent use with children receiving early intervention. This study aims at obtaining the awareness and identification thresholds for the Ling six sounds in normal-hearing preschool children between 3 and 6 years of age. Methods: Each of the six sounds of the test, namely |a|, |i|, |u|, |s|, |sh| and |m|, was presented in the sound field through an audiometer to fifty 3- to 6-year-old children with normal hearing sensitivity in a sound-treated room. Each child was explained the task and was conditioned well before testing. Both awareness and identification thresholds were obtained; the lowest level at which the child responded was noted as the threshold. Results: The lowest awareness and identification thresholds were obtained for the sound |a| (10.3 dB HL and 15.4 dB HL, respectively), whereas the highest were obtained for the sound |s| (18.2 dB HL and 24 dB HL, respectively). A significant difference was seen in thresholds across all the sounds for both awareness and identification. Conclusions: The differences in thresholds across the sounds for both awareness and identification are due to several higher-order factors as well as the acoustic and spectral features of each of the Ling six sounds.
25

Shin, Sanghyun, Abhishek Vaidya and Inseok Hwang. "Helicopter Cockpit Audio Data Analysis to Infer Flight State Information". Journal of the American Helicopter Society 65, no. 3 (July 1, 2020): 1–8. http://dx.doi.org/10.4050/jahs.65.032001.

Abstract:
In recent years, the National Transportation Safety Board has highlighted the importance of analyzing flight data as one of the effective methods to improve the safety and efficiency of helicopter operations. Since cockpit audio data contain various sounds from engines, alarms, crew conversations, and other sources within a cockpit, analyzing cockpit audio data can help identify the causes of incidents and accidents. Among the various types of sounds in cockpit audio data, this paper focuses on cockpit alarm and engine sounds. It proposes cockpit audio analysis algorithms that can detect the types and occurrence times of alarm sounds in an abnormal flight and estimate engine-related flight parameters such as engine torque. This is achieved as follows: for alarm sound analysis, by finding the highest correlation with the short-time Fourier transform and applying the Cumulative Sum Control Chart (CUSUM) to a database of the characteristic features of the alarm; and for engine sound analysis, by using data mining and statistical modeling techniques to identify specific frequencies associated with engine operations. The proposed algorithm is successfully applied to a set of simulated audio data generated by the X-Plane flight simulator, and to real audio data recorded by GoPro cameras in Sikorsky S-76 helicopters, demonstrating the desired performance.
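
A hedged sketch of the alarm-detection idea as the abstract outlines it: given per-frame correlations of the STFT against an alarm template, a one-sided CUSUM chart flags onset times. The drift `k` and threshold `h` are illustrative, not the paper's tuned values.

```python
import numpy as np

def detect_alarm_onsets(corr: np.ndarray, k: float = 0.1, h: float = 1.0):
    """corr: per-frame correlation with the alarm template, in [0, 1]."""
    s, onsets = 0.0, []
    for t, c in enumerate(corr):
        s = max(0.0, s + c - k)   # accumulate evidence above the drift k
        if s > h:                 # decision threshold crossed: alarm onset
            onsets.append(t)
            s = 0.0               # reset and look for the next occurrence
    return onsets
```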
26

Watanabe, Kazuyoshi, and Ryou Ishikawa. "Sound outputting devices using digital displacement data for a PWM sound signal". Journal of the Acoustical Society of America 98, no. 5 (November 1995): 2404. http://dx.doi.org/10.1121/1.413269.

27

Yogatama, Adiprana. "Phonological Analysis of Indian Language". Register Journal 5, no. 1 (June 1, 2012): 1–16. http://dx.doi.org/10.18326/rgt.v5i1.1-16.

Abstract:
The aims of this research are to identify the language sounds produced by Indian speakers, to enrich the study of language sounds, and to encourage students to examine other foreign language sounds in depth. The researcher collected data from several sources: theoretical literature obtained from books on general linguistics, especially phonology, in both English and Indonesian, and research material provided by a native speaker of the Indian language named Kour Herbinder. This is qualitative research using recording and note-taking techniques. To analyse the data, the researcher used phonetic charts for both consonants and vowels. The analysis found that the sounds of the Indian language are dominated by alveolar sounds, similar to those usually produced by speakers of the Balinese dialect of Indonesian. The researcher also found many variations of Indian language sounds as allophones: the sound [k'] is an allophone of [k], and the sound [dh] is an allophone of [d]. The pronunciation of the sounds [t], [d] and [k] closely resembles that of [t], [d] and [k] in Balinese-accented Indonesian. Keywords: phonetics; phonemics; alveolar; allophone
28

Yogatama, Adiprana. "Phonological Analysis of Indian Language". Register Journal 5, no. 1 (June 1, 2012): 1. http://dx.doi.org/10.18326/rgt.v5i1.249.

Abstract:
The aims of this research are to identify the language sounds produced by Indian speakers, to enrich the study of language sounds, and to encourage students to examine other foreign language sounds in depth. The researcher collected data from several sources: theoretical literature obtained from books on general linguistics, especially phonology, in both English and Indonesian, and research material provided by a native speaker of the Indian language named Kour Herbinder. This is qualitative research using recording and note-taking techniques. To analyse the data, the researcher used phonetic charts for both consonants and vowels. The analysis found that the sounds of the Indian language are dominated by alveolar sounds, similar to those usually produced by speakers of the Balinese dialect of Indonesian. The researcher also found many variations of Indian language sounds as allophones: the sound [k'] is an allophone of [k], and the sound [dh] is an allophone of [d]. The pronunciation of the sounds [t], [d] and [k] closely resembles that of [t], [d] and [k] in Balinese-accented Indonesian. Keywords: phonetics; phonemics; alveolar; allophone
29

Kumar, Naveen, Viniitha Jagatheesan, Hewage Methsithini M Rodrigo, VM Kavithevan, Jonathan Wee TS, Ashwini Aithal P and Melissa Glenda Lewis. "Evaluation of blood pressure changes on exposure to sound frequencies in the youths of different ethnicity". Bangladesh Journal of Medical Science 21, no. 4 (September 11, 2022): 825–28. http://dx.doi.org/10.3329/bjms.v21i4.60279.

Abstract:
Objectives: To analyse the effect of exposure to sounds of various frequencies on participants' blood pressure levels. Methods: The study involved 160 medical students belonging to four ethnic groups (Malay, Chinese, Malaysian Indian, Sri Lankan). Informed consent and a record of normal blood pressure were obtained before the study. Participants were exposed, at specified intervals, to three different sounds (traffic sounds of high noise frequency; a waterfall sound of moderate noise frequency; night sounds in the woods of low noise frequency). The systolic (SBP) and diastolic (DBP) blood pressure of all participants were recorded after each exposure, and the data were analysed statistically by repeated-measures ANOVA using SPSS. Results: There was a statistically significant (p<0.001) difference in the average SBP and DBP values from pre- to post-assessment for all the sounds. A significant difference (p<0.01) in mean SBP values between the ethnicities was noted for sounds 1 and 3, while a similar difference (p<0.05) in average DBP values was noted only for sound 3. The data showed a rise in BP for sound 1 in all ethnicities except Malays, and a fall in BP upon exposure to sounds 2 and 3 in all ethnicities. Conclusion: Exposure to different sounds had a remarkable effect on individual blood pressure, although the comparison across ethnicities is quite variable. The adverse effects of various sound exposures on the blood pressure of the student population are crucial in terms of their curricular and physiological activities.
30

Ballas, James A., and Mark E. Barnes. "Everyday Sound Perception and Aging". Proceedings of the Human Factors Society Annual Meeting 32, no. 3 (October 1988): 194–97. http://dx.doi.org/10.1177/154193128803200305.

Abstract:
Age-related hearing loss is extensively documented in both longitudinal and cross-sectional studies, but there are no direct studies of the ability of older persons to perceive everyday sounds. There is evidence suggesting some impairment: Vanderveer (1979) observed that older listeners had difficulty interpreting environmental sounds but did not report any performance data. Demands imposed by the stimulus properties of this type of sound, and by the perceptual and cognitive processes found to mediate its perception in college-aged listeners, may present difficulty for older listeners. Forty-seven members of a retired persons' organization were given a subset of sounds that had been used in previous identification studies. Identification data for the same set of sounds had previously been obtained from high school and college students (Ballas, Dick, & Groshek, 1987). The ability of the aged group to identify this set of sounds was not significantly different from that of the student group; in fact, uncertainties were closely matched except for a few sounds. Directions for future research are discussed.
31

Munir, Misbahul. "Tahlil al-Akhtha’ al-Shautiyyah li al-Kalimat al-Thayyibat fi Hayat al-Muslimin (Dirasah Tahliliyyah Shautiyyah)". ALSINATUNA 3, no. 2 (August 20, 2018): 163. http://dx.doi.org/10.28918/alsinatuna.v3i2.1241.

Abstract:
This research aims to explain the sound errors in al-kalimāh aṭ-ṭayyibah, in both vowels and consonants. The research was conducted at UIN Yogyakarta, with data obtained from postgraduate students of Arabic linguistics there. The method used is descriptive-qualitative with a case-study approach; data were collected through interviews, note-taking and recordings. The research shows mistakes in short, long and double vowel sounds: fatḥah (َ), which sounds /a/, becomes /o/; ḍammah (ُ), which sounds /u/, becomes /o/, /ū/ or sukūn (ْ); kasrah (ِ), which sounds /i/, becomes /e/; syaddah (ّ), which should be read doubled, is not doubled; fatḥah followed by alif (ا), which sounds /ā/, becomes /a/; fatḥah followed by alif with a tilde (~) above it, which sounds /ā/, becomes /a/; fatḥah followed by wawu sukūn (وْ), which sounds /au/, becomes /ao/; and fatḥah followed by ya’ sukūn (يْ), which sounds /ai/, becomes /ei/. Among the consonants, the phoneme ع /‘/ becomes ا /’/; ح /ḥ/ becomes ه /h/ or ك /k/; ظ /ẓ/ becomes ز /z/ or ج /j/; ش /sy/ becomes س /s/; ق /q/ becomes ك /k/; ذ /ż/ becomes ظ /ẓ/ or د /d/; and ت /t/ becomes ط /ṭ/. These vowel and consonant errors occur because of the link between a language and its speakers.
32

Gautama Simanjuntak, Juniarto, Mega Putri Amelya, Fitri Nuraeni and Rika Raffiudin. "Keragaman Suara Tonggeret dan Jangkrik di Taman Nasional Gunung Gede Pangrango". Jurnal Sumberdaya Hayati 6, no. 1 (December 3, 2020): 20–25. http://dx.doi.org/10.29244/jsdh.6.1.20-25.

Abstract:
Indonesia is a country of great biodiversity and has many potential bioacoustic samples, but no bioacoustic data have been collected and stored for reference. Bioacoustics studies frequency range, sound amplitude intensity, sound fluctuation and sound patterns, and is very useful for population estimation and species determination. This insect bioacoustics research was conducted at Gunung Gede Pangrango National Park and aims to analyse the variety of sound frequencies of cicadas and crickets. The methods used were recording the sounds, editing and analysing the recordings with the Praat and Raven Lite 2.0 software, and analysing the environment. The sound analysis determined the maximum, minimum and average frequencies, and the results were compared with the database of Singing Insects of North America (SINA). The environmental analysis covered temperature, air humidity and light intensity. Nine cicada and twenty-four cricket sound recording files were obtained. The cicada has a high-pitched sound characteristic (9,168.2 Hz) and the cricket a low-pitched one (3,311.80 Hz). Comparison with the SINA database shows that the cicada's sound resembles Tibicen marginalis and the cricket's sound resembles Gryllodes sigillatus.
33

Mudusu, Rambabu, A. Nagesh and M. Sadanandam. "Enhancing Data Security Using Audio-Video Steganography". International Journal of Engineering & Technology 7, no. 2.20 (April 18, 2018): 276. http://dx.doi.org/10.14419/ijet.v7i2.20.14777.

Abstract:
Steganography is a technique for hiding secret information, such as text, images or sound, behind a distinct cover file. In this paper we propose the combination of image steganography and audio steganography, together with face recognition technology, as a tool for authentication. The aim is to hide the secret data behind the audio and the receiver's face image in a video, since a video is a sequence of still image frames together with sound. In this method we choose a frame of the video to hide the receiver's face image, and the audio to hide the secret data. Suitable algorithms, namely enhanced LSB and RSA, are used to hide the secret text and image, and the PCA algorithm is used for face recognition. The security and authentication parameters obtained at the receiver and transmitter sides are exactly identical; consequently, data security is increased.
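
To illustrate the LSB idea the paper builds on, here is textbook least-significant-bit embedding on 8-bit samples or pixels (not the paper's enhanced-LSB variant, whose details the abstract does not give):

```python
import numpy as np

def lsb_embed(cover: np.ndarray, payload_bits: np.ndarray) -> np.ndarray:
    """Hide one payload bit in the LSB of each leading cover sample (uint8)."""
    stego = cover.copy()
    flat = stego.ravel()
    n = len(payload_bits)
    flat[:n] = (flat[:n] & 0xFE) | payload_bits   # clear the LSB, then set it
    return stego

def lsb_extract(stego: np.ndarray, n_bits: int) -> np.ndarray:
    return stego.ravel()[:n_bits] & 1             # read the LSBs back

cover = np.arange(16, dtype=np.uint8)             # toy cover data
bits = np.array([1, 0, 1, 1], dtype=np.uint8)     # toy payload
assert np.array_equal(lsb_extract(lsb_embed(cover, bits), 4), bits)
```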
34

Kim, Hyun-Don, Kazunori Komatani, Tetsuya Ogata and Hiroshi G. Okuno. "Binaural Active Audition for Humanoid Robots to Localise Speech over Entire Azimuth Range". Applied Bionics and Biomechanics 6, no. 3-4 (2009): 355–67. http://dx.doi.org/10.1155/2009/817874.

Abstract:
We applied motion theory to robot audition to improve the inadequate performance. Motions are critical for overcoming the ambiguity and sparseness of information obtained by two microphones. To realise this, we first designed a sound source localisation system integrated with cross-power spectrum phase (CSP) analysis and an EM algorithm. The CSP of sound signals obtained with only two microphones was used to localise the sound source without having to measure impulse response data. The expectation-maximisation (EM) algorithm helped the system to cope with several moving sound sources and reduce localisation errors. We then proposed a way of constructing a database for moving sounds to evaluate binaural sound source localisation. We evaluated our sound localisation method using artificial moving sounds and confirmed that it could effectively localise moving sounds slower than 1.125 rad/s. Consequently, we solved the problem of distinguishing whether sounds were coming from the front or rear by rotating and/or tipping the robot's head that was equipped with only two microphones. Our system was applied to a humanoid robot called SIG2, and we confirmed its ability to localise sounds over the entire azimuth range as the success rates for sound localisation in the front and rear areas were 97.6% and 75.6% respectively.
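
The cross-power spectrum phase (CSP, also called GCC-PHAT) step at the core of the described system can be sketched as follows; this minimal version estimates the inter-microphone delay in samples, from which the azimuth is then derived:

```python
import numpy as np

def csp_delay(x1: np.ndarray, x2: np.ndarray) -> int:
    """Inter-microphone delay in samples via phase-only cross-correlation."""
    n = len(x1) + len(x2)
    X1, X2 = np.fft.rfft(x1, n), np.fft.rfft(x2, n)
    cross = X1 * np.conj(X2)
    csp = np.fft.irfft(cross / (np.abs(cross) + 1e-12), n)  # whitened correlation
    lag = int(np.argmax(csp))
    return lag if lag < n // 2 else lag - n   # map large lags to negative delays
```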
35

Hermawan, Gede Satya. "Errors in Learning Japanese through Listening-Misheard Cases-". JAPANEDU: Jurnal Pendidikan dan Pengajaran Bahasa Jepang 4, no. 2 (December 29, 2019): 126–32. http://dx.doi.org/10.17509/japanedu.v4i2.18317.

Abstract:
This paper studies the errors that occur when students learn Japanese through listening, describing cases misheard by students during listening class. The data were collected from students' quiz and test results. The participants were first-year and second-year students, 37 first-year and 24 second-year, with 81 students participating in total. The collected data were then categorized by type of error. The results show that the errors include confusion between two sounds, reduction of a sound, and misjudging long vowels. Confusion of two sounds happened when students misheard two different sounds, such as the alveolar nasal consonant /n/ in [hinan] heard as the liquid consonant /r/, as in [hiran]. Reduction of a sound occurred when students confused the vowel of a particle with an identical vowel at the front or back of a neighbouring word, such as yamagaafureru misheard as yamagafureru; this happened because the vowel /a/ of the particle /ga/ covers up the initial vowel /a/ of the word afureru. Lastly, some errors arose from the thin, overlapping borderline between error and mistake, where students mostly misheard short vowel sounds such as [ba∫o] as long vowels such as [ba∫o:].
36

Siswanto, Waluyo Adi, Wan Mohamed Akil Che Wahab, Musli Nizam Yahya, Al Emran Ismail and Ismail Nawi. "A Platform for Digital Reproduction Sound of Traditional Musical Instrument Kompang". Applied Mechanics and Materials 660 (October 2014): 823–27. http://dx.doi.org/10.4028/www.scientific.net/amm.660.823.

Abstract:
This work proposes a system for the digital reproduction of the sound of the kompang. The kompang sounds are represented by bung and pak, produced by palm beats on the membrane. The sounds are recorded with an acoustical sound recording system and then analysed in the SpectraPLUS frequency analyzer. The frequency-content data can be used as the reference for checking the reproduced digital sound. The recorded wave data are converted to MIDI format before being manipulated in the Ableton synthesizer system to create modern keyboard notes that represent the kompang sound. For validation, a subjective approach is proposed in addition to the objective comparison of frequency content.
37

Bandara, Meelan, Roshinie Jayasundara, Isuru Ariyarathne, Dulani Meedeniya and Charith Perera. "Forest Sound Classification Dataset: FSC22". Sensors 23, no. 4 (February 10, 2023): 2032. http://dx.doi.org/10.3390/s23042032.

Abstract:
The study of environmental sound classification (ESC) has become popular over the years due to the intricate nature of environmental sounds and the evolution of deep learning (DL) techniques. Forest ESC is one use case of ESC, which has been widely experimented with recently to identify illegal activities inside a forest. However, public datasets covering all the possible sounds of a forest environment are currently limited, and most existing experiments have used generic environmental sound datasets such as ESC-50, U8K and FSD50K. Importantly, in DL-based sound classification, a lack of quality data can mislead the model, leaving its predictions questionable. Hence, a well-defined benchmark forest environment sound dataset is required. This paper proposes FSC22, which fills the gap of a benchmark dataset for forest environmental sound classification. It includes 2025 sound clips under 27 acoustic classes, covering the possible sounds in a forest environment. We discuss the procedure of dataset preparation and validate it through different baseline sound classification models. Additionally, we provide an analysis of the new dataset compared to other available datasets. This dataset can therefore be used by researchers and developers working on forest observatory tasks.
38

Adityarini, Ida Ayu Putri, I. Wayan Pastika and I. Nyoman Sedeng. "INTERFERENSI FONOLOGI PADA PEMBELAJAR BIPA ASAL EROPA DI BALI". Aksara 32, no. 1 (July 1, 2020): 167–86. http://dx.doi.org/10.29255/aksara.v32i1.409.167-186.

Abstract:
This study aimed to determine the phonological interference that occurs among BIPA learners from Europe in Bali. Oral and written data were used, obtained from the learners' speech and writing while learning Indonesian in class. The research was guided by interference theory according to Weinreich (1953). Oral data were collected using participatory and non-participatory listening techniques (SLC and SBLC) together with recording and note-taking; written data were collected by the test method. The data were analysed and presented in formal and informal forms. The results show that the phonological interference occurring among European BIPA learners in Bali takes the form of vowel sound interference (in the vowels [a], [u] and [ə]), consonant sound interference (in the consonants [h], [r], [g], [ŋ], [t], [g] and [ɲ]), interference in the form of sound addition (in the sounds [ŋ] and [ɲ]), and interference in the form of sound removal (in the consonant [r], the vowel series [e] and [a], and the consonant [h]). This interference occurred because of differences between the vowel and consonant sounds of Indonesian and English, as well as differences in how particular vowel or consonant sounds are pronounced in the two languages.
39

Miner, Nadine E., Timothy E. Goldsmith and Thomas P. Caudell. "Perceptual Validation Experiments for Evaluating the Quality of Wavelet-Synthesized Sounds". Presence: Teleoperators and Virtual Environments 11, no. 5 (October 2002): 508–24. http://dx.doi.org/10.1162/105474602320935847.

Abstract:
This paper describes three psychoacoustic experiments that evaluated the perceptual quality of sounds generated from a new wavelet-based synthesis technique. The synthesis technique provides a method for modeling and synthesizing perceptually compelling sound. The experiments define a methodology for evaluating the effectiveness of any synthesized sound. An identification task and a context-based rating task evaluated the perceptual quality of individual sounds. These experiments confirmed that the wavelet technique synthesizes a wide variety of compelling sounds from a small model set. The third experiment obtained sound similarity ratings. Psychological scaling methods were applied to the similarity ratings to generate both spatial and network models of the perceptual relations among the synthesized sounds. These analysis techniques helped to refine and extend the sound models. Overall, the studies provided a framework to validate synthesized sounds for a variety of applications including virtual reality and data sonification systems.
40

Rosyadi, Naila Nabila, and Nur Hastuti. "Contrastive Analysis of Onomatopoeic Use in Nursery Rhymes as Children’s Environmental Sounds Recognition in Japanese and Indonesian". E3S Web of Conferences 359 (2022): 03014. http://dx.doi.org/10.1051/e3sconf/202235903014.

Abstract:
Nursery rhymes play a role in children's language development and help them recognize and express environmental sounds, that is, the sounds around them. Onomatopoeia, or imitative words, are often found in nursery rhymes, and since every country has a different language, different phonetic forms are used to express them. In this research, the author contrasts onomatopoeic use in Japanese and Indonesian nursery rhymes. The theory and classification of onomatopoeia used are a combination of those proposed by Akimoto (2002) and Kaneda (1978). This qualitative research used listening and note-taking methods on YouTube videos, and the data were analysed with the referential matching method. The results show that Japanese nursery rhymes contain onomatopoeia for the sounds of nature, sounds from objects, human sounds, animal sounds, object condition, object movement, human movement, animal movement, and human emotion. Indonesian nursery rhymes contain almost all the types found in Japanese, except for human sounds, object movement and human emotion, which were not found.
41

Carbonell, Kathy M., Radhika Aravamudhan and Andrew J. Lotto. "Presence of preceding sound affects the neural representation of speech sounds: Behavioral data." Journal of the Acoustical Society of America 128, no. 4 (October 2010): 2322. http://dx.doi.org/10.1121/1.3508189.

42

Levin, Beth, and Grace Song. "Making Sense of Corpus Data". International Journal of Corpus Linguistics 2, no. 1 (January 1, 1997): 23–64. http://dx.doi.org/10.1075/ijcl.2.1.04lev.

Abstract:
This paper demonstrates the essential role of corpus data in the development of a theory that explains and predicts word behavior. We make this point through a case study of verbs of sound, drawing our evidence primarily from the British National Corpus. We begin by considering pretheoretic notions of the verbs of sound as presented in corpus-based dictionaries and then contrast them with the predictions made by a theory of syntax, as represented by Chomsky's Government-Binding framework. We identify and classify the transitive uses of sixteen representative verbs of sound found in the corpus data. Finally, we consider what a linguistic account with both syntactic and lexical semantic components has to offer as an explanation of observed differences in the behavior of the sample verbs.
43

Osborne, M. R. "On the inversion of sound channel data". ANZIAM Journal 42 (December 25, 2000): 1097. http://dx.doi.org/10.21914/anziamj.v42i0.637.

44

Curtis, Andrea J., Johannes U. Stoelwinder and John J. McNeil. "Management of waiting lists needs sound data". Medical Journal of Australia 191, no. 8 (October 2009): 423–24. http://dx.doi.org/10.5694/j.1326-5377.2009.tb02875.x.

45

Berman, David H. "Sound‐speed profiles from vertical array data". Journal of the Acoustical Society of America 93, no. 4 (April 1993): 2375. http://dx.doi.org/10.1121/1.406132.

46

Hermann, T., and H. Ritter. "Sound and Meaning in Auditory Data Display". Proceedings of the IEEE 92, no. 4 (April 2004): 730–41. http://dx.doi.org/10.1109/jproc.2004.825904.

47

Rabesandratana, Tania. "Researchers sound alarm on European data law". Science 366, no. 6468 (November 21, 2019): 936. http://dx.doi.org/10.1126/science.366.6468.936.

48

Ko, Jae-Hyuk. "Method for 3D Visualization of Sound Data". Journal of Digital Convergence 14, no. 7 (July 28, 2016): 331–37. http://dx.doi.org/10.14400/jdc.2016.14.7.331.

49

Geladi, Paul. "Data and sound, an opportunity for chemometrics?" Chemometrics and Intelligent Laboratory Systems 39, no. 1 (November 1997): 63–67. http://dx.doi.org/10.1016/s0169-7439(97)00046-4.

50

Swart, D. J., A. Bekker and J. Bienert. "Electric vehicle sound stimuli data and enhancements". Data in Brief 21 (December 2018): 1337–46. http://dx.doi.org/10.1016/j.dib.2018.10.074.
