Journal articles on the topic 'Audio synthesi'

To see the other types of publications on this topic, follow the link: Audio synthesi.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic 'Audio synthesi.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Mikulicz, Szymon. "Precise Inter-Device Audio Playback Synchronization for Linux." International Journal of Signal Processing Systems 9, no. 3 (September 2021): 17–21. http://dx.doi.org/10.18178/ijsps.9.3.17-21.

Abstract:
Wave Field Synthesis is a sound-reproduction method that uses arrays of closely spaced loudspeakers, which creates a unique challenge for distributed playback systems: because of clock-frequency drift, playback must be continuously corrected by interpolating and time-shifting the played stream. This paper presents a new approach to network-based audio playback synchronization that makes heavy use of the PTP network time synchronization protocol and the ALSA Linux audio subsystem. The software requires no specialized hardware and uses a set of statistical indicators to estimate precisely how the playback stream should be interpolated. The evaluation shows that the playback difference between two devices running the presented system stays below 10 μs for 99% of the time, which fully satisfies the requirements of Wave Field Synthesis. The system was compared with currently available network audio synchronization systems (NetJack2, RAVENNA, and Snapcast), all of which exhibited 10 to 50 times larger inter-device differences.
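A minimal sketch of the statistical idea in this abstract: estimate the sound card's drift against PTP time by a least-squares fit of played frame counts over PTP timestamps, then derive a resampling ratio for the interpolator. This assumes NumPy; the function and parameter names are illustrative, and the paper's actual indicators and ALSA/PTP plumbing are not reproduced here.

```python
import numpy as np

def resample_ratio(ptp_times_s, frames_played, nominal_rate=48000.0):
    """Hypothetical helper: ratio by which to resample the stream so
    playback tracks PTP time despite sound-card clock drift."""
    # Least-squares slope = frames actually played per second of PTP time.
    slope, _intercept = np.polyfit(ptp_times_s, frames_played, 1)
    return nominal_rate / slope

# Example: a card delivering 48000.5 frames per PTP second yields a ratio
# slightly below 1, i.e. the interpolator must gently compress the stream.
```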
2

Voitko, Viktoriia, Svitlana Bevz, Sergii Burbelo, and Pavlo Stavytskyi. "Audio Generation Technology of a System of Synthesis and Analysis of Music Compositions." Herald of Khmelnytskyi National University 305, no. 1 (February 23, 2022): 64–67. http://dx.doi.org/10.31891/2307-5732-2022-305-1-64-67.

Abstract:
A system for audio synthesis and analysis of music compositions is considered. It consists of two primary parts: an audio analysis component and a music synthesis component. The audio generation component implements several ways of creating audio sequences. One of them records melodies sung with the voice and transforms them into sequences played by selected musical instruments. In addition, audio input created with a human voice can be used as a seed from which similar music sequences are generated using artificial intelligence. Finally, a manual approach to music generation and editing is available. After the automatic composition-generation mechanisms run, their results are presented on a two-dimensional plane that plots note pitch against time; here the generated audio can be adjusted manually or new music sequences created. The creation process can be applied iteratively to build multiple parallel music sequences that are played back as a single composition. To implement seed-based audio synthesis, a deep learning architecture based on a variational autoencoder is used to train a neural network that can reproduce input-like data. This approach requires an additional important step: all input data must be converted from raw audio to spectrograms represented as grayscale images. The generated output is likewise a spectrogram and must therefore be converted back to an audio format that can be played through speakers. Working with spectrograms discards redundant data contained in raw audio, which significantly reduces resource consumption and increases overall synthesis speed.
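The spectrogram round-trip this abstract describes can be sketched as below, assuming librosa with Griffin-Lim phase reconstruction; the paper's exact STFT parameters, file names, and VAE are not specified, so these values are illustrative.

```python
import librosa
import numpy as np

# Forward: raw audio -> magnitude spectrogram (the "grayscale image"
# representation a VAE would be trained on).
y, sr = librosa.load("melody.wav", sr=22050)       # hypothetical input file
S = np.abs(librosa.stft(y, n_fft=1024, hop_length=256))

# ... a variational autoencoder would be trained on, and would generate,
# arrays shaped like S ...

# Inverse: magnitude spectrogram -> playable audio, with the discarded
# phase estimated by Griffin-Lim.
y_out = librosa.griffinlim(S, n_iter=32, hop_length=256)
```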
3

George, E. Bryan, and Mark J. T. Smith. "Audio analysis/synthesis system." Journal of the Acoustical Society of America 97, no. 3 (March 1995): 2016. http://dx.doi.org/10.1121/1.412041.

4

Li, Naihan, Yanqing Liu, Yu Wu, Shujie Liu, Sheng Zhao, and Ming Liu. "RobuTrans: A Robust Transformer-Based Text-to-Speech Model." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 8228–35. http://dx.doi.org/10.1609/aaai.v34i05.6337.

Abstract:
Recently, neural-network-based speech synthesis has achieved outstanding results, producing synthesized audio of excellent quality and naturalness. However, current neural TTS models suffer from a robustness issue that yields abnormal audio (bad cases), especially for unusual text (unseen contexts). To build a neural model that synthesizes audio that is both natural and stable, this paper makes a deep analysis of why previous neural TTS models are not robust, and on that basis proposes RobuTrans (Robust Transformer), a robust neural TTS model based on the Transformer. Compared with TransformerTTS, our model first converts input text to linguistic features, including phonemic and prosodic features, and then feeds them to the encoder. In the decoder, the encoder-decoder attention is replaced with a duration-based hard attention mechanism, and the causal self-attention is replaced with a "pseudo non-causal attention" mechanism to model the holistic information of the input. In addition, the position embedding is replaced with a 1-D CNN, since the embedding constrains the maximum length of the synthesized audio. With these modifications, our model not only fixes the robustness problem but also achieves a MOS (4.36) on par with TransformerTTS (4.37) and Tacotron2 (4.37) on our general test set.
5

Wang, Cheng-i., and Shlomo Dubnov. "Guided Music Synthesis with Variable Markov Oracle." Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment 10, no. 5 (June 29, 2021): 55–62. http://dx.doi.org/10.1609/aiide.v10i5.12767.

Abstract:
In this work the problem of guided improvisation is approached and elaborated, and a new method for guided music synthesis, the Variable Markov Oracle, is proposed as a first step toward solving it. The Variable Markov Oracle builds on previous results on the Audio Oracle, a fast method for indexing and recombining repeating sub-clips of an audio signal. The newly proposed Variable Markov Oracle can identify inherent data-point clusters in an audio signal while simultaneously tracking the sequential relations among clusters. With a target audio signal indexed by the Variable Markov Oracle, a query-matching algorithm is devised that synthesizes new musical material by recombining the target audio matched to a query audio, making the algorithm a solution to the guided music synthesis problem. The query-matching algorithm is efficient and intelligent, since it follows the inherent clusters discovered by the Variable Markov Oracle, producing query-by-content results with numerous applications in concatenative synthesis, machine improvisation, and interactive music systems. Examples of using the Variable Markov Oracle to synthesize new musical material from given music signals in the style of jazz are shown.
6

Cabrera, Andrés, JoAnn Kuchera-Morin, and Curtis Roads. "The Evolution of Spatial Audio in the AlloSphere." Computer Music Journal 40, no. 4 (December 2016): 47–61. http://dx.doi.org/10.1162/comj_a_00382.

Abstract:
Spatial audio has been at the core of the multimodal experience at the AlloSphere, a unique instrument for data discovery and exploration through interactive immersive display, since its conception. The AlloSphere multichannel spatial audio design has direct roots in the history of electroacoustic spatial audio and is the result of previous activities in spatial audio at the University of California at Santa Barbara. A concise technical description of the AlloSphere, its architectural and acoustic features, its unique 3-D visual projection system, and the current 54.1 Meyer Sound audio infrastructure is presented, with details of the audio software architecture and the immersive sound capabilities it supports. As part of the process of realizing scientific and artistic projects for the AlloSphere, spatial audio research has been conducted, including the use of decorrelation of audio signals to supplement spatialization and tackling the thorny problem of interactive up-mixing through the Sound Element Spatializer and the Zirkonium Chords project. The latter uses the metaphor of geometric spatial chords as a high-level means of spatial up-mixing in performance. Other developments relating to spatial audio are presented, such as Ryan McGee's Spatial Modulation Synthesis, which simultaneously explores the synthesis of space and timbre.
7

Park, Se Jin, Minsu Kim, Joanna Hong, Jeongsoo Choi, and Yong Man Ro. "SyncTalkFace: Talking Face Generation with Precise Lip-Syncing via Audio-Lip Memory." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 2 (June 28, 2022): 2062–70. http://dx.doi.org/10.1609/aaai.v36i2.20102.

Abstract:
The challenge of talking face generation from speech lies in aligning information from two different modalities, audio and video, such that the mouth region corresponds to the input audio. Previous methods either exploit audio-visual representation learning or leverage intermediate structural information such as landmarks and 3D models. However, they struggle to synthesize the fine details of the lips that vary at the phoneme level, because they do not provide sufficient visual information about the lips at the video synthesis step. To overcome this limitation, our work proposes Audio-Lip Memory, which brings in visual information of the mouth region corresponding to the input audio and enforces fine-grained audio-visual coherence. It stores lip motion features from sequential ground-truth images in the value memory and aligns them with the corresponding audio features so that they can be retrieved from audio input at inference time. Using the retrieved lip motion features as visual hints, the model can easily correlate audio with visual dynamics in the synthesis step. By analyzing the memory, we demonstrate that unique lip features are stored in each memory slot at the phoneme level, capturing subtle lip motion through memory addressing. In addition, we introduce a visual-visual synchronization loss which, used alongside the audio-visual synchronization loss in our model, further enhances lip-syncing performance. Extensive experiments verify that our method generates high-quality video with mouth shapes that best align with the input audio, outperforming previous state-of-the-art methods.
8

Kuntz, Matthieu, and Bernhard U. Seeber. "Spatial audio for interactive hearing research." INTER-NOISE and NOISE-CON Congress and Conference Proceedings 265, no. 2 (February 1, 2023): 5120–27. http://dx.doi.org/10.3397/in_2022_0741.

Abstract:
The use of sound field synthesis for hearing research has gained popularity due to the ability to auralize a wide range of sound scenes in a controlled and reproducible way. We are interested in reproducing acoustic environments for interactive hearing research, allowing participants to move freely over an extended area in the reproduced sound field. While the physically accurate sound field reproduction using sound field synthesis is limited to the sweet spot, it is unclear how different perceptual measures vary across the reproduction area and how suitable sound field synthesis is to evaluate them. To investigate the viability of listening experiments and provide a database for modelling approaches, measurements of binaural cues were carried out in the Simulated Open Field Environment loudspeaker array. Results show that the binaural cues are reproduced well close to the center, but exhibit more variance than in the corresponding free field case. Off center, lower interaural coherence is observed, which can affect binaural unmasking and speech intelligibility. In this work, we study binaural cues and speech reception thresholds over a wide area in the loudspeaker array to investigate the feasibility of psychoacoustic experiments involving speech understanding.
9

Loy, D. Gareth. "The Systems Concepts Digital Synthesizer: An Architectural Retrospective." Computer Music Journal 37, no. 3 (September 2013): 49–67. http://dx.doi.org/10.1162/comj_a_00193.

Abstract:
In the mid 1970s, specialized hardware for synthesizing digital audio helped computer music research move beyond its early reliance on software synthesis running on slow mainframe computers. This hardware allowed for synthesis of complex musical scores in real time and for dynamic, interactive control of synthesis. Peter Samson developed one such device, the Systems Concepts Digital Synthesizer, for Stanford University's Center for Computer Research in Music and Acoustics. The “Samson Box” addressed the classical problems of digital audio synthesis with an elegance that still rewards study. This article thoroughly examines the principles underlying the Box's design—while considering how it was actually employed by its users—and describes the architecture's advantages and disadvantages. An interview with Samson is included.
10

Bessell, David. "Dynamic Convolution Modeling, a Hybrid Synthesis Strategy." Computer Music Journal 37, no. 1 (March 2013): 44–51. http://dx.doi.org/10.1162/comj_a_00159.

Abstract:
This article outlines a hybrid approach to the synthesis of percussion sounds. The synthesis method described here combines techniques and concepts from physical modeling and convolution to produce audio synthesis of percussive instruments. This synthesis method not only achieves a high degree of realism in comparison with audio samples but also retains some of the flexibility associated with waveguide physical models. When the results are analyzed, the method exhibits some interesting detailed spectral features that have some aspects in common with the behavior of acoustic percussion instruments. In addition to outlining the synthesis process, the article discusses some of the more creative possibilities inherent in this approach, e.g., the use and free combination of excitation and resonance sources from beyond the realms of the purely percussive examples given.
11

García, Víctor, Inma Hernáez, and Eva Navas. "Evaluation of Tacotron Based Synthesizers for Spanish and Basque." Applied Sciences 12, no. 3 (February 7, 2022): 1686. http://dx.doi.org/10.3390/app12031686.

Abstract:
In this paper, we describe the implementation and evaluation of text-to-speech synthesizers based on neural networks for Spanish and Basque. Several voices were built, all of them using a limited amount of data. The system applies Tacotron 2 to compute mel-spectrograms from the input sequence, followed by WaveGlow as a neural vocoder to obtain the audio signal from the spectrograms. The limited amount of training data leads to synthesis errors in some sentences. To detect those errors automatically, we developed a new method that finds the sentences that have lost alignment during the inference process. To mitigate the problem, we implemented guided attention, providing the system with the explicit duration of the phonemes. The resulting system was evaluated to assess its robustness, quality and naturalness with both objective and subjective measures. The results reveal the capacity of the system to produce natural, good-quality audio.
12

Elfitri, Ikhwana, Xiyu Shi, and Ahmet Kondoz. "Analysis by synthesis spatial audio coding." IET Signal Processing 8, no. 1 (February 2014): 30–38. http://dx.doi.org/10.1049/iet-spr.2013.0015.

13

Dias, José Rodrigues, Rui Penha, Leonel Morgado, Pedro Alves da Veiga, Elizabeth Simão Carvalho, and Adérito Fernandes-Marcos. "Tele-Media-Art: Feasibility Tests of Web-Based Dance Education for the Blind Using Kinect and Sound Synthesis of Motion." International Journal of Technology and Human Interaction 15, no. 2 (April 2019): 11–28. http://dx.doi.org/10.4018/ijthi.2019040102.

Abstract:
Tele-media-art is a web-based asynchronous e-learning platform, enabling blind students to have dance and theatre classes remotely, using low-cost motion tracking technology feasible for home use. Teachers and students submit dance recordings augmented with sound synthesis of their motions. Sound synthesis is generated by processing Kinect motion capture data, enabling blind students to compare the audio feedback of their motions with the audio generated by the teacher's motions. To study the feasibility of this approach, the authors present data on early testing of the prototype, performed with blindfolded users.
14

Maniyar, Huzaifa, Suneeta Veerappa Budihal, and Saroja V. Siddamal. "Persons facial image synthesis from audio with Generative Adversarial Networks." ECTI Transactions on Computer and Information Technology (ECTI-CIT) 16, no. 2 (May 28, 2022): 135–41. http://dx.doi.org/10.37936/ecticit.2022162.246995.

Abstract:
This paper proposes a framework based on Generative Adversarial Networks (GANs) to synthesize a person's facial image from audio input. Image and speech are the two main sources of information exchange between two entities. In some data-intensive applications, a large amount of audio has to be translated into an understandable image format by an automated system, without human intervention. This paper provides an end-to-end model for intelligible image reconstruction from an audio signal. The model uses a GAN architecture that generates image features from audio waveforms for image synthesis. It was designed to produce facial images of individual speaker identities from their audio, based on the training dataset. The images of labelled persons are generated using excitation signals, and the method achieves an accuracy of 96.88% for ungrouped data and 93.91% for grouped data.
15

Fan, Di, Wenxue Sun, Huiyuan Zhao, Wenshuo Kang, and Changzhi Lv. "Audio and Video Matching Zero-Watermarking Algorithm Based on NSCT." Complexity 2022 (August 24, 2022): 1–14. http://dx.doi.org/10.1155/2022/3445583.

Abstract:
In the Internet age, information security is threatened anytime and anywhere, and the need for copyright protection and matching detection of audio and video is increasingly strong. In view of this need, this paper proposes a zero-watermarking algorithm for audio and video matching based on NSCT. The algorithm uses NSCT, DCT, SVD, and Schur decomposition to extract video and audio features and synthesizes them into a zero-watermark stream, which is stored with a third-party organization for detection and identification. The detection algorithm obtains a zero watermark from the audio and video under test and judges and locates tampering by comparison with the third party's zero watermark. The experimental results show that the algorithm can not only detect whether audio and video are mismatched due to tampering attacks but also locate the mismatched segments, protecting the copyright.
16

Yang, Chih-Chun, Wan-Cyuan Fan, Cheng-Fu Yang, and Yu-Chiang Frank Wang. "Cross-Modal Mutual Learning for Audio-Visual Speech Recognition and Manipulation." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 3 (June 28, 2022): 3036–44. http://dx.doi.org/10.1609/aaai.v36i3.20210.

Abstract:
A key challenge in audio-visual speech recognition (AVSR) is relating the linguistic information observed across visual and audio data; exploiting it benefits not only audio/visual speech recognition (ASR/VSR) but also the manipulation of data within and across modalities. In this paper, we present a feature-disentanglement-based framework for jointly addressing these tasks. By advancing cross-modal mutual learning strategies, our model is able to convert visual or audio-based linguistic features into modality-agnostic representations. The derived linguistic representations not only allow one to perform ASR, VSR, and AVSR, but also to manipulate audio and visual output based on the desired subject identity and linguistic content. We perform extensive experiments on different recognition and synthesis tasks to show that our model performs favorably against state-of-the-art approaches on each individual task, while being a unified solution that jointly tackles all of the aforementioned audio-visual learning tasks.
17

Wyse, Lonce, and Srikumar Subramanian. "The Viability of the Web Browser as a Computer Music Platform." Computer Music Journal 37, no. 4 (December 2013): 10–23. http://dx.doi.org/10.1162/comj_a_00213.

Abstract:
The computer music community has historically pushed the boundaries of technologies for music-making, using and developing cutting-edge computing, communication, and interfaces in a wide variety of creative practices to meet exacting standards of quality. Several separate systems and protocols have been developed to serve this community, such as Max/MSP and Pd for synthesis and teaching, JackTrip for networked audio, MIDI/OSC for communication, as well as Max/MSP and TouchOSC for interface design, to name a few. With the still-nascent Web Audio API standard and related technologies, we are now, more than ever, seeing an increase in these capabilities and their integration in a single ubiquitous platform: the Web browser. In this article, we examine the suitability of the Web browser as a computer music platform in critical aspects of audio synthesis, timing, I/O, and communication. We focus on the new Web Audio API and situate it in the context of associated technologies to understand how well they together can be expected to meet the musical, computational, and development needs of the computer music community. We identify timing and extensibility as two key areas that still need work in order to meet those needs.
18

Surges, Greg, Tamara Smyth, and Miller Puckette. "Generative Audio Systems Using Power-Preserving All-Pass Filters." Computer Music Journal 40, no. 1 (March 2016): 54–69. http://dx.doi.org/10.1162/comj_a_00344.

Abstract:
This article describes the use of second-order all-pass filters as components in a feedback network, with parameters made time-varying to enable effects such as phase distortion in a generative audio system. The term "audio" is used here to distinguish these from generative "music" systems, emphasizing the strong coupling between the processes governing the production of high-level music and lower-level audio. The classical time-invariant implementation of an all-pass filter is subject to instabilities that can arise when its nominally fixed filter parameters are allowed to vary over time. These instabilities are examined, along with the adoption of a power-preserving rotation-matrix formulation of the all-pass filter, which ensures stability and ultimately improves synthesis in a generative audio system.
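As a loose illustration of why a rotation (orthogonal) formulation stays stable under time variation, the sketch below updates a two-dimensional filter state with a time-varying rotation matrix: orthogonal matrices preserve the state norm at every step, so the recursion cannot blow up. This shows only the stability idea, assuming NumPy, and is not the article's full all-pass structure.

```python
import numpy as np

def rotate_state(state, theta):
    """Apply one power-preserving (orthogonal) update to a 2-D filter state."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s],
                  [s,  c]])
    return R @ state            # ||R @ state|| == ||state||

state = np.array([1.0, 0.0])
for n in range(48000):          # wildly time-varying angle, still bounded
    state = rotate_state(state, 0.1 + 0.05 * np.sin(2 * np.pi * n / 1000))
print(np.linalg.norm(state))    # stays 1.0 up to rounding error
```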
19

Grimshaw, Mark, and Gareth Schott. "A Conceptual Framework for the Analysis of First-Person Shooter Audio and its Potential Use for Game Engines." International Journal of Computer Games Technology 2008 (2008): 1–7. http://dx.doi.org/10.1155/2008/720280.

Abstract:
We introduce and describe a new conceptual framework for the design and analysis of audio for immersive first-person shooter games, and discuss its potential implications for the development of the audio component of game engines. The framework was created in order to illustrate and acknowledge the direct role of in-game audio in shaping player-player interactions and in creating a sense of immersion in the game world. Furthermore, it is argued that the relationship between player and sound is best conceptualized theoretically as an acoustic ecology. Current game engines are capable of game world spatiality through acoustic shading, but the ideas presented here provide a framework to explore other immersive possibilities for game audio through real-time synthesis.
20

Washizuka, Isamu. "Audio output device with speech synthesis technique." Journal of the Acoustical Society of America 89, no. 6 (June 1991): 3026. http://dx.doi.org/10.1121/1.400772.

21

Berdahl, Edgar J. "Audio-haptic interaction with modal synthesis models." Journal of the Acoustical Society of America 141, no. 5 (May 2017): 3620. http://dx.doi.org/10.1121/1.4987769.

22

Petrov, Milen, Asen Asenov, and Adelina Aleksieva-Petrova. "Software Server for Automatic Generation of Audio Lectures (uListenSrv)." International Journal of Advanced Corporate Learning (iJAC) 10, no. 1 (March 30, 2017): 55. http://dx.doi.org/10.3991/ijac.v10i1.6475.

Abstract:
Facilitating new methods for delivering learning materials and adopting new learning experiences and practices in e-learning is always a challenge. Using synthesized digital audio learning assets and learning objects as a main source for conducting learning is not new, but the use of audio lectures, or of audio combined with presentation slides, is not well investigated or adopted in traditional online learning environments. The main goal of this paper is to present the requirements elicitation, software analysis, design, construction and testing of a secure and reusable software architecture for producing and delivering learning resources with audio elements in university programming courses. The paper presents different architectural styles for designing the system and finishes with the development and usage of a contemporary Software Server for Automatic Generation of Audio Lectures (uListenSrv). The main difference here is support for languages beyond English, including less widely supported ones such as Bulgarian.
23

Nurani, Ima. "ANALISIS KEBUTUHAN PENGEMBANGAN MEDIA AUDIO VISUAL POKOK BAHASAN SINTESIS PROTEIN UNTUK SMA." Jurnal VARIDIKA 28, no. 1 (September 18, 2016): 90–95. http://dx.doi.org/10.23917/varidika.v28i1.1961.

Abstract:
The aim of this study is to obtain: 1) a description of how learning media are used as a teaching resource in the biology learning process as it currently takes place in the field; 2) the views of, and constraints faced by, teachers in delivering the protein synthesis subject; and 3) a formulation of the learning media that need to be developed for teaching protein synthesis in biology. The observation results, gathered with a teacher needs-assessment instrument and analyzed using descriptive qualitative methods, show that: 1) learning media are not yet used optimally as a teaching resource in the biology classroom, and there are constraints in delivering material that cannot be observed directly or is abstract; 2) protein synthesis is a difficult subject, and in delivering it teachers need media that can visualize the process of protein synthesis for twelfth grade and explain it in detail and correctly, so that students do not form misconceptions; and 3) the learning media that need to be developed for teaching protein synthesis take the form of audio-visual media.
24

Gurav, Kashmira Milind, Neha Kulkarni, Vittaldas Shetty, Vineet Vinay, Pratik Borade, Suraj Ghadge, and Ketaki Bhor. "Effectiveness of Audio and Audio-Visual Distraction Aids for Management of Pain and Anxiety in Children and Adults Undergoing Dental Treatment- A Systematic Review And Meta-Analysis." Journal of Clinical Pediatric Dentistry 46, no. 2 (March 1, 2022): 86–106. http://dx.doi.org/10.17796/1053-4625-46.2.2.

Abstract:
Dentists have a wide variety of techniques available to them, such as tell-show-do, relaxation, distraction, systematic desensitisation, modelling, audio analgesia, hypnosis, and behaviour rehearsal. There is no concrete systematic review and meta-analysis indicating which distraction technique is most effective. Aim: To summarize the effectiveness of audio and audio-visual (AV) distraction aids for the management of pain and anxiety in children undergoing dental treatment. Study design: Literature search: PubMed/MEDLINE, DOAJ, and Science Direct were searched from June to July 2020 for randomized controlled clinical trials conducted on children with audio and AV distraction aids as the intervention and anxiety and pain as outcomes. Fifty articles were identified and screened for relevance; 14 studies were included in the qualitative synthesis and 5 were eligible for meta-analysis. The Cochrane handbook was used to assess the risk of bias, and the meta-analysis was conducted using Review Manager 5.3 software. Results: The cumulative mean differences for audio and AV distraction techniques were calculated for the main outcomes of pulse rate, O2 level, Venham's picture test and clinical test. The findings showed a significant difference favoring the intervention (audio and AV) group compared with the control, with AV distraction being the more effective. Conclusion: Different audio-visual aids assist in reducing pain and anxiety in children, and using audio distraction aids when audio-visual aids are not available could be an acceptable way of distracting and treating children.
25

Zhao, Shuyi. "Tone Recognition Database of Electronic Pipe Organ Based on Artificial Intelligence." Mathematical Problems in Engineering 2021 (March 2, 2021): 1–12. http://dx.doi.org/10.1155/2021/5526517.

Abstract:
In the past few decades, artificial intelligence technology has developed rapidly, and its application in modern industrial systems has grown quickly. This research mainly discusses the construction of a tone recognition database for the electronic pipe organ based on artificial intelligence. The timbre synthesis module synthesizes electronic pipe organ timbres according to the current timbre parameters. The audio time-domain information (the audio data obtained by parsing the file) is framed and windowed, and a fast Fourier transform (FFT) is applied to each frame to obtain its frequency-domain information. A harmonic-peak method with improved confidence is used to identify the pitch, obtain the fundamental of the tone, and calculate its harmonics. Based on the timbre parameters obtained from the timbre-parameter editing interface, the frequency-domain information of the synthesized timbre is calculated for each frame, and an inverse Fourier transform yields each frame's time-domain waveform; the waveforms of successive frames are connected by the cross-average method to obtain the time-domain waveform of the synthesized tone (that is, the audio data of the synthesized tone). After the sound of the electronic pipe organ is recorded, the audio is denoised and the imported audio file is parsed to obtain the audio data. The audio data are then transformed to the frequency domain and the timbre characteristics are analyzed; timbre parameters are set through an artificial-intelligence-based human-computer interaction interface, and the electronic pipe organ timbre is generated. If the timbre is unsatisfactory, the parameters can be re-edited through the interface and the timbre regenerated. In the experiment, the overall recognition rate over 3,762 notes and 286 beats was 88.6%. The model designed in this study can flexibly generate electronic pipe organ sound libraries of different qualities and meets the requirement of sound authenticity.
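The frame/window/FFT/fundamental pipeline the abstract describes can be sketched as below, assuming NumPy; the frame length, hop size, and the paper's confidence-weighted harmonic-peak refinement are not specified, so a plain spectral-peak estimate stands in for it.

```python
import numpy as np

def frame_signal(y, frame_len=2048, hop=512):
    """Split a signal into overlapping frames."""
    n_frames = 1 + max(0, (len(y) - frame_len) // hop)
    return [y[i * hop : i * hop + frame_len] for i in range(n_frames)]

def estimate_f0(frame, sr):
    """Window a frame, take its FFT, and return the strongest-peak frequency."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    peak = 1 + np.argmax(spectrum[1:])   # skip the DC bin
    return freqs[peak]

# Usage: f0s = [estimate_f0(f, 44100) for f in frame_signal(y)]
```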
26

Budhiono, Sofia Shieldy, and I. Dewa Made Bayu Atmaja Darmawan. "Pitch Transcription of Solo Instrument Tones Using the Autocorrelation Method." JELIKU (Jurnal Elektronik Ilmu Komputer Udayana) 8, no. 3 (January 25, 2020): 347. http://dx.doi.org/10.24843/jlk.2020.v08.i03.p18.

Abstract:
Pitch transcription essentially means identifying and writing down the pitches in an audio or music recording. In this case, the pitches of solo instrument recordings are processed to find the tone composition of the music; the method used is autocorrelation. After processing, the system produces the pitch transcription of the analyzed audio. This research digitalizes an old pitch-identification method into an application that can transcribe the pitches in a recording. The Pitch Transcription application was created in the Python programming language with the Librosa library for audio processing. The purpose of the system is to simplify pitch identification in solo instrument music. Besides showing the pitch transcription results, the application can also display the onset graph and the signal graph, and play the synthesized audio of the transcription. Testing used audio from four solo instruments: flute, piano, violin and acoustic guitar. The results show that the autocorrelation method implemented in the application achieves an accuracy of 92.85%.
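A minimal autocorrelation pitch estimator in the spirit of this paper, assuming NumPy; the paper's framing, search range, and Librosa-based I/O are not reproduced, so the bounds below are illustrative.

```python
import numpy as np

def autocorr_f0(frame, sr, fmin=60.0, fmax=1000.0):
    """Estimate the fundamental frequency of one frame by autocorrelation."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + np.argmax(ac[lo:hi])     # lag of the strongest periodicity
    return sr / lag

# Usage: autocorr_f0(signal_frame, 22050) -> estimated pitch in Hz
```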
27

Kiefer, Chris. "Sample-level sound synthesis with recurrent neural networks and conceptors." PeerJ Computer Science 5 (July 8, 2019): e205. http://dx.doi.org/10.7717/peerj-cs.205.

Abstract:
Conceptors are a recent development in the field of reservoir computing; they can be used to influence the dynamics of recurrent neural networks (RNNs), enabling generation of arbitrary patterns based on training data. Conceptors allow interpolation and extrapolation between patterns, and also provide a system of boolean logic for combining patterns together. Generation and manipulation of arbitrary patterns using conceptors has significant potential as a sound synthesis method for applications in computer music but has yet to be explored. Conceptors are untested with the generation of multi-timbre audio patterns, and little testing has been done on scalability to longer patterns required for audio. A novel method of sound synthesis based on conceptors is introduced. Conceptular Synthesis is based on granular synthesis; sets of conceptors are trained to recall varying patterns from a single RNN, then a runtime mechanism switches between them, generating short patterns which are recombined into a longer sound. The quality of sound resynthesis using this technique is experimentally evaluated. Conceptor models are shown to resynthesise audio with a comparable quality to a close equivalent technique using echo state networks with stored patterns and output feedback. Conceptor models are also shown to excel in their malleability and potential for creative sound manipulation, in comparison to echo state network models which tend to fail when the same manipulations are applied. Examples are given demonstrating creative sonic possibilities, by exploiting conceptor pattern morphing, boolean conceptor logic and manipulation of RNN dynamics. Limitations of conceptor models are revealed with regards to reproduction quality, and pragmatic limitations are also shown, where rises in computation and memory requirements preclude the use of these models for training with longer sound samples. The techniques presented here represent an initial exploration of the sound synthesis potential of conceptors, demonstrating possible creative applications in sound design; future possibilities and research questions are outlined.
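For readers unfamiliar with conceptors, the core object is a matrix computed from reservoir state correlations, C = R (R + aperture^-2 I)^-1 in Jaeger's formulation; a minimal sketch assuming NumPy is given below, with reservoir setup and the paper's granular recombination omitted.

```python
import numpy as np

def conceptor(states, aperture=10.0):
    """Compute a conceptor matrix from recorded RNN reservoir states.

    states: (timesteps, reservoir_size) array of reservoir activations.
    """
    R = states.T @ states / len(states)           # state correlation matrix
    n = R.shape[0]
    return R @ np.linalg.inv(R + aperture ** -2 * np.eye(n))

# Inserting a conceptor C into the update x[t+1] = C @ tanh(W @ x[t] + ...)
# constrains the reservoir to the trained pattern's subspace, enabling recall.
```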
28

Pueo Ortega, Basilio, and Victoria Tur Viñes. "Sonido espacial para una inmersión audiovisual de alto realismo." Revista ICONO14 Revista científica de Comunicación y Tecnologías emergentes 7, no. 2 (July 1, 2009): 334–45. http://dx.doi.org/10.7195/ri14.v7i2.330.

Abstract:
Highly immersive video and audio systems are experiencing an important rise in realistic audiovisual environments. The visual and aural sensations they create in the audience approximate, with a high degree of similarity, what is perceived in the real environment they aim to recreate. To achieve this, the stimuli must contain all the necessary information, both spatial and temporal, to create the illusion that the audiovisual object is real. This article reviews the audiovisual systems that make this recreation possible, with special attention to surround-sound systems. The most promising 3-D audio technique, Wave Field Synthesis, is described, together with several fields of application for highly realistic audiovisual environments.
29

Stacey, Jemaine E., Christopher Atkin, Helen Henshaw, Katherine L. Roberts, Harriet A. Allen, Lucy V. Justice, and Stephen P. Badham. "Does audio-visual information result in improved health-related decision-making compared with audio-only or visual-only information? Protocol for a systematic review and meta-analysis." BMJ Open 12, no. 4 (April 2022): e059599. http://dx.doi.org/10.1136/bmjopen-2021-059599.

Abstract:
Introduction: Making health-related decisions can be difficult due to the amount and complexity of the information available. Audio-visual information may improve memory for health information, but whether audio-visual information can enhance health-related decisions has not been explored using quantitative methods. The objective of this systematic review is to understand how effective audio-visual information is for informing health-related decision-making compared with audio-only or visual-only information. Methods and analysis: Randomised controlled trials (RCTs) will be included if they involve both audio-visual and either audio-only or visual-only information provision and decision-making in a health setting. Studies will be excluded if they are not reported in English. Twelve databases will be searched, including Ovid MEDLINE, PubMed and PsycINFO. The Cochrane Risk of Bias tool (V.7) will be used to assess risk of bias in included RCTs. Results will be synthesised primarily using a meta-analysis; where quantitative data are not reported, a narrative synthesis will be used. Ethics and dissemination: No ethical issues are foreseen. Data will be disseminated via academic publication and conference presentations. Findings may also be published in scientific newsletters and magazines. This review is funded by the Economic and Social Research Council. PROSPERO registration number: CRD42021255725.
30

Böttcher, Niels, Héctor P. Martínez, and Stefania Serafin. "Procedural Audio in Computer Games Using Motion Controllers: An Evaluation on the Effect and Perception." International Journal of Computer Games Technology 2013 (2013): 1–16. http://dx.doi.org/10.1155/2013/371374.

Abstract:
A study has been conducted into whether the use of procedural audio affects players of computer games using motion controllers. It was investigated whether (1) players perceive a difference between detailed, interactive procedural audio and prerecorded audio, (2) the use of procedural audio affects their motor behavior, and (3) procedural audio affects their perception of control. Three experimental studies were devised, two consisting of game sessions and the third consisting of watching videos of gameplay. A skiing game controlled by a Nintendo Wii balance board and a sword-fighting game controlled by a Wii remote were each implemented with two versions of sound, one sample-based and the other procedural. The procedural models were designed using a perceptual approach and alternative combinations of well-known synthesis techniques. The experimental results showed that, whether actively playing or purely observing a video recording of a game, the majority of participants did not notice any difference in sound. It was also not possible to show that procedural audio caused any consistent change in motor behavior. In the skiing experiment, a portion of players perceived the control of the procedural version as more sensitive.
31

Sanfilippo, Dario, and Andrea Valle. "Feedback Systems: An Analytical Framework." Computer Music Journal 37, no. 2 (June 2013): 12–27. http://dx.doi.org/10.1162/comj_a_00176.

Abstract:
The use of feedback-based systems in the music domain dates back to the 1960s. Their applications span from music composition and sound organization to audio synthesis and processing, as the interest in feedback resulted both from theoretical reflection on cybernetics and system theory, and from practical experimentation on analog circuits. The advent of computers has made possible the implementation of complex theoretical systems in audio-domain oriented applications, in some sense bridging the gap between theory and practice in the analog domain, and further increasing the range of audio and musical applications of feedback systems. In this article we first sketch a minimal history of feedback in music; second, we briefly introduce feedback systems from a theoretical point of view; then we propose a set of features that characterize them from the perspective of music applications; finally, we propose a typology targeted at feedback systems used in the audio/musical domain and discuss some relevant examples.
32

Abbas, Wasfi. "Audio-Visual Poetry A Semiotic-Cultural Reading in Interactive Digital Poem (Vision of Hope)." Journal of Umm Al-Qura University for Language Sciences and Literature, no. 28 (August 1, 2021): 253–302. http://dx.doi.org/10.54940/ll78073145.

Abstract:
This research aims at reading the Saudi poet Mohamed Habibi's interactive digital poem 'Vision of Hope' through a digital semiotic-cultural approach, trying to depict the external and internal text relationships, identifying the type of rhetoric added by the smart movement of the image and the multifaceted diversity of voices, and recognizing the cultural dimension of depicting a human being as a thing. What adds to the importance of this research is pinpointing the role of the technical syntheses of figurative language in 'Vision of Hope', which served the poetic text by making it animatic and served the non-verbal texts by making them poetic. The research concludes that although the poem is technically simple, owing to the absence of hypertext, it is significantly rich because of technical syntheses that give each of its components the advantage of the others. In addition, although the poet was guided by his admiration for the beauty of the three musical compositions of Nusseir Shammah, some of them were not well employed. The poet's creative experience of interactive digital poetry is promising and deserves to be supported and sustained.
33

Hyun, Dong-Il, Young-Cheol Park, and Dae Hee Youn. "Improved Phase Synthesis for Parametric Stereo Audio Coding." Journal of the Institute of Electronics Engineers of Korea 50, no. 12 (December 25, 2013): 184–90. http://dx.doi.org/10.5573/ieek.2013.50.12.184.

34

Sharma, Garima, and Karthikeyan Umapathy. "Trends in Audio Texture Analysis, Synthesis, and Applications." Journal of the Audio Engineering Society 70, no. 3 (March 14, 2022): 108–27. http://dx.doi.org/10.17743/jaes.2021.0060.

35

Faller, C. "Parametric multichannel audio coding: synthesis of coherence cues." IEEE Transactions on Audio, Speech and Language Processing 14, no. 1 (January 2006): 299–310. http://dx.doi.org/10.1109/tsa.2005.854105.

36

Elfitri, I., Banu Günel, and A. M. Kondoz. "Multichannel Audio Coding Based on Analysis by Synthesis." Proceedings of the IEEE 99, no. 4 (April 2011): 657–70. http://dx.doi.org/10.1109/jproc.2010.2102310.

37

Ranjan, Rishabh, and Woon-Seng Gan. "Wave Field Synthesis: The Future of Spatial Audio." IEEE Potentials 32, no. 2 (March 2013): 17–23. http://dx.doi.org/10.1109/mpot.2012.2212051.

38

Vijayakumar, V., and C. Eswaran. "Synthesis of audio spectra using a diffraction model." Journal of the Acoustical Society of America 116, no. 4 (October 2004): 2594. http://dx.doi.org/10.1121/1.4785352.

39

Eswaran, C., and V. Vijayakumar. "Synthesis and dynamic behavior of diffracted audio spectra." Journal of the Acoustical Society of America 119, no. 5 (May 2006): 3441. http://dx.doi.org/10.1121/1.4786936.

40

Jia, Jia, Shen Zhang, Fanbo Meng, Yongxin Wang, and Lianhong Cai. "Emotional Audio-Visual Speech Synthesis Based on PAD." IEEE Transactions on Audio, Speech, and Language Processing 19, no. 3 (March 2011): 570–82. http://dx.doi.org/10.1109/tasl.2010.2052246.

41

Vijayakumar, V., and C. Eswaran. "Synthesis of audio spectra using a diffraction model." Journal of the Acoustical Society of America 120, no. 6 (December 2006): EL70—EL77. http://dx.doi.org/10.1121/1.2364470.

42

Sheikhzadeh-Nadjar, Hamid. "Method and system for real time audio synthesis." Journal of the Acoustical Society of America 121, no. 5 (2007): 2492. http://dx.doi.org/10.1121/1.2739188.

43

Kakihara, Kiyotsugu, Satoshi Nakamura, and Kiyohiro Shikano. "Facial movement synthesis by HMM from audio speech." Electronics and Communications in Japan (Part II: Electronics) 85, no. 4 (March 13, 2002): 37–46. http://dx.doi.org/10.1002/ecjb.10044.

44

Twomey, Robert, and Michael McCrea. "Transforming the Commonplace through Machine Perception: Light Field Synthesis and Audio Feature Extraction in the Rover Project." Leonardo 50, no. 4 (August 2017): 400–408. http://dx.doi.org/10.1162/leon_a_01458.

Abstract:
Rover is a mechatronic imaging device inserted into quotidian space, transforming the sights and sounds of the everyday through its peculiar modes of machine perception. Using computational light field photography and machine listening, it creates a kind of cinema following the logic of dreams: suspended but mobile, familiar yet infinitely variable in detail. Rover draws on diverse traditions of robotic exploration, landscape and still-life depiction, and audio field recording to create a hybrid form between photography and cinema. This paper describes the mechatronic, machine perception, and audio-visual synthesis techniques developed for the piece.
45

Muhammad Sufyan As-Tsauri. "IMPLEMENTASI METODE TAMI OTAKA DALAM PEMBELAJARAN HAFALAN AL-QUR’AN DI TK PINTAR KOTA BANDUNG." Paedagogia: Jurnal Pendidikan 10, no. 1 (April 6, 2021): 67–84. http://dx.doi.org/10.24239/pdg.vol10.iss1.143.

Abstract:
The purpose of this research is to examine the implementation, evaluation, and advantages and disadvantages of the Tami Otaka method, a memorization method that uses the ability of the right brain: students memorize the Qur'an while moving their hands according to the meaning of the verse they are reading. The research is a field study using a qualitative approach. Its object is TK PINTAR Bandung, including the principal, the tahfidz teacher, the curriculum, and the students. To obtain the research data, the researcher used observation, interview, and documentation techniques, analyzed through descriptive qualitative analysis, so the results of the study are expressed in words, both written and spoken. The results are as follows: first, to achieve the learning objectives, TK PINTAR Bandung uses the Tami Otaka method supported by audio, visual and audio-visual media and the teacher's body movements. Second, the curriculum used is a synthesis of the national curriculum and the distinctive curriculum of TK PINTAR Bandung. Third, learning is carried out in groups. Fourth, evaluation is given continuously every day, with an additional evaluation every semester.
46

Ikeshiro, Ryo. "Audification and Non-Standard Synthesis in Construction in Self." Organised Sound 19, no. 1 (February 26, 2014): 78–89. http://dx.doi.org/10.1017/s1355771813000435.

Abstract:
The author's Construction in Self (2009) belongs to the interdisciplinary context of auditory display/music. Its use of data at audio rate could be described as both audification and non-standard synthesis. The possibilities of audio-rate data use and the relation between the above descriptions are explored, and then used to develop a conceptual and theoretical basis of the work.Vickers and Hogg's term ‘indexicality’ is used to contrast audio with control rate. The conceptual implications of its use within the digital medium and the possibility for the formation of higher-order structures are discussed. Grond and Hermann's notion of ‘familiarity’ is used to illustrate the difference between audification and non-standard synthesis, and the contexts of auditory display and music respectively. Familiarity is given as being determined by Dombois and Eckel's categories of data. Kubisch's Electrical Walks, Xenakis's GENDYN and the audification of seismograms are used as examples. Bogost's concept of the alien is introduced, and its relevance to the New Aesthetic and Algorave are discussed. Sound examples from Construction in Self are used to demonstrate the varying levels of familiarity or noise possible and suggested as providing a way of bridging the divide between institutional and underground electronic music.
47

Sheng, Xiao Wei, Jun Wei Han, and Ming Hui Hao. "Sound Analysis and Synthesis for Audio Simulation System of Flight Simulator." Advanced Materials Research 748 (August 2013): 708–12. http://dx.doi.org/10.4028/www.scientific.net/amr.748.708.

Abstract:
By simulating the visual, auditory, motion and force sensations of flight, a flight simulator can create a realistic flight environment on the ground; flight simulators therefore play an important role in pilot training. The audio simulation system is a key component of the flight simulator and has a direct impact on the realism and immersion of the simulation. In this paper, we briefly introduce the development procedure of the audio simulation system. The software implementation and its key development technologies are the main focus, to show why sound sources must be extracted from the original cockpit recordings. Based on the development method and practical recording conditions, we discuss short-time Fourier analysis and synthesis and linear-prediction analysis and synthesis of sound in detail. The objective of using these technologies is to extract sound sources from the original recordings, as the preparatory step for sound simulation.
48

Kendall, Gary S., Christopher Haworth, and Rodrigo F. Cádiz. "Sound Synthesis with Auditory Distortion Products." Computer Music Journal 38, no. 4 (December 2014): 5–23. http://dx.doi.org/10.1162/comj_a_00265.

Abstract:
This article describes methods of sound synthesis based on auditory distortion products, often called combination tones. In 1856, Helmholtz was the first to identify sum and difference tones as products of auditory distortion. Today this phenomenon is well studied in the context of otoacoustic emissions, and the “distortion” is understood as a product of what is termed the cochlear amplifier. These tones have had a rich history in the music of improvisers and drone artists. Until now, the use of distortion tones in technological music has largely been rudimentary and dependent on very high amplitudes in order for the distortion products to be heard by audiences. Discussed here are synthesis methods to render these tones more easily audible and lend them the dynamic properties of traditional acoustic sound, thus making auditory distortion a practical domain for sound synthesis. An adaptation of single-sideband synthesis is particularly effective for capturing the dynamic properties of audio inputs in real time. Also presented is an analytic solution for matching up to four harmonics of a target spectrum. Most interestingly, the spatial imagery produced by these techniques is very distinctive, and over loudspeakers the normal assumptions of spatial hearing do not apply. Audio examples are provided that illustrate the discussion.
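As a toy illustration of the underlying phenomenon (not the article's single-sideband method), the sketch below generates two sine "primaries"; played loudly over a loudspeaker, the cochlea itself produces combination tones such as f2 − f1 and 2f1 − f2. The frequencies and levels here are illustrative assumptions.

```python
import numpy as np

sr, dur = 44100, 2.0
t = np.arange(int(sr * dur)) / sr
f1, f2 = 2000.0, 2400.0          # cubic difference tone 2*f1 - f2 = 1600 Hz,
                                 # quadratic difference tone f2 - f1 = 400 Hz
primaries = 0.5 * (np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t))
# Write `primaries` to a file or sound device; the combination tones arise
# in the listener's ear, not in this signal.
```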
49

Planinec, Vedran, Kristian Jambrošić, Petar Franček, and Marko Horvat. "Virtual sound source perception challenges of binaural audio systems with head-tracking." INTER-NOISE and NOISE-CON Congress and Conference Proceedings 265, no. 6 (February 1, 2023): 1133–40. http://dx.doi.org/10.3397/in_2022_0156.

Abstract:
With recent leaps in spatial audio technology, the use of binaural head-tracking for spatial audio can revolutionize the way music and audio are experienced. Moreover, binaural head-tracking is frequently used as an audio reproduction system for researching noise-related perception in laboratories. Although this technology has improved significantly in recent years, the signal processing of binaural audio often suffers from problems such as inadequate sound-source externalization and unacceptably high system response times, which limit the usability of the technology during fast head rotation. In this paper, the results of an experiment are presented in which test subjects determine the direction of a virtual sound source in the horizontal plane under multiple parameter changes. The experiment is conducted in a controlled environment, in a listening room with known acoustical properties. The varied parameters include the hardware head-tracker (a commercial unit and a head-tracker based on a simple embedded system), various software solutions for real-time binaural synthesis, and the Head-Related Transfer Functions used. The problem of externalizing a virtual sound source over headphones is discussed, and possible solutions are given. The results of the experiment and the recorded data are presented.
50

Office, Editorial. "Audit Research Summaries." Maandblad Voor Accountancy en Bedrijfseconomie 90, no. 5 (May 10, 2016): 214–18. http://dx.doi.org/10.5117/mab.90.31307.

Abstract:
This month we again present several "Audit Research Summaries" from the database of the American Accounting Association (www.auditingresearchsummaries.org). The first summary concerns the study titled "Auditing Related Party Transactions: A Literature Overview and Research Synthesis". This study provides an overview of research into the auditing of Related Party Transactions (RPTs). One interesting finding is that in cases of fraud that an auditor failed to detect, a considerable number of RPTs had often taken place. It is also observed that when RPTs are accompanied by weak corporate governance, the risk of fraud is relatively high.