Journal articles on the topic "Sound synthesis"

Consult the top 50 journal articles for your research on the topic "Sound synthesis".

Next to each source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication in PDF format and read its abstract online whenever it is available in the metadata.

Explore journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

KRONLAND-MARTINET, R., Ph. GUILLEMAIN and S. YSTAD. "Modelling of natural sounds by time–frequency and wavelet representations". Organised Sound 2, no. 3 (November 1997): 179–91. http://dx.doi.org/10.1017/s1355771898009030.

Abstract:
Sound modelling is an important part of the analysis–synthesis process since it combines sound processing and algorithmic synthesis within the same formalism. Its aim is to make sound simulators by synthesis methods based on signal models or physical models, the parameters of which are directly extracted from the analysis of natural sounds. In this article the successive steps for making such systems are described. These are numerical synthesis and sound generation methods, analysis of natural sounds, particularly time–frequency and time–scale (wavelet) representations, extraction of pertinent parameters, and the determination of the correspondence between these parameters and those corresponding to the synthesis models. Additive synthesis, nonlinear synthesis, and waveguide synthesis are discussed.
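
For illustration, a minimal sketch of the additive resynthesis step described above, assuming partial frequency and amplitude trajectories have already been extracted by a time-frequency or wavelet analysis (the trajectories below are invented test data, not values from the paper):

```python
import numpy as np

fs = 44100
n = fs * 2                                     # two seconds of output
t_frames = np.linspace(0.0, 2.0, 200)          # analysis frame times

# Stand-in partial trajectories; a real analysis/synthesis system would
# extract these from a time-frequency or wavelet representation.
freqs = [220.0 * k * (1.0 + 0.003 * np.sin(2*np.pi*1.5*t_frames)) for k in range(1, 6)]
amps = [np.exp(-t_frames * k) / k for k in range(1, 6)]

t = np.arange(n) / fs
out = np.zeros(n)
for f_frames, a_frames in zip(freqs, amps):
    f = np.interp(t, t_frames, f_frames)       # per-sample frequency
    a = np.interp(t, t_frames, a_frames)       # per-sample amplitude
    phase = 2*np.pi*np.cumsum(f) / fs          # integrate frequency -> phase
    out += a * np.sin(phase)
out /= np.max(np.abs(out))                     # normalise
```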

2

Novkovic, Dragan, Marko Peljevic and Mateja Malinovic. "Synthesis and analysis of sounds developed from the Bose-Einstein condensate: Theory and experimental results". Muzikologija, no. 24 (2018): 95–109. http://dx.doi.org/10.2298/muz1824095n.

Abstract:
Two seemingly incompatible worlds of quantum physics and acoustics have their meeting point in experiments with the Bose-Einstein Condensate. From the very beginning, the Quantum Music project was based on the idea of converting the acoustic phenomena of quantum physics that appear in experiments into the sound domain accessible to the human ear. The first part of this paper describes the experimental conditions in which these acoustic phenomena occur. The second part of the paper describes the process of sound synthesis which was used to generate final sounds. Sound synthesis was based on the use of two types of basic data: theoretical formulas and the results of experiments with the Bose-Einstein condensate. The process of sound synthesis based on theoretical equations was conducted following the principles of additive synthesis, realized using the Java Script and Max MSP software. The synthesis of sounds based on the results of experiments was done using the MatLab software. The third part or the article deals with the acoustic analysis of the generated sounds, indicating some of the acoustic phenomena that have emerged. Also, we discuss the possible ways of using such sounds in the process of composing and performing contemporary music.

3

Miner, Nadine E., Timothy E. Goldsmith and Thomas P. Caudell. "Perceptual Validation Experiments for Evaluating the Quality of Wavelet-Synthesized Sounds". Presence: Teleoperators and Virtual Environments 11, no. 5 (October 2002): 508–24. http://dx.doi.org/10.1162/105474602320935847.

Abstract:
This paper describes three psychoacoustic experiments that evaluated the perceptual quality of sounds generated from a new wavelet-based synthesis technique. The synthesis technique provides a method for modeling and synthesizing perceptually compelling sound. The experiments define a methodology for evaluating the effectiveness of any synthesized sound. An identification task and a context-based rating task evaluated the perceptual quality of individual sounds. These experiments confirmed that the wavelet technique synthesizes a wide variety of compelling sounds from a small model set. The third experiment obtained sound similarity ratings. Psychological scaling methods were applied to the similarity ratings to generate both spatial and network models of the perceptual relations among the synthesized sounds. These analysis techniques helped to refine and extend the sound models. Overall, the studies provided a framework to validate synthesized sounds for a variety of applications including virtual reality and data sonification systems.

4

Min, Dongki, Buhm Park and Junhong Park. "Artificial Engine Sound Synthesis Method for Modification of the Acoustic Characteristics of Electric Vehicles". Shock and Vibration 2018 (2018): 1–8. http://dx.doi.org/10.1155/2018/5209207.

Abstract:
Sound radiation from electric motor-driven vehicles is negligibly small compared to sound radiation from internal combustion engine automobiles. When running on a local road, an artificial sound is required as a warning signal for the safety of pedestrians. In this study, an engine sound was synthesized by combining artificial mechanical and combustion sounds. The mechanical sounds were made by summing harmonic components representing sounds from rotating engine cranks. The harmonic components, including not only magnitude but also phase due to frequency, were obtained by the numerical integration method. The combustion noise was simulated by random sounds with similar spectral characteristics to the measured value and its amplitude was synchronized by the rotating speed. Important parameters essential for the synthesized sound to be evaluated as radiation from actual engines were proposed. This approach enabled playing of sounds for arbitrary engines. The synthesized engine sounds were evaluated for recognizability of vehicle approach and sound impression through auditory experiments.
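
A hedged sketch of the construction the abstract describes: a sum of engine-order harmonics plus random "combustion" noise whose amplitude is synchronised with the rotation. The orders, weights, and RPM value below are invented, and the paper's phase model is not reproduced.

```python
import numpy as np

fs = 44100
dur = 3.0
t = np.arange(int(fs * dur)) / fs

rpm = 1800.0                       # assumed constant engine speed
f0 = rpm / 60.0                    # crank rotation frequency in Hz

# Mechanical part: sum of engine-order harmonics (weights are invented).
orders = [1, 2, 3, 4, 6, 8]
weights = [1.0, 0.7, 0.5, 0.35, 0.2, 0.1]
mech = sum(w * np.sin(2*np.pi*k*f0*t + 0.3*k) for k, w in zip(orders, weights))

# "Combustion" part: low-passed noise whose amplitude is synchronised
# with the firing rate (here simply modulated at twice the rotation rate).
rng = np.random.default_rng(0)
noise = rng.standard_normal(len(t))
noise = np.convolve(noise, np.ones(32) / 32, mode="same")   # crude low-pass
firing_env = 0.5 * (1.0 + np.sin(2*np.pi*2*f0*t))
comb = 0.4 * firing_env * noise

engine = mech + comb
engine /= np.max(np.abs(engine))
```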

5

Miner, Nadine E. and Thomas P. Caudell. "A Wavelet Synthesis Technique for Creating Realistic Virtual Environment Sounds". Presence: Teleoperators and Virtual Environments 11, no. 5 (October 2002): 493–507. http://dx.doi.org/10.1162/105474602320935838.

Abstract:
This paper describes a new technique for synthesizing realistic sounds for virtual environments. The four-phase technique described uses wavelet analysis to create a sound model. Parameters are extracted from the model to provide dynamic sound synthesis control from a virtual environment simulation. Sounds can be synthesized in real time using the fast inverse wavelet transform. Perceptual experiment validation is an integral part of the model development process. This paper describes the four-phase process for creating the parameterized sound models. Several developed models and perceptual experiments for validating the sound synthesis veracity are described. The developed models and results demonstrate proof of the concept and illustrate the potential of this approach.
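
The resynthesis step relies on the fast inverse wavelet transform. A minimal sketch of that idea using PyWavelets, in which a discrete wavelet decomposition stands in for the paper's analysis stage and per-band gains stand in for its model parameters (wavelet choice, level, and gain values are invented):

```python
import numpy as np
import pywt  # PyWavelets

fs = 44100
rng = np.random.default_rng(1)
# Stand-in "recorded" sound: a decaying noise burst (a real use would load a file).
x = rng.standard_normal(fs) * np.exp(-np.linspace(0.0, 6.0, fs))

# Analysis: multi-level discrete wavelet decomposition.
coeffs = pywt.wavedec(x, "db8", level=6)

# "Model parameters": one gain per sub-band. Varying these reshapes the
# timbre while keeping the overall temporal structure of the source.
gains = [1.0, 0.2, 0.4, 0.8, 1.5, 1.5, 0.6]   # invented values
coeffs = [g * c for g, c in zip(gains, coeffs)]

# Synthesis: fast inverse wavelet transform.
y = pywt.waverec(coeffs, "db8")
y = y[: len(x)] / np.max(np.abs(y))
```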

6

MANDELIS, JAMES and PHIL HUSBANDS. "GENOPHONE: EVOLVING SOUNDS AND INTEGRAL PERFORMANCE PARAMETER MAPPINGS". International Journal on Artificial Intelligence Tools 15, no. 04 (August 2006): 599–621. http://dx.doi.org/10.1142/s0218213006002837.

Abstract:
This paper explores the application of evolutionary techniques to the design of novel sounds and their characteristics during performance. It is based on the "selective breeding" paradigm and as such dispenses with the need for detailed knowledge of the Sound Synthesis Techniques (SSTs) involved in order to design sounds that are novel and of musical interest. This approach has been used successfully on several SSTs, therefore validating it as an Adaptive Sound Meta-synthesis Technique. Additionally, mappings between the control and the parametric space are evolved as part of the sound setup. These mappings are used during performance.

7

Serquera, Jaime and Eduardo Reck Miranda. "Histogram Mapping Synthesis: A Cellular Automata-Based Technique for Flexible Sound Design". Computer Music Journal 38, no. 4 (December 2014): 38–52. http://dx.doi.org/10.1162/comj_a_00267.

Abstract:
Histogram mapping synthesis (HMS) is a new technique for sound design based on cellular automata (CA). Cellular automata are computational models that create moving images. In the context of HMS, and based on a novel digital signal processing approach, these images are analyzed by histogram measurements, giving a sequence of histograms as a result. In a nutshell, these histogram sequences are converted into spectrograms that, in turn, are rendered into sounds. Unlike other CA-based systems, the HMS mapping process is not intuition-based, nor is it totally arbitrary; it is based instead on resemblances discovered between the components of the histogram sequences and the spectral components of the sounds. Our main concern is to address the problem of the sound-design limitations of synthesis techniques based on CA. These limitations stem, fundamentally, from the unpredictable and autonomous nature of these computational models. As a result, one of the main advantages of HMS is that it affords more control over the sound-design process than other sound-synthesis techniques using CA. The timbres that we have designed with HMS range from those that are novel to those that are imitations of sounds produced by acoustic means. All the sounds obtained present dynamic features, and many of them, including some of those that are novel, retain important characteristics of sounds produced by acoustic means.

8

Corbella, Maurizio and Anna Katharina Windisch. "Sound Synthesis, Representation and Narrative Cinema in the Transition to Sound (1926-1935)". Cinémas 24, no. 1 (February 26, 2014): 59–81. http://dx.doi.org/10.7202/1023110ar.

Abstract:
Since the beginnings of western media culture, sound synthesis has played a major role in articulating cultural notions of the fantastic and the uncanny. As a counterpart to sound reproduction, sound synthesis operated in the interstices of the original/copy correspondence and prefigured the construction of a virtual reality through the generation of novel sounds apparently lacking any equivalent with the acoustic world. Experiments on synthetic sound crucially intersected cinema’s transition to synchronous sound in the late 1920s, thus configuring a particularly fertile scenario for the redefinition of narrative paradigms and the establishment of conventions for sound film production. Sound synthesis can thus be viewed as a structuring device of such film genres as horror and science fiction, whose codification depended on the constitution of synchronized sound film. More broadly, sound synthesis challenged the basic implications of realism based on the rendering of speech and the construction of cinematic soundscapes.

9

WRIGHT, MATTHEW, JAMES BEAUCHAMP, KELLY FITZ, XAVIER RODET, AXEL RÖBEL, XAVIER SERRA and GREGORY WAKEFIELD. "Analysis/synthesis comparison". Organised Sound 5, no. 3 (December 2000): 173–89. http://dx.doi.org/10.1017/s1355771800005070.

Abstract:
We compared six sound analysis/synthesis systems used for computer music. Each system analysed the same collection of twenty-seven varied input sounds, and output the results in Sound Description Interchange Format (SDIF). We describe each system individually then compare the systems in terms of availability, the sound model(s) they use, interpolation models, noise modelling, the mutability of various sound models, the parameters that must be set to perform analysis, and characteristic artefacts. Although we have not directly compared the analysis results among the different systems, our work has made such a comparison possible.

10

Yuan, J., X. Cao, D. Wang, J. Chen and S. Wang. "Research on Bus Interior Sound Quality Based on Masking Effects". Fluctuation and Noise Letters 17, no. 04 (September 14, 2018): 1850037. http://dx.doi.org/10.1142/s0219477518500372.

Abstract:
Masking effect is a very common psychoacoustic phenomenon, which occurs when there is a suitable sound that masks the original sound. In this paper, we will discuss bus interior sound quality based on the masking effects and the appropriate masking sound selection to mask the original sounds inside a bus. We developed three subjective evaluation indexes which are noisiness, acceptability and anxiety. These were selected to reflect passengers’ feelings more accurately when they are subject to the masking sound. To analyze the bus interior sound quality with various masking sounds, the subjective–objective synthesis evaluation model was constructed using fuzzy mathematics. According to the study, the appropriate masking sound can mask the bus interior noise and optimize the bus interior sound quality.

11

Sun, Zhongbo, Jiajia Jiang, Yao Li, Chunyue Li, Zhuochen Li, Xiao Fu and Fajie Duan. "An automated piecewise synthesis method for cetacean tonal sounds based on time-frequency spectrogram". Journal of the Acoustical Society of America 151, no. 6 (June 2022): 3758–69. http://dx.doi.org/10.1121/10.0011551.

Abstract:
Bionic signal waveform design plays an important role in biological research, as well as bionic underwater acoustic detection and communication. Most conventional methods cannot construct high-similarity bionic waveforms to match complex cetacean sounds or easily modify the time-frequency structure of the synthesized bionic signals. In our previous work, we proposed a synthesis and modification method for cetacean tonal sounds, but it requires a lot of manpower to construct each bionic signal segment to match the tonal sound contour. To solve these problems, an automated piecewise synthesis method is proposed. First, based on the time-frequency spectrogram of each tonal sound, the fundamental contour and each harmonic contour of the tonal sound are automatically recognized and extracted. Then, based on the extracted contours, four sub power frequency modulation bionic signal models are combined to match cetacean sound contours. Finally, combining the envelopes of the fundamental frequency and each harmonic, the synthesized bionic signal is obtained. Experimental results show that the Pearson correlation coefficients (PCCs) between all true cetacean sounds and their corresponding bionic signals are higher than 0.95, demonstrating that the proposed method can automatically imitate all kinds of simple and complex cetacean tonal sounds with high similarity.
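
A minimal sketch of contour-driven synthesis in the spirit of the method described above: an invented fundamental-frequency contour stands in for one extracted from a spectrogram, and the signal is built by integrating that contour into a phase function for the fundamental and its harmonics (the paper's four sub-models and envelope extraction are not reproduced):

```python
import numpy as np

fs = 96000
dur = 1.0
t = np.arange(int(fs * dur)) / fs

# Stand-in fundamental-frequency contour; a real system would extract it
# from the time-frequency spectrogram of a recorded whistle.
f0 = 6000.0 + 3000.0 * np.sin(2*np.pi*1.2*t) + 2000.0 * t

# Piecewise synthesis of the fundamental plus two harmonics, each with an
# invented relative level; phase is obtained by integrating the contour.
phase = 2*np.pi*np.cumsum(f0) / fs
levels = [1.0, 0.3, 0.1]
whistle = sum(a * np.sin(k * phase) for k, a in enumerate(levels, start=1))
whistle /= np.max(np.abs(whistle))
```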

12

Xie, Liping, Chihua Lu, Zhien Liu, Yawei Zhu and Weizhi Song. "A method of generating car sounds based on granular synthesis algorithm". Noise Control Engineering Journal 70, no. 4 (July 1, 2022): 384–93. http://dx.doi.org/10.3397/1/377031.

Abstract:
The active sound generation technique for automobiles, where car sounds can be synthesized by means of electronic sound production, is one of the effective methods to achieve the target sound. A method for active sound generation of automobile based on the granular synthesis algorithm is put forward here to avoid the broadband beating phenomena that occur due to the mismatch of parameters of sound granular signals. The comparison of the function expression of sound signal and Hilbert transformation is performed based on the principle of overlap and add; moreover, those parameters (phase, frequency and amplitude) of sound signals are interpolated by means of the Hermite interpolation algorithm which can ensure the continuity of the phase, frequency and amplitude curves. Thus, the transition audio is constructed by means of the sound signal function here to splice the adjacent sound granules. Our simulations show that our method can be applied to solve the current broadband beating issue for splicing sound granules and achieve natural continuity of synthesized car sounds. The subjective test results also indicate that our transition audio can produce high quality audio restitution.
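
A hedged sketch of the splicing idea: two sound granules are joined by a transition segment whose amplitude and instantaneous frequency follow cubic Hermite curves, with phase kept continuous by integrating the interpolated frequency. Granule frequencies, amplitudes, and durations are invented; this is not the paper's implementation.

```python
import numpy as np
from scipy.interpolate import CubicHermiteSpline

fs = 48000

def granule(freq, amp, dur, phase0=0.0):
    t = np.arange(int(dur * fs)) / fs
    return amp * np.sin(2*np.pi*freq*t + phase0)

# Two granules to splice (hypothetical engine-order tones).
g1_f, g1_a = 200.0, 0.8
g2_f, g2_a = 260.0, 0.5
g1 = granule(g1_f, g1_a, 0.5)

# Transition segment: interpolate frequency and amplitude with zero-slope
# Hermite curves so both ends join the granules smoothly.
trans_len = int(0.05 * fs)
knots = np.array([0.0, trans_len / fs])
f_curve = CubicHermiteSpline(knots, [g1_f, g2_f], [0.0, 0.0])
a_curve = CubicHermiteSpline(knots, [g1_a, g2_a], [0.0, 0.0])
t = np.arange(trans_len) / fs
inst_f = f_curve(t)
inst_a = a_curve(t)

# Phase continuity: start from the end phase of g1 and integrate the
# interpolated instantaneous frequency.
phase0 = 2*np.pi*g1_f * (len(g1) / fs)
phase = phase0 + 2*np.pi*np.cumsum(inst_f) / fs
transition = inst_a * np.sin(phase)

# The end phase of the transition seeds the second granule.
g2 = granule(g2_f, g2_a, 0.5, phase0=phase[-1])
out = np.concatenate([g1, transition, g2])
```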

13

Hadjakos, Aristotelis. "Gaussian Process Synthesis of Artificial Sounds". Applied Sciences 10, no. 5 (March 5, 2020): 1781. http://dx.doi.org/10.3390/app10051781.

Abstract:
In this paper, we propose Gaussian Process (GP) sound synthesis. A GP is used to sample random continuous functions, which are then used for wavetable or waveshaping synthesis. The shape of the sampled functions is controlled with the kernel function of the GP. Sampling multiple times from the same GP generates perceptually similar but non-identical sounds. Since there are many ways to choose the kernel function and its parameters, an interface aids the user in sound selection. The interface is based on a two-dimensional visualization of the sounds grouped by their similarity as judged by a t-SNE analysis of their Mel Frequency Cepstral Coefficient (MFCC) representations.
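
A minimal sketch of GP wavetable synthesis as described above, assuming a squared-exponential (RBF) kernel evaluated on the unit circle so the sampled function is periodic; the kernel, lengthscale, and table size are invented, and the paper's t-SNE/MFCC selection interface is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(42)
fs = 44100
table_size = 1024

# RBF kernel on circular distance so the wavetable wraps without a step.
pos = np.linspace(0.0, 2*np.pi, table_size, endpoint=False)
lengthscale = 0.5                                  # controls smoothness / brightness
d = np.abs(pos[:, None] - pos[None, :])
d = np.minimum(d, 2*np.pi - d)                     # circular distance
K = np.exp(-0.5 * (d / lengthscale) ** 2) + 1e-9 * np.eye(table_size)

# Draw one random continuous function from the GP prior.
wavetable = rng.multivariate_normal(np.zeros(table_size), K)
wavetable /= np.max(np.abs(wavetable))

# Classic wavetable playback at 220 Hz.
dur, f = 2.0, 220.0
idx = (np.arange(int(fs * dur)) * f * table_size / fs) % table_size
tone = wavetable[idx.astype(int)]
```

Sampling again from the same kernel yields a perceptually similar but non-identical tone, which is the property the abstract highlights.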

14

Buxton, Rachel T., Amber L. Pearson, Claudia Allou, Kurt Fristrup and George Wittemyer. "A synthesis of health benefits of natural sounds and their distribution in national parks". Proceedings of the National Academy of Sciences 118, no. 14 (March 22, 2021): e2013097118. http://dx.doi.org/10.1073/pnas.2013097118.

Abstract:
Parks are important places to listen to natural sounds and avoid human-related noise, an increasingly rare combination. We first explore whether and to what degree natural sounds influence health outcomes using a systematic literature review and meta-analysis. We identified 36 publications examining the health benefits of natural sound. Meta-analyses of 18 of these publications revealed aggregate evidence for decreased stress and annoyance (g = −0.60, 95% CI = −0.97, −0.23) and improved health and positive affective outcomes (g = 1.63, 95% CI = 0.09, 3.16). Examples of beneficial outcomes include decreased pain, lower stress, improved mood, and enhanced cognitive performance. Given this evidence, and to facilitate incorporating public health in US national park soundscape management, we then examined the distribution of natural sounds in relation to anthropogenic sound at 221 sites across 68 parks. National park soundscapes with little anthropogenic sound and abundant natural sounds occurred at 11.3% of the sites. Parks with high visitation and urban park sites had more anthropogenic sound, yet natural sounds associated with health benefits also were frequent. These included animal sounds (audible for a mean of 59.3% of the time, SD: 23.8) and sounds from wind and water (mean: 19.2%, SD: 14.8). Urban and other parks that are extensively visited offer important opportunities to experience natural sounds and are significant targets for soundscape conservation to bolster health for visitors. Our results assert that natural sounds provide important ecosystem services, and parks can bolster public health by highlighting and conserving natural soundscapes.

15

Maruyama, Hironori, Kosuke Okada and Isamu Motoyoshi. "A two-stage spectral model for sound texture perception: Synthesis and psychophysics". i-Perception 14, no. 1 (January 2023): 204166952311573. http://dx.doi.org/10.1177/20416695231157349.

Abstract:
The natural environment is filled with a variety of auditory events such as wind blowing, water flowing, and fire crackling. It has been suggested that the perception of such textural sounds is based on the statistics of the natural auditory events. Inspired by a recent spectral model for visual texture perception, we propose a model that can describe the perceived sound texture only with the linear spectrum and the energy spectrum. We tested the validity of the model by using synthetic noise sounds that preserve the two-stage amplitude spectra of the original sound. Psychophysical experiment showed that our synthetic noises were perceived as like the original sounds for 120 real-world auditory events. The performance was comparable with the synthetic sounds produced by McDermott-Simoncelli's model which considers various classes of auditory statistics. The results support the notion that the perception of natural sound textures is predictable by the two-stage spectral signals.
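
A hedged sketch of only the first stage of the proposed model: noise is given the long-term linear amplitude spectrum of an original sound while phases are randomised. The paper's model additionally matches the energy (second-stage) spectrum, which is omitted here; the "original" below is stand-in filtered noise rather than a recording.

```python
import numpy as np

rng = np.random.default_rng(7)
fs = 44100
n = fs * 2

# Stand-in "original" texture: brownian-ish noise (a real test would load a recording).
orig = np.cumsum(rng.standard_normal(n))
orig -= orig.mean()

# First-stage statistic: the long-term linear amplitude spectrum of the original.
target_mag = np.abs(np.fft.rfft(orig))

# Synthesis: impose the target magnitude spectrum on random phases.
phases = np.exp(1j * rng.uniform(0.0, 2*np.pi, len(target_mag)))
phases[0] = phases[-1] = 1.0                # keep DC and Nyquist bins real
synth = np.fft.irfft(target_mag * phases, n=n)
synth /= np.max(np.abs(synth))
```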

16

PEARSON, MARK. "TAO: a physical modelling system and related issues". Organised Sound 1, no. 1 (April 1996): 43–50. http://dx.doi.org/10.1017/s1355771896000167.

Abstract:
This paper describes TAO, a system for sound synthesis by physical modelling based on a new technique called cellular sound synthesis (CSS). The system provides a general mechanism for constructing an infinite variety of virtual instruments, and does so by providing a virtual acoustic material, elastic in nature, whose physical characteristics can be fine-tuned to produce different timbres. A wide variety of sounds such as plucked, hit, bowed and scraped sounds can be produced, all having natural physical and spatial qualities. Some of the musical and philosophical issues considered to be most important during the design and development of the system are touched upon, and the main features of the system are explained with reference to practical examples. Advantages and disadvantages of the synthesis technique and the prototype system are discussed, together with suggestions for future improvements.

17

Miranda, Eduardo R. and John Matthias. "Music Neurotechnology for Sound Synthesis: Sound Synthesis with Spiking Neuronal Networks". Leonardo 42, no. 5 (October 2009): 439–42. http://dx.doi.org/10.1162/leon.2009.42.5.439.

Abstract:
Music neurotechnology is a new research area emerging at the crossroads of neurobiology, engineering sciences and music. Examples of ongoing research into this new area include the development of brain-computer interfaces to control music systems and systems for automatic classification of sounds informed by the neurobiology of the human auditory apparatus. The authors introduce neurogranular sampling, a new sound synthesis technique based on spiking neuronal networks (SNN). They have implemented a neurogranular sampler using the SNN model developed by Izhikevich, which reproduces the spiking and bursting behavior of known types of cortical neurons. The neurogranular sampler works by taking short segments (or sound grains) from sound files and triggering them when any of the neurons fire.
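
A minimal sketch of the neurogranular idea: a small population of Izhikevich regular-spiking neurons is simulated at 1 ms resolution, and every spike triggers a short windowed grain copied from a source signal. The input current, neuron count, and grain length are invented, and the source sound here is a stand-in noise burst rather than a sound file.

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 44100
dur = 2.0
n_out = int(fs * dur)

# Source material: a stand-in windowed noise burst.
source = rng.standard_normal(fs) * np.hanning(fs)
grain_len = int(0.03 * fs)
env = np.hanning(grain_len)

# Izhikevich "regular spiking" neurons, simulated with 1 ms steps.
n_neurons = 8
a, b, c, d = 0.02, 0.2, -65.0, 8.0
v = np.full(n_neurons, -65.0)
u = b * v
out = np.zeros(n_out + grain_len)

for step in range(int(dur * 1000)):
    I = 4.0 + 3.0 * rng.standard_normal(n_neurons)    # noisy input current
    v += 0.5 * (0.04*v*v + 5*v + 140 - u + I)         # two half-steps per ms
    v += 0.5 * (0.04*v*v + 5*v + 140 - u + I)         # for numerical stability
    u += a * (b*v - u)
    fired = v >= 30.0
    v[fired] = c
    u[fired] += d
    # Each spike triggers a windowed grain taken from a random source position.
    for _ in range(int(fired.sum())):
        src = rng.integers(0, len(source) - grain_len)
        pos = int(step / 1000.0 * fs)
        out[pos:pos + grain_len] += env * source[src:src + grain_len]

out = out[:n_out] / (np.max(np.abs(out)) + 1e-9)
```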

18

Elie, Benjamin, Benjamin Cotté and Xavier Boutillon. "Physically-based sound synthesis software for Computer-Aided-Design of piano soundboards". Acta Acustica 6 (2022): 30. http://dx.doi.org/10.1051/aacus/2022024.

Abstract:
The design of pianos is mainly based on empirical knowledge due to the lack of a simple tool that could predict sound changes induced by modifications of the geometry and/or the mechanical properties of the soundboard. We introduce the concept of Sound Computer-Aided Design through the framework of a program that is intended to simulate the acoustic results of virtual pianos. The calculation of the sound is split into four modules that compute respectively the modal basis of the stiffened soundboard, the string dynamics excited by the hammer, the soundboard dynamics excited by the string vibration, and the sound radiation. An exact resemblance between synthesized and natural sounds is not the primary purpose of the software. However, sound syntheses of real and modified pianos are used as reference tests to assess our main objective, namely to reflect faithfully structural modifications in the produced sound, and thus to make this tool helpful for both instrument makers and researchers of the musical acoustics community.

19

Pluta, Marek Janusz, Daniel Tokarczyk and Jerzy Wiciak. "Application of a Musical Robot for Adjusting Guitar String Re-Excitation Parameters in Sound Synthesis". Applied Sciences 12, no. 3 (February 5, 2022): 1659. http://dx.doi.org/10.3390/app12031659.

Abstract:
Sound synthesis methods based on physical modelling of acoustic instruments depend on data that require measurements and recordings. If a musical instrument is operated by a human, a difficulty in filtering out variability is introduced due to a lack of repeatability in excitation parameters, or in varying physical contact between a musician and an instrument, resulting in the damping of vibrating elements. Musical robots can solve this problem. Their repeatability and controllability allows studying even subtle phenomena. This paper presents an application of a robot in studying the re-excitation of a string in an acoustic guitar. The obtained results are used to improve a simple synthesis model of a vibrating string, based on the finite difference method. The improved model reproduced the observed phenomena, such as the alteration of the signal spectrum, damping, and ringing, all of which can be perceived by a human, and add up to the final sound of an instrument. Moreover, as it was demonstrated by using two different string plucking mechanisms, musical robots can be redesigned to study other sound production phenomena and, thus, to further improve the behaviours of and sounds produced by models applied in sound synthesis.
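
A minimal sketch of the kind of finite-difference string model mentioned above: an explicit scheme for the 1D wave equation with fixed ends, a triangular pluck, and crude frequency-independent damping. It omits stiffness, the re-excitation mechanism studied in the paper, and the instrument body.

```python
import numpy as np

fs = 44100
dur = 1.0
f0 = 196.0                              # target fundamental (G3)
c_lambda = 0.95                         # Courant number (<= 1 for stability)
N = int(c_lambda * fs / (2 * f0))       # segments so the fundamental is roughly f0
loss = 0.9999                           # simple uniform damping per time step

u_prev = np.zeros(N)
u = np.zeros(N)
# Pluck: triangular initial displacement, zero initial velocity.
pluck = int(0.2 * N)
u[:pluck] = np.linspace(0.0, 1.0, pluck)
u[pluck:] = np.linspace(1.0, 0.0, N - pluck)
u_prev[:] = u

out = np.zeros(int(fs * dur))
pickup = int(0.3 * N)                   # read-out point along the string
for n in range(len(out)):
    u_next = np.zeros(N)                # ends stay clamped at zero
    u_next[1:-1] = (2*u[1:-1] - u_prev[1:-1]
                    + c_lambda**2 * (u[2:] - 2*u[1:-1] + u[:-2]))
    u_next *= loss
    u_prev, u = u, u_next
    out[n] = u[pickup]
out /= np.max(np.abs(out))
```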

20

Narváez, Pedro and Winston S. Percybrooks. "Synthesis of Normal Heart Sounds Using Generative Adversarial Networks and Empirical Wavelet Transform". Applied Sciences 10, no. 19 (October 8, 2020): 7003. http://dx.doi.org/10.3390/app10197003.

Abstract:
Currently, there are many works in the literature focused on the analysis of heart sounds, specifically on the development of intelligent systems for the classification of normal and abnormal heart sounds. However, the available heart sound databases are not yet large enough to train generalized machine learning models. Therefore, there is interest in the development of algorithms capable of generating heart sounds that could augment current databases. In this article, we propose a model based on generative adversarial networks (GANs) to generate normal synthetic heart sounds. Additionally, a denoising algorithm is implemented using the empirical wavelet transform (EWT), allowing a decrease in the number of epochs and the computational cost that the GAN model requires. A distortion metric (mel–cepstral distortion) was used to objectively assess the quality of synthetic heart sounds. The proposed method was favorably compared with a mathematical model that is based on the morphology of the phonocardiography (PCG) signal published as the state of the art. Additionally, different heart sound classification models proposed as state-of-the-art were also used to test the performance of such models when the GAN-generated synthetic signals were used as test dataset. In this experiment, good accuracy results were obtained with most of the implemented models, suggesting that the GAN-generated sounds correctly capture the characteristics of natural heart sounds.

21

Rhys, Paul. "Smart Interfaces for Granular Synthesis of Sound by Fractal Organization". Computer Music Journal 40, no. 3 (September 2016): 58–67. http://dx.doi.org/10.1162/comj_a_00374.

Abstract:
This article describes software for granular synthesis of sound. The software features a graphical interface that enables easy creation and modification of sound clouds by deterministic fractal organization. Output sound clouds exist in multidimensional parameter–time space, and are constructed as a micropolyphony of statements of a single input melody or group of notes. The approach described here is an effective alternative to statistical methods, creating sounds with vitality and interest over a range of time scales. Standard techniques are used for the creation of individual grains. Innovation is demonstrated in the particular approach to fractal organization of the sound cloud and in the design of a smart interface to effect easy control of cloud morphology. The interface provides for intuitive control and reorganization of large amounts of data.

22

Valentin, Olivier, Pierre Grandjean, Clément Girin, Philippe-Aubert Gauthier, Alain Berry and Étienne Parizet. "Influence of sound spatial reproduction method on the detectability of reversing alarms in laboratory conditions". Acta Acustica 7 (2023): 9. http://dx.doi.org/10.1051/aacus/2023002.

Abstract:
This paper investigates the ability of binaural recording and reproduction to be used for measuring the detectability of reversing alarms in laboratory experiments. A complex and repeatable scenario was created using a wave-field synthesis (WFS) system and in-situ recordings in a lime mine. The reproduced sound field was further recorded with a dummy head. Participants were asked to perform a visual task (target tracking) while detecting two types of reversing alarms (tonal and broadband) mimicking an approaching vehicle. The experiment was conducted twice: at the center of a WFS array and in a sound-proof booth, using binaural recordings presented over headphones. Results showed that the detection times measured using binaural listening were significantly different from those measured in a fully immersive sound field reproduction. These differences were also greater with tonal sounds compared to broadband sounds. This study shows the limitations of the binaural technique for such applications.

23

Murphy, Emma, Mathieu Lagrange, Gary Scavone, Philippe Depalle and Catherine Guastavino. "Perceptual Evaluation of Rolling Sound Synthesis". Acta Acustica united with Acustica 97, no. 5 (September 1, 2011): 840–51. http://dx.doi.org/10.3813/aaa.918464.

Abstract:
Three listening tests were conducted to perceptually evaluate different versions of a new real-time synthesis approach for sounds of sustained contact interactions. This study aims to identify the most effective algorithm to create a realistic sound for rolling objects. In Experiment 1 and 2, participants were asked to rate the extent to which 6 different versions sounded like rolling sounds. Subsequently, in Experiment 3, participants compared the 6 versions best rated in Experiment 1 and 2, to the original recordings. Results are presented in terms of both statistical analysis of the most effective synthesis algorithm and qualitative user comments. On methodological grounds, the comparison of Experiments 1, 2 and 3 highlights major differences between judgments collected in reference to the original recordings as opposed to judgments based on memory representations of rolling sounds.

24

Moroni, Artemis, Jônatas Manzolli, Fernando Von Zuben and Ricardo Gudwin. "Vox Populi: An Interactive Evolutionary System for Algorithmic Music Composition". Leonardo Music Journal 10 (December 2000): 49–54. http://dx.doi.org/10.1162/096112100570602.

Abstract:
While recent techniques of digital sound synthesis have put numerous new sounds on the musician's desktop, several artificial-intelligence (AI) techniques have also been applied to algorithmic composition. This article introduces Vox Populi, a system based on evolutionary computation techniques for composing music in real time. In Vox Populi, a population of chords codified according to MIDI protocol evolves through the application of genetic algorithms to maximize a fitness criterion based on physical factors relevant to music. Graphical controls allow the user to manipulate fitness and sound attributes.

25

MIRANDA, EDUARDO RECK, JAMES CORREA and JOE WRIGHT. "Categorising complex dynamic sounds". Organised Sound 5, no. 2 (August 2000): 95–102. http://dx.doi.org/10.1017/s1355771800002065.

Abstract:
Chaosynth is a cellular automata-based granular synthesis system whose capabilities for producing unusual complex dynamic sounds are limitless. However, due to its newness and flexibility, potential users have found it very hard to explore its possibilities as there is no clear referential framework to hold on to when designing sounds. Standard software synthesis systems take this framework for granted by adopting a taxonomy for synthesis instruments that has been inherited from the acoustic musical instruments tradition, i.e. woodwind, brass, string, percussion, etc. Sadly, the most interesting synthesised sounds that these systems can produce are simply referred to as effects. This scheme clearly does not meet the demands of more innovative software synthesizers. In order to alleviate this problem, we propose an alternative taxonomy for Chaosynth timbres. The paper begins with a brief introduction to the basic functioning of Chaosynth. It then presents our proposed taxonomy and ends with concluding comments. A number of examples are provided on this volume's Organised Sound CD.

26

Sutanti, Putu Asri Sri and Gst Ayu Vida Mastrika Giri. "Low Filtering Method for Noise Reduction at Text to Speech Application". JELIKU (Jurnal Elektronik Ilmu Komputer Udayana) 8, no. 3 (January 25, 2020): 339. http://dx.doi.org/10.24843/jlk.2020.v08.i03.p17.

Abstract:
Technological developments have greatly encouraged various researchers to develop several studies in the IT field. One branch of research in the IT field is sound synthesis. Some text-to-speech applications are usually quite difficult to build and are less flexible in replacing existing types of sound. In addition, sometimes the accent or the way someone speaks is not well represented, so it is quite difficult to build a text-to-speech application using a desired sound such as the user's voice or other sounds. Given these problems, this research proposes an application that can convert text into speech in a way that is more flexible and in accordance with the wishes of the user. From the testing that has been done, this system has an accuracy of 70%.

27

Collins, Nick. "Experiments with a new customisable interactive evolution framework". Organised Sound 7, no. 3 (December 2002): 267–73. http://dx.doi.org/10.1017/s1355771802003060.

Abstract:
This article collates results from a number of applications of interactive evolution as a sound designer's tool for exploring the parameter spaces of synthesis algorithms. Experiments consider reverberation algorithms, wavetable synthesis, synthesis of percussive sounds and an analytical solution of the stiff string. These projects share the property of being difficult to probe by trial and error sampling of the parameter space. Interactive evolution formed the guidance principle for what quickly proved a more effective search through the multitude of parameter settings. The research was supported by building an interactive genetic algorithm library in the audio programming language SuperCollider. This library provided reusable code for the user interfaces and the underlying genetic algorithm itself, whilst preserving enough generality to support the framework of each individual investigation. Whilst there is nothing new in the use of genetic algorithms in sound synthesis tasks, the experiments conducted here investigate new applications such as reverb design and an analytical stiff string model not previously encountered in the literature. Further, the focus of this work is now shifting more into algorithmic composition research, where the generative algorithms are less clear-cut than those of these experiments. Lessons learned from the deployment of interactive evolution in sound design problems are very useful as a reference for the extension of the problem set.

28

Beauchamp, James. "Perceptually Correlated Parameters of Musical Instrument Tones". Archives of Acoustics 36, no. 2 (May 1, 2011): 225–38. http://dx.doi.org/10.2478/v10168-011-0018-8.

Abstract:
In Western music culture instruments have been developed according to unique instrument acoustical features based on types of excitation, resonance, and radiation. These include the woodwind, brass, bowed and plucked string, and percussion families of instruments. On the other hand, instrument performance depends on musical training, and music listening depends on perception of instrument output. Since musical signals are easier to understand in the frequency domain than the time domain, much effort has been made to perform spectral analysis and extract salient parameters, such as spectral centroids, in order to create simplified synthesis models for musical instrument sound synthesis. Moreover, perceptual tests have been made to determine the relative importance of various parameters, such as spectral centroid variation, spectral incoherence, and spectral irregularity. It turns out that the importance of particular parameters depends on both their strengths within musical sounds as well as the robustness of their effect on perception. Methods that the author and his colleagues have used to explore timbre perception are: 1) discrimination of parameter reduction or elimination; 2) dissimilarity judgments together with multidimensional scaling; 3) informal listening to sound morphing examples. This paper discusses ramifications of this work for sound synthesis and timbre transposition.

29

Vysotskaya, Marianna S. "On the Problem of Notation in Mixed Type Composition: From the Experience of Marco Stroppa". Vestnik of Saint Petersburg University. Arts 12, no. 3 (2022): 414–31. http://dx.doi.org/10.21638/spbu15.2022.301.

Abstract:
The evolutionary processes in the field of musical notation, which characterize the second half of the 20th century, reflected the main trend in the individualization of styles. The large-scale development of new instrumental techniques and technologies for synthesis and electronic sound processing stimulated the further development of a musical notation system as one of the means of visualizing a musical idea. Marco Stroppa, one of the leading composers of modern Europe, made significant developments in the field of graphic fixation of both new timbres and various aspects of the interaction of acoustic and electronic instruments within the framework of a mixed type composition. The interpenetration of the techniques of sound synthesis and instrumental writing as a special subject of Stroppa’s interest is reflected not only in his musical work, but also in his texts. The musicological literature in Russian about Stroppa is represented by the only article by the author of this publication, in which, for the first time, a number of aspects of Stroppa’s compositional method were analyzed using the example of the triptych “Traiettoria” for piano and computer-generated sounds, and the history of the birth of the piece was recreated. This publication focuses on the problem of notation in a mixed type composition and introduces into scientific use Stroppa’s compositional developments, implemented by him in the score “Traiettoria… deviate”, the first part of the “Traiettoria” cycle. The symbolic graphics of electronic sounds (“sound ‘objects’”) are considered, based on the composer’s commentary, such essential concepts for his workshop as a sound complex-“code”, temporal and frequency “staves” are characterized, examples of dynamic levels notation, pitch indication are presented as well as the schemes of the spatial disposition of the “synthetic orchestra” — a complex of multiple sound sources that organize the “spatial polyphony” of the piece.

30

Conan, Simon, Etienne Thoret, Mitsuko Aramaki, Olivier Derrien, Charles Gondre, Sølvi Ystad and Richard Kronland-Martinet. "An Intuitive Synthesizer of Continuous-Interaction Sounds: Rubbing, Scratching, and Rolling". Computer Music Journal 38, no. 4 (December 2014): 24–37. http://dx.doi.org/10.1162/comj_a_00266.

Abstract:
In this article, we propose a control strategy for synthesized continuous-interaction sounds. The framework of our research is based on the action–object paradigm that describes the sound as the result of an action on an object and that presumes the existence of sound invariants (i.e., perceptually relevant signal morphologies that carry information about the action's or the object's attributes). Auditory cues are investigated here for the evocations of rubbing, scratching, and rolling interactions. A generic sound-synthesis model that simulates these interactions is detailed. We then suggest an intuitive control strategy that enables users to navigate continuously from one interaction to another in an “action space,” thereby offering the possibility to simulate morphed interactions—for instance, ones that morph between rubbing and rolling.

31

Harrison, Reginald Langford, Stefan Bilbao, James Perry and Trevor Wishart. "An Environment for Physical Modeling of Articulated Brass Instruments". Computer Music Journal 39, no. 4 (December 2015): 80–95. http://dx.doi.org/10.1162/comj_a_00332.

Abstract:
This article presents a synthesis environment for physical modeling of valved brass instrument sounds. Synthesis is performed using finite-difference time-domain methods that allow for flexible simulation of time-varying systems. Users have control over the instrument configuration as well as player parameters, such as mouth pressure, lip dynamics, and valve depressions, which can be varied over the duration of a gesture. This article introduces the model used in the environment, the development of code from prototyping in MATLAB and optimization in C, and the incorporation of the executable file in the Sound Loom interface of the Composers Desktop Project. Planned additions to the environment are then discussed. The environment binaries are available to download online along with example sounds and input files.

32

Ohta, Shinichi. "Electronic musical apparatus for synthesizing vocal sounds using formant sound synthesis techniques". Journal of the Acoustical Society of America 103, no. 6 (June 1998): 3138. http://dx.doi.org/10.1121/1.423031.

33

Song, Eun-Sung, Young-Jun Lim and Bongju Kim. "A User-Specific Approach for Comfortable Application of Advanced 3D CAD/CAM Technique in Dental Environments Using the Harmonic Series Noise Model". Applied Sciences 9, no. 20 (October 14, 2019): 4307. http://dx.doi.org/10.3390/app9204307.

Abstract:
Recently, there has been a focus on improving the user’s emotional state by providing high-quality sound beyond noise reduction against industrial product noise. Three-dimensional computer aided design and computer-aided manufacturing (3D CAD/CAM) dental milling machines are a major source of industrial product noise in the dental environment. Here, we propose a noise-control method to improve the sound quality in the dental environment. Our main goals are to analyze the acoustic characteristics of the sounds generated from the dental milling machine, to control the noise by active noise control, and to improve the sound quality of the residual noise by synthesized new sound. In our previous study, we demonstrated noise reduction in dental milling machines through tactile transducers. To improve the sound quality on residual noise, we performed frequency analysis, and synthesized sound similarly as musical instruments, using the harmonic series noise model. Our data suggest that noise improvement through synthesis may prove to be a useful tool in the development of dental devices.

34

Klatzky, Roberta L., Dinesh K. Pai and Eric P. Krotkov. "Perception of Material from Contact Sounds". Presence: Teleoperators and Virtual Environments 9, no. 4 (August 2000): 399–410. http://dx.doi.org/10.1162/105474600566907.

Abstract:
Contact sounds can provide important perceptual cues in virtual environments. We investigated the relation between material perception and variables that govern the synthesis of contact sounds. A shape-invariant, auditory-decay parameter was a powerful determinant of the perceived material of an object. Subjects judged the similarity of synthesized sounds with respect to material (Experiment 1 and 2) or length (Experiment 3). The sounds corresponded to modal frequencies of clamped bars struck at an intermediate point, and they varied in fundamental frequency and frequency-dependent rate of decay. The latter parameter has been proposed as reflecting a shape-invariant material property: damping. Differences between sounds in both decay and frequency affected similarity judgments (magnitude of similarity and judgment duration), with decay playing a substantially larger role. Experiment 2, which varied the initial sound amplitude, showed that decay rate—rather than total energy or sound duration—was the critical factor in determining similarity. Experiment 3 demonstrated that similarity judgments in the first two studies were specific to instructions to judge material. Experiment 4, in which subjects assigned the sounds to one of four material categories, showed an influence of frequency and decay, but confirmed the greater importance of decay. Decay parameters associated with each category were estimated and found to correlate with physical measures of damping. The results support the use of a simplified model of material in virtual auditory environments.
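
A hedged sketch of the stimulus construction implied by the abstract: a struck bar is approximated by a sum of decaying modes in which each mode's decay rate is proportional to a material damping parameter times its frequency, so changing that single shape-invariant parameter moves the percept between "wooden" and "metallic". Mode ratios, weights, and damping values are invented, not the experiment's.

```python
import numpy as np

fs = 44100
dur = 1.5
t = np.arange(int(fs * dur)) / fs

def struck_bar(f_fund, damping, amp=1.0):
    """Sum of exponentially decaying modes of a struck bar.

    `damping` plays the role of the shape-invariant material parameter:
    each mode decays at a rate proportional to damping * frequency.
    The mode frequency ratios below are approximate clamped-free bar ratios.
    """
    ratios = [1.0, 6.267, 17.55, 34.39]
    weights = [1.0, 0.5, 0.25, 0.125]
    out = np.zeros_like(t)
    for r, w in zip(ratios, weights):
        f = f_fund * r
        decay = np.exp(-damping * f * t)          # frequency-dependent decay
        out += w * decay * np.sin(2*np.pi*f*t)
    return amp * out / np.max(np.abs(out))

wood_like = struck_bar(440.0, damping=3e-3)       # fast decay -> dull, wooden
metal_like = struck_bar(440.0, damping=1e-4)      # slow decay -> ringing, metallic
```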

35

Andreu, Sergi and Monica Villanueva Aylagas. "Neural Synthesis of Sound Effects Using Flow-Based Deep Generative Models". Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment 18, no. 1 (October 11, 2022): 2–9. http://dx.doi.org/10.1609/aiide.v18i1.21941.

Abstract:
Creating variations of sound effects for video games is a time-consuming task that grows with the size and complexity of the games themselves. The process usually comprises recording source material and mixing different layers of sound to create sound effects that are perceived as diverse during gameplay. In this work, we present a method to generate controllable variations of sound effects that can be used in the creative process of sound designers. We adopt WaveFlow, a generative flow model that works directly on raw audio and has proven to perform well for speech synthesis. Using a lower-dimensional mel spectrogram as the conditioner allows both user controllability and a way for the network to generate more diversity. Additionally, it gives the model style transfer capabilities. We evaluate several models in terms of the quality and variability of the generated sounds using both quantitative and subjective evaluations. The results suggest that there is a trade-off between quality and diversity. Nevertheless, our method achieves a quality level similar to that of the training set while generating perceivable variations according to a perceptual study that includes game audio experts.

36

BROWN, RICHARD H. "The Spirit inside Each Object: John Cage, Oskar Fischinger, and “The Future of Music”". Journal of the Society for American Music 6, no. 1 (February 2012): 83–113. http://dx.doi.org/10.1017/s1752196311000411.

Abstract:
Late in his career, John Cage often recalled his brief interaction with German abstract animator Oskar Fischinger in 1937 as the primary impetus for his early percussion works. Further examination of this connection reveals an important technological foundation to Cage's call for the expansion of musical resources. Fischinger's experiments with film phonography (the manipulation of the optical portion of sound film to synthesize sounds) mirrored contemporaneous refinements in recording and synthesis technology of electron beam tubes for film and television. New documentation on Cage's early career in Los Angeles, including research Cage conducted for his father John Cage, Sr.'s patents, explains his interest in these technologies. Finally, an examination of the sources of Cage's 1940 essay "The Future of Music: Credo" reveals the extent of Cage's knowledge of early sound synthesis and recording technologies and presents a more nuanced understanding of the historical relevance and origins of this document.

37

Lazzarini, Victor and Joseph Timoney. "Synthesis of Resonance by Nonlinear Distortion Methods". Computer Music Journal 37, no. 1 (March 2013): 35–43. http://dx.doi.org/10.1162/comj_a_00160.

Abstract:
This article explores techniques for synthesizing resonant sounds using the principle of nonlinear distortion. These methods can be grouped under the heading of “subtractive synthesis without filters,” the case for which has been made in the literature. Starting with a simple resonator model, this article looks at how the source-modifier arrangement can be reconstructed as a heterodyne structure made of a sinusoidal carrier and a complex modulator. From this, we examine how the modulator signal can be created with nonlinear distortion methods, looking at the classic case of phase-aligned formant synthesis and then our own modified frequency-modulation technique. The article concludes with some application examples of this sound-synthesis principle.
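
A minimal sketch in the spirit of an exponential-of-cosine (modified-FM-style) distortion: a sinusoidal carrier at a harmonic of the fundamental is multiplied by a nonlinearly distorted modulator, producing a formant centred on the carrier whose bandwidth grows with the distortion index. The exact formulation used in the paper may differ, and all parameter values below are invented.

```python
import numpy as np

fs = 44100
dur = 1.0
t = np.arange(int(fs * dur)) / fs

f0 = 110.0            # fundamental (modulator) frequency
formant = 880.0       # desired formant centre
k = 6.0               # distortion index: larger k -> wider formant band

# Carrier locked to the harmonic of f0 nearest the formant centre, so the
# output stays harmonic and aligned with the fundamental.
fc = round(formant / f0) * f0

# Heterodyne structure: sinusoidal carrier times a nonlinearly distorted
# (exponential-of-cosine) modulator, normalised to a peak of 1 at k.
modulator = np.exp(k * (np.cos(2*np.pi*f0*t) - 1.0))
carrier = np.cos(2*np.pi*fc*t)
voice = modulator * carrier
voice /= np.max(np.abs(voice))
```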

38

CHADABE, JOEL. "Look around and get engaged". Organised Sound 9, no. 3 (December 2004): 315–16. http://dx.doi.org/10.1017/s1355771804000524.

Abstract:
The term ‘computer music’, for those of us who lived through the beginnings, became meaningful during the pioneering period from the late 1950s through the 1970s. It was a positive term. It identified a specific genre of music, a major effort in musical experiment, research and exploration, a wealth of new sound-generating techniques, and a large palette of new sounds. Jean-Claude Risset, in Inharmonique, for example, used additive synthesis to extend principles of tonality into the microworld of spectral progression. John Chowning, in Stria, used the Golden Mean to define FM frequency ratios. For many of us, these were interesting ideas and beautiful sounds.

39

Pickering, Chris. "Sound Choices". Electric and Hybrid Vehicle Technology International 2018, no. 1 (July 2018): 60–66. http://dx.doi.org/10.12968/s1467-5560(22)60324-5.

40

Gołaś, Andrzej and Roman Filipek. "Digital Synthesis of Sound Generated by Tibetan Bowls and Bells". Archives of Acoustics 41, no. 1 (March 1, 2016): 139–50. http://dx.doi.org/10.1515/aoa-2016-0014.

Abstract:
The aim of this paper is to present methods of digitally synthesising the sound generated by vibroacoustic systems with distributed parameters. A general algorithm was developed to synthesise the sounds of selected musical instruments with an axisymmetrical shape and impact excitation, i.e., Tibetan bowls and bells. A coupled mechanical-acoustic field described by partial differential equations was discretized by using the Finite Element Method (FEM) implemented in the ANSYS package. The presented synthesis method is original due to the fact that the determination of the system response in the time domain to the pulse (impact) excitation is based on the numerical calculation of the convolution of the forcing function and impulse response of the system. This was calculated as an inverse Fourier transform of the system's spectral transfer function. The synthesiser allows for obtaining a sound signal with the assumed, expected parameters by tuning the resonance frequencies which exist in the spectrum of the generated sound. This is accomplished, based on the Design of Experiment (DOE) theory, by creating a meta-model which contains information on its response surfaces regarding the influence of the design parameters. The synthesis resulted in a sound pressure signal at selected points in space surrounding the instrument which is consistent with the signal generated by the actual instruments, and the results obtained can improve them.
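
A hedged sketch of the synthesis chain described above, with an invented few-mode transfer function standing in for the FEM result: the impulse response is obtained as the inverse Fourier transform of the spectral transfer function and then convolved numerically with a short impact-force pulse.

```python
import numpy as np

fs = 44100
n_fft = 2**16
freqs = np.fft.rfftfreq(n_fft, d=1.0/fs)

# Stand-in spectral transfer function: a few resonances of a struck bowl.
# Frequencies, damping widths and gains are invented; an FEM model (or a
# DOE meta-model over design parameters) would supply them in practice.
modes = [(392.0, 2.0, 1.0), (1046.0, 3.0, 0.6), (1930.0, 5.0, 0.4), (3120.0, 8.0, 0.2)]
H = np.zeros_like(freqs, dtype=complex)
for fm, bw, g in modes:
    # Single-resonance frequency response (mobility-like shape).
    H += g * (1j * freqs) / ((fm**2 - freqs**2) + 1j * bw * freqs)

# Impulse response of the system: inverse Fourier transform of H.
h = np.fft.irfft(H, n=n_fft)

# Impact excitation: a short raised-cosine force pulse (~2 ms).
force = np.hanning(int(0.002 * fs))

# System response: numerical convolution of forcing function and impulse response.
sound = np.convolve(force, h)
sound /= np.max(np.abs(sound))
```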

41

Petiot, Jean-François, Killian Legeay and Mathieu Lagrange. "Optimization of the Sound of Electric Vehicles According to Unpleasantness and Detectability". Proceedings of the Design Society: International Conference on Engineering Design 1, no. 1 (July 2019): 3949–58. http://dx.doi.org/10.1017/dsi.2019.402.

Abstract:
Electric Vehicles (EVs) are very quiet at low speed, which can be hazardous for pedestrians. It is necessary to add warning sounds, but these can represent an annoyance if they are poorly designed. On the other hand, they may not be detectable enough because of the masking effect of the background noise. In this paper, we propose a method for the design of EV sounds that takes into account detectability and unpleasantness at the same time. It is based on user tests and implements Interactive Genetic Algorithms (IGA) for the optimization of the sounds. Synthesized EV sounds, based on additive synthesis and filtering, are presented to a set of participants during a hearing test. An experimental protocol is proposed for the assessment of the detectability and the unpleasantness of the EV sounds. After the convergence of the method, sounds obtained with the IGA are compared to different sound design proposals. Results show that the quality of the sounds designed by the IGA method is significantly higher than that of the design proposals, validating the relevance of the approach.

42

Yoga, I. Putu Harta and Gst Ayu Vida Mastrika Giri, S.Kom., M.Cs. "Virtual Hybrid Synthesizer Application". JELIKU (Jurnal Elektronik Ilmu Komputer Udayana) 8, no. 3 (January 27, 2020): 357. http://dx.doi.org/10.24843/jlk.2020.v08.i03.p19.

Abstract:
Hybrid synthesizers can perform synthesis with digital or analog signals on a hardware device. In this article, hybrid means a virtual digital synthesizer that combines several synthesis methods. The methods used are additive synthesis, subtractive synthesis, and amplitude modulation (AM). The initial signal is an oscillator producing sine, square, and sawtooth waves. This virtual synthesizer produces sound whose fundamental frequency matches the fundamental frequency of the notes. Keywords: Hybrid Synthesizer, Additive, Subtractive, Amplitude Modulation
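
A minimal sketch of such a hybrid: sine, square, and sawtooth oscillators feed an additive stage (a sum of harmonics), a subtractive stage (a saw wave through a one-pole low-pass), and an AM stage, which are then mixed. The oscillator shapes follow the abstract; all other values are invented.

```python
import numpy as np

fs = 44100
dur = 1.0
t = np.arange(int(fs * dur)) / fs

def osc(shape, f):
    """Basic oscillator: sine, square or sawtooth."""
    ph = (f * t) % 1.0
    if shape == "sine":
        return np.sin(2*np.pi*ph)
    if shape == "square":
        return np.sign(np.sin(2*np.pi*ph))
    return 2.0*ph - 1.0                       # sawtooth

f0 = 261.63                                   # C4

# Additive stage: sum of harmonics of the note's fundamental.
additive = sum(osc("sine", f0*k) / k for k in range(1, 8))
additive /= np.max(np.abs(additive))

# Subtractive stage: a bright saw wave through a one-pole low-pass filter.
saw = osc("saw", f0)
alpha = 0.05                                  # crude cutoff control
subtractive = np.zeros_like(saw)
for i in range(1, len(saw)):
    subtractive[i] = subtractive[i-1] + alpha * (saw[i] - subtractive[i-1])

# AM stage: a sine carrier amplitude-modulated by a slow sine LFO.
am = (0.5 + 0.5*osc("sine", 4.0)) * osc("sine", f0)

hybrid = additive + subtractive + am
hybrid /= np.max(np.abs(hybrid))
```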

43

Zhang, Jiaohua. "Piano sound formation as a parameter of performing intonation". Problems of Interaction Between Arts, Pedagogy and the Theory and Practice of Education 57, no. 57 (March 10, 2020): 259–69. http://dx.doi.org/10.34064/khnum1-57.16.

Abstract:
Determining the specifics of sound production on the piano makes it possible to deepen the understanding of piano intonation, which is inseparable from the artistic concept, a choice of musical expressive means, methods of forming and reproducing sounds. The purpose of the article is to study the mechanical-acoustic features and properties of the piano in a holistic relationship with the organization of musical space and artistic means of performance, which were formed in the process of musical practice. Starting from B. Asafiev’s dialectical intonation theory, the methodology of the work reaches the systemic level, including the methods of historical, cultural and comparative research, general scientific logical methods of analysis, synthesis, induction and deduction. Realizing of the objectives of the article is carried out through the study of playing techniques and all the palette of piano touché used in their practice by pianists, as well as the factors that influence the formation of piano sound. It is claimed that conscious piano intonement, being the sound embodiment of musical thought, finds its direct expression through the specifics of sound formation on the studied instrument. The latter is inextricably linked with sound production techniques, dynamics, and pedaling.
44

Slater, Dan. "Chaotic Sound Synthesis". Computer Music Journal 22, no. 2 (1998): 12. http://dx.doi.org/10.2307/3680960.

45

Cirio, Gabriel, Dingzeyu Li, Eitan Grinspun, Miguel A. Otaduy, and Changxi Zheng. "Crumpling sound synthesis". ACM Transactions on Graphics 35, no. 6 (November 11, 2016): 1–11. http://dx.doi.org/10.1145/2980179.2982400.

46

Depalle, Philippe, and Xavier Rodet. "Sound synthesis process". Journal of the Acoustical Society of America 98, no. 4 (October 1995): 1838. http://dx.doi.org/10.1121/1.413368.

47

Woodhouse, Jim, David Politzer, and Hossein Mansour. "Acoustics of the banjo: measurements and sound synthesis". Acta Acustica 5 (2021): 15. http://dx.doi.org/10.1051/aacus/2021009.

Abstract
Measurements of vibrational response of an American 5-string banjo and of the sounds of played notes on the instrument are presented, and contrasted with corresponding results for a steel-string guitar. A synthesis model, fine-tuned using information from the measurements, has been used to investigate what acoustical features are necessary to produce recognisable banjo-like sound, and to explore the perceptual salience of a wide range of design modifications. Recognisable banjo sound seems to depend on the pattern of decay rates of “string modes”, the loudness magnitude and profile, and a transient contribution to each played note from the “body modes”. A formant-like feature, peaking around 500–800 Hz on the banjo tested, is found to play a key role. At higher frequencies the dynamic behaviour of the bridge produces additional formant-like features, reminiscent of the “bridge hill” of the violin, and these also produce clear perceptual effects.
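The sketch below is not the paper's synthesis model, but it illustrates two of the ingredients highlighted in the abstract under assumed values: a bank of exponentially decaying "string modes" and a two-pole resonator standing in for the formant-like peak around 500–800 Hz.

    import numpy as np
    from scipy.signal import lfilter

    FS = 44100  # sample rate in Hz

    def decaying_modes(freqs, decays, amps, duration=2.0):
        """Sum of exponentially decaying sinusoids, a crude stand-in for string modes."""
        t = np.arange(int(duration * FS)) / FS
        return sum(a * np.exp(-t / tau) * np.sin(2 * np.pi * f * t)
                   for f, tau, a in zip(freqs, decays, amps))

    def formant(x, centre_hz=650.0, r=0.995):
        """Two-pole resonator producing a formant-like peak near centre_hz."""
        theta = 2 * np.pi * centre_hz / FS
        return lfilter([1.0 - r], [1.0, -2.0 * r * np.cos(theta), r * r], x)

    # Hypothetical plucked note: eight harmonics of D3 whose decay times and
    # amplitudes fall off with harmonic number.
    harmonics = [147.0 * k for k in range(1, 9)]
    note = formant(decaying_modes(harmonics,
                                  decays=[0.8 / k for k in range(1, 9)],
                                  amps=[1.0 / k for k in range(1, 9)]))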
48

Scheuregger, Oliver, Jens Hjortkjær, and Torsten Dau. "Identification and Discrimination of Sound Textures in Hearing-Impaired and Older Listeners". Trends in Hearing 25 (January 2021): 233121652110656. http://dx.doi.org/10.1177/23312165211065608.

Abstract
Sound textures are a broad class of sounds defined by their homogeneous temporal structure. It has been suggested that sound texture perception is mediated by time-averaged summary statistics measured from early stages of the auditory system. The ability of young normal-hearing (NH) listeners to identify synthetic sound textures increases as the statistics of the synthetic texture approach those of its real-world counterpart. In sound texture discrimination, young NH listeners utilize the fine temporal stimulus information for short-duration stimuli, whereas they switch to a time-averaged statistical representation as the stimulus’ duration increases. The present study investigated how younger and older listeners with a sensorineural hearing impairment perform in the corresponding texture identification and discrimination tasks in which the stimuli were amplified to compensate for the individual listeners’ loss of audibility. In both hearing impaired (HI) listeners and NH controls, sound texture identification performance increased as the number of statistics imposed during the synthesis stage increased, but hearing impairment was accompanied by a significant reduction in overall identification accuracy. Sound texture discrimination performance was measured across listener groups categorized by age and hearing loss. Sound texture discrimination performance was unaffected by hearing loss at all excerpt durations. The older listeners’ sound texture and exemplar discrimination performance decreased for signals of short excerpt duration, with older HI listeners performing better than older NH listeners. The results suggest that the time-averaged statistic representations of sound textures provide listeners with cues which are robust to the effects of age and sensorineural hearing loss.
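To make the notion of time-averaged summary statistics concrete, the sketch below computes a few per-band envelope statistics (mean, variance, skewness) for a signal. The filterbank and the chosen statistics are a simplified stand-in for the texture model referenced in the abstract, not the analysis used in the study.

    import numpy as np
    from scipy.signal import butter, sosfilt, hilbert

    FS = 44100  # sample rate in Hz

    def band_envelope(x, lo, hi):
        """Band-pass the signal and take the magnitude of its analytic signal."""
        sos = butter(4, [lo / (FS / 2), hi / (FS / 2)], btype="band", output="sos")
        return np.abs(hilbert(sosfilt(sos, x)))

    def summary_statistics(x, bands=((100, 400), (400, 1600), (1600, 6400))):
        """Time-averaged envelope mean, variance and skewness in each band."""
        stats = []
        for lo, hi in bands:
            env = band_envelope(x, lo, hi)
            m, v = env.mean(), env.var()
            skew = ((env - m) ** 3).mean() / (v ** 1.5 + 1e-12)
            stats.append((m, v, skew))
        return stats

    # A texture synthesizer would measure statistics like these from a recording
    # and impose them on noise; here we only measure them on one second of noise.
    print(summary_statistics(np.random.randn(FS)))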
49

Miranda, Eduardo Reck, Peter Thomas, and Paulo Vitor Itaboraí. "Q1Synth: A Quantum Computer Musical Instrument". Applied Sciences 13, no. 4 (February 13, 2023): 2386. http://dx.doi.org/10.3390/app13042386.

Abstract
This paper introduces Q1Synth, an unprecedented musical instrument that produces sounds from (a) quantum state vectors representing the properties of a qubit, and (b) its measurements. The instrument is presented on a computer screen (or mobile device, such as a tablet or smartphone) as a Bloch sphere, which is a visual representation of a qubit. The performer plays the instrument by rotating this sphere using a mouse. Alternatively, a gesture controller can be used, e.g., a VR glove. While the sphere is rotated, a continuously changing sound is produced. The instrument has a ‘measure key’. When the performer activates this key, the instrument generates a program (also known as a quantum circuit) to create the current state vector. Then, it sends the program to a quantum computer over the cloud for processing, that is, measuring, in quantum computing terminology. The computer subsequently returns the measurement, which is also rendered into sound. Currently, Q1Synth uses three different techniques to make sounds: frequency modulation (FM), subtractive synthesis, and granular synthesis. The paper explains how Q1Synth works and details its implementation. A setup developed for a musical performance, Spinnings, with three networked Q1Synth instruments is also reported. Q1Synth and Spinnings are examples of how creative practices can open the doors to new application pathways for quantum computing technology. Additionally, they illustrate how such emerging technology is leading to new approaches to musical instrument design and musical creativity.
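The mapping below is purely illustrative and is not Q1Synth's actual implementation: it sketches one way the Bloch-sphere angles of a qubit state could drive an FM voice, and how measurement outcomes could be simulated locally instead of being returned by a cloud quantum computer.

    import numpy as np

    FS = 44100  # sample rate in Hz

    def bloch_to_fm(theta, phi, duration=1.0):
        """Hypothetical mapping from Bloch-sphere angles to a simple FM tone."""
        carrier = 220.0 + 440.0 * theta / np.pi      # polar angle sets the carrier
        ratio = 1.0 + 3.0 * phi / (2 * np.pi)        # azimuth sets the modulator ratio
        index = 5.0 * np.sin(theta)                  # modulation depth
        t = np.arange(int(duration * FS)) / FS
        mod = np.sin(2 * np.pi * carrier * ratio * t)
        return np.sin(2 * np.pi * carrier * t + index * mod)

    def measure_qubit(theta, shots=1024, seed=0):
        """Simulated measurements of |psi> = cos(theta/2)|0> + e^{i*phi} sin(theta/2)|1>."""
        p_one = np.sin(theta / 2) ** 2
        return np.random.default_rng(seed).binomial(1, p_one, size=shots)

    tone = bloch_to_fm(np.pi / 3, np.pi / 4)   # continuous sound while rotating the sphere
    outcomes = measure_qubit(np.pi / 3)        # 0/1 outcomes that could also be sonified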
50

RODET, XAVIER. "SOUND AND MUSIC FROM CHUA'S CIRCUIT". Journal of Circuits, Systems and Computers 03, no. 01 (March 1993): 49–61. http://dx.doi.org/10.1142/s0218126693000058.

Abstract
Nonlinear Dynamics have been very inspiring for musicians, but have rarely been considered specifically for sound synthesis. We discuss here the signals produced by Chua's circuit from an acoustical and musical point of view. We have designed a real-time simulation of Chua's circuit on a digital workstation allowing for easy experimentation with the properties and behaviors of the circuit and of the sounds. A surprisingly rich and novel family of musical sounds has been obtained. The audification of the local properties of the parameter space allows for easy determination of very complex structures which could not be computed analytically and would not be simple to determine by other methods. Finally, we have found that the time-delayed Chua's circuit can model the basic behavior of an interesting class of musical instruments.
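As a rough sketch of the kind of real-time simulation described here, the code below integrates the dimensionless Chua equations with a simple forward-Euler step and uses the x state variable as an audio signal. The parameter values are the classic double-scroll settings, chosen for illustration rather than taken from the paper, and a smaller step or a higher-order integrator would give a more faithful trajectory.

    import numpy as np

    def chua_audio(n_samples=44100, dt=0.005, alpha=15.6, beta=28.0,
                   m0=-1.143, m1=-0.714):
        """Forward-Euler integration of Chua's circuit; x(t) is taken as audio."""
        x, y, z = 0.7, 0.0, 0.0
        out = np.empty(n_samples)
        for i in range(n_samples):
            fx = m1 * x + 0.5 * (m0 - m1) * (abs(x + 1.0) - abs(x - 1.0))  # diode curve
            dx = alpha * (y - x - fx)
            dy = x - y + z
            dz = -beta * y
            x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
            out[i] = x
        return out / np.max(np.abs(out))  # normalized; playback rate sets the pitch range

    signal = chua_audio()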