Academic literature on the topic "Sound synthesis"

Create an accurate citation in APA, MLA, Chicago, Harvard and other styles


Consult the topical lists of articles, books, theses, conference proceedings and other academic sources on the topic "Sound synthesis".

Next to each source in the reference list there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Sound synthesis"

1

KRONLAND-MARTINET, R., Ph. GUILLEMAIN and S. YSTAD. "Modelling of natural sounds by time–frequency and wavelet representations". Organised Sound 2, no. 3 (November 1997): 179–91. http://dx.doi.org/10.1017/s1355771898009030.

Full text
Abstract
Sound modelling is an important part of the analysis–synthesis process since it combines sound processing and algorithmic synthesis within the same formalism. Its aim is to make sound simulators by synthesis methods based on signal models or physical models, the parameters of which are directly extracted from the analysis of natural sounds. In this article the successive steps for making such systems are described. These are numerical synthesis and sound generation methods, analysis of natural sounds, particularly time–frequency and time–scale (wavelet) representations, extraction of pertinent parameters, and the determination of the correspondence between these parameters and those corresponding to the synthesis models. Additive synthesis, nonlinear synthesis, and waveguide synthesis are discussed.
APA, Harvard, Vancouver, ISO, and other styles
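The entry above describes an analysis-synthesis workflow in which parameters extracted from time-frequency or wavelet analysis of natural sounds drive synthesis models such as additive synthesis. As a minimal, hypothetical sketch of the additive resynthesis step only (the parameter arrays and frame rate below are stand-ins, not the authors' data), a Python/NumPy fragment could look like this:

```python
import numpy as np

def additive_resynthesis(freqs, amps, frame_rate, sample_rate=44100):
    """Resynthesize a sound from per-frame partial parameters.

    freqs, amps: arrays of shape (n_frames, n_partials), e.g. obtained from
    a time-frequency analysis of a natural sound (values here are assumed).
    """
    n_frames, n_partials = freqs.shape
    n_samples = int(n_frames * sample_rate / frame_rate)
    t_frames = np.arange(n_frames) / frame_rate
    t = np.arange(n_samples) / sample_rate
    out = np.zeros(n_samples)
    for p in range(n_partials):
        # Interpolate the analysis envelopes up to audio rate.
        f = np.interp(t, t_frames, freqs[:, p])
        a = np.interp(t, t_frames, amps[:, p])
        # Integrate instantaneous frequency to obtain the phase.
        phase = 2 * np.pi * np.cumsum(f) / sample_rate
        out += a * np.sin(phase)
    return out

# Toy usage: a 2-second, 3-partial tone with a slow decay.
frames = 200
freqs = np.tile([220.0, 440.0, 660.0], (frames, 1))
amps = np.outer(np.linspace(1.0, 0.1, frames), [0.5, 0.3, 0.2])
signal = additive_resynthesis(freqs, amps, frame_rate=100)
```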
2

Novkovic, Dragan, Marko Peljevic and Mateja Malinovic. "Synthesis and analysis of sounds developed from the Bose-Einstein condensate: Theory and experimental results". Muzikologija, no. 24 (2018): 95–109. http://dx.doi.org/10.2298/muz1824095n.

Full text
Abstract
Two seemingly incompatible worlds of quantum physics and acoustics have their meeting point in experiments with the Bose-Einstein Condensate. From the very beginning, the Quantum Music project was based on the idea of converting the acoustic phenomena of quantum physics that appear in experiments into the sound domain accessible to the human ear. The first part of this paper describes the experimental conditions in which these acoustic phenomena occur. The second part of the paper describes the process of sound synthesis which was used to generate final sounds. Sound synthesis was based on the use of two types of basic data: theoretical formulas and the results of experiments with the Bose-Einstein condensate. The process of sound synthesis based on theoretical equations was conducted following the principles of additive synthesis, realized using the Java Script and Max MSP software. The synthesis of sounds based on the results of experiments was done using the MatLab software. The third part or the article deals with the acoustic analysis of the generated sounds, indicating some of the acoustic phenomena that have emerged. Also, we discuss the possible ways of using such sounds in the process of composing and performing contemporary music.
APA, Harvard, Vancouver, ISO, and other styles
3

Miner, Nadine E., Timothy E. Goldsmith and Thomas P. Caudell. "Perceptual Validation Experiments for Evaluating the Quality of Wavelet-Synthesized Sounds". Presence: Teleoperators and Virtual Environments 11, no. 5 (October 2002): 508–24. http://dx.doi.org/10.1162/105474602320935847.

Full text
Abstract
This paper describes three psychoacoustic experiments that evaluated the perceptual quality of sounds generated from a new wavelet-based synthesis technique. The synthesis technique provides a method for modeling and synthesizing perceptually compelling sound. The experiments define a methodology for evaluating the effectiveness of any synthesized sound. An identification task and a context-based rating task evaluated the perceptual quality of individual sounds. These experiments confirmed that the wavelet technique synthesizes a wide variety of compelling sounds from a small model set. The third experiment obtained sound similarity ratings. Psychological scaling methods were applied to the similarity ratings to generate both spatial and network models of the perceptual relations among the synthesized sounds. These analysis techniques helped to refine and extend the sound models. Overall, the studies provided a framework to validate synthesized sounds for a variety of applications including virtual reality and data sonification systems.
APA, Harvard, Vancouver, ISO, and other styles
4

Min, Dongki, Buhm Park and Junhong Park. "Artificial Engine Sound Synthesis Method for Modification of the Acoustic Characteristics of Electric Vehicles". Shock and Vibration 2018 (2018): 1–8. http://dx.doi.org/10.1155/2018/5209207.

Full text
Abstract
Sound radiation from electric motor-driven vehicles is negligibly small compared to sound radiation from internal combustion engine automobiles. When running on a local road, an artificial sound is required as a warning signal for the safety of pedestrians. In this study, an engine sound was synthesized by combining artificial mechanical and combustion sounds. The mechanical sounds were made by summing harmonic components representing sounds from rotating engine cranks. The harmonic components, including not only magnitude but also phase due to frequency, were obtained by the numerical integration method. The combustion noise was simulated by random sounds with similar spectral characteristics to the measured value and its amplitude was synchronized by the rotating speed. Important parameters essential for the synthesized sound to be evaluated as radiation from actual engines were proposed. This approach enabled playing of sounds for arbitrary engines. The synthesized engine sounds were evaluated for recognizability of vehicle approach and sound impression through auditory experiments.
APA, Harvard, Vancouver, ISO, and other styles
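The engine-sound method summarized above builds the deterministic part by summing harmonic engine orders with both magnitude and phase, and the combustion part from random noise whose amplitude is synchronized to the rotation speed. The following toy sketch mirrors that structure with invented magnitudes, phases and firing pattern; it is not the paper's model:

```python
import numpy as np

def engine_sound(rpm=1800.0, duration=2.0, sr=44100, n_orders=8, seed=0):
    """Toy engine-like sound: harmonic 'mechanical' part + RPM-synced noise."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(duration * sr)) / sr
    f0 = rpm / 60.0  # crank rotation frequency in Hz

    # Mechanical part: sum of engine orders with assumed magnitudes and phases.
    mech = np.zeros_like(t)
    for k in range(1, n_orders + 1):
        magnitude = 1.0 / k                 # placeholder spectral envelope
        phase = rng.uniform(0, 2 * np.pi)   # placeholder phase per order
        mech += magnitude * np.sin(2 * np.pi * k * f0 * t + phase)

    # 'Combustion' part: broadband noise, amplitude-modulated at an assumed firing rate.
    noise = rng.standard_normal(t.size)
    firing = 0.5 * (1 + np.cos(2 * np.pi * 2 * f0 * t))
    comb = 0.2 * firing * noise

    out = mech + comb
    return out / np.max(np.abs(out))
```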
5

Miner, Nadine E. and Thomas P. Caudell. "A Wavelet Synthesis Technique for Creating Realistic Virtual Environment Sounds". Presence: Teleoperators and Virtual Environments 11, no. 5 (October 2002): 493–507. http://dx.doi.org/10.1162/105474602320935838.

Full text
Abstract
This paper describes a new technique for synthesizing realistic sounds for virtual environments. The four-phase technique described uses wavelet analysis to create a sound model. Parameters are extracted from the model to provide dynamic sound synthesis control from a virtual environment simulation. Sounds can be synthesized in real time using the fast inverse wavelet transform. Perceptual experiment validation is an integral part of the model development process. This paper describes the four-phase process for creating the parameterized sound models. Several developed models and perceptual experiments for validating the sound synthesis veracity are described. The developed models and results demonstrate proof of the concept and illustrate the potential of this approach.
APA, Harvard, Vancouver, ISO, and other styles
6

MANDELIS, JAMES and PHIL HUSBANDS. "GENOPHONE: EVOLVING SOUNDS AND INTEGRAL PERFORMANCE PARAMETER MAPPINGS". International Journal on Artificial Intelligence Tools 15, no. 04 (August 2006): 599–621. http://dx.doi.org/10.1142/s0218213006002837.

Full text
Abstract
This paper explores the application of evolutionary techniques to the design of novel sounds and their characteristics during performance. It is based on the "selective breeding" paradigm and as such dispensing with the need for detailed knowledge of the Sound Synthesis Techniques involved, in order to design sounds that are novel and of musical interest. This approach has been used successfully on several SSTs therefore validating it as an Adaptive Sound Meta-synthesis Technique. Additionally, mappings between the control and the parametric space are evolved as part of the sound setup. These mappings are used during performance.
APA, Harvard, Vancouver, ISO, and other styles
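The Genophone system above evolves synthesizer parameter sets and control mappings by "selective breeding": the user keeps the patches that sound interesting and new candidates are produced by crossover and mutation. A generic sketch of one breeding generation over abstract parameter vectors (the mapping of these values onto an actual synthesizer is not modeled here) might be:

```python
import numpy as np

def breed(parents, population=8, mutation=0.05, seed=0):
    """One 'selective breeding' generation over synth parameter vectors.

    parents: list of parameter vectors (values in [0, 1]) picked by the user
    as sounding interesting. Children are crossovers plus Gaussian mutation.
    """
    rng = np.random.default_rng(seed)
    parents = np.asarray(parents, dtype=float)
    children = []
    for _ in range(population):
        a, b = parents[rng.integers(len(parents), size=2)]
        mask = rng.uniform(size=a.size) < 0.5           # uniform crossover
        child = np.where(mask, a, b) + rng.normal(0, mutation, a.size)
        children.append(np.clip(child, 0.0, 1.0))
    return children

# Toy usage: evolve 6-dimensional patches from two user-selected parents.
next_generation = breed([np.random.rand(6), np.random.rand(6)], population=8)
```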
7

Serquera, Jaime and Eduardo Reck Miranda. "Histogram Mapping Synthesis: A Cellular Automata-Based Technique for Flexible Sound Design". Computer Music Journal 38, no. 4 (December 2014): 38–52. http://dx.doi.org/10.1162/comj_a_00267.

Full text
Abstract
Histogram mapping synthesis (HMS) is a new technique for sound design based on cellular automata (CA). Cellular automata are computational models that create moving images. In the context of HMS, and based on a novel digital signal processing approach, these images are analyzed by histogram measurements, giving a sequence of histograms as a result. In a nutshell, these histogram sequences are converted into spectrograms that, in turn, are rendered into sounds. Unlike other CA-based systems, the HMS mapping process is not intuition-based, nor is it totally arbitrary; it is based instead on resemblances discovered between the components of the histogram sequences and the spectral components of the sounds. Our main concern is to address the problem of the sound-design limitations of synthesis techniques based on CA. These limitations stem, fundamentally, from the unpredictable and autonomous nature of these computational models. As a result, one of the main advantages of HMS is that it affords more control over the sound-design process than other sound-synthesis techniques using CA. The timbres that we have designed with HMS range from those that are novel to those that are imitations of sounds produced by acoustic means. All the sounds obtained present dynamic features, and many of them, including some of those that are novel, retain important characteristics of sounds produced by acoustic means.
APA, Harvard, Vancouver, ISO, and other styles
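Histogram mapping synthesis, as described above, iterates a cellular automaton, measures a histogram of each generation, interprets the resulting histogram sequence as a spectrogram and renders it to sound. The fragment below is a deliberately simplified, hypothetical version of that pipeline (a one-dimensional continuous-valued automaton and a random-phase overlap-add resynthesis), not the authors' system:

```python
import numpy as np

def hms_sketch(n_cells=512, n_frames=200, n_bins=257, seed=1):
    """Histogram Mapping Synthesis, heavily simplified.

    A continuous-valued cellular automaton is iterated; the histogram of each
    generation becomes one magnitude-spectrum frame; frames are rendered with
    random phases and overlap-add. All rules and mappings here are assumptions.
    """
    rng = np.random.default_rng(seed)
    cells = rng.uniform(size=n_cells)
    hop = n_bins - 1                       # 50% overlap with window length 2*hop
    win = np.hanning(2 * hop)
    out = np.zeros(n_frames * hop + 2 * hop)

    for frame in range(n_frames):
        # Local averaging rule followed by a nonlinearity (a toy CA update).
        neighbours = (np.roll(cells, 1) + cells + np.roll(cells, -1)) / 3.0
        cells = 3.8 * neighbours * (1.0 - neighbours)   # logistic-style map

        # Histogram of the cell states -> one spectral frame.
        mag, _ = np.histogram(cells, bins=n_bins, range=(0.0, 1.0))
        spectrum = mag.astype(float) * np.exp(1j * rng.uniform(0, 2 * np.pi, n_bins))
        grain = np.fft.irfft(spectrum, n=2 * hop)
        out[frame * hop: frame * hop + 2 * hop] += win * grain

    return out / (np.max(np.abs(out)) + 1e-12)
```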
8

Corbella, Maurizio and Anna Katharina Windisch. "Sound Synthesis, Representation and Narrative Cinema in the Transition to Sound (1926-1935)". Cinémas 24, no. 1 (February 26, 2014): 59–81. http://dx.doi.org/10.7202/1023110ar.

Full text
Abstract
Since the beginnings of western media culture, sound synthesis has played a major role in articulating cultural notions of the fantastic and the uncanny. As a counterpart to sound reproduction, sound synthesis operated in the interstices of the original/copy correspondence and prefigured the construction of a virtual reality through the generation of novel sounds apparently lacking any equivalent with the acoustic world. Experiments on synthetic sound crucially intersected cinema’s transition to synchronous sound in the late 1920s, thus configuring a particularly fertile scenario for the redefinition of narrative paradigms and the establishment of conventions for sound film production. Sound synthesis can thus be viewed as a structuring device of such film genres as horror and science fiction, whose codification depended on the constitution of synchronized sound film. More broadly, sound synthesis challenged the basic implications of realism based on the rendering of speech and the construction of cinematic soundscapes.
APA, Harvard, Vancouver, ISO, and other styles
9

WRIGHT, MATTHEW, JAMES BEAUCHAMP, KELLY FITZ, XAVIER RODET, AXEL RÖBEL, XAVIER SERRA and GREGORY WAKEFIELD. "Analysis/synthesis comparison". Organised Sound 5, no. 3 (December 2000): 173–89. http://dx.doi.org/10.1017/s1355771800005070.

Full text
Abstract
We compared six sound analysis/synthesis systems used for computer music. Each system analysed the same collection of twenty-seven varied input sounds, and output the results in Sound Description Interchange Format (SDIF). We describe each system individually then compare the systems in terms of availability, the sound model(s) they use, interpolation models, noise modelling, the mutability of various sound models, the parameters that must be set to perform analysis, and characteristic artefacts. Although we have not directly compared the analysis results among the different systems, our work has made such a comparison possible.
APA, Harvard, Vancouver, ISO, and other styles
10

Yuan, J., X. Cao, D. Wang, J. Chen and S. Wang. "Research on Bus Interior Sound Quality Based on Masking Effects". Fluctuation and Noise Letters 17, no. 04 (September 14, 2018): 1850037. http://dx.doi.org/10.1142/s0219477518500372.

Full text
Abstract
Masking effect is a very common psychoacoustic phenomenon, which occurs when there is a suitable sound that masks the original sound. In this paper, we will discuss bus interior sound quality based on the masking effects and the appropriate masking sound selection to mask the original sounds inside a bus. We developed three subjective evaluation indexes which are noisiness, acceptability and anxiety. These were selected to reflect passengers’ feelings more accurately when they are subject to the masking sound. To analyze the bus interior sound quality with various masking sounds, the subjective–objective synthesis evaluation model was constructed using fuzzy mathematics. According to the study, the appropriate masking sound can mask the bus interior noise and optimize the bus interior sound quality.
APA, Harvard, Vancouver, ISO, and other styles

Theses on the topic "Sound synthesis"

1

PAPETTI, Stefano. "Sound modeling issues in interactive sonification - From basic contact events to synthesis and manipulation tools". Doctoral thesis, Università degli Studi di Verona, 2010. http://hdl.handle.net/11562/340961.

Full text
Abstract
The work presented in this thesis ranges over a variety of research topics, spacing from human-computer interaction to physical-modeling. What combines such broad areas of interest is the idea of using physically-based computer simulations of acoustic phenomena in order to provide human-computer interfaces with sound feedback which is consistent with the user interaction. In this regard, recent years have seen the emergence of several new disciplines that go under the name of -- to cite a few -- auditory display, sonification and sonic interaction design. This thesis deals with the design and implementation of efficient sound algorithms for interactive sonification. To this end, the physical modeling of everyday sounds is taken into account, that is sounds not belonging to the families of speech and musical sounds.
APA, Harvard, Vancouver, ISO, and other styles
2

FONTANA, Federico. "Physics-based models for the acoustic representation of space in virtual environments". Doctoral thesis, Università degli Studi di Verona, 2003. http://hdl.handle.net/11562/342240.

Full text
Abstract
This work addresses several questions within the broader topic of representing scenes and virtual environments in human-machine interaction contexts in which the acoustic modality forms an integral or predominant part of the overall information conveyed from the machine to the user through a personal multimodal or acoustic-only interface. More precisely, it examines the problem of how to present the audio message so that it provides the user with information about the represented context that is as precise and usable as possible. The ultimate goal is to integrate into a virtual scenario at least part of the acoustic information that the user, in a real context, normally relies on to make sense of the surrounding world as a whole. This is especially important when the focus of attention, which typically occupies the visual channel almost completely, is devoted to a specific task.
This work deals with the simulation of virtual acoustic spaces using physics-based models. The acoustic space is what we perceive about space using our auditory system. The physical nature of the models means that they will present spatial attributes (such as, for example, shape and size) as a salient feature of their structure, in a way that space will be directly represented and manipulated by means of them.
APA, Harvard, Vancouver, ISO, and other styles
3

Liao, Wei-Hsiang. "Modelling and transformation of sound textures and environmental sounds". Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066725/document.

Full text
Abstract
The processing of environmental sounds has become an important topic in various areas. Environmental sounds are mostly constituted of a kind of sounds called sound textures. Sound textures are usually non-sinusoidal, noisy and stochastic. Several researches have stated that human recognizes sound textures with statistics that characterizing the envelopes of auditory critical bands. Existing synthesis algorithms can impose some statistical properties to a certain extent, but most of them are computational intensive. We propose a new analysis-synthesis framework that contains a statistical description that consists of perceptually important statistics and an efficient mechanism to adapt statistics in the time-frequency domain. The quality of resynthesised sound is at least as good as state-of-the-art but more efficient in terms of computation time. The statistic description is based on the STFT. If certain conditions are met, it can also adapt to other filter bank based time-frequency representations (TFR). The adaptation of statistics is achieved by using the connection between the statistics on TFR and the spectra of time-frequency domain coefficients. It is possible to adapt only a part of cross-correlation functions. This allows the synthesis process to focus on important statistics and ignore the irrelevant parts, which provides extra flexibility. The proposed algorithm has several perspectives. It could possibly be used to generate unseen sound textures from artificially created statistical descriptions. It could also serve as a basis for transformations like stretching or morphing. One could also expect to use the model to explore semantic control of sound textures
APA, Harvard, Vancouver, ISO, and other styles
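The thesis above characterizes sound textures through statistics of sub-band envelopes measured on a time-frequency representation such as the STFT. A minimal example of the analysis side, computing per-band envelope means, variances and inter-band correlations (a small, assumed subset of the statistics actually used), assuming SciPy is available:

```python
import numpy as np
from scipy.signal import stft

def texture_statistics(x, sr=22050, nperseg=512):
    """Summary statistics of sub-band envelopes (a simplified descriptor set)."""
    _, _, Z = stft(x, fs=sr, nperseg=nperseg)
    env = np.abs(Z)                      # band envelopes: (n_bands, n_frames)
    mean = env.mean(axis=1)
    var = env.var(axis=1)
    # Normalized cross-correlation between band envelopes at lag zero.
    centred = env - mean[:, None]
    norm = np.sqrt((centred ** 2).sum(axis=1)) + 1e-12
    corr = (centred @ centred.T) / np.outer(norm, norm)
    return {"mean": mean, "var": var, "band_correlation": corr}

# Example: statistics of one second of white noise.
stats = texture_statistics(np.random.default_rng(0).standard_normal(22050))
```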
4

Chapman, David P. "Playing with sounds : a spatial solution for computer sound synthesis". Thesis, University of Bath, 1996. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.307047.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Lee, Chung. "Sound texture synthesis using an enhanced overlap-add approach /". View abstract or full-text, 2008. http://library.ust.hk/cgi/db/thesis.pl?CSED%202008%20LEE.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Conan, Simon. "Contrôle intuitif de la synthèse sonore d’interactions solidiennes : vers les métaphores sonores". Thesis, Ecole centrale de Marseille, 2014. http://www.theses.fr/2014ECDM0012/document.

Full text
Abstract
Perceptual control (i.e. from evocations) of sound synthesis processes is a current challenge. Indeed, sound synthesis models generally involve a lot of low-level control parameters, whose manipulation requires a certain expertise with respect to the sound generation process. Thus, intuitive control of sound generation is interesting for users, and especially non-experts, because they can create and control sounds from evocations. Such a control is not immediate and is based on strong assumptions linked to our perception, and especially the existence of acoustic morphologies, so-called ``invariants'', responsible for the recognition of specific sound events.This thesis tackles the problem by focusing on invariants linked to specific sound generating actions. If follows two main parts. The first is to identify invariants responsible for the recognition of three categories of continuous interactions: rubbing, scratching and rolling. The aim is to develop a real-time sound synthesizer with intuitive controls that enables users to morph continuously between the different interactions (e.g. progressively transform a rubbing sound into a rolling one). The synthesis model will be developed in the framework of the ``action-object'' paradigm which states that sounds can be described as the result of an action (e.g. scratching) on an object (e.g. a wood plate). This paradigm naturally fits the well-known source-filter approach for sound synthesis, where the perceptually relevant information linked to the object is described in the ``filter'' part, and the action-related information is described in the ``source'' part. To derive our generic synthesis model, several approaches are treated: physical models, phenomenological approaches and listening tests with recorded and synthesized sounds.The second part of the thesis deals with the concept of ``sonic metaphors'' by expanding the object notion to various sound textures. The question raised is the following: given any sound texture, is it possible to modify its intrinsic properties such that it evokes a particular interaction, like rolling or rubbing for instance? To create these sonic metaphors, a cross-synthesis process is used where the ``source'' part is based on the sound morphologies linked to the actions previously identified, and the ``filter'' part renders the sound texture properties. This work, together with the chosen paradigm offers new perspectives to build a sound language
APA, Harvard, Vancouver, ISO, and other styles
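The action-object paradigm described above maps naturally onto source-filter synthesis: the "source" carries the action (for example a scratching excitation) and the "filter" carries the object (its resonances). The toy sketch below illustrates that split with an invented micro-impact density and made-up resonator modes rather than the calibrated invariants studied in the thesis:

```python
import numpy as np
from scipy.signal import lfilter

def resonator(freq, decay, sr):
    """Two-pole resonant filter coefficients for one object mode."""
    r = np.exp(-1.0 / (decay * sr))
    theta = 2 * np.pi * freq / sr
    return [1.0], [1.0, -2 * r * np.cos(theta), r * r]

def scratch_sound(duration=1.0, sr=44100, density=200.0, seed=0):
    """Action part: a sparse train of noisy micro-impacts (a toy 'scratching')."""
    rng = np.random.default_rng(seed)
    n = int(duration * sr)
    excitation = rng.standard_normal(n) * (rng.uniform(size=n) < density / sr)

    # Object part: a few assumed resonant modes of a plate-like object.
    out = np.zeros(n)
    for freq, decay, gain in [(320.0, 0.08, 1.0), (870.0, 0.05, 0.6), (2100.0, 0.03, 0.3)]:
        b, a = resonator(freq, decay, sr)
        out += gain * lfilter(b, a, excitation)
    return out / np.max(np.abs(out))
```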
7

Caracalla, Hugo. "Sound texture synthesis from summary statistics". Electronic Thesis or Diss., Sorbonne université, 2019. http://www.theses.fr/2019SORUS676.

Full text
Abstract
Sound textures are a wide class of sounds that includes the sound of the rain falling, the hubbub of a crowd and the chirping of flocks of birds. All these sounds present an element of unpredictability which is not commonly sought after in sound synthesis, requiring the use of dedicated algorithms. However, the diverse audio properties of sound textures make the designing of an algorithm able to convincingly recreate varied textures a complex task. This thesis focuses on parametric sound texture synthesis. In this paradigm, a set of summary statistics are extracted from a target texture and iteratively imposed onto a white noise. If the set of statistics is appropriate, the white noise is modified until it resemble the target, sounding as if it had been recorded moments later. In a first part, we propose improvements to perceptual-based parametric method. These improvements aim at making its synthesis of sharp and salient events by mainly altering and simplifying its imposition process. In a second, we adapt a parametric visual texture synthesis method based statistics extracted by a Convolutional Neural Networks (CNN) to work on sound textures. We modify the computation of its statistics to fit the properties of sound signals, alter the architecture of the CNN to best fit audio elements present in sound textures and use a time-frequency representation taking both magnitude and phase into account
APA, Harvard, Vancouver, ISO, and other styles
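Parametric texture synthesis, as summarized above, starts from white noise and iteratively adjusts it until its summary statistics match those of a target recording. The sketch below imposes only per-band envelope means and standard deviations by alternating STFT modification and resynthesis; it is a schematic stand-in for the much richer statistic sets used in the thesis:

```python
import numpy as np
from scipy.signal import stft, istft

def impose_band_statistics(target, n_iter=10, sr=22050, nperseg=512, seed=0):
    """Iteratively shape white noise so its sub-band envelope mean/std match a target.

    Only first- and second-order per-band statistics are imposed here; this is a
    simplified illustration of the parametric texture synthesis paradigm.
    """
    rng = np.random.default_rng(seed)
    _, _, T = stft(target, fs=sr, nperseg=nperseg)
    t_mean = np.abs(T).mean(axis=1, keepdims=True)
    t_std = np.abs(T).std(axis=1, keepdims=True)

    x = rng.standard_normal(len(target))           # start from white noise
    for _ in range(n_iter):
        _, _, X = stft(x, fs=sr, nperseg=nperseg)
        mag, phase = np.abs(X), np.angle(X)
        # Re-standardize each band's envelope, then impose the target statistics.
        mag = (mag - mag.mean(axis=1, keepdims=True)) / (mag.std(axis=1, keepdims=True) + 1e-12)
        mag = np.maximum(mag * t_std + t_mean, 0.0)
        _, x = istft(mag * np.exp(1j * phase), fs=sr, nperseg=nperseg)
        x = x[: len(target)]
    return x / (np.max(np.abs(x)) + 1e-12)
```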
8

Serquera, Jaime. "Sound synthesis with cellular automata". Thesis, University of Plymouth, 2012. http://hdl.handle.net/10026.1/1189.

Full text
Abstract
This thesis reports on new music technology research which investigates the use of cellular automata (CA) for the digital synthesis of dynamic sounds. The research addresses the problem of the sound design limitations of synthesis techniques based on CA. These limitations fundamentally stem from the unpredictable and autonomous nature of these computational models. Therefore, the aim of this thesis is to develop a sound synthesis technique based on CA capable of allowing a sound design process. A critical analysis of previous research in this area will be presented in order to justify that this problem has not been previously solved. Also, it will be discussed why this problem is worthwhile to solve. In order to achieve such aim, a novel approach is proposed which considers the output of CA as digital signals and uses DSP procedures to analyse them. This approach opens a large variety of possibilities for better understanding the self-organization process of CA with a view to identifying not only mapping possibilities for making the synthesis of sounds possible, but also control possibilities which enable a sound design process. As a result of this approach, this thesis presents a technique called Histogram Mapping Synthesis (HMS), which is based on the statistical analysis of CA evolutions by histogram measurements. HMS will be studied with four different automatons, and a considerable number of control mechanisms will be presented. These will show that HMS enables a reasonable sound design process. With these control mechanisms it is possible to design and produce in a predictable and controllable manner a variety of timbres. Some of these timbres are imitations of sounds produced by acoustic means and others are novel. All the sounds obtained present dynamic features and many of them, including some of those that are novel, retain important characteristics of sounds produced by acoustic means.
APA, Harvard, Vancouver, ISO, and other styles
9

Picard-Limpens, Cécile. "Expressive Sound Synthesis for Animation". PhD thesis, Université de Nice Sophia-Antipolis, 2009. http://tel.archives-ouvertes.fr/tel-00440417.

Full text
Abstract
The main objective of this work is to provide tools for realistic, expressive, real-time synthesis of the sounds produced by physical interactions between objects in a virtual scene. Such sound effects, for instance collisions between solids or continuous interactions between surfaces, cannot be predefined and computed in a pre-production phase. In this context, two approaches are proposed: the first based on modeling the physical phenomena responsible for sound emission, the second based on the processing of audio recordings. In the physically based approach, the sound source is treated as the combination of an excitation and a resonator. First, an original technique is presented for deriving the interaction force between surfaces in the case of continuous contacts, such as rolling, from the textures used for the graphical rendering of the surfaces in the virtual scene. Second, a robust and flexible modal analysis method is proposed to model the acoustic vibrations of the resonator. Besides handling a wide variety of geometries and offering multi-resolution modal parameters, the method addresses the problem of coherence between the physics simulation and the sound synthesis that is frequently encountered in animation. In the empirical approach, a granular technique is proposed that expresses sound synthesis as a coherent arrangement of sound particles or grains. Recordings are first pre-processed into a compact sound material, which is then manipulated in real time, on the one hand for a complete resynthesis of the original recordings, and on the other hand for flexible use driven by data reported by the simulation engine and/or predefined procedures. Finally, fracture sounds are considered, given their frequent use in virtual environments and in particular video games. Since the complexity of the phenomenon makes a purely physical model very costly, while recordings are ill-suited to the wide variety of sonic micro-events, the thesis proposes a hybrid model and possible strategies for combining a physical approach with an empirical one. The resulting model aims to reproduce the sound event of the fracture, from its initiation to the creation of micro-debris.
APA, Harvard, Vancouver, ISO, and other styles
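The physically based half of the thesis above represents the resonator through modal analysis, so that an impact response becomes a sum of exponentially damped sinusoidal modes. A compact example with invented modal frequencies, decays and amplitudes (not measured data):

```python
import numpy as np

def modal_impact(modes, duration=1.5, sr=44100):
    """Impact sound as a sum of damped sinusoidal modes.

    modes: list of (frequency_hz, decay_s, amplitude) tuples; the values used
    below are invented, not measured modal data.
    """
    t = np.arange(int(duration * sr)) / sr
    out = np.zeros_like(t)
    for freq, decay, amp in modes:
        out += amp * np.exp(-t / decay) * np.sin(2 * np.pi * freq * t)
    return out / np.max(np.abs(out))

# Toy 'struck bar' with three modes.
sound = modal_impact([(440.0, 0.40, 1.0), (1220.0, 0.20, 0.5), (2710.0, 0.08, 0.25)])
```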
10

Picard-Limpens, Cécile. "Expressive sound synthesis for animation". Nice, 2009. http://www.theses.fr/2009NICE4075.

Full text
Abstract
The main objective of this thesis is to provide tools for an expressive and real-time synthesis of sounds resulting from physical interactions of various objects in a 3D virtual environment. Indeed, these sounds, such as collisions sounds or sounds from continuous interaction between surfaces, are difficult to create in a pre-production process since they are highly dynamic and vary drastically depending on the interaction and objects. To achieve this goal, two approaches are proposed; the first one is based on simulation of physical phenomena responsible for sound production, the second one based on the processing of a recordings database. According to a physically based point of view, the sound source is modelled as the combination of an excitation and a resonator. We first present an original technique to model the interaction force for continuous contacts, such as rolling. Visual textures of objects in the environment are reused as a discontinuity map to create audible position-dependent variations during continuous contacts. We then propose a method for a robust and flexible modal analysis to formulate the resonator. Besides allowing to handle a large variety of geometries and proposing a multi-resolution of modal parameters, the technique enables us to solve the problems of coherence between physics simulation and sound synthesis that are frequently encountered in animation. Following a more empirical approach, we propose an innovative method that consists in bridging the gap between direct playback of audio recordings and physically based synthesis by retargetting audio grains extracted from recordings according to the output of a physics engine. In an off-line analysis task, we automatically segment audio recordings into atomic grains and we represent each original recording as a compact series of audio grains. During interactive animations, the grains are triggered individually or in sequence according to parameters reported from the physics engine and/or userdefined procedures. Finally, we address fracture events which commonly appear in virtual environments, especially in video games. Because of their complexity that makes a purely physical-based model prohibitively expensive and an empirical approach impracticable for the large variety of micro-events, this thesis opens the discussion on a hybrid model and the possible strategies to combine a physically based approach and an empirical approach. The model aims at appropriately rendering the sound corresponding to the fracture and to each specific sounding sample when material breaks into pieces
APA, Harvard, Vancouver, ISO, and other styles
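The empirical half of the same thesis segments recordings into audio grains offline and retriggers them at run time according to parameters reported by the physics engine. The bare-bones sketch below assumes fixed-length segmentation and velocity-scaled triggering, which are simplifications of the thesis' onset-based analysis and retargeting:

```python
import numpy as np

def segment_grains(recording, sr=44100, grain_ms=50):
    """Offline step: cut a recording into fixed-length grains (a simplification
    of onset/energy-based segmentation)."""
    size = int(sr * grain_ms / 1000)
    n = len(recording) // size
    return [recording[i * size:(i + 1) * size] for i in range(n)]

def render_impacts(grains, impacts, duration=2.0, sr=44100, seed=0):
    """Run-time step: for each (time_s, velocity) impact reported by a physics
    engine, place a randomly chosen grain scaled by the impact velocity."""
    rng = np.random.default_rng(seed)
    out = np.zeros(int(duration * sr))
    for time_s, velocity in impacts:
        grain = grains[rng.integers(len(grains))]
        start = int(time_s * sr)
        end = min(start + len(grain), len(out))
        out[start:end] += velocity * grain[: end - start]
    return out

# Toy usage with a noise 'recording' and three simulated impacts.
rec = np.random.default_rng(1).standard_normal(44100)
grains = segment_grains(rec)
mix = render_impacts(grains, impacts=[(0.1, 0.9), (0.6, 0.4), (1.2, 0.7)])
```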

Books on the topic "Sound synthesis"

1

Beauchamp, James W. Analysis, synthesis, and perception of musical sounds: The sound of music. New York: Springer, 2010.

Search full text
APA, Harvard, Vancouver, ISO, and other styles
2

Sound synthesis and sampling. 2nd ed. Boston: Focal Press, 2004.

Search full text
APA, Harvard, Vancouver, ISO, and other styles
3

Sound synthesis and sampling. 3rd ed. Oxford: Focal, 2009.

Search full text
APA, Harvard, Vancouver, ISO, and other styles
4

Sound synthesis and sampling. Oxford; Boston: Focal Press, 1996.

Search full text
APA, Harvard, Vancouver, ISO, and other styles
5

Ziemer, Tim. Psychoacoustic Music Sound Field Synthesis. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-23033-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Hecker, Florian. Halluzination, Perspektive, Synthese. Edited by Vanessa Müller, Florian Hecker, and Kunsthalle Wien. Berlin: Sternberg Press, 2019.

Search full text
APA, Harvard, Vancouver, ISO, and other styles
7

Ahrens, Jens. Analytic Methods of Sound Field Synthesis. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-25743-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Sueur, Jérôme. Sound Analysis and Synthesis with R. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-77647-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Sound synthesis: Analog and digital techniques. Blue Ridge Summit, PA: TAB Books, 1990.

Search full text
APA, Harvard, Vancouver, ISO, and other styles
10

Ahrens, Jens. Analytic Methods of Sound Field Synthesis. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012.

Search full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Sound synthesis"

1

Uncini, Aurelio. "Sound Synthesis". In Springer Topics in Signal Processing, 565–608. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-14228-4_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Sporer, Thomas, Karlheinz Brandenburg, Sandra Brix and Christoph Sladeczek. "Wave Field Synthesis". In Immersive Sound, 311–32. New York; London: Routledge, 2017. http://dx.doi.org/10.4324/9781315707525-11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Riches, Martin. "Mechanical Speech Synthesis". In Sound Inventions, 351–75. London: Focal Press, 2021. http://dx.doi.org/10.4324/9781003003526-35.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Liu, Shiguang and Dinesh Manocha. "Sound Rendering". In Sound Synthesis, Propagation, and Rendering, 45–52. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-79214-4_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Moffat, David, Rod Selfridge and Joshua D. Reiss. "Sound Effect Synthesis". In Foundations in Sound Design for Interactive Media, 274–99. Sound Design Series, vol. 2. New York, NY: Routledge, 2019. http://dx.doi.org/10.4324/9781315106342-13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Mazzola, Guerino, Yan Pang, William Heinze, Kyriaki Gkoudina, Gian Afrisando Pujakusuma, Jacob Grunklee, Zilu Chen, Tianxue Hu and Yiqing Ma. "Standard Sound Synthesis". In Computational Music Science, 19–33. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-00982-3_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Avanzini, Federico. "Procedural Modeling of Interactive Sound Sources in Virtual Reality". In Sonic Interactions in Virtual Environments, 49–76. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04021-4_2.

Full text
Abstract
This chapter addresses the first building block of sonic interactions in virtual environments, i.e., the modeling and synthesis of sound sources. Our main focus is on procedural approaches, which strive to gain recognition in commercial applications and in the overall sound design workflow, firmly grounded in the use of samples and event-based logics. Special emphasis is placed on physics-based sound synthesis methods and their potential for improved interactivity. The chapter starts with a discussion of the categories, functions, and affordances of sounds that we listen to and interact with in real and virtual environments. We then address perceptual and cognitive aspects, with the aim of emphasizing the relevance of sound source modeling with respect to the senses of presence and embodiment of a user in a virtual environment. Next, procedural approaches are presented and compared to sample-based approaches, in terms of models, methods, and computational costs. Finally, we analyze the state of the art in current uses of these approaches for Virtual Reality applications.
APA, Harvard, Vancouver, ISO, and other styles
8

Sueur, Jérôme. "Synthesis". En Sound Analysis and Synthesis with R, 555–609. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-77647-7_18.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
9

Xie, Bosun. "Spatial sound reproduction by wave field synthesis". In Spatial Sound, 439–96. Boca Raton: CRC Press, 2022. http://dx.doi.org/10.1201/9781003081500-10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Sueur, Jérôme. "What Is Sound?" En Sound Analysis and Synthesis with R, 7–36. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-77647-7_2.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.

Conference papers on the topic "Sound synthesis"

1

Lloyd, D. Brandon, Nikunj Raghuvanshi and Naga K. Govindaraju. "Sound synthesis for impact sounds in video games". In Symposium on Interactive 3D Graphics and Games. New York, New York, USA: ACM Press, 2011. http://dx.doi.org/10.1145/1944745.1944755.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Guatimosim, Júlio, José Henrique Padovani and Carlos Guatimosim. "Concatenative Sound Synthesis as a Technomorphic Model in Computer-Aided Composition". In Simpósio Brasileiro de Computação Musical. Sociedade Brasileira de Computação - SBC, 2021. http://dx.doi.org/10.5753/sbcm.2021.19431.

Full text
Abstract
The text presents a process aimed at computer-aided composition for percussion instruments based on Concatenative Sound Synthesis (CSS). After the introduction, we address the concept of ”technomorphism” and the influence of electroacoustic techniques in instrumental composition. The third section covers processes of instrumental sound synthesis and its development in the context of Computer-Aided Composition (CAC) and Computer-Aided Music Orchestration (CAMO). Then, we describe the general principles of Concatenative Sound Synthesis (CSS). The fifth section covers our adaptation of CSS as a technomorphic model for Computer-Aided Composition/Orchestration, employing a corpus of percussion sounds/instruments. In the final section, we discuss future developments and the mains characteristics of our implementation and strategy.
APA, Harvard, Vancouver, ISO, and other styles
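Concatenative sound synthesis, as used in the paper above, selects units from a pre-analyzed corpus whose audio descriptors best match a target sequence and concatenates them. A minimal descriptor-matching sketch (using only RMS energy and spectral centroid as descriptors, an illustrative simplification of a real CSS descriptor set):

```python
import numpy as np

def describe(unit, sr=44100):
    """Two toy descriptors per unit: RMS energy and spectral centroid."""
    rms = np.sqrt(np.mean(unit ** 2))
    spectrum = np.abs(np.fft.rfft(unit))
    freqs = np.fft.rfftfreq(len(unit), 1.0 / sr)
    centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)
    return np.array([rms, centroid])

def concatenative_synthesis(corpus_units, target_units, sr=44100):
    """For each target unit, pick the corpus unit with the closest descriptors."""
    corpus_desc = np.array([describe(u, sr) for u in corpus_units])
    # Normalize descriptor ranges so both contribute to the distance.
    scale = corpus_desc.std(axis=0) + 1e-12
    out = []
    for unit in target_units:
        d = describe(unit, sr)
        dist = np.linalg.norm((corpus_desc - d) / scale, axis=1)
        out.append(corpus_units[int(np.argmin(dist))])
    return np.concatenate(out)

# Toy usage: rebuild a 'target' noise sequence from a corpus of scaled noise grains.
rng = np.random.default_rng(0)
corpus = [rng.standard_normal(2048) * g for g in np.linspace(0.1, 1.0, 20)]
target = [rng.standard_normal(2048) * 0.5 for _ in range(5)]
result = concatenative_synthesis(corpus, target)
```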
3

Bellona, Jon, Lin Bai, Luke Dahl and Amy LaViers. "Empirically Informed Sound Synthesis Application for Enhancing the Perception of Expressive Robotic Movement". In The 23rd International Conference on Auditory Display. Arlington, Virginia: The International Community for Auditory Display, 2017. http://dx.doi.org/10.21785/icad2017.049.

Full text
Abstract
Since people often communicate internal states and intentions through movement, robots can better interact with humans if they too can modify their movements to communicate changing state. These movements, which may be seen as supplementary to those required for workspace tasks, may be termed “expressive.” However, robot hardware, which cannot recreate the same range of dynamics as human limbs, often limit expressive capacity. One solution is to augment expressive robotic movement with expressive sound. To that end, this paper presents an application for synthesizing sounds that match various movement qualities. Its design is based on an empirical study analyzing sound and movement qualities, where movement qualities are parametrized according to Laban’s Effort System. Our results suggests a number of correspondences between movement qualities and sound qualities. These correspondences are presented here and discussed within the context of designing movement-quality-to-sound-quality mappings in our sound synthesis application. This application will be used in future work testing user perceptions of expressive movements with synchronous sounds.
APA, Harvard, Vancouver, ISO, and other styles
4

Kreutzer, Cornelia, Jacqueline Walker and Michael O'Neill. "A parametric model for spectral sound synthesis of musical sounds". In 2008 International Conference on Audio, Language and Image Processing (ICALIP). IEEE, 2008. http://dx.doi.org/10.1109/icalip.2008.4590233.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Lykova, Marina P. "The content of speech therapy work on the development of language analysis and synthesis skills in preschool children". In Особый ребенок: Обучение, воспитание, развитие. Yaroslavl state pedagogical university named after К. D. Ushinsky, 2021. http://dx.doi.org/10.20323/978-5-00089-474-3-2021-326-330.

Full text
Abstract
The article presents the content of speech therapy work on the development of language analysis and synthesis skills in preschool children. The author offers a system of games and exercises for recognizing sounds, determining the number, sequence and place of a word in a sentence, forming the action of sound, syllabic analysis and synthesis in the mental plane
APA, Harvard, Vancouver, ISO, and other styles
6

Baird, Alice, Emilia Parada-Cabaleiro, Cameron Fraser, Simone Hantke and Björn Schuller. "The Perceived Emotion of Isolated Synthetic Audio". In AM'18: Sound in Immersion and Emotion. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3243274.3243277.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Moore, Dylan, Rebecca Currano and David Sirkin. "Sound Decisions: How Synthetic Motor Sounds Improve Autonomous Vehicle-Pedestrian Interactions". In AutomotiveUI '20: 12th International Conference on Automotive User Interfaces and Interactive Vehicular Applications. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3409120.3410667.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Kayahara, Takuro and Hiroki Abe. "Synthesis of footstep sounds of crowd from single step sound based on cognitive property of footstep sounds". In 2011 IEEE International Symposium on VR Innovation (ISVRI). IEEE, 2011. http://dx.doi.org/10.1109/isvri.2011.5759644.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

James, Doug. "Harmonic fluid sound synthesis". In ACM SIGGRAPH 2009 Computer Animation Festival. New York, New York, USA: ACM Press, 2009. http://dx.doi.org/10.1145/1596685.1596739.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

"Letter-to-sound rules for Korean". En Proceedings of 2002 IEEE Workshop on Speech Synthesis. IEEE, 2002. http://dx.doi.org/10.1109/wss.2002.1224370.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
