Theses on the subject "Sound synthesis"

Consult the 50 best theses for your research on the subject "Sound synthesis".

1

Papetti, Stefano. « Sound modeling issues in interactive sonification - From basic contact events to synthesis and manipulation tools ». Doctoral thesis, Università degli Studi di Verona, 2010. http://hdl.handle.net/11562/340961.

Abstract:
The work presented in this thesis ranges over a variety of research topics, spanning from human-computer interaction to physical modeling. What unites such broad areas of interest is the idea of using physically based computer simulations of acoustic phenomena to provide human-computer interfaces with sound feedback that is consistent with the user's interaction. In this regard, recent years have seen the emergence of several new disciplines that go under the names of, to cite a few, auditory display, sonification and sonic interaction design. This thesis deals with the design and implementation of efficient sound algorithms for interactive sonification. To this end, it draws on the physical modeling of everyday sounds, that is, sounds not belonging to the families of speech and musical sounds.
2

Fontana, Federico. « Physics-based models for the acoustic representation of space in virtual environments ». Doctoral thesis, Università degli Studi di Verona, 2003. http://hdl.handle.net/11562/342240.

Abstract:
This work addresses several questions within the broader theme of representing scenes and virtual environments in human-computer interaction contexts, in which the auditory modality forms an integral or dominant part of the overall information conveyed from the machine to the user through a personal multimodal, or purely auditory, interface. More precisely, it examines the problem of how to present the audio message so that it provides the user with information about the represented context that is as precise and usable as possible. The goal is to integrate into a virtual scenario at least part of the acoustic information that the user, in a real context, normally relies on to make sense of the surrounding world as a whole. This is especially important when the focus of attention, which typically occupies the visual channel almost completely, is devoted to a specific task.
This work deals with the simulation of virtual acoustic spaces using physics-based models. The acoustic space is what we perceive about space through our auditory system. The physical nature of the models means that they present spatial attributes (such as shape and size) as a salient feature of their structure, so that space can be represented and manipulated directly through them.
3

Liao, Wei-Hsiang. « Modelling and transformation of sound textures and environmental sounds ». Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066725/document.

Abstract:
The processing of environmental sounds has become an important topic in various areas. Environmental sounds are mostly made up of a class of sounds called sound textures, which are usually non-sinusoidal, noisy and stochastic. Several studies have shown that humans recognize sound textures through statistics characterizing the envelopes of auditory critical bands. Existing synthesis algorithms can impose some statistical properties to a certain extent, but most of them are computationally intensive. We propose a new analysis-synthesis framework that combines a statistical description consisting of perceptually important statistics with an efficient mechanism to adapt statistics in the time-frequency domain. The quality of the resynthesized sound is at least as good as that of the state of the art, but more efficient in terms of computation time. The statistical description is based on the STFT; if certain conditions are met, it can also be adapted to other filter-bank-based time-frequency representations (TFRs). The adaptation of statistics is achieved by exploiting the connection between statistics on a TFR and the spectra of time-frequency-domain coefficients. It is also possible to adapt only part of the cross-correlation functions, which allows the synthesis process to focus on important statistics and ignore irrelevant ones, providing extra flexibility. The proposed algorithm opens several perspectives: it could be used to generate unseen sound textures from artificially created statistical descriptions, serve as a basis for transformations like stretching or morphing, or support semantic control of sound textures.
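The imposition mechanism described above can be caricatured in a few lines. The band count, target values, and the choice of per-band RMS as the "statistic" are all hypothetical simplifications of the thesis's envelope statistics; only the general idea of rescaling subband content of noise to match target statistics is taken from the abstract.

```python
import numpy as np

def impose_band_power(noise, target_rms, n_bands=4):
    """Rescale each of n_bands equal FFT bands of `noise` so that the
    band's RMS matches target_rms[b] -- a toy stand-in for subband
    statistic imposition in the time-frequency domain."""
    n = len(noise)
    spec = np.fft.rfft(noise)
    edges = np.linspace(0, len(spec), n_bands + 1, dtype=int)
    shaped = np.zeros_like(spec)
    for b in range(n_bands):
        band = np.zeros_like(spec)
        band[edges[b]:edges[b + 1]] = spec[edges[b]:edges[b + 1]]
        sig = np.fft.irfft(band, n)                  # band signal alone
        rms = np.sqrt(np.mean(sig ** 2)) + 1e-12     # its current RMS
        shaped[edges[b]:edges[b + 1]] = band[edges[b]:edges[b + 1]] * (target_rms[b] / rms)
    return np.fft.irfft(shaped, n)
```

Because the bands are disjoint in frequency, each band of the output hits its target RMS exactly; richer statistics (envelope moments, cross-band correlations) require the iterative schemes the thesis develops.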
4

Chapman, David P. « Playing with sounds : a spatial solution for computer sound synthesis ». Thesis, University of Bath, 1996. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.307047.

5

Lee, Chung. « Sound texture synthesis using an enhanced overlap-add approach / ». View abstract or full-text, 2008. http://library.ust.hk/cgi/db/thesis.pl?CSED%202008%20LEE.

6

Conan, Simon. « Contrôle intuitif de la synthèse sonore d’interactions solidiennes : vers les métaphores sonores ». Thesis, Ecole centrale de Marseille, 2014. http://www.theses.fr/2014ECDM0012/document.

Abstract:
Perceptual control (i.e. from evocations) of sound synthesis processes is a current challenge. Indeed, sound synthesis models generally involve many low-level control parameters whose manipulation requires expertise with the sound generation process. Intuitive control of sound generation is therefore valuable for users, especially non-experts, because it lets them create and control sounds from evocations. Such control is not immediate and rests on strong assumptions about our perception, in particular the existence of acoustic morphologies, so-called "invariants", responsible for the recognition of specific sound events. This thesis tackles the problem by focusing on invariants linked to specific sound-generating actions, and follows two main parts. The first aims to identify invariants responsible for the recognition of three categories of continuous interactions: rubbing, scratching and rolling. The goal is to develop a real-time sound synthesizer with intuitive controls that lets users morph continuously between these interactions (e.g. progressively transform a rubbing sound into a rolling one). The synthesis model is developed in the framework of the "action-object" paradigm, which states that sounds can be described as the result of an action (e.g. scratching) on an object (e.g. a wooden plate). This paradigm naturally fits the well-known source-filter approach to sound synthesis, where the perceptually relevant information linked to the object is captured in the "filter" part and the action-related information in the "source" part. To derive a generic synthesis model, several approaches are pursued: physical models, phenomenological approaches, and listening tests with recorded and synthesized sounds.
The second part of the thesis deals with the concept of "sonic metaphors", expanding the notion of object to various sound textures. The question raised is the following: given any sound texture, is it possible to modify its intrinsic properties so that it evokes a particular interaction, such as rolling or rubbing? To create these sonic metaphors, a cross-synthesis process is used in which the "source" part is based on the sound morphologies linked to the previously identified actions, while the "filter" part renders the properties of the texture. This work, together with the chosen paradigm, offers new perspectives for building a genuine language of sounds.
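As a rough illustration of the action-object / source-filter split, one can model the object as a fixed resonator and the action as an interchangeable source signal. All frequencies, decay rates and impulse densities below are invented for the example; the thesis's invariant-based source models are far richer.

```python
import numpy as np

def object_ir(sr=16000, dur=0.2):
    # toy "object": impulse response with two damped modes (made-up values)
    t = np.arange(int(sr * dur)) / sr
    return np.exp(-30 * t) * np.sin(2 * np.pi * 600 * t) \
         + 0.5 * np.exp(-50 * t) * np.sin(2 * np.pi * 1700 * t)

def action_source(kind, n, seed=0):
    # toy "action": rubbing = sustained broadband noise,
    # rolling = sparse irregular impulses
    rng = np.random.default_rng(seed)
    if kind == 'rubbing':
        return 0.1 * rng.standard_normal(n)
    if kind == 'rolling':
        src = np.zeros(n)
        src[rng.integers(0, n, 40)] = rng.uniform(0.5, 1.0, 40)
        return src
    raise ValueError(kind)

def morph(alpha, n=8000):
    """Crossfade between action sources, then filter by the object."""
    src = (1 - alpha) * action_source('rubbing', n) + alpha * action_source('rolling', n)
    return np.convolve(src, object_ir())[:n]
```

Crossfading the sources before filtering is only a crude stand-in for the perceptual morphing between interactions described in the abstract, but it shows where the "source" and "filter" roles sit in the signal chain.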
7

Caracalla, Hugo. « Sound texture synthesis from summary statistics ». Electronic Thesis or Diss., Sorbonne université, 2019. http://www.theses.fr/2019SORUS676.

Abstract:
Sound textures are a wide class of sounds that includes falling rain, the hubbub of a crowd and the chirping of flocks of birds. All these sounds present an element of unpredictability that is not commonly sought after in sound synthesis, requiring the use of dedicated algorithms. However, the diverse audio properties of sound textures make designing an algorithm able to convincingly recreate varied textures a complex task. This thesis focuses on parametric sound texture synthesis. In this paradigm, a set of summary statistics is extracted from a target texture and iteratively imposed onto white noise. If the set of statistics is appropriate, the white noise is modified until it resembles the target, sounding as if it had been recorded moments later. In a first part, we propose improvements to a perception-based parametric method, aimed at sharpening its synthesis of salient events, mainly by altering and simplifying its imposition process. In a second part, we adapt a parametric visual texture synthesis method based on statistics extracted by a convolutional neural network (CNN) to work on sound textures: we modify the computation of its statistics to fit the properties of sound signals, alter the architecture of the CNN to best fit the audio elements present in sound textures, and use a time-frequency representation taking both magnitude and phase into account.
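The iterative-imposition paradigm can be sketched in miniature: gradient descent drives a noise signal's summary statistics toward a target. Mean and variance here are hypothetical stand-ins for the perceptual and CNN statistics used in the thesis, and the learning rate and step count are arbitrary.

```python
import numpy as np

def summary_stats(x):
    # toy summary statistics: mean and variance (the real sets are
    # subband-envelope moments, correlations, or CNN activations)
    return np.array([x.mean(), x.var()])

def impose(x, target, lr=20.0, steps=500):
    """Gradient descent on ||summary_stats(x) - target||^2 w.r.t. x."""
    x = x.copy()
    n = len(x)
    for _ in range(steps):
        m, v = x.mean(), x.var()
        # d/dx_i of (m - t0)^2 is 2(m - t0)/n;
        # d/dx_i of (v - t1)^2 is 4(v - t1)(x_i - m)/n
        grad = 2 * (m - target[0]) / n + 4 * (v - target[1]) * (x - m) / n
        x -= lr * grad
    return x
```

The same loop structure, with a far richer statistics vector and automatic differentiation, underlies CNN-based parametric texture synthesis.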
8

Serquera, Jaime. « Sound synthesis with cellular automata ». Thesis, University of Plymouth, 2012. http://hdl.handle.net/10026.1/1189.

Abstract:
This thesis reports on new music technology research which investigates the use of cellular automata (CA) for the digital synthesis of dynamic sounds. The research addresses the problem of the sound design limitations of synthesis techniques based on CA. These limitations fundamentally stem from the unpredictable and autonomous nature of these computational models. Therefore, the aim of this thesis is to develop a sound synthesis technique based on CA capable of allowing a sound design process. A critical analysis of previous research in this area will be presented in order to justify that this problem has not been previously solved. Also, it will be discussed why this problem is worthwhile to solve. In order to achieve such aim, a novel approach is proposed which considers the output of CA as digital signals and uses DSP procedures to analyse them. This approach opens a large variety of possibilities for better understanding the self-organization process of CA with a view to identifying not only mapping possibilities for making the synthesis of sounds possible, but also control possibilities which enable a sound design process. As a result of this approach, this thesis presents a technique called Histogram Mapping Synthesis (HMS), which is based on the statistical analysis of CA evolutions by histogram measurements. HMS will be studied with four different automatons, and a considerable number of control mechanisms will be presented. These will show that HMS enables a reasonable sound design process. With these control mechanisms it is possible to design and produce in a predictable and controllable manner a variety of timbres. Some of these timbres are imitations of sounds produced by acoustic means and others are novel. All the sounds obtained present dynamic features and many of them, including some of those that are novel, retain important characteristics of sounds produced by acoustic means.
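A toy version of Histogram Mapping Synthesis might look as follows. The automaton rule, state count and histogram-to-partial mapping are all invented for illustration; the thesis studies four specific automata and far richer control mechanisms.

```python
import numpy as np

def ca_step(cells, k):
    # k-state 1D automaton: each cell becomes the sum of its two
    # neighbours modulo k (a hypothetical rule, chosen for brevity)
    return (np.roll(cells, 1) + np.roll(cells, -1)) % k

def hms_frame(hist, freqs, sr=16000, dur=0.01):
    # map the state histogram to partial amplitudes of one
    # additive-synthesis frame
    t = np.arange(int(sr * dur)) / sr
    amps = hist / hist.sum()
    return sum(a * np.sin(2 * np.pi * f * t) for a, f in zip(amps, freqs))

def synthesize(n_cells=64, k=8, n_frames=20, seed=0):
    rng = np.random.default_rng(seed)
    cells = rng.integers(0, k, n_cells)
    freqs = 220.0 * np.arange(1, k + 1)   # harmonic partials (arbitrary)
    frames = []
    for _ in range(n_frames):
        hist = np.bincount(cells, minlength=k).astype(float)
        frames.append(hms_frame(hist, freqs))
        cells = ca_step(cells, k)
    return np.concatenate(frames)
```

As the automaton self-organizes, the histogram, and hence the spectrum, evolves over time, which is the dynamic-timbre behavior HMS exploits.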
9

Picard-Limpens, Cécile. « Expressive Sound Synthesis for Animation ». PhD thesis, Université de Nice Sophia-Antipolis, 2009. http://tel.archives-ouvertes.fr/tel-00440417.

Abstract:
The main objective of this work is to provide tools for real-time, realistic and expressive synthesis of the sounds produced by physical interactions between objects in a virtual scene. Indeed, such sound effects, for instance collision sounds between solids or sounds of continuous interactions between surfaces, cannot be predefined and computed at the pre-production stage. In this context, we propose two approaches, the first based on modeling the physical phenomena at the origin of sound emission, the second based on the processing of audio recordings. In the physics-based approach, the sound source is treated as the combination of an excitation and a resonator. We first present an original technique that renders the interaction force between surfaces in the case of continuous contacts such as rolling; it relies on analyzing the textures used for the graphical rendering of the surfaces in the virtual scene. We then propose a robust and flexible modal-analysis method that renders the acoustic vibrations of the resonator. Besides handling a wide variety of geometries and offering multi-resolution modal parameters, the method solves the problem of coherence between the physics simulation and the sound synthesis, a problem frequently encountered in animation. In the empirical approach, we propose a granular technique that expresses sound synthesis as a coherent arrangement of sound particles, or grains. The method first pre-processes recordings to build a compact body of sound material, which is then manipulated in real time, on the one hand for complete resynthesis of the original recordings, and on the other hand for flexible use driven by data reported by the simulation engine and/or predefined procedures.
Finally, we turn to fracture sounds, given their frequent use in virtual environments, and in video games in particular. While the complexity of the phenomenon makes a purely physical model very costly, the use of recordings is also ill-suited to the wide variety of sonic micro-events. The thesis therefore proposes a hybrid model and possible strategies for combining a physical approach with an empirical one. The resulting model aims to reproduce the sound event of a fracture, from its initiation to the creation of micro-debris.
10

Picard, Limpens Cécile. « Expressive sound synthesis for animation ». Nice, 2009. http://www.theses.fr/2009NICE4075.

Abstract:
The main objective of this thesis is to provide tools for expressive, real-time synthesis of sounds resulting from physical interactions of various objects in a 3D virtual environment. Indeed, these sounds, such as collision sounds or sounds of continuous interaction between surfaces, are difficult to create in a pre-production process since they are highly dynamic and vary drastically depending on the interaction and the objects. To achieve this goal, two approaches are proposed: the first is based on simulating the physical phenomena responsible for sound production, the second on processing a database of recordings. From a physically based point of view, the sound source is modelled as the combination of an excitation and a resonator. We first present an original technique to model the interaction force for continuous contacts, such as rolling: visual textures of objects in the environment are reused as a discontinuity map to create audible position-dependent variations during continuous contacts. We then propose a method for robust and flexible modal analysis to formulate the resonator. Besides handling a large variety of geometries and offering multi-resolution modal parameters, the technique solves the problems of coherence between physics simulation and sound synthesis that are frequently encountered in animation. Following a more empirical approach, we propose a method that bridges the gap between direct playback of audio recordings and physically based synthesis by retargeting audio grains extracted from recordings according to the output of a physics engine. In an off-line analysis task, we automatically segment audio recordings into atomic grains and represent each original recording as a compact series of audio grains.
During interactive animations, the grains are triggered individually or in sequence according to parameters reported from the physics engine and/or user-defined procedures. Finally, we address fracture events, which commonly appear in virtual environments, especially in video games. Their complexity makes a purely physics-based model prohibitively expensive and an empirical approach impracticable given the large variety of micro-events, so this thesis opens the discussion on a hybrid model and possible strategies to combine a physically based approach with an empirical one. The model aims at appropriately rendering the sound of the fracture and of each specific sounding sample as the material breaks into pieces.
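The modal resonator underlying the physically based approach reduces, per excitation, to a bank of exponentially decaying sinusoids. A minimal sketch, with mode frequencies, decay rates and gains invented for the example:

```python
import numpy as np

def modal_impact(freqs, decays, amps, sr=16000, dur=0.5):
    """Impulse response of a modal resonator: a sum of damped sinusoids.
    freqs in Hz, decays in 1/s, amps are linear mode gains."""
    t = np.arange(int(sr * dur)) / sr
    return sum(a * np.exp(-d * t) * np.sin(2 * np.pi * f * t)
               for f, d, a in zip(freqs, decays, amps))
```

An impact excitation simply triggers this response; a continuous contact force (e.g. the texture-derived rolling force above) would instead be convolved with it.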
11

Lee, JungSuk. « Categorization and modeling of sound sources for sound analysis/synthesis ». Thesis, McGill University, 2013. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=116954.

Abstract:
In this thesis, various sound analysis/re-synthesis schemes are investigated in a source/filter model framework, with emphasis on the source component. This research provides improved methods and tools for sound designers, composers and musicians to flexibly analyze and synthesize sounds used for gaming, film or computer music, ranging from abstract, complex sounds to those of real musical instruments. First, an analysis-synthesis scheme for the reproduction of a rolling-ball sound is presented. The proposed scheme is based on the assumption that the rolling sound is generated by a concatenation of micro-contacts between a ball and a surface, each having associated resonances. Contact timing information is extracted from the rolling sound using an onset detection process, allowing for segmentation of the rolling sound. Segmented sound snippets are presumed to correspond to micro-contacts between the ball and the surface; thus, subband-based linear predictions (LP) are performed to model time-varying resonances and anti-resonances. The segments are then resynthesized and overlap-added to form a complete rolling sound. A "granular" analysis/synthesis approach is also applied to various kinds of environmental sounds (rain, fireworks, walking, clapping) as an additional investigation into how the source type influences the strategic choices for the analysis/synthesis of sounds. The proposed granular analysis/synthesis system allows for flexible analysis of complex sounds and re-synthesis with temporal modification. Lastly, a novel approach to extract a pluck excitation from a recorded plucked-string sound is proposed within a source/filter context using physical models. A time-domain windowing method and an inverse-filtering-based method are devised based on the behavior of wave propagation on the string. In addition, a parametric model of the pluck excitation, as well as a method to estimate its parameters, is addressed.
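The linear prediction step can be illustrated with the standard autocorrelation method and the Levinson-Durbin recursion, sketched here without the thesis's onset segmentation and subband filtering:

```python
import numpy as np

def lpc(x, order):
    """LP coefficients a (with a[0] = 1) via the autocorrelation method
    and the Levinson-Durbin recursion."""
    n = len(x)
    # autocorrelation at lags 0..order
    r = np.array([np.dot(x[:n - k], x[k:]) for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err                      # reflection coefficient
        prev = a[1:i].copy()
        a[1:i] = prev + k * prev[::-1]
        a[i] = k
        err *= 1.0 - k * k                  # residual energy update
    return a
```

Applied per subband to each segmented micro-contact snippet, such coefficients model the time-varying resonances that are then resynthesized and overlap-added.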
Styles APA, Harvard, Vancouver, ISO, etc.
12

García, Ricardo A. (Ricardo Antonio) 1974. « Automatic generation of sound synthesis techniques ». Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/61542.

Texte intégral
Résumé :
Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2001.
Includes bibliographical references (p. 97-98).
Digital sound synthesizers, ubiquitous today in sound cards, software and dedicated hardware, use algorithms (Sound Synthesis Techniques, SSTs) capable of generating sounds similar to those of acoustic instruments and even totally novel sounds. The design of SSTs is a very hard problem. It is usually assumed that it requires human ingenuity to design an algorithm suitable for synthesizing a sound with certain characteristics. Many of the SSTs commonly used are the fruit of experimentation and long refinement processes. An SST is determined by its "functional form" and "internal parameters". Design of SSTs is usually done by selecting a fixed functional form from a handful of commonly used SSTs, and performing a parameter estimation technique to find a set of internal parameters that will best emulate the target sound. A new approach for automating the design of SSTs is proposed. It uses a set of examples of the desired behavior of the SST in the form of "inputs + target sound". The approach is capable of suggesting novel functional forms and their internal parameters, suited to closely follow the given examples. Design of an SST is stated as a search problem in the SST space (the space spanned by all the possible valid functional forms and internal parameters, within certain limits to make it practical). This search is done using evolutionary methods; specifically, Genetic Programming (GP). A custom language for representing and manipulating SSTs as topology graphs and expression trees is proposed, as well as the mapping rules between both representations. Fitness functions that use analytical and perceptual distance metrics between the target and produced sounds are discussed. The AGeSS system (Automatic Generation of Sound Synthesizers) developed in the Media Lab is outlined, and some SSTs and their evolution are shown.
by Ricardo A. García.
S.M.
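The evolutionary search over synthesis "functional forms" that the abstract describes can be illustrated with a deliberately tiny genetic-programming sketch (hypothetical code, not the AGeSS system; the operator set, tree depth, population size and mean-squared-error fitness are all assumptions):

```python
import math, random

# Expression trees over {sin, add, mul, t, const}: a miniature SST space.
OPS = ["sin", "add", "mul"]

def rand_tree(depth, rng):
    """Randomly grow one candidate functional form."""
    if depth == 0 or rng.random() < 0.3:
        return ("t",) if rng.random() < 0.5 else ("c", rng.uniform(1.0, 20.0))
    op = rng.choice(OPS)
    if op == "sin":
        return ("sin", rand_tree(depth - 1, rng))
    return (op, rand_tree(depth - 1, rng), rand_tree(depth - 1, rng))

def evaluate(tree, t):
    tag = tree[0]
    if tag == "t":   return t
    if tag == "c":   return tree[1]
    if tag == "sin": return math.sin(evaluate(tree[1], t))
    a, b = evaluate(tree[1], t), evaluate(tree[2], t)
    return a + b if tag == "add" else a * b

def fitness(tree, target, ts):
    """Negative squared error against the example 'target sound'."""
    return -sum((evaluate(tree, t) - y) ** 2 for t, y in zip(ts, target))

rng = random.Random(1)
ts = [i / 100.0 for i in range(100)]
target = [math.sin(7.0 * t) for t in ts]          # the sound to imitate
pop = [rand_tree(3, rng) for _ in range(60)]      # one random generation
best = max(pop, key=lambda tr: fitness(tr, target, ts))
best_fit = fitness(best, target, ts)
```

A real GP run would add crossover, mutation, and many generations; this shows only the representation and fitness evaluation.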
Styles APA, Harvard, Vancouver, ISO, etc.
13

Hahn, Henrik. « Expressive sampling synthesis. Learning extended source-filter models from instrument sound databases for expressive sample manipulations ». Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066564/document.

Texte intégral
Résumé :
Within this thesis an imitative sound synthesis system will be introduced that is applicable to most quasi-harmonic instruments. The system is based upon single-note recordings that represent a quantized version of an instrument's possible timbre space with respect to its pitch and intensity dimensions. A transformation method then makes it possible to render sound signals with continuous values of the expressive control parameters which are perceptually coherent with their acoustic equivalents. A parametric instrument model is therefore presented, based on an extended source-filter model with separate manipulations of a signal's harmonic and residual components. A subjective evaluation procedure will be shown to assess a variety of transformation results by a direct comparison with unmodified recordings, to determine how perceptually close the synthesis results are to their respective acoustic correlates.
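A minimal source-filter style decomposition, with the harmonic and residual components generated and manipulated separately as the abstract describes, might look like this (illustrative sketch, not Hahn's model; the 1/k spectral envelope, one-pole noise filter, and gains are assumptions):

```python
import math, random

def harmonic_part(f0, amps, sr, n):
    """Harmonic component: partials at k*f0 (the 'source'),
    shaped by per-partial amplitudes (the 'filter' envelope)."""
    out = []
    for i in range(n):
        t = i / sr
        out.append(sum(a * math.sin(2 * math.pi * k * f0 * t)
                       for k, a in enumerate(amps, start=1)))
    return out

def residual_part(n, gain, seed=0):
    """Residual component: lowpass-filtered noise, scaled independently
    of the harmonic part."""
    rng = random.Random(seed)
    y, prev = [], 0.0
    for _ in range(n):
        prev = 0.9 * prev + 0.1 * rng.uniform(-1, 1)  # one-pole lowpass
        y.append(gain * prev)
    return y

sr, n = 8000, 1600
amps = [1.0 / k for k in range(1, 6)]     # 1/k envelope (assumption)
tone = harmonic_part(220.0, amps, sr, n)
noise = residual_part(n, gain=0.1)
mix = [a + b for a, b in zip(tone, noise)]  # recombine after manipulation
```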
Styles APA, Harvard, Vancouver, ISO, etc.
14

Zita, Andreas. « Computational Real-Time Sound Synthesis of Rain ». Thesis, Linköping University, Department of Science and Technology, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-1830.

Texte intégral
Résumé :

Real-time sound synthesis in computer games using physical modeling is an area with great potential. To date, most sounds are pre-recorded to match a certain event. Instead, by using a model to describe the sound-producing event, a number of problems encountered when using pre-recorded sounds can be avoided. This thesis will introduce these problems and present a solution. The thesis will also evaluate one such physical model, for rain sounds, and implement a real-time simulation to demonstrate the advantages of the method.
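A stochastic stand-in for such a rain model can be sketched as Poisson-distributed drop events, each triggering a short decaying burst (toy code, not Zita's model; the arrival rate, resonance range, and decay constant are assumptions):

```python
import math, random

def rain(sr=8000, dur=1.0, rate=200.0, seed=3):
    """Rain sketch: drop arrivals form a Poisson process; each drop adds
    a short exponentially decaying sine burst with a random 'resonance'."""
    rng = random.Random(seed)
    n = int(sr * dur)
    y = [0.0] * n
    t = 0.0
    while True:
        t += rng.expovariate(rate)          # time to next drop (seconds)
        if t >= dur:
            break
        start = int(t * sr)
        f = rng.uniform(800.0, 4000.0)      # drop resonance (assumption)
        a = rng.uniform(0.05, 0.3)
        for i in range(min(200, n - start)):
            y[start + i] += a * math.exp(-i / 30.0) * math.sin(2 * math.pi * f * i / sr)
    return y

y = rain()
```

Because every drop is generated on the fly, parameters such as rate and intensity can be varied continuously at run time, which is exactly the flexibility pre-recorded samples lack.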

Styles APA, Harvard, Vancouver, ISO, etc.
15

Nilsson, Robin Lindh. « Contact Sound Synthesis in Real-time Applications ». Thesis, Blekinge Tekniska Högskola, Institutionen för kreativa teknologier, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3938.

Texte intégral
Résumé :
Synthesizing sounds which occur when physically-simulated objects collide in a virtual environment can give more dynamic and realistic sounds compared to pre-recorded sound effects. This real-time computation of sound samples can be computationally intense. In this study we investigate a synthesis algorithm operating in the frequency domain, previously shown to be more efficient than time domain synthesis, and propose a further optimization using multi-threading on the CPU. The multi-threaded synthesis algorithm was designed and implemented as part of a game being developed by Axolot Games. Measurements were done in three stress-testing cases to investigate how multi-threading improved the synthesis performance. Compared to our single-threaded approach, the synthesis speed was improved by 80% when using 8 threads, running on an i7 processor with hyper-threading enabled. We conclude that synthesis of contact sounds is viable for games and similar real-time applications, when using the investigated optimization. 140000 mode shapes were synthesized 30% faster than real-time, and this is arguably much more than a user can distinguish.
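The underlying modal model, and the mode-partitioning idea behind the multi-threaded optimization, can be sketched as follows (illustrative Python, not the Axolot implementation; the mode list, damping values, and thread count are assumptions):

```python
import math
from concurrent.futures import ThreadPoolExecutor

def synth_modes(modes, sr, n):
    """Modal model for contact sounds: a sum of exponentially damped
    sinusoids, one (freq_hz, damping, amplitude) triple per mode."""
    y = [0.0] * n
    for f, d, a in modes:
        w = 2 * math.pi * f / sr
        for i in range(n):
            y[i] += a * math.exp(-d * i / sr) * math.sin(w * i)
    return y

def synth_parallel(modes, sr, n, workers=4):
    """Partition the mode list across threads and sum the partial
    results, mirroring the multi-threaded optimization described above."""
    chunks = [modes[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as ex:
        parts = list(ex.map(lambda m: synth_modes(m, sr, n), chunks))
    return [sum(vals) for vals in zip(*parts)]

modes = [(220.0 * k, 8.0, 1.0 / k) for k in range(1, 9)]
sr, n = 8000, 1000
serial = synth_modes(modes, sr, n)
parallel = synth_parallel(modes, sr, n)
```

Summing damped sinusoids is embarrassingly parallel across modes, which is why partitioning the mode list scales well; the thesis does this in the frequency domain for further savings.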

Author's website: www.robinerd.com

Styles APA, Harvard, Vancouver, ISO, etc.
16

Vigliensoni, Martin Augusto. « Touchless gestural control of concatenative sound synthesis ». Thesis, McGill University, 2011. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=104846.

Texte intégral
Résumé :
This thesis presents research on three-dimensional position tracking technologies used to control concatenative sound synthesis and applies the achieved research results to the design of a new immersive interface for musical expression. The underlying concepts and characteristics of position tracking technologies are reviewed and musical applications using these technologies are surveyed to exemplify their use. Four position tracking systems based on different technologies are empirically compared according to their performance parameters, technical specifications, and practical considerations of use. Concatenative sound synthesis, a corpus-based synthesis technique grounded on the segmentation, analysis and concatenation of sound units, is discussed. Three implementations of this technique are compared according to the characteristics of the main components involved in the architecture of these systems. Finally, this thesis introduces SoundCloud, an implementation that extends the interaction possibilities of one of the concatenative synthesis systems reviewed, providing a novel visualisation application. SoundCloud allows a musician to perform with a database of sounds distributed in a three-dimensional descriptor space by exploring a performance space with her hands.
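The unit-selection core of concatenative synthesis — matching target descriptors against a corpus while penalizing discontinuity between consecutive units — can be sketched as (hypothetical code; the descriptors, weights, and corpus are invented):

```python
import math, random

def select_units(targets, corpus, w_match=1.0, w_concat=0.3):
    """Greedy unit selection: for each target descriptor vector, pick the
    corpus unit minimizing the target distance plus a concatenation cost
    to the previously chosen unit (a simplified selection scheme)."""
    path, prev = [], None
    for tgt in targets:
        def cost(u):
            d = math.dist(u["desc"], tgt)
            if prev is not None:
                d += w_concat * math.dist(u["desc"], prev["desc"])
            return w_match * d
        best = min(corpus, key=cost)
        path.append(best)
        prev = best
    return path

rng = random.Random(7)
# Corpus of sound units, each tagged with a 2-D descriptor
# (e.g. normalized spectral centroid and loudness — an assumption).
corpus = [{"id": i, "desc": (rng.random(), rng.random())} for i in range(50)]
targets = [(0.1, 0.2), (0.15, 0.25), (0.8, 0.9)]
units = select_units(targets, corpus)
```

In SoundCloud the targets come from hand positions in a 3-D descriptor space; here they are fixed points for brevity.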
Styles APA, Harvard, Vancouver, ISO, etc.
17

Valsamakis, Nikolas. « Non-standard sound synthesis with dynamic models ». Thesis, University of Plymouth, 2013. http://hdl.handle.net/10026.1/2841.

Texte intégral
Résumé :
This Thesis proposes three main objectives: (i) to provide the concept of a new generalized non-standard synthesis model that would provide the framework for incorporating other non-standard synthesis approaches; (ii) to explore dynamic sound modeling through the application of new non-standard synthesis techniques and procedures; and (iii) to experiment with dynamic sound synthesis for the creation of novel sound objects. In order to achieve these objectives, this Thesis introduces a new paradigm for non-standard synthesis that is based on the algorithmic assemblage of minute wave segments to form sound waveforms. This paradigm is called Extended Waveform Segment Synthesis (EWSS) and incorporates a hierarchy of algorithmic models for the generation of microsound structures. The concepts of EWSS are illustrated with the development and presentation of a novel non-standard synthesis system, the Dynamic Waveform Segment Synthesis (DWSS). DWSS features and combines a variety of algorithmic models for direct synthesis generation: list generation and permutation, tendency masks, trigonometric functions, stochastic functions, chaotic functions and grammars. The core mechanism of DWSS is based on an extended application of Cellular Automata. The potential of the synthetic capabilities of DWSS is explored in a series of Case Studies where a number of sound objects were generated, revealing (i) the capabilities of the system to generate sound morphologies belonging to other non-standard synthesis approaches and (ii) its capability to generate novel sound objects with dynamic morphologies.
The introduction of EWSS and DWSS is preceded by an extensive and critical overview of the concepts of microsound synthesis, algorithmic composition, the two cultures of computer music, the heretical approach in composition, non-standard synthesis and sonic emergence, along with a thorough examination of algorithmic models and their application in sound synthesis and electroacoustic composition. This Thesis also proposes (i) a new definition of “algorithmic composition”, (ii) the term “totalistic algorithmic composition”, and (iii) four discrete aspects of non-standard synthesis.
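The idea of assembling a waveform from algorithmically generated micro-segments, with a cellular automaton as the core mechanism, can be illustrated like this (a toy sketch, not DWSS itself; the rule number and the row-to-sample mapping are assumptions):

```python
def ca_step(cells, rule=110):
    """One step of an elementary cellular automaton (wrap-around edges);
    the rule's bits give the next state for each 3-cell neighbourhood."""
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

def ca_waveform(width=64, steps=200, rule=110):
    """Map each CA row to one sample by averaging the live cells and
    centering: a toy analogue of building a waveform from algorithmic
    micro-segments rather than from an acoustic model."""
    cells = [0] * width
    cells[width // 2] = 1
    samples = []
    for _ in range(steps):
        samples.append(sum(cells) / width - 0.5)
        cells = ca_step(cells, rule)
    return samples

wave = ca_waveform()
```

Different rules (chaotic, periodic, nested) yield audibly different dynamic morphologies, which is the attraction of CA-driven non-standard synthesis.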
Styles APA, Harvard, Vancouver, ISO, etc.
18

Liuni, Marco. « Automatic adaptation of sound analysis and synthesis ». Paris 6, 2012. http://www.theses.fr/2012PA066105.

Texte intégral
Résumé :
In Time-Frequency Analysis, adaptivity is the possibility to conceive representations and operators whose characteristics can be modeled according to their input. In this work, we look for methods providing a local variation of the time-frequency resolution for sound analysis and re-synthesis. The first and fundamental objective is thus the formal definition of mathematical models whose interpretation leads to theoretical and algorithmic methods for adaptive sound analysis. The second objective is to make the adaptation automatic; we establish criteria to define the best local time-frequency resolution, through the optimization of appropriate sparsity measures. To be able to exploit adaptivity in spectral sound processing, we then introduce efficient re-synthesis methods based on analyses with varying resolution, designed to preserve and improve the existing sound transformation techniques. Our main assumption is that algorithms based on adaptive representations will help to establish a generalization and simplification for the application of signal processing methods that today still require expert knowledge. In particular, the need to provide manual low-level configuration is a major limitation for the use of advanced signal processing methods by large communities. The availability of automatic time-frequency resolution drastically reduces the number of parameters to set without degrading, and even while improving, the processing quality: the result is an improved user experience, even with high-quality sound processing techniques such as transposition and time-stretching.
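Sparsity-driven selection of the analysis resolution can be illustrated by scoring candidate window sizes with an L1/L2 norm ratio and keeping the sparser one (a toy sketch, not Liuni's method; the window sizes, test signals, and global rather than local selection are all simplifying assumptions):

```python
import numpy as np

def frame_spectra(x, win):
    """Magnitude spectra of half-overlapping Hann-windowed frames."""
    hop = win // 2
    frames = [x[i:i + win] * np.hanning(win)
              for i in range(0, len(x) - win + 1, hop)]
    return [np.abs(np.fft.rfft(f)) for f in frames]

def sparsity(spec):
    """L1/L2 norm ratio: smaller means energy concentrated in fewer
    bins, i.e. a sparser (better-resolved) spectrum."""
    return float(np.sum(spec)) / (float(np.linalg.norm(spec)) + 1e-12)

def pick_resolution(x, wins=(128, 1024)):
    """Keep the analysis window whose average frame sparsity is best —
    a toy, global version of sparsity-driven resolution selection."""
    scores = {w: np.mean([sparsity(s) for s in frame_spectra(x, w)])
              for w in wins}
    return min(scores, key=scores.get)

sr = 8000
t = np.arange(2048) / sr
steady = np.sin(2 * np.pi * 440 * t)          # stationary tone
click = np.zeros(2048); click[1000] = 1.0     # isolated transient
best_tone = pick_resolution(steady)
best_click = pick_resolution(click)
```

For the transient, short windows localize the impulse into few frames, so the L1/L2 score favors the 128-sample analysis; the thesis makes this choice locally per time-frequency region rather than per signal.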
Styles APA, Harvard, Vancouver, ISO, etc.
19

Schwarz, Diemo. « Spectral envelopes in sound analysis and synthesis ». [S.l.] : Universität Stuttgart , Fakultät Informatik, 1998. http://www.bsz-bw.de/cgi-bin/xvms.cgi?SWB7084238.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
20

Wang, Shuai, School of Electrical Engineering & Telecommunication UNSW. « Soundfield analysis and synthesis : recording, reproduction and compression ». Awarded by:University of New South Wales. School of Electrical Engineering and Telecommunication, 2007. http://handle.unsw.edu.au/1959.4/31502.

Texte intégral
Résumé :
Globally, the ever increasing consumer interest in multichannel audio is a major factor driving the research intent in soundfield reconstruction and compression. The popularity of the well commercialized 5.1 surround sound system and its 6-channel audio has been strongly supported by the advent of a powerful storage medium, DVD, as well as the use of efficient telecommunication techniques. However, this popularity has also revealed potential problems in the development of soundfield systems. Firstly, currently available soundfield systems have rather poor compatibility with irregular speaker arrangements. Secondly, the bandwidth requirement is dramatically increased for multichannel audio representation with good temporal and spatial fidelity. This master's thesis addresses these two major issues in soundfield systems. It introduces a new approach to analyze and synthesize soundfields, and compares this approach with currently popular systems. To facilitate this comparison, the behavior of soundfields has been reviewed from both physical and psychoacoustic perspectives, along with an extensive study of past and present soundfield systems and multichannel audio compression algorithms. First-order High Spatial Resolution (HSR) soundfield recording and reproduction has been implemented in this project, and subjectively evaluated using a series of MUSHRA tests to finalize the comparison.
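First-order (B-format) encoding and a simple virtual-microphone decode — the kind of operations underlying such a soundfield system — can be sketched as follows (illustrative code; the cardioid decode coefficients are one common textbook choice, not necessarily the thesis's):

```python
import math

def encode_bformat(signal, azimuth):
    """First-order Ambisonic (B-format) encoding of a mono source at a
    given azimuth (horizontal plane only): W is omnidirectional,
    X and Y are figure-eight components."""
    w = [s / math.sqrt(2) for s in signal]
    x = [s * math.cos(azimuth) for s in signal]
    y = [s * math.sin(azimuth) for s in signal]
    return w, x, y

def decode_to_speaker(w, x, y, speaker_az):
    """Decode to one loudspeaker as a virtual cardioid microphone aimed
    at speaker_az: gain 0.5*(1 + cos(theta - speaker_az)) for a source
    at azimuth theta."""
    cw = math.sqrt(2) / 2
    cx, cy = 0.5 * math.cos(speaker_az), 0.5 * math.sin(speaker_az)
    return [cw * wi + cx * xi + cy * yi for wi, xi, yi in zip(w, x, y)]

# A front source reaches the front speaker at full gain, the rear at ~0.
w, x, y = encode_bformat([1.0] * 8, azimuth=0.0)
front = decode_to_speaker(w, x, y, speaker_az=0.0)
back = decode_to_speaker(w, x, y, speaker_az=math.pi)
```

Because the decode gains depend only on the speaker azimuths, the same B-format stream can feed regular or irregular speaker layouts, which is the compatibility issue the thesis raises.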
Styles APA, Harvard, Vancouver, ISO, etc.
21

Giannakis, Konstantinos. « Sound mosaics : a graphical user interface for sound synthesis based on audio-visual associations ». Thesis, Middlesex University, 2001. http://eprints.mdx.ac.uk/6634/.

Texte intégral
Résumé :
This thesis presents the design of a Graphical User Interface (GUI) for computer-based sound synthesis to support users in the externalisation of their musical ideas when interacting with the system in order to create and manipulate sound. The approach taken consisted of three research stages. The first stage was the formulation of a novel visualisation framework to display perceptual dimensions of sound in visual terms. This framework was based on the findings of existing related studies and a series of empirical investigations of the associations between auditory and visual percepts that we performed for the first time in the area of computer-based sound synthesis. The results of our empirical investigations suggested associations between the colour dimensions of brightness and saturation with the auditory dimensions of pitch and loudness respectively, as well as associations between the multidimensional percepts of visual texture and timbre. The second stage of the research involved the design and implementation of Sound Mosaics, a prototype GUI for sound synthesis based on direct manipulation of visual representations that make use of the visualisation framework developed in the first stage. We followed an iterative design approach that involved the design and evaluation of an initial Sound Mosaics prototype. The insights gained during this first iteration assisted us in revising various aspects of the original design and visualisation framework that led to a revised implementation of Sound Mosaics. The final stage of this research involved an evaluation study of the revised Sound Mosaics prototype that comprised two controlled experiments. First, a comparison experiment with the widely used frequency-domain representations of sound indicated that visual representations created with Sound Mosaics were more comprehensible and intuitive.
Comprehensibility was measured as the level of accuracy in a series of sound-image association tasks, while intuitiveness was related to subjects' response times and perceived levels of confidence. Second, we conducted a formative evaluation of Sound Mosaics, in which it was exposed to a number of users with and without musical backgrounds. Three usability factors were measured: effectiveness, efficiency, and subjective satisfaction. Sound Mosaics was demonstrated to perform satisfactorily on all three factors for music subjects, although non-music subjects yielded less satisfactory results that can be primarily attributed to the subjects' unfamiliarity with the task of sound synthesis. Overall, our research has set the necessary groundwork for empirically derived and validated associations between auditory and visual dimensions that can be used in the design of cognitively useful GUIs for computer-based sound synthesis and related areas.
Styles APA, Harvard, Vancouver, ISO, etc.
22

Coleman, Graham Keith. « Descriptor control of sound transformations and mosaicing synthesis ». Doctoral thesis, Universitat Pompeu Fabra, 2016. http://hdl.handle.net/10803/392138.

Texte intégral
Résumé :
Sampling, as a musical or synthesis technique, is a way to reuse recorded musical expressions. In this dissertation, several ways to expand sampling synthesis are explored, especially mosaicing synthesis, which imitates target signals by transforming and compositing source sounds, in the manner of a mosaic made of broken tile. One branch of extension consists of the automatic control of sound transformations towards targets defined in a perceptual space. The approach chosen uses models that predict how the input sound will be transformed as a function of the selected parameters. In one setting, the models are known, and numerical search can be used to find sufficient parameters; in the other, they are unknown and must be learned from data. Another branch focuses on the sampling itself. By mixing multiple sounds at once, perhaps it is possible to make better imitations, e.g. in terms of the harmony of the target. However, using mixtures leads to new computational problems, especially if properties like continuity, important to high quality sampling synthesis, are to be preserved. A new mosaicing synthesizer is presented which incorporates all of these elements: supporting automatic control of sound transformations using models, mixtures supported by perceptually relevant harmony and timbre descriptors, and preservation of continuity of the sampling context and transformation parameters. Using listening tests, the proposed hybrid algorithm was compared against classic and contemporary algorithms, and the hybrid algorithm performed well on a variety of quality measures.
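The model-based numerical search for transformation parameters that reach a descriptor target can be reduced to a toy example: bisection on a gain parameter so that the transformed sound hits a target RMS loudness (hypothetical code; the descriptor, search bounds, and the gain-as-transformation choice are assumptions):

```python
import math

def rms(x):
    """RMS level, standing in for a perceptual loudness descriptor."""
    return (sum(v * v for v in x) / len(x)) ** 0.5

def solve_gain(source, target_rms, lo=0.0, hi=10.0, iters=50):
    """Bisection search for the transformation parameter (a gain) that
    moves the source's descriptor to the target value. This is the
    'known model' case: rms(g * x) is monotone in g, so a 1-D numerical
    search suffices."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if rms([mid * v for v in source]) < target_rms:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

source = [math.sin(0.01 * i) for i in range(1000)]
g = solve_gain(source, target_rms=0.5)
```

When the transformation model is unknown, the thesis instead learns a predictor from data and searches over its inputs; the search structure stays the same.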
Styles APA, Harvard, Vancouver, ISO, etc.
23

Van, den Doel Cornelis Pieter. « Sound synthesis for virtual reality and computer games ». Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape7/PQDD_0005/NQ38993.pdf.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
24

Mohd, Norowi Noris. « An artificial intelligence approach to concatenative sound synthesis ». Thesis, University of Plymouth, 2013. http://hdl.handle.net/10026.1/1606.

Texte intégral
Résumé :
Technological advancement such as the increase in processing power, hard disk capacity and network bandwidth has opened up many exciting new techniques to synthesise sounds, one of which is Concatenative Sound Synthesis (CSS). CSS uses a data-driven method to synthesise new sounds from a large corpus of small sound snippets. This technique closely resembles the art of mosaicing, where small tiles are arranged together to create a larger image. A ‘target’ sound is often specified by users so that segments in the database that match those of the target sound can be identified and then concatenated together to generate the output sound. Whilst the practicality of CSS in synthesising sounds currently looks promising, there are still areas to be explored and improved, in particular the algorithm that is used to find the matching segments in the database. One of the main issues in CSS is the basis of similarity, as there are many perceptual attributes on which sound similarity can be based, for example timbre, loudness, rhythm, tempo and so on. An ideal CSS system needs to be able to decipher which of these perceptual attributes are anticipated by the users and then accommodate them by synthesising sounds that are similar with respect to the particular attribute. Failure to communicate the basis of sound similarity between the user and the CSS system generally results in output that mismatches the sound which has been envisioned by the user. In order to understand how humans perceive sound similarity, several elements that affected sound similarity judgement were first investigated. Of the four elements tested (timbre, melody, loudness, tempo), it was found that the basis of similarity depends on musical training: musicians base similarity on timbral information, whilst non-musicians rely on melodic information.
Thus, for the rest of the study, only features that represent the timbral information were included, as musicians are the target users for the findings of this study. Another issue with the current state of CSS systems is user control flexibility, in particular during segment matching, where features can be assigned different weights depending on their importance to the search. Typically, the weights (in some existing CSS systems that support the weight-assigning mechanism) can only be assigned manually, resulting in a process that is both labour intensive and time consuming. Additionally, another problem was identified in this study, which is the lack of a mechanism to handle homosonic and equidistant segments. These conditions arise when too few features are compared, causing otherwise aurally different sounds to be represented by the same sonic values, or can also be a result of rounding off the values of the features extracted. This study addresses both of these problems through an extended use of Artificial Intelligence (AI). The Analytic Hierarchy Process (AHP) is employed to enable order-dependent feature selection, allowing weights to be assigned to each audio feature according to its relative importance. Concatenation distance is used to overcome the issues with homosonic and equidistant sound segments. The inclusion of AI results in a more intelligent system that can better handle tedious tasks and minimise human error, allowing users (composers) to worry less about mundane tasks and focus more on the creative aspects of music making. In addition to the above, this study also aims to enhance user control flexibility in a CSS system and improve similarity results. The key factors that affect the synthesis results of CSS were first identified and then included as parametric options which users can control in order to communicate their intended creations to the system to synthesise.
Comprehensive evaluations were carried out to validate the feasibility and effectiveness of the proposed solutions (timbral-based features set, AHP, and concatenation distance). The final part of the study investigates the relationship between perceived sound similarity and perceived sound interestingness. A new framework that integrates all these solutions, the query-based CSS framework, was then proposed. The proof-of-concept of this study, ConQuer, was developed based on this framework. This study has critically analysed the problems in existing CSS systems. Novel solutions have been proposed to overcome them and their effectiveness has been tested and discussed, and these are also the main contributions of this study.
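The AHP weighting step can be made concrete: feature weights are the principal eigenvector of a reciprocal pairwise-comparison matrix, computable by power iteration (sketch code; the example comparison values on a Saaty-style 1-9 scale are invented):

```python
def ahp_weights(pairwise, iters=100):
    """Derive feature weights from a pairwise-comparison matrix by power
    iteration towards its principal eigenvector, as in the Analytic
    Hierarchy Process (AHP); weights are normalized to sum to 1."""
    n = len(pairwise)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(pairwise[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [v / s for v in w]
    return w

# Hypothetical judgements: timbre vs loudness = 3, timbre vs tempo = 5,
# loudness vs tempo = 2; the lower triangle holds the reciprocals.
pairwise = [
    [1.0,     3.0, 5.0],
    [1 / 3.0, 1.0, 2.0],
    [1 / 5.0, 0.5, 1.0],
]
weights = ahp_weights(pairwise)   # ordered: timbre > loudness > tempo
```

The ordered weights then scale each feature's contribution to the segment-matching distance, replacing manual weight assignment.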
Styles APA, Harvard, Vancouver, ISO, etc.
25

Métois, Eric. « Musical sound information : musical gestures and embedding synthesis ». Thesis, Massachusetts Institute of Technology, 1997. http://hdl.handle.net/1721.1/29125.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
26

Pearse, Stephen. « Agent-based graphic sound synthesis and acousmatic composition ». Thesis, University of Sheffield, 2016. http://etheses.whiterose.ac.uk/15892/.

Texte intégral
Résumé :
For almost a century composers and engineers have been attempting to create systems that allow drawings and imagery to behave as intuitive and efficient musical scores. Despite the intuitive interactions that these systems afford, they are somewhat underutilised by contemporary composers. The research presented here explores the concept of agency and artificial ecosystems as a means of creating and exploring new graphic sound synthesis algorithms. These algorithms are subsequently designed to investigate the creation of organic musical gesture and texture using granular synthesis. The output of this investigation consists of an original software artefact, The Agent Tool, alongside a suite of acousmatic musical works which the former was designed to facilitate. When designing new musical systems for creative exploration with vast parametric controls, careful constraints should be put in place to encourage focused development. In this instance, an evolutionary computing model is utilised as part of an iterative development cycle. Each iteration of the system’s development coincides with a composition presented in this portfolio. The features developed as part of this process subsequently serve the author’s compositional practice and inspiration. As the software package is designed to be flexible and open ended, each composition represents a refinement of features and controls for the creation of musical gesture and texture. This document subsequently discusses the creative inspirations behind each composition alongside the features and agents that were created. This research is contextualised through a review of established literature on graphic sound synthesis, evolutionary musical computing and ecosystemic approaches to sound synthesis and control.
27

Mattes, Symeon. « Perceptual models for sound field analysis and synthesis ». Thesis, University of Southampton, 2016. https://eprints.soton.ac.uk/397216/.

Full text
Abstract:
This thesis describes the methodology followed for the implementation of a biologically inspired auditory signal processing model that predicts human sound localization of stationary acoustic sound sources in 3D space. The intended use of the model is for the evaluation of audio systems. An attempt is made to develop both a theoretical and mathematical framework that can be adopted as a generalized theory for the development of biologically inspired models of human sound localization. The model makes use of a combination of monaural and binaural cues and, within a psychoacoustical framework, makes predictions of the location of a sound source given the sound pressure signals delivered to the ears. Finally, the effectiveness of the model is evaluated through comparison with the experimental results of a listening test in which a number of human subjects made judgements of the location of real sound sources in 3D space under anechoic conditions.
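One of the binaural cues such localization models rely on is the interaural time difference (ITD). As a general illustration of the idea (not the thesis's actual model), the ITD can be estimated as the lag that maximizes the interaural cross-correlation; the function and parameter values below are hypothetical.

```python
import numpy as np

def estimate_itd(left, right, sr):
    """Estimate the interaural time difference (a core binaural cue)
    as the lag maximizing the interaural cross-correlation."""
    max_lag = int(0.0009 * sr)  # ~0.9 ms: rough upper bound set by head size
    best_lag, best_val = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            v = float(np.dot(left[:len(left) - lag], right[lag:]))
        else:
            v = float(np.dot(left[-lag:], right[:len(right) + lag]))
        if v > best_val:
            best_lag, best_val = lag, v
    return best_lag / sr  # positive: the sound reached the left ear first

# Synthetic check: the right channel is the left delayed by 10 samples.
sr = 44100
rng = np.random.default_rng(1)
left = rng.standard_normal(4096)
right = np.concatenate([np.zeros(10), left[:-10]])
itd = estimate_itd(left, right, sr)
```

A full localization model would combine such binaural lags with level differences and monaural spectral cues before mapping them to a direction.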
28

Orelli, Paiva Guilherme. « Vibroacoustic Characterization and Sound Synthesis of the Viola Caipira ». Thesis, Le Mans, 2017. http://www.theses.fr/2017LEMA1045/document.

Full text
Abstract:
The viola caipira is a type of Brazilian guitar widely used in popular music. It consists of ten metallic strings arranged in five pairs, tuned in unison or octave. The thesis work focuses on the analysis of the specificities of musical sounds produced by this instrument, which has been little studied in the literature. The analysis of the motions of plucked strings using a high-speed camera shows the existence of sympathetic vibrations, which result in a sound halo, an important perceptive feature. These measurements also reveal shocks between strings, which have very clearly audible consequences. Bridge mobilities are also measured using the wire-breaking method, which is simple to use and inexpensive since it does not require a force sensor. Combined with a high-resolution modal analysis (ESPRIT method), these measurements make it possible to determine the modal shapes at the string/body coupling points and thus to characterize the instrument. A physical model, based on a modal approach, is developed for sound synthesis purposes. It takes into account the string motions along two polarizations, the couplings with the body and the collisions between strings. This model is called a hybrid model because it combines an analytical approach to describe the vibrations of the strings with experimental data describing the body. Simulations in the time domain reproduce the main characteristics identified in the viola caipira.
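The modal approach used here for synthesis can be illustrated in its simplest form: the output is a sum of exponentially damped sinusoids whose frequencies, decay times and amplitudes would, in a hybrid model, come from measured modal data. The sketch below uses invented parameters and is not the thesis's model.

```python
import numpy as np

def modal_synthesis(modes, sr=44100, dur=1.0):
    """Sum of exponentially damped sinusoids, the core of modal sound
    synthesis. Each mode is (frequency_hz, decay_time_s, amplitude)."""
    t = np.arange(int(dur * sr)) / sr
    out = np.zeros_like(t)
    for f, tau, a in modes:
        out += a * np.exp(-t / tau) * np.sin(2 * np.pi * f * t)
    return out

# Hypothetical modal data for a plucked string: harmonics of 196 Hz (G3),
# with faster decay and lower amplitude for higher partials.
modes = [(196.0 * k, 0.6 / k, 1.0 / k) for k in range(1, 9)]
y = modal_synthesis(modes, dur=2.0)
```

In a hybrid string/body model the string modes would additionally be coupled through the measured bridge mobilities rather than summed independently.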
29

Misdariis, Nicolas. « Synthèse - Reproduction - Perception des Sons Instrumentaux et Environnementaux : Application au Design Sonore ». Thesis, Paris, CNAM, 2014. http://www.theses.fr/2015CNAM0955/document.

Full text
Abstract:
This dissertation presents a collection of studies and research works organized around three main topics: synthesis, reproduction and perception of sounds, considering both musical and environmental sounds. Moreover, it focuses on an application field, sound design, which broadly involves the intentional creation of everyday sounds. The document follows a rather uniform structure and contains, for each part, a general presentation of the topic that brings together theoretical elements and an overview of the state of the art, followed by more specific developments focusing on the matters proper to each topic – in detail, the modal formalism in sound synthesis by physical modeling, for the "Synthesis" section; measurement and control of musical instrument directivity, for the "Reproduction" section; timbre and sound source identification, for the "Perception" section – and then by a detailed presentation of the personal works related to each matter, in some cases in the form of published papers. Together, these elements of knowledge and experience offer a personal and original contribution, deliberately placed within a broad, multidisciplinary and applied research framework.
30

Huang, Zhendong. « On the sound produced by a synthetic jet device ». Thesis, Boston University, 2014. https://hdl.handle.net/2144/21179.

Full text
Abstract:
Thesis (M.Sc.Eng.) PLEASE NOTE: Boston University Libraries did not receive an Authorization To Manage form for this thesis or dissertation. It is therefore not openly accessible, though it may be available by request. If you are the author or principal advisor of this work and would like to request open access for it, please contact us at open-help@bu.edu. Thank you.
A synthetic jet is a quasi-steady jet of fluid generated by an oscillating pressure drop across an orifice, produced by a piston-like actuator. A unique advantage of the synthetic jet is that it is able to transfer linear momentum without requiring an external fluid source, and it has therefore attracted much research within the past decade. Principal applications include aerodynamic boundary-layer separation control, heat transfer enhancement, mixing enhancement, and flow-generated sound minimization. In this thesis, the method of deriving the volume flux equation for a duct is first reviewed. Combined with this method, a simplified synthetic jet model is presented and, based on the principles of aerodynamic sound, the pressure fluctuation in the acoustic far field is predicted. This model is then used to predict the minimum synthetic jet cavity resonance frequency, acoustic power, acoustic efficiency, root-mean-square jet speed, and acoustic spectrum, and their dependence on the following independent parameters: the duct length and radius, the aperture radius, the piston vibration frequency, and the maximum piston velocity.
31

Chand, G. « Real-time digital synthesis of transient waveforms : Complex transient sound waveforms are analysed for subsequent real-time synthesis with variable parameters ». Thesis, University of Bradford, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.376681.

Full text
32

Pyž, Gražina. « Analysis and synthesis of Lithuanian phoneme dynamic sound models ». Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2013. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2013~D_20131125_134056-50408.

Full text
Abstract:
Speech is the most natural way of human communication. The text-to-speech (TTS) problem arises in various applications: reading email aloud, reading text from e-books aloud, and services for people with speech disorders. Construction of a speech synthesizer is a very complex task, and researchers are trying to automate speech synthesis. In order to solve the problem of Lithuanian speech synthesis, it is necessary to develop mathematical models for Lithuanian speech sounds. The research object of the dissertation is Lithuanian vowel and semivowel phoneme models. The proposed vowel and semivowel phoneme models can be used for developing a TTS formant synthesizer. A Lithuanian vowel and semivowel phoneme modelling framework is proposed, based on a vowel and semivowel phoneme mathematical model and an automatic procedure for estimating the vowel phoneme fundamental frequency and determining the inputs. Using this framework, the phoneme signal is described as the output of a linear multiple-input single-output (MISO) system. The MISO system is a parallel connection of single-input single-output (SISO) systems whose input impulse amplitudes vary in time. Within this framework two synthesis methods are proposed: harmonic and formant. Simulation has revealed that the proposed framework gives sufficiently good vowel and semivowel synthesis quality.
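The harmonic synthesis idea described above, a phoneme signal built as a sum of parallel components, can be caricatured as follows. All parameter values are invented for illustration and do not come from the dissertation.

```python
import numpy as np

def harmonic_vowel(f0, harmonics, sr=16000, dur=0.5):
    """Simplified harmonic synthesis in the spirit of a parallel
    MISO/SISO decomposition: the output is a sum of harmonically
    related sinusoidal components, each with its own amplitude."""
    t = np.arange(int(dur * sr)) / sr
    return sum(a * np.sin(2 * np.pi * f0 * k * t) for k, a in harmonics)

# Hypothetical amplitude pattern crudely emphasizing formant regions
# around 700 Hz and 1100 Hz for an /a/-like vowel at f0 = 110 Hz.
harmonics = [(k, 1.0 / (1 + abs(k * 110 - 700) / 200) +
                 0.5 / (1 + abs(k * 110 - 1100) / 200)) for k in range(1, 40)]
y = harmonic_vowel(110.0, harmonics)
```

The dissertation's model goes further by letting the per-component input impulse amplitudes vary in time, so that the spectrum evolves over the phoneme.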
33

Polfreman, Richard. « User-interface design for software based sound synthesis systems ». Thesis, University of Hertfordshire, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.363503.

Full text
34

Itagaki, Takebumi. « Real-time sound synthesis on a multi-processor platform ». Thesis, Durham University, 1998. http://etheses.dur.ac.uk/4890/.

Full text
Abstract:
Real-time sound synthesis means that the calculation and output of each sound sample for a channel of audio information must be completed within a sample period. At a broadcasting-standard sampling rate of 32,000 Hz, the maximum period available is 31.25 μs. Such requirements demand a large amount of data processing power. An effective solution for this problem is a multi-processor platform: a parallel and distributed processing system. The suitability of the MIDI (Musical Instrument Digital Interface) standard, published in 1983, as a controller for real-time applications is examined. Many musicians have expressed doubts about the decade-old standard's ability for real-time performance. These have been investigated by measuring timing in various musical gestures, and by comparing these with the subjective characteristics of human perception. An implementation and optimisation of real-time additive synthesis programs on a multi-transputer network are described. A prototype 81-polyphonic-note organ configuration was implemented. By devising and deploying monitoring processes, the network's performance was measured and enhanced, leading to more efficient usage: the 88-note configuration. Since 88 simultaneous notes are rarely necessary in most performances, a scheduling program for dynamic note allocation was then introduced to achieve further efficiency gains. Considering calculation redundancies still further, a multi-sampling-rate approach was applied as a further step towards optimal performance. The theories underlying sound granulation, as a means of constructing complex sounds from grains, and the real-time implementation of this technique are outlined. The idea of sound granulation is quite similar to the quantum-wave theory of "acoustic quanta". Despite the conceptual simplicity, the signal processing requirements set tough demands, providing a challenge for this audio synthesis engine.
Three issues arising from the results of these implementations are discussed: the efficiency of the applications implemented, provisions for new processors, and an optimal network architecture for sound synthesis.
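The real-time budget quoted above follows directly from the sampling rate (1 s / 32,000 = 31.25 μs per sample), and additive synthesis, the workload distributed over the transputer network, is a plain sum of sinusoidal partials. A minimal off-line sketch, with illustrative values only:

```python
import numpy as np

SR = 32000                  # broadcasting-standard sampling rate
budget_us = 1e6 / SR        # per-sample time budget: 31.25 microseconds

def additive(partials, sr=SR, dur=0.25):
    """Additive synthesis: the output is a sum of sinusoidal partials
    (frequency, amplitude, phase); its per-sample cost grows with the
    partial count, which motivates a multi-processor design."""
    t = np.arange(int(dur * sr)) / sr
    return sum(a * np.sin(2 * np.pi * f * t + p) for f, a, p in partials)

# A hypothetical organ-like tone: 8 harmonics of 220 Hz with 1/k amplitudes.
tone = additive([(220.0 * k, 1.0 / k, 0.0) for k in range(1, 9)])
```

A real-time engine must evaluate every active partial of every sounding note within each 31.25 μs window, which is what the note-allocation scheduling described above economizes.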
35

Dzjaparidze, Michaël. « Exploring the creative potential of physically inspired sound synthesis ». Thesis, Queen's University Belfast, 2015. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.695331.

Full text
Abstract:
This thesis accompanies a portfolio of compositions and, in addition, discusses a number of compositional approaches which use physical modelling and physically inspired sound synthesis methods for the creation of electroacoustic music. To this end, a software library has been developed for the real-time simulation of systems of inter-connected 1D and 2D objects, which has proven indispensable for producing the music works. It should be made clear from the outset that the primary objective of the research was not to add novel scientific knowledge to the field of physical modelling. Instead, the aim was to explore in depth the creative possibilities of technical research carried out by others and to show that it can be utilised in a form which aids my own creative practice. From a creative perspective, it builds upon concepts and ideas formulated earlier by the composers Jean-Claude Risset and Denis Smalley, centred around the interpretation of timbre and sound as constructs which actively inform compositional decision-making and structuring processes. This involves the creation of harmony out of timbre and playing with the source-cause perception of the listener through the transformation of timbre over time. In addition, the thesis offers a discussion of gesture and texture as they commonly appear in electroacoustic music and motivates my own personal preference for focussing on the development of texture over time as a means for creating musical form and function.
36

Fell, Mark. « Works in sound and pattern synthesis : folio of works ». Thesis, University of Surrey, 2013. http://epubs.surrey.ac.uk/804661/.

Full text
Abstract:
An integrated portfolio of writings, music and audio-visual works that responds to aesthetic, technical and critical concerns encountered in the production of the portfolio. The written component initially considers the relationship between technical tools and creative practices, and then considers how temporality is constructed and treated in computer music software. In a series of commentaries, I show how these issues are both derived from and feature within a number of the works contained here. The works are grouped into three sections: audio-visual works, microtemporal works, and works responding to house musics; a fourth section, 'three exhibitions', is included in the appendices. The works, produced between 2008 and 2013, explore various vocabularies and materials using sound synthesis and pattern-generating procedures. Of particular interest are the relationships between temporality, image, sound and geometry; how works are encountered by the audience; and the role of works as critical exegesis of the musical and technical histories within which they are embedded. The development and structure of each work is documented, and an analysis of each is presented. In response to the folio a number of theoretical concerns are identified and articulated. A description of the creative process based upon a distinction between thought, technology and practice is critiqued, and alternatives are drawn from Heidegger's analysis of 'Being-in-the-world', Latour's account of action as constituted in networks of humans and non-humans, and Clark and Chalmers' extended mind hypothesis. Developing from this I offer a reading of the role of music in Husserl's account of temporality and suggest that music has a time-constituting function.
37

Bouënard, Alexandre. « Synthesis of Music Performances : Virtual Character Animation as a Controller of Sound Synthesis ». Phd thesis, Université de Bretagne Sud, 2009. http://tel.archives-ouvertes.fr/tel-00497292.

Full text
Abstract:
Recent years have seen the emergence of numerous musical interfaces whose main objective is to offer new instrumental experiences. The specification of such interfaces generally draws on musicians' expertise in apprehending multiple, heterogeneous sensory data (visual, auditory and tactile). These interfaces thus involve processing these different data in order to design new modes of interaction. This thesis focuses more specifically on the analysis, modeling and synthesis of percussion performance situations. We propose a system for synthesizing the visual and auditory feedback of percussion performances, in which a virtual percussionist controls sound synthesis processes. The analysis stage shows the importance of mallet-tip control by expert percussionists playing the timpani. This analysis requires the prior motion capture of the instrumental gestures of several percussionists. It leads to the extraction of parameters from the captured tip trajectories for various playing variations. These parameters are quantitatively evaluated by their ability to represent those variations. The synthesis system proposed in this work implements the physics-based animation of a virtual percussionist able to control sound synthesis processes. The physical animation involves a new mode of controlling the physical model by specifying only the mallet-tip trajectory. This control mode is particularly relevant given the importance of mallet control highlighted in the preceding analysis. The physical approach is moreover used to allow the virtual percussionist to interact with a physical model of a timpani. Finally, the proposed system is used from a musical composition perspective.
New percussion performance situations are constructed through the implementation of gesture scores. These are obtained by assembling and articulating canonical gesture units available in the captured data. This approach is applied to the composition and synthesis of percussion exercises, and is qualitatively evaluated by a percussion teacher.
38

Incerti, Eric. « Synthèse de sons par modélisation physique de structures vibrantes : applications pour la création musicale par ordinateur ». Grenoble INPG, 1996. http://www.theses.fr/1996INPG0115.

Full text
Abstract:
Computer simulation by physical modeling makes it possible to model the causal chain that runs from human action (the gesture) to the emission of sound, through a series of structural components (exciter systems, vibrating structures, resonators). The central theme of this thesis is the study of the vibrating structure, the object that produces the sound phenomenon. Around this theme, several theoretical and experimental levels are articulated, focused on the development of a complete modeling and simulation system for computer-based musical creation.
39

Abbado, Adriano. « Perceptual correspondences of abstract animation and synthetic sound ». Thesis, Massachusetts Institute of Technology, 1988. http://hdl.handle.net/1721.1/71112.

Full text
Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 1988.
Includes bibliographical references (leaves 47-52).
by Adriano Abbado.
M.S.
40

Villeneuve, Jérôme. « Mise en oeuvre de méthodes de résolution du problème inverse dans le cadre de la synthèse sonore par modélisation physique masses-interactions ». Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENS041.

Full text
Abstract:
Un "problème inverse", dans son sens général, consiste en une «inversion» de la relation de cause à effet. Il ne s'agit pas de produire un phénomène «cause» à partir d'un phénomène «effet», mais plutôt de s'essayer à définir un phénomène «cause» dont un effet observé serait la conséquence.Dans le contexte du formalisme de modélisation physique et de simulation CORDIS-ANIMA, et plus particulièrement dans le cadre de l'interface de création sonore et de composition musicale qui le met en œuvre, GENESIS, créés par le laboratoire ACROE-ICA, on identifie une problématique d'une telle nature : étant donné un phénomène sonore, quel modèle physique construire qui permettrait de l'obtenir ? Cette interrogation est fondamentale dans le cadre du processus de création engagé par l'utilisation de tels outils. En effet, pouvoir décrire et concevoir le procédé qui permet d'engendrer un phénomène ou un événement sonore (musical) préalablement définis est une nécessité inhérente à l'acte de création musicale. Réciproquement, disposer des éléments d'analyse et de décomposition de la chaîne de production du phénomène sonore permet d'envisager, par représentation, traitement direct, composition des éléments de cette décomposition, la production de phénomènes très riches, nouveaux, expressifs et présentant une cohérence intime avec les sons naturels sur lesquels l'expérience perceptive et cognitive est construite.Dans l'objectif d'aborder cette problématique, nous avons dû formuler et étudier deux des aspects fondamentaux qui la sous-tendent. Le premier concerne la description même du résultat final, le phénomène sonore. Celle-ci pouvant être de plusieurs natures et souvent difficile en termes objectifs et quantitatifs, notre approche a tout d'abord consisté à réduire le problème aux notions de contenu spectral, ou encore de « structure modale » définis par une approche phénoménologique de type signal. 
Le second concerne la nature fonctionnelle et paramétrique des modèles construits au sein du paradigme CORDIS-ANIMA. Étant, par essence, une métaphore du contexte instrumental, tout modèle doit alors être conçu comme la mise en interaction d'un couple « instrument/instrumentiste ». De ces spécifications nous avons alors pu définir UN problème inverse, dont la résolution a demandé la mise au point d'outils d'interprétation de données phénoménologiques en données paramétriques. Ce travail de thèse a finalement abouti à la mise en œuvre de ces nouveaux outils au sein même du logiciel de création GENESIS, ainsi que dans l'environnement didactique qui l'accompagne. Les modèles qui en résultent, répondent à des critères de cohérence, de clarté et ont pour première vocation d'être réintégrés au processus de création. Ils ne constituent pas une finalité en eux-mêmes, mais un appui proposé à l'utilisateur pour compléter sa démarche.En conclusion de ce travail, nous détaillons les directions pouvant être suivies à des fins d'extension ou éventuellement de reformulation de cette problématique
An "inverse problem", in the usual sense, consists in an inversion of the cause-to-effect relation. It is not about producing a "cause" phenomenon from a given "effect" phenomenon, but rather about defining a "cause" phenomenon of which an observed effect would be the consequence. In the context of the CORDIS-ANIMA physical modeling and simulation formalism, and in particular within the GENESIS interface for sound synthesis and musical creation, both built by the ACROE-ICA laboratory, it is possible to identify such a problem: given a sound, which physical model could be built to produce it? This question is fundamental if we consider the creative process engaged by the users of such tools. Indeed, being able to describe and to conceive the process that engenders a previously defined phenomenon or sonic (musical) event is an inherent need of the activity of musical creation. Reciprocally, having the elements for analyzing and decomposing the sound phenomenon's production chain makes it possible to consider, by means of representation, direct processing and re-composition, the production of very rich and expressive phenomena that present an intimate coherence with the natural sounds upon which perceptive and cognitive experience is built. To approach this problem, we formulated and studied two underlying fundamental aspects. The first covers the very description of the final result, the sound phenomenon. This description can be of different kinds and is often difficult in objective and quantitative terms; our approach therefore first consisted in reducing the general problem to notions of spectral content, or "modal structure", defined by a phenomenological, signal-based approach. The second aspect concerns the functional and parametric nature of models built with the CORDIS-ANIMA paradigm.
Since every model is inherently a metaphor of an instrumental situation, each one must be conceived as an interactive combination of an "instrument/instrumentalist" couple. From these specifications we have defined ONE inverse problem, whose resolution required developing tools to interpret phenomenological data into parametric data. Finally, this work has led to the implementation of these new tools within the GENESIS software, as well as in its didactic environment. The resulting models fulfill coherence and clarity criteria and are intended to be reintegrated into the creative process. They do not constitute an end in themselves, but rather a support offered to the user to complete their process. As a conclusion to this work, we detail directions that could be pursued in order to extend or possibly reformulate the inverse problem.
41

Rodgers, Tara. « Synthesizing sound : metaphor in audio-technical discourse and synthesis history ». Thesis, McGill University, 2011. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=97090.

Full text
Abstract:
Synthesized sound is ubiquitous in contemporary music and aural environments around the world. Yet, relatively little has been written on its cultural origins and meanings. This dissertation constructs a long history of synthesized sound that examines the century before synthesizers were mass-produced in the 1970s, and attends to ancient and mythic themes that circulate in contemporary audio-technical discourse. Research draws upon archival materials including late-nineteenth and early-twentieth century acoustics texts, and inventors' publications, correspondence, and synthesizer product manuals from the 1940s through the 1970s. As a feminist history of synthesized sound, this project investigates how metaphors in audio-technical discourse are invested with notions of identity and difference. Through analyses of key concepts in the history of synthesized sound, I argue that audio-technical language and representation, which typically stands as neutral, in fact privileges the perspective of an archetypal Western, white, and male subject. I identify two primary metaphors for conceiving electronic sounds that were in use by the early-twentieth century and continue to inform sonic epistemologies: electronic sounds as waves, and electronic sounds as individuals. The wave metaphor, in circulation since ancient times, produces an affective orientation to audio technologies based on a masculinist and colonizing subject position, whereby the generation and control of electronic sound entails the pleasure and danger of navigating and taming unruly waves. The second metaphor took shape over the nineteenth century as sounds, like modern bodies and subjects, came to be understood as individual entities with varying properties to be analyzed and controlled. Notions of sonic individuation and variability emerged in the contexts of Darwinian thought and a cultural fascination with electricity as a kind of animating force. 
Practices of classifying sounds as individuals, sorted by desirable and undesirable aesthetic variations, were deeply entwined with epistemologies of gender and racial difference in Western philosophy and modern science. Synthesized sound also inherits other histories, including applications of the terms synthesis and synthetic in diverse cultural fields; designs of earlier mechanical and electronic devices; and developments in musical modernism and electronics hobbyist cultures. The long-term and broad perspective on synthesis history adopted in this study aims to challenge received truths in audio-technical discourse and resist the linear and coherent progress narratives often found in histories of technology and new media. This dissertation aims to make important contributions to fields of sound and media studies, which can benefit from feminist contributions generally and elaboration on forms and meanings of synthesis technologies specifically. Also, feminist scholars have extensively theorized visual cultures and technologies, with few extended investigations of sound and audio technologies. This project also aims to open up new directions in a field of feminist sound studies by historicizing notions of identity and difference in audio-technical discourse, and claiming the usefulness of sound to feminist thought.
Styles APA, Harvard, Vancouver, ISO, etc.
42

Yadegari, Shahrokh David. « Self-similar synthesis on the border between sound and music ». Thesis, Massachusetts Institute of Technology, 1992. http://hdl.handle.net/1721.1/70661.

Full text
43

Kesterton, Anthony James. « The synthesis of sound with application in a MIDI environment ». Thesis, Rhodes University, 1991. http://hdl.handle.net/10962/d1006701.

Full text
Abstract:
The options for experimenting with sound synthesis are usually expensive, difficult to obtain, or limiting for the experimenter. The work described in this thesis shows how the IBM PC and software can be combined to provide a suitable platform for experimentation with different synthesis techniques. This platform is based on the PC, the Musical Instrument Digital Interface (MIDI) and a musical instrument called a digital sampler. The fundamental concepts of sound are described, with reference to digital sound reproduction. A number of synthesis techniques are described. These are evaluated according to the criteria of generality, efficiency and control. The techniques discussed are additive synthesis, frequency modulation synthesis, subtractive synthesis, granular synthesis, resynthesis, wavetable synthesis, and sampling. Spiral synthesis, physical modelling, waveshaping and spectral interpolation are discussed briefly. The Musical Instrument Digital Interface is a standard method of connecting digital musical instruments together. It is the MIDI standard and equipment conforming to that standard that makes this implementation of synthesis techniques possible. As a demonstration of the PC platform, additive synthesis, frequency modulation synthesis, granular synthesis and spiral synthesis have been implemented in software. A PC equipped with a MIDI interface card is used to perform the synthesis. The MIDI protocol is used to transmit the resultant sound to a digital sampler. The INMOS transputer is used as an accelerator, as the calculation of a waveform using software is a computationally intensive process. It is concluded that sound synthesis can be performed successfully using a PC and the appropriate software, and utilizing the facilities provided by a MIDI environment including a digital sampler.
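Two of the techniques this thesis implements, additive synthesis and frequency modulation synthesis, can be sketched in a few lines of modern code. The following Python fragment is an illustration of the general techniques only, not the thesis's actual C/transputer implementation; all parameter values are invented for the example:

```python
import math

SAMPLE_RATE = 44100  # samples per second

def additive(freqs, amps, duration):
    """Additive synthesis: sum a set of sinusoidal partials sample by sample."""
    n = int(SAMPLE_RATE * duration)
    out = []
    for i in range(n):
        t = i / SAMPLE_RATE
        out.append(sum(a * math.sin(2 * math.pi * f * t)
                       for f, a in zip(freqs, amps)))
    return out

def fm(carrier, modulator, index, duration):
    """Chowning-style FM: y(t) = sin(2*pi*fc*t + I*sin(2*pi*fm*t))."""
    n = int(SAMPLE_RATE * duration)
    return [math.sin(2 * math.pi * carrier * i / SAMPLE_RATE
                     + index * math.sin(2 * math.pi * modulator * i / SAMPLE_RATE))
            for i in range(n)]

# A 440 Hz tone with three harmonics, and a bell-like inharmonic FM tone.
tone = additive([440.0, 880.0, 1320.0], [1.0, 0.5, 0.25], 0.25)
bell = fm(440.0, 280.0, 5.0, 0.25)
```

In the setup the thesis describes, a buffer like `tone` would be downloaded to the digital sampler over MIDI rather than played directly from the PC.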
44

Carey, Benedict Eris. « Notation Sequence Generation and Sound Synthesis in Interactive Spectral Music ». Thesis, The University of Sydney, 2013. http://hdl.handle.net/2123/9517.

Full text
Abstract:
This thesis consists of a preliminary analysis of existing spectral music paradigms and proposes a methodology to address issues that arise in real-time spectral music composition and performance scenarios. This exploration involves an overview of meaning in spectral music with a particular focus on the ‘sonic object’ as a vehicle for expression. A framework for the production of ‘interactive spectral music’ was created. This framework takes the form of a group of software-based compositional tools called SpectraScore, developed for the Max for Live platform. Primarily, these tools allow the user to analyse incoming audio and directly apply the collected data towards the generation of synthesised sound and notation sequences. Also presented is an extension of these tools: a novel system of correlation between emotional descriptors and spectrally derived harmonic morphemes. The final component is a portfolio of works created as examples of the techniques explored, in scored and recorded form. As a companion to these works, an analysis component outlines the programmatic aspects of each piece and illustrates how they are executed within the music. Each scored piece corresponds with a recording of a live performance or performances of the work included in the attached DVD, which comprises individual realisations of the interactive works. Keywords: Spectralism, Music and Emotion, Electronic Music, Spectral Music, Algorithmic Music, Real-time Notation
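The kind of analysis such tools perform, extracting the most prominent spectral components of incoming audio so they can drive synthesis or be quantised into notation, can be illustrated with a naive DFT peak-picker. This is a hypothetical Python sketch of the general idea, not the SpectraScore/Max for Live implementation; the function name, frame size, and test signal are all invented:

```python
import math
import cmath

def prominent_partials(frame, sample_rate, n_partials=4):
    """Naive DFT peak-picking: return the n strongest (frequency, magnitude)
    pairs from the lower half of the spectrum, sorted by magnitude."""
    n = len(frame)
    mags = []
    for k in range(1, n // 2):
        # Bin k of the DFT, computed directly (an FFT would be used in practice).
        x = sum(frame[i] * cmath.exp(-2j * math.pi * k * i / n) for i in range(n))
        mags.append((abs(x), k * sample_rate / n))
    mags.sort(reverse=True)
    return [(f, m) for m, f in mags[:n_partials]]

# Test frame: a 400 Hz partial plus a weaker 600 Hz partial,
# both aligned exactly on DFT bins to avoid spectral leakage.
sr, n = 2560, 128
frame = [math.sin(2 * math.pi * 400 * i / sr) + 0.4 * math.sin(2 * math.pi * 600 * i / sr)
         for i in range(n)]
peaks = prominent_partials(frame, sr, n_partials=2)
```

The recovered frequencies could then be mapped onto the nearest notatable pitches to produce a notation sequence.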
45

Möhlmann, Daniel [Verfasser], Otthein [Akademischer Betreuer] Herzog et Jörn [Akademischer Betreuer] Loviscach. « A Parametric Sound Object Model for Sound Texture Synthesis / Daniel Möhlmann. Gutachter : Otthein Herzog ; Jörn Loviscach. Betreuer : Otthein Herzog ». Bremen : Staats- und Universitätsbibliothek Bremen, 2011. http://d-nb.info/1071992430/34.

Full text
46

Strandberg, Carl. « Mediating Interactions in Games Using Procedurally Implemented Modal Synthesis : Do players prefer and choose objects with interactive synthetic sounds over objects with traditional sample based sounds ? » Thesis, Luleå tekniska universitet, Institutionen för konst, kommunikation och lärande, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-68015.

Full text
Abstract:
Procedurally implemented synthetic audio could offer greater interactive potential for audio in games than the currently popular sample based approach does. At the same time, synthetic audio can reduce the storage requirements that sample based audio entails. This study examines these potentials and looks at one game interaction in depth to determine whether players prefer and choose objects with interactive sounds generated through procedurally implemented modal synthesis over objects with traditionally implemented sample based sound. An in-game listening test was created in which 20 subjects were asked to throw a ball, 35 times, at a wall to destroy wall tiles and reveal a message. For each throw they could select one of two balls: one ball had a modal synthesis sound that varied in pitch with how hard the ball was thrown; the other had a traditionally implemented sample based sound that did not correspond with how hard it was thrown, but instead one of four samples was played at random. The subjects were then asked questions to evaluate how realistic they perceived the two versions to be, which they preferred, and how well they perceived the sounds as corresponding to the interaction. The results show that the modal synthesis version was preferred and perceived as more realistic than the sample based version, but whether this was a deciding factor in subjects' choices could not be determined.
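Modal synthesis of the kind described here models a struck object as a sum of exponentially decaying sinusoids, with the excitation (here, throw strength) scaling the output. The sketch below is illustrative only: the mode data and the velocity-to-pitch mapping are invented, not taken from the study:

```python
import math

SR = 44100  # sample rate in Hz

def modal_impact(modes, velocity, duration=0.5):
    """Modal synthesis of an impact. Each mode is (frequency Hz, decay rate 1/s,
    gain). Impact velocity scales the overall level and slightly raises pitch,
    mimicking a harder strike (an illustrative mapping, not the study's)."""
    n = int(SR * duration)
    out = []
    for i in range(n):
        t = i / SR
        s = 0.0
        for f, d, g in modes:
            f_shift = f * (1.0 + 0.02 * velocity)  # pitch rises with velocity
            s += velocity * g * math.exp(-d * t) * math.sin(2 * math.pi * f_shift * t)
        out.append(s)
    return out

# Three modes of a hypothetical ceramic wall tile.
tile_modes = [(620.0, 8.0, 1.0), (1420.0, 12.0, 0.5), (2650.0, 20.0, 0.25)]
soft = modal_impact(tile_modes, velocity=0.3)
hard = modal_impact(tile_modes, velocity=1.0)
```

Because the output is computed per impact, every throw can sound slightly different, which is exactly the interactive correspondence the sample based version lacked.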
47

Roche, Fanny. « Music sound synthesis using machine learning : Towards a perceptually relevant control space ». Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALT034.

Full text
Abstract:
One of the main challenges of the synthesizer market and the research in sound synthesis nowadays lies in proposing new forms of synthesis allowing the creation of brand new sonorities while offering musicians more intuitive and perceptually meaningful controls to help them reach the perfect sound more easily. Indeed, today's synthesizers are very powerful tools that provide musicians with a considerable amount of possibilities for creating sonic textures, but the control of parameters still lacks user-friendliness and may require some expert knowledge about the underlying generative processes. In this thesis, we are interested in developing and evaluating new data-driven machine learning methods for music sound synthesis allowing the generation of brand new high-quality sounds while providing high-level perceptually meaningful control parameters. The first challenge of this thesis was thus to characterize synthetic musical timbre by identifying a set of perceptual verbal descriptors that are both frequently and consensually used by musicians. Two perceptual studies were then conducted: a free verbalization test enabling us to select eight different commonly used terms for describing synthesizer sounds, and a semantic scale analysis enabling us to quantitatively evaluate the use of these terms to characterize a subset of synthetic sounds, as well as analyze how consensual they were. In a second phase, we investigated the use of machine learning algorithms to extract a high-level representation space with interesting interpolation and extrapolation properties from a dataset of sounds, the goal being to relate this space to the perceptual dimensions evidenced earlier. Following previous studies interested in using deep learning for music sound synthesis, we focused on autoencoder models and carried out an extensive comparative study of several kinds of autoencoders on two different datasets.
These experiments, together with a qualitative analysis made with a non real-time prototype developed during the thesis, allowed us to validate the use of such models, and in particular the use of the variational autoencoder (VAE), as relevant tools for extracting a high-level latent space in which we can navigate smoothly and create new sounds. However, so far, no link between this latent space and the perceptual dimensions evidenced by the perceptual tests emerged naturally. As a final step, we thus tried to enforce perceptual supervision of the VAE by adding a regularization during the training phase. Using the subset of synthetic sounds used in the second perceptual test and the corresponding perceptual grades along the eight perceptual dimensions provided by the semantic scale analysis, it was possible to constrain, to a certain extent, some dimensions of the VAE high-level latent space so as to match these perceptual dimensions. A final comparative test was then conducted in order to evaluate the effectiveness of this additional regularization for conditioning the model and (partially) leading to a perceptual control of music sound synthesis.
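The regularised training objective described above, reconstruction error plus KL divergence plus a perceptual term tying chosen latent dimensions to semantic-scale ratings, can be sketched as a single loss function. This is a schematic Python version with invented argument shapes and weights, not the thesis's actual model or code:

```python
import math

def regularized_vae_loss(recon_err, mu, log_var, latents,
                         perceptual_targets, beta=1.0, gamma=1.0):
    """Composite VAE loss: reconstruction error + KL divergence of the
    approximate posterior N(mu, exp(log_var)) from a standard normal prior
    + a perceptual regularizer pulling the first K latent dimensions towards
    K perceptual ratings (e.g. the eight semantic-scale dimensions).
    Arguments are plain lists of floats; shapes and weights are assumptions."""
    kl = -0.5 * sum(1.0 + lv - m * m - math.exp(lv)
                    for m, lv in zip(mu, log_var))
    k = len(perceptual_targets)
    perc = sum((z - t) ** 2
               for z, t in zip(latents[:k], perceptual_targets)) / k
    return recon_err + beta * kl + gamma * perc

# With a posterior equal to the prior and latents matching the ratings,
# only the reconstruction term remains.
loss = regularized_vae_loss(0.5, [0.0, 0.0], [0.0, 0.0],
                            [0.3, -0.7], [0.3, -0.7])
```

In a real model the three terms would be computed over batches of spectra with a deep encoder/decoder; the point here is only how a perceptual regularizer slots into the standard VAE objective.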
48

Desvages, Charlotte Genevieve Micheline. « Physical modelling of the bowed string and applications to sound synthesis ». Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/31273.

Full text
Abstract:
This work outlines the design and implementation of an algorithm to simulate two-polarisation bowed string motion, for the purpose of realistic sound synthesis. The algorithm is based on a physical model of a linear string, coupled with a bow, stopping fingers, and a rigid, distributed fingerboard. In one polarisation, the normal interaction forces are based on a nonlinear impact model. In the other polarisation, the tangential forces between the string and the bow, fingers, and fingerboard are based on a force-velocity friction curve model, also nonlinear. The linear string model includes accurate time-domain reproduction of frequency-dependent decay times. The equations of motion for the full system are discretised with an energy-balanced finite difference scheme, and integrated in the discrete time domain. Control parameters are dynamically updated, allowing for the simulation of a wide range of bowed string gestures. The playability range of the proposed algorithm is explored, and example synthesised gestures are demonstrated.
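The finite-difference approach underlying such an algorithm can be illustrated, in heavily simplified form, by the classic leapfrog scheme for an ideal string with fixed ends; the bow, friction, damping and fingerboard interactions are exactly what the thesis adds on top. A Python sketch, with grid sizes and the initial "pluck" chosen arbitrarily for illustration:

```python
import math

def simulate_string(n_points=50, n_steps=200, courant=0.9):
    """Leapfrog finite-difference scheme for the ideal 1-D wave equation with
    fixed ends: u[i]^(n+1) = 2u[i]^n - u[i]^(n-1) + lam^2 * (u[i+1] - 2u[i] + u[i-1]).
    courant = c*dt/dx must satisfy the CFL condition courant <= 1 for stability."""
    lam2 = courant ** 2
    # Initial condition: a raised-cosine displacement near the middle,
    # with zero initial velocity (u_prev = u).
    u = [0.0] * n_points
    for i in range(n_points):
        x = i / (n_points - 1)
        if 0.3 < x < 0.5:
            u[i] = 0.5 * (1.0 - math.cos(2.0 * math.pi * (x - 0.3) / 0.2))
    u_prev = u[:]
    for _ in range(n_steps):
        u_next = [0.0] * n_points  # endpoints stay clamped at zero
        for i in range(1, n_points - 1):
            u_next[i] = (2.0 * u[i] - u_prev[i]
                         + lam2 * (u[i + 1] - 2.0 * u[i] + u[i - 1]))
        u_prev, u = u, u_next
    return u

state = simulate_string()
```

An energy-balanced scheme of the kind the thesis uses is designed so that a discrete analogue of the system's energy is conserved (or strictly dissipated), which guarantees stability even with the nonlinear bow and finger terms included.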
49

Masri, Paul. « Computer modelling of sound for transformation and synthesis of musical signals ». Thesis, University of Bristol, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.246275.

Full text
50

Treeby, Bradley E. « The effect of hair on human sound localisation cues ». University of Western Australia. School of Mechanical Engineering, 2007. http://theses.library.uwa.edu.au/adt-WU2007.0192.

Full text
Abstract:
The acoustic scattering properties of the human head and torso match well with those of simple geometric shapes. Consequently, analytical scattering models can be utilised to account for the sound localisation cues introduced by these features. The traditional use of such models assumes that the head surface is completely rigid in nature. This thesis is concerned with modelling and understanding the effect of terminal scalp hair (i.e., a non-rigid head surface) on the auditory localisation cues. The head is modelled as a sphere, and the acoustical characteristics of hair are modelled using a locally-reactive equivalent impedance parameter. This allows the scattering boundary to be defined on the inner rigid surface of the head. The boundary assumptions are validated experimentally, through impedance measurement at oblique incidence and analysis of the near-field scattering pattern of a uniformly covered sphere. The impedance properties of human hair are also discussed, including trends with variations in sample thickness, bulk density, and fibre diameter. A general solution for the scattering of sound by a sphere with an arbitrarily distributed, locally reactive surface impedance is then presented. From this, an analytical solution is derived for a surface boundary that is evenly divided into two uniformly distributed hemispheres. For this boundary condition, cross-coupling is shown to exist between incoming and scattered wave modes of equi-order when the degrees are non-equal and opposite in parity. The overall effect of impedance on the resultant scattering characteristics is discussed in detail, both for uniform and for hemispherically divided surface boundaries. Finally, the analytical formulation and the impedance characteristics of hair are collectively utilised to investigate the effect of hair on human auditory localisation cues. The hair is shown to produce asymmetric perturbations to both the monaural and binaural cues. 
These asymmetries may help to resolve localisation confusions between sound stimuli positioned in the front and rear hemi-fields. The cue changes in the azimuth plane are characterised by two predominant features and remain consistent regardless of the decomposition baseline (i.e., the inclusion of a pinna offset, neck, etc). Experimental comparisons using a synthetic hair material show a good agreement with simulated results.
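For context, the rigid-sphere baseline that this thesis generalises is the classical series solution for plane-wave scattering. The following is background recalled from standard acoustics references, not the thesis's own derivation; the sign attached to the admittance terms depends on the assumed time convention:

```latex
% Plane wave of amplitude P_0 scattered by a rigid sphere of radius a,
% with j_n and h_n^{(1)} the spherical Bessel and Hankel functions,
% P_n the Legendre polynomials, and k the wavenumber:
p(r,\theta) = P_0 \sum_{n=0}^{\infty} i^{\,n}(2n+1)
  \left[\, j_n(kr) - \frac{j_n'(ka)}{{h_n^{(1)}}'(ka)}\, h_n^{(1)}(kr) \right]
  P_n(\cos\theta)

% With a uniform, locally reacting surface impedance Z
% (specific admittance \beta = \rho c / Z), the rigid-surface ratio
% is replaced by an impedance-dependent one:
\frac{j_n'(ka) - i\beta\, j_n(ka)}{{h_n^{(1)}}'(ka) - i\beta\, h_n^{(1)}(ka)}
```

The hemispherically divided boundary studied in the thesis no longer admits this mode-by-mode form, which is where the cross-coupling between incoming and scattered wave modes described above arises.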