Dissertations on the topic "Sound synthesis"
Listed below are the top 50 dissertations relevant to research on the topic "Sound synthesis".
PAPETTI, Stefano. "Sound modeling issues in interactive sonification - From basic contact events to synthesis and manipulation tools." Doctoral thesis, Università degli Studi di Verona, 2010. http://hdl.handle.net/11562/340961.
The work presented in this thesis ranges over a variety of research topics, spanning from human-computer interaction to physical modeling. What connects such broad areas of interest is the idea of using physically based computer simulations of acoustic phenomena to provide human-computer interfaces with sound feedback that is consistent with the user interaction. In this regard, recent years have seen the emergence of several new disciplines such as auditory display, sonification and sonic interaction design. This thesis deals with the design and implementation of efficient sound algorithms for interactive sonification. To this end, it addresses the physical modeling of everyday sounds, that is, sounds not belonging to the families of speech and musical sounds.
FONTANA, Federico. "Physics-based models for the acoustic representation of space in virtual environments." Doctoral thesis, Università degli Studi di Verona, 2003. http://hdl.handle.net/11562/342240.
This work deals with the simulation of virtual acoustic spaces using physics-based models. The acoustic space is what we perceive about space through our auditory system. The physical nature of the models means that they present spatial attributes (for example, shape and size) as a salient feature of their structure, so that space is directly represented and manipulated by means of them.
Liao, Wei-Hsiang. "Modelling and transformation of sound textures and environmental sounds." Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066725/document.
The processing of environmental sounds has become an important topic in various areas. Environmental sounds mostly consist of a class of sounds called sound textures, which are usually non-sinusoidal, noisy and stochastic. Several studies have shown that humans recognize sound textures through statistics characterizing the envelopes of auditory critical bands. Existing synthesis algorithms can impose some statistical properties to a certain extent, but most of them are computationally intensive. We propose a new analysis-synthesis framework that combines a statistical description consisting of perceptually important statistics with an efficient mechanism to adapt statistics in the time-frequency domain. The quality of the resynthesised sound is at least as good as the state of the art, while being more efficient in terms of computation time. The statistical description is based on the STFT; if certain conditions are met, it can also be adapted to other filter-bank-based time-frequency representations (TFR). The adaptation of statistics is achieved by using the connection between the statistics of the TFR and the spectra of time-frequency domain coefficients. It is possible to adapt only a part of the cross-correlation functions, which allows the synthesis process to focus on important statistics and ignore the irrelevant parts, providing extra flexibility. The proposed algorithm opens several perspectives: it could be used to generate unseen sound textures from artificially created statistical descriptions, it could serve as a basis for transformations like stretching or morphing, and the model could be used to explore semantic control of sound textures.
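As a concrete illustration of the kind of band-envelope statistics this abstract refers to (not code from the thesis; the frame size, band layout and chosen statistics are assumptions), a minimal numpy sketch:

```python
# Minimal sketch: measuring the band-envelope statistics that perceptual texture
# models rely on. Frame size, band edges and statistics are illustrative assumptions.
import numpy as np

def band_envelope_stats(x, n_fft=1024, hop=256, n_bands=8):
    """Return mean, std and pairwise correlation of log-spaced band envelopes."""
    frames = np.lib.stride_tricks.sliding_window_view(x, n_fft)[::hop]
    spec = np.abs(np.fft.rfft(frames * np.hanning(n_fft), axis=1))  # magnitude STFT
    edges = np.geomspace(2, spec.shape[1] - 1, n_bands + 1).astype(int)
    envs = np.stack([spec[:, lo:hi].sum(axis=1)
                     for lo, hi in zip(edges[:-1], edges[1:])], axis=1)
    mean = envs.mean(axis=0)
    std = envs.std(axis=0)
    corr = np.corrcoef(envs.T)          # band-to-band envelope correlations
    return mean, std, corr

sr = 16000
texture = np.random.randn(sr * 2)        # stand-in for a recorded texture
m, s, c = band_envelope_stats(texture)
print(m.shape, s.shape, c.shape)
```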
Chapman, David P. "Playing with sounds : a spatial solution for computer sound synthesis." Thesis, University of Bath, 1996. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.307047.
Lee, Chung. "Sound texture synthesis using an enhanced overlap-add approach /." View abstract or full-text, 2008. http://library.ust.hk/cgi/db/thesis.pl?CSED%202008%20LEE.
Conan, Simon. "Contrôle intuitif de la synthèse sonore d’interactions solidiennes : vers les métaphores sonores." Thesis, Ecole centrale de Marseille, 2014. http://www.theses.fr/2014ECDM0012/document.
Perceptual control (i.e. from evocations) of sound synthesis processes is a current challenge. Indeed, sound synthesis models generally involve a lot of low-level control parameters, whose manipulation requires a certain expertise with respect to the sound generation process. Thus, intuitive control of sound generation is interesting for users, and especially non-experts, because they can create and control sounds from evocations. Such a control is not immediate and is based on strong assumptions linked to our perception, and especially the existence of acoustic morphologies, so-called "invariants", responsible for the recognition of specific sound events. This thesis tackles the problem by focusing on invariants linked to specific sound generating actions. It follows two main parts. The first is to identify invariants responsible for the recognition of three categories of continuous interactions: rubbing, scratching and rolling. The aim is to develop a real-time sound synthesizer with intuitive controls that enables users to morph continuously between the different interactions (e.g. progressively transform a rubbing sound into a rolling one). The synthesis model is developed in the framework of the "action-object" paradigm, which states that sounds can be described as the result of an action (e.g. scratching) on an object (e.g. a wood plate). This paradigm naturally fits the well-known source-filter approach for sound synthesis, where the perceptually relevant information linked to the object is described in the "filter" part, and the action-related information is described in the "source" part. To derive our generic synthesis model, several approaches are treated: physical models, phenomenological approaches and listening tests with recorded and synthesized sounds. The second part of the thesis deals with the concept of "sonic metaphors" by expanding the object notion to various sound textures. The question raised is the following: given any sound texture, is it possible to modify its intrinsic properties such that it evokes a particular interaction, like rolling or rubbing for instance? To create these sonic metaphors, a cross-synthesis process is used, where the "source" part is based on the sound morphologies linked to the actions previously identified, and the "filter" part renders the sound texture properties. This work, together with the chosen paradigm, offers new perspectives for building a sound language.
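A minimal sketch of the source-filter ("action-object") idea described above, assuming arbitrary mode frequencies and decays for the "object" and simple noise-based "actions"; it is not the author's synthesizer:

```python
# Illustrative source-filter sketch: a noisy "action" signal excites a few resonant
# "object" modes. Mode frequencies and decays below are arbitrary assumptions.
import numpy as np

def resonator(x, f0, decay_s, sr):
    """Second-order (two-pole) resonator applied sample by sample."""
    r = np.exp(-1.0 / (decay_s * sr))
    a1 = -2 * r * np.cos(2 * np.pi * f0 / sr)
    a2 = r * r
    y = np.zeros_like(x)
    for n in range(len(x)):
        y[n] = x[n] - a1 * (y[n - 1] if n > 0 else 0.0) - a2 * (y[n - 2] if n > 1 else 0.0)
    return y

sr = 16000
# "Action" sources: dense noise for rubbing, sparse impulsive bursts for scratching.
rubbing = np.random.randn(sr) * 0.1
scratching = np.random.randn(sr) * (np.random.rand(sr) < 0.01)
# "Object": a small set of resonant modes (frequency in Hz, decay in seconds).
modes = [(320.0, 0.08), (870.0, 0.05), (1400.0, 0.03)]
rub_sound = sum(resonator(rubbing, f, d, sr) for f, d in modes)
scratch_sound = sum(resonator(scratching, f, d, sr) for f, d in modes)
print(rub_sound.shape, scratch_sound.shape)
```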
Caracalla, Hugo. "Sound texture synthesis from summary statistics." Electronic Thesis or Diss., Sorbonne université, 2019. http://www.theses.fr/2019SORUS676.
Sound textures are a wide class of sounds that includes the sound of falling rain, the hubbub of a crowd and the chirping of flocks of birds. All these sounds present an element of unpredictability which is not commonly sought after in sound synthesis, requiring the use of dedicated algorithms. However, the diverse audio properties of sound textures make designing an algorithm able to convincingly recreate varied textures a complex task. This thesis focuses on parametric sound texture synthesis. In this paradigm, a set of summary statistics is extracted from a target texture and iteratively imposed onto white noise. If the set of statistics is appropriate, the white noise is modified until it resembles the target, sounding as if it had been recorded moments later. In a first part, we propose improvements to a perception-based parametric method. These improvements aim at improving its synthesis of sharp and salient events, mainly by altering and simplifying its imposition process. In a second part, we adapt a parametric visual texture synthesis method, based on statistics extracted by a convolutional neural network (CNN), to work on sound textures. We modify the computation of its statistics to fit the properties of sound signals, alter the architecture of the CNN to best fit the audio elements present in sound textures, and use a time-frequency representation taking both magnitude and phase into account.
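To make the parametric paradigm concrete, here is a toy sketch, far simpler than the methods discussed in the abstract, in which white noise is iteratively pushed toward a single summary statistic of a target (its average band magnitudes); all settings are assumptions:

```python
# Toy "impose statistics on white noise" loop: the noise spectrogram is repeatedly
# rescaled so its mean band magnitudes match the target's. Purely illustrative.
import numpy as np

def stft(x, n_fft=512, hop=128):
    frames = np.lib.stride_tricks.sliding_window_view(x, n_fft)[::hop]
    return np.fft.rfft(frames * np.hanning(n_fft), axis=1)

def istft(X, n_fft=512, hop=128, length=None):
    frames = np.fft.irfft(X, n=n_fft, axis=1) * np.hanning(n_fft)
    out = np.zeros((X.shape[0] - 1) * hop + n_fft)
    for i, fr in enumerate(frames):
        out[i * hop:i * hop + n_fft] += fr          # overlap-add resynthesis
    return out[:length] if length else out

sr = 16000
target = np.random.randn(sr) * np.sin(2 * np.pi * 3 * np.arange(sr) / sr) ** 2  # stand-in texture
noise = np.random.randn(sr)
target_stat = np.abs(stft(target)).mean(axis=0)     # summary statistic: mean band magnitude
for _ in range(10):                                  # iterative imposition
    N = stft(noise)
    N *= target_stat / (np.abs(N).mean(axis=0) + 1e-12)
    noise = istft(N, length=len(noise))
print(noise.shape)
```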
Serquera, Jaime. "Sound synthesis with cellular automata." Thesis, University of Plymouth, 2012. http://hdl.handle.net/10026.1/1189.
Picard-Limpens, Cécile. "Expressive Sound Synthesis for Animation." Phd thesis, Université de Nice Sophia-Antipolis, 2009. http://tel.archives-ouvertes.fr/tel-00440417.
Picard, Limpens Cécile. "Expressive sound synthesis for animation." Nice, 2009. http://www.theses.fr/2009NICE4075.
The main objective of this work is to provide tools for the real-time, realistic and expressive synthesis of sounds resulting from physical interactions between objects in a virtual scene. Indeed, such sound effects, for example the noises of collisions between solids or of continuous interactions between surfaces, cannot be predefined and computed in a pre-production phase. In this context, we propose two approaches, the first based on modeling the physical phenomena at the origin of sound emission, the second based on the processing of audio recordings. In the physical approach, the sound source is treated as the combination of an excitation and a resonator. First, we present an original technique that renders the interaction force between surfaces in the case of continuous contacts, such as rolling; this technique relies on the analysis of the textures used for the graphical rendering of the surfaces of the virtual scene. Second, we propose a robust and flexible modal analysis method that renders the acoustic vibrations of the resonator. Besides handling a wide variety of geometries and offering multi-resolution modal parameters, the method solves the problem of coherence between physical simulation and sound synthesis, a problem frequently encountered in animation. In the empirical approach, we propose a granular technique that expresses sound synthesis as a coherent arrangement of sound particles, or grains. The method first pre-processes recordings to build sound material in a compact form; this material is then manipulated in real time, on the one hand for a complete resynthesis of the original recordings, and on the other hand for flexible use driven by the data reported by the simulation engine and/or by predefined procedures. Finally, attention is paid to fracture sounds, given their frequent use in virtual environments and in particular in video games. While the complexity of the phenomenon makes a purely physical model very costly, the use of recordings is also ill-suited to the great variety of sonic micro-events. The thesis therefore proposes a hybrid model and possible strategies for combining a physical and an empirical approach, aiming to reproduce the sound event of a fracture, from its initiation to the creation of micro-debris.
Lee, JungSuk. "Categorization and modeling of sound sources for sound analysis/synthesis." Thesis, McGill University, 2013. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=116954.
In this thesis, we studied several analysis/synthesis schemes within the framework of source/filter models, with particular attention paid to the source component. This research improves the methods and tools offered to sound designers, composers and musicians who wish to flexibly analyze and synthesize sounds intended for video games, film or computer music, ranging from abstract and complex sounds to those produced by existing musical instruments. First, an analysis-synthesis scheme is introduced for reproducing the sound of a rolling ball. This scheme is based on the hypothesis that the rolling sound is generated by the concatenation of micro-contacts between the ball and the surface, each with its own series of resonances. The contact-time information is extracted from the rolling sound to be reproduced by means of an onset-detection procedure used to segment it. The sound segments thus isolated are assumed to correspond to the micro-contacts between the ball and the surface. A linear prediction algorithm is then applied per sub-band, the sub-bands having been extracted beforehand, in order to model time-varying resonances and anti-resonances. The segments are then re-synthesized, overlapped and added to reproduce the complete rolling sound. This "granular" analysis/synthesis approach is also applied to several environmental sounds (rain, fireworks, walking, clapping) in order to further explore the influence of the source type on sound analysis/synthesis. The proposed system allows flexible analysis and synthesis of complex sounds, with the possibility of adding temporal modifications. Finally, a novel approach for extracting the excitation signal of a plucked string sound is presented in the context of physically informed source/filter schemes. To this end, we introduce a windowing method and an inverse filtering method based on the way the wave propagates along the string. In addition, a parametric model of the plucking excitation and a method for estimating its parameters are detailed.
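Since the abstract discusses plucked-string excitation in a source/filter setting, a minimal Karplus-Strong-style pluck is sketched below purely as background illustration; it is not the excitation-extraction method developed in the thesis:

```python
# A minimal Karplus-Strong-style plucked string: a noise burst circulates in a
# delay line whose averaging loop acts as the string's low-pass "filter".
import numpy as np

def pluck(f0, dur, sr=16000, damping=0.996):
    period = int(round(sr / f0))
    delay = np.random.uniform(-1, 1, period)       # excitation: noise burst in the delay line
    out = np.empty(int(dur * sr))
    for n in range(len(out)):
        out[n] = delay[n % period]
        # averaging adjacent samples is the loop's low-pass string filter
        delay[n % period] = damping * 0.5 * (delay[n % period] + delay[(n + 1) % period])
    return out

tone = pluck(196.0, 1.5)   # roughly a G3 string; parameters are illustrative
print(tone.shape)
```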
García, Ricardo A. (Ricardo Antonio) 1974. "Automatic generation of sound synthesis techniques." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/61542.
Includes bibliographical references (p. 97-98).
Digital sound synthesizers, ubiquitous today in sound cards, software and dedicated hardware, use algorithms (Sound Synthesis Techniques, SSTs) capable of generating sounds similar to those of acoustic instruments and even totally novel sounds. The design of SSTs is a very hard problem. It is usually assumed that it requires human ingenuity to design an algorithm suitable for synthesizing a sound with certain characteristics. Many of the SSTs commonly used are the fruit of experimentation and long refinement processes. An SST is determined by its "functional form" and "internal parameters". Design of SSTs is usually done by selecting a fixed functional form from a handful of commonly used SSTs and performing a parameter estimation technique to find the set of internal parameters that best emulates the target sound. A new approach for automating the design of SSTs is proposed. It uses a set of examples of the desired behavior of the SST in the form of "inputs + target sound". The approach is capable of suggesting novel functional forms and their internal parameters, suited to follow the given examples closely. Design of an SST is stated as a search problem in the SST space (the space spanned by all the possible valid functional forms and internal parameters, within certain limits to make it practical). This search is done using evolutionary methods, specifically Genetic Programming (GP). A custom language for representing and manipulating SSTs as topology graphs and expression trees is proposed, as well as the mapping rules between both representations. Fitness functions that use analytical and perceptual distance metrics between the target and produced sounds are discussed. The AGeSS system (Automatic Generation of Sound Synthesizers) developed in the Media Lab is outlined, and some SSTs and their evolution are shown.
by Ricardo A. García.
S.M.
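The García abstract above frames SST design as a search over functional forms and parameters. As a greatly simplified stand-in for the genetic programming search it describes (a random search over a fixed FM functional form rather than an evolved topology), the following sketch tunes an FM patch toward a target spectrum; every choice here is an illustrative assumption:

```python
# Crude random search over FM parameters to match a target spectrum: a toy
# stand-in for searching "SST space", not the GP system described in the thesis.
import numpy as np

sr, dur = 16000, 0.5
t = np.arange(int(sr * dur)) / sr
target = np.sin(2 * np.pi * 440 * t + 2.0 * np.sin(2 * np.pi * 220 * t))  # example "target sound"
target_spec = np.abs(np.fft.rfft(target))

def fm(carrier, mod, index):
    return np.sin(2 * np.pi * carrier * t + index * np.sin(2 * np.pi * mod * t))

def fitness(params):
    spec = np.abs(np.fft.rfft(fm(*params)))
    return -np.linalg.norm(spec - target_spec)      # closer spectrum -> higher fitness

rng = np.random.default_rng(0)
best, best_fit = None, -np.inf
for _ in range(2000):
    cand = (rng.uniform(50, 1000), rng.uniform(50, 1000), rng.uniform(0, 5))
    f = fitness(cand)
    if f > best_fit:
        best, best_fit = cand, f
print(best, best_fit)
```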
Hahn, Henrik. "Expressive sampling synthesis. Learning extended source-filter models from instrument sound databases for expressive sample manipulations." Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066564/document.
Within this thesis, an imitative sound synthesis system is introduced that is applicable to most quasi-harmonic instruments. The system is based upon single-note recordings that represent a quantized version of an instrument's possible timbre space with respect to its pitch and intensity dimensions. A transformation method then allows rendering sound signals with continuous values of the expressive control parameters which are perceptually coherent with their acoustic equivalents. A parametric instrument model is therefore presented, based on an extended source-filter model with separate manipulation of a signal's harmonic and residual components. A subjective evaluation procedure is presented to assess a variety of transformation results by direct comparison with unmodified recordings, in order to determine how perceptually close the synthesis results are to their respective acoustic correlates.
Zita, Andreas. "Computational Real-Time Sound Synthesis of Rain." Thesis, Linköping University, Department of Science and Technology, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-1830.
Real-time sound synthesis in computer games using physical modeling is an area with great potential. To date, most sounds are pre-recorded to match a certain event. By instead using a model to describe the sound-producing event, a number of problems encountered when using pre-recorded sounds can be avoided. This thesis introduces these problems and presents a solution. It also evaluates one such physical model, for rain sound, and implements a real-time simulation to demonstrate the advantages of the method.
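A toy rain generator in the spirit of the statistical description of rain given above (not the model evaluated in the thesis): drop impacts arrive at random times and each is rendered as a short decaying noise burst; rates and decays are assumptions:

```python
# Toy rain: Poisson-like drop arrivals, each a short exponentially decaying noise burst.
import numpy as np

def rain(duration_s, sr=16000, drops_per_s=400.0):
    n = int(duration_s * sr)
    out = np.zeros(n)
    rng = np.random.default_rng(1)
    onsets = np.flatnonzero(rng.random(n) < drops_per_s / sr)   # random drop times
    for start in onsets:
        length = int(rng.integers(40, 200))                     # 2.5 to 12.5 ms burst
        burst = rng.standard_normal(length) * np.exp(-np.linspace(0, 6, length))
        end = min(start + length, n)
        out[start:end] += rng.uniform(0.05, 0.3) * burst[:end - start]
    return out

shower = rain(2.0)
print(shower.shape)
```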
Nilsson, Robin Lindh. "Contact Sound Synthesis in Real-time Applications." Thesis, Blekinge Tekniska Högskola, Institutionen för kreativa teknologier, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3938.
Synthesizing the sounds that arise when physics objects collide in a virtual environment can give more dynamic and realistic sound effects, but is computationally demanding. In this degree project, sound synthesis in the frequency domain was implemented based on an earlier study and then further developed to exploit multiple threads. According to measurements in three different test cases, the multithreaded implementation could synthesize 80% more sound waves than the single-threaded one on an i7 processor.
Author's website: www.robinerd.com
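For context, the following is a minimal modal impact synthesizer of the general kind the Nilsson abstract above parallelizes: each mode is an independent decaying sinusoid, which is what makes splitting the work across threads straightforward. The mode table is an illustrative assumption, not data from the thesis:

```python
# Minimal modal impact synthesis: a sum of exponentially decaying sinusoids.
# Each mode is independent of the others, so the loop parallelizes trivially.
import numpy as np

def modal_impact(modes, dur, sr=16000):
    t = np.arange(int(dur * sr)) / sr
    out = np.zeros_like(t)
    for freq, amp, decay in modes:                 # (Hz, amplitude, decay time in s)
        out += amp * np.exp(-t / decay) * np.sin(2 * np.pi * freq * t)
    return out

modes = [(180.0, 1.0, 0.30), (412.0, 0.6, 0.20), (1030.0, 0.35, 0.08), (2350.0, 0.2, 0.03)]
hit = modal_impact(modes, 1.0)
print(hit.shape)
```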
Vigliensoni, Martin Augusto. "Touchless gestural control of concatenative sound synthesis." Thesis, McGill University, 2011. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=104846.
This thesis presents a new interface for musical expression combining concatenative sound synthesis with position-tracking technologies. The work begins with an overview of touchless position-tracking devices, studying their operating principles and characteristics, and reviewing examples of their application in musical contexts. Particular attention is given to four systems: their technical specifications and their performance (evaluated with quantitative metrics) are compared experimentally. Concatenative synthesis is then described. This sound synthesis technique consists in synthesizing a target musical sequence from pre-recorded sounds, selected and concatenated according to how well they match the target. Three implementations of this technique are compared, allowing one to be chosen for our application. Finally, we describe SoundCloud, a new interface which, by adding a visual front end to the concatenative synthesis method, extends its control possibilities: SoundCloud makes it possible to control sound synthesis using free-hand gestures to navigate a three-dimensional space of descriptors computed from the sounds of a database.
Valsamakis, Nikolas. "Non-standard sound synthesis with dynamic models." Thesis, University of Plymouth, 2013. http://hdl.handle.net/10026.1/2841.
Liuni, Marco. "Automatic adaptation of sound analysis and synthesis." Paris 6, 2012. http://www.theses.fr/2012PA066105.
In Time-Frequency Analysis, adaptivity is the possibility to conceive representations and operators whose characteristics can be modeled according to their input. In this work, we look for methods providing a local variation of the time-frequency resolution for sound analysis and re-synthesis. The first and fundamental objective is thus the formal definition of mathematical models whose interpretation leads to theoretical and algorithmic methods for adaptive sound analysis. The second objective is to make the adaptation automatic; we establish criteria to define the best local time-frequency resolution through the optimization of appropriate sparsity measures. To be able to exploit adaptivity in spectral sound processing, we then introduce efficient re-synthesis methods based on analyses with varying resolution, designed to preserve and improve the existing sound transformation techniques. Our main assumption is that algorithms based on adaptive representations will help to establish a generalization and simplification of the application of signal processing methods that today still require expert knowledge. In particular, the need for manual low-level configuration is a major limitation for the use of advanced signal processing methods by large communities. The possibility of having an automatic time-frequency resolution drastically limits the parameters to set, without degrading, and even improving, the processing quality: the result is an improvement of the user experience, even with high-quality sound processing techniques like transposition and time-stretching.
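One way to picture the sparsity-driven adaptation described above is to pick, per signal segment, the analysis window whose spectrum is sparsest (lowest Rényi entropy). The sketch below illustrates that principle only; it is not the thesis' algorithm, and all settings are assumed:

```python
# Choosing a local STFT window size by a sparsity criterion (Rényi entropy).
import numpy as np

def renyi_entropy(p, alpha=3.0):
    p = p / (p.sum() + 1e-12)
    return np.log((p ** alpha).sum() + 1e-12) / (1 - alpha)

def best_window(segment, candidates=(256, 512, 1024, 2048)):
    scores = {}
    for n in candidates:
        if len(segment) < n:
            continue
        frame = segment[:n] * np.hanning(n)
        spec = np.abs(np.fft.rfft(frame)) ** 2
        scores[n] = renyi_entropy(spec)            # lower entropy = sparser spectrum
    return min(scores, key=scores.get)

sr = 16000
t = np.arange(sr) / sr
signal = np.sin(2 * np.pi * (200 + 400 * t) * t)   # slow chirp...
signal[8000:8032] += 5.0                           # ...plus a sharp transient
print(best_window(signal[0:4096]), best_window(signal[7936:8192]))
```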
Schwarz, Diemo. "Spectral envelopes in sound analysis and synthesis." [S.l.] : Universität Stuttgart , Fakultät Informatik, 1998. http://www.bsz-bw.de/cgi-bin/xvms.cgi?SWB7084238.
Wang, Shuai School of Electrical Engineering & Telecommunication UNSW. "Soundfield analysis and synthesis: recording, reproduction and compression." Awarded by:University of New South Wales. School of Electrical Engineering and Telecommunication, 2007. http://handle.unsw.edu.au/1959.4/31502.
Giannakis, Konstantinos. "Sound mosaics : a graphical user interface for sound synthesis based on audio-visual associations." Thesis, Middlesex University, 2001. http://eprints.mdx.ac.uk/6634/.
Coleman, Graham Keith. "Descriptor control of sound transformations and mosaicing synthesis." Doctoral thesis, Universitat Pompeu Fabra, 2016. http://hdl.handle.net/10803/392138.
Sampling, as a musical or synthesis technique, is a way to reuse recorded musical expressions. This dissertation explores strategies for extending sampling synthesis, in particular mosaicing synthesis. The latter tries to imitate a target signal from a set of source signals, transforming and ordering those signals in time, much as one would make a mosaic out of broken tiles. One of these extensions consists in the automatic control of sound transformations towards targets defined in a perceptual space. The chosen strategy uses models that predict how the input sound will be transformed as a function of selected parameters. In one case the models are known, and numerical search can be used to find adequate parameters; in the other, the models are unknown and must be learned from data. Another extension focuses on the sampling itself. By mixing several sounds at once, it may be possible to make better imitations, more specifically to improve the harmony of the result, among other aspects. However, using multiple mixtures creates new computational problems, especially if properties such as continuity, important for high-quality sampling synthesis, are to be preserved. This thesis presents a new mosaicing synthesizer that incorporates all of these elements: automatic control of sound transformations using models, mixtures based on perceptual harmony and timbre descriptors, and preservation of the continuity of the sampling context and of the transformation parameters. Using listening tests, the proposed hybrid algorithm was compared with classical and contemporary algorithms: the hybrid algorithm gave positive results on a variety of quality measures.
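A bare-bones mosaicing loop, shown only to make the target-imitation idea concrete (it is not the hybrid algorithm of the thesis): target frames are matched to corpus frames by two simple descriptors and overlap-added; the descriptors and their normalization are assumptions:

```python
# Frame-level mosaicing: describe corpus and target frames by (spectral centroid, RMS),
# pick the nearest corpus frame for each target frame, and overlap-add the result.
import numpy as np

def frames(x, size=1024, hop=512):
    return np.lib.stride_tricks.sliding_window_view(x, size)[::hop]

def describe(frms, sr):
    spec = np.abs(np.fft.rfft(frms * np.hanning(frms.shape[1]), axis=1))
    freqs = np.fft.rfftfreq(frms.shape[1], 1 / sr)
    centroid = (spec * freqs).sum(axis=1) / (spec.sum(axis=1) + 1e-12)
    rms = np.sqrt((frms ** 2).mean(axis=1))
    return np.stack([centroid / sr, rms], axis=1)    # crude normalization

sr = 16000
corpus = np.random.randn(5 * sr)                      # stand-in for recorded material
target = np.sin(2 * np.pi * 330 * np.arange(2 * sr) / sr)
cf, tf = frames(corpus), frames(target)
cd, td = describe(cf, sr), describe(tf, sr)
out = np.zeros(len(target))
win = np.hanning(1024)
for i, d in enumerate(td):
    j = np.argmin(((cd - d) ** 2).sum(axis=1))        # nearest corpus frame in descriptor space
    start = i * 512
    out[start:start + 1024] += win * cf[j]
print(out.shape)
```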
Van, den Doel Cornelis Pieter. "Sound synthesis for virtual reality and computer games." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape7/PQDD_0005/NQ38993.pdf.
Mohd, Norowi Noris. "An artificial intelligence approach to concatenative sound synthesis." Thesis, University of Plymouth, 2013. http://hdl.handle.net/10026.1/1606.
Métois, Eric. "Musical sound information : musical gestures and embedding synthesis." Thesis, Massachusetts Institute of Technology, 1997. http://hdl.handle.net/1721.1/29125.
Pearse, Stephen. "Agent-based graphic sound synthesis and acousmatic composition." Thesis, University of Sheffield, 2016. http://etheses.whiterose.ac.uk/15892/.
Mattes, Symeon. "Perceptual models for sound field analysis and synthesis." Thesis, University of Southampton, 2016. https://eprints.soton.ac.uk/397216/.
Повний текст джерелаOrelli, Paiva Guilherme. "Vibroacoustic Characterization and Sound Synthesis of the Viola Caipira." Thesis, Le Mans, 2017. http://www.theses.fr/2017LEMA1045/document.
The viola caipira is a type of Brazilian guitar widely used in popular music. It consists of ten metallic strings arranged in five pairs, tuned in unison or octaves. The thesis focuses on the analysis of the specificities of the musical sounds produced by this instrument, which has been little studied in the literature. The analysis of the motions of plucked strings using a high-speed camera shows the existence of sympathetic vibrations, which result in a sound halo constituting an important perceptual feature. These measurements also reveal the existence of shocks between strings, which have clearly audible consequences. Bridge mobilities are also measured using the wire-breaking method, which is simple to use and inexpensive since it does not require a force sensor. Combined with a high-resolution modal analysis (ESPRIT method), these measurements make it possible to determine the modal shapes at the string/body coupling points and thus to characterize the instrument. A physical model, based on a modal approach, is developed for sound synthesis purposes. It takes into account the string motions along two polarizations, the couplings with the body and the collisions between strings. This model is called a hybrid model because it combines an analytical description of the string vibrations with experimental data describing the body. Simulations in the time domain reveal the main characteristics of the viola caipira.
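To illustrate the sympathetic-vibration behaviour described above, a didactic sketch (not the thesis' hybrid modal model) couples two damped oscillators, standing in for two strings, through a shared bridge spring; all constants are assumptions:

```python
# Two weakly coupled damped oscillators: plucking "string 1" makes "string 2" ring
# sympathetically through the bridge coupling. Semi-implicit Euler integration.
import numpy as np

sr = 16000
dt = 1.0 / sr
f1, f2 = 196.0, 392.0                  # string 2 tuned an octave above string 1
w1, w2 = (2 * np.pi * f1) ** 2, (2 * np.pi * f2) ** 2   # squared angular frequencies
damp = 3.0                             # per-second damping (assumed)
k_bridge = 2.0e4                       # bridge coupling stiffness (assumed)
x = np.array([1e-3, 0.0])              # "pluck" string 1; string 2 starts at rest
v = np.zeros(2)
out = np.empty(sr)
for n in range(sr):
    coupling = k_bridge * (x[1] - x[0])
    a = np.array([-w1 * x[0] + coupling, -w2 * x[1] - coupling]) - damp * v
    v += dt * a
    x += dt * v
    out[n] = x[0] + x[1]               # bridge signal: sum of both strings
print(np.abs(out).max())
```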
Misdariis, Nicolas. "Synthèse - Reproduction - Perception des Sons Instrumentaux et Environnementaux : Application au Design Sonore." Thesis, Paris, CNAM, 2014. http://www.theses.fr/2015CNAM0955/document.
This dissertation presents a collection of studies and research works articulated around three main topics: synthesis, reproduction and perception of sounds, considering both musical and environmental sounds. Moreover, it focuses on an application field, sound design, which broadly involves the conception of intentional everyday sounds. The document follows a rather uniform structure and contains, for each part, a general presentation of the topic bringing together theoretical elements and an overview of the state of the art, followed by more detailed developments focusing on the specific matters related to each topic (modal formalism in sound synthesis by physical modeling for the "Synthesis" section; measurement and control of musical instrument directivity for the "Reproduction" section; timbre and sound source identification for the "Perception" section), and then by a detailed presentation of the personal works related to each matter, in some cases in the form of published papers. Together, these elements of knowledge and experience offer a personal and original contribution, deliberately placed in a broad, multidisciplinary and applied framework.
Huang, Zhendong. "On the sound produced by a synthetic jet device." Thesis, Boston University, 2014. https://hdl.handle.net/2144/21179.
A synthetic jet is a quasi-steady jet of fluid generated by an oscillating pressure drop across an orifice, produced by a piston-like actuator. A unique advantage of the synthetic jet is that it is able to transfer linear momentum without requiring an external fluid source, and it has therefore attracted much research within the past decade. Principal applications include aerodynamic boundary-layer separation control, heat transfer enhancement, mixing enhancement, and flow-generated sound minimization. In this thesis, the method of deriving the volume flux equation for a duct is first reviewed; combined with this method, a simplified synthetic jet model is presented, and, based on the principles of aerodynamic sound, the pressure fluctuation in the acoustic far field is predicted. This model is then used to predict the minimum synthetic jet cavity resonance frequency, acoustic power, acoustic efficiency, root-mean-square jet speed, and acoustic spectrum, and their dependence on the following independent parameters: the duct length and radius, the aperture radius, the piston vibration frequency, and the maximum piston velocity.
Chand, G. "Real-time digital synthesis of transient waveforms : Complex transient sound waveforms are analysed for subsequent real-time synthesis with variable parameters." Thesis, University of Bradford, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.376681.
Pyž, Gražina. "Analysis and synthesis of Lithuanian phoneme dynamic sound models." Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2013. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2013~D_20131125_134056-50408.
Speech is a natural way for people to communicate. Text-to-speech (TTS) problems arise in various areas: reading e-mail aloud, reading text from electronic books aloud, and services for people with speech impairments. Building a speech synthesizer is an extremely complex task, and researchers in many countries are trying to automate speech synthesis. In order to solve the problem of Lithuanian speech synthesis, new mathematical models of Lithuanian speech sounds need to be developed. The object of the dissertation is dynamic models of Lithuanian vowel and semivowel phonemes. The proposed dynamic vowel and semivowel phoneme models can be used to build a formant speech synthesizer. The proposed modeling framework for describing the sounds is based on a mathematical model of vowel and semivowel phonemes together with an automatic procedure for estimating the fundamental frequency and the inputs. The phoneme signal is obtained as the output of a multiple-input single-output (MISO) system, which consists of parallel single-input single-output (SISO) systems whose input amplitudes vary over time. Two synthesis methods are developed in the dissertation: harmonic and formant-based. Experimental results showed that vowels and semivowels synthesized with this system sound reasonably natural.
Polfreman, Richard. "User-interface design for software based sound synthesis systems." Thesis, University of Hertfordshire, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.363503.
Itagaki, Takebumi. "Real-time sound synthesis on a multi-processor platform." Thesis, Durham University, 1998. http://etheses.dur.ac.uk/4890/.
Dzjaparidze, Michaël. "Exploring the creative potential of physically inspired sound synthesis." Thesis, Queen's University Belfast, 2015. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.695331.
Fell, Mark. "Works in sound and pattern synthesis : folio of works." Thesis, University of Surrey, 2013. http://epubs.surrey.ac.uk/804661/.
Bouënard, Alexandre. "Synthesis of Music Performances: Virtual Character Animation as a Controller of Sound Synthesis." Phd thesis, Université de Bretagne Sud, 2009. http://tel.archives-ouvertes.fr/tel-00497292.
Incerti, Eric. "Synthèse de sons par modélisation physique de structures vibrantes : applications pour la création musicale par ordinateur." Grenoble INPG, 1996. http://www.theses.fr/1996INPG0115.
Abbado, Adriano. "Perceptual correspondences of abstract animation and synthetic sound." Thesis, Massachusetts Institute of Technology, 1988. http://hdl.handle.net/1721.1/71112.
Includes bibliographical references (leaves 47-52).
by Adriano Abbado.
M.S.
Villeneuve, Jérôme. "Mise en oeuvre de méthodes de résolution du problème inverse dans le cadre de la synthèse sonore par modélisation physique masses-interactions." Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENS041.
An "inverse problem" usually consists in an inversion of the cause-to-effect relation: it is not about producing a "cause" phenomenon from a given "effect" phenomenon, but rather about defining a "cause" phenomenon of which an observed effect would be the consequence. In the context of the CORDIS-ANIMA physical modeling and simulation formalism, and in particular within the GENESIS interface for sound synthesis and musical creation, both built by the ACROE-ICA laboratory, it is possible to identify such a problem: considering a sound, which physical model could be built to produce it? This question is fundamental when we consider the creative process engaged by the users of such tools. Indeed, being able to describe and to conceive the process which engenders a previously defined phenomenon or sonic (musical) event is an inherent need of the activity of musical creation. Reciprocally, having elements for analyzing and decomposing the sound phenomenon's production chain makes it possible to consider, by means of representation, direct processing and re-composition, the production of very rich and expressive phenomena that present an intimate coherence with the natural sounds upon which perceptual and cognitive experience is built. To approach this problem, we formulated and studied two underlying fundamental aspects. The first covers the description of the final result, the sound phenomenon. This description can be of different kinds and is often complex with regard to objective and quantitative matters; our approach therefore consisted first in a reduction of the general problem by considering spectral content, or "modal structure", defined by a phenomenological, signal-based approach. The second aspect concerns the functional and parametric nature of models built with the CORDIS-ANIMA paradigm. Since every model is inherently a metaphor of an instrumental situation, each one must be conceived as an interactive combination of an "instrument/instrumentalist" pair. From these specifications we defined a single inverse problem, whose resolution required developing tools to translate phenomenological data into parametric data. Finally, this work has led to the implementation of these new tools within the GENESIS software, as well as in its didactic environment. The resulting models fulfill coherence and clarity criteria and are intended to be reintegrated into the creative process: they do not constitute an end in themselves, but rather a support offered to the user in order to complete their process. As a conclusion to this work, we detail further directions that could be pursued in order to extend or possibly reformulate the inverse problem.
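For the reduced "modal structure" version of the inverse problem described above, a textbook-level sketch (not the GENESIS/CORDIS-ANIMA procedure itself) maps each target mode's frequency and decay time back to the stiffness and damping of a unit-mass oscillator:

```python
# From a desired modal structure back to physical parameters, for independent
# unit-mass oscillators: sigma = Z/(2m) and omega_d = sqrt(K/m - sigma^2),
# so K = m*(omega_d^2 + sigma^2) and Z = 2*m*sigma. Illustrative only.
import numpy as np

def modes_to_parameters(freqs_hz, decay_s, mass=1.0):
    sigma = 1.0 / np.asarray(decay_s)                  # exponential decay rate
    omega_d = 2 * np.pi * np.asarray(freqs_hz)         # damped angular frequency
    K = mass * (omega_d ** 2 + sigma ** 2)             # stiffness per oscillator
    Z = 2.0 * mass * sigma                             # viscous damping per oscillator
    return K, Z

# Hypothetical target modal structure extracted from a sound analysis.
freqs = [110.0, 261.6, 523.3, 1046.5]
decays = [1.2, 0.8, 0.5, 0.2]
K, Z = modes_to_parameters(freqs, decays)
for f, tau, k, z in zip(freqs, decays, K, Z):
    print(f"{f:7.1f} Hz, tau={tau:.2f}s  ->  K={k:12.1f}, Z={z:6.2f}")
```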
Rodgers, Tara. "Synthesizing sound: metaphor in audio-technical discourse and synthesis history." Thesis, McGill University, 2011. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=97090.
Synthesized sound is pervasive in contemporary music and in sound environments worldwide, yet relatively little has been written about its meanings or cultural origins. This thesis constructs a long history of synthesized sound over the century before the mass introduction of the synthesizer in the 1970s, and attends to the long-standing and mythical themes that surface in contemporary audio-technical discourse. The research draws on archival documents, including late nineteenth- and early twentieth-century acoustics texts, inventors' publications and correspondence, and synthesizer manuals from the 1940s through the 1970s. As a feminist account of synthesized sound, the project examines how metaphors in audio-technical discourse carry notions of identity and difference. Through the analysis of key concepts in the history of synthesized sound, I argue that the language of audio technology and its representation, which usually passes as neutral, in fact privileges the perspective of an archetypal masculine, white, Western subject. I identify two primary metaphors for conceiving of electronic sounds that were in use at the turn of the twentieth century and that continue to shape an epistemology of sound: electronic sounds as waves, and electronic sounds as individuals. The wave metaphor, in circulation since antiquity, produces an affective relation to audio technologies, typically grounded in a masculine and colonizing standpoint, in which the creation and control of electronic sound entails the pleasure and danger of navigating a turbulent sea. The second metaphor took shape over the nineteenth century, when sounds, like modern living organisms and subjects, came to be interpreted as genuinely individual entities with variable properties amenable to analysis and control. Notions of sonic individuation and variability emerged in the context of Darwinian thought, as a cultural fascination with electricity as a kind of immutable power took hold. Methods of classifying sounds as individuals, sorted according to desirable or undesirable aesthetic variations, were closely tied to epistemologies of sex and racial difference in Western philosophy and modern science. Electronic sound is also heir to other histories, including the uses of notions such as synthesis and the synthetic across various cultural fields, the design of early mechanical and electronic devices, and the evolution of musical modernity together with the development of an amateur public for electronic culture. The long-term, broad-spectrum perspective on the history of sound synthesis adopted in this study aims to challenge received truths in prevailing audio-technical discourse and to resist the linear, coherent histories still found too often in the history of technology and new media. The thesis contributes in an important way to the field of sound and media studies, which would in turn benefit from feminist input in general and, more specifically, from an account of the forms and meanings of sound synthesis technologies.
Moreover, while feminist scholars have extensively theorized new technological and visual cultures, few have explored sound and audio technologies. This project seeks to open new directions for a field of feminist sound studies, taking a historical perspective on notions of identity and difference in audio-technical discourse while asserting the usefulness of sound for feminist thought.
Yadegari, Shahrokh David. "Self-similar synthesis on the border between sound and music." Thesis, Massachusetts Institute of Technology, 1992. http://hdl.handle.net/1721.1/70661.
Kesterton, Anthony James. "The synthesis of sound with application in a MIDI environment." Thesis, Rhodes University, 1991. http://hdl.handle.net/10962/d1006701.
Carey, Benedict Eris. "Notation Sequence Generation and Sound Synthesis in Interactive Spectral Music." Thesis, The University of Sydney, 2013. http://hdl.handle.net/2123/9517.
Möhlmann, Daniel [Verfasser], Otthein [Akademischer Betreuer] Herzog, and Jörn [Akademischer Betreuer] Loviscach. "A Parametric Sound Object Model for Sound Texture Synthesis / Daniel Möhlmann. Gutachter: Otthein Herzog ; Jörn Loviscach. Betreuer: Otthein Herzog." Bremen : Staats- und Universitätsbibliothek Bremen, 2011. http://d-nb.info/1071992430/34.
Strandberg, Carl. "Mediating Interactions in Games Using Procedurally Implemented Modal Synthesis : Do players prefer and choose objects with interactive synthetic sounds over objects with traditional sample based sounds?" Thesis, Luleå tekniska universitet, Institutionen för konst, kommunikation och lärande, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-68015.
Roche, Fanny. "Music sound synthesis using machine learning : Towards a perceptually relevant control space." Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALT034.
One of the main challenges of the synthesizer market and of research in sound synthesis nowadays lies in proposing new forms of synthesis allowing the creation of brand-new sonorities while offering musicians more intuitive and perceptually meaningful controls to help them reach the perfect sound more easily. Indeed, today's synthesizers are very powerful tools that provide musicians with a considerable number of possibilities for creating sonic textures, but the control of parameters still lacks user-friendliness and may require expert knowledge about the underlying generative processes. In this thesis, we are interested in developing and evaluating new data-driven machine learning methods for music sound synthesis allowing the generation of brand-new high-quality sounds while providing high-level, perceptually meaningful control parameters. The first challenge of this thesis was thus to characterize musical synthetic timbre by evidencing a set of perceptual verbal descriptors that are both frequently and consensually used by musicians. Two perceptual studies were conducted: a free verbalization test enabling us to select eight commonly used terms for describing synthesizer sounds, and a semantic scale analysis enabling us to quantitatively evaluate the use of these terms to characterize a subset of synthetic sounds, as well as to analyze how consensual they were. In a second phase, we investigated the use of machine learning algorithms to extract a high-level representation space with interesting interpolation and extrapolation properties from a dataset of sounds, the goal being to relate this space to the perceptual dimensions evidenced earlier. Following previous studies interested in using deep learning for music sound synthesis, we focused on autoencoder models and carried out an extensive comparative study of several kinds of autoencoders on two different datasets. These experiments, together with a qualitative analysis made with a non-real-time prototype developed during the thesis, allowed us to validate the use of such models, and in particular of the variational autoencoder (VAE), as relevant tools for extracting a high-level latent space in which we can navigate smoothly and create new sounds. However, no link between this latent space and the perceptual dimensions evidenced by the perceptual tests emerged naturally. As a final step, we therefore tried to enforce perceptual supervision of the VAE by adding a regularization term during the training phase. Using the subset of synthetic sounds used in the second perceptual test and the corresponding perceptual ratings along the eight perceptual dimensions provided by the semantic scale analysis, it was possible to constrain, to a certain extent, some dimensions of the VAE's high-level latent space so as to match these perceptual dimensions. A final comparative test was then conducted in order to evaluate the efficiency of this additional regularization for conditioning the model and (partially) leading to a perceptual control of music sound synthesis.
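One plausible way to write the perceptually regularized training objective sketched in this abstract (the exact weighting and penalty used in the thesis may differ) is, in LaTeX:

```latex
\mathcal{L}(x) \;=\; \mathbb{E}_{q_{\phi}(z\mid x)}\big[\log p_{\theta}(x\mid z)\big]
\;-\; \beta \, D_{\mathrm{KL}}\big(q_{\phi}(z\mid x)\,\|\,p(z)\big)
\;-\; \gamma \sum_{d=1}^{8} \big(z_d - s_d(x)\big)^2
```

Here s_d(x) stands for the semantic-scale rating of sound x along the d-th perceptual dimension, and beta and gamma are assumed trade-off weights: the first two terms are the usual VAE objective, while the last term pulls selected latent dimensions toward the perceptual ratings.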
Desvages, Charlotte Genevieve Micheline. "Physical modelling of the bowed string and applications to sound synthesis." Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/31273.
Masri, Paul. "Computer modelling of sound for transformation and synthesis of musical signals." Thesis, University of Bristol, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.246275.
Treeby, Bradley E. "The effect of hair on human sound localisation cues." University of Western Australia. School of Mechanical Engineering, 2007. http://theses.library.uwa.edu.au/adt-WU2007.0192.