Academic literature on the topic "Sound synthesis"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the topical lists of articles, books, theses, conference proceedings, and other scholarly sources on the topic "Sound synthesis".
Next to every source in the list of references there is an "Add to bibliography" button. Press the button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the scholarly publication in PDF format and read its abstract online whenever these are available in the metadata.
Journal articles on the topic "Sound synthesis"
Kronland-Martinet, R., Ph. Guillemain, and S. Ystad. "Modelling of natural sounds by time–frequency and wavelet representations". Organised Sound 2, no. 3 (November 1997): 179–91. http://dx.doi.org/10.1017/s1355771898009030.
Novkovic, Dragan, Marko Peljevic, and Mateja Malinovic. "Synthesis and analysis of sounds developed from the Bose-Einstein condensate: Theory and experimental results". Muzikologija, no. 24 (2018): 95–109. http://dx.doi.org/10.2298/muz1824095n.
Miner, Nadine E., Timothy E. Goldsmith, and Thomas P. Caudell. "Perceptual Validation Experiments for Evaluating the Quality of Wavelet-Synthesized Sounds". Presence: Teleoperators and Virtual Environments 11, no. 5 (October 2002): 508–24. http://dx.doi.org/10.1162/105474602320935847.
Min, Dongki, Buhm Park, and Junhong Park. "Artificial Engine Sound Synthesis Method for Modification of the Acoustic Characteristics of Electric Vehicles". Shock and Vibration 2018 (2018): 1–8. http://dx.doi.org/10.1155/2018/5209207.
Miner, Nadine E., and Thomas P. Caudell. "A Wavelet Synthesis Technique for Creating Realistic Virtual Environment Sounds". Presence: Teleoperators and Virtual Environments 11, no. 5 (October 2002): 493–507. http://dx.doi.org/10.1162/105474602320935838.
Mandelis, James, and Phil Husbands. "GENOPHONE: Evolving Sounds and Integral Performance Parameter Mappings". International Journal on Artificial Intelligence Tools 15, no. 4 (August 2006): 599–621. http://dx.doi.org/10.1142/s0218213006002837.
Serquera, Jaime, and Eduardo Reck Miranda. "Histogram Mapping Synthesis: A Cellular Automata-Based Technique for Flexible Sound Design". Computer Music Journal 38, no. 4 (December 2014): 38–52. http://dx.doi.org/10.1162/comj_a_00267.
Corbella, Maurizio, and Anna Katharina Windisch. "Sound Synthesis, Representation and Narrative Cinema in the Transition to Sound (1926-1935)". Cinémas 24, no. 1 (February 26, 2014): 59–81. http://dx.doi.org/10.7202/1023110ar.
Wright, Matthew, James Beauchamp, Kelly Fitz, Xavier Rodet, Axel Röbel, Xavier Serra, and Gregory Wakefield. "Analysis/synthesis comparison". Organised Sound 5, no. 3 (December 2000): 173–89. http://dx.doi.org/10.1017/s1355771800005070.
Yuan, J., X. Cao, D. Wang, J. Chen, and S. Wang. "Research on Bus Interior Sound Quality Based on Masking Effects". Fluctuation and Noise Letters 17, no. 4 (September 14, 2018): 1850037. http://dx.doi.org/10.1142/s0219477518500372.
Theses on the topic "Sound synthesis"
Papetti, Stefano. "Sound modeling issues in interactive sonification - From basic contact events to synthesis and manipulation tools". Doctoral thesis, Università degli Studi di Verona, 2010. http://hdl.handle.net/11562/340961.
The work presented in this thesis covers a variety of research topics, spanning from human-computer interaction to physical modeling. What unites these broad areas of interest is the idea of using physically based computer simulations of acoustic phenomena to provide human-computer interfaces with sound feedback that is consistent with the user interaction. In this regard, recent years have seen the emergence of several new disciplines known as — to cite a few — auditory display, sonification, and sonic interaction design. This thesis deals with the design and implementation of efficient sound algorithms for interactive sonification. To this end, it considers the physical modeling of everyday sounds, that is, sounds not belonging to the families of speech and musical sounds.
Fontana, Federico. "Physics-based models for the acoustic representation of space in virtual environments". Doctoral thesis, Università degli Studi di Verona, 2003. http://hdl.handle.net/11562/342240.
This work deals with the simulation of virtual acoustic spaces using physics-based models. The acoustic space is what we perceive about space through our auditory system. Because the models are physical, spatial attributes (such as shape and size) are salient features of their structure, so that space can be represented and manipulated directly through them.
Liao, Wei-Hsiang. "Modelling and transformation of sound textures and environmental sounds". Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066725/document.
The processing of environmental sounds has become an important topic in various areas. Environmental sounds mostly consist of a class of sounds called sound textures. Sound textures are usually non-sinusoidal, noisy, and stochastic. Several studies have shown that humans recognize sound textures through statistics characterizing the envelopes of auditory critical bands. Existing synthesis algorithms can impose some statistical properties to a certain extent, but most of them are computationally intensive. We propose a new analysis-synthesis framework that contains a statistical description consisting of perceptually important statistics and an efficient mechanism to adapt statistics in the time-frequency domain. The quality of the resynthesized sound is at least as good as the state of the art, but more efficient in terms of computation time. The statistical description is based on the STFT. If certain conditions are met, it can also adapt to other filter-bank-based time-frequency representations (TFR). The adaptation of statistics is achieved by using the connection between statistics on the TFR and the spectra of time-frequency domain coefficients. It is possible to adapt only a part of the cross-correlation functions. This allows the synthesis process to focus on important statistics and ignore the irrelevant parts, which provides extra flexibility. The proposed algorithm has several perspectives. It could possibly be used to generate unseen sound textures from artificially created statistical descriptions. It could also serve as a basis for transformations like stretching or morphing. One could also expect to use the model to explore semantic control of sound textures.
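The kind of statistical description mentioned in the abstract above (statistics of band-wise time-frequency envelopes) can be sketched minimally in code. The snippet below is a toy illustration, not the thesis's actual descriptor set: it computes per-band envelope means, variances, and cross-band correlations from an STFT magnitude, assuming plain NumPy, a Hann window, and arbitrary frame sizes.

```python
import numpy as np

def subband_envelope_stats(signal, n_fft=512, hop=128):
    """Toy texture statistics: per-band envelope mean, variance,
    and cross-band correlations from an STFT magnitude.
    (Perceptual descriptor sets in the literature are richer.)"""
    n_frames = 1 + (len(signal) - n_fft) // hop
    window = np.hanning(n_fft)
    frames = np.stack([signal[i * hop:i * hop + n_fft] * window
                       for i in range(n_frames)])
    env = np.abs(np.fft.rfft(frames, axis=1))   # (frames, bins) envelopes
    mean = env.mean(axis=0)                     # first-order statistic
    var = env.var(axis=0)                       # second-order statistic
    corr = np.corrcoef(env.T)                   # cross-band correlations
    return mean, var, corr

rng = np.random.default_rng(0)
noise = rng.standard_normal(16000)
mean, var, corr = subband_envelope_stats(noise)
```

A synthesis loop would then compare these statistics between a target texture and a candidate signal, adjusting the candidate until they match.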
Chapman, David P. "Playing with sounds : a spatial solution for computer sound synthesis". Thesis, University of Bath, 1996. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.307047.
Lee, Chung. "Sound texture synthesis using an enhanced overlap-add approach". View abstract or full-text, 2008. http://library.ust.hk/cgi/db/thesis.pl?CSED%202008%20LEE.
Texto completoConan, Simon. "Contrôle intuitif de la synthèse sonore d’interactions solidiennes : vers les métaphores sonores". Thesis, Ecole centrale de Marseille, 2014. http://www.theses.fr/2014ECDM0012/document.
Perceptual control (i.e., from evocations) of sound synthesis processes is a current challenge. Indeed, sound synthesis models generally involve many low-level control parameters whose manipulation requires a certain expertise with respect to the sound generation process. Thus, intuitive control of sound generation is interesting for users, and especially non-experts, because they can create and control sounds from evocations. Such a control is not immediate and is based on strong assumptions linked to our perception, and especially the existence of acoustic morphologies, so-called "invariants", responsible for the recognition of specific sound events. This thesis tackles the problem by focusing on invariants linked to specific sound-generating actions. It comprises two main parts. The first is to identify the invariants responsible for the recognition of three categories of continuous interactions: rubbing, scratching, and rolling. The aim is to develop a real-time sound synthesizer with intuitive controls that enables users to morph continuously between the different interactions (e.g., progressively transform a rubbing sound into a rolling one). The synthesis model is developed in the framework of the "action-object" paradigm, which states that sounds can be described as the result of an action (e.g., scratching) on an object (e.g., a wood plate). This paradigm naturally fits the well-known source-filter approach to sound synthesis, where the perceptually relevant information linked to the object is described in the "filter" part, and the action-related information is described in the "source" part. To derive our generic synthesis model, several approaches are treated: physical models, phenomenological approaches, and listening tests with recorded and synthesized sounds. The second part of the thesis deals with the concept of "sonic metaphors" by expanding the object notion to various sound textures.
The question raised is the following: given any sound texture, is it possible to modify its intrinsic properties such that it evokes a particular interaction, like rolling or rubbing for instance? To create these sonic metaphors, a cross-synthesis process is used in which the "source" part is based on the sound morphologies linked to the previously identified actions, and the "filter" part renders the sound texture properties. This work, together with the chosen paradigm, offers new perspectives for building a sound language.
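The "action-object" paradigm described in the abstract above maps naturally onto a small source-filter sketch: an excitation signal (the action) driving a bank of resonant filters (the object's modes). The code below is purely illustrative — the mode frequencies and decay values are invented, and `resonator_coeffs` is a hypothetical helper built on standard two-pole resonators driven through `scipy.signal.lfilter`.

```python
import numpy as np
from scipy.signal import lfilter

def resonator_coeffs(freq_hz, decay, sr):
    """Coefficients of a two-pole resonator for one object mode.
    (Hypothetical helper; mode values below are invented.)"""
    r = np.exp(-decay / sr)              # pole radius sets the decay time
    theta = 2.0 * np.pi * freq_hz / sr   # pole angle sets the pitch
    return [1.0], [1.0, -2.0 * r * np.cos(theta), r * r]

sr = 16000
rng = np.random.default_rng(1)

# "Action": a short windowed noise burst standing in for a scratch/impact.
excitation = np.zeros(sr)
excitation[:200] = rng.standard_normal(200) * np.hanning(200)

# "Object": a few damped modes; summing their responses gives the sound.
modes = [(440.0, 8.0), (1230.0, 12.0), (2750.0, 20.0)]  # (Hz, decay rate)
sound = sum(lfilter(*resonator_coeffs(f, d, sr), excitation)
            for f, d in modes)
```

Morphing between rubbing, scratching, and rolling then amounts to reshaping the excitation statistics while keeping the same object filters.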
Caracalla, Hugo. "Sound texture synthesis from summary statistics". Electronic Thesis or Diss., Sorbonne université, 2019. http://www.theses.fr/2019SORUS676.
Sound textures are a wide class of sounds that includes the sound of falling rain, the hubbub of a crowd, and the chirping of flocks of birds. All these sounds present an element of unpredictability that is not commonly sought after in sound synthesis, requiring the use of dedicated algorithms. However, the diverse audio properties of sound textures make designing an algorithm able to convincingly recreate varied textures a complex task. This thesis focuses on parametric sound texture synthesis. In this paradigm, a set of summary statistics is extracted from a target texture and iteratively imposed onto a white noise. If the set of statistics is appropriate, the white noise is modified until it resembles the target, sounding as if it had been recorded moments later. In a first part, we propose improvements to a perception-based parametric method. These improvements aim at improving its synthesis of sharp and salient events, mainly by altering and simplifying its imposition process. In a second part, we adapt a parametric visual texture synthesis method, based on statistics extracted by a convolutional neural network (CNN), to work on sound textures. We modify the computation of its statistics to fit the properties of sound signals, alter the architecture of the CNN to best fit the audio elements present in sound textures, and use a time-frequency representation taking both magnitude and phase into account.
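The parametric loop described above — extract summary statistics from a target, then iteratively impose them on white noise — can be illustrated with a deliberately minimal statistic: the global magnitude spectrum. Real methods impose far richer sets (envelope moments, cross-band correlations, CNN activations); this sketch, an assumption-laden toy, only shows the impose-and-repeat structure.

```python
import numpy as np

def impose_spectrum(noise, target_mag, n_iter=20):
    """Toy parametric loop: repeatedly replace the noise's spectral
    magnitude with the target statistic while keeping its phase.
    (Stand-in for the much richer statistic sets used in practice.)"""
    x = noise.copy()
    for _ in range(n_iter):
        spec = np.fft.rfft(x)
        phase = np.angle(spec)
        # impose the target magnitude, keep the evolving phase
        x = np.fft.irfft(target_mag * np.exp(1j * phase), n=len(x))
    return x

rng = np.random.default_rng(2)
target = rng.standard_normal(4096)
target_mag = np.abs(np.fft.rfft(target))
noise = rng.standard_normal(4096)
synth = impose_spectrum(noise, target_mag)
```

After the loop, the synthetic signal carries the target's magnitude statistic exactly while retaining noise-derived phase, which is what makes the result sound "recorded moments later" rather than copied.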
Serquera, Jaime. "Sound synthesis with cellular automata". Thesis, University of Plymouth, 2012. http://hdl.handle.net/10026.1/1189.
Texto completoPicard-Limpens, Cécile. "Expressive Sound Synthesis for Animation". Phd thesis, Université de Nice Sophia-Antipolis, 2009. http://tel.archives-ouvertes.fr/tel-00440417.
Texto completoPicard, Limpens Cécile. "Expressive sound synthesis for animation". Nice, 2009. http://www.theses.fr/2009NICE4075.
The main goal of this work is to propose tools for real-time, realistic, and expressive synthesis of the sounds produced by physical interactions between objects in a virtual scene. Such sound effects, for example collision sounds between solids or continuous interactions between surfaces, cannot be predefined and computed in pre-production. In this context, we propose two approaches: the first based on modeling the physical phenomena at the origin of sound emission, the second based on processing audio recordings. In the physical approach, the sound source is treated as the combination of an excitation and a resonator. First, we present an original technique that derives the interaction force between surfaces in the case of continuous contacts, such as rolling. This technique relies on analyzing the textures used for the graphical rendering of the surfaces in the virtual scene. Second, we propose a robust and flexible modal-analysis method to render the acoustic vibrations of the resonator. Besides handling a wide variety of geometries and offering multi-resolution modal parameters, the method solves the problem of coherence between physical simulation and sound synthesis, a problem frequently encountered in animation. In the empirical approach, we propose a granular technique that expresses sound synthesis as a coherent arrangement of sound particles, or grains. The method first preprocesses recordings to build a compact body of sound material. This material is then manipulated in real time, on the one hand for a complete resynthesis of the original recordings, and on the other hand for flexible use driven by data reported by the simulation engine and/or predefined procedures.
Finally, we address fracture sounds, given their frequent use in virtual environments, and in video games in particular. While the complexity of the phenomenon makes a purely physical model very costly, using recordings alone is also ill-suited to the great variety of sonic micro-events. The thesis therefore proposes a hybrid model and possible strategies for combining a physical approach with an empirical one. The resulting model aims to reproduce the sound event of a fracture, from its initiation to the creation of micro-debris.
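The granular technique described in the abstract above — preprocess recordings into grains, then rearrange them in real time — reduces, at its most naive, to windowed overlap-add of randomly drawn source snippets. This toy version skips the preprocessing/compaction step entirely; the grain and hop sizes are arbitrary choices, not values from the thesis.

```python
import numpy as np

def granular_resynth(source, out_len, grain=1024, hop=256, seed=0):
    """Toy granular synthesis: windowed grains drawn at random
    from the source are overlap-added into the output."""
    rng = np.random.default_rng(seed)
    win = np.hanning(grain)
    out = np.zeros(out_len + grain)
    for start in range(0, out_len, hop):
        src = int(rng.integers(0, len(source) - grain))
        out[start:start + grain] += source[src:src + grain] * win
    return out[:out_len]

rng = np.random.default_rng(3)
src = rng.standard_normal(8000)   # stands in for a recorded texture
out = granular_resynth(src, 4000)
```

A simulation engine would replace the random grain choice with selection driven by contact events, which is where the "coherent arrangement" of grains comes from.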
Books on the topic "Sound synthesis"
Beauchamp, James W. Analysis, synthesis, and perception of musical sounds: The sound of music. New York: Springer, 2010.
Sound synthesis and sampling. 2nd ed. Boston: Focal Press, 2004.
Sound synthesis and sampling. 3rd ed. Oxford: Focal, 2009.
Sound synthesis and sampling. Oxford; Boston: Focal Press, 1996.
Ziemer, Tim. Psychoacoustic Music Sound Field Synthesis. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-23033-3.
Hecker, Florian. Halluzination, Perspektive, Synthese. Edited by Vanessa Müller, Florian Hecker, and Kunsthalle Wien. Berlin: Sternberg Press, 2019.
Ahrens, Jens. Analytic Methods of Sound Field Synthesis. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-25743-8.
Sueur, Jérôme. Sound Analysis and Synthesis with R. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-77647-7.
Sound synthesis: Analog and digital techniques. Blue Ridge Summit, PA: TAB Books, 1990.
Ahrens, Jens. Analytic Methods of Sound Field Synthesis. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012.
Book chapters on the topic "Sound synthesis"
Uncini, Aurelio. "Sound Synthesis". In Springer Topics in Signal Processing, 565–608. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-14228-4_8.
Sporer, Thomas, Karlheinz Brandenburg, Sandra Brix, and Christoph Sladeczek. "Wave Field Synthesis". In Immersive Sound, 311–32. New York; London: Routledge, 2017. http://dx.doi.org/10.4324/9781315707525-11.
Riches, Martin. "Mechanical Speech Synthesis". In Sound Inventions, 351–75. London: Focal Press, 2021. http://dx.doi.org/10.4324/9781003003526-35.
Liu, Shiguang, and Dinesh Manocha. "Sound Rendering". In Sound Synthesis, Propagation, and Rendering, 45–52. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-79214-4_4.
Moffat, David, Rod Selfridge, and Joshua D. Reiss. "Sound Effect Synthesis". In Foundations in Sound Design for Interactive Media, 274–99. Sound Design Series, vol. 2. New York, NY: Routledge, 2019. http://dx.doi.org/10.4324/9781315106342-13.
Mazzola, Guerino, Yan Pang, William Heinze, Kyriaki Gkoudina, Gian Afrisando Pujakusuma, Jacob Grunklee, Zilu Chen, Tianxue Hu, and Yiqing Ma. "Standard Sound Synthesis". In Computational Music Science, 19–33. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-00982-3_4.
Avanzini, Federico. "Procedural Modeling of Interactive Sound Sources in Virtual Reality". In Sonic Interactions in Virtual Environments, 49–76. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04021-4_2.
Sueur, Jérôme. "Synthesis". In Sound Analysis and Synthesis with R, 555–609. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-77647-7_18.
Xie, Bosun. "Spatial sound reproduction by wave field synthesis". In Spatial Sound, 439–96. Boca Raton: CRC Press, 2022. http://dx.doi.org/10.1201/9781003081500-10.
Sueur, Jérôme. "What Is Sound?" In Sound Analysis and Synthesis with R, 7–36. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-77647-7_2.
Texto completoActas de conferencias sobre el tema "Sound synthesi"
Lloyd, D. Brandon, Nikunj Raghuvanshi, and Naga K. Govindaraju. "Sound synthesis for impact sounds in video games". In Symposium on Interactive 3D Graphics and Games. New York, New York, USA: ACM Press, 2011. http://dx.doi.org/10.1145/1944745.1944755.
Guatimosim, Júlio, José Henrique Padovani, and Carlos Guatimosim. "Concatenative Sound Synthesis as a Technomorphic Model in Computer-Aided Composition". In Simpósio Brasileiro de Computação Musical. Sociedade Brasileira de Computação - SBC, 2021. http://dx.doi.org/10.5753/sbcm.2021.19431.
Bellona, Jon, Lin Bai, Luke Dahl, and Amy LaViers. "Empirically Informed Sound Synthesis Application for Enhancing the Perception of Expressive Robotic Movement". In The 23rd International Conference on Auditory Display. Arlington, Virginia: The International Community for Auditory Display, 2017. http://dx.doi.org/10.21785/icad2017.049.
Kreutzer, Cornelia, Jacqueline Walker, and Michael O'Neill. "A parametric model for spectral sound synthesis of musical sounds". In 2008 International Conference on Audio, Language and Image Processing (ICALIP). IEEE, 2008. http://dx.doi.org/10.1109/icalip.2008.4590233.
Lykova, Marina P. "The content of speech therapy work on the development of language analysis and synthesis skills in preschool children". In Особый ребенок: Обучение, воспитание, развитие. Yaroslavl State Pedagogical University named after K. D. Ushinsky, 2021. http://dx.doi.org/10.20323/978-5-00089-474-3-2021-326-330.
Baird, Alice, Emilia Parada-Cabaleiro, Cameron Fraser, Simone Hantke, and Björn Schuller. "The Perceived Emotion of Isolated Synthetic Audio". In AM'18: Sound in Immersion and Emotion. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3243274.3243277.
Moore, Dylan, Rebecca Currano, and David Sirkin. "Sound Decisions: How Synthetic Motor Sounds Improve Autonomous Vehicle-Pedestrian Interactions". In AutomotiveUI '20: 12th International Conference on Automotive User Interfaces and Interactive Vehicular Applications. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3409120.3410667.
Kayahara, Takuro, and Hiroki Abe. "Synthesis of footstep sounds of crowd from single step sound based on cognitive property of footstep sounds". In 2011 IEEE International Symposium on VR Innovation (ISVRI). IEEE, 2011. http://dx.doi.org/10.1109/isvri.2011.5759644.
James, Doug. "Harmonic fluid sound synthesis". In ACM SIGGRAPH 2009 Computer Animation Festival. New York, New York, USA: ACM Press, 2009. http://dx.doi.org/10.1145/1596685.1596739.
"Letter-to-sound rules for Korean". In Proceedings of 2002 IEEE Workshop on Speech Synthesis. IEEE, 2002. http://dx.doi.org/10.1109/wss.2002.1224370.