A selection of scholarly literature on the topic "Sound synthesis"
Format a source in APA, MLA, Chicago, Harvard, and other citation styles
Browse lists of relevant articles, books, dissertations, conference papers, and other scholarly sources on the topic "Sound synthesis".
Next to every work in the list there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, and so on.
You can also download the full text of a publication as a .pdf file and read its abstract online, where these are available in the metadata.
Journal articles on the topic "Sound synthesis"
Kronland-Martinet, R., Ph. Guillemain, and S. Ystad. "Modelling of natural sounds by time–frequency and wavelet representations." Organised Sound 2, no. 3 (November 1997): 179–91. http://dx.doi.org/10.1017/s1355771898009030.
Novkovic, Dragan, Marko Peljevic, and Mateja Malinovic. "Synthesis and analysis of sounds developed from the Bose-Einstein condensate: Theory and experimental results." Muzikologija, no. 24 (2018): 95–109. http://dx.doi.org/10.2298/muz1824095n.
Miner, Nadine E., Timothy E. Goldsmith, and Thomas P. Caudell. "Perceptual Validation Experiments for Evaluating the Quality of Wavelet-Synthesized Sounds." Presence: Teleoperators and Virtual Environments 11, no. 5 (October 2002): 508–24. http://dx.doi.org/10.1162/105474602320935847.
Min, Dongki, Buhm Park, and Junhong Park. "Artificial Engine Sound Synthesis Method for Modification of the Acoustic Characteristics of Electric Vehicles." Shock and Vibration 2018 (2018): 1–8. http://dx.doi.org/10.1155/2018/5209207.
Miner, Nadine E., and Thomas P. Caudell. "A Wavelet Synthesis Technique for Creating Realistic Virtual Environment Sounds." Presence: Teleoperators and Virtual Environments 11, no. 5 (October 2002): 493–507. http://dx.doi.org/10.1162/105474602320935838.
Mandelis, James, and Phil Husbands. "Genophone: Evolving Sounds and Integral Performance Parameter Mappings." International Journal on Artificial Intelligence Tools 15, no. 04 (August 2006): 599–621. http://dx.doi.org/10.1142/s0218213006002837.
Serquera, Jaime, and Eduardo Reck Miranda. "Histogram Mapping Synthesis: A Cellular Automata-Based Technique for Flexible Sound Design." Computer Music Journal 38, no. 4 (December 2014): 38–52. http://dx.doi.org/10.1162/comj_a_00267.
Corbella, Maurizio, and Anna Katharina Windisch. "Sound Synthesis, Representation and Narrative Cinema in the Transition to Sound (1926-1935)." Cinémas 24, no. 1 (February 26, 2014): 59–81. http://dx.doi.org/10.7202/1023110ar.
Wright, Matthew, James Beauchamp, Kelly Fitz, Xavier Rodet, Axel Röbel, Xavier Serra, and Gregory Wakefield. "Analysis/synthesis comparison." Organised Sound 5, no. 3 (December 2000): 173–89. http://dx.doi.org/10.1017/s1355771800005070.
Yuan, J., X. Cao, D. Wang, J. Chen, and S. Wang. "Research on Bus Interior Sound Quality Based on Masking Effects." Fluctuation and Noise Letters 17, no. 04 (September 14, 2018): 1850037. http://dx.doi.org/10.1142/s0219477518500372.
Повний текст джерелаДисертації з теми "Sound synthesi"
Papetti, Stefano. "Sound modeling issues in interactive sonification - From basic contact events to synthesis and manipulation tools." Doctoral thesis, Università degli Studi di Verona, 2010. http://hdl.handle.net/11562/340961.
The work presented in this thesis ranges over a variety of research topics, spanning from human-computer interaction to physical modeling. What unites these broad areas of interest is the idea of using physically based computer simulations of acoustic phenomena to provide human-computer interfaces with sound feedback that is consistent with the user's interaction. In this regard, recent years have seen the emergence of several new disciplines that go under the names of, to cite a few, auditory display, sonification and sonic interaction design. This thesis deals with the design and implementation of efficient sound algorithms for interactive sonification. To this end, it considers the physical modeling of everyday sounds, that is, sounds not belonging to the families of speech and musical sounds.
Fontana, Federico. "Physics-based models for the acoustic representation of space in virtual environments." Doctoral thesis, Università degli Studi di Verona, 2003. http://hdl.handle.net/11562/342240.
This work deals with the simulation of virtual acoustic spaces using physics-based models. The acoustic space is what we perceive about space through our auditory system. The physical nature of the models means that they present spatial attributes (such as shape and size) as a salient feature of their structure, so that space can be directly represented and manipulated through them.
Liao, Wei-Hsiang. "Modelling and transformation of sound textures and environmental sounds." Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066725/document.
The processing of environmental sounds has become an important topic in various areas. Environmental sounds largely consist of a class of sounds called sound textures, which are usually non-sinusoidal, noisy and stochastic. Several studies have indicated that humans recognize sound textures through statistics characterizing the envelopes of auditory critical bands. Existing synthesis algorithms can impose some statistical properties to a certain extent, but most of them are computationally intensive. We propose a new analysis-synthesis framework that combines a statistical description consisting of perceptually important statistics with an efficient mechanism for adapting statistics in the time-frequency domain. The quality of the resynthesized sound is at least as good as the state of the art, while the computation is more efficient. The statistical description is based on the STFT; if certain conditions are met, it can also be adapted to other filter-bank-based time-frequency representations (TFRs). The adaptation of statistics relies on the connection between statistics on a TFR and the spectra of the time-frequency coefficients. It is also possible to adapt only part of the cross-correlation functions, which allows the synthesis process to focus on important statistics and ignore irrelevant ones, providing extra flexibility. The proposed algorithm opens several perspectives: it could be used to generate novel sound textures from artificially created statistical descriptions, and it could serve as a basis for transformations such as stretching or morphing. One could also expect to use the model to explore semantic control of sound textures.
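For readers who want a concrete picture of the kind of "statistical description" this abstract refers to, the sketch below computes a few band-envelope statistics (means, standard deviations, skewness and cross-band correlations) from an STFT using NumPy. It is not the framework from the thesis: the band layout, window sizes and the particular statistics retained are illustrative choices only.

```python
# A minimal sketch, assuming NumPy only: band-envelope moments and cross-band
# correlations computed from a magnitude STFT. Everything here is illustrative.
import numpy as np

def stft_mag(x, win=1024, hop=256):
    """Magnitude STFT with a Hann window: returns an array of shape (frames, bins)."""
    w = np.hanning(win)
    frames = [x[i:i + win] * w for i in range(0, len(x) - win, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1))

def texture_statistics(x, n_bands=16):
    """Summary statistics of coarse band envelopes: mean, std, skewness, correlations."""
    mag = stft_mag(x)                                   # (frames, bins)
    edges = np.linspace(0, mag.shape[1], n_bands + 1, dtype=int)
    env = np.stack([mag[:, a:b].sum(axis=1)             # coarse band envelopes
                    for a, b in zip(edges[:-1], edges[1:])], axis=1)
    mu, sd = env.mean(axis=0), env.std(axis=0) + 1e-12
    z = (env - mu) / sd
    return {
        "mean": mu,
        "std": sd,
        "skew": (z ** 3).mean(axis=0),
        "band_corr": np.corrcoef(env.T),                # cross-band correlation matrix
    }

if __name__ == "__main__":
    sr = 16000
    noise = np.random.randn(sr * 2)                     # stand-in for a recorded texture
    stats = texture_statistics(noise)
    print(stats["band_corr"].shape)                     # (16, 16)
```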
Chapman, David P. "Playing with sounds : a spatial solution for computer sound synthesis." Thesis, University of Bath, 1996. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.307047.
Lee, Chung. "Sound texture synthesis using an enhanced overlap-add approach." View abstract or full-text, 2008. http://library.ust.hk/cgi/db/thesis.pl?CSED%202008%20LEE.
Повний текст джерелаConan, Simon. "Contrôle intuitif de la synthèse sonore d’interactions solidiennes : vers les métaphores sonores." Thesis, Ecole centrale de Marseille, 2014. http://www.theses.fr/2014ECDM0012/document.
Perceptual control of sound synthesis processes, that is, control based on evocations, is a current challenge. Sound synthesis models generally involve many low-level control parameters whose manipulation requires a certain expertise with respect to the sound generation process. Intuitive control of sound generation is therefore interesting for users, and especially non-experts, because they can create and control sounds from evocations. Such control is not immediate and relies on strong assumptions about our perception, in particular the existence of acoustic morphologies, so-called "invariants", responsible for the recognition of specific sound events. This thesis tackles the problem by focusing on invariants linked to specific sound-generating actions. It follows two main parts. The first is to identify the invariants responsible for the recognition of three categories of continuous interactions: rubbing, scratching and rolling. The aim is to develop a real-time sound synthesizer with intuitive controls that enables users to morph continuously between the different interactions (e.g. progressively transform a rubbing sound into a rolling one). The synthesis model is developed in the framework of the "action-object" paradigm, which states that sounds can be described as the result of an action (e.g. scratching) on an object (e.g. a wooden plate). This paradigm naturally fits the well-known source-filter approach to sound synthesis, where the perceptually relevant information linked to the object is described in the "filter" part and the action-related information in the "source" part. To derive our generic synthesis model, several approaches are used: physical models, phenomenological approaches, and listening tests with recorded and synthesized sounds. The second part of the thesis deals with the concept of "sonic metaphors" by expanding the object notion to various sound textures. The question raised is the following: given any sound texture, is it possible to modify its intrinsic properties so that it evokes a particular interaction, such as rolling or rubbing? To create these sonic metaphors, a cross-synthesis process is used in which the "source" part is based on the sound morphologies linked to the previously identified actions, and the "filter" part renders the sound texture properties. This work, together with the chosen paradigm, offers new perspectives for building a sound language.
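The "action-object" split described above maps naturally onto a source-filter structure. The following minimal sketch (not the thesis synthesizer) illustrates the idea: a stochastic excitation whose impulse density loosely stands in for a scratching-to-rubbing control, filtered by a small bank of two-pole resonators standing in for the object's modes. All frequencies, decay times, gains and densities are invented for the example; NumPy and SciPy are assumed.

```python
# A small source-filter sketch of the "action-object" idea: "action" = excitation,
# "object" = resonant filter bank. Parameter values are illustrative, not measured.
import numpy as np
from scipy.signal import lfilter

def action_excitation(n, density, rng):
    """Sparse impulses (scratch-like) grading into dense noise (rub-like) as density grows."""
    return (rng.random(n) < density).astype(float) * rng.standard_normal(n)

def object_resonator(excitation, sr, modes):
    """Sum of two-pole resonators; modes = [(freq_hz, decay_s, gain), ...]."""
    out = np.zeros_like(excitation)
    for f, decay, gain in modes:
        r = np.exp(-1.0 / (decay * sr))                  # pole radius from decay time
        theta = 2 * np.pi * f / sr
        a = [1.0, -2 * r * np.cos(theta), r * r]         # resonator denominator
        out += gain * lfilter([1.0], a, excitation)
    return out

if __name__ == "__main__":
    sr, rng = 44100, np.random.default_rng(0)
    exc = action_excitation(sr, density=0.01, rng=rng)   # sparse: scratch-like
    wood_like = [(280, 0.3, 1.0), (730, 0.2, 0.5), (1560, 0.1, 0.3)]
    y = object_resonator(exc, sr, wood_like)
    y /= np.max(np.abs(y)) + 1e-12                       # normalize for listening
```

In this toy setting, morphing between interactions amounts to interpolating the excitation parameters while the resonator (the "object") stays fixed, which is the intuition behind the paradigm rather than a reproduction of the cited synthesizer.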
Caracalla, Hugo. "Sound texture synthesis from summary statistics." Electronic Thesis or Diss., Sorbonne université, 2019. http://www.theses.fr/2019SORUS676.
Sound textures are a wide class of sounds that includes the sound of rain falling, the hubbub of a crowd and the chirping of flocks of birds. All these sounds present an element of unpredictability that is not commonly sought after in sound synthesis, requiring the use of dedicated algorithms. However, the diverse audio properties of sound textures make designing an algorithm able to convincingly recreate varied textures a complex task. This thesis focuses on parametric sound texture synthesis. In this paradigm, a set of summary statistics is extracted from a target texture and iteratively imposed onto white noise. If the set of statistics is appropriate, the white noise is modified until it resembles the target, sounding as if it had been recorded moments later. In a first part, we propose improvements to a perceptually based parametric method, aimed at improving its synthesis of sharp and salient events, mainly by altering and simplifying its imposition process. In a second part, we adapt a parametric visual texture synthesis method, based on statistics extracted by a convolutional neural network (CNN), to work on sound textures. We modify the computation of its statistics to fit the properties of sound signals, alter the architecture of the CNN to best fit the audio elements present in sound textures, and use a time-frequency representation that takes both magnitude and phase into account.
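As a rough illustration of the parametric loop described in this abstract, the toy example below iteratively reshapes white noise so that simple per-bin STFT magnitude statistics (mean and standard deviation) match those of a target texture. It is far cruder than the statistics sets used in the cited work (no envelope correlations, no CNN features), and the window sizes, iteration count and phase handling are arbitrary choices made for brevity.

```python
# A toy "impose statistics on white noise" loop, assuming NumPy only.
# Real parametric texture synthesis imposes a much richer statistics set.
import numpy as np

WIN, HOP = 1024, 256

def stft(x):
    w = np.hanning(WIN)
    frames = np.array([x[i:i + WIN] * w for i in range(0, len(x) - WIN, HOP)])
    return np.fft.rfft(frames, axis=1)

def istft(X, length):
    w = np.hanning(WIN)
    x, norm = np.zeros(length), np.zeros(length)
    for k, frame in enumerate(np.fft.irfft(X, n=WIN, axis=1)):
        i = k * HOP
        x[i:i + WIN] += frame * w                       # weighted overlap-add
        norm[i:i + WIN] += w * w
    return x / np.maximum(norm, 1e-8)

def impose_bin_statistics(target, n_iter=20, seed=0):
    """Shape white noise so each STFT bin matches the target's bin mean/std."""
    rng = np.random.default_rng(seed)
    T = np.abs(stft(target))
    t_mu, t_sd = T.mean(axis=0), T.std(axis=0) + 1e-12
    y = rng.standard_normal(len(target))
    for _ in range(n_iter):
        Y = stft(y)
        mag, phase = np.abs(Y), np.angle(Y)
        mag = (mag - mag.mean(axis=0)) / (mag.std(axis=0) + 1e-12) * t_sd + t_mu
        y = istft(np.maximum(mag, 0.0) * np.exp(1j * phase), len(y))
    return y
```

The loop alternates between measuring the noise's statistics and nudging them toward the target's, which is the general shape of the imposition process the abstract refers to.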
Serquera, Jaime. "Sound synthesis with cellular automata." Thesis, University of Plymouth, 2012. http://hdl.handle.net/10026.1/1189.
Picard-Limpens, Cécile. "Expressive Sound Synthesis for Animation." PhD thesis, Université de Nice Sophia-Antipolis, 2009. http://tel.archives-ouvertes.fr/tel-00440417.
Повний текст джерелаPicard, Limpens Cécile. "Expressive sound synthesis for animation." Nice, 2009. http://www.theses.fr/2009NICE4075.
Повний текст джерелаL'objectif principal de ce travail est de proposer des outils pour une synthèse en temps-réel, réaliste et expressive, des sons résultant d'interactions physiques entre objets dans une scène virtuelle. De fait, ces effets sonores, à l'exemple des bruits de collisions entre solides ou encore d'interactions continues entre surfaces, ne peuvent être prédéfinis et calculés en phase de pré-production. Dans ce cadre, nous proposons deux approches, la première basée sur une modélisation des phénomènes physiques à l'origine de l'émission sonore, la seconde basée sur le traitement d'enregistrements audio. Selon une approche physique, la source sonore est traitée comme la combinaison d'une excitation et d'un résonateur. Dans un premier temps, nous présentons une technique originale traduisant la force d'interaction entre surfaces dans le cas de contacts continus, tel que le roulement. Cette technique repose sur l'analyse des textures utilisées pour le rendu graphique des surfaces de la scène virtuelle. Dans un second temps, nous proposons une méthode d'analyse modale robuste et flexible traduisant les vibrations sonores du résonateur. Outre la possibilité de traiter une large variété de géométries et d'offrir une multi-résolution des paramètres modaux, la méthode permet de résoudre le problème de cohérence entre simulation physique et synthèse sonore, problème fréquemment rencontré en animation. Selon une approche empirique, nous proposons une technique de type granulaire, exprimant la synthèse sonore par un agencement cohérent de particules ou grains sonores. La méthode consiste tout d'abord en un prétraitement d'enregistrements destiné à constituer un matériel sonore sous forme compacte. Ce matériel est ensuite manipulé en temps réel pour, d'une part, une resynthèse complète des enregistrements originaux, et d'autre part, une utilisation flexible en fonction des données reportées par le moteur de simulation et/ou de procédures prédéfinies. Enfin, l'intérêt est porté sur les sons de fracture, au vu de leur utilisation fréquente dans les environnements virtuels, et en particulier les jeux vidéos. Si la complexité du phénomène rend l'emploi d'un modèle purement physique très coûteux, l'utilisation d'enregistrements est également inadaptée pour la grande variété de micro-événements sonores. Le travail de thèse propose ainsi un modèle hybride et des stratégies possibles afin de combiner une approche physique et une approche empirique. Le modèle ainsi conçu vise à reproduire l'événement sonore de la fracture, de son initiation à la création de micro-débris
Books on the topic "Sound synthesis"
Beauchamp, James W. Analysis, synthesis, and perception of musical sounds: The sound of music. New York: Springer, 2010.
Sound synthesis and sampling. 2nd ed. Boston: Focal Press, 2004.
Sound synthesis and sampling. 3rd ed. Oxford: Focal, 2009.
Sound synthesis and sampling. Oxford; Boston: Focal Press, 1996.
Ziemer, Tim. Psychoacoustic Music Sound Field Synthesis. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-23033-3.
Hecker, Florian. Halluzination, Perspektive, Synthese. Edited by Vanessa Müller, Florian Hecker, and Kunsthalle Wien. Berlin: Sternberg Press, 2019.
Ahrens, Jens. Analytic Methods of Sound Field Synthesis. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-25743-8.
Sueur, Jérôme. Sound Analysis and Synthesis with R. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-77647-7.
Sound synthesis: Analog and digital techniques. Blue Ridge Summit, PA: TAB Books, 1990.
Ahrens, Jens. Analytic Methods of Sound Field Synthesis. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012.
Знайти повний текст джерелаЧастини книг з теми "Sound synthesi"
Uncini, Aurelio. "Sound Synthesis." In Springer Topics in Signal Processing, 565–608. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-14228-4_8.
Sporer, Thomas, Karlheinz Brandenburg, Sandra Brix, and Christoph Sladeczek. "Wave Field Synthesis." In Immersive Sound, 311–32. New York; London: Routledge, 2017. http://dx.doi.org/10.4324/9781315707525-11.
Riches, Martin. "Mechanical Speech Synthesis." In Sound Inventions, 351–75. London: Focal Press, 2021. http://dx.doi.org/10.4324/9781003003526-35.
Liu, Shiguang, and Dinesh Manocha. "Sound Rendering." In Sound Synthesis, Propagation, and Rendering, 45–52. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-79214-4_4.
Moffat, David, Rod Selfridge, and Joshua D. Reiss. "Sound Effect Synthesis." In Foundations in Sound Design for Interactive Media, 274–99. Sound Design Series, vol. 2. New York, NY: Routledge, 2019. http://dx.doi.org/10.4324/9781315106342-13.
Mazzola, Guerino, Yan Pang, William Heinze, Kyriaki Gkoudina, Gian Afrisando Pujakusuma, Jacob Grunklee, Zilu Chen, Tianxue Hu, and Yiqing Ma. "Standard Sound Synthesis." In Computational Music Science, 19–33. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-00982-3_4.
Avanzini, Federico. "Procedural Modeling of Interactive Sound Sources in Virtual Reality." In Sonic Interactions in Virtual Environments, 49–76. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04021-4_2.
Sueur, Jérôme. "Synthesis." In Sound Analysis and Synthesis with R, 555–609. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-77647-7_18.
Xie, Bosun. "Spatial sound reproduction by wave field synthesis." In Spatial Sound, 439–96. Boca Raton: CRC Press, 2022. http://dx.doi.org/10.1201/9781003081500-10.
Sueur, Jérôme. "What Is Sound?" In Sound Analysis and Synthesis with R, 7–36. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-77647-7_2.
Повний текст джерелаТези доповідей конференцій з теми "Sound synthesi"
Lloyd, D. Brandon, Nikunj Raghuvanshi, and Naga K. Govindaraju. "Sound synthesis for impact sounds in video games." In Symposium on Interactive 3D Graphics and Games. New York, New York, USA: ACM Press, 2011. http://dx.doi.org/10.1145/1944745.1944755.
Guatimosim, Júlio, José Henrique Padovani, and Carlos Guatimosim. "Concatenative Sound Synthesis as a Technomorphic Model in Computer-Aided Composition." In Simpósio Brasileiro de Computação Musical. Sociedade Brasileira de Computação - SBC, 2021. http://dx.doi.org/10.5753/sbcm.2021.19431.
Bellona, Jon, Lin Bai, Luke Dahl, and Amy LaViers. "Empirically Informed Sound Synthesis Application for Enhancing the Perception of Expressive Robotic Movement." In The 23rd International Conference on Auditory Display. Arlington, Virginia: The International Community for Auditory Display, 2017. http://dx.doi.org/10.21785/icad2017.049.
Kreutzer, Cornelia, Jacqueline Walker, and Michael O'Neill. "A parametric model for spectral sound synthesis of musical sounds." In 2008 International Conference on Audio, Language and Image Processing (ICALIP). IEEE, 2008. http://dx.doi.org/10.1109/icalip.2008.4590233.
Lykova, Marina P. "The content of speech therapy work on the development of language analysis and synthesis skills in preschool children." In Особый ребенок: Обучение, воспитание, развитие. Yaroslavl State Pedagogical University named after K. D. Ushinsky, 2021. http://dx.doi.org/10.20323/978-5-00089-474-3-2021-326-330.
Baird, Alice, Emilia Parada-Cabaleiro, Cameron Fraser, Simone Hantke, and Björn Schuller. "The Perceived Emotion of Isolated Synthetic Audio." In AM'18: Sound in Immersion and Emotion. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3243274.3243277.
Moore, Dylan, Rebecca Currano, and David Sirkin. "Sound Decisions: How Synthetic Motor Sounds Improve Autonomous Vehicle-Pedestrian Interactions." In AutomotiveUI '20: 12th International Conference on Automotive User Interfaces and Interactive Vehicular Applications. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3409120.3410667.
Kayahara, Takuro, and Hiroki Abe. "Synthesis of footstep sounds of crowd from single step sound based on cognitive property of footstep sounds." In 2011 IEEE International Symposium on VR Innovation (ISVRI). IEEE, 2011. http://dx.doi.org/10.1109/isvri.2011.5759644.
James, Doug. "Harmonic fluid sound synthesis." In ACM SIGGRAPH 2009 Computer Animation Festival. New York, New York, USA: ACM Press, 2009. http://dx.doi.org/10.1145/1596685.1596739.
"Letter-to-sound rules for Korean." In Proceedings of 2002 IEEE Workshop on Speech Synthesis. IEEE, 2002. http://dx.doi.org/10.1109/wss.2002.1224370.