Scientific literature on the topic "Singing voice transformation"
Consult the topical lists of journal articles, books, theses, conference proceedings, and other academic sources on the topic "Singing voice transformation".
Journal articles on the topic "Singing voice transformation"
Shen, Hung-Che. "Building a Japanese MIDI-to-Singing song synthesis using an English male voice." MATEC Web of Conferences 201 (2018): 02006. http://dx.doi.org/10.1051/matecconf/201820102006.
Cooksey, John M., and Graham F. Welch. "Adolescence, Singing Development and National Curricula Design." British Journal of Music Education 15, no. 1 (March 1998): 99–119. http://dx.doi.org/10.1017/s026505170000379x.
Santacruz, José, Lorenzo Tardón, Isabel Barbancho, and Ana Barbancho. "Spectral Envelope Transformation in Singing Voice for Advanced Pitch Shifting." Applied Sciences 6, no. 11 (19 November 2016): 368. http://dx.doi.org/10.3390/app6110368.
Bous, Frederik, and Axel Roebel. "A Bottleneck Auto-Encoder for F0 Transformations on Speech and Singing Voice." Information 13, no. 3 (23 February 2022): 102. http://dx.doi.org/10.3390/info13030102.
Gigolayeva-Yurchenko, V. "The role and specificity of vocal-performing universalism (on the example of the activity at MC 'The Kharkiv Regional Philharmonic Society')." Problems of Interaction Between Arts, Pedagogy and the Theory and Practice of Education 52, no. 52 (3 October 2019): 188–200. http://dx.doi.org/10.34064/khnum1-52.13.
Limitovskaya, A. V., and I. V. Alekseeva. "Sound Image of a Cuckoo in Instrumental Music by Composers of the XVIIth–XIXth Centuries." Arts education and science 1, no. 4 (2020): 130–38. http://dx.doi.org/10.36871/hon.202004017.
Lajic-Mihajlovic, Danka, and Smiljana Djordjevic-Belic. "Singing with gusle accompaniment and the music industry: The first gramophone records of gusle players' performances (1908–1931/2)." Muzikologija, no. 20 (2016): 199–222. http://dx.doi.org/10.2298/muz1620199l.
Jiang, Qin. "The image of Oksana in the opera by N. Rimsky-Korsakov 'Christmas Eve': a composer's plan and a performing embodiment." Aspects of Historical Musicology 18, no. 18 (28 December 2019): 57–72. http://dx.doi.org/10.34064/khnum2-18.04.
Mykhailets, V. V. "The problems of vocal training in the conditions of modern choral art." Aspects of Historical Musicology 18, no. 18 (28 December 2019): 141–54. http://dx.doi.org/10.34064/khnum2-18.08.
Campesato, Claudio. "The Natural Power of Music." Religions 14, no. 10 (26 September 2023): 1237. http://dx.doi.org/10.3390/rel14101237.
Theses on the topic "Singing voice transformation"
Ardaillon, Luc. "Synthesis and expressive transformation of singing voice." Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066511/document.
This thesis conducted research on the synthesis and expressive transformation of the singing voice, towards a high-quality synthesizer that can generate a natural and expressive singing voice automatically from a given score and lyrics. Three main research directions can be identified: methods for modelling the voice signal to automatically generate an intelligible and natural-sounding voice from the given lyrics; control of the synthesis to render an adequate interpretation of a given score while conveying expressivity related to a specific singing style; and transformation of the voice signal to improve its naturalness and add expressivity by varying the timbre adequately according to pitch, intensity, and voice quality. This thesis contributes to each of those three directions. First, a fully functional synthesis system based on diphone concatenation has been developed; its modular architecture allows different signal-modelling approaches to be integrated and compared. Then, the question of control is addressed, encompassing the automatic generation of f0, intensity, and phoneme durations. The modelling of specific singing styles has also been addressed by learning the expressive variations of the modelled control parameters from commercial recordings of famous French singers. Finally, expressive timbre transformations have been investigated for future integration into the synthesizer. This mainly concerns methods for intensity transformation, considering the effects of both the glottal source and the vocal tract, and the modelling of vocal roughness.
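The control stage described in this abstract, the automatic generation of an f0 curve from the notes of a score, can be illustrated with a toy sketch. This is not Ardaillon's actual control model: the note representation, the moving-average transition smoothing, and the fixed vibrato parameters below are illustrative assumptions only.

```python
import numpy as np

def f0_curve(notes, fs=100, transition=0.08, vib_rate=5.5, vib_depth_cents=30):
    """Toy f0 contour for a note sequence: smoothed pitch steps plus vibrato.

    notes: list of (midi_pitch, duration_in_seconds) pairs.
    Returns f0 in Hz, sampled at `fs` frames per second.
    """
    # Piecewise-constant target pitch, one value per control frame
    target = np.concatenate([
        np.full(int(round(dur * fs)), pitch) for pitch, dur in notes
    ])
    # Smooth note-to-note transitions with a short moving average
    win = max(1, int(transition * fs))
    smooth = np.convolve(target, np.ones(win) / win, mode="same")
    # Sinusoidal vibrato, expressed in cents around the smoothed pitch
    t = np.arange(len(smooth)) / fs
    cents = vib_depth_cents * np.sin(2 * np.pi * vib_rate * t)
    midi = smooth + cents / 100.0
    return 440.0 * 2.0 ** ((midi - 69) / 12)  # MIDI pitch -> Hz

# C4, D4, E4: a two-second phrase at 100 control frames per second
f0 = f0_curve([(60, 0.5), (62, 0.5), (64, 1.0)])
```

A real control model would also generate preparation and overshoot around note attacks and style-dependent vibrato onset, which is what the thesis learns from recordings.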
Loscos, Àlex. "Spectral processing of the singing voice." Doctoral thesis, Universitat Pompeu Fabra, 2007. http://hdl.handle.net/10803/7542.
This dissertation is centered on the digital processing of the singing voice, more specifically on the analysis, transformation, and synthesis of this type of voice in the spectral domain, with special emphasis on techniques relevant to music applications.
The thesis presents new formulations and procedures for both describing and transforming those attributes of the singing voice that can be regarded as voice-specific. It includes, among others, algorithms for rough and growl analysis and transformation, breathiness estimation and emulation, pitch detection and modification, nasality identification, voice-to-melody conversion, voice beat-onset detection, singing voice morphing, and voice-to-instrument transformation; some of these are exemplified with concrete applications.
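Of the algorithms listed in this abstract, pitch detection is the most classical. As a rough illustration only: Loscos works in the spectral domain, whereas the sketch below uses a generic time-domain autocorrelation method that is not the technique from the thesis.

```python
import numpy as np

def estimate_f0(x, sr, fmin=60.0, fmax=1000.0):
    """Estimate the fundamental frequency of a voiced frame by picking the
    strongest autocorrelation peak inside the admissible period range."""
    x = x - np.mean(x)
    # Autocorrelation for non-negative lags
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    lo = int(sr / fmax)                      # shortest admissible period
    hi = min(int(sr / fmin), len(ac) - 1)    # longest admissible period
    period = lo + np.argmax(ac[lo:hi])
    return sr / period

sr = 16000
t = np.arange(1024) / sr
f0 = estimate_f0(np.sin(2 * np.pi * 220.0 * t), sr)  # roughly 220 Hz
```

Period resolution is limited to integer sample lags here; practical detectors refine the peak by interpolation.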
Summers, Susan G. "Portraits of Vocal Psychotherapists: Singing as a Healing Influence for Change and Transformation." Antioch University / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=antioch1401647279.
Bous, Frederik. "A neural voice transformation framework for modification of pitch and intensity." Electronic thesis, Sorbonne Université, 2023. http://www.theses.fr/2023SORUS382.
The human voice has been a great source of fascination and an object of research for over 100 years. During that time numerous technologies have sprouted around the voice, such as the vocoder, which provides a parametric representation of the voice commonly used for voice transformation. From this tradition, the limitations of purely signal-processing-based approaches are evident: to create meaningful transformations, the codependencies between different voice properties have to be understood well and modelled precisely. Modelling these correlations with heuristics obtained from empirical studies is not sufficient to create natural results; information about the voice must be extracted systematically and used automatically during the transformation process. Recent advances in computer hardware permit this systematic analysis of data by means of machine learning. This thesis therefore uses machine learning to create a neural voice transformation framework, which works in two stages. First, a neural vocoder maps between a raw-audio and a mel-spectrogram representation of voice signals. Second, an auto-encoder with an information bottleneck disentangles selected voice properties from the remaining information, allowing one voice property to be changed while the remaining properties are adjusted automatically. In the first part of the thesis, we discuss different approaches to neural vocoding and argue why the mel-spectrogram is better suited for neural voice transformation than conventional parametric vocoder spaces. In the second part we discuss the information-bottleneck auto-encoder: it creates a latent code that is independent of its conditional input, so the synthesizer can perform a transformation by combining the original latent code with a modified parameter curve.
We transform the voice using two control parameters: the fundamental frequency and the voice level. Transformation of the fundamental frequency is an objective with a long history; using it allows us to compare our approach to existing techniques and to study how the auto-encoder models the dependencies on other properties in a well-known setting. For the voice level, hardly any annotations exist. We therefore first provide a new estimation technique for voice level in large voice databases, and subsequently use the resulting annotations to train a bottleneck auto-encoder that allows the voice level to be changed.
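The mel-spectrogram representation that the neural vocoder maps to and from can be computed as in the sketch below, using a short-time Fourier transform followed by a triangular mel filterbank. The sampling rate, FFT size, hop, and 80-band count are common defaults for neural vocoders, not necessarily the settings used in the thesis.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_spectrogram(x, sr=16000, n_fft=1024, hop=256, n_mels=80):
    """Magnitude mel-spectrogram of a mono signal x, shape (n_mels, n_frames)."""
    # Short-time Fourier transform with a Hann window
    window = np.hanning(n_fft)
    frames = [
        np.abs(np.fft.rfft(window * x[i:i + n_fft]))
        for i in range(0, len(x) - n_fft + 1, hop)
    ]
    spec = np.array(frames).T  # (n_fft // 2 + 1, n_frames)

    # Triangular filters, equally spaced on the mel scale
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        lo, c, hi = bins[m - 1], bins[m], bins[m + 1]
        for k in range(lo, c):
            fbank[m - 1, k] = (k - lo) / max(c - lo, 1)   # rising edge
        for k in range(c, hi):
            fbank[m - 1, k] = (hi - k) / max(hi - c, 1)   # falling edge
    return fbank @ spec

# One second of a 440 Hz sine: the energy concentrates in one mel band
sr = 16000
t = np.arange(sr) / sr
mel = mel_spectrogram(np.sin(2 * np.pi * 440.0 * t), sr=sr)
```

The mel scale allocates more bands to low frequencies, which is one reason it is a convenient intermediate representation for voice: pitch and formant structure stay localized.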
Bonada, Jordi. "Voice Processing and synthesis by performance sampling and spectral models." Doctoral thesis, Universitat Pompeu Fabra, 2009. http://hdl.handle.net/10803/7555.
The singing voice is one of the most challenging musical instruments to model and imitate. Over several decades, much research has been carried out to understand the mechanisms involved in singing voice production. In addition, from the very beginning of sound synthesis, singing has been one of the main targets for imitation, and a large number of synthesizers have been created with that aim.
The final goal of this thesis is to build a singing voice synthesizer capable of reproducing the voice of a given singer, both in expression and in timbre, that sounds natural and realistic and whose inputs are just the score and the lyrics of a song. This is a very ambitious goal, and in this dissertation we discuss the key aspects of our proposed approach and identify the open issues that still need to be tackled.
Thibault, François. "High-level control of singing voice timbre transformations." Thesis, McGill University, 2004. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=81514.
The transformation methods use a harmonic-plus-noise representation from which voice timbre descriptors are derived. This higher-level representation, closer to our perception of voice timbre, offers more intuitive control over timbre transformations. The topics of parametric voice modelling and timbre-descriptor computation are introduced first, followed by a study of the acoustic effects of variations in voice breathiness. A timbre transformation system operating specifically on singing voice quality is then introduced, with accompanying software implementations, including an example digital audio effect for controlling and modifying the breathiness of normal voices.
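The harmonic-plus-noise idea in this abstract can be sketched minimally: assuming a known, constant f0, the harmonic part is a least-squares fit of sinusoids at integer multiples of f0, and the residual is the noise part. Real analysis, including Thibault's, is frame-based with estimated time-varying parameters; the harmonic-to-noise ratio at the end is just one simple breathiness-style descriptor for illustration.

```python
import numpy as np

def harmonic_plus_noise(x, f0, sr, n_harm=10):
    """Split x into a harmonic part (least-squares fit of sinusoids at
    multiples of f0) and a noise residual. Assumes constant, known f0."""
    t = np.arange(len(x)) / sr
    # Design matrix: one cosine and one sine column per harmonic
    cols = []
    for h in range(1, n_harm + 1):
        cols.append(np.cos(2 * np.pi * h * f0 * t))
        cols.append(np.sin(2 * np.pi * h * f0 * t))
    A = np.stack(cols, axis=1)
    coeffs, *_ = np.linalg.lstsq(A, x, rcond=None)
    harmonic = A @ coeffs
    return harmonic, x - harmonic

# Synthetic "voice": two harmonics of 200 Hz plus breath-like noise
sr = 16000
t = np.arange(sr // 4) / sr
rng = np.random.default_rng(0)
voiced = np.sin(2 * np.pi * 200.0 * t) + 0.4 * np.sin(2 * np.pi * 400.0 * t)
noise = 0.05 * rng.standard_normal(len(t))
harm, resid = harmonic_plus_noise(voiced + noise, f0=200.0, sr=sr)

# A simple breathiness-style descriptor: harmonic-to-noise ratio in dB
hnr_db = 10.0 * np.log10(np.sum(harm ** 2) / np.sum(resid ** 2))
```

Lowering `hnr_db` by rescaling the residual before resynthesis is the kind of intuitive, descriptor-level control the representation makes possible.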
Books on the topic "Singing voice transformation"
Park, Chan E., Adnan Hossain, William Grange, Judith Fletcher, Domenico Pietropaolo, Hanne M. de Bruin, Josh Stenberg et al. Korean Pansori as Voice Theatre. Bloomsbury Publishing Plc, 2023. http://dx.doi.org/10.5040/9781350174917.
Book chapters on the topic "Singing voice transformation"
Meizel, Katherine. "Voice Control." In Multivocality, 159–76. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780190621469.003.0008.
Gaines, Malik. "The Cockettes, Sylvester, and Performance as Life." In Black Performance on the Outskirts of the Left. NYU Press, 2017. http://dx.doi.org/10.18574/nyu/9781479837038.003.0005.
Weber, Amanda. "Community Singing as Counterculture in a Women's Prison." In The Oxford Handbook of Community Singing, 625–46. Oxford University Press, 2024. http://dx.doi.org/10.1093/oxfordhb/9780197612460.013.32.
Weiss, Piero. "Opera Moves to Venice and Goes Public." In Opera, 34–38. Oxford University Press, New York, NY, 2002. http://dx.doi.org/10.1093/oso/9780195116373.003.0007.
Case, George. "Youngstown." In Takin' Care of Business, 132–53. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780197548813.003.0009.
Conference papers on the topic "Singing voice transformation"
Cabral, João P., and Alexsandro R. Meireles. "Transformation of voice quality in singing using glottal source features." In SMM19, Workshop on Speech, Music and Mind 2019. ISCA, 2019. http://dx.doi.org/10.21437/smm.2019-7.
Turk, Oytun, Osman Buyuk, Ali Haznedaroglu, and Levent M. Arslan. "Application of voice conversion for cross-language rap singing transformation." In ICASSP 2009 - 2009 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2009. http://dx.doi.org/10.1109/icassp.2009.4960404.
Lee, S. W., Zhizheng Wu, Minghui Dong, Xiaohai Tian, and Haizhou Li. "A comparative study of spectral transformation techniques for singing voice synthesis." In Interspeech 2014. ISCA, 2014. http://dx.doi.org/10.21437/interspeech.2014-536.
Ardaillon, Luc, and Axel Roebel. "A Mouth Opening Effect Based on Pole Modification for Expressive Singing Voice Transformation." In Interspeech 2017. ISCA, 2017. http://dx.doi.org/10.21437/interspeech.2017-1453.
Kobayashi, Kazuhiro, Tomoki Toda, and Satoshi Nakamura. "Implementation of F0 transformation for statistical singing voice conversion based on direct waveform modification." In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2016. http://dx.doi.org/10.1109/icassp.2016.7472763.
Yang, Qin, Qi Gao, and Ruixia Fan. "A transformation from a singing voice to complex network using correlation coefficients of audio signals." In 2011 2nd International Conference on Intelligent Control and Information Processing (ICICIP). IEEE, 2011. http://dx.doi.org/10.1109/icicip.2011.6008386.