Selected scientific literature on the topic "Singing voice transformation"
Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles.
Consult the list of current articles, books, theses, conference proceedings and other scientific sources relevant to the topic "Singing voice transformation".
Journal articles on the topic "Singing voice transformation"
Shen, Hung-Che. "Building a Japanese MIDI-to-Singing song synthesis using an English male voice". MATEC Web of Conferences 201 (2018): 02006. http://dx.doi.org/10.1051/matecconf/201820102006.
Cooksey, John M., and Graham F. Welch. "Adolescence, Singing Development and National Curricula Design". British Journal of Music Education 15, no. 1 (March 1998): 99–119. http://dx.doi.org/10.1017/s026505170000379x.
Santacruz, José, Lorenzo Tardón, Isabel Barbancho and Ana Barbancho. "Spectral Envelope Transformation in Singing Voice for Advanced Pitch Shifting". Applied Sciences 6, no. 11 (19 November 2016): 368. http://dx.doi.org/10.3390/app6110368.
Bous, Frederik, and Axel Roebel. "A Bottleneck Auto-Encoder for F0 Transformations on Speech and Singing Voice". Information 13, no. 3 (23 February 2022): 102. http://dx.doi.org/10.3390/info13030102.
Gigolayeva-Yurchenko, V. "The role and specificity of vocal-performing universalism (on the example of the activity at MC “The Kharkiv Regional Philharmonic Society”)". Problems of Interaction Between Arts, Pedagogy and the Theory and Practice of Education 52, no. 52 (3 October 2019): 188–200. http://dx.doi.org/10.34064/khnum1-52.13.
Limitovskaya, A. V., and I. V. Alekseeva. "SOUND IMAGE OF A CUCKOO IN INSTRUMENTAL MUSIC BY COMPOSERS OF THE XVIITH — XIXTH CENTURIES". Arts education and science 1, no. 4 (2020): 130–38. http://dx.doi.org/10.36871/hon.202004017.
Lajic-Mihajlovic, Danka, and Smiljana Djordjevic-Belic. "Singing with gusle accompaniment and the music industry: The first gramophone records of gusle players' performances (1908−1931/2)". Muzikologija, no. 20 (2016): 199–222. http://dx.doi.org/10.2298/muz1620199l.
Jiang, Qin. "The image of Oksana in the opera by N. Rimsky Korsakov “Christmas Eve”: a composer plan and a performing embodiment". Aspects of Historical Musicology 18, no. 18 (28 December 2019): 57–72. http://dx.doi.org/10.34064/khnum2-18.04.
Mykhailets, V. V. "The problems of vocal training in the conditions of modern choral art". Aspects of Historical Musicology 18, no. 18 (28 December 2019): 141–54. http://dx.doi.org/10.34064/khnum2-18.08.
Campesato, Claudio. "The Natural Power of Music". Religions 14, no. 10 (26 September 2023): 1237. http://dx.doi.org/10.3390/rel14101237.
Theses / dissertations on the topic "Singing voice transformation"
Ardaillon, Luc. "Synthesis and expressive transformation of singing voice". Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066511/document.
This thesis aimed at conducting research on the synthesis and expressive transformation of the singing voice, towards the development of a high-quality synthesizer that can generate a natural and expressive singing voice automatically from a given score and lyrics. Three main research directions can be identified: the methods for modelling the voice signal to automatically generate an intelligible and natural-sounding voice according to the given lyrics; the control of the synthesis to render an adequate interpretation of a given score while conveying some expressivity related to a specific singing style; and the transformation of the voice signal to improve its naturalness and add expressivity by varying the timbre adequately according to pitch, intensity and voice quality. This thesis provides contributions in each of those three directions. First, a fully functional synthesis system has been developed, based on diphone concatenation. The modular architecture of this system allows different signal-modelling approaches to be integrated and compared. Then, the question of control is addressed, encompassing the automatic generation of f0, intensity, and phoneme durations. The modelling of specific singing styles has also been addressed by learning the expressive variations of the modelled control parameters on commercial recordings of famous French singers. Finally, some investigations of expressive timbre transformations have been conducted, for future integration into our synthesizer. These mainly concern methods related to intensity transformation, considering the effects of both the glottal source and the vocal tract, and the modelling of vocal roughness.
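As a rough illustration of the score-to-f0 control problem described in this abstract, the sketch below generates a continuous f0 curve from a list of notes, with smoothed note-to-note transitions and vibrato added on top. It is only a toy model under assumed parameter values (control rate, transition length, vibrato settings), not the control models developed in the thesis.

```python
# Toy sketch of score-driven f0 control: notes are held at their target
# frequency, joined by short smoothed transitions (a crude portamento),
# with sinusoidal vibrato added. All parameter values are illustrative.
import numpy as np

def f0_contour(notes, fs=200.0, transition=0.08, vib_rate=5.5, vib_depth_cents=30.0):
    """notes: list of (midi_pitch, duration_in_seconds); fs: control rate in Hz."""
    # Step curve of target frequencies sampled at the control rate
    targets = np.concatenate([
        np.full(int(round(dur * fs)), 440.0 * 2 ** ((midi - 69) / 12.0))
        for midi, dur in notes
    ])
    # Smooth the steps with a moving average of length `transition` seconds
    win = max(1, int(transition * fs))
    kernel = np.ones(win) / win
    padded = np.pad(targets, (win // 2, win - 1 - win // 2), mode="edge")
    smooth = np.convolve(padded, kernel, mode="valid")
    # Multiplicative vibrato expressed in cents
    t = np.arange(len(smooth)) / fs
    vib = 2 ** (vib_depth_cents * np.sin(2 * np.pi * vib_rate * t) / 1200.0)
    return smooth * vib

# Example: a three-note phrase (C4, D4, E4)
curve = f0_contour([(60, 0.5), (62, 0.5), (64, 1.0)])
```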
Loscos, Àlex. "Spectral processing of the singing voice". Doctoral thesis, Universitat Pompeu Fabra, 2007. http://hdl.handle.net/10803/7542.
This dissertation is centered on the digital processing of the singing voice, more concretely on the analysis, transformation and synthesis of this type of voice in the spectral domain, with special emphasis on those techniques relevant for music applications.
The thesis presents new formulations and procedures for both describing and transforming those attributes of the singing voice that can be regarded as voice specific. It includes, among others, algorithms for rough and growl analysis and transformation, breathiness estimation and emulation, pitch detection and modification, nasality identification, voice-to-melody conversion, voice beat onset detection, singing voice morphing, and voice-to-instrument transformation; some of these are exemplified with concrete applications.
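One of the building blocks listed in this abstract, pitch detection, can be sketched with a plain autocorrelation estimator, as below. The search range and voicing threshold are illustrative assumptions, and the code is not the algorithm developed in the thesis.

```python
# Minimal autocorrelation-based F0 estimator for a single voiced frame.
# Thresholds and the 80-1000 Hz range are assumed values for illustration.
import numpy as np

def estimate_f0(frame, sr, fmin=80.0, fmax=1000.0):
    frame = frame - np.mean(frame)
    # Non-negative-lag autocorrelation, normalized by the zero-lag energy
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    if ac[0] <= 0:
        return None  # silent frame
    ac = ac / ac[0]
    lag_min = int(sr / fmax)
    lag_max = min(int(sr / fmin), len(ac) - 1)
    lag = lag_min + int(np.argmax(ac[lag_min:lag_max]))
    return sr / lag if ac[lag] > 0.3 else None  # 0.3: assumed voicing threshold

# Example on a synthetic 220 Hz tone
sr = 44100
t = np.arange(int(0.04 * sr)) / sr
print(estimate_f0(np.sin(2 * np.pi * 220 * t), sr))  # ~220 Hz
```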
Summers, Susan G. "Portraits of Vocal Psychotherapists: Singing as a Healing Influence for Change and Transformation". Antioch University / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=antioch1401647279.
Bous, Frederik. "A neural voice transformation framework for modification of pitch and intensity". Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS382.
The human voice has been a great source of fascination and an object of research for over 100 years. During that time numerous technologies have sprouted around the voice, such as the vocoder, which provides a parametric representation of the voice commonly used for voice transformation. From this tradition, the limitations of purely signal-processing-based approaches are evident: to create meaningful transformations, the codependencies between different voice properties have to be understood well and modelled precisely. Modelling these correlations with heuristics obtained from empirical studies is not sufficient to create natural results; it is necessary to extract information about the voice systematically and to use this information automatically during the transformation process. Recent advances in computer hardware permit this systematic analysis of data by means of machine learning. This thesis therefore uses machine learning to create a neural voice transformation framework. The proposed framework works in two stages. First, a neural vocoder allows mapping between a raw-audio and a mel-spectrogram representation of voice signals. Second, an auto-encoder with an information bottleneck allows disentangling various voice properties from the remaining information. The auto-encoder allows changing one voice property while automatically adjusting the remaining voice properties. In the first part of this thesis, we discuss different approaches to neural vocoding and explain why the mel-spectrogram is better suited for neural voice transformations than conventional parametric vocoder spaces. In the second part we discuss the information-bottleneck auto-encoder. The auto-encoder creates a latent code that is independent of its conditional input. Using the latent code, the synthesizer can perform the transformation by combining the original latent code with a modified parameter curve. We transform the voice using two control parameters: the fundamental frequency and the voice level. Transformation of the fundamental frequency is an objective with a long history; using it allows us to compare our approach to existing techniques and to study, in a well-known setting, how the auto-encoder models the dependency on other properties. For the voice level, we face the problem that annotations hardly exist. Therefore, we first provide a new estimation technique for the voice level in large voice databases, and subsequently use the voice-level annotations to train a bottleneck auto-encoder that allows changing the voice level.
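A minimal sketch of the second stage described above, an auto-encoder whose decoder receives a (possibly modified) F0 curve alongside a narrow latent code, is given below in PyTorch. The layer sizes and the plain frame-wise MLP structure are assumptions made for illustration only; the thesis model, its training procedure, and the neural vocoder stage are not reproduced here.

```python
# Sketch of an information-bottleneck auto-encoder conditioned on F0.
# Architecture details are illustrative assumptions, not the thesis model.
import torch
import torch.nn as nn

class BottleneckAE(nn.Module):
    def __init__(self, n_mels=80, bottleneck=8, hidden=256):
        super().__init__()
        # The encoder sees only the mel frame; the narrow bottleneck is meant
        # to squeeze out information that the decoder can recover from F0.
        self.encoder = nn.Sequential(
            nn.Linear(n_mels, hidden), nn.ReLU(),
            nn.Linear(hidden, bottleneck),
        )
        # The decoder gets the latent code plus the (possibly modified) F0.
        self.decoder = nn.Sequential(
            nn.Linear(bottleneck + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, n_mels),
        )

    def forward(self, mel, f0):
        z = self.encoder(mel)  # (frames, bottleneck)
        return self.decoder(torch.cat([z, f0], dim=-1))

model = BottleneckAE()
mel = torch.randn(100, 80)           # 100 mel-spectrogram frames
f0 = torch.rand(100, 1) * 300 + 100  # original F0 curve in Hz (unnormalized here)
recon = model(mel, f0)               # training objective: recon should match mel
shifted = model(mel, f0 * 2 ** (3 / 12))  # transformation: re-decode 3 semitones up
```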
Bonada, Jordi. "Voice Processing and synthesis by performance sampling and spectral models". Doctoral thesis, Universitat Pompeu Fabra, 2009. http://hdl.handle.net/10803/7555.
The singing voice is one of the most challenging musical instruments to model and imitate. Over several decades, much research has been carried out to understand the mechanisms involved in singing voice production. In addition, from the very beginning of sound synthesis techniques, singing has been one of the main targets to imitate and synthesize, and a large number of synthesizers have been created with that aim.
The final goal of this thesis is to build a singing voice synthesizer capable of reproducing the voice of a given singer, both in terms of expression and timbre, sounding natural and realistic, and whose inputs would be just the score and the lyrics of a song. This is a very difficult goal, and in this dissertation we discuss the key aspects of our proposed approach and identify the open issues that still need to be tackled.
Thibault, François. "High-level control of singing voice timbre transformations". Thesis, McGill University, 2004. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=81514.
The transformation methods use a harmonic plus noise representation from which voice timbre descriptors are derived. This higher-level representation, closer to our perception of voice timbre, offers more intuitive controls over timbre transformations. The topics of parametric voice modeling and timbre descriptor computation are first introduced, followed by a study of the acoustical impacts of voice breathiness variations. A timbre transformation system operating specifically on the singing voice quality is then introduced with accompanying software implementations, including an example digital audio effect for the control and modification of the breathiness quality on normal voices.
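To make the idea of a breathiness control on a harmonic plus noise representation concrete, the toy sketch below re-weights a noise component against a harmonic component before resynthesis. The decomposition is taken as given and the gain mapping is an assumption; this is not Thibault's descriptor-based system.

```python
# Toy breathiness control on an already-separated harmonic/noise pair:
# raise or lower the noise part by a dB amount before summing back.
import numpy as np

def set_breathiness(harmonic, noise, breathiness_db=6.0):
    """Resynthesize with the noise component boosted (or cut) by breathiness_db."""
    gain = 10.0 ** (breathiness_db / 20.0)
    return harmonic + gain * noise

# Example with a synthetic frame: a 220 Hz "harmonic" part plus weak white noise
sr = 44100
t = np.arange(sr // 10) / sr
harmonic = 0.5 * np.sin(2 * np.pi * 220 * t)
noise = 0.01 * np.random.randn(len(t))
breathier = set_breathiness(harmonic, noise, breathiness_db=9.0)
```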
Books on the topic "Singing voice transformation"
Park, Chan E., Adnan Hossain, William Grange, Judith Fletcher, Domenico Pietropaolo, Hanne M. de Bruin, Josh Stenberg et al. Korean Pansori as Voice Theatre. Bloomsbury Publishing Plc, 2023. http://dx.doi.org/10.5040/9781350174917.
Book chapters on the topic "Singing voice transformation"
Meizel, Katherine. "Voice Control". In Multivocality, 159–76. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780190621469.003.0008.
Gaines, Malik. "The Cockettes, Sylvester, and Performance as Life". In Black Performance on the Outskirts of the Left. NYU Press, 2017. http://dx.doi.org/10.18574/nyu/9781479837038.003.0005.
Weber, Amanda. "Community Singing as Counterculture in a Women’s Prison". In The Oxford Handbook of Community Singing, 625–46. Oxford University Press, 2024. http://dx.doi.org/10.1093/oxfordhb/9780197612460.013.32.
Weiss, Piero. "Opera Moves To Venice And Goes Public". In Opera, 34–38. Oxford University Press, New York, NY, 2002. http://dx.doi.org/10.1093/oso/9780195116373.003.0007.
Case, George. "Youngstown". In Takin' Care of Business, 132–53. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780197548813.003.0009.
Conference papers on the topic "Singing voice transformation"
Cabral, João P., and Alexsandro R. Meireles. "Transformation of voice quality in singing using glottal source features". In SMM19, Workshop on Speech, Music and Mind 2019. ISCA, 2019. http://dx.doi.org/10.21437/smm.2019-7.
Turk, Oytun, Osman Buyuk, Ali Haznedaroglu and Levent M. Arslan. "Application of voice conversion for cross-language rap singing transformation". In ICASSP 2009 - 2009 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2009. http://dx.doi.org/10.1109/icassp.2009.4960404.
Lee, S. W., Zhizheng Wu, Minghui Dong, Xiaohai Tian and Haizhou Li. "A comparative study of spectral transformation techniques for singing voice synthesis". In Interspeech 2014. ISCA, 2014. http://dx.doi.org/10.21437/interspeech.2014-536.
Ardaillon, Luc, and Axel Roebel. "A Mouth Opening Effect Based on Pole Modification for Expressive Singing Voice Transformation". In Interspeech 2017. ISCA, 2017. http://dx.doi.org/10.21437/interspeech.2017-1453.
Kobayashi, Kazuhiro, Tomoki Toda and Satoshi Nakamura. "Implementation of F0 transformation for statistical singing voice conversion based on direct waveform modification". In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2016. http://dx.doi.org/10.1109/icassp.2016.7472763.
Yang, Qin, Qi Gao and Ruixia Fan. "A transformation from a singing voice to complex network using correlation coefficients of audio signals". In 2011 2nd International Conference on Intelligent Control and Information Processing (ICICIP). IEEE, 2011. http://dx.doi.org/10.1109/icicip.2011.6008386.