Theses on the topic "Interactive musique"
Consult the top 50 theses for your research on the topic "Interactive musique".
Hadjeres, Gaëtan. "Modèles génératifs profonds pour la génération interactive de musique symbolique". Thesis, Sorbonne université, 2018. http://www.theses.fr/2018SORUS027/document.
This thesis discusses the use of deep generative models for symbolic music generation. We focus on devising interactive generative models able to create new creative processes through a fruitful dialogue between a human composer and a computer. Recent advances in artificial intelligence have led to the development of powerful generative models able to generate musical content without human intervention. I believe that this practice cannot thrive in the future, since human experience and human appreciation are at the crux of artistic production. However, the need for flexible and expressive tools that could enhance content creators' creativity is clear; the development and potential of such novel A.I.-augmented computer music tools are promising. In this manuscript, I propose novel architectures that put artists back in the loop. The proposed models share the common characteristic of being devised so that a user can control the generated musical content in a creative way. To create a user-friendly interaction with these interactive deep generative models, user interfaces were developed. I believe that new compositional paradigms will emerge from the possibilities offered by these enhanced controls. This thesis ends with the presentation of genuine musical projects, such as concerts featuring these new creative tools.
Bedoya, Ramos Daniel. "Capturing Musical Prosody Through Interactive Audio/Visual Annotations". Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS698.
The proliferation of citizen science projects has advanced research and knowledge across disciplines in recent years. Citizen scientists contribute to research through volunteer thinking, often by engaging in cognitive tasks using mobile devices, web interfaces, or personal computers, with the added benefit of fostering learning, innovation, and inclusiveness. In music, crowdsourcing has been applied to gather various structural annotations. However, citizen science remains underutilized in studies of musical expressiveness. To bridge this gap, we introduce a novel annotation protocol to capture musical prosody, the acoustic variations performers introduce to make music expressive. Our top-down, human-centered method prioritizes the listener's role in producing annotations of prosodic functions in music. This protocol provides a citizen science framework and experimental approach for carrying out systematic and scalable studies of the functions of musical prosody. We focus on the segmentation and prominence functions, which convey structure and affect. We implement this annotation protocol in CosmoNote, a web-based, interactive, and customizable software tool conceived to facilitate the annotation of expressive music structures. CosmoNote gives users access to visualization layers, including the audio waveform, the recorded notes, extracted audio attributes (loudness and tempo), and score features (harmonic tension and other markings). The annotation types comprise boundaries of varying strengths, regions, comments, and note groups. We conducted two studies aimed at improving the protocol and the platform. The first study examines the impact of co-occurring auditory and visual stimuli on segmentation boundaries. We compare differences in boundary distributions derived from cross-modal (auditory and visual) vs. unimodal (auditory or visual) information. Distances between unimodal-visual and cross-modal distributions are smaller than those between unimodal-auditory and cross-modal distributions. On the one hand, we show that adding visuals accentuates crucial information and provides cognitive scaffolding for accurately marking boundaries at the starts and ends of prosodic cues, although visuals sometimes divert the annotator's attention away from specific structures. On the other hand, removing the audio impedes the annotation task by hiding subtle, relied-upon cues. Although visual cues may sometimes overemphasize or mislead, they are essential in guiding boundary annotations of recorded performances, often improving the aggregate results. The second study uses all of CosmoNote's annotation types and analyzes how annotators, receiving either minimal or detailed protocol instructions, approach annotating musical prosody in a free-form exercise. We compare the quality of annotations between participants who are musically trained and those who are not. The citizen science component is evaluated in an ecological setting where participants are fully autonomous in a task where time, attention, and patience are valued. We present three methods based on common annotation labels, categories, and properties to analyze and aggregate the data. Results show convergence in the annotation types and descriptions used to mark recurring musical elements across experimental conditions and musical abilities. We propose strategies for improving the protocol, data aggregation, and analysis in large-scale applications. This thesis contributes to representing and understanding performed musical structures by introducing an annotation protocol and platform, tailored experiments, and aggregation and analysis methods. The research shows the importance of balancing the collection of easier-to-analyze datasets with richer content that captures complex musical thinking. Our protocol can be generalized to studies of performance decisions to improve the comprehension of expressive choices in musical performances.
Petit, Bertrand. "Temps et durée : de la programmation réactive synchrone à la composition musicale". Thesis, Université Côte d'Azur, 2020. http://www.theses.fr/2020COAZ4024.
This thesis raises the question of the relationship between the expressiveness of computer programming tools and musical creation, from the perspective of the expression of "time". We base our reflection on time on the Bergsonian concept of "duration", which exists only in the consciousness of the perceiver, as opposed to objective time, the time of clocks, which is independent of consciousness. We have chosen to follow the tradition of written music, i.e. music preconceived in a fixed form that allows the composer to evaluate the aesthetic content of the work. Among the various possibilities for implementing duration, we oriented ourselves towards interactive music, i.e. music that is controlled on stage, in part by the audience. We developed a method of composition based on three concepts: “basic musical elements”, which are short musical phrases; “groups” of these basic elements; and “orchestrations”. The groups are made available to the audience, who participate in the realization of the music by selecting one basic element or another, or by responding to choices proposed to them. The way in which the groups are made available to the audience constitutes the “orchestration”, which is implemented by means of the synchronous reactive language HipHop.js. This language combines the programming of complex automata, well suited to our orchestration concept, with web programming particularly suited to the implementation of large-scale interactions. We fed this research with various experiments and musical productions using a software platform called Skini.
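The abstract's three concepts (basic musical elements, groups, orchestrations) could be sketched as a minimal data model. The following Python sketch is purely illustrative: all class and field names are hypothetical, and it stands in for the thesis's HipHop.js/Skini implementation only at the level of ideas, not as actual code from the work.

```python
from dataclasses import dataclass, field

@dataclass
class BasicElement:
    """A short musical phrase, the smallest compositional unit."""
    name: str
    notes: list  # e.g. MIDI pitches

@dataclass
class Group:
    """A set of basic elements offered to the audience at one moment."""
    name: str
    elements: list

@dataclass
class Orchestration:
    """Sequences the groups made available to the audience and records
    which element each audience choice selects."""
    groups: list
    performed: list = field(default_factory=list)

    def offer(self, group_index: int, audience_choice: int) -> BasicElement:
        """Offer one group to the audience; the audience picks an element."""
        group = self.groups[group_index]
        element = group.elements[audience_choice]
        self.performed.append(element.name)
        return element

# A toy scenario: the audience picks the second phrase of the first group.
a = BasicElement("phrase-a", [60, 62, 64])
b = BasicElement("phrase-b", [67, 65, 64])
score = Orchestration(groups=[Group("intro", [a, b])])
chosen = score.offer(0, 1)  # chosen is phrase-b
```

In the actual system described by the thesis, the scheduling of such offers is driven by synchronous reactive automata rather than direct method calls.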
Scurto, Hugo. "Designing With Machine Learning for Interactive Music Dispositifs". Electronic Thesis or Diss., Sorbonne université, 2019. http://www.theses.fr/2019SORUS356.
Music is a cultural and creative practice that enables humans to express a variety of feelings and intentions through sound. Machine learning opens many prospects for designing human expression in interactive music systems. Yet, as a computer science discipline, machine learning remains mostly studied from an engineering sciences perspective, which often excludes humans and musical interaction from the loop of the created systems. In this dissertation, I argue in favour of designing with machine learning for interactive music systems. I claim that machine learning must first and foremost be situated in human contexts to be researched and applied to the design of interactive music systems. I present four interdisciplinary studies that support this claim, using human-centred methods and model prototypes to design and apply machine learning to four situated musical tasks: motion-sound mapping, sonic exploration, synthesis exploration, and collective musical interaction. Through these studies, I show that model prototyping helps envision designs of machine learning with human users before engaging in model engineering. I also show that the final human-centred machine learning systems not only help humans create static musical artifacts, but also support dynamic processes of expression between humans and machines. I call these processes of musical interaction co-expression: between humans, who may have an expressive and creative impetus regardless of their expertise, and machines, whose learning abilities may be perceived as expressive by humans. In addition to these studies, I present five applications of the created model prototypes to the design of interactive music systems, which I publicly demonstrated in workshops, exhibitions, installations, and performances. Using a reflexive approach, I argue that the musical contributions enabled by such a design practice with machine learning may ultimately complement the scientific contributions of human-centred machine learning. I claim that music research can thus be led through dispositif design, that is, through the technical realization of aesthetically functioning artifacts that challenge cultural norms in computer science and music.
Toro-Bermudez, Mauricio. "Structured interactive scores : from a structural description of a multimedia scenario to a real-time capable implementation with formal semantics". Thesis, Bordeaux 1, 2012. http://www.theses.fr/2012BOR14588/document.
Technology has shaped the way in which we compose and produce music. Notably, the inventions of the microphone and the computer pushed the development of new music styles in the 20th century. In fact, several artistic domains have benefited from such technological developments; for instance, Experimental music, non-linear multimedia, Electroacoustic music, and interactive multimedia. In this dissertation, we focus on interactive multimedia. Interactive multimedia deals with the design of scenarios where multimedia content and interactive events are handled by computer programs. Examples of such scenarios are multimedia art installations, interactive museum exhibitions, some Electroacoustic music pieces, and some Experimental music pieces. Unfortunately, most interactive multimedia scenarios are based on informal specifications, so it is not possible to formally verify properties of such systems. We advocate the need for a general and formal model. Interactive scores is a formalism to describe interactive multimedia scenarios. We propose new semantics for interactive scores based on timed event structures. With such a semantics, we can specify properties of the system, in particular properties about traces, which are difficult to specify as constraints. In fact, constraints are an important part of the semantic model of interactive scores because the formalism is based on temporal constraints among the objects of the scenario. We also present an operational semantics of interactive scores based on the non-deterministic timed concurrent constraint (ntcc) calculus, and we relate this semantics to the timed event structures semantics. With the operational semantics, we formally describe the behavior of a score whose temporal object durations can be arbitrary integer intervals. The operational semantics is obtained from the timed event structures semantics of the score. To provide such a translation, we first define the normal form of a timed event structure, in which events related by zero-duration delays are collapsed into a single one. We also define the notion of dispatchable timed event structures: event structures whose constraint graph can be dispatched by relying only on local propagation. We believe that an operational semantics in ntcc offers some advantages over existing Petri net semantics for interactive scores; for instance, the durations of the temporal objects can be arbitrary integer intervals, whereas in previous models of interactive scores such durations can only be intervals representing equalities and inequalities. In this dissertation, we also introduce two extensions of the formalism of interactive scores: (1) one to handle audio processing using the Fast AUdio Stream (Faust) language, and (2) another to handle conditional branching, allowing designers to specify choices and loops. For the first extension, we present a timed event structures semantics and ideas on how to define an operational semantics. For the second extension, we present an implementation and results comparing the average relative jitter of an implementation of an arpeggio based on Karplus-Strong with respect to existing implementations of Karplus-Strong written in Pure Data. We also define an XML file format for interactive scores and for the conditional branching extension. A file format is crucial to ensure the persistence of scores. Ntcc models of interactive scores are executed using Ntccrt, a real-time capable interpreter for ntcc. They can also be verified automatically using ntccMC, a bounded-time automata-based model checker for ntcc, which we introduce in this dissertation. Using ntccMC, we can verify properties expressed in constraint linear-time logic. Ntcc has been used in the past not only for multimedia interaction models, but also for systems biology, security protocols, and robots.
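The Karplus-Strong synthesis mentioned in the jitter comparison is a classic plucked-string algorithm: a noise burst cycles through a delay line whose output is averaged and fed back. A minimal sketch in Python, independent of the thesis's Faust and Pure Data implementations; the sample rate and decay factor here are illustrative choices:

```python
import random

def karplus_strong(frequency, duration, sample_rate=44100, decay=0.996):
    """Generate a plucked-string tone. The delay-line length sets the
    pitch; a decayed two-point average acts as the feedback lowpass."""
    n = int(sample_rate / frequency)  # delay-line length in samples
    buf = [random.uniform(-1.0, 1.0) for _ in range(n)]  # initial noise burst
    out = []
    for _ in range(int(duration * sample_rate)):
        out.append(buf[0])
        # lowpassed, decayed feedback of the two oldest samples
        new = decay * 0.5 * (buf[0] + buf[1])
        buf = buf[1:] + [new]
    return out

samples = karplus_strong(440.0, 0.5)  # 0.5 s of an A4 pluck
```

Because each output sample depends only on the buffer state, the algorithm is cheap enough for real-time use, which is why its timing jitter is a meaningful benchmark for an interactive scores runtime.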
Cavez, Vincent. "Designing Pen-based Interactions for Productivity and Creativity". Electronic Thesis or Diss., université Paris-Saclay, 2025. http://www.theses.fr/2025UPASG013.
Designed with the mouse and keyboard in mind, productivity tools and creativity support tools are powerful on desktop computers, but their structure becomes an obstacle when brought to interactive surfaces supporting pen and touch input. Indeed, the opportunities the pen provides for precision and expressivity have been demonstrated in the HCI literature, but productivity and creativity tools require a careful redesign that leverages these unique affordances, so as to benefit from the intuitiveness they offer while keeping the advantages of structure. This delicate articulation between pen and structure has been overlooked in the literature. My thesis work focuses on this articulation, with two use cases, to answer the broad research question: “How to design pen-based interactions for productivity and creativity on interactive surfaces?” I argue that productivity depends on efficiency while creativity depends on both efficiency and flexibility, and I explore interactions that promote these two dimensions. My first project, TableInk, explores a set of pen-based interaction techniques designed for spreadsheet programs and contributes guidelines to promote efficiency on interactive surfaces. I first conduct an analysis of commercial spreadsheet programs and an elicitation study to understand what users can do and what they would like to do with spreadsheets on interactive surfaces. Informed by these, I design interaction techniques that leverage the opportunities of the pen to mitigate friction and enable more operations by direct manipulation on and through the grid. I prototype these interaction techniques and conduct a qualitative study with information workers who performed a variety of spreadsheet operations on their own data. The observations show that using the pen to bypass the structure is a promising means of promoting efficiency with a productivity tool. My second project, EuterPen, explores a set of pen-based interaction techniques designed for music notation programs and contributes guidelines to promote both efficiency and flexibility on interactive surfaces. I first conduct a series of nine interviews with professional composers in order to take a step back and understand both their thought process and their work process with their current desktop tools. Building on this dual analysis, I derive guidelines for the design of features with the potential to promote both efficiency, for frequent or complex operations, and flexibility, with regard to the exploration of ideas. Then, I act on these guidelines by engaging in an iterative design process for interaction techniques that leverage the opportunities of the pen: two prototyping phases, a participatory design workshop, and a final series of interviews with eight professional composers. The observations show that, on top of using the pen to leverage the structure for efficiency, using its properties to temporarily break the structure is a promising means of promoting flexibility with a creativity support tool. I conclude this manuscript by discussing several ways to interact with structure, presenting a set of guidelines to support the design of pen-based interactions for productivity and creativity tools, and elaborating on the future applications this thesis opens.
Nika, Jérôme. "Guiding Human-Computer Music Improvisation : introducing Authoring and Control with Temporal Scenarios". Electronic Thesis or Diss., Paris 6, 2016. http://www.theses.fr/2016PA066141.
This thesis focuses on the introduction of authoring and control in human-computer music improvisation through the use of temporal scenarios to guide or compose interactive performances, and addresses the dialectic between planning and reactivity in interactive music systems dedicated to improvisation. An interactive system dedicated to music improvisation generates music on the fly, in relation to the musical context of a live performance. We focus here on pulsed and idiomatic music relying on a formalized and temporally structured object, for example a harmonic progression in jazz improvisation. In the same way, the models and architecture we developed rely on a formal temporal structure. This thesis thus presents: a music generation model guided by a "scenario" introducing anticipatory behaviors; an architecture combining this anticipation with reactivity using mixed static/dynamic scheduling techniques; an audio rendering module to perform live re-injection of captured material in synchrony with a non-metronomic beat; and a framework to compose improvised interactive performances at the "scenario" level. This work fully integrated frequent interactions with expert musicians into the iterative design of the models and architectures. The latter are implemented in the interactive music system ImproteK, which was used on various occasions during live performances with improvisers. During these collaborations, work sessions were combined with listening sessions and interviews to gather the numerous judgments expressed by the musicians, in order to validate and refine the scientific and technological choices.
Bouche, Dimitri. "Processus compositionnels interactifs : une architecture pour la programmation et l'exécution des structures musicales". Thesis, Paris 6, 2016. http://www.theses.fr/2016PA066533/document.
This thesis aims at designing a computer system enabling the computation of musical structures, their presentation and handling on the compositional side, and their interactive rendering. It is a study at the crossroads of several computer science research fields: discrete systems modeling, scheduling, software design, and human-computer interfaces. We propose an architecture where editing programs can affect their outputs, including during the rendering phase, while preserving the compositional benefits of a deferred-time approach. Compositions are therefore considered as continually running programs, where computation and rendering mechanisms are interleaved. We introduce new tools and interfaces to arrange their execution through time thanks to dynamic temporal scenario scripting, which we call meta-composing. The different results described in this manuscript are implemented in the computer-aided composition environment OpenMusic.
Daviaud, Bérangère. "Méthodes formelles pour les systèmes réactifs, applications au live coding". Electronic Thesis or Diss., Angers, 2024. http://www.theses.fr/2024ANGE0032.
The formalism of discrete event systems and reactive systems provides an effective abstract framework for representing and studying a wide range of systems. In this thesis, we leverage this formalism to model a live coding score whose interpretation is conditioned by the occurrence of specific events. This approach led us to investigate formal methods for discrete event systems that enable their modeling, analysis, and the design of appropriate control strategies. This study resulted in several contributions, particularly regarding the expressiveness of weighted automata, the formal verification of temporal properties, and the existence of weighted simulation. The final part of this dissertation introduces the formalism of the interactive score, as well as the Troop Interactive library, developed to make interactive score writing and the realization of interactive sound performances based on live coding practices more accessible.
Echeveste, José-Manuel. "Un langage de programmation pour composer l'interaction musicale : la gestion du temps et des événements dans Antescofo". Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066143/document.
Mixed music is the association, in live performance, of human musicians and computer media interacting in real time. Authoring the interaction between the humans and the electronic processes, as well as its real-time implementation, challenges computer science in several ways. This contribution presents the Antescofo real-time system and its domain-specific language. Using this language, a composer is able to describe temporal scenarios where electronic musical processes are computed and scheduled in interaction with a live musician's performance. Antescofo couples artificial machine listening with a reactive and temporized system. The challenge of bringing human actions into the loop of computing is strongly related to the specification and management of multiple time frameworks, and to the timeliness of live execution despite the heterogeneous nature of time in the two media. Interaction scenarios are expressed at a symbolic level through the management of musical time (i.e., events like notes or beats in relative tempi) and of physical time (with relationships like succession, delay, duration, and speed between the occurrences of events during the performance on stage). Antescofo's unique features are presented through a series of examples which illustrate how to manage the execution of different audio processes through time, and their interactions with an external environment. The Antescofo approach has been validated through numerous uses of the system in live electronic performances of the contemporary music repertoire by various international music ensembles.
Baboni, Schilingi Jacopo. "La musique hyper-systémique". Paris 8, 2010. http://www.theses.fr/2010PA084172.
The research we present concerns systemics in music, its applications through the most recent generative systems, and the definition of a new theory able to incorporate the most recent studies in the field of human/machine interaction as they touch on the writing of music. Specifically, it studies the relation between the formalization of rules and free choice within a given system, and it is based on the development of software allowing a concrete application and demonstration of the theoretical conclusions we were able to reach. The thesis itself is articulated into four different types of documents: I – a text of fundamental theories, in which we expose a sociological, anthropological, and systemic study of today's written music, together with a new theory for musical composition; II – four articles published in different specialized journals, which explain in detail some problems inherent in computer music; III – three software tools necessary for the algorithmic and practical demonstration of our hypotheses; IV – four musical compositions, as examples of our concrete work in the field of artistic creation.
Giura, Longo Alessandra. "Communication et interaction dans la musique de chambre : l'exemple de l'oeuvre ouverte dans la musique contemporaine anglo-saxonne". Thesis, Evry-Val d'Essonne, 2015. http://www.theses.fr/2015EVRY0023.
Texto completo
This research explores the mechanisms for sharing the interpretation of music from the point of view of the performers. It is based on a field experiment carried out with a professional contemporary music ensemble. The work was observed and analysed according to the principles of Action Research and the Constant Comparative Method of Strauss and Corbin, with the purpose of describing and classifying the behaviour of musicians and of understanding and clarifying the modalities of communication and interaction between them. The experiment involved two works by Anglo-Saxon composers (Ensemble by Tim Parkinson and Treatise by Cornelius Cardew) drawn from the 'open work' repertoire, because their demand that the interpreters define the final form of the music leads to a collective process of interpretation that multiplies communication and interaction between musicians. Data analysis led us to think that musicians share the expressive core of music, by its nature ineffable, through a kind of embodied knowledge and intuition that can exist only if every participant is open to the others. The quality of relationships between musicians is a prerequisite for sharing the interpretation. Its fundamentals are not conveyed by words but flow between the musicians through identification with the others and through the musical material itself, the sound. Verbal communication is limited, in most cases, to technical speech, which often hides expressive thoughts behind the words. Our hypothesis is that the activation of the mirror-neuron system allows partners to understand gestures as well as emotions, but verification must await further research in the field of neuroscience.
Gulluni, Sébastien. "Un système interactif pour l'analyse des musiques électroacoustiques". Phd thesis, Télécom ParisTech, 2011. http://pastel.archives-ouvertes.fr/pastel-00676691.
Texto completo
Gulluni, Sébastien. "Un système interactif pour l'analyse des musiques électroacoustiques". Paris, Télécom ParisTech, 2011. https://pastel.hal.science/pastel-00676691.
Texto completo
Electro-acoustic music is still hardly studied in the field of Music Information Retrieval. Most research on this type of music focuses on composition tools, pedagogy and music analysis. In this thesis, we focus on scientific issues related to the analysis of electro-acoustic music. After placing this music in its historical context, a study of the practices of three professional musicologists allows us to obtain guidelines for building an analysis system. Thus, we propose an interactive system for assisting the analysis of electro-acoustic music, which allows one to find the various instances of the sound objects of a polyphonic piece. The proposed system first performs a segmentation to identify the initial instances of the main sound objects. Then, the user can select the target sound objects before entering an interactive loop that uses active learning and relevance feedback provided by the user. The feedback of the user is then used by the system to perform a multi-label classification of sound segments based on the selected sound objects. An evaluation of the system is performed by user simulation using a synthetic corpus. The evaluation shows that our approach achieves satisfying results in a reasonable number of interactions.
Goudard, Vincent. "Représentation et contrôle dans le design interactif des instruments de musique numériques". Thesis, Sorbonne université, 2020. https://accesdistant.sorbonne-universite.fr/login?url=http://theses-intra.upmc.fr/modules/resources/download/theses/2020SORUS051.pdf.
Texto completo
Digital musical instruments appear as complex objects, positioned in a continuum with the history of lutherie while also marked by the strong disruption provoked by digital technology and its consequences in terms of sonic possibilities, relations between gesture and sound, listening situations, reconfigurability of instruments and so on. This doctoral work attempts to describe the characteristics originating from the integration of digital technology into musical instruments, drawing notably on musicological reflection, on software and hardware development, on musical practice, and on numerous interactions with other musicians, instrument makers, composers and researchers.
Reraki, Fotini. "La musique imaginaire : discours, identités et représentations dans l’enseignement grec contemporain". Thesis, Paris 4, 2017. http://www.theses.fr/2017PA040018.
Texto completo
The present thesis explores the formal space of music education in Greece as an area of confrontation and negotiation of meanings around music. The introduction of Greek traditional music into this educational space serves as a paradigm for a study of "the management of musical otherness", based on a field survey (participant observation and non-directive interviews) which focuses in particular on the conditions of cohabitation between teacher-musicians with different musical trajectories, and thus on the conditions of cohabitation between learning practices, discourses and imaginaries which sometimes intertwine and sometimes compete with one another. In this regard, the ultimate aim of this work is to bring to light that the ways individuals represent music, and everything related to it, form a symbolic system referring to the manner in which they define and situate themselves in relation to others.
Fernández, José Miguel. "Vers un système unifié d’interaction et de synchronisation en composition électroacoustique et mixte : partitions électroniques centralisées". Electronic Thesis or Diss., Sorbonne université, 2021. http://www.theses.fr/2021SORUS420.
Texto completo
With the advent of computers, new avenues of compositional and sound research have opened up. But while we have witnessed for years a plethora of sound generators and new synthesis techniques, there are few proposals for tools that address the control and formal construction of electronic music at several levels and that allow a fine integration of electronic writing. The work initiated in this thesis aims to develop, in the context of interactive mixed electroacoustic and audiovisual music in real time, a notion of a centralised electronic score allowing, within the same environment, the definition, composition and general control of all electronic processes, their interactions and their synchronisations with musical, gestural and visual events. Using new, more expressive languages for writing electronics, such as Antescofo, and powerful synthesis and signal-processing systems, such as SuperCollider, this work has resulted in the development of a dedicated library: AntesCollider. This library allows one to experiment with new approaches to the writing of electronics through the organisation and composition of multitemporal, multiscale and interactive sound structures. By taking advantage of the computational notions of agents, processes and real-time algorithms, these sound structures can be combined dynamically and polyphonically in relation to external events, opening up new compositional paradigms and renewing the freedom and plasticity of musical creation.
Lähdeoja, Otso. "Une approche de l'instrument augmenté : la guitare électrique". Paris 8, 2010. http://octaviana.fr/document/155983601#?c=0&m=0&s=0&cv=0.
Texto completo
This thesis approaches the notion of an augmented musical instrument from two complementary perspectives: an "objective" view aimed at outlining a theoretical framework for instrument augmentation, and a subjective engagement in the process of creating an experimental augmented electric guitar and composing/performing music with it. The first part of the thesis discusses the nature and the history of instrument augmentation, establishing a basis for a view of the instrument as a system comprising the musician, the instrument-object and the sound in a continuum of energy and/or information. The relationship between the musician and the instrument is studied in detail as a key element in the creation of an augmentation's instrumental quality. A personal methodology for instrument augmentation is presented, combining gesture, object and sound in a single conceptual tool. The second part presents the practical aspects of the research work. The technological specifications of our augmented electric guitar are described, in parallel with a discussion of the technological and aesthetic choices. The guitar's instrumental environment provides the basis for the development of nine specific augmentations with gestural, technological (sensor and software) and sonic aspects. A series of musical and intermedia works with the augmented guitar is presented and analysed at the end of the thesis. A CD-ROM and a DVD attached to the monograph provide sonic and visual illustrations of the augmentations and of the related artistic works.
Salvati, Silvia. ""Punctum contra punctum" : interaction musique-architecture et réception au XXe siècle : du dodécaphonisme au déconstructivisme". Paris 1, 2007. http://www.theses.fr/2007PA010704.
Texto completo
Bagés i Rubí, Joan. "Systèmes musicaux interactifs et création sonore musicale". Paris 8, 2012. http://www.theses.fr/2012PA084188.
Texto completo
This thesis proposes an examination of the notion of the interactive musical system and describes a personal approach to the design of such systems. The first part presents a state of the art of new music technologies. In this same part, an approach is presented based on the construction of a network of elements in which the artist is the principal actor. It is the artist who, faced with multiple exploratory interactions and having to appropriate a pervasive technology, must undertake a necessary critique of these tools that excite his own imagination. I argue that it is the experimental attitude that is most in tune with the use of these technologies in the artistic field. Roberto Barbanti opposes to the traditional conception the notion of the Ultramedia. This notion is confronted with the musical theories of the composer Horacio Vaggione, which are founded on the concept of networks of computational objects. In the second part, I present several personal works that were realised with the aid of interactive musical systems. In the third part, I present a very particular project around interactive musical systems and medicalised disability, within the APPC of Tarragona (Spain). The project consists of the creation and development of interactive applications and activities for people with severe cerebral impairment. The present document concludes with the proposal of a model for the design and development of interactive musical systems.
Roy, Alexandre. "Développement d'une plate-forme robotisée pour l'étude des instruments de musique à cordes pincées". Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066540/document.
Texto completo
The study of musical instruments involves the study of musicians, of instruments and of the complex interaction that exists between them. The analysis of musical gestures requires numerous measurements on musicians to extract the relevant parameters in order to model their interaction. In the case of plucked-string instruments, the goal is to determine the initial conditions imposed on the string by the plucking mechanism (plectrum, finger). How does one obtain all these parameters without disturbing the musician in playing conditions? How can one know that the parameters are the best ones to describe the initial conditions of the string vibrations and its acoustic signature? An experimental platform has been designed to answer these questions. It can reproduce the gesture of a musician, in particular of a harpist or a harpsichordist. It should be pointed out that the concept of a musical gesture is defined here in a broad sense: the robot can reproduce either the path followed by the musician's fingers, or the initial conditions resulting from this trajectory. The first method is particularly suited to the resolution of an inverse dynamics problem. One can then calculate the forces developed by the musician's muscles during the execution of a musical piece, for example. The second method is better suited to imposing specific initial conditions on the instrument through trajectories designed by the experimenter. The correct reproduction of the trajectories requires rejecting disturbances due to the contact between the robot and the instrument. The design of a force sensor, integrated into the robot's end effector, is a first step toward satisfying this requirement. After the design of the robotic platform, its precision and repeatability are investigated. The force sensor is then integrated into the robot's end effector, and an example of its use is presented. The experiment focuses on the harmonization of harpsichord plectra. Harmonization is a complex process of adjustments made by the luthier on the instrument. A model of the plectrum/string interaction, taking into account the geometry of the plectrum, as well as experiments performed on a real harpsichord, shows that harmonization has an impact on the initial conditions of the string's vibration.
Ghilain, Matthieu. "Synchronisation au rythme de la musique et effet du contexte social dans la maladie d’Alzheimer et le vieillissement physiologique". Thesis, Lille 3, 2019. https://pepite-depot.univ-lille.fr/RESTREINT/EDSHS/2019/2019LIL3H3061.pdf.
Texto completo
In musical interventions with people with Alzheimer's disease or related diseases, participants are frequently asked to move to the rhythm of music. Synchronization to musical rhythm, especially in a group, involves responses at different levels (motor, rhythmic, social and emotional) and could provide pleasure as well as strengthen social ties amongst the patients and their relatives. However, synchronization to musical rhythm, and the possible links between these different levels of response to this activity, are not well known in Alzheimer's disease. The objective of this thesis is to examine the different aspects of the behavior of people with Alzheimer's disease (or related diseases) and of participants with 'normal' physiological aging during a synchronization activity to musical rhythm performed in joint action with a musician. The approach chosen in this project was based on a multidisciplinary method including movement science, social psychology and neuropsychology. First, we studied the effect of social context and music (and its temporal characteristics) on synchronization performance and on the social, emotional, rhythmic and motor engagement of people with Alzheimer's disease in this activity (study 1, chapters 4 and 5). The results showed that the physical presence of a singer performing the synchronization task with the participant modulated synchronization performance and the quality of the social and emotional relationship differently from an audio-visual recording of this singer. This effect of the social context was greater in response to music than to a metronome and was also modulated by tempo and meter. In addition, we found that music increased the rhythmic engagement of the participants compared to a metronome. Then, we compared the responses to the synchronization task in pathological and physiological aging (study 2, chapters 6 and 7). The results revealed that synchronization performance did not differ between the two groups, suggesting that audio-motor coupling in Alzheimer's disease may be spared in this task. Although the disease reduced motor, social and emotional engagement in response to music compared to physiological aging, an effect of social context was observed on the behavior in both groups. Finally, we compared the groups of participants with Alzheimer's disease between the two studies, showing that the severity of the disease could affect synchronization and engagement with music in the activity (chapter 8). In conclusion, this thesis has shown that audio-motor coupling is partly spared in people with Alzheimer's disease and that joint action with a partner modulates the quality of the social relationship and the engagement with music. The theoretical knowledge acquired through this work provides a better understanding of the evolution of behavior in response to music in Alzheimer's disease. The method developed in this thesis thus offers the opportunity to evaluate the therapeutic benefits of musical interventions at different levels of the behavior of people with Alzheimer's disease. Such perspectives would improve the care of these people and their caregivers.
Bacot, Baptiste. "Geste et instrument dans la musique électronique : organologie des pratiques de création contemporaines". Thesis, Paris, EHESS, 2017. http://www.theses.fr/2017EHES0172.
Texto completo
The technological means of electronic music reconfigure musical practices. Because of the machines' computation and automation capacities that mediate the sonic phenomenon, the causal relationship between instrumental gesture and sound is altered. What, then, does it mean to play electronic music? The concepts of gesture and musical instrument are traditionally employed for the analysis of musical practices, but how should they be understood in this context? To address these issues, we conducted extensive fieldwork between 2010 and 2016 with professional musicians in various contexts: art music with real-time electronics, audiovisual performance, and popular electronic music. Our instrument-focused approach allows us to consider the materiality of electronic music itself, beyond aesthetics. Thus, this work is an "organological inquiry" into the following musicians or bands: Robert Henke, Alex Augier, Brain Damage, High Tone, Pierre Jodlowski, Jesper Nordin, John MacCallum and Teoma Naccarato, Nicolas Mondon, Greg Beller, and the Unmapped collective. The ethnographic method sheds light on the use of music technologies at different stages of the creative process: conceptualization of the work, technical collaboration, the making of the music and its performance. From this analysis of musical activity, captured through instrumental configurations, we offer a typology of electronic music instruments based on a gestural criterion, which is the only residual aspect of the acoustic model of instrumental interaction. Corporeal activity serves to organise the material diversity of music technologies and constitutes a strategic means of expressing instrumental interaction.
Hsueh, Shu-Yuan. "Conceptualizing Creative Styles in Technology-Mediated Artistic Practices". Electronic Thesis or Diss., université Paris-Saclay, 2020. http://www.theses.fr/2020UPASG055.
Texto completo
Much contemporary discourse around the democratization of creativity sees technologies supporting creativity as providing access to universally shared capabilities that are otherwise untapped. This encourages evaluation metrics that assess the emergence or development of those specific capabilities. These accounts often overlook the diversity in human capabilities and work styles. This long-standing focus on identifying universally useful tasks contributes to computational diversity instead of epistemological diversity. In this dissertation, I am interested in exploring an alternative design space: one in which computational support enriches, rather than erases, epistemological diversity, namely the diversity of styles in carrying out the same activity. This tension between diversity and generalizability poses important challenges for design. In this dissertation, I consider the intellectual foundations that undergird much of the contemporary rhetoric about creativity. I argue that the traditional stage-based models of the creative process contribute to a disembodied view that promulgates the understanding that all creative processes, at their core, follow the same structural rules. I suggest that this view is incomplete and propose a reconceptualization of the creative process that takes epistemological diversity into account. To do so, I give two empirical accounts of how creative styles are constructed over time in two different creative settings: one in dance improvisation and another in contemporary music composition and choreography. These case studies surface the shifting roles of technology as people repurpose and manipulate it. I reconceptualize the creative process as continuous acts of re-vision, which foregrounds the craftwork that goes into managing shifting perspectives. This understanding provides insight into the limits and opportunities of creative tools, offering critical reflections on the various roles technology may play in the creative process.
Hussenet, Olivier. "À la fois théâtre et musique : la chanson, ars d'incarnation". Thesis, Paris, EHESS, 2020. http://www.theses.fr/2020EHES0080.
Texto completo
How can one, and how does one, trace the contours of 'song'? Acknowledging the elusive nature of the object's essence (because of its highly disparate and heterogeneous nature), we propose to deal with the issue of the object's delineation from a pragmatist perspective. In the first part of this dissertation, the attempt to delineate the object called "chanson" (song) encounters a series of miscellaneous obstacles. Successive definitional attempts build and reproduce essentialist presuppositions (the song-in-itself) and formal standards or thematic norms which prevent them from framing the object and create leftovers. The arbitrariness of definitions found in dictionaries and books on the history of song is compounded by the heterogeneous nature of 'chanson' on different levels: artifact, style and situated occurrence. The use of the term "chanson" followed by a series of adjectives creates a categorization leading to a generic labyrinth dominated by a priori hierarchy and normative presuppositions. Considering "chanson" as a hypergenre (in the sense of Maingueneau) might provide a solution. In a second section, the importance of considering the different 'moments' of song becomes more apparent. Depending on the moment considered, the patrimonialization processes differ: the constitution of a corpus, anthologization, the establishment of a repertory (repertoire). Heritage and the objects that compose it are interdependent. Exclusively institutional forms of patrimonialization, such as Fortoul's 1852 survey or the musical folklore mission in Lower Brittany in 1939, do not construct the same object as the joint initiative of the French Museum of Popular Arts and Traditions and the radio station Europe 1 to create a 'Musée de la Chanson' ("Song Museum") in the early 1960s. Patrimonialization through repertory (which is the choice of 'Le Hall de la chanson', "Song Hall of Fame", founded in 1990) would appear to be more inclusive because it is multidirectional. Unfortunately, it runs the double risk of trying to defend an object seen as inferior and of having to steer clear of the trope of the threat of extinction. In a third section, the adoption of a pragmatist perspective makes possible a dramatic change of perspective: the object is no longer seen as a semantic structure to be interpreted; rather, it attains the status of "organizational object" (Garfinkel 2001). From there, the inquiry goes on to discuss song's agency, viewed in terms of its delivery in singing mode and its deployment in acting mode, first through the example of the constitution of a prescriptive apparatus in the context of 17th- and 18th-century Catholicism, then through the analysis of an ordinary theory developed in a treatise whose author was herself a singer: Yvette Guilbert. Two series of song-implementation trials are then examined: those of the characterization of the object and those of a given song's actual materialization. Finally, the 'chanson' object is approached from the standpoint of the performer's skills, by studying the pedagogical setting in which the art of song performance is handed down.
Lindborg, PerMagnus. "Le dialogue musicien-machine : Aspects des systèmes d'interactivité musicale". Licentiate thesis, Université de Paris IV Sorbonne, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-177670.
Texto completo
Lopez, Charles Carlos. "La convergence musicale entre les médias visuels et musicaux dans la création de la musique visuelle". Thesis, Paris 8, 2015. http://www.theses.fr/2015PA080142/document.
Texto completo
How can the musical convergence between sound and image, between audio and video, be achieved when working with digital technology? What is "musicable" in the visual domain? I suggest that musical convergence is achieved through the use of technological devices that make it possible to 1) establish a certain degree of interaction between the musical and visual media, and 2) manipulate visual materials through an ensemble of compositional operations that are coherent with a given musical paradigm. On this basis, we suggest that what is "musicable" in the visual domain is the ensemble of visual variables that can be manipulated during the compositional process, to the extent that they make musical sense. The research begins by identifying the historical context that set the conditions for the musical convergence between visual and musical media. We then present a multiple case study composed of 12 works which belong to different stages of the history of visual music. The study reveals the use of 41 methods to craft the musical convergence between visual and musical media. The final chapter presents an artistic demonstration of the use of the methods and concepts that emerged from our multiple case study. The conclusions present a synthesis of the results, a discussion of the interaction between theoretical and practical approaches, and my personal reflections on pathways for future research in the field.
Dias, Fernandes João Eduardo. "L’improvisation musicale électroacoustique : enjeux et problématiques du développement des technologies numériques". Thesis, Paris 8, 2019. http://www.theses.fr/2019PA080026.
Texto completo
This research/creation thesis is centred on the exploration of musicians' needs in the practice of electroacoustic music improvisation. This investigation was conducted through participant observation with different musical ensembles. It also presents the strategies adopted for the development of digital tools in the case of a newly created musical instrument. How does the practice of free improvisation in electroacoustic music work, and what knowledge is needed to achieve free improvisation? To answer these questions, several paths were taken: for example, considering the context of the performance and examining the musical and extramusical aspects surrounding the improvised action. Furthermore, this research deployed a study of the characteristics of a digital instrument that endeavours to possess the same flexibility as acoustic instruments. The "creation" component of this thesis takes the form of the development of a digital instrument (RJ). This instrument is implemented with a sound-sample recommendation system. It was used in improvisation settings with several ensembles, which form part of the corpus of data of this thesis. Lastly, this thesis seeks to characterize musical improvisation through the multiple interactions that occur during an improvised performance. The research therefore provides a practical and theoretical framework for the use and exploration of contemporary digital tools by musicians who practice collective free improvisation.
Mann, Patricia. "La sensibilité esthétique et la sensibilité à l'interaction sociale : deux nouvelles variables pour expliquer le comportement de fréquentation des concerts de musique classique". Dijon, 2000. http://www.theses.fr/2000DIJOE010.
Texto completoVeytizou, Julien. "Caractérisation des spécificités motrices d'utilisateurs en situation de handicap : application à la conception de systèmes personnalisables pour la pratique musicale instrumentale". Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENI069/document.
Texto completo
The inclusion of users and uses in product design remains a difficult aspect to address, especially when their characterization is very specific (as in the case of disabled people, or of situations of use in stressful environments). In this context, the purpose of this work is to contribute to a better consideration of users and uses through the implementation of tools and methods to characterize them. This characterization, integrated into the design process, helped to meet the needs of the AE2M Association (Ergonomic Adaptation of Musical Equipment), which aims to provide assistive technologies for people with physical impairments. These systems allow them to play musical instruments with the same level of independence as able-bodied people. In the context of this thesis, we firstly proposed a common conceptualization of "disability situation" in our context and highlighted the constraints on the success of this work. Secondly, we conducted a literature review on design approaches in the disability context, human-machine interfaces in the musical context, and means and methods for the analysis of users' motor specificities. This step allowed us to propose a generic design-process approach adapted to the context of designing assistive technology for people with physical impairments. This CARACTH method is inspired by the User-Centred Design methodology, into which we propose to insert: (1) a specific step for characterizing users' motor abilities and (2) a step for defining a modular product architecture. These proposals made it possible (3) to simplify the iteration phases within the design process, allowing a quick and effective personalization of the product (control and operative parts). Next, a set of experiments was conducted in the laboratory and in situ. They made it possible to propose and validate the relevance of the modular product architecture for facilitating the design of assistive devices for music practice, and also to propose a system named KinectLAB to measure and interpret a user's gestural capabilities. This KinectLAB system was tested and validated with professional physiotherapists at the Michallon Hospital in Grenoble. Finally, our CARACTH process was applied to design a customizable system adapted to the physical capabilities of a disabled user. We confirmed its successful integration into a use situation during a musical concert. We also studied the relevance of customization to user performance and workload.
Francoeur, Mikaël. "Enrichir la pratique des pianistes classiques : la contribution du public à la construction de sens dans un récital commenté interactif". Doctoral thesis, Université Laval, 2020. http://hdl.handle.net/20.500.11794/40129.
Texto completoThe relation between artist and audience in classical music is traditionally defined by a paradigm in whichthe arts professional is the exclusive maker of the art work’s meaning, and where the audience recognizes an artistic or scientific authority in the meaning he receives from the arts professional.This is contrary to a paradigmatic shift that has been observed in various areas, and where consumers claim an increased control over the meaning-making of the products they consume. This leads to the realization that a favoured practice of classical pianist, the lecture-recital, is ill suited to this paradigmatic shift, since meaning-making in a lecture-recital is the sole responsibility of the artist.Using a research-creation approach, we devised in Interactive Lecture-Recital (ILR) in which the audience was responsible for selecting textual andmusical material by emitting live votes through a digital interface.Listeners therefore had to make a coherent meaning from scattered cultural elements, all of which revolved around Québécois composer Léo Roy (1887–1974).After eachof thethree ILR’s that I presented, audience members completed a survey aiming to assess their concert experience and their meaning-making at the ILR.Audience members generally welcomed enthusiastically the opportunity to play a more active meaning-making role, although individuals presented a variety of attitudes pertaining to artist–audience cooperation.This type of reception brings an increased difficulty for the spectator. 
This prompts the artist to use adaptation strategies in order to enhance spectators' reception. An analysis of survey data reveals five parameters that an artist can vary to adapt an ILR to various publics, with the goal of providing an optimal experience to audience members: conceptual distance between modules of cultural material, addition or deletion of chronological data, completeness of the introduction, presentation of links between modules, and presentation of musical pieces. We argue that this kind of reception brings a new relevance to classical pianism by adapting it to a contemporary consumption paradigm.
Pacault, Daniel. "[Le sãs] de la musique : contradictions et paradoxes de la pratique musicale instituée : le cas d'un cours individuel de piano". Pau, 2008. http://www.theses.fr/2008PAUU1008.
Teaching music in academies has always been a recurrent issue. Is art meant to serve the learners' purpose or, conversely, do learners have to comply with the rules of art? There seems to be no clear-cut decision that could close this debate. Since they have to face such a contradiction, learners, teachers, and institutions seem to be in a paradoxical and antinomic position which makes the matter hard to settle. How can they solve such a dilemma? With this aim in view, we question music itself, using a play on words about the "sãs" of music, something between the sense and the essence of music. We study the sense of music through three different meanings: its meaning as regards language, its perspective as regards sociology, its sensation as regards phenomenology. Will this research, by giving some meaning to music, enable us to unravel the paradoxical situations generated by music? In a second part, we attempt to get as close as possible to music by laying its sound on paper, exactly as the written form "sãs" renders the sound shared by the words "sense" and "essence". For this purpose, we transcribe the recording of a piano lesson into a score and attempt to reveal what is being woven between a teacher and his pupil. How do these two persons hear the music through the song of the piano? Finally, we unexpectedly find that the musician's paradox can be solved through the practice of improvisation, the in-between essential to expression and interpretation. In a sense, the essence of music might be hidden in the depths of improvisation.
Molina, Villota Daniel Hernán. "Vocal audio effects : tuning, vocoders, interaction". Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS166.
This research focuses on the use of digital audio effects (DAFx) on vocal tracks in modern music, mainly pitch correction and vocoding. Despite their widespread use, there has not been enough discussion of how to improve autotune or of what makes a pitch modification more musically interesting. A taxonomic analysis of vocal effects is conducted, with examples of how they can preserve or transform vocal identity and of their musical use, particularly pitch modification. Furthermore, a compendium of technical-musical terms is developed to distinguish types of vocal tuning and cases of pitch correction. A graphical method for vocal pitch correction is then proposed; it is validated on theoretical pitch curves (supported by audio) and compared with a reference method. Although the vocoder is essential for pitch correction, there is a lack of descriptive and comparative groundwork on vocoding techniques. A sonic description of the vocoder as used for tuning is therefore proposed, employing four different systems: Antares, Retune, World, and Circe. A subjective psychoacoustic evaluation then compares the four systems in three cases: resynthesis at the original pitch, soft vocal correction, and extreme vocal correction. This evaluation seeks to understand the coloring each vocoder imposes (preservation of vocal identity) and the role of melody in extreme vocal correction. Furthermore, a protocol for the subjective evaluation of pitch-correction methods is proposed and implemented, comparing our DPW pitch-correction method with the ATA reference method. This study aims to determine whether there are perceptual differences between the systems and in which cases they occur, which is useful for developing new melodic-modification methods in the future.
Finally, the interactive use of vocal effects is explored, capturing hand movement with wireless sensors and mapping it to control effects that modify the perception of space and of the vocal melody.
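The distinction drawn above between soft and extreme correction can be illustrated with a minimal snap-to-semitone sketch. This is an illustrative scheme, not the DPW or ATA method evaluated in the thesis; the `strength` parameter is a hypothetical control for how far the contour is pulled toward the nearest note.

```python
import math

def correct_pitch(f0_hz, strength=0.8, a4=440.0):
    """Pull each F0 sample toward the nearest equal-tempered semitone.

    strength=0 leaves the contour untouched (no correction);
    strength=1 quantizes it fully (the 'extreme correction' case)."""
    corrected = []
    for f in f0_hz:
        midi = 69 + 12 * math.log2(f / a4)   # Hz -> fractional MIDI note
        target = round(midi)                  # nearest semitone
        shifted = midi + strength * (target - midi)
        corrected.append(a4 * 2 ** ((shifted - 69) / 12))
    return corrected

# A contour slightly flat of A4 is pulled fully onto 440 Hz
print(correct_pitch([430.0, 435.0, 438.0], strength=1.0))
```

Intermediate `strength` values retain some of the singer's natural pitch inflection, which is one way to make a correction sound less mechanical.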
Rousseaux, Francis. "Une contribution de l'intelligence artificielle et de l'apprentissage symbolique automatique à l'élaboration d'un modèle d'enseignement de l'écoute musicale". Phd thesis, Université Pierre et Marie Curie - Paris VI, 1990. http://tel.archives-ouvertes.fr/tel-00417579.
This is how the theme becomes an object of study and research: but from this perspective, it is necessary to take into account the state of the art in computer music and to listen to the needs expressed by musicians, in order to build on a genuine community of interests between the two disciplines.
In any case, music is an abstract object of which several representations exist, none of them complete or general, and each possessing specific properties. Moreover, these representations tend to evolve, to be born and to die according to musicians' needs, even if the sound representation remains essential and, by definition, inseparable from the abstract object: but it must be acknowledged that musical sound is not the only thing that evokes music, and that if humans feel the need to invent representations in order to better appropriate the musical phenomenon, it may be enriching to examine how this behavior transposes to machines.
One can certainly isolate one of these representations, translate it into software, and dedicate tools to it: this is how many computer systems approach music. But there is an approach more typical of artificial intelligence, which consists in trying to reach the abstract object through the whole set of its representations and their relations: for a computer system, showing intelligence in this context means exploiting this diversity and multiplicity of representations; it means being able to rely on a shifting reality and to move within a universe of abstractions.
But representations only take on meaning with those who communicate through them, with the activities they generate. One can then imagine a system that would constitute a true place of encounter, reflection, and creation, in a word of communication: for music is above all a medium of communication. But what is the nature of what could be communicated through such a system? For example, one could practice musical skills and experiment with new relations between representations, in a word appropriate the musical medium itself.
But then a system is needed that can bear witness to these encounters, or more precisely that learns to bear witness to them; this is our definition of learning in this context: a system is said to learn if it bears witness to, and possibly adapts to, a universe of musical communication. Without this requirement, the value of the communication is lost: the participants leave the system with their new riches, however successful the mediation may have been. The challenge for an apprentice system is therefore to return a testimony to musicians, teachers, and computer scientists so that they can benefit from it: of course, this testimony will be required to produce useful knowledge, rather than mere accumulations of events or historically ordered facts.
Thus, through open teaching, students would apprehend and experiment with the musical medium, enrich their knowledge, and obtain explanations. Teachers would create and organize this mediation and return pedagogical oracles to the system. But artificial intelligence and symbolic machine learning are sciences of explanation: the cognitive dimension must be brought into play in order to assess the adequacy of the meeting place; one must place oneself at the heart of the needs and concerns of teachers and students, attempting to formalize cognitive theories of music. One could even invent representations with cognitive and explanatory purposes: eventually, a system built on such a model might well be capable of making discoveries in this field itself.
Françoise, Jules. "Motion-sound Mapping By Demonstration". Electronic Thesis or Diss., Paris 6, 2015. http://www.theses.fr/2015PA066105.
Designing the relationship between motion and sound is essential to the creation of interactive systems. This thesis proposes an approach to the design of the mapping between motion and sound called Mapping-by-Demonstration: a framework for crafting sonic interactions from demonstrations of embodied associations between motion and sound. It draws upon existing literature emphasizing the importance of bodily experience in sound perception and cognition, and uses an interactive machine learning approach to build the mapping iteratively from user demonstrations. Drawing upon related work in animation, speech processing, and robotics, we propose to fully exploit the generative nature of probabilistic models, from continuous gesture recognition to continuous sound parameter generation. We studied several probabilistic models in the light of continuous interaction, examining both instantaneous models (Gaussian Mixture Models) and temporal models (Hidden Markov Models) for recognition, regression, and parameter generation. We adopted an interactive machine learning perspective with a focus on learning sequence models from few examples and on continuously performing recognition and mapping. The models either focus on movement or integrate a joint representation of motion and sound. In movement models, the system learns the association between the input movement and an output modality such as gesture labels or movement characteristics. In motion-sound models, motion and sound are modeled jointly, and the learned mapping directly generates sound parameters from input movements. We explored a set of applications and experiments addressing real-world problems in movement practice, sonic interaction design, and music, and proposed two approaches to movement analysis based on Hidden Markov Models and Hidden Markov Regression, respectively.
We showed, through a use case in Tai Chi performance, how the models help characterize movement sequences across trials and performers. We presented two generic systems for movement sonification: the first lets users craft hand-gesture control strategies for the exploration of sound textures, based on Gaussian Mixture Regression; the second exploits the temporal modeling of Hidden Markov Regression to associate vocalizations with continuous gestures. Both systems gave rise to interactive installations presented to a wide public, and we began investigating their potential for supporting gesture learning.
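The Gaussian Mixture Regression underlying the first sonification system can be sketched in a few lines. The scalar version below uses hand-picked joint-Gaussian components and hypothetical parameter values rather than components learned from user demonstrations; it only illustrates how an input motion feature is mapped to a sound parameter by conditioning each Gaussian on the input and blending the per-component predictions.

```python
import math

def gmr_predict(x, components):
    """Gaussian Mixture Regression: predict a sound parameter y from a
    motion feature x, given joint-Gaussian components over (x, y).

    Each component is (weight, mu_x, mu_y, var_x, cov_xy)."""
    resp, preds = [], []
    for w, mu_x, mu_y, var_x, cov_xy in components:
        # responsibility: likelihood of x under the component's marginal
        r = w * math.exp(-0.5 * (x - mu_x) ** 2 / var_x) \
            / math.sqrt(2 * math.pi * var_x)
        resp.append(r)
        # conditional mean: a local linear regression within the component
        preds.append(mu_y + cov_xy / var_x * (x - mu_x))
    total = sum(resp)
    return sum(r / total * p for r, p in zip(resp, preds))

# Hypothetical mapping: slow gestures -> low filter cutoff, fast -> high
comps = [(0.5, 0.2, 200.0, 0.01, 0.5),
         (0.5, 0.8, 2000.0, 0.01, 5.0)]
print(gmr_predict(0.2, comps))  # dominated by the first component
```

In the demonstration workflow described above, the components would instead be fitted by EM on recorded motion-sound pairs, and the same conditional-mean computation would run continuously during performance.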
Morciano, Lara. "Ecriture du son, du temps et de l'espace dans l'interaction entre instruments et dispositifs numériques synchrones". Thesis, Paris Sciences et Lettres (ComUE), 2018. http://www.theses.fr/2018PSLET038/document.
This research project studies the contemporary integration, in composition and performance, of technological and artistic tools corresponding to the state of the art in real-time interaction between instrumental production, digital sound production, and the production of spatio-temporal forms in the listening space. In particular, we study how this integration can in turn constitute a new modality of writing, in which a writing of sound, a writing of time, and a writing of space informed by technology merge coherently. Computer paradigms for time and interaction management, synchronization tools, analysis of sound and gestural flows, parameter control derived from the instrumental sound, research on instrumental timbre and its digital descriptors, and performer-computer interaction are key elements of this research and creation work. We focus on real-time interaction with intelligent computer devices in a particularly virtuosic writing with specific aspects of temporal and spatial construction, this hybrid situation in turn influencing the nature of the writing itself. The various themes of this exploration, such as the writing of sound, time, and space, are the starting point for exploring and developing, according to the nature of the various productions envisaged, possible links with other artistic disciplines.
Grobert, Julien. "L'effet de la congruence avec l'image d'une entreprise de deux facteurs atmosphériques (parfum et musique), sur la satisfaction et les réponses comportementales des individus : application au secteur bancaire". Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENG001/document.
As the banking sector will undergo major changes in the coming years due to a restructuring of its model, it seems relevant to understand whether sensory marketing can create added value for customers and companies in this context. The diffusion of a scent and/or music must nevertheless follow recommendations. Indeed, will a scent (or music) highly congruent with the brand image have different (stronger or weaker) effects on consumer satisfaction and behavioral responses than a scent (or music) with low congruency? This doctoral research investigates this issue in two stages. First, a qualitative study identified the brand's identity markers, allowing the creation of two types of scents (high vs. low congruency) and two types of music (high vs. low congruency). The second, quantitative stage was carried out in situ. Results showed that diffusing a scent with low congruency with the brand image leads to more favorable responses to the physical environment and, ultimately, to greater satisfaction and more favorable behavioral responses. Conversely, the diffusion of music leads overall to negative effects.
Vincent, Delphine. "Son et pratique artistique : sources et origines de la sculpture sonore jusqu'à la fin des années 60". Toulouse 2, 2009. http://www.theses.fr/2009TOU20002.
Western sculpture has always, or almost always, been immobile and silent. In the 1960s a new art form appeared in Europe that mixes sculpture with sound elements, sometimes accompanied by movement, that appeals to the participation of the viewer, to chance and to play, and that seems to cross the lines of artistic categories, recalling Romanticism and Richard Wagner's Gesamtkunstwerk. The aim of this thesis is to search for the sources and origins of sound sculpture. What mechanisms and influences pushed artists to introduce sound into their sculptures? Can we still really speak of sculpture? How can this practice be defined in relation to other contemporary artistic practices? How should the produced sounds be designated? The proposed approach is a historical one that favors two main tracks. On the one hand, the modern world, scientific progress, and the world of sound underwent unprecedented upheavals at the turn of the 20th century, making daily life a reservoir of noises, materials, and theories from which artists, painters, sculptors, and musicians drew inspiration. On the other hand, the interactions between music and the visual arts, an obvious source at the heart of sound sculpture and its origins, reached an unprecedented intensity in the history of art. Finally, sound sculpture is studied through the example of works by Tinguely, Schöffer, Takis, and Soto, which allow a clear definition of the Baschet sound structures. A further question is whether sound can be considered an artistic material, and what its contribution to sculpture is in freeing the main characteristics of this artistic practice.
Aceituno, Jonathan. "Direct and expressive interaction on the desktop : increasing granularity, extent, and dimensionality". Thesis, Lille 1, 2015. http://www.theses.fr/2015LIL10089/document.
Desktop and laptop personal computers enable knowledge work and creative activities by supporting direct and expressive interaction. But the feeling of directness often disappears when users perform complex actions: expressiveness comes at the cost of spatiotemporal separation, cognitive load, or complex operation sequences. This thesis proposes to address this problem by harnessing the unexploited capabilities of standard input devices and of familiar actions performed on them. We investigate how this enables direct increases in the granularity, extent, and dimensionality of user actions. First, we show that the granularity of pointer movements can be increased a hundredfold without impeding normal behavior if pointing transfer functions take into account device characteristics, user capabilities, and the manipulated data model, thus allowing subpixel interaction. This is limited by the useful resolution, the smallest displacement a user can reliably produce with a pointing device, for which we propose an experimental protocol. Second, we study the design space of a widely used technique, edge-scrolling, which extends dragging actions past a viewport edge by scrolling. We reverse-engineered 33 existing implementations and highlight usability problems through a survey and experiments. We also propose push-edge and slide-edge scrolling, two position-control techniques that perform comparably to rate control without its shortcomings. Third, we describe three ways of using a standard laptop as a musical instrument, allowing simultaneous multiparametric control of sound synthesis in real time, together with design considerations and examples of successful uses.
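The core idea behind subpixel interaction, letting a transfer function preserve device motion finer than one pixel, can be sketched as a gain stage that carries its fractional remainder forward instead of discarding it. This is an illustrative reconstruction under assumed parameters, not the transfer functions developed in the thesis.

```python
def make_pointer(gain=0.25):
    """Scale raw device counts by a low gain; emit whole-pixel moves and
    carry the sub-pixel remainder forward so fine motion is never lost."""
    acc = 0.0
    def move(counts):
        nonlocal acc
        acc += counts * gain
        pixels = int(acc)   # whole pixels to move the cursor
        acc -= pixels       # keep the fractional part for the next event
        return pixels
    return move

move = make_pointer(gain=0.25)
# Four one-count device steps are needed per on-screen pixel,
# but no motion is thrown away between events.
print([move(1) for _ in range(8)])  # -> [0, 0, 0, 1, 0, 0, 0, 1]
```

The same accumulator, applied to the manipulated data model rather than to screen pixels, is what allows a value to be adjusted at a finer granularity than the cursor's visible one-pixel steps.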
Vesac, Jean-Ambroise. "Approche de l'être-ensemble numérique dans les réalités mixtes : ScnVir, un dispositif de simulation située artistique d'interaction de groupe". Doctoral thesis, Université Laval, 2018. http://hdl.handle.net/20.500.11794/31884.
This research associates immersive digital mobility and interactive music composition. We develop an approach to creation that encourages engagement and prolongs the experience of an artistic installation in an outdoor public space. We are interested in digital togetherness in mixed reality, where physical and virtual presence overlap and hybridize. Continuing and renewing ancient participatory cultural traditions, our creation, ScnVir, offers a virtual scene of mixed reality and interactive design that promotes the emergence of participatory artistic experiences. Our hypothesis is that digital togetherness, through the clues communicated by group agency, favors engagement and extends the experience of a digital art installation. This assumes that digital togetherness emerges by providing clues associated with this state, and that situated simulation is a technology that makes it possible to materialize such clues within an artistic installation. To carry out this research, we established an interdisciplinary research-creation approach spanning digital art, interaction design, and geomatics, with conceptual, methodological, and artistic objectives. We determined and validated the phases of digital togetherness (being-there, hymersion, and group interaction) and its indices in situated simulation (proximity, sociability, coordination of movements, success of a mission). For the realization of the project, we designed interactivity based on simple movements using these indices. We developed a creative methodology and created an artistic work applying the phases of digital togetherness in an installation in situated simulation. To validate our research, we developed a hybrid methodology combining quantitative and qualitative analysis.
The results show that all participants reached the first phase of digital togetherness, being-there, and that all acceded, at least partially, to the second phase, hymersion. The success of the second phase validates that our device promotes participants' engagement in the experience, because hymersion requires engagement. For the third phase, involving group interaction, our results highlight many indices of digital togetherness and a mixed success rate (28%) for the hidden mission, which allows us to affirm that some forms of digital togetherness emerged from our installation. However, not all participants succeeded in the hidden mission that served as our quantitative validation of the emergence of digital togetherness, because the experimental conditions, the implementation of the interaction design, and our validation process had weaknesses. On the other hand, the presence of numerous indices of digital togetherness attests that situated simulation is an effective technology for implementing group agency. These results demonstrate that implementing digital togetherness within an artistic installation is possible, provided certain conditions are respected: the phases of digital togetherness must be met; the installation must allow the communication of behavioral indices through group agency; and group interactions must be mediated in the interface by appropriate media content. The observed results establish that our installation offers a rather long experience and thus favors prolonged engagement with a simple gestural-interaction device. Our results therefore allow us to affirm that digital togetherness promotes engagement and prolongs participation in a digital art installation in public space.
From an artistic point of view, our creative process is satisfying because it offers events that are both unpredictable and significant, connected to the user experience and adapted to musical and artistic expression. The presentation of our project at a recognized festival validates the interest of our creative approach for digital art. The prospects for artistic development are considerable.
Allombert, Antoine. "Aspects temporels d’un système de partitions musicales interactives pour la composition et l’exécution". Thesis, Bordeaux 1, 2009. http://www.theses.fr/2009BOR13680/document.
Wofford, Timothy. "Study of the interaction between the musician and the instrument. Application to the playability of the cello". Thesis, Sorbonne université, 2018. http://www.theses.fr/2018SORUS336.
We attempt to validate the link between bridge mobility and the perception of playability by observing whether the musician perceives differences in Schelleng's upper and lower bow-force limits for notes other than the wolf note. We also look for other factors relevant to the musician's perception of playability by studying the interactions between player and instrument during an evaluation and performance task. Unlike previous approaches, we observe all parts of the playability feedback loop, including the control parameters, vibro-acoustical measurements taken with an impact hammer, the response of the instrument to the control parameters, and the musician's comments about perceptual properties. We find that geometric features of the cello set-up (in particular, the bridge curvature) may lead the musician to use less force than usual in order to avoid accidentally touching adjacent strings. This results in longer transients and different timbres, which are perceived in ways not related to geometry or control parameters. Sometimes unconscious adaptations to the geometry affect the sounds produced and the subsequent perception of the instrument; a conscious effort is needed to overcome these natural behaviors, compensate for the set-up geometry, and arrive at a stable evaluation.
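The Schelleng limits mentioned above admit a compact formulation. The sketch below uses one common textbook form (maximum bow force set by the stick-slip friction difference, minimum force set by the resistance seen at the bridge, the inverse of its mobility) with illustrative cello-like values; it is a sketch of the classical model, not the measurements or analysis of the thesis.

```python
def schelleng_limits(z_string, v_bow, beta, mu_s, mu_d, r_bridge):
    """Return (f_min, f_max) bow-force limits in newtons.

    z_string : characteristic impedance of the string (kg/s)
    v_bow    : bow velocity (m/s)
    beta     : relative bow-bridge distance (fraction of string length)
    mu_s/mu_d: static and dynamic friction coefficients
    r_bridge : resistance at the bridge (inverse of bridge mobility)
    """
    dmu = mu_s - mu_d
    f_max = 2.0 * z_string * v_bow / (beta * dmu)
    f_min = z_string ** 2 * v_bow / (2.0 * beta ** 2 * r_bridge * dmu)
    return f_min, f_max

# Illustrative (assumed) values: a higher bridge mobility, i.e. a lower
# r_bridge, raises f_min and narrows the playable force range -- the
# proposed link between bridge mobility and perceived playability.
f_min, f_max = schelleng_limits(0.55, 0.2, 0.08, 0.8, 0.3, 50.0)
print(f_min, f_max)
```

Note the different scaling with bow-bridge distance: f_max grows as 1/beta but f_min as 1/beta², so bowing closer to the bridge widens the range of usable forces in this model.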
Araújo, Costa Fabiano. "Poétiques du « Lieu Interactionnel-Formatif » : sur les conditions de constitution et de reconnaissance mutuelle de l’expérience esthétique musicale audiotactile (post-1969) comme objet artistique". Thesis, Paris 4, 2016. http://www.theses.fr/2016PA040107.
This study consists of a hermeneutic approach to the phenomenon of musical interaction as aesthetic experience, within the inter- and trans-cultural contexts of post-1969 jazz. A critical review of traditional musicological analyses of interaction in jazz was an essential step toward proposing a formalization of the hermeneutical concept of the "Interactional-Formative Space" (IFS), a set of conditions that institutes the interactional aesthetic experience as a dynamic process of constitution and mutual recognition of the artistic rules that, regardless of the poetic agenda adopted in advance, can lead an improvised musical performance to become a work of art. Two key approaches underpin this conception, discussed in the first part of this work: the formativity theory of the Italian philosopher Luigi Pareyson, and Vincenzo Caporaletti's phenomenology and taxonomy of musical experience as audiotactile formativity. Our main purpose is to build an in-depth view of certain characteristics of Pareysonian formativity, particularly the isomorphism between the person and the work in process, alongside the cultural-cognitive-anthropological design of the audiotactile formativity theory; this results in a philological contribution to Caporaletti's conceptualization and opens the way for a confrontation between these two theoretical systems and the general problem of interaction. Finally, in the second part, we propose a formalization of the interpersonal, contextual, and systemic dimensions of the IFS, along with a series of three analytical essays.
Ghomi, Emilien. "Designing expressive interaction techniques for novices inspired by expert activities : the case of musical practice". Phd thesis, Université Paris Sud - Paris XI, 2012. http://tel.archives-ouvertes.fr/tel-00839850.
Texto completoAntoniadis, Pavlos. "Embodied navigation of complex piano notation : rethinking musical interaction from a performer’s perspective". Thesis, Strasbourg, 2018. http://www.theses.fr/2018STRAC007/document.
This thesis proposes a performer-specific paradigm of embodied interaction with complex piano notation. This paradigm, which I term embodied navigation, extends and even confronts the traditional paradigm of textual interpretation. The latter assumes a linear and hierarchical process, whereby internalized understanding of the musical text is considered a prerequisite of instrumental technique on the way to personal interpretation. Instead, I advocate a dynamic, non-linear, embodied, and external processing of music notation. At a second stage, the proposed paradigm serves as the basis for developing methodologies and customized tools for a range of applications, including performance analysis, embodied interactive learning, contemporary composition, free improvisation, and piano pedagogy.