Academic literature on the topic "Interactive musique"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference papers, and other scholarly sources on the topic "Interactive musique".
Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication in PDF format and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Interactive musique"
Ha, Byeongwon. "Nam June Paik’s Unpublished Korean Article and His Interactive Musique Concrète Projects". Leonardo Music Journal 29 (December 2019): 93–96. http://dx.doi.org/10.1162/lmj_a_01071.
Harkin, Treasa. "Interactive Scores at the Irish Traditional Music Archive (ITMA)". Fontes Artis Musicae 71, no. 3 (July 2024): 190–94. http://dx.doi.org/10.1353/fam.2024.a939060.
Palacio-Quintin, Cléo. "Composition interactive : du geste instrumental au contrôle de l’électronique dans Synesthesia 4 : Chlorophylle". Circuit 22, no. 1 (April 30, 2012): 25–40. http://dx.doi.org/10.7202/1008966ar.
Bresson, Jean, Fabrice Guédy, and Gérard Assayag. "Musique Lab Maquette : approche interactive des processus compositionnels pour la pédagogie musicale". Sciences et Technologies de l'Information et de la Communication pour l'Éducation et la Formation 13, no. 1 (2006): 65–96. http://dx.doi.org/10.3406/stice.2006.927.
Castanet, Pierre Albert. "Le médium mythologique du Rock’n roll et la musique contemporaine". Articles 32, no. 1-2 (September 9, 2013): 83–116. http://dx.doi.org/10.7202/1018580ar.
Stévance, Sophie. "La Dream House ou l’idée de la musique universelle". Circuit 17, no. 3 (February 28, 2008): 87–92. http://dx.doi.org/10.7202/017595ar.
Imberty, Michel. "Formes de la répétition et formes des affects du temps dans l'expression musicale". Musicae Scientiae 1, no. 1 (March 1997): 33–62. http://dx.doi.org/10.1177/102986499700100104.
Aristides, Marcos, Romain Talou, Christine Morard, and Silvia Del Bianco. "Développement interdisciplinaire d’un support numérique à l’interface tangible pour les cours collectifs de musique : l’expérience dans une classe de rythmique Jaques-Dalcroze". Journal de recherche en éducation musicale 13-1 (2022): 55–69. https://doi.org/10.4000/134yz.
Trivedi, Harsh, Niranjan Balasubramanian, Tushar Khot, and Ashish Sabharwal. "♫ MuSiQue: Multihop Questions via Single-hop Question Composition". Transactions of the Association for Computational Linguistics 10 (2022): 539–54. http://dx.doi.org/10.1162/tacl_a_00475.
Delerce, Christophe. "Balade autour des plantes sauvages urbaines". Nouvelle revue de psychosociologie N° 37, no. 1 (May 7, 2024): 91–101. http://dx.doi.org/10.3917/nrp.037.0091.
Texto completoTesis sobre el tema "Interactive musique"
Hadjeres, Gaëtan. "Modèles génératifs profonds pour la génération interactive de musique symbolique". Thesis, Sorbonne université, 2018. http://www.theses.fr/2018SORUS027/document.
This thesis discusses the use of deep generative models for symbolic music generation. We focus on devising interactive generative models able to create new creative processes through a fruitful dialogue between a human composer and a computer. Recent advances in artificial intelligence have led to the development of powerful generative models able to generate musical content without human intervention. I believe that this practice cannot thrive in the future, since human experience and human appreciation are at the crux of artistic production. However, the need for flexible and expressive tools which could enhance content creators' creativity is patent; the development and potential of such novel A.I.-augmented computer music tools are promising. In this manuscript, I propose novel architectures that put artists back in the loop. The proposed models share the common characteristic that they are devised so that a user can control the generated musical content in a creative way. In order to create user-friendly interaction with these interactive deep generative models, user interfaces were developed. I believe that new compositional paradigms will emerge from the possibilities offered by these enhanced controls. This thesis ends with the presentation of genuine musical projects, such as concerts featuring these new creative tools.
Bedoya, Ramos Daniel. "Capturing Musical Prosody Through Interactive Audio/Visual Annotations". Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS698.
The proliferation of citizen science projects has advanced research and knowledge across disciplines in recent years. Citizen scientists contribute to research through volunteer thinking, often by engaging in cognitive tasks using mobile devices, web interfaces, or personal computers, with the added benefit of fostering learning, innovation, and inclusiveness. In music, crowdsourcing has been applied to gather various structural annotations. However, citizen science remains underutilized in studies of musical expressiveness. To bridge this gap, we introduce a novel annotation protocol to capture musical prosody, the acoustic variations performers introduce to make music expressive. Our top-down, human-centered method prioritizes the listener's role in producing annotations of prosodic functions in music. This protocol provides a citizen science framework and experimental approach for carrying out systematic and scalable studies on the functions of musical prosody. We focus on the segmentation and prominence functions, which convey structure and affect. We implement this annotation protocol in CosmoNote, a web-based, interactive, and customizable software platform conceived to facilitate the annotation of expressive music structures. CosmoNote gives users access to visualization layers, including the audio waveform, the recorded notes, extracted audio attributes (loudness and tempo), and score features (harmonic tension and other markings). The annotation types comprise boundaries of varying strengths, regions, comments, and note groups. We conducted two studies aimed at improving the protocol and the platform. The first study examines the impact of co-occurring auditory and visual stimuli on segmentation boundaries. We compare differences in boundary distributions derived from cross-modal (auditory and visual) vs. unimodal (auditory or visual) information. Distances between unimodal-visual and cross-modal distributions are smaller than between unimodal-auditory and cross-modal distributions. On the one hand, adding visuals accentuates crucial information and provides cognitive scaffolding for accurately marking boundaries at the starts and ends of prosodic cues, although it sometimes diverts the annotator's attention away from specific structures. On the other hand, removing the audio impedes the annotation task by hiding subtle, relied-upon cues. Although visual cues may sometimes overemphasize or mislead, they are essential in guiding boundary annotations of recorded performances, often improving the aggregate results. The second study uses all of CosmoNote's annotation types and analyzes how annotators, receiving either minimal or detailed protocol instructions, approach annotating musical prosody in a free-form exercise. We compare the quality of annotations between participants who are musically trained and those who are not. The citizen science component is evaluated in an ecological setting where participants are fully autonomous in a task where time, attention, and patience are valued. We present three methods based on common annotation labels, categories, and properties to analyze and aggregate the data. Results show convergence in the annotation types and descriptions used to mark recurring musical elements across experimental conditions and musical abilities. We propose strategies for improving the protocol, data aggregation, and analysis in large-scale applications.
This thesis contributes to representing and understanding performed musical structures by introducing an annotation protocol and platform, tailored experiments, and aggregation/analysis methods. The research shows the importance of balancing the collection of easier-to-analyze datasets with richer content that captures complex musical thinking. Our protocol can be generalized to studies on performance decisions to improve the comprehension of expressive choices in musical performances.
Petit, Bertrand. "Temps et durée : de la programmation réactive synchrone à la composition musicale". Thesis, Université Côte d'Azur, 2020. http://www.theses.fr/2020COAZ4024.
This thesis raises the question of the relationship between the expressiveness of computer programming tools and musical creation, from the perspective of the expression of "time". We base our reflection on time on the Bergsonian concept of "duration", which exists only in the consciousness of the perceiver, as opposed to objective time, the time of clocks, which is independent of consciousness. We have chosen to follow the tradition of written music, i.e. music preconceived in a fixed form that allows the composer to evaluate the aesthetic content of his work. Among the various possibilities for implementing duration, we have oriented ourselves towards interactive music, i.e. music that is controlled on stage, in part by the audience. We developed a method of composition based on three concepts: “basic musical elements”, which are short musical phrases; “groups” of these basic elements; and “orchestrations”. The groups are made available to the audience, who participate in the realization of the music by selecting one or another basic element or by responding to choices proposed to them. The way in which the groups are made available to the audience constitutes the “orchestration”, which is implemented by means of the synchronous reactive language HipHop.js. This language combines the programming of complex automata, suited to our orchestration concept, with web programming, particularly suited to the implementation of large-scale interactions. We fed this research with various experiments and musical productions using a software platform called Skini.
Scurto, Hugo. "Designing With Machine Learning for Interactive Music Dispositifs". Electronic Thesis or Diss., Sorbonne université, 2019. http://www.theses.fr/2019SORUS356.
Music is a cultural and creative practice that enables humans to express a variety of feelings and intentions through sound. Machine learning opens many prospects for designing human expression in interactive music systems. Yet, as a computer science discipline, machine learning remains mostly studied from an engineering sciences perspective, which often excludes humans and musical interaction from the loop of the created systems. In this dissertation, I argue in favour of designing with machine learning for interactive music systems. I claim that machine learning must first and foremost be situated in human contexts to be researched and applied to the design of interactive music systems. I present four interdisciplinary studies that support this claim, using human-centred methods and model prototypes to design and apply machine learning to four situated musical tasks: motion-sound mapping, sonic exploration, synthesis exploration, and collective musical interaction. Through these studies, I show that model prototyping helps envision designs of machine learning with human users before engaging in model engineering. I also show that the final human-centred machine learning systems not only help humans create static musical artifacts but also support dynamic processes of expression between humans and machines. I call these processes co-expression: musical interaction between humans, who may have an expressive and creative impetus regardless of their expertise, and machines, whose learning abilities may be perceived as expressive by humans. In addition to these studies, I present five applications of the created model prototypes to the design of interactive music systems, which I publicly demonstrated in workshops, exhibitions, installations, and performances. Using a reflexive approach, I argue that the musical contributions enabled by such a design practice with machine learning may ultimately complement the scientific contributions of human-centred machine learning. I claim that music research can thus be led through dispositif design, that is, through the technical realization of aesthetically functioning artifacts that challenge cultural norms in computer science and music.
Toro-Bermudez, Mauricio. "Structured interactive scores : from a structural description of a multimedia scenario to a real-time capable implementation with formal semantics". Thesis, Bordeaux 1, 2012. http://www.theses.fr/2012BOR14588/document.
Technology has shaped the way in which we compose and produce music. Notably, the invention of microphones and computers pushed the development of new music styles in the 20th century. In fact, several artistic domains have benefited from such technological developments; for instance, experimental music, non-linear multimedia, electroacoustic music, and interactive multimedia. In this dissertation, we focus on interactive multimedia. Interactive multimedia deals with the design of scenarios where multimedia content and interactive events are handled by computer programs. Examples of such scenarios are multimedia art installations, interactive museum exhibitions, some electroacoustic music pieces, and some experimental music pieces. Unfortunately, most interactive multimedia scenarios are based on informal specifications, so it is not possible to formally verify properties of such systems. We advocate the need for a general and formal model. Interactive scores is a formalism to describe interactive multimedia scenarios. We propose new semantics for interactive scores based on timed event structures. With such a semantics, we can specify properties of the system, in particular properties about traces, which are difficult to specify as constraints. In fact, constraints are an important part of the semantic model of interactive scores because the formalism is based on temporal constraints among the objects of the scenario. We also present an operational semantics of interactive scores based on the non-deterministic timed concurrent constraint (ntcc) calculus, and we relate this semantics to the timed event structures semantics. With the operational semantics, we formally describe the behavior of a score whose temporal object durations can be arbitrary integer intervals. The operational semantics is obtained from the timed event structures semantics of the score. To provide such a translation, we first define the normal form of a timed event structure, in which events related by zero-duration delays are collapsed into a single one. We also define the notion of dispatchable timed event structures: event structures whose constraint graph can be dispatched by relying only on local propagation. We believe that the operational semantics in ntcc offers some advantages over existing Petri net semantics for interactive scores; for instance, the duration of the temporal objects can be arbitrary integer intervals, whereas in previous models of interactive scores such durations can only be intervals representing equalities and inequalities. In this dissertation, we also introduce two extensions of the formalism of interactive scores: (1) one to handle audio processing using the Functional AUdio STream (Faust) language, and (2) another to handle conditional branching, allowing designers to specify choices and loops. For the first extension, we present a timed event structures semantics and ideas on how to define an operational semantics. For the second extension, we present an implementation and results comparing the average relative jitter of an implementation of an arpeggio based on Karplus-Strong synthesis with respect to existing implementations of Karplus-Strong written in Pure Data. We also define an XML file format for interactive scores and for the conditional branching extension. A file format is crucial to assure the persistence of scores. Ntcc models of interactive scores are executed using Ntccrt, a real-time capable interpreter for ntcc. They can also be verified automatically using ntccMC, a bounded-time, automata-based model checker for ntcc which we introduce in this dissertation. Using ntccMC, we can verify properties expressed in constraint linear-time logic. Ntcc has been used in the past not only for multimedia interaction models, but also for systems biology, security protocols, and robots.
Cavez, Vincent. "Designing Pen-based Interactions for Productivity and Creativity". Electronic Thesis or Diss., université Paris-Saclay, 2025. http://www.theses.fr/2025UPASG013.
Designed with the mouse and keyboard in mind, productivity tools and creativity support tools are powerful on desktop computers, but their structure becomes an obstacle when brought to interactive surfaces supporting pen and touch input. Indeed, the opportunities provided by the pen for precision and expressivity have been demonstrated in the HCI literature, but productivity and creativity tools require a careful redesign that leverages these unique affordances, so as to benefit from the intuitiveness they offer while keeping the advantages of structure. This delicate articulation between pen and structure has been overlooked in the literature. My thesis work focuses on this articulation, with two use cases, to answer the broad research question: “How to design pen-based interactions for productivity and creativity on interactive surfaces?” I argue that productivity depends on efficiency while creativity depends on both efficiency and flexibility, and I explore interactions that promote these two dimensions. My first project, TableInk, explores a set of pen-based interaction techniques designed for spreadsheet programs and contributes guidelines to promote efficiency on interactive surfaces. I first conduct an analysis of commercial spreadsheet programs and an elicitation study to understand what users can do and what they would like to do with spreadsheets on interactive surfaces. Informed by these, I design interaction techniques that leverage the opportunities of the pen to mitigate friction and enable more operations by direct manipulation on and through the grid. I prototype these interaction techniques and conduct a qualitative study with information workers who performed a variety of spreadsheet operations on their own data. The observations show that using the pen to bypass the structure is a promising means of promoting efficiency with a productivity tool. My second project, EuterPen, explores a set of pen-based interaction techniques designed for music notation programs and contributes guidelines to promote both efficiency and flexibility on interactive surfaces. I first conduct a series of nine interviews with professional composers in order to take a step back and understand both their thought process and their work process with their current desktop tools. Building on this dual analysis, I derive guidelines for the design of features with the potential to promote both efficiency with frequent or complex operations and flexibility in the exploration of ideas. I then act on these guidelines by engaging in an iterative design process for interaction techniques that leverage the opportunities of the pen: two prototyping phases, a participatory design workshop, and a final series of interviews with eight professional composers. The observations show that, on top of using the pen to leverage the structure for efficiency, using its properties to temporarily break the structure is a promising means of promoting flexibility with a creativity support tool. I conclude this manuscript by discussing several ways to interact with structure, presenting a set of guidelines to support the design of pen-based interactions for productivity and creativity tools, and elaborating on the future applications this thesis opens.
Nika, Jérôme. "Guiding Human-Computer Music Improvisation : introducing Authoring and Control with Temporal Scenarios". Electronic Thesis or Diss., Paris 6, 2016. http://www.theses.fr/2016PA066141.
This thesis focuses on the introduction of authoring and controls in human-computer music improvisation through the use of temporal scenarios to guide or compose interactive performances, and addresses the dialectic between planning and reactivity in interactive music systems dedicated to improvisation. An interactive system dedicated to music improvisation generates music on the fly, in relation to the musical context of a live performance. We focus here on pulsed and idiomatic music relying on a formalized and temporally structured object, for example a harmonic progression in jazz improvisation. In the same way, the models and architecture we developed rely on a formal temporal structure. This thesis thus presents: a music generation model guided by a "scenario" introducing anticipatory behaviors; an architecture combining this anticipation with reactivity using mixed static/dynamic scheduling techniques; an audio rendering module for live re-injection of captured material in synchrony with a non-metronomic beat; and a framework for composing improvised interactive performances at the "scenario" level. This work fully integrated frequent interactions with expert musicians into the iterative design of the models and architectures. The latter are implemented in the interactive music system ImproteK, which was used on various occasions during live performances with improvisers. During these collaborations, work sessions were combined with listening sessions and interviews to gather the many judgments expressed by the musicians, in order to validate and refine the scientific and technological choices.
Bouche, Dimitri. "Processus compositionnels interactifs : une architecture pour la programmation et l'exécution des structures musicales". Thesis, Paris 6, 2016. http://www.theses.fr/2016PA066533/document.
This thesis aims at designing a computer system enabling the computation of musical structures, their presentation and handling on the compositional side, and their interactive rendering. It is a study at the crossroads of several computer science research fields: discrete systems modeling, scheduling, software design, and human-computer interfaces. We propose an architecture in which editing a program can affect its output, including during the rendering phase, while preserving the compositional benefits of a deferred-time approach. Compositions are therefore considered as continually running programs, in which computation and rendering mechanisms are interleaved. We introduce new tools and interfaces to arrange their execution through time thanks to dynamic temporal scenario scripting, which we call meta-composing. The different results described in this manuscript are implemented in the computer-aided composition environment OpenMusic.
Books on the topic "Interactive musique"
Wanderley, Marcelo, ed. New digital musical instruments: Control and interaction beyond the keyboard. Middleton, Wis: A-R Editions, 2006.
Ontario. Esquisse de cours 12e année: Musique amu4m cours préuniversitaire. Vanier, Ont: CFORP, 2002.
Ontario. Esquisse de cours 12e année: Sciences de l'activité physique pse4u cours préuniversitaire. Vanier, Ont: CFORP, 2002.
Ontario. Esquisse de cours 12e année: Technologie de l'information en affaires btx4e cours préemploi. Vanier, Ont: CFORP, 2002.
Ontario. Esquisse de cours 12e année: Études informatiques ics4m cours préuniversitaire. Vanier, Ont: CFORP, 2002.
Ontario. Esquisse de cours 12e année: Mathématiques de la technologie au collège mct4c cours précollégial. Vanier, Ont: CFORP, 2002.
Ontario. Esquisse de cours 12e année: Sciences snc4m cours préuniversitaire. Vanier, Ont: CFORP, 2002.
Ontario. Esquisse de cours 12e année: English eae4e cours préemploi. Vanier, Ont: CFORP, 2002.
Ontario. Esquisse de cours 12e année: Le Canada et le monde: une analyse géographique cgw4u cours préuniversitaire. Vanier, Ont: CFORP, 2002.
Ontario. Esquisse de cours 12e année: Environnement et gestion des ressources cgr4e cours préemploi. Vanier, Ont: CFORP, 2002.
Buscar texto completoCapítulos de libros sobre el tema "Interactive musique"
Christodoulou, Anna-Maria. "Exploring the Electroacoustic Music History Through Interactive Sonic Design". In Current Research in Systematic Musicology, 259–70. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-57892-2_14.
Jeandemange, Thibault. "Le fonctionnement de la musique dans les clips de campagne électoraux". In Corpus audiovisuels, 25–40. Editions des archives contemporaines, 2022. http://dx.doi.org/10.17184/eac.5698.
Kulezic-Wilson, Danijela. "Musicalized Sound Design and the Erotics of Cinema". In Sound Design is the New Score, 89–126. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780190855314.003.0004.
Preti, Costanza, and Helen Shoemark. "Section Introduction". In The Oxford Handbook of Early Childhood Learning and Development in Music, 631–32. Oxford University Press, 2023. http://dx.doi.org/10.1093/oxfordhb/9780190927523.013.39.
Campion, Edmund. "Spectral Moments". In The Oxford Handbook of Spectral Music, C42P1–C42P17. Oxford University Press, 2023. http://dx.doi.org/10.1093/oxfordhb/9780190633547.013.42.
Feldman, Walter. "The Position of Music Within the Mevleviye". In From Rumi to the Whirling Dervishes, 135–61. Edinburgh University Press, 2022. http://dx.doi.org/10.3366/edinburgh/9781474491853.003.0007.
Texto completoActas de conferencias sobre el tema "Interactive musique"
Saito, Yuri, and Takayuki Itoh. "MusiCube: A Visual Interface for Music Selection Featuring Interactive Evolutionary Computing". In 2011 15th International Conference on Information Visualisation (IV). IEEE, 2011. http://dx.doi.org/10.1109/iv.2011.78.