Dissertations / Theses on the topic 'Doctor of Musical Arts'

To see the other types of publications on this topic, follow the link: Doctor of Musical Arts.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Doctor of Musical Arts.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Severn, Edwin Philip. "Doctor of musical arts in performance critical commentary." Thesis, University of Salford, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.537556.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Cameron-Caluori, George. "Philosophy and musical criticism." Thesis, University of Ottawa (Canada), 1988. http://hdl.handle.net/10393/5314.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Métois, Eric. "Musical sound information : musical gestures and embedding synthesis." Thesis, Massachusetts Institute of Technology, 1997. http://hdl.handle.net/1721.1/29125.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Horowitz, Damon Matthew. "Representing musical knowledge." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/61530.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Porter, Alastair. "Evaluating musical fingerprinting systems." Thesis, McGill University, 2013. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=117191.

Full text
Abstract:
Audio fingerprinting is a process that uses computers to analyse small clips of music recordings to answer a common question that people who listen to music often ask: "What is the name of that song I hear?" Audio fingerprinting systems identify musical content in audio and search a reference database for recordings that contain the same musical features. These systems can find matching recordings even when the query has been recorded in a public space and contains added noise. Different audio fingerprinting algorithms are better at identifying different types of queries, for example, queries that are short or have a large amount of noise present in the signal. There are few comprehensive comparisons available in the literature that compare the retrieval accuracy of fingerprinting systems across a wide range of queries. This thesis presents an overview of the historical developments in audio fingerprinting, including an analysis of three state-of-the-art audio fingerprinting algorithms. The thesis introduces factors that must be considered when performing a comparative evaluation of many fingerprinting algorithms, and presents a new evaluation framework that has been developed to address these factors. The thesis contributes the results of a large-scale comparison between three audio fingerprinting algorithms, with an analysis recommending which algorithms should be used to identify music queries recorded in different situations.
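For readers unfamiliar with the technique under evaluation, the following minimal sketch (in Python) illustrates one common landmark-style fingerprinting scheme of the general kind compared in this thesis. It is not code from the thesis; the function name, peak-picking thresholds, and parameters are assumptions chosen for illustration.

# Illustrative sketch only, not the thesis implementation: pick spectrogram
# peaks and pair them into (f1, f2, dt) hashes that a reference database
# of known recordings could index and look up.
import numpy as np
from scipy import signal

def fingerprint(samples, sr=11025, fan_out=5):
    """Return a list of (hash, frame) landmarks for one audio clip."""
    _freqs, _times, spec = signal.spectrogram(samples, fs=sr, nperseg=1024, noverlap=512)
    log_spec = np.log(spec + 1e-10)
    peaks = []  # (frequency bin, time frame) of prominent spectral peaks
    for frame in range(log_spec.shape[1]):
        col = log_spec[:, frame]
        idx, _ = signal.find_peaks(col, distance=10)
        peaks.extend((int(i), frame) for i in idx if col[i] > np.median(col) + 3.0)
    hashes = []  # pair each anchor peak with a few later peaks
    for k, (f1, t1) in enumerate(peaks):
        for f2, t2 in peaks[k + 1:k + 1 + fan_out]:
            dt = t2 - t1
            if 0 < dt <= 64:
                hashes.append(((f1, f2, dt), t1))
    return hashes

Matching a noisy query then amounts to counting how many of its hashes agree, at a consistent time offset, with the hashes stored for each reference recording.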
APA, Harvard, Vancouver, ISO, and other styles
6

Troyer, Akito van. "Score instruments : a new paradigm of musical instruments to guide musical wonderers." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/120882.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, February 2018.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 167-190).
Advancements in technology have made musical instruments, especially electronic instruments, accessible to the masses. As a result, music-making has become more widespread and convenient. However, the blackboxing practices of commercial Digital Musical Instruments (DMIs) have conditioned many users to produce only specific styles of music. Furthermore, as many of these commercial instruments produce sound through loudspeakers, rather than the body of the instrument, players lose the physical and tactile connection to sound and music. Consequently, these DMIs inhibit understanding of the relationship between musicality and our everyday physical world, and cut players off from exploring a more extensive range of musical possibilities. Despite the multiplication of music-making tools, music-making practices still operate on the same principles. The production of music requires instruments to generate organized physical sound energies that follow the schema of a score. This dissertation studies a new class of Interactive Music Systems (IMSs) called Score Instruments that embed both instrument and score into a single unified interface. Score Instruments reopen the range of possibilities offered by everyday sounds and objects as musical bricolage tools to bring players into a personalized, guided, and open-ended use of the instrument. Players of Score Instruments are called Musical Wonderers, as the instruments encourage them to focus on exploration to build their own musical language, rather than on the technically correct realization of music. The dissertation describes the concept of Score Instruments. Two instances of Score Instruments demonstrate how the techniques and criteria translate into specific IMSs. City Symphonies is a massive musical collaboration platform that encourages players to listen to their cities and create music with environmental sounds. MM-RT is a tabletop tangible musical instrument that employs electromagnetic actuators and small permanent magnets to physically induce sounds with found objects. Both projects exemplify how Score Instruments can simultaneously stimulate open creativity and provide meaningful direction and constraints that guide users to learn underlying principles about music and the physical world. The design investigations and historical perspective of this dissertation offer a future of music-making practice that is based on exploration and designed to broaden the definition and variety of music.
by Akito van Troyer.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
7

Albawardy, Reema. "Costume Designs for "Urinetown: The Musical"." Thesis, The George Washington University, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=1599645.

Full text
Abstract:

Urinetown: The Musical by Greg Kotis and Mark Hollman was produced by the Department of Theatre and Dance at George Washington University in the Fall of 2014. The show opened on October 30, 2014 at the Dorothy Betts Marvin Theatre, part of the George Washington University in Washington DC. It was directed by Muriel Von Villas along with costume designer Reema Albawardy, lighting designer Carl Gudenius, and set designer Kirk Kristlibas. This thesis explores the costume design process for Urinetown: The Musical and the challenges of working with a large cast and dealing with many quick changes.

APA, Harvard, Vancouver, ISO, and other styles
8

Rosenbaum, Eric (Eric Ross). "Explorations in musical tinkering." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/97970.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2015.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 171-174).
This thesis introduces the idea of "musical tinkering," exploring how to engage people in playfully creating and experimenting with their own musical instruments and compositions. I discuss the design and study of two new tools for musical tinkering, MelodyMorph and MaKey MaKey. MelodyMorph is an iPad app for making musical compositions on the screen. MaKey MaKey is an invention kit that lets you transform everyday objects into physical-digital musical instruments. Two themes of musical tinkering, the loop and the map, are woven throughout this thesis. Loops are feedback processes. They range in scope from rapid iterative design, through interpersonal interaction and creative emergence, to longer-term personal transformation. Maps are active visualizations. We use them to externalize our thought processes, and we fluidly manipulate them as we tinker, linking graphical or tangible symbols to musical sounds. I use loops and maps as the basis for design concepts for musical tinkering tools and to analyze musical tinkering as a learning process. I present case studies of middle school students tinkering as they use MelodyMorph to compose musical stories, reconstruct tunes from video games, and make musical cartoons. I also present case studies of MaKey MaKey, showing how people have used it to tinker with music "in the wild," in my own workshops, and in the work of other educators. Through these case studies I characterize musical tinkering using the concepts of musical landscape-making, musical backtalk, and musical inquiry. I show that loops and maps intertwine in the processes of collaborative emergence, inventing new maps for new instruments, and tinkering with musical ideas and musical attitudes. Finally, I conclude with visions for remaking the landscape of musical tinkering in the future.
by Eric Rosenbaum.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
9

Holbrow, Charles J. "Hypercompression : stochastic musical processing." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/101838.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2015.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages [66]-70).
The theory of stochastic music proposes that we think of music as a vertical integration of mathematics, the physics of sound, psychoacoustics, and traditional music theory. In Hypercompression: Stochastic Musical Processing, we explore the design and implementation of three innovative musical projects that build on a deep vertical integration of science and technology in different ways: Stochastic Tempo Modulation, Reflection Visualizer, and Hypercompression. Stochastic Tempo Modulation proposes a mathematical approach for composing previously inaccessible polytempic music. The Reflection Visualizer introduces an interface for quickly sketching abstract architectural and musical ideas. Hypercompression describes a new technique for manipulating music in space and time. For each project, we examine how stochastic theory can help us discover and explore new musical possibilities, and we discuss the advantages and shortcomings of this approach.
by Charles J. Holbrow.
S.M.
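As a rough illustration of the polytempic idea behind Stochastic Tempo Modulation (this is not Holbrow's formulation, only an assumed simplification), the sketch below treats the number of beats elapsed under a linear tempo ramp as the area under the tempo curve, which is enough to let two voices at different tempi arrive on a shared downbeat.

# Illustrative sketch only: beats elapsed under a linear tempo ramp.
def beats_elapsed(bpm_start, bpm_end, seconds):
    # Average tempo (beats per minute) times duration (in minutes).
    return (bpm_start + bpm_end) / 2.0 * seconds / 60.0

# A voice accelerating from 90 to 120 bpm over 16 seconds crosses 28 beats,
# so a companion voice holding a steady 105 bpm meets it on the same downbeat.
print(beats_elapsed(90, 120, 16))   # 28.0
print(beats_elapsed(105, 105, 16))  # 28.0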
APA, Harvard, Vancouver, ISO, and other styles
10

Ollivier, Michèle. "M'entendez-vous? : essai de sociologie du langage musical." Thesis, University of Ottawa (Canada), 1988. http://hdl.handle.net/10393/5298.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Spearing, Robert. "A portfolio of compositions submitted for the degree of Doctor of Philosophy in Musical Composition." Thesis, University of Birmingham, 2010. http://etheses.bham.ac.uk//id/eprint/865/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Chai, Wei 1972. "Automated analysis of musical structure." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/33878.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2005.
Includes bibliographical references (p. 93-96).
Listening to music and perceiving its structure is a fairly easy task for humans, even for listeners without formal musical training. For example, we can notice changes of notes, chords and keys, though we might not be able to name them (segmentation based on tonality and harmonic analysis); we can parse a musical piece into phrases or sections (segmentation based on recurrent structural analysis); we can identify and memorize the main themes or the catchiest parts - hooks - of a piece (summarization based on hook analysis); we can detect the most informative musical parts for making certain judgments (detection of salience for classification). However, building computational models to mimic these processes is a hard problem. Furthermore, the amount of digital music that has been generated and stored has already become unfathomable. How to efficiently store and retrieve the digital content is an important real-world problem. This dissertation presents our research on automatic music segmentation, summarization and classification using a framework combining music cognition, machine learning and signal processing. It will inquire scientifically into the nature of human perception of music, and offer a practical solution to difficult problems of machine intelligence for automatic musical content analysis and pattern discovery.
Specifically, for segmentation, an HMM-based approach will be used for key change and chord change detection; and a method for detecting the self-similarity property using approximate pattern matching will be presented for recurrent structural analysis. For summarization, we will investigate the locations where the catchiest parts of a musical piece normally appear and develop strategies for automatically generating music thumbnails based on this analysis. For musical salience detection, we will examine methods for weighting the importance of musical segments based on the confidence of classification. Two classification techniques and their definitions of confidence will be explored. The effectiveness of all our methods will be demonstrated by quantitative evaluations and/or human experiments on complex real-world musical stimuli.
by Wei Chai.
Ph.D.
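A minimal sketch of the self-similarity analysis that underlies recurrent structural analysis is given below. It is illustrative only and assumes the librosa library; it is not Chai's implementation, which works from its own feature extraction and approximate pattern matching.

# Illustrative sketch only: a chroma self-similarity matrix in which
# repeated sections of a recording appear as bright diagonal stripes.
import numpy as np
import librosa  # assumed available

def self_similarity(path):
    y, sr = librosa.load(path, sr=22050)
    chroma = librosa.feature.chroma_stft(y=y, sr=sr, hop_length=2048)
    chroma = chroma / (np.linalg.norm(chroma, axis=0, keepdims=True) + 1e-9)
    # S[i, j] is the cosine similarity between frames i and j.
    return chroma.T @ chroma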
APA, Harvard, Vancouver, ISO, and other styles
13

Clark, Alan. "Brown study an original musical recording." Honors in the Major Thesis, University of Central Florida, 2012. http://digital.library.ucf.edu/cdm/ref/collection/ETH/id/673.

Full text
Abstract:
For a year and a spring semester, I have been at work on a school music project: a record of ten self-penned songs. Over the course of the project, I discovered musicians and recording artists. I notated my songs on a staff and recorded demos to assist players of drums, electric bass, French horn, and violin. I play guitar, percussion, and synthesized instruments, and do all of the singing on Brown Study, the record's title. The technology used to create the songs includes a Tascam 2488 (a home digital recording device), computers, printers, cell phones and iPhones, amplifiers, microphones and headphones, and a drum machine. This is my first attempt at collaborating with other musicians. At the defense I will present four songs: "Runaway," "Lonely Heart," "Topsy Turvy," and "Friendliest Advice." Each song has a particular history and story to tell, shaped by its influences and aesthetic philosophies. Once the full-length album is completed and distributed, my gift is to be shared with whoever will listen.
B.A.
Bachelors
Arts and Humanities
Music Education
APA, Harvard, Vancouver, ISO, and other styles
14

Weinberg, Gil 1967. "Expressive digital musical instruments for children." Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/62942.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 1999.
Includes bibliographical references (p. 87-92).
This thesis proposes to use technology to introduce children to musical expressivity and creativity. It describes a set of digital musical instruments that were developed in an effort to provide children with new tools for interaction, exploration and enjoyment of music. The thesis unfolds a multidisciplinary theoretical background, which reviews a number of philosophical, psychological, musical, and technological theories. The theoretical background focuses on enlightening a number of personal musical experiences and leads towards the formulation of three musical concepts that inform the design of the digital musical instruments. The musical concepts are: High and Low-level Musical Control, Immersive and Constructive Musical Experiences and Interdependent Group Playing. The thesis presents the embodiment of these concepts in digital musical instruments while emphasizing the importance of novel technology as a provider of creative and expressive musical experiences for children.
by Gil Weinberg.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
15

Overholt, Daniel James 1974. "The Emonator : a novel musical interface." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/62353.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Program in Media Arts & Sciences, 2000.
Includes bibliographical references (p. 75-77).
This thesis will discuss the technical and artistic design of the Emonator, a novel interactive musical interface which responds to gestural input with real-time aural and visual feedback. A user interacts with the Emonator by manipulating the surface formed by a bed of rods at the top of the Emonator. The user's movements are analyzed and used to control several music and sound generation engines as well as video streams in real time. The Emonator is an interesting musical experience for both amateur and professional musicians. It is also versatile, working well as a stand-alone interface or as part of a larger interactive experience.
by Daniel James Overholt.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
16

Hughes, Adam Lefever. "Assai: Historical Contexts of a Contested Musical Term." Thesis, The University of North Carolina at Greensboro, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10264065.

Full text
Abstract:

This study seeks to establish the feasibility of assai as a moderating term in more cases than is typically assumed. Because evidence of concurrent competing definitions of the term assai exists from the mid- to late-18th century, understanding and putting into practice a composer's indications according to his own understanding of the term becomes murky during and beyond the period in which the two definitions coexisted. Through investigation of musical scores, examining such features as ornamentation, the relative brilliance of the work, tonality, meter, and structure, the characteristics of a piece of music that are crucial to navigating the multivalent qualities of the word assai are identified and tested against the actual musical content of examples from works of J. S. Bach, Domenico Scarlatti, W. F. Bach, J. C. F. Bach, Johann Friedrich Agricola, C. P. E. Bach, W. A. Mozart, F. J. Haydn, Ludwig van Beethoven, Frédéric Chopin, and Franz Liszt.

APA, Harvard, Vancouver, ISO, and other styles
17

Campbell, Spencer Evison. "Automatic key detection of musical excerpts from audio." Thesis, McGill University, 2011. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=96769.

Full text
Abstract:
The proliferation of large digital audio collections has motivated recent research on content-based music information retrieval. One of the primary goals of this research is to develop new systems for searching, browsing, and retrieving music. Since tonality is a primary characteristic of Western music, the ability to detect the key of an audio source would be a valuable asset for such systems as well as for numerous other applications. A typical audio key-finding model is comprised of two main elements: feature extraction and key classification. Feature extraction utilizes signal processing techniques in order to obtain a set of data from the audio, usually representing information about the pitch content. The key classifier may employ a variety of strategies, but is essentially an algorithm that uses the extracted data in order to identify the key of the excerpt. This thesis presents a review of previous audio key detection techniques, as well as an implementation of an audio key detection system. Various combinations of feature extraction algorithms and classifiers are evaluated using three different data sets of 30-second musical excerpts. The first data set consists of excerpts from the first movement of pieces from the classical period. The second data set is comprised of excerpts of popular music songs. The final set is made up of excerpts of classical music songs that have been synthesized from MIDI files. A quantitative assessment of the results leads to a system design that maximizes key identification accuracy.
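Many of the key-finding systems reviewed here share a template-correlation core, which can be sketched in a few lines. The code below is illustrative only: it uses the published Krumhansl-Kessler profiles and generic names, not Campbell's feature extraction or classifiers.

# Illustrative sketch only: correlate a 12-bin pitch-class profile extracted
# from audio against rotated major and minor key templates.
import numpy as np

MAJOR = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                  2.52, 5.19, 2.39, 3.66, 2.29, 2.88])
MINOR = np.array([6.33, 2.68, 3.52, 5.38, 2.60, 3.53,
                  2.54, 4.75, 3.98, 2.69, 3.34, 3.17])
NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

def estimate_key(pitch_class_profile):
    """Return the best-matching key, e.g. 'A minor', for a 12-bin profile."""
    pcp = np.asarray(pitch_class_profile, dtype=float)
    best_r, best_key = -np.inf, None
    for tonic in range(12):
        for mode, template in (('major', MAJOR), ('minor', MINOR)):
            r = np.corrcoef(pcp, np.roll(template, tonic))[0, 1]
            if r > best_r:
                best_r, best_key = r, f'{NAMES[tonic]} {mode}'
    return best_key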
APA, Harvard, Vancouver, ISO, and other styles
18

Meine, Rodrigo. "Restrições e liberdade em composição musical : memorial de composição." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2012. http://hdl.handle.net/10183/61262.

Full text
Abstract:
This paper consists of a reflection on the creative acts behind a set of compositions, together with their respective scores and audiovisual recordings. The discussion begins with the establishment of the theoretical framework, proceeds with an approach to each of the four compositions, and closes with a few conclusions about the work carried out. The approach taken for the research is the concept of constraints, whose understanding and definition as a guiding element of the compositional act, in the context of this paper, is elaborated in the opening chapter. The following chapters discuss the compositions individually, noting, in light of the central concept, technical and aesthetic factors present during the creative act and retrospectively elucidating compositional decisions whose results are the presented scores. The concluding chapter outlines brief generalizations about the work as a whole, as well as pointing out aspects that may be subject to future consideration.
APA, Harvard, Vancouver, ISO, and other styles
19

Farbood, Morwaread Mary. "A quantitative, parametric model of musical tension." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/34182.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2006.
Includes bibliographical references (leaves [125]-132).
This thesis presents a quantitative, parametric model for describing musical tension. While the phenomenon of tension is evident to listeners, it is difficult to formalize due to its subjective and multi-dimensional nature. The model is therefore derived from empirical data. Two experiments with contrasting approaches are described. The first experiment is an online test with short musical excerpts and multiple choice answers. The format of the test makes it possible to gather large amounts of data. The second study requires fewer subjects and collects real-time responses to musical stimuli. Both studies present test subjects with examples that take into account a number of musical parameters including harmony, pitch height, melodic expectation, dynamics, onset frequency, tempo, and rhythmic regularity. The goal of the first experiment is to confirm that the individual musical parameters contribute directly to the listener's overall perception of tension. The goal of the second experiment is to explore linear and nonlinear models for predicting tension given descriptions of the musical parameters for each excerpt. The resulting model is considered for potential incorporation into computer-based applications. Specifically, it could be used as part of a computer-assisted composition environment. One such application, Hyperscore, is described and presented as a possible platform for integration.
by Morwaread M. Farbood.
Ph.D.
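The simplest version of the parametric model explored in the second experiment can be sketched as a linear combination of parameter descriptions fitted to empirical tension ratings. The sketch below is an assumption-laden illustration (generic function and variable names), not Farbood's actual model, which also considers nonlinear variants.

# Illustrative sketch only: least-squares weights for a linear tension model.
import numpy as np

def fit_tension_weights(features, ratings):
    # features: (frames, n_params) descriptions of harmony, pitch height,
    # dynamics, onset frequency, etc.; ratings: (frames,) mean real-time
    # tension responses collected from listeners.
    X = np.column_stack([features, np.ones(len(features))])  # add a bias term
    weights, *_ = np.linalg.lstsq(X, ratings, rcond=None)
    return weights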
APA, Harvard, Vancouver, ISO, and other styles
20

Krom, Matthew Wayne. "Machine perception of natural musical conducting gestures." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/61823.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Atkinson, Victoria. "Unravelling the musical in art : Matisse, his music and his textiles." Thesis, University of Essex, 2017. http://repository.essex.ac.uk/21219/.

Full text
Abstract:
From flamenco guitarists to parlour pianists, Matisse’s images of music-making often appear within decorative scenes of gleaming carpets, multi-coloured costumes and lavishly embroidered wall hangings. All of these textiles and more comprised what he called ‘ma bibliothèque de travail’, a working library of inspiration that he maintained throughout his career. ‘I am made up of everything I have seen,’ he remarked, to which he might have added, ‘and heard.’ Practising, performing, listening and concert-going: music, like textiles, was a lifelong pursuit. But his passion for them is not simply of anecdotal significance, nor does it explain their mere co-existence as the subject-matter of his art. Rather, just as music and textiles are interwoven at every stage of his life, so too is their structural and conceptual significance in his work. In a series of case studies, a single textile from his working library is paired with the art it inspired: the kasāya robe and 'The Song of the Nightingale'; the Moghan rug and the Symphonic Interiors; and the Bakuba velours and 'Jazz'. In each case, visual form is found to have musical counterpart, both in the textiles themselves and as represented by Matisse. This opens up new, more imaginative possibilities of interpreting his visual musicality, which is found to be metaphysical, modal and motivic in concept. Finally, these separate strands are drawn together in a single synoptic analysis of the Chapel of the Rosary, the artist’s self-proclaimed masterpiece and ‘total’ work of art. This thesis explores the expansive musical space created by the reduced visual form of textiles. Considered together for the first time, these enduring and inseparable continuities of Matisse’s art – music and textiles – suggest not only a means of unravelling his own visual musicality, but point towards a much-needed methodology for interpreting this notion more broadly.
APA, Harvard, Vancouver, ISO, and other styles
22

Robertson, Hannah. "Testing a new tool for alignment of musical recordings." Thesis, McGill University, 2014. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=121523.

Full text
Abstract:
Audio-to-audio alignment of musical recordings is the mapping of events in one recording to their corresponding events in other recordings of the same underlying musical piece. Among other applications, musical audio-to-audio alignment is used for: comparing and analyzing musical performances; finding different performances and arrangements of a musical work in a database; discovering musical motifs in field recordings of folk music; automatically synchronizing multiple takes (re-recordings of specific excerpts) in a recording studio; and aligning a musician's performance to a score in real time, for purposes of interactive performance such as automated accompaniment. This thesis investigates audio-to-audio alignment by an algorithm that has not previously been applied to music, the continuous profile model (CPM) (Listgarten et al. 2005). In this thesis, the CPM is used to align pairs of recordings (pairwise alignment) as well as groups containing more than two recordings (multiple alignment). A standard evaluation methodology is used to systematically compare pairwise alignment by the CPM to pairwise alignment by dynamic time warping (DTW), the algorithm most frequently used for audio-to-audio alignment of music. The evaluation methodology is then generalized to multiple dimensions in order to compare two approaches to multiple alignment: simultaneous multiple alignment with the CPM and iterative pairwise alignment with DTW.
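For context, the DTW baseline mentioned above has a compact textbook formulation; the sketch below shows the accumulated-cost recurrence only and is not the evaluation framework or CPM implementation developed in the thesis.

# Illustrative sketch only: classic dynamic time warping between two
# sequences of feature frames (e.g. chroma vectors from two recordings).
import numpy as np

def dtw_cost(x, y):
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(np.asarray(x[i - 1]) - np.asarray(y[j - 1]))
            # Each cell extends the cheapest of the three allowed step types.
            D[i, j] = d + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]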
APA, Harvard, Vancouver, ISO, and other styles
23

Moore, John Oliver. "CollaboRhythm : new paradigms in doctor-patient interaction applied to HIV medication adherence." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/55194.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2009.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 74-78).
Despite astounding advances in medical knowledge and treatment in recent decades, health outcomes are disappointing and costs continue to rise. The traditional paternalistic and episodic approach to medical care is not meeting the needs of patients. CollaboRhythm is a technological platform that is being developed to enable a more modern, collaborative, and continuous approach to care by facilitating new paradigms in doctor-patient interaction. It asks the question: Can a system that allows patients to become active participants in their care, through data transparency, shared decision making, education, and new channels of communication, improve patient outcomes? To begin testing the principles of CollaboRhythm, a system to support medication adherence for Human Immunodeficiency Virus (HIV) infection was created. It includes custom applications on a patient cell phone and an interactive device for the home called a Chumby, as well as a collaborative workstation in the clinician's office. The applications allow the reporting of medication adherence, viewing of adherence performance including a personalized and dynamic simulation of HIV, and sending of supportive video messages. The system is novel in that it abandons the typical alarm-based method of supporting adherence and instead focuses on a multifaceted approach to generating motivation through awareness, self-reflection, education, and social support. Transparency of data and new communication channels allow efficient and socially engaging collaboration in real-time. The HIV medication adherence system was evaluated in two stages.
In the first stage, twelve patient interviews were conducted. The response to the principles of the system was positive, with eleven of the twelve patients willing to share their adherence data with their clinician and all twelve agreeing that the HIV simulation and encouraging messages would motivate them to take their medications. Overall, eleven patients were interested in using the system. In the second stage, a one-month pilot deployment was conducted with four patients collaborating with an HIV medication adherence specialist. This stage also yielded encouraging results, with three patients maintaining greater than 95% adherence and all four patients confident that the system helped them improve their adherence. Important lessons were learned about its limitations, including ramifications of inaccurate reporting. The results from the HIV adherence study suggest that there is merit in the new paradigms in provider-patient interaction facilitated by CollaboRhythm and that some patients are receptive to the idea of becoming more active participants in their care. Evaluations at a larger scale and for a number of clinical scenarios are warranted.
by John O. Moore.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
24

Pashenkov, Nikita 1975. "Optical turntable as an interface for musical performance." Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/62364.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2002.
"June 2002."
Includes bibliographical references (leaves 80-82).
This thesis proposes a model of creative activity on the computer incorporating the elements of programming, graphics, sound generation, and physical interaction. An interface for manipulating these elements is suggested, based on the concept of a disk-jockey turntable as a performance instrument. A system is developed around this idea, enabling optical pickup of visual information from physical media as input to processes on the computer. Software architecture(s) are discussed and examples are implemented, illustrating the potential uses of the interface for the purpose of creative expression in the virtual domain.
Nikita Pashenkov.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
25

Lefford, M. Nyssim 1968. "The structure, perception and generation of musical patterns." Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/28781.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2004.
Includes bibliographical references (p. 149-151).
Structure distinguishes music from noise. When formulating that structure, musical artists rely on both mental representations and sensory perceptions to organize pitch, rhythm, harmony, timbre and dynamics into musical patterns. The generative process may be compared to playing a game, with goals, constraints, rules and strategies. In this study, games serve as a model for the interrelated mechanisms of music creation, and provide a format for an experimental technique that constrains creators as they generate simple rhythmic patterns. Correlations between subjects' responses and across experiments with varied constraints provide insight into how structure is defined in situ and how constraints impact creators' perceptions and decisions. Through the music composition games we investigate the nature of generative strategizing, refine a method for observing the generative process, and model the interconnecting components of a generative decision. The patterns produced in these games and the findings derived from observing how the games are played elucidate the roles of metric inference, preference and the perception of similarity in the generative process, and lead us to a representation of generative decision tied to a creator's perception of structure.
M. Nyssim Lefford.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
26

Su, David(David Dewei). "Massively multiplayer operas : interactive systems for collaborative musical narrative." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/123644.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2019
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 74-79).
Music, narrative, and social interaction have long been intertwined. The objective of this thesis is to create a platform, designed for interactive multiplayer operas, that explores the potential for technology-enabled systems to facilitate creativity through expression, the emotional affordances of musical storytelling, and the spatiotemporal boundaries of copresence. A variety of design experiments for collaborative musical narrative are implemented and evaluated. The work also introduces a real-time lyrical conversation system, with user interfaces that allow for simultaneous musical and narrative expression with a high degree of granularity. These experiences are encapsulated by an overarching lyrical multiplayer narrative opera platform. This project seeks to provide a novel means of creating and understanding multi-user, interactive music systems in which users participate in active and collaborative music-making in conjunction with narrative engagement.
by David Su.
S.M.
S.M. Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences
APA, Harvard, Vancouver, ISO, and other styles
27

Stedman, Kyle D. "Musical Rhetoric and Sonic Composing Processes." Scholar Commons, 2012. http://scholarcommons.usf.edu/etd/4229.

Full text
Abstract:
This project is a study of musical rhetoric and music composition processes. It asks the questions, "How does the nature of music as sound-in-time affect its rhetorical functions, production, and delivery?" and "How do composers approach the task of communicating with audiences through instrumental music?" I answer these questions by turning to the history of musical rhetoric as practiced in the field of musicology and by interviewing composers themselves about their composition practices--approaches that are both underused in the rhetoric and composition community. I frame my research participants' responses with a discussion of the different degrees to which composers try to control the eventual meaning made from their compositions and the different ways that they try to identify with their audiences. While some composers express a desire to control audiences' emotions and experiences through the use of forms and careful predictions about an audience's reactions to certain genres and influences, other composers express a comfort with audiences composing their own meanings from musical sounds, perhaps eschewing or transforming traditional forms and traditional performance practices. Throughout, I argue for the importance of considering all of these perspectives in the context of actually hearing music, as opposed to taming and solidifying it into a score on a page. These composers' insights suggest the importance of understanding musical rhetoric as an act based in sound and time that guides meaning but can never control it. They also suggest new ways of teaching English composition courses that are inspired by the experiences and practices of music composition students. Specifically, I argue that English composition courses should better rely on the self-sponsored literacies that students bring to classrooms, stretch the ways these courses approach traditional rules of composing, and approach digital tools, collaboration, and delivery in ways that mirror the experiences of music students.
APA, Harvard, Vancouver, ISO, and other styles
28

Jacobs, Bryan. "Coloring regret: emotional prosody as a metaphor for musical composition." Thesis, McGill University, 2008. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=18765.

Full text
Abstract:
Coloring Regret is a musical composition written for 21 musicians, one sound technician, and electronics. This essay is an analysis and description of the compositional tools and methods developed during the compositional process. The piece attempts to explore a relationship between emotional expression in the human voice and emotional expression in music. The inspiration for this work came from current research in emotional prosody which suggests that there are identifiable components to human speech that allow listeners to accurately interpret a speaker's emotional state. Audio files in which actors portray outbursts of emotional energy were analyzed and categorized, then later transcribed for acoustic instruments. An omni-present lament motive suggested a specific path through a previously developed harmonic “gravity” system. The final composition implies a journey from the “Vocal Sound Object World”- with a dramatic vocal, textural, and naturalistic electronic component – to the “Traditional Pitch and Rhythm-based World” dominated by clear rhythms, timbres, and pitches.
APA, Harvard, Vancouver, ISO, and other styles
29

Boyes, Graham. "Dictionary-based analysis/synthesis and structured representations of musical audio." Thesis, McGill University, 2012. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=106507.

Full text
Abstract:
In the representation of musical audio, it is common to favour either a signal or a symbol interpretation, while mid-level representation is an emerging topic. In this thesis we investigate the perspective of structured, intermediate representations through an integration of theoretical aspects related to separable sound objects, dictionary-based methods of signal analysis, and object-oriented programming. In contrast to examples in the literature that approach an intermediate representation from the signal level, we orient our formulation towards the symbolic level. This methodology is applied to both the specification of analytical techniques and the design of a software framework. Experimental results demonstrate that our method is able to achieve a lower Itakura-Saito distance, a perceptually-motivated measure of spectral dissimilarity, when compared to a generic model, and that our structured representation can be applied to visualization as well as agglomerative post-processing.
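The Itakura-Saito distance cited above as the evaluation measure has a simple closed form over spectral bins; the sketch below states it directly and is illustrative only, not the software framework described in the thesis.

# Illustrative sketch only: d_IS(p, q) = sum(p/q - log(p/q) - 1).
import numpy as np

def itakura_saito(p, q, eps=1e-12):
    p = np.asarray(p, dtype=float) + eps  # eps guards against zero bins
    q = np.asarray(q, dtype=float) + eps
    ratio = p / q
    return float(np.sum(ratio - np.log(ratio) - 1.0))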
APA, Harvard, Vancouver, ISO, and other styles
30

Walker, Seth. "Musical Spirituality: The Transformative Power of Popular Music." Honors in the Major Thesis, University of Central Florida, 2007. http://digital.library.ucf.edu/cdm/ref/collection/ETH/id/1050.

Full text
Abstract:
This item is only available in print in the UCF Libraries.
Bachelors
Arts and Humanities
Humanities
APA, Harvard, Vancouver, ISO, and other styles
31

Chinburg, Jenna. "The Perception of Trust Between Athletic Trainers and Musical Performing Artists." Thesis, San Jose State University, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10634918.

Full text
Abstract:

Trust is a crucial element for a successful patient-clinician relationship. Athletic trainers may care for musical performing artists who demonstrate unique needs compared to traditional patients. In order to provide the best care, athletic trainers must establish a basis of patient-centered care and build solid professional relationships with performers. By improving overall patient-clinician relationship factors with respect to this population, trust may be implemented and sustained. The purpose of the study was to determine factors that established or diminished trust between drum corps members and their athletic trainers. The study included 12 semi-structured interviews in which Drum Corps International (DCI) members defined and analyzed the perception of trust held within this population in relation to athletic trainer interaction. Trustworthiness techniques of member checks, triangulation, external auditing, connoisseurship, and negative case analyses were used. The qualitative methods determined perception of trust through emergent themes and the effect of trust on the patient-clinician relationship. The study further identified factors that maintained or inhibited the aspect of trust between performer and athletic trainer. Accessibility, clinical competence, dependability, comfort, and having a plan of action were found to be the most prominent themes and promote success within this relationship. Overall, trust plays a role in determining patient rapport, compliance, and timely return-to-play through the patient-clinician relationship in the performing arts setting.

APA, Harvard, Vancouver, ISO, and other styles
32

Fabio, Michael A. S. M. Massachusetts Institute of Technology. "The chandelier : an exploration in robotic musical instrument design." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/39342.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2007.
Includes bibliographical references (leaves 169-173).
This thesis presents several works involving robotic musical instruments. Robots have long been used in industry for performing repetitive tasks, or jobs requiring superhuman strength. However, more recently robots have found a niche as musical instruments. The works presented here attempt to address the musicality of these instruments, their use in various settings, and the relationship of a robotic instrument to its human player in terms of mapping and translating gesture to sound. The primary project, The Chandelier, addresses both hardware and software issues, and builds directly from experience with two other works, The Marshall Field's Flower Show and Jeux Deux. The Marshall Field's Flower Show is an installation for several novel musical instruments and controllers. Presented here is a controller and mapping system for a Yamaha Disklavier player piano that allows for real-time manipulation of musical variations on famous compositions. The work is presented in the context of the exhibit, but also discussed in terms of its underlying software and technology. Jeux Deux is a concerto for hyperpiano, orchestra, and live computer graphics.
The software and mapping schema for this piece are presented in this thesis as a novel method for live interaction, in which a human player duets with a computer controlled player piano. Results are presented in the context of live performance. The Chandelier is the culmination of these past works, and presents a full-scale prototype of a new robotic instrument. This instrument explores design methodology, interaction, and the relationship, and disconnect, of a human player controlling a robotic instrument. The design of hardware and software, and some mapping schema are discussed and analyzed in terms of playability, musicality, and use in public installation and individual performance. Finally, a proof-of-concept laser harp is presented as a low-cost alternative musical controller. This controller is easily constructed from off-the-shelf parts. It is analyzed in terms of its sensing abilities and playability.
Michael A. Fabio.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
33

Hammond, Edward Vickers 1974. "Multi-modal mixing : gestural control for musical mixing systems." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/69188.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2000.
Includes bibliographical references (p. 51-53).
The high-end tools available to today's sound designers give them almost limitless control over audio processing. Devices such as Yamaha's new digital mixers and Digidesign's Pro Tools computerized editing workstations allow users in a small studio to accomplish tasks which would have required racks full of gear only seven years ago in a professional studio. However, the evolution of the interfaces used with these systems has not kept pace with the improvements in functionality. With all of their complexity, the new digital mixing consoles still look like the old analog mixers, with the addition of an LCD screen. This thesis will introduce a new concept called Multi-Modal Mixing that aims to enhance current systems and point to a new standard of audio control.
by Edward Vickers Hammond.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
34

Feldmeier, Mark Christopher 1974. "Large group musical interaction using disposable wireless motion sensors." Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/33547.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2003.
Includes bibliographical references (p. 113-114).
One of the difficulties in interactive music and entertainment is creating environments that reflect and react to the collective activity of groups with tens, hundreds, or even thousands of participants. Generating content on this scale involves many challenges. For example, how is the individual granted low latency control and a sense of causality, while still allowing for information retrieval from all participants so that the environment responds to the behavior of the entire group? These issues are particularly pertinent in the area of interactive dance. To address these issues, a low-cost, wireless motion sensor has been developed. The sensor is inexpensive enough to be considered disposable, allowing it to be given away to participants at large dance events, enabling the dancers to participate concurrently in a realtime, interactive musical performance. The sensors are either worn or held by participants and transmit a short RF pulse when accelerated past a certain threshold. The RF pulses are received by a base station and analyzed to detect rhythmic features and estimate the general activity level of the group. These data are then used to generate music that can either lead or follow the participants' actions, thereby tightening the feedback loop between music and dancer. Multiple tests of the system have been conducted, with groups ranging from fifteen to 200 participants. Results of these tests show the viability of the sensors as a large group interaction tool. Participants found the interface intuitive to use, effectively controlling such aspects of the music as style, tempo, voicing, and filter parameters. These tests also demonstrate the system's ability to detect both the activity level and dominant tempo of the participants' motions, and give considerable insight into methods of mapping these data to musical parameters that give participants direct feedback as to their current state. Furthermore, it is shown that participants, if given this direct feedback, will synchronize their actions and increase in activity level, creating a mutually coherent and pleasing outcome.
by Mark Christopher Feldmeier.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
35

Grindlay, Graham Charles. "The impact of haptic guidance on musical motor learning." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/41562.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2007.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Includes bibliographical references (p. 75-80).
Skilled musical performance provides one of the best demonstrations of the upper limits of the human motor system's capabilities. It is therefore not surprising that learning to play an instrument is a long and difficult process. Teachers and education researchers alike have long recognized that while learning rate is dependent on the quantity of practice, perhaps even more important is the quality of that practice. However, for non-trivial skills such as music performance, just gaining an understanding of what physical movements are required can be challenging since they are often difficult to describe verbally. Music teachers often communicate complex gestures by physically guiding their students' hands through the required motions. However, at best, this gives a rough approximation of the target movement and raises the question of whether technology might be leveraged to provide a more accurate form of physical guidance. The success of such a system could lead to significant advancements in music pedagogy by speeding and easing the learning process and providing a more effective means of home instruction. This thesis proposes a "learning-by-feel" approach to percussion instruction and presents two different systems to test the effect of guidance on motor learning. The first system, called the FielDrum, uses a combination of permanent magnets and electromagnets to guide a player's drumstick tip through the motions involved in the performance of arbitrary rhythmic patterns. The second system, called the Haptic Guidance System, uses a servo motor and optical encoder pairing to provide precise measurement and playback of motions approximating those involved in snare drum performance. This device was used in a pilot study of the effects of physical guidance on percussion learning.
Results indicate that physical guidance can significantly benefit recall of both note timing and velocity. When subject performance was compared in terms of note velocity recall, the addition of haptic guidance to audio-based training produced a 17% reduction in final error when compared to audio training alone. When performance was evaluated in terms of timing recall, the combination of audio and haptic guidance led to an 18% reduction in early-stage error.
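The abstract reports percentage error reductions without defining the underlying metric. A minimal sketch of one plausible formulation, mean absolute per-note error and the relative reduction obtained by adding haptic guidance, is given below; the metric and names are assumptions for illustration, not Grindlay's published method.

```python
import numpy as np

def recall_error(target, performed):
    """Mean absolute per-note error between target and performed values,
    e.g. note onset times in seconds or MIDI velocities."""
    target = np.asarray(target, float)
    performed = np.asarray(performed, float)
    return float(np.mean(np.abs(performed - target)))

def error_reduction(error_baseline, error_with_guidance):
    """Relative reduction in recall error when haptic guidance is added
    to audio-only training."""
    return 1.0 - error_with_guidance / error_baseline

# A 17% reduction corresponds to, for example:
print(round(error_reduction(1.0, 0.83), 2))  # -> 0.17
```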
by Graham C. Grindlay.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
36

Newton, Keith Randolph. "Cross-Disciplinary Integration of Musical Works and Visual Arts through Computer Technology." The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1366381657.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Kyakuwa, Julius. "Exploring African musical arts as community outreach at the University of Pretoria." Diss., University of Pretoria, 2016. http://hdl.handle.net/2263/60374.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Pauza, Louis Anthony. "A study of historical dance forms and their relation to musical theatre choreography." Honors in the Major Thesis, University of Central Florida, 2008. http://digital.library.ucf.edu/cdm/ref/collection/ETH/id/1126.

Full text
Abstract:
This item is only available in print in the UCF Libraries.
Bachelors
Arts and Humanities
Theatre
APA, Harvard, Vancouver, ISO, and other styles
39

O'Reilly, Regueiro Federico. "Evaluation of interpolation strategies for the morphing of musical sound objects." Thesis, McGill University, 2010. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=95146.

Full text
Abstract:
Audio morphing is a timbre-transformation technique that produces timbres which lie in between those of two or more given tones. It can thus be seen as the interpolation of timbre descriptors or features. Morphing is most convincing when the features are perceptually relevant and the interpolation is perceived to be smooth and linear. Our research aims at producing practical guidelines for morphing musical sound objects. We define a set of features aimed at representing timbre in a quantifiable fashion, as completely and with as little redundancy as possible. We then report on the interpolation of each individual feature, imposed on an otherwise neutral synthetic sound, exploring strategies to obtain smooth-sounding interpolations. The chosen strategies are then evaluated by morphing recorded acoustic instrumental sounds. All of the scripts and the resulting sounds are available to the reader online.
Audio morphing is a sound transformation that produces timbres intermediate between those of given sounds. It can be regarded as an interpolation of timbre descriptors. Morphing is most convincing when the chosen descriptors are perceptually relevant and when the interpolation is perceived as linear. The goal of our research is to provide a practical guide to the morphing of musical sound objects. We define a collection of descriptors that describe timbre completely and without redundancy. We then carry out a systematic study to determine the best interpolation strategies for each descriptor on simple synthetic sounds. The strategies suited to synthetic signals are then evaluated on the modification of acoustic instrument sounds. All of the routines and audio files are available on a website.
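As a toy illustration of feature interpolation (not the thesis's actual descriptor set or synthesis code), the sketch below linearly interpolates one assumed feature, a vector of harmonic amplitudes, and resynthesizes the result additively; all names and values are hypothetical.

```python
import numpy as np

def morph_harmonic_amplitudes(amps_a, amps_b, alpha):
    """Linearly interpolate two harmonic-amplitude vectors:
    alpha = 0 gives the first timbre, alpha = 1 the second."""
    amps_a = np.asarray(amps_a, float)
    amps_b = np.asarray(amps_b, float)
    return (1.0 - alpha) * amps_a + alpha * amps_b

def synthesize(f0, amps, duration=1.0, sr=44100):
    """Additive resynthesis of a harmonic tone from an amplitude vector."""
    t = np.arange(int(duration * sr)) / sr
    partials = [a * np.sin(2 * np.pi * f0 * (k + 1) * t)
                for k, a in enumerate(amps)]
    return np.sum(partials, axis=0)

# Halfway morph between a bright and a dull harmonic spectrum:
mid = morph_harmonic_amplitudes([1.0, 0.9, 0.8, 0.7],
                                [1.0, 0.3, 0.1, 0.05], alpha=0.5)
tone = synthesize(220.0, mid)
```

Whether a plain linear blend of a given descriptor actually sounds smooth, or whether, say, a logarithmic interpolation is perceived as more linear, is exactly the kind of question the thesis evaluates.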
APA, Harvard, Vancouver, ISO, and other styles
40

Xiao, Xiao Ph D. Massachusetts Institute of Technology. "MirrorFugue : communicating presence in musical collaboration across space and time." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/76135.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2011.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 52-55).
This thesis examines the problem of conveying presence across space and time. My work focuses on collaborative music, but findings may be generalized to other fields of collaboration. I present MirrorFugue, a set of interfaces for a piano keyboard designed to visualize the body of a collaborator. I begin by describing a philosophy of remote communication where the sense of presence of a person is just as essential as the bits of raw information transmitted. I then describe work in remote collaborative workspaces motivated by this view. I apply this philosophy to musical performances, giving a historical perspective and presenting projects in musical collaboration and pedagogy. Next, I describe two iterations of MirrorFugue interfaces. The first introduces three spatial metaphors inspired by remote collaborative workspaces to display the hands of a virtual pianist at the interaction locus of a physical piano. The second iteration adds a pianist's face and upper body to the display. I outline usage scenarios for remote collaboration between two users and for a single user interacting with recorded material. I then present user studies of a MirrorFugue prototype in the context of remote piano lessons. I outline future work directions for increasing the portability of MirrorFugue, enhancing the sense of presence beyond the visual, and expanding MirrorFugue as an augmented piano platform.
by Xiao Xiao.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
41

McClendon, Carlton W. "The signifying culture of Harlem Renaissance writers: musical tropes and My People, My People." Honors in the Major Thesis, University of Central Florida, 1993. http://digital.library.ucf.edu/cdm/ref/collection/ETH/id/117.

Full text
Abstract:
This item is only available in print in the UCF Libraries.
Bachelors
Arts and Sciences
English Literature
APA, Harvard, Vancouver, ISO, and other styles
42

Tsurumaki, Megan Wiley. "You can't stop the beat: bringing musical theatre to underprivileged youth." Master's thesis, University of Central Florida, 2010. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4692.

Full text
Abstract:
In an age of standardized testing and quality-controlled classrooms, teachers have lost the freedom to integrate imagination and creativity in their lessons, ultimately cheating today's youth. In the classroom, students no longer have the outlets that transport them from the harsh realities of life. This thesis is an attempt to provide a venue for the Orange County Public School System that will engage the imaginations of under-represented or underprivileged students. The thesis will chronicle the development of a script with the intent of producing it in Title I elementary schools located in lower socio-economic areas of Orlando, Florida. The script will be based on Hans Christian Andersen's fairy tale "The Ugly Duckling." The final product will be a musical theatre piece to take into the school system to be performed by the students. The body of the thesis will contain my prior experiences of bringing musical theatre to underprivileged youth. The document will also include chapters detailing the process of creating the script and composing the music. Research will determine the socio-economic challenges prevalent in the under-represented cultures in the urban schools of Orlando. Finally, the thesis will contain a section of the actual script and will conclude with a chapter summarizing the reactions to the first reading of the play.
ID: 029050200; System requirements: World Wide Web browser and PDF reader; Mode of access: World Wide Web; Thesis (M.F.A.)--University of Central Florida, 2010; Includes bibliographical references (p. 54).
M.F.A.
Masters
Department of Theatre
Arts and Humanities
APA, Harvard, Vancouver, ISO, and other styles
43

Pendley, James. "Visualizing sound : a musical composition of aural architecture." [Tampa, Fla] : University of South Florida, 2009. http://purl.fcla.edu/usf/dc/et/SFE0003152.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Rudraraju, Vijay. "A tool for configuring mappings for musical systems using wireless sensor networks." Thesis, McGill University, 2012. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=106509.

Full text
Abstract:
Digital musical instruments, which are defined here as interactive musical systems containing a control mechanism and a sound generation mechanism, are powerful tools for analyzing performance practice and for transforming and reimagining the bounds of musical performance. However, the transitory nature of digital technology and the complexity of maintaining and configuring a digital musical instrument involving tens, if not hundreds, of interconnected, discrete components presents a unique problem. Even the most mechanically complex acoustic musical instruments, like a piano, are robust enough to withstand the daily grind without expert intervention by someone with intimate knowledge of the material and mechanical construction of the instrument. Furthermore, they are standardized enough that repairs can be conducted by any number of trained professionals. By contrast, digital musical instruments are often configured differently for each performance (this configurability being one of the virtues of a digital musical instrument), incorporate any number of non-standard pieces of hardware and software, and often can only be reliably configured by their creator. This problem is exacerbated as the number of sensors that make up the control mechanism in an instrument increases and the interaction of the control mechanism with the sound generation mechanism grows more complex. This relationship between the control mechanism and the sound generation mechanism is referred to here as the "mapping" of the instrument. The mapping for an instrument represents the aspect of an instrument that is usually most configurable because it is defined by software (as opposed to hardware) and also most crucial to the character of the instrument. In the case of a digital musical instrument, being able to easily configure the musical instrument becomes a point of artistic freedom in addition to a point of maintainability. This thesis builds upon work encompassed in two projects at the Input Devices and Musical Interaction Lab, the Digital Orchestra Project and Libmapper, to tackle the problem of building an interface/system for configuring a complex musical system without expert programming skills. The intent is to present a targeted survey of user interface design and data visualization design research through the years to inform the design of a graphical user interface for performing this configuration task.
Digital musical instruments, defined here as interactive musical systems containing a control mechanism and a sound production mechanism, are powerful tools for analysing performance practice as well as for transforming and redefining musical performance. However, the ephemeral nature of digital technologies and the complexity of maintaining the configuration of a digital musical instrument, comprising tens or even hundreds of interconnected discrete components, present a real problem. Even the most mechanically complex acoustic instruments, such as the piano, are robust enough to withstand daily wear without the intervention of an expert hand with detailed knowledge of the materials and mechanics of the instrument. Moreover, these instruments are standardized enough that any qualified builder can service them when needed. In contrast, digital musical instruments are often configured differently for each performance (this configurability being one of the virtues of digital instruments), incorporate various kinds of hardware and software, and can often only be reliably configured by their creator. The problem grows as the number of sensors used for the control part of an instrument increases and as the interaction between the control and sound production components becomes more complex. This relationship between the control mechanism and the sound production mechanism is called the "mapping" of the instrument. The mapping of an instrument is both its most configurable aspect, because it is defined in software (as opposed to hardware), and one of its most important characteristics. In the case of a digital musical instrument, the ability to act easily on the configuration thus becomes a matter of artistic freedom as well as of maintainability. This thesis builds on two projects carried out at the IDMIL laboratory (Input Devices and Music Interaction Lab), the Digital Orchestra Project and Libmapper, and addresses the problem of building a system/interface for configuring complex musical systems without expert programming knowledge. The objective is to present a targeted survey of research on user interface design and data visualization in order to inform the design of interfaces for the task of configuring digital instruments.
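Independently of libmapper's real API (none of the names below come from it), the core idea of a configurable mapping, a connection from a control signal to a sound parameter with an adjustable range and transfer curve, can be sketched as follows.

```python
from dataclasses import dataclass

@dataclass
class Mapping:
    """One connection from a control signal to a sound parameter,
    with a configurable range and transfer curve."""
    source: str                    # e.g. "/glove/accel/x" (hypothetical)
    destination: str               # e.g. "/synth/filter/cutoff" (hypothetical)
    src_range: tuple               # expected sensor range
    dst_range: tuple               # desired parameter range
    curve: str = "linear"          # or "expo"

    def apply(self, value):
        lo_s, hi_s = self.src_range
        lo_d, hi_d = self.dst_range
        x = (value - lo_s) / (hi_s - lo_s)   # normalise to 0..1
        x = min(max(x, 0.0), 1.0)            # clamp
        if self.curve == "expo":
            x = x ** 2
        return lo_d + x * (hi_d - lo_d)      # rescale to destination range

m = Mapping("/glove/accel/x", "/synth/filter/cutoff",
            src_range=(0.0, 1024.0), dst_range=(200.0, 8000.0))
print(m.apply(512.0))  # -> 4100.0
```

The point of a dedicated configuration tool of the kind the thesis describes is that a performer or composer can edit exactly this sort of description (sources, destinations, ranges, curves) without touching the code that implements it.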
APA, Harvard, Vancouver, ISO, and other styles
45

Wilansky, Jonathan. "A software tool for creating and visualizing mappings in digital musical instruments." Thesis, McGill University, 2014. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=121453.

Full text
Abstract:
In acoustic instruments, physical properties of the instrument determine both the gestures that can be performed on it and the sound produced by it. This is not the case with digital musical instruments (DMIs), which consist of three distinct parts: the playable interface, the sound generating component, and the mapping between them. The implication is that the mapping from sensors to sound is an integral part of the design of a DMI, and a process highly influential on how the instrument is played and sounds. Creating mappings in a DMI is a non-trivial task, and typically many different mappings are explored throughout the stages of prototyping, composition, and production. Furthermore, production environments involving DMIs will typically bring together engineers, composers, and performers in valuable and often short meeting times, requiring configuration and visualization of the mapping layer to be quick and simple, catering to both technical and non-technical participants. This thesis presents two software components developed at the Input Devices and Music Interaction Laboratory to aid in the mapping process: libmapper, a software library enabling connections to be made between data signals declared on a shared network, and Webmapper, a list-based graphical user interface to libmapper. An extension to Webmapper was created to address the following two concerns: to provide alternate interfaces for configuring mappings, and to provide alternate visualizations of the mapping layer. Two new interfaces were created to investigate these tasks: a grid-inspired view and a hive-plot-inspired view. The system was developed using HTML5-compliant technologies, and a framework architecture inspired by the model-view-controller paradigm was added to Webmapper for code modularity and maintainability. The advantages and disadvantages of the three different interfaces from Webmapper and its new extension are discussed with regard to their ability to act both as a software user interface and as a data visualization tool. The contribution of the work is the demonstration of the benefits of alternate methods for configuring and visualizing the mapping layer in DMIs, and the laying of a foundation for future investigations using the HTML5-compliant software created here.
In the world of acoustic instruments, the physical properties of the instrument determine both the gestures performed and the sounds produced; this is not the case with digital musical instruments (DMIs), which consist of three distinct parts: a control interface, a sound generation system, and the mapping between the two. This implies that the interaction between sensors and sound is an integral part of the DMI and strongly influences how the instrument sounds and plays. Creating mappings in a DMI is not a trivial task, and exploring different mappings is generally necessary during the prototyping, composition, and production phases. Moreover, production environments involving DMIs can create opportunities for engineers, composers, and performers to meet, since the configuration and visualization of mappings, which must be quick and simple, requires a mix of technical and non-technical know-how. This thesis is based on two software tools developed at the IDMIL laboratory (Input Devices and Music Interaction Laboratory) that support the mapping process: Libmapper, a library for connecting digital signals, and Webmapper, a graphical interface for Libmapper. An extension of Webmapper was also created to address the following two concerns: to offer an alternative interface for configuring mappings, and to offer an alternative visualization of the mapping layers. Two new interfaces were created, adding two new views: one in the form of a hive plot and one in the form of a grid. The system was developed in HTML5, using an architecture inspired by the model-view-controller pattern. The advantages and disadvantages of the three different interfaces of Webmapper and of its new extension are discussed with respect to their ability to act both as a user interface and as a visualization tool. The contributions of this work lie in demonstrating the advantages of alternative methods for configuring and visualizing the mapping layers in DMIs, and in laying foundations for future research using the HTML5 software.
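As a toy illustration of the grid-style view (this is not Webmapper's implementation; the function and signal names are invented), the mapping layer can be rendered as a matrix with sources as rows and destinations as columns.

```python
def grid_view(sources, destinations, connections):
    """Render the mapping layer as an ASCII grid: sources are rows,
    destinations are columns, 'x' marks an existing connection."""
    width = max(len(s) for s in sources)
    header = " " * (width + 1) + " ".join(d[:8].ljust(8) for d in destinations)
    rows = [header]
    for s in sources:
        cells = ["x".center(8) if (s, d) in connections else ".".center(8)
                 for d in destinations]
        rows.append(s.ljust(width) + " " + " ".join(cells))
    return "\n".join(rows)

print(grid_view(["accel.x", "accel.y"],
                ["cutoff", "pitch", "gain"],
                {("accel.x", "cutoff"), ("accel.y", "gain")}))
```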
APA, Harvard, Vancouver, ISO, and other styles
46

Marlin, Maggie. "Musical Theatre Handbook for the Actor." VCU Scholars Compass, 2009. http://scholarscompass.vcu.edu/etd/1774.

Full text
Abstract:
MUSICAL THEATRE HANDBOOK FOR THE ACTOR. By Maggie Elizabeth Marlin, MFA. A thesis submitted in partial fulfillment of the requirements for the degree of Master of Fine Arts at Virginia Commonwealth University, 2009. Major Director: David S. Leong, Chairman, Department of Theatre. Musical theatre is a performance style deeply woven into the fabric of the American theatre. We live in a time and social climate in which more than half of the productions currently open on Broadway are musicals. If actor training institutions profess a mission to prepare their students for a career in the entertainment industry, why are so many components of an actor's skill set left to the side and considered peripheral? One can argue that a given actor training program is exclusively for the theatre, and even more specifically for straight plays. What a career preparation institution chooses to target is its prerogative, and as long as that is clear to incoming students who wish to specialize only in that one fraction of the artist's opportunities for work, my argument is moot. However, if you believe that actor training has a duty to prepare actors to work in an ever-changing and transforming field and to be competitive in meeting the demands of various media, you should, among many other areas of focus, consider preparing your students to develop their craft for musical theatre as legitimately as you would for a classical or contemporary straight play. In this thesis I propose an approach to creating a role for musical theatre, using as an example my character development technique for the role of Sally Bowles in a recent production of Cabaret. My desire is to illustrate a seamless continuation of the actor's craft to meet the additional skill requirements of performing in a musical. Rather than signifying a separate style of acting for musical theatre, one identified as altogether different and often dismissed as inferior to the craft of acting in a straight play, I hope to challenge the reader to consider a new perspective in which the foundation of musical theatre performance is built on the fundamentals of acting in a straight play.
APA, Harvard, Vancouver, ISO, and other styles
47

Anderson, Ron James. "The Arts of Persuasion: Musical Rhetoric in the Keyboard Genres of Dieterich Buxtehude." Diss., The University of Arizona, 2012. http://hdl.handle.net/10150/242454.

Full text
Abstract:
Dieterich Buxtehude (ca. 1637-1707) was a North German composer of the mid-Baroque period. He lived in a time and place in which classical rhetoric, the study of oratory, influenced education, religion, and music. Applying the definition of rhetoric as the art of persuasion, this study surveys the different persuasive strategies employed by Buxtehude in his various keyboard genres. The elements considered in this inquiry include the affects of keys and modes, rhetorical figures, and structures of speeches as applied to music. The style and setting (ethos), intellectual content (logos), and emotional effect (pathos) are explored in each genre as elements of rhetorical persuasion. This study reveals that different genres of Buxtehude's keyboard music utilize different rhetorical strategies and techniques. These strategies vary according to the purpose of the music (i.e., secular or sacred), the presence or absence of an associated text, and the form of the composition. The chorale preludes, since they are driven by texts, use figures such as hypotyposis, assimilatio, anabasis, and catabasis to musically highlight important words in the text, or to amplify the text's underlying meaning. The suites, and parts of the variations, reflect the affects of the various dance movements as described by Johann Mattheson, Gregory Butler, and Patricia Ranum. The rhetorical nature of contrapuntal works is considered in terms of solving a musical issue through musical proofs, as described by Daniel Harrison. Finally, the praeludia embody the rhetorical form of the classical dispositio, or form of a forensic speech. These sectional works are arranged in such a way as to advantageously present both emotional and intellectual facets of a musical oration. The study also asserts that it is stylistically appropriate, given the audience-centered values of rhetorical persuasion, to perform Buxtehude's manualiter works at the piano, provided that they are played in a manner consistent with the style and structure of the music. This view is fortified by evidence that Baroque musicians, compared to modern musicians, were far less specific about instrumentation and musical details. An appendix offers specific performance suggestions for pianists in each of the works discussed in the study.
APA, Harvard, Vancouver, ISO, and other styles
48

Dennis, Harold Edward Brokaw. "How We Saved the World: A Multimedia Musical Drama." Diss., Temple University Libraries, 2013. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/254032.

Full text
Abstract:
Music Composition
D.M.A.
A monograph on the musical composition How We Saved the World, a multimedia musical drama written by the author, describes in detail the history of the writing of the piece, its context within his development as a composer, its context within our times, the writing and structure of the libretto, the characters and character types within the piece, their relationships with one another, and the music of the piece and its construction. The two-hour-long composition requires 44 performers to stage: 14 singers, 8 dancers, and a conducted 21-piece orchestra. In addition to traditional acoustic instruments (winds, brass, percussion, strings), the orchestra includes electric guitars, drum set, and audio and video laptop performers. How We Saved the World is situated in a future time and begins with the premise that the world has been saved. Human beings have found a way to live in peace and harmony with one another and with the ecology of our planet Earth. We, the participants in the performance, are sharing among ourselves the story of how human culture changed from the destructive, unsustainable practices and consciousness of the past. The libretto is included as an appendix. The score and all of the audio files needed to perform the piece are included as supplementary material.
Temple University--Theses
APA, Harvard, Vancouver, ISO, and other styles
49

Weinberg, Gil 1967. "Interconnected musical networks : bringing expression and thoughtfulness to collaborative group playing." Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/28287.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2003.
Includes bibliographical references (p. 211-219).
Music today is more ubiquitous, accessible, and democratized than ever. Thanks to technologies such as high-end home studios, audio compression, and digital distribution, music now surrounds us in everyday life, almost every piece of music is a few minutes of download away, and almost any Western musician, novice or expert, can compose, perform, and distribute their music directly to their listeners from their home studio. But at the same time, these technologies have led to some concerning social effects on the culture of consuming and creating music. Although music is available to more people, in more locations, and for longer periods of time, most listeners experience it in an incidental, unengaged, or utilitarian manner. On the creation side, home studios promote a private and isolated practice of music making in which hardly any musical instruments or even musicians are needed, and where the value of live group interaction is marginal. My thesis work attempts to use technology to address these same concerning effects that it has created, by developing tools and applications that address two main challenges: (1) facilitating engaged and thoughtful as well as intuitive and expressive musical experiences for novices and children, and (2) enhancing the inherent social attributes of music making by connecting to and intensifying the roots of music as a collaborative social ritual. My approach to the first challenge is to study and model music cognition and education theories and to design algorithms that bridge the thoughtful and the expressive, allowing novices and children access to meaningful and engaging musical experiences.
To address the latter challenge, I have decided to employ the digital network, a promising candidate for bringing a unique added value to the musical experience of collaborative group playing. I have chosen to address both challenges by embedding cognitive and educational concepts in newly designed interconnected instruments and applications, which led to the development of a number of such Interconnected Musical Networks (IMNs): live performance systems that allow players to influence, share, and shape each other's music in real time. In my thesis I discuss the concepts, motivations, and aesthetics of IMNs and review a number of historical and current technological landmarks that led the way to the development of the field. I then suggest a comprehensive theoretical framework for artistic interdependency, based on which I developed a set of instruments and activities in an effort to turn IMNs into an expressive and intuitive art form that provides meaningful learning experiences, engaging collaborative interactions, and worthy music.
by Gil Weinberg.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
50

Waxman, David Michael. "Digital theremins--interactive musical experiences for amateurs using electric field sensing." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/29100.

Full text
APA, Harvard, Vancouver, ISO, and other styles
