Academic literature on the topic 'Music perceptual evaluation'



Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Music perceptual evaluation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Music perceptual evaluation"

1

Houtsma, Adrianus J. M., and Henricus J. G. M. Tholen. "II. A Perceptual Evaluation." Music Perception 4, no. 3 (1987): 255–66. http://dx.doi.org/10.2307/40285369.

Full text
Abstract:
This article reports a study of the musical appreciation of carillons consisting of computer-synthesized major-third bells, minor-third bells, and "neutral-third" bells. Paired comparison judgments of melodies played on these instruments were obtained from a group of carillon majors, a group of other music students from a local conservatory, and a group of nonmusicians. The results provide evidence that each group of subjects can hear the difference between the three computerized instruments, but each group evaluates these perceptual differences in a different way.
APA, Harvard, Vancouver, ISO, and other styles
2

Larrouy-Maestri, Pauline, Dominique Morsomme, David Magis, and David Poeppel. "Lay Listeners Can Evaluate the Pitch Accuracy of Operatic Voices." Music Perception 34, no. 4 (April 1, 2017): 489–95. http://dx.doi.org/10.1525/mp.2017.34.4.489.

Full text
Abstract:
Lay listeners are reliable judges when evaluating pitch accuracy of occasional singers, suggesting that enculturation and laypersons’ perceptual abilities are sufficient to judge “simple” music material adequately. However, the definition of pitch accuracy in operatic performances is much more complex than in melodies performed by occasional singers. Furthermore, because listening to operatic performances is not a common activity, laypersons’ experience with this complicated acoustic signal is more limited. To address the question of music expertise in evaluating operatic singing voices, listeners without music training were compared with the music experts examined in a recent study (Larrouy-Maestri, Magis, & Morsomme, 2014a) and their ratings were modeled with regard to underlying acoustic variables of pitch accuracy. As expected, some participants lacked test-retest reliability in their judgments. However, listeners who used a consistent strategy relied on a definition of pitch accuracy that appears to overlap with the quantitative criteria used by music experts. Besides clarifying the role of music expertise in the evaluation of melodies, our findings show robust perceptual abilities in laypersons when listening to complex signals such as operatic performances.
3

de Man, Brecht, Kirk McNally, and Joshua Reiss. "Perceptual Evaluation and Analysis of Reverberation in Multitrack Music Production." Journal of the Audio Engineering Society 65, no. 1/2 (February 17, 2017): 108–16. http://dx.doi.org/10.17743/jaes.2016.0062.

Full text
4

Novello, Alberto, Martin M. F. McKinney, and Armin Kohlrausch. "Perceptual Evaluation of Inter-song Similarity in Western Popular Music." Journal of New Music Research 40, no. 1 (March 2011): 1–26. http://dx.doi.org/10.1080/09298215.2010.523470.

Full text
5

Liu, Fang, Cunmei Jiang, Tom Francart, Alice H. D. Chan, and Patrick C. M. Wong. "Perceptual Learning of Pitch Direction in Congenital Amusia." Music Perception 34, no. 3 (February 1, 2017): 335–51. http://dx.doi.org/10.1525/mp.2017.34.3.335.

Full text
Abstract:
Congenital amusia is a lifelong disorder of musical processing for which no effective treatments have been found. The present study aimed to treat amusics’ impairments in pitch direction identification through auditory training. Prior to training, twenty Chinese-speaking amusics and 20 matched controls were tested on the Montreal Battery of Evaluation of Amusia (MBEA) and two psychophysical pitch threshold tasks for identification of pitch direction in speech and music. Subsequently, ten of the twenty amusics undertook 10 sessions of adaptive-tracking pitch direction training, while the remaining 10 received no training. Post training, all amusics were retested on the pitch threshold tasks and on the three pitch-based MBEA subtests. Trained amusics demonstrated significantly improved thresholds for pitch direction identification in both speech and music, to the level of non-amusic control participants, although no significant difference was observed between trained and untrained amusics in the MBEA subtests. This provides the first clear positive evidence for improvement in pitch direction processing through auditory training in amusia. Further training studies are required to target different deficit areas in congenital amusia, so as to reveal which aspects of improvement will be most beneficial to the normal functioning of musical processing.
6

Ycart, Adrien, Lele Liu, Emmanouil Benetos, and Marcus T. Pearce. "Investigating the Perceptual Validity of Evaluation Metrics for Automatic Piano Music Transcription." Transactions of the International Society for Music Information Retrieval 3, no. 1 (2020): 68–81. http://dx.doi.org/10.5334/tismir.57.

Full text
7

Larrouy-Maestri, Pauline, David Magis, and Dominique Morsomme. "The Evaluation of Vocal Pitch Accuracy." Music Perception 32, no. 1 (September 1, 2014): 1–10. http://dx.doi.org/10.1525/mp.2014.32.1.1.

Full text
Abstract:
The objective analysis of Western operatic singing voices indicates that professional singers can be particularly “out of tune.” This study aims to better understand the evaluation of operatic voices, which have particularly complex acoustical signals. Twenty-two music experts were asked to evaluate the vocal pitch accuracy of 14 sung performances with a pairwise comparison paradigm, in a test and a retest. In addition to the objective measurement of pitch accuracy (pitch interval deviation), several performance parameters (average tempo, fundamental frequency of the starting note) and quality parameters (energy distribution, vibrato rate and extent) were observed and compared to the judges’ perceptual rating. The results show high intra and interjudge reliability when rating the pitch accuracy of operatic singing voices. Surprisingly, all the parameters were significantly related to the ratings and explain 78.8% of the variability of the judges’ rating. The pitch accuracy evaluation of operatic voices is thus not based exclusively on the precision of performed music intervals but on a complex combination of performance and quality parameters.
8

Rasumow, Eugen, Matthias Blau, Simon Doclo, Stephen van de Par, Martin Hansen, Dirk Püschel, and Volker Mellert. "Perceptual Evaluation of Individualized Binaural Reproduction Using a Virtual Artificial Head." Journal of the Audio Engineering Society 65, no. 6 (June 27, 2017): 448–59. http://dx.doi.org/10.17743/jaes.2017.0012.

Full text
9

Fela, Randy Frans, Nick Zacharov, and Søren Forchhammer. "Assessor Selection Process for Perceptual Quality Evaluation of 360 Audiovisual Content." Journal of the Audio Engineering Society 70, no. 10 (November 2, 2022): 824–42. http://dx.doi.org/10.17743/jaes.2022.0037.

Full text
10

Zacharakis, Asterios, Maximos Kaliakatsos-Papakostas, Costas Tsougras, and Emilios Cambouropoulos. "Creating Musical Cadences via Conceptual Blending." Music Perception 35, no. 2 (December 1, 2017): 211–34. http://dx.doi.org/10.1525/mp.2017.35.2.211.

Full text
Abstract:
The cognitive theory of conceptual blending may be employed to understand the way music becomes meaningful and, at the same time, it may form a basis for musical creativity per se. This work constitutes a case study whereby conceptual blending is used as a creative tool for inventing musical cadences. Specifically, the perfect and the renaissance Phrygian cadential sequences are used as input spaces to a cadence blending system that produces various cadential blends based on musicological and blending optimality criteria. A selection of “novel” cadences is subject to empirical evaluation in order to gain a better understanding of perceptual relationships between cadences. Pairwise dissimilarity ratings between cadences are transformed into a perceptual space and a verbal attribute magnitude estimation method on six descriptive axes (preference, originality, tension, closure, expectancy, and fit) is used to associate the dimensions of this space with descriptive qualities (closure and tension emerged as the most prominent qualities). The novel cadences generated by the computational blending system are mainly perceived as single-scope blends (i.e., blends where one input space is dominant), since categorical perception seems to play a significant role (especially in relation to the upward leading note movement). Insights into perceptual aspects of conceptual blending are presented and ramifications for developing sophisticated creative systems are discussed.

Dissertations / Theses on the topic "Music perceptual evaluation"

1

Sanden, Christopher. "An Empirical Evaluation of Computational and Perceptual Multi-Label Genre Classification on Music." Thesis, University of Lethbridge, Dept. of Mathematics and Computer Science, 2010. http://hdl.handle.net/10133/2602.

Full text
Abstract:
Automatic music genre classification is a high-level task in the field of Music Information Retrieval (MIR). It refers to the process of automatically assigning genre labels to music for various tasks, including, but not limited to, categorization, organization, and browsing. This is a topic which has seen an increase in interest recently as one of the cornerstones of MIR. However, due to the subjective and ambiguous nature of music, traditional single-label classification is inadequate. In this thesis, we study multi-label music genre classification from perceptual and computational perspectives. First, we design a set of perceptual experiments to investigate the genre-labelling behavior of individuals. The results from these experiments lead us to speculate that multi-label classification is more appropriate for classifying music genres. Second, we design a set of computational experiments to evaluate multi-label classification algorithms on music. These experiments not only support our speculation but also reveal which algorithms are more suitable for music genre classification. Finally, we propose and examine a group of ensemble approaches for combining multi-label classification algorithms to further improve classification performance.
viii, 87 leaves ; 29 cm
2

Simonetta, Federico. "Music Interpretation Analysis: A Multimodal Approach to Score-Informed Resynthesis of Piano Recordings." Doctoral thesis, Università degli Studi di Milano, 2022. http://hdl.handle.net/2434/918909.

Full text
Abstract:
This thesis discusses the development of technologies for the automatic resynthesis of music recordings using digital synthesizers. First, the main issue is identified in the understanding of how Music Information Processing (MIP) methods can take into consideration the influence of the acoustic context on the music performance. For this, a novel conceptual and mathematical framework named “Music Interpretation Analysis” (MIA) is presented. In the proposed framework, a distinction is made between the “performance” – the physical action of playing – and the “interpretation” – the action that the performer wishes to achieve. Second, the thesis describes further works aiming at the democratization of music production tools via automatic resynthesis: 1) it elaborates software and file formats for historical music archiving and multimodal machine-learning datasets; 2) it explores and extends MIP technologies; 3) it presents the mathematical foundations of the MIA framework and shows preliminary evaluations to demonstrate the effectiveness of the approach.
3

Nieto, Oriol. "Discovering Structure in Music: Automatic Approaches and Perceptual Evaluations." Thesis, New York University, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3705329.

Full text
Abstract:

This dissertation addresses the problem of the automatic discovery of structure in music from audio signals by introducing novel approaches and proposing perceptually enhanced evaluations. First, the problem of music structure analysis is reviewed from the perspectives of music information retrieval (MIR) and music perception and cognition (MPC), including a discussion of the limitations and current challenges in both disciplines. When discussing the existing methods of evaluating the outputs of algorithms that discover musical structure, a transparent open source software package called mir_eval, which contains implementations of these evaluations, is introduced. Then, four MIR algorithms are presented: one to compress music recordings into audible summaries, another to discover musical patterns from an audio signal, and two for the identification of the large-scale, non-overlapping segments of a musical piece. After discussing these techniques, and given the differences when perceiving the structure of music, the idea of applying more MPC-oriented approaches is considered to obtain perceptually relevant evaluations for music segmentation. A methodology to automatically obtain the most difficult tracks for machines to annotate is presented in order to include them in the design of a human study to collect multiple human annotations. To select these tracks, a novel open source framework called the music structural analysis framework (MSAF) is introduced. This framework contains the most relevant music segmentation algorithms and uses mir_eval to transparently evaluate them. Moreover, MSAF makes use of the JSON Annotated Music Specification (JAMS), a new format that holds multiple annotations for several tasks in a single file, which simplifies dataset design and the analysis of agreement across different human references. The human study to collect additional annotations (which are stored in JAMS files) is described, in which five new annotations for fifty tracks are collected.
Finally, these additional annotations are analyzed, confirming the problem of having ground-truth datasets with a single annotator per track due to the high degree of disagreement among annotators for the challenging tracks. To alleviate this, these annotations are merged to produce a more robust human reference annotation. Lastly, the standard F-measure of the hit rate measure to evaluate music segmentation is analyzed when access to additional annotations is not possible, and it is shown, via multiple human studies, that precision seems more perceptually relevant than recall.


Book chapters on the topic "Music perceptual evaluation"

1

Zhang, Kunzhu, Haoyu Yang, and Quan Yuan. "Perceptual Evaluation on the Man-Machine-Environment System of Music Library." In Man-Machine-Environment System Engineering, 703–9. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-4786-5_98.

Full text
2

De Man, Brecht, Ryan Stables, and Joshua D. Reiss. "Perceptual Evaluation in Music Production." In Intelligent Music Production, 83–94. Focal Press, 2019. http://dx.doi.org/10.4324/9781315166100-6.

Full text
3

Iwaki, Mamoru. "Information Hiding Using Interpolation for Audio and Speech Signals." In Advances in Multimedia and Interactive Technologies, 71–89. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-2217-3.ch004.

Full text
Abstract:
In this chapter, a time-domain high-bit-rate information hiding method using interpolation techniques, which can extract embedded data in both informed (non-blind) and non-informed (blind) ways, is proposed. Three interpolation techniques are introduced for the information hiding method, i.e., spline interpolation, Fourier-series interpolation, and linear-prediction interpolation. In the performance evaluation, spline interpolation was mainly examined as an example implementation. According to the simulation of information hiding in music signals, the spline interpolation-based method achieved audio-information hiding for CD-audio signals at a bit rate of about 2.9 kbps, and about 1.1 kbps under MP3 compression (160 kbps). The objective sound quality measured by the Perceptual Evaluation of Audio Quality (PEAQ) was maintained when the length of the interpolation data was increased. The objective sound quality was also evaluated for the Fourier-series-based implementation and the linear-prediction-based one. Fourier-series interpolation achieved the same sound quality as spline interpolation did. Linear-prediction interpolation required longer interpolation signals to achieve good sound quality.