Journal articles on the topic 'Music perceptual evaluation'




Consult the top 50 journal articles for your research on the topic 'Music perceptual evaluation.'

Next to every source in the list of references there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Houtsma, Adrianus J. M., and Henricus J. G. M. Tholen. "II. A Perceptual Evaluation." Music Perception 4, no. 3 (1987): 255–66. http://dx.doi.org/10.2307/40285369.

Abstract:
This article reports a study of the musical appreciation of carillons consisting of computer-synthesized major-third bells, minor-third bells, and "neutral-third" bells. Paired comparison judgments of melodies played on these instruments were obtained from a group of carillon majors, a group of other music students from a local conservatory, and a group of nonmusicians. The results provide evidence that each group of subjects can hear the difference between the three computerized instruments, but each group evaluates these perceptual differences in a different way.
2

Larrouy-Maestri, Pauline, Dominique Morsomme, David Magis, and David Poeppel. "Lay Listeners Can Evaluate the Pitch Accuracy of Operatic Voices." Music Perception 34, no. 4 (April 1, 2017): 489–95. http://dx.doi.org/10.1525/mp.2017.34.4.489.

Abstract:
Lay listeners are reliable judges when evaluating pitch accuracy of occasional singers, suggesting that enculturation and laypersons’ perceptual abilities are sufficient to judge “simple” music material adequately. However, the definition of pitch accuracy in operatic performances is much more complex than in melodies performed by occasional singers. Furthermore, because listening to operatic performances is not a common activity, laypersons‘ experience with this complicated acoustic signal is more limited. To address the question of music expertise in evaluating operatic singing voices, listeners without music training were compared with the music experts examined in a recent study (Larrouy-Maestri, Magis, & Morsomme, 2014a) and their ratings were modeled with regard to underlying acoustic variables of pitch accuracy. As expected, some participants lacked test-retest reliability in their judgments. However, listeners who used a consistent strategy relied on a definition of pitch accuracy that appears to overlap with the quantitative criteria used by music experts. Besides clarifying the role of music expertise in the evaluation of melodies, our findings show robust perceptual abilities in laypersons when listening to complex signals such as operatic performances.
3

de Man, Brecht, Kirk McNally, and Joshua Reiss. "Perceptual Evaluation and Analysis of Reverberation in Multitrack Music Production." Journal of the Audio Engineering Society 65, no. 1/2 (February 17, 2017): 108–16. http://dx.doi.org/10.17743/jaes.2016.0062.

4

Novello, Alberto, Martin M. F. McKinney, and Armin Kohlrausch. "Perceptual Evaluation of Inter-song Similarity in Western Popular Music." Journal of New Music Research 40, no. 1 (March 2011): 1–26. http://dx.doi.org/10.1080/09298215.2010.523470.

5

Liu, Fang, Cunmei Jiang, Tom Francart, Alice H. D. Chan, and Patrick C. M. Wong. "Perceptual Learning of Pitch Direction in Congenital Amusia." Music Perception 34, no. 3 (February 1, 2017): 335–51. http://dx.doi.org/10.1525/mp.2017.34.3.335.

Abstract:
Congenital amusia is a lifelong disorder of musical processing for which no effective treatments have been found. The present study aimed to treat amusics’ impairments in pitch direction identification through auditory training. Prior to training, twenty Chinese-speaking amusics and 20 matched controls were tested on the Montreal Battery of Evaluation of Amusia (MBEA) and two psychophysical pitch threshold tasks for identification of pitch direction in speech and music. Subsequently, ten of the twenty amusics undertook 10 sessions of adaptive-tracking pitch direction training, while the remaining 10 received no training. Post training, all amusics were retested on the pitch threshold tasks and on the three pitch-based MBEA subtests. Trained amusics demonstrated significantly improved thresholds for pitch direction identification in both speech and music, to the level of non-amusic control participants, although no significant difference was observed between trained and untrained amusics in the MBEA subtests. This provides the first clear positive evidence for improvement in pitch direction processing through auditory training in amusia. Further training studies are required to target different deficit areas in congenital amusia, so as to reveal which aspects of improvement will be most beneficial to the normal functioning of musical processing.
6

Ycart, Adrien, Lele Liu, Emmanouil Benetos, and Marcus T. Pearce. "Investigating the Perceptual Validity of Evaluation Metrics for Automatic Piano Music Transcription." Transactions of the International Society for Music Information Retrieval 3, no. 1 (2020): 68–81. http://dx.doi.org/10.5334/tismir.57.

7

Larrouy-Maestri, Pauline, David Magis, and Dominique Morsomme. "The Evaluation of Vocal Pitch Accuracy." Music Perception 32, no. 1 (September 1, 2014): 1–10. http://dx.doi.org/10.1525/mp.2014.32.1.1.

Abstract:
The objective analysis of Western operatic singing voices indicates that professional singers can be particularly “out of tune.” This study aims to better understand the evaluation of operatic voices, which have particularly complex acoustical signals. Twenty-two music experts were asked to evaluate the vocal pitch accuracy of 14 sung performances with a pairwise comparison paradigm, in a test and a retest. In addition to the objective measurement of pitch accuracy (pitch interval deviation), several performance parameters (average tempo, fundamental frequency of the starting note) and quality parameters (energy distribution, vibrato rate and extent) were observed and compared to the judges’ perceptual ratings. The results show high intra- and interjudge reliability when rating the pitch accuracy of operatic singing voices. Surprisingly, all the parameters were significantly related to the ratings and together explain 78.8% of the variability in the judges’ ratings. The pitch accuracy evaluation of operatic voices is thus not based exclusively on the precision of performed music intervals but on a complex combination of performance and quality parameters.
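The modelling step this abstract describes, relating judges' ratings to measured performance and quality parameters, amounts to a multiple regression. A minimal sketch follows, assuming placeholder predictors and ratings rather than the study's actual data.

```python
# Hedged sketch: regress mean perceptual pitch-accuracy ratings on acoustic
# predictors, the kind of analysis the abstract describes.
# Variable names and values are placeholders, not the study's real dataset.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_performances = 14

X = np.column_stack([
    rng.normal(20, 5, n_performances),    # pitch interval deviation (cents)
    rng.normal(100, 10, n_performances),  # average tempo (BPM)
    rng.normal(5.5, 0.5, n_performances), # vibrato rate (Hz)
    rng.normal(0.5, 0.1, n_performances), # spectral energy ratio
])
y = rng.normal(50, 15, n_performances)    # mean judge rating (placeholder)

model = LinearRegression().fit(X, y)
print("R^2 (variance explained):", model.score(X, y))
print("Coefficients:", model.coef_)
```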
8

Rasumow, Eugen, Matthias Blau, Simon Doclo, Stephen van de Par, Martin Hansen, Dirk Püschel, and Volker Mellert. "Perceptual Evaluation of Individualized Binaural Reproduction Using a Virtual Artificial Head." Journal of the Audio Engineering Society 65, no. 6 (June 27, 2017): 448–59. http://dx.doi.org/10.17743/jaes.2017.0012.

9

Fela, Randy Frans, Nick Zacharov, and Søren Forchhammer. "Assessor Selection Process for Perceptual Quality Evaluation of 360 Audiovisual Content." Journal of the Audio Engineering Society 70, no. 10 (November 2, 2022): 824–42. http://dx.doi.org/10.17743/jaes.2022.0037.

10

Zacharakis, Asterios, Maximos Kaliakatsos-Papakostas, Costas Tsougras, and Emilios Cambouropoulos. "Creating Musical Cadences via Conceptual Blending." Music Perception 35, no. 2 (December 1, 2017): 211–34. http://dx.doi.org/10.1525/mp.2017.35.2.211.

Abstract:
The cognitive theory of conceptual blending may be employed to understand the way music becomes meaningful and, at the same time, it may form a basis for musical creativity per se. This work constitutes a case study whereby conceptual blending is used as a creative tool for inventing musical cadences. Specifically, the perfect and the renaissance Phrygian cadential sequences are used as input spaces to a cadence blending system that produces various cadential blends based on musicological and blending optimality criteria. A selection of “novel” cadences is subject to empirical evaluation in order to gain a better understanding of perceptual relationships between cadences. Pairwise dissimilarity ratings between cadences are transformed into a perceptual space and a verbal attribute magnitude estimation method on six descriptive axes (preference, originality, tension, closure, expectancy, and fit) is used to associate the dimensions of this space with descriptive qualities (closure and tension emerged as the most prominent qualities). The novel cadences generated by the computational blending system are mainly perceived as single-scope blends (i.e., blends where one input space is dominant), since categorical perception seems to play a significant role (especially in relation to the upward leading note movement). Insights into perceptual aspects of conceptual bending are presented and ramifications for developing sophisticated creative systems are discussed.
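Transforming pairwise dissimilarity ratings into a perceptual space is usually done with multidimensional scaling. The sketch below illustrates that step with scikit-learn's metric MDS on a random placeholder dissimilarity matrix; the abstract does not name the exact algorithm, so treat this as an assumption.

```python
# Hedged sketch: derive a 2-D perceptual space from pairwise dissimilarity
# ratings between cadences via multidimensional scaling (MDS).
# The dissimilarity matrix here is random placeholder data.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(1)
n_cadences = 8
ratings = rng.uniform(0, 1, (n_cadences, n_cadences))
dissim = (ratings + ratings.T) / 2      # symmetrize the rating matrix
np.fill_diagonal(dissim, 0.0)           # zero self-dissimilarity

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)      # one 2-D point per cadence
print(coords)
```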
11

Buyens, Wim, Bas van Dijk, Marc Moonen, and Jan Wouters. "Evaluation of a Stereo Music Preprocessing Scheme for Cochlear Implant Users." Journal of the American Academy of Audiology 29, no. 01 (January 2018): 035–43. http://dx.doi.org/10.3766/jaaa.16103.

Abstract:
Although for most cochlear implant (CI) users good speech understanding is reached (at least in quiet environments), the perception and the appraisal of music are generally unsatisfactory. The improvement in music appraisal was evaluated in CI participants by using a stereo music preprocessing scheme implemented on a take-home device, in a comfortable listening environment. The preprocessing allowed adjusting the balance among vocals/bass/drums and other instruments, and was evaluated for different genres of music. The correlation between the preferred settings and the participants’ speech and pitch detection performance was investigated. During the initial visit preceding the take-home test, the participants’ speech-in-noise perception and pitch detection performance were measured, and a questionnaire about their music involvement was completed. The take-home device was provided, including the stereo music preprocessing scheme and seven playlists with six songs each. The participants were asked to adjust the balance by means of a turning wheel to make the music sound most enjoyable, and to repeat this three times for all songs. Twelve postlingually deafened CI users participated in the study. The data were collected by means of a take-home device, which preserved all the preferred settings for the different songs. Statistical analysis was done with a Friedman test (with post hoc Wilcoxon signed-rank test) to check the effect of “Genre.” The correlations were investigated with Pearson’s and Spearman’s correlation coefficients. All participants preferred a balance significantly different from the original balance. Differences across participants were observed which could not be explained by perceptual abilities. An effect of “Genre” was found, showing significantly smaller preferred deviation from the original balance for Golden Oldies compared to the other genres. The stereo music preprocessing scheme showed an improvement in music appraisal with complex music and hence might be a good tool for music listening, training, or rehabilitation for CI users.
12

Repp, Bruno H. "Further Perceptual Evaluations of Pulse Microstructure in Computer Performances of Classical Piano Music." Music Perception 8, no. 1 (1990): 1–33. http://dx.doi.org/10.2307/40285483.

Abstract:
This research continues the perceptual evaluation of "composers' pulses" begun by Repp (1989a) and Thompson (1989). Composers' pulses are patterns of expressive microstructure (i. e., timing and amplitude modulations) proposed by Clynes (1983). They are said to convey individual composers' personalities and to enhance their characteristic expression when implemented in computer performances of their music. For the present experiments, the initial bars of five piano pieces by each of four composers (Beethoven, Haydn, Mozart, and Schubert) were generated with each of four pulse microstructures similar to Clynes's composer-specific patterns, and also in a deadpan version. Listeners representing a wide range of musical experience judged to what extent each computer performance had the composer's individual expression, relative to the deadpan version. Listeners showed an overall preference for the Beethoven and Haydn pulses. The pattern of pulse preferences varied significantly among individual pieces, but little among different composers. These results indirectly support the general notion that expressive variation is contingent on musical structure, but they offer little evidence in support of fixed, composer-specific patterns of expressive microstructure.
13

Rathcke, Tamara, Simone Falk, and Simone Dalla Bella. "Music to Your Ears." Music Perception 38, no. 5 (June 1, 2021): 499–508. http://dx.doi.org/10.1525/mp.2021.38.5.499.

Abstract:
Listeners usually have no difficulties telling the difference between speech and song. Yet when a spoken phrase is repeated several times, they often report a perceptual transformation that turns speech into song. There is a great deal of variability in the perception of the speech-to-song illusion (STS). It may result partly from linguistic properties of spoken phrases and partly from individual processing differences among listeners exposed to STS. To date, existing evidence is insufficient to predict who is most likely to experience the transformation, and which sentences may be more conducive to the transformation once spoken repeatedly. The present study investigates these questions with French and English listeners, testing the hypothesis that the transformation is achieved by means of functional re-evaluation of phrasal prosody during repetition. Such prosodic re-analysis places demands on the phonological structure of sentences and language proficiency of listeners. Two experiments show that STS is facilitated in high-sonority sentences and in listeners’ non-native languages and support the hypothesis that STS involves a switch between musical and linguistic perception modes.
14

Navarro-Cáceres, María, Javier Félix Merchán Sánchez-Jara, Valderi Reis Quietinho Leithardt, and Raúl García-Ovejero. "Assistive Model to Generate Chord Progressions Using Genetic Programming with Artificial Immune Properties." Applied Sciences 10, no. 17 (August 31, 2020): 6039. http://dx.doi.org/10.3390/app10176039.

Abstract:
In Western tonal music, tension in chord progressions plays an important role in defining the path that a musical composition should follow. The creation of chord progressions that reflect such tension profiles can be challenging for novice composers, as it depends on many subjective factors and is also regulated by multiple theoretical principles. This work presents ChordAIS-Gen, a tool to assist users in generating chord progressions that comply with a concrete tension profile. We propose an objective measure capable of capturing the tension profile of a chord progression according to different tonal music parameters, namely consonance, hierarchical tension, voice leading and perceptual distance. This measure is optimized by a Genetic Programming algorithm combined with an Artificial Immune System called Opt-aiNet. Opt-aiNet is capable of finding multiple optima in parallel, resulting in multiple candidate solutions for the next chord in a sequence. To validate the objective function, we performed a listening test to evaluate the perceptual quality of the candidate solutions proposed by our system. Most listeners rated the chord progressions proposed by ChordAIS-Gen as better candidates than the progressions discarded. Thus, we propose using the objective values as a proxy for the perceptual evaluation of chord progressions and compare the performance of ChordAIS-Gen with other chord progression generators.
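The objective measure described here combines several tonal parameters into a single score that a search algorithm can optimize. The following sketch shows a generic weighted-sum objective of that kind; the component functions, weights, and chord encoding are illustrative assumptions, not the actual ChordAIS-Gen objective or the Opt-aiNet optimizer.

```python
# Hedged sketch: a generic weighted objective for choosing the next chord so
# that a progression follows a desired tension profile. The components and
# weights are illustrative stand-ins, not ChordAIS-Gen's real formulation.
from typing import List

def voice_leading_cost(candidate: List[int], prev_chord: List[int]) -> float:
    """Total semitone movement between successive chords (smaller = smoother)."""
    return float(sum(abs(a - b) for a, b in zip(sorted(candidate), sorted(prev_chord))))

def dissonance(candidate: List[int]) -> float:
    """Crude tension proxy: count 1- or 2-semitone gaps between pitch classes."""
    pcs = sorted(p % 12 for p in candidate)
    return float(sum(1 for i in range(len(pcs) - 1) if pcs[i + 1] - pcs[i] in (1, 2)))

def chord_objective(candidate: List[int], prev_chord: List[int],
                    target_tension: float,
                    w_tension: float = 1.0, w_voice: float = 0.2) -> float:
    """Lower is better: deviation from target tension plus voice-leading cost."""
    return (w_tension * abs(dissonance(candidate) - target_tension)
            + w_voice * voice_leading_cost(candidate, prev_chord))

# Score two candidate chords following a C major triad, aiming for low tension
print(chord_objective([62, 67, 71], prev_chord=[60, 64, 67], target_tension=0.0))  # G major
print(chord_objective([60, 62, 67], prev_chord=[60, 64, 67], target_tension=0.0))  # C add9 (no 3rd)
```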
15

Fela, Randy Frans, Nick Zacharov, and Søren Forchhammer. "Comparison of Full Factorial and Optimal Experimental Design for Perceptual Evaluation of Audiovisual Quality." Journal of the Audio Engineering Society 71, no. 1/2 (January 16, 2023): 4–19. http://dx.doi.org/10.17743/jaes.2022.0063.

16

Henry, Molly J., and J. Devin McAuley. "Failure to Apply Signal Detection Theory to the Montreal Battery of Evaluation of Amusia May Misdiagnose Amusia." Music Perception 30, no. 5 (December 2012): 480–96. http://dx.doi.org/10.1525/mp.2013.30.5.480.

Abstract:
This article considers a signal detection theory (SDT) approach to evaluation of performance on the Montreal Battery of Evaluation of Amusia (MBEA). One hundred fifty-five individuals completed the original binary response version of the MBEA (n = 62) or a confidence rating version (MBEA-C; n = 93). Confidence ratings afforded construction of empirical receiver operator characteristic (ROC) curves and derivation of bias-free performance measures against which we compared the standard performance metric, proportion correct (PC), and an alternative signal detection metric, d ′. Across the board, PC was tainted by response bias and underestimated performance as indexed by Az, a nonparametric ROC-based performance measure. Signal detection analyses further revealed that some individuals performing worse than the standard PC-based cutoff for amusia diagnosis showed large response biases. Given that PC is contaminated by response bias, this suggests the possibility that categorizing individuals as having amusia or not, using a PC-based cutoff, may inadvertently misclassify some individuals with normal perceptual sensitivity as amusic simply because they have large response biases. In line with this possibility, a comparison of amusia classification using d ′- and PC-based cutoffs showed potential misclassification of 33% of the examined cases.
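The bias-free sensitivity measure contrasted with proportion correct in this abstract is the signal detection statistic d′. A minimal sketch, assuming equal-variance Gaussian signal detection theory and made-up response counts, is shown below.

```python
# Hedged sketch: equal-variance Gaussian signal detection measures (d-prime
# and criterion c) from hit and false-alarm counts, the kind of bias-free
# statistics the abstract contrasts with proportion correct.
from scipy.stats import norm

def dprime_and_bias(hits, misses, false_alarms, correct_rejections):
    # Log-linear correction so rates of 0 or 1 do not yield infinite z-scores
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa               # sensitivity
    criterion = -0.5 * (z_hit + z_fa)    # response bias (c)
    return d_prime, criterion

# Illustrative counts for one listener on a same/different task
print(dprime_and_bias(hits=24, misses=6, false_alarms=12, correct_rejections=18))
```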
17

Kantrowitz, J. T., N. Scaramello, A. Jakubovitz, J. M. Lehrfeld, P. Laukka, H. A. Elfenbein, G. Silipo, and D. C. Javitt. "Amusia and protolanguage impairments in schizophrenia." Psychological Medicine 44, no. 13 (March 31, 2014): 2739–48. http://dx.doi.org/10.1017/s0033291714000373.

Abstract:
Background: Both language and music are thought to have evolved from a musical protolanguage that communicated social information, including emotion. Individuals with perceptual music disorders (amusia) show deficits in auditory emotion recognition (AER). Although auditory perceptual deficits have been studied in schizophrenia, their relationship with musical/protolinguistic competence has not previously been assessed. Method: Musical ability was assessed in 31 schizophrenia/schizo-affective patients and 44 healthy controls using the Montreal Battery for Evaluation of Amusia (MBEA). AER was assessed using a novel battery in which actors provided portrayals of five separate emotions. The Disorganization factor of the Positive and Negative Syndrome Scale (PANSS) was used as a proxy for language/thought disorder and the MATRICS Consensus Cognitive Battery (MCCB) was used to assess cognition. Results: Highly significant deficits were seen between patients and controls across auditory tasks (p < 0.001). Moreover, significant differences were seen in AER between the amusia and intact music-perceiving groups, which remained significant after controlling for group status and education. Correlations with AER were specific to the melody domain, and correlations between protolanguage (melody domain) and language were independent of overall cognition. Discussion: This is the first study to document a specific relationship between amusia, AER and thought disorder, suggesting a shared linguistic/protolinguistic impairment. Once amusia was considered, other cognitive factors were no longer significant predictors of AER, suggesting that musical ability in general and melodic discrimination ability in particular may be crucial targets for treatment development and cognitive remediation in schizophrenia.
18

Wright, Rose, and Rosalie M. Uchanski. "Music Perception and Appraisal: Cochlear Implant Users and Simulated Cochlear Implant Listening." Journal of the American Academy of Audiology 23, no. 05 (May 2012): 350–65. http://dx.doi.org/10.3766/jaaa.23.5.6.

Abstract:
Background: The inability to hear music well may contribute to decreased quality of life for cochlear implant (CI) users. Researchers have reported recently on the generally poor ability of CI users to perceive music, and a few researchers have reported on the enjoyment of music by CI users. However, the relation between music perception skills and music enjoyment is much less explored. Only one study has attempted to predict CI users’ enjoyment and perception of music from the users’ demographic variables and other perceptual skills (Gfeller et al, 2008). Gfeller's results yielded different predictive relationships for music perception and music enjoyment, and the relationships were weak, at best. Purpose: The first goal of this study is to clarify the nature and relationship between music perception skills and musical enjoyment for CI users, by employing a battery of music tests. The second goal is to determine whether normal hearing (NH) subjects, listening with a CI simulation, can be used as a model to represent actual CI users for either music enjoyment ratings or music perception tasks. Research Design: A prospective, cross-sectional observational study. Original music stimuli (unprocessed) were presented to CI users, and music stimuli processed with CI-simulation software were presented to 20 NH listeners (CIsim). As a control, original music stimuli were also presented to five other NH listeners. All listeners appraised 24 musical excerpts, performed music perception tests, and filled out a musical background questionnaire. Music perception tests were the Appreciation of Music in Cochlear Implantees (AMICI), Montreal Battery for Evaluation of Amusia (MBEA), Melodic Contour Identification (MCI), and University of Washington Clinical Assessment of Music Perception (UW-CAMP). Study Sample: Twenty-five NH adults (22–56 yr old), recruited from the local and research communities, participated in the study. Ten adult CI users (46–80 yr old), recruited from the patient population of the local adult cochlear implant program, also participated in this study. Data Collection and Analysis: Musical excerpts were appraised using a seven-point rating scale, and music perception tests were scored as designed. Analysis of variance was performed on appraisal ratings, perception scores, and questionnaire data with listener group as a factor. Correlations were computed between musical appraisal ratings and perceptual scores on each music test. Results: Music is rated as more enjoyable by CI users than by the NH listeners hearing music through a simulation (CIsim), and the difference is statistically significant. For roughly half of the music perception tests, there are no statistically significant differences between the performance of the CI users and of the CIsim listeners. Generally, correlations between appraisal ratings and music perception scores are weak or nonexistent. Conclusions: NH adults listening to music that has been processed through a CI-simulation program are a reasonable model for actual CI users for many music perception skills, but not for rating musical enjoyment. For CI users, the apparent independence of music perception skills and music enjoyment (as assessed by appraisals) indicates that music enjoyment should not be assumed and should be examined explicitly.
19

Eriksson, Gillian I. "Developing Creative Thinking Through an Integrated Arts Programme for Talented Children." Gifted Education International 6, no. 1 (January 1989): 8–15. http://dx.doi.org/10.1177/026142948900600103.

Abstract:
The Schmerenbeck Multi-Racial Educational Centre provides extra-mural enrichment to challenge gifted and talented children. In terms of a broader concept of identification, the Centre differentiated a Creative Arts Programme for talented children which has been in operation since 1983. This aims to extend children beyond their technical competence to develop creative excellence; to encourage psychological growth in developing perceptual, cultural, social and self-awareness; to develop aesthetic judgement, critical thinking and self-evaluation; and to develop metacognitive processes. The design of the programme includes workshops in several art disciplines (fine art, dance, music, drama, writing, etc.); Integrative Courses (Communication, Study, Thinking, Research Skills); and Integrated Art (Creativity) workshops. This paper discusses the nature and development of creative thinking in relation to expression and communication in the arts based on the results of an evaluation study of an Integrated Arts Programme. In the Integrated Arts Workshops, professional artists and teachers are brought into contact with groups of talented children to give exposure, encourage participation and develop understanding of the nature of creative thinking as expressed through different art forms. Herein, a concept or idea, initiated by the children, is explored through sensory stimulation (developing perceptual skills); through creative problem-solving (developing cognitive processes); and through reflection (developing affective processes).
20

Lübeck, Tim, Hannes Helmholz, Johannes M. Arend, Christoph Pörschmann, and Jens Ahrens. "Perceptual Evaluation of Mitigation Approaches of Impairments due to Spatial Undersampling in Binaural Rendering of Spherical Microphone Array Data." Journal of the Audio Engineering Society 68, no. 6 (July 30, 2020): 428–40. http://dx.doi.org/10.17743/jaes.2020.0038.

21

Inabinet, Devin, Jan De La Cruz, Justin Cha, Kevin Ng, and Gabriella Musacchia. "Diotic and Dichotic Mechanisms of Discrimination Threshold in Musicians and Non-Musicians." Brain Sciences 11, no. 12 (November 30, 2021): 1592. http://dx.doi.org/10.3390/brainsci11121592.

Abstract:
The perception of harmonic complexes provides important information for musical and vocal communication. Numerous studies have shown that musical training and expertise are associated with better processing of harmonic complexes, however, it is unclear whether the perceptual improvement associated with musical training is universal to different pitch models. The current study addresses this issue by measuring discrimination thresholds of musicians (n = 20) and non-musicians (n = 18) to diotic (same sound to both ears) and dichotic (different sounds to each ear) sounds of four stimulus types: (1) pure sinusoidal tones, PT; (2) four-harmonic complex tones, CT; (3) iterated rippled noise, IRN; and (4) interaurally correlated broadband noise, called the “Huggins” or “dichotic” pitch, DP. Frequency difference limens (DLF) for each stimulus type were obtained via a three-alternative-forced-choice adaptive task requiring selection of the interval with the highest pitch, yielding the smallest perceptible fundamental frequency (F0) distance (in Hz) between two sounds. Music skill was measured by an online test of musical pitch, melody and timing maintained by the International Laboratory for Brain Music and Sound Research. Musicianship, length of music experience and self-evaluation of musical skill were assessed by questionnaire. Results showed musicians had smaller DLFs in all four conditions with the largest group difference in the dichotic condition. DLF thresholds were related to both subjective and objective musical ability. In addition, subjective self-report of musical ability was shown to be a significant variable in group classification. Taken together, the results suggest that music-related plasticity benefits multiple mechanisms of pitch encoding and that self-evaluation of musicality can be reliably associated with objective measures of perception.
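The adaptive three-alternative forced-choice task described here is typically run as a staircase that converges on the difference limen. The sketch below simulates a generic 2-down/1-up staircase with a hypothetical listener model; the step sizes, stopping rule, and listener function are assumptions, not the authors' protocol.

```python
# Hedged sketch: a simulated 2-down/1-up adaptive staircase for a frequency
# difference limen (DLF), converging on roughly the 70.7%-correct point.
# The simulated listener and step sizes are illustrative assumptions.
import random

def simulated_listener(delta_f_hz, true_dlf_hz=2.0):
    """Answers correctly more often when the F0 difference exceeds its DLF."""
    p_correct = 1 / 3 + (2 / 3) * (delta_f_hz / (delta_f_hz + true_dlf_hz))
    return random.random() < p_correct

def run_staircase(start_hz=16.0, factor=1.5, n_reversals=10):
    delta, direction, consecutive_correct = start_hz, -1, 0
    reversals = []
    while len(reversals) < n_reversals:
        correct = simulated_listener(delta)
        consecutive_correct = consecutive_correct + 1 if correct else 0
        if consecutive_correct == 2:      # two correct in a row -> harder
            new_direction, consecutive_correct = -1, 0
        elif not correct:                 # one wrong -> easier
            new_direction = +1
        else:
            continue                      # one correct -> no change yet
        if new_direction != direction:    # track direction changes (reversals)
            reversals.append(delta)
            direction = new_direction
        delta = delta / factor if new_direction < 0 else delta * factor
    return sum(reversals[2:]) / len(reversals[2:])  # mean of later reversals

print("Estimated DLF (Hz):", run_staircase())
```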
22

Duhovska, Jana, and Inga Millere. "EXPRESSIVE THERAPIES CONTINUUM-INFORMED EVALUATION OF THREE RESOURCE-ORIENTED RECEPTIVE AND ACTIVE MUSIC THERAPY TECHNIQUES IN CANCER PATIENTS IN PSYCHOSOCIAL REHABILITATION PROGRAMME." SOCIETY. INTEGRATION. EDUCATION. Proceedings of the International Scientific Conference 7 (May 20, 2020): 34. http://dx.doi.org/10.17770/sie2020vol7.5115.

Abstract:
The Expressive Therapies Continuum (ETC), a model proposed by Lusebrink and widely used in the arts therapies, stipulates that human beings perceive the world and process information in three modes – motion (kinesthetic-sensory perception), emotion (perceptual-emotional perception) and thought (cognitive-symbolic perception) – and that an optimally functioning person can operate freely in all of the modes, slide between the poles of each mode, and integrate elements from various modes and poles. Conversely, difficulty or inability to function in certain modes, or being stuck in them, can indicate malfunction and even psychopathology. In that case, purposeful integration of various functions, by offering expressive activities that promote utilisation of the different ETC functions, can foster optimal functioning. To find out the capacity of three resource-based music therapy activities – 1) a receptive music therapy activity, 2) semi-structured musical improvisation, and 3) a song-writing activity – to stimulate the utilisation of specific levels and polarities of the ETC, participants (n = 24 cancer patients taking part in a psychosocial rehabilitation programme) were asked to assess the elements of the ETC they applied while carrying out each of the activities. The results show that during the receptive music therapy activity participants mostly used the affective, symbolic and sensory functions; during the song-writing activity they mostly used all ETC functions except the sensory one; but musical improvisation provoked application of all the ETC functions and therefore emerged as the most integrative activity, capable of engaging all the modes of perception and information processing.
23

Ismail, Mostafa Refat. "Soniferous Architecture." International Journal of Art, Culture and Design Technologies 4, no. 1 (January 2014): 42–62. http://dx.doi.org/10.4018/ijacdt.2014010104.

Abstract:
“I call architecture frozen music” is a quote attributed to Johann Wolfgang von Goethe. His description of architecture may not remain apt for much longer, since many architectural structures are now considered soniferous. In an effort to rationalize thinking about positive soundscapes, move towards systematic decision making, and create tools for more creative planning techniques, this paper uses two methodologies to assess soundscape impacts. One approach is usually applied in manufacturing quality and product development, namely the Kano Model. The other approach takes a wider scope, relating the design of the soundscape and the effect of sound sculptures in objective terms. Because of the complexity of characterizing the soundscape, and its dependence on several perceptual aspects and interventions, the two models are mapped onto each other to form an evaluation tool for a specific sonic environment. The result can be seen as a complement to previous frameworks that shed light on sound emission or on factors influencing soundscape perception, or it can be used as a tool for understanding and assessing individual responses and evaluations. The value of having such a framework is that it helps to evaluate the overall effect of a successful intervention on the positive attributes of the soundscape.
24

Wise, Karen J., and John A. Sloboda. "Establishing an empirical profile of self-defined “tone deafness”: Perception, singing performance and self-assessment." Musicae Scientiae 12, no. 1 (March 2008): 3–26. http://dx.doi.org/10.1177/102986490801200102.

Abstract:
Research has suggested that around 17% of Western adults self-define as “tone deaf” (Cuddy, Balkwill, Peretz & Holden, 2005). But questions remain about the exact nature of tone deafness. One candidate for a formal definition is “congenital amusia” (Peretz et al., 2003), characterised by a dense music-specific perceptual deficit. However, most people self-defining as tone deaf are not congenitally amusic (Cuddy et al., 2005). According to Sloboda, Wise and Peretz (2005), the general population defines tone deafness as perceived poor singing ability, suggesting the need to extend investigations to production abilities and self-perceptions. The present research aims to discover if self-defined tone deaf people show any pattern of musical difficulties relative to controls, and to offer possible explanations for them (e.g. perceptual, cognitive, productive, motivational). 13 self-reporting “tone deaf” (TD) and 17 self-reporting “not tone deaf” (NTD) participants were assessed on a range of measures for musical perception, cognition, memory, production and self-ratings of performance. This paper reports on four measures to assess perception (Montreal Battery of Evaluation of Amusia), vocal production (songs and pitch-matching) and self-report. Results showed that the TD group performed significantly less well than the NTD group in all measures, but did not demonstrate the dense deficits characteristic of “congenital amusics”. Singing performance was influenced by context, with both groups performing better when accompanied than unaccompanied. The TD group self-rated the accuracy of their singing significantly lower than the NTD group, but not disproportionately so, and were less confident in their vocal quality. The TD participants are not facing an insurmountable difficulty, but are likely to improve with targeted intervention.
25

Yoshie, Michiko, Kazutoshi Kudo, and Tatsuyuki Ohtsuki. "Effects of Psychological Stress on State Anxiety, Electromyographic Activity, and Arpeggio Performance in Pianists." Medical Problems of Performing Artists 23, no. 3 (September 1, 2008): 120–32. http://dx.doi.org/10.21091/mppa.2008.3024.

Abstract:
The present study examined the effects of psychological stress, as manipulated by performance evaluation, on the cognitive, physiological, and behavioral components of music performance anxiety (MPA) and performance quality. Twelve skilled pianists (five women, seven men) aged 21.9 ± 3.3 yrs performed arpeggios on a digital piano at the metronome-paced fastest possible tempo under the evaluation and no-evaluation conditions. Measurements were made of self-reported state anxiety, heart rate (HR), sweat rate (SR), and electromyographic (EMG) activity from eight arm and shoulder muscles, and MIDI signals were obtained. The increases in self-reported anxiety score, HR, and SR in the evaluation condition confirmed the effectiveness of stress manipulation. The EMG activity of all the muscles investigated significantly increased from the no-evaluation to evaluation condition, suggesting that psychological stress can add to the risk of playing-related musculoskeletal disorders. Furthermore, the elevated muscle activity in the forearm was accompanied by increased key velocities. We also obtained the first evidence of increased arm stiffness associated with MPA by estimating the cocontraction levels of antagonist muscles in the forearm and upper arm. Consistent with the three systems model of anxiety, the three MPA components were moderately intercorrelated. Participants with high trait anxiety showed stronger correlations between the self-reported anxiety score and other objective measures, which indicated their heightened perceptual sensitivity to physiological and behavioral changes caused by psychological stress. These results provide some practical implications for understanding and coping with MPA.
26

Solekhan, Solekhan, Yoyon K. Suprapto, and Wirawan Wirawan. "Impulsive spike enhancement on gamelan audio using harmonic percussive separation." International Journal of Electrical and Computer Engineering (IJECE) 9, no. 3 (June 1, 2019): 1700. http://dx.doi.org/10.11591/ijece.v9i3.pp1700-1710.

Abstract:
Impulsive spikes often occur in audio recordings of gamelan, and most existing methods simply reduce them. This research offers a new method for handling impulsive spikes in gamelan music that is able to reduce, eliminate, or even strengthen them. The process separates the audio into harmonic and percussive components; the percussive component is boosted or attenuated, and the result is then recombined with the harmonic component. In a similarity test using the Cosine Distance measure, spike enhancement through Harmonic Percussive Source Separation (HPSS) yielded an average Cosine Distance of 0.0004, i.e., very similar to the original signal, while the Mean Square Error (MSE) also had a very small average value of 0.0004. In Perceptual Evaluation of Audio Quality (PEAQ) testing, the HPSS-based method achieved better quality, with an average Objective Difference Grade (ODG) of -0.24, i.e., imperceptible.
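The processing chain described, splitting the recording into harmonic and percussive components, rescaling the percussive part, and recombining, can be sketched with librosa's HPSS routine. The file name and gain factor below are placeholders.

```python
# Hedged sketch: harmonic-percussive separation with an adjustable percussive
# gain, then recombination -- the general processing chain the abstract
# describes. File name and gain factor are placeholders.
import librosa
import soundfile as sf

y, sr = librosa.load("gamelan_take.wav", sr=None)   # hypothetical recording
harmonic, percussive = librosa.effects.hpss(y)      # median-filter-based HPSS

percussive_gain = 1.8                               # >1 strengthens spikes,
enhanced = harmonic + percussive_gain * percussive  # <1 attenuates them

sf.write("gamelan_enhanced.wav", enhanced, sr)
```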
27

Mamykina, Olena. "Theoretical and methodological aspects of the development of the future Music teachers’ artistic taste in the process of teaching pop singing." Scientific bulletin of South Ukrainian National Pedagogical University named after K. D. Ushynsky 2020, no. 3 (132) (September 24, 2020): 98–105. http://dx.doi.org/10.24195/2617-6688-2020-3-11.

Abstract:
The article is devoted to the scientific development of the problem related to the development of an artistic taste of the future teachers specialised in Musical Art. The purpose of the article is to theoretically and methodologically substantiate the development process of an artistic taste of the future teachers specialised in Musical Art in the process of teaching pop singing. The purpose is realised through the implementation of relevant tasks using the methods of theoretical research: analysis, synthesis, deduction, induction, generalisation and extrapolation. The article considers the category "artistic taste" as a person's ability developed by social practice to evaluate aesthetic phenomena, to distinguish beautiful things from ugly ones. The artistic taste of the future teachers of Musical Art is defined as an individual, socially conditioned system of evaluation of phenomena reflected in works of art, aimed at widening the worldview of an individual. The component structure of the artistic taste of the future teachers specialised in Musical Art consists of four components: personal-motivational, sensory-perceptual, intellectual-interpretive and reflexive-projective. Professional training of the future Music teachers is considered as a platform for the formation of their artistic taste. We find the specifics of training future professionals in pop singing particularly important in the context of the study. The pop variety of vocal music is recognised as the one that provides for effective pedagogical influence; in particular, it is aimed at forming the artistic taste of the younger generation, based on the needs, interests and perception level of most students. A set of scientific approaches is considered as a methodological basis for the formation of the future Music teachers’ artistic taste in the process of teaching them pop singing: axiological, student-centred and holistic approaches. The study also presents pedagogical principles, the implementation of which ensures the formation of the future Music teachers’ artistic taste in the process of teaching them pop singing.
28

Bai, Mingsian R., and Chia-Hao Kuo. "Acoustic Source Localization and Deconvolution-Based Separation." Journal of Computational Acoustics 23, no. 02 (May 7, 2015): 1550008. http://dx.doi.org/10.1142/s0218396x15500083.

Abstract:
This paper examines two fundamental issues in sound field analysis: acoustic source localization and separation. Algorithms are developed to locate and separate acoustic signals on the basis of plane-wave decomposition. In the localization stage, the directions of plane waves are determined using either the minimum variance distortionless response (MVDR) method or the multiple signal classification (MUSIC) method. For broadband scenarios, coherent and incoherent techniques are utilized in the localization procedure. In the separation stage, two approaches with overdetermined and underdetermined settings can be employed. In the overdetermined approach, Tikhonov regularization (TIKR) is utilized to recover the source signals. In the underdetermined approach, the steering matrix is augmented by including the directions that have been determined in the localization stage. Hence, the separation problem is formulated as a compressive sensing (CS) problem which can be effectively solved by using convex (CVX) optimization. Simulations and experiments are conducted for a 24-element circular array. Objective tests using perceptual evaluation of speech quality (PESQ) and subjective listening tests demonstrate that the proposed methods yield speech signals that are well separated and of improved quality, as compared to the mixed signals.
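For the localization stage, the MVDR method scans candidate directions and evaluates an inverse quadratic form of the spatial covariance matrix. A minimal narrowband sketch for a uniform circular array follows; the geometry, frequency, and simulated snapshots are illustrative assumptions, not the paper's experimental setup.

```python
# Hedged sketch: narrowband MVDR pseudo-spectrum over candidate azimuths for a
# uniform circular array. Array geometry, frequency, and the snapshot matrix X
# are illustrative assumptions.
import numpy as np

c, f, M, radius = 343.0, 1000.0, 24, 0.1            # sound speed, Hz, mics, m
mic_angles = 2 * np.pi * np.arange(M) / M
mic_xy = radius * np.column_stack([np.cos(mic_angles), np.sin(mic_angles)])

def steering_vector(azimuth_rad):
    direction = np.array([np.cos(azimuth_rad), np.sin(azimuth_rad)])
    delays = mic_xy @ direction / c                  # plane-wave delays (s)
    return np.exp(-2j * np.pi * f * delays)

def mvdr_spectrum(X, azimuths):
    R = X @ X.conj().T / X.shape[1]                  # spatial covariance estimate
    R_inv = np.linalg.inv(R + 1e-6 * np.eye(M))      # diagonal loading
    powers = []
    for a in azimuths:
        d = steering_vector(a)
        powers.append(1.0 / np.real(d.conj() @ R_inv @ d))
    return np.array(powers)

# Simulated snapshots: one source at 40 degrees plus sensor noise
rng = np.random.default_rng(2)
s = rng.standard_normal(200) + 1j * rng.standard_normal(200)
X = np.outer(steering_vector(np.deg2rad(40)), s)
X += 0.1 * (rng.standard_normal((M, 200)) + 1j * rng.standard_normal((M, 200)))

azimuths = np.deg2rad(np.arange(0, 360, 2))
best = azimuths[np.argmax(mvdr_spectrum(X, azimuths))]
print("Estimated azimuth (deg):", np.rad2deg(best))
```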
29

Zhao, Hua, Aibo Wang, and Ying Fan. "Research on the Identification and Evaluation of Aerobics Movements Based on Deep Learning." Scientific Programming 2021 (December 21, 2021): 1–10. http://dx.doi.org/10.1155/2021/6433260.

Abstract:
This study uses a deep learning approach to provide insight into aerobics movement recognition, and the resulting model is applied to that task. By embedding lightweight multi-scale convolution modules in a 3D convolutional residual network, the local receptive field in each layer of the network is enlarged, fine-grained multi-scale features of the target are extracted, and the characterization of the target is significantly improved while model complexity is substantially reduced. A channel attention mechanism then extracts the key features from the multi-scale features. To create a dual-frame-rate detection model, a fast-slow design is incorporated into the 3D convolutional network: the model samples the video at different frame rates to capture spatial semantic information and motion information, and the two streams are fused through lateral connections. The resulting features are fed into a temporal detection network to identify actions over time, and a behavior recognition system is built around the network model to demonstrate its applicability. The average scores of students in the experimental group were significantly higher than those in the control group in seven areas: set accuracy, movement amplitude, movement strength, body coordination, coordination of movement and music, movement expression, and aesthetics; the average scores for movement proficiency and body control were also higher in the experimental group, but these differences were not significant. The differences in the eight indicators between the experimental group and the pre-experimental group were not significant, indicating that intensive rhythm training improves secondary school students' comprehension, proficiency, and presentation of aerobics sets.
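The channel attention mechanism mentioned in this abstract is commonly realized as a squeeze-and-excitation block that reweights feature channels by their global context. The sketch below shows that common formulation for 3-D (video) feature maps in PyTorch; it is an assumed stand-in, not the paper's exact module.

```python
# Hedged sketch: a squeeze-and-excitation style channel attention block for
# 3-D (video) feature maps, one common way to realize a "channel attention
# mechanism". Not the paper's exact architecture.
import torch
import torch.nn as nn

class ChannelAttention3D(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)        # squeeze: global context
        self.fc = nn.Sequential(                   # excitation: channel weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _, _ = x.shape
        weights = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1, 1)
        return x * weights                         # reweight channels

features = torch.randn(2, 64, 8, 28, 28)           # (batch, C, T, H, W)
print(ChannelAttention3D(64)(features).shape)
```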
30

Bruderer, Michael J., Martin F. McKinney, and Armin Kohlrausch. "The perception of structural boundaries in melody lines of Western popular music." Musicae Scientiae 13, no. 2 (September 2009): 273–313. http://dx.doi.org/10.1177/102986490901300204.

Abstract:
Two experiments were conducted to investigate the perception of structural boundaries in six popular music songs. In the segmentation experiment, participants were asked to indicate perceived segment boundaries in monophonic representations of the songs, synthesized from the MIDI score. In the salience rating experiment, participants were asked to rate the salience of a number of boundaries selected from the outcome of the segmentation experiment, and to describe the perceptual cues for each boundary. The segmentation experiment showed that there is a wide variety in the number and temporal positions of perceived boundaries across participants. However, certain boundaries in the music are indicated by nearly all participants. The salience rating experiment showed a moderate correlation between participants’ boundary salience ratings. Comparing the outcome of the two experiments, we found a significant correlation between the frequency of boundary indications and the corresponding salience rating of that boundary. These findings suggest that both methods can be used equally well for evaluating the perceptual boundaries. The perceptual boundaries were also compared to boundaries predicted by three musicological models. The comparison of the perceptual boundaries with the predicted boundaries showed a moderate correlation between the perceptual and predicted boundaries.
31

Tahmasebi, Sina, Manuel Segovia-Martinez, and Waldo Nogueira. "Optimization of Sound Coding Strategies to Make Singing Music More Accessible for Cochlear Implant Users." Trends in Hearing 27 (January 2023): 233121652211480. http://dx.doi.org/10.1177/23312165221148022.

Abstract:
Cochlear implants (CIs) are implantable medical devices that can partially restore hearing to people suffering from profound sensorineural hearing loss. While these devices provide good speech understanding in quiet, many CI users face difficulties when listening to music. Reasons include poor spatial specificity of electric stimulation, limited transmission of spectral and temporal fine structure of acoustic signals, and restrictions in the dynamic range that can be conveyed via electric stimulation of the auditory nerve. The coding strategies currently used in CIs are typically designed for speech rather than music. This work investigates the optimization of CI coding strategies to make singing music more accessible to CI users. The aim is to reduce the spectral complexity of music by selecting fewer bands for stimulation, attenuating the background instruments by strengthening a noise reduction algorithm, and optimizing the electric dynamic range through a back-end compressor. The optimizations were evaluated through both objective and perceptual measures of speech understanding and melody identification of singing voice with and without background instruments, as well as music appreciation questionnaires. Consistent with the objective measures, results gathered from the perceptual evaluations indicated that reducing the number of selected bands and optimizing the electric dynamic range significantly improved speech understanding in music. Moreover, results obtained from questionnaires show that the new music back-end compressor significantly improved music enjoyment. These results have potential as a new CI program for improved singing music perception.
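Two of the manipulations described, stimulating fewer spectral bands per frame and compressing the electric dynamic range, can be illustrated generically. The band count, threshold, and compression ratio in the sketch below are assumptions, not parameters of an actual clinical coding strategy.

```python
# Hedged sketch: generic "n-of-m" band selection plus a simple static
# compressor applied to per-band envelopes. Parameters are illustrative
# assumptions, not an actual cochlear implant coding strategy.
import numpy as np

def select_n_of_m(band_envelopes: np.ndarray, n: int) -> np.ndarray:
    """Keep only the n largest-envelope bands in each frame (others zeroed)."""
    out = np.zeros_like(band_envelopes)
    for t in range(band_envelopes.shape[1]):
        top = np.argsort(band_envelopes[:, t])[-n:]
        out[top, t] = band_envelopes[top, t]
    return out

def compress(envelopes: np.ndarray, threshold: float = 0.1, ratio: float = 4.0):
    """Static compression above a threshold to reduce the dynamic range."""
    compressed = envelopes.copy()
    over = envelopes > threshold
    compressed[over] = threshold + (envelopes[over] - threshold) / ratio
    return compressed

rng = np.random.default_rng(3)
envelopes = rng.random((22, 100))        # 22 bands x 100 frames (placeholder)
processed = compress(select_n_of_m(envelopes, n=8))
print(processed.shape)
```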
32

Schepker, Henning, Florian Denk, Birger Kollmeier, and Simon Doclo. "Acoustic Transparency in Hearables - Perceptual Sound Quality Evaluations." Journal of the Audio Engineering Society 68, no. 7/8 (September 4, 2020): 495–507. http://dx.doi.org/10.17743/jaes.2020.0045.

33

Davis, Nicholas. "Human-Computer Co-Creativity: Blending Human and Computational Creativity." Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment 9, no. 6 (June 30, 2021): 9–12. http://dx.doi.org/10.1609/aiide.v9i6.12603.

Abstract:
This paper describes a thesis exploring how computer programs can collaborate as equals in the artistic creative process. The proposed system, CoCo Sketch, encodes some rudimentary stylistic rules of abstract sketching and music theory to contribute supplemental lines and music while the user sketches. We describe a three-part research method that includes defining rudimentary stylistic rules for abstract line drawing, exploring the interaction design for artistic improvisation with a computer, and evaluating how CoCo Sketch affects the artistic creative process. We report on the initial results of early investigations into artistic style that describe the cognitive, perceptual, and behavioral processes abstract artists use while making art.
34

Honing, Henkjan. "Computational Modeling of Music Cognition: A Case Study on Model Selection." Music Perception 23, no. 5 (June 2006): 365–76. http://dx.doi.org/10.1525/mp.2006.23.5.365.

Abstract:
While the most common way of evaluating a computational model is to see whether it shows a good fit with the empirical data, recent literature on theory testing and model selection criticizes the assumption that this is actually strong evidence for the validity of a model. This article presents a case study from music cognition (modeling the ritardandi in music performance) and compares two families of computational models (kinematic and perceptual) using three different model selection criteria: goodness-of-fit, model simplicity, and the degree of surprise in the predictions. In the light of what counts as strong evidence for a model’s validity—namely that it makes limited range, nonsmooth, and relatively surprising predictions—the perception-based model is preferred over the kinematic model.
35

Neuhoff, Hans, Rainer Polak, and Timo Fischinger. "Perception and Evaluation of Timing Patterns in Drum Ensemble Music from Mali." Music Perception 34, no. 4 (April 1, 2017): 438–51. http://dx.doi.org/10.1525/mp.2017.34.4.438.

Abstract:
Polak’s (2010) chronometric analyses of Malian jembe music suggested that the characteristic “feel” of individual pieces rests upon nonisochronous subdivisions of the beat. Each feel is marked by a specific pattern of two or three different subdivisional pulses—these being either short, medium, or long. London (2010) called the possibility of more than two different pulse classes into question on psychological and theoretical grounds. To shed light on this issue, 23 professional Malian percussionists and dancers were presented with timing-manipulated phrases from a piece of Malian drumming music called “Manjanin.” In a pairwise comparison experiment, participants were asked: (1) if the items of each pair were same or different, and (2) if different, which of the two was the better example of the characteristic rhythm of Manjanin. While most contrastive pairs were well distinguished and produced clear preference ratings, participants were unable to distinguish short-medium-long patterns from short-long-long patterns, and both were preferred to all other manipulations. This supports London’s claim that, perceptually, there are only two pulse classes. We discuss further implications of these findings for music theory, involving beat subdivision, tempo effects, microtiming, and expressive variation, as well as methodological issues.
36

Gfeller, Kate, Jacob Oleson, John F. Knutson, Patrick Breheny, Virginia Driscoll, and Carol Olszewski. "Multivariate Predictors of Music Perception and Appraisal by Adult Cochlear Implant Users." Journal of the American Academy of Audiology 19, no. 02 (February 2008): 120–34. http://dx.doi.org/10.3766/jaaa.19.2.3.

Abstract:
The research examined whether performance by adult cochlear implant recipients on a variety of recognition and appraisal tests derived from real-world music could be predicted from technological, demographic, and life experience variables, as well as speech recognition scores. A representative sample of 209 adults implanted between 1985 and 2006 participated. Using multiple linear regression models and generalized linear mixed models, sets of optimal predictor variables were selected that effectively predicted performance on a test battery that assessed different aspects of music listening. These analyses established the importance of distinguishing between the accuracy of music perception and the appraisal of musical stimuli when using music listening as an index of implant success. Importantly, neither device type nor processing strategy predicted music perception or music appraisal. Speech recognition performance was not a strong predictor of music perception, and primarily predicted music perception when the test stimuli included lyrics. Additionally, limitations in the utility of speech perception in predicting musical perception and appraisal underscore the utility of music perception as an alternative outcome measure for evaluating implant outcomes. Music listening background, residual hearing (i.e., hearing aid use), cognitive factors, and some demographic factors predicted several indices of perceptual accuracy or appraisal of music.
APA, Harvard, Vancouver, ISO, and other styles
37

Andreopoulou, Areti, and Brian F. G. Katz. "Perceptual Impact on Localization Quality Evaluations of Common Pre-Processing for Non-Individual Head-Related Transfer Functions." Journal of the Audio Engineering Society 70, no. 5 (May 11, 2022): 340–54. http://dx.doi.org/10.17743/jaes.2022.0008.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Vos, Piet G., and Paul P. Verkaart. "Inference of Mode in Melodies." Music Perception 17, no. 2 (1999): 223–39. http://dx.doi.org/10.2307/40285892.

Full text
Abstract:
Listeners' ability to infer the mode (major vs. minor) of a piece of Western tonal music was examined. Twenty-four subjects, divided into two groups according to their level of musical expertise, evaluated 11 musical stimuli selected from J. S. Bach's "Well-Tempered Clavier". The stimuli included both unambiguous and ambiguous examples of the two modes, as well as one example of a modulation (from minor into major). The stimuli consisted of unaccompanied melodic openings of compositions, each containing 10 tones. Stimulus presentation and evaluation took place in nine progressively longer steps, starting with presentation of the first two tones, followed by their evaluation on a continuous scale (0 = "extremely minor", 100 = "extremely major"), and ending with evaluation of the complete stimulus. The results showed that mode inference followed the prescribed modes and tended to become more definite with increasing stimulus length. Experts were generally more definite in their inferences than nonexperts. Surprisingly, the temporal structure of the stimuli also appeared to affect mode inference. The degree of definiteness of mode judgments did not differ systematically between the two modes. It was concluded that listeners are able to infer the mode of a piece of music in the absence of explicit harmonic cues. The generalizability of the results to music from later periods in Western music history and the impact of different musical genres on mode inference are discussed.
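The study measured human judgments rather than an algorithm, but for readers curious what automatic mode inference from an unaccompanied melody can look like, the sketch below applies the classic Krumhansl-Schmuckler profile-correlation approach (a different, swapped-in technique, not the authors' method) to a hypothetical ten-tone opening.

```python
# Illustrative only: correlate a melody's pitch-class distribution with the
# Krumhansl-Kessler major and minor profiles over all 12 transpositions.
import numpy as np

MAJOR = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09, 2.52, 5.19, 2.39, 3.66, 2.29, 2.88])
MINOR = np.array([6.33, 2.68, 3.52, 5.38, 2.60, 3.53, 2.54, 4.75, 3.98, 2.69, 3.34, 3.17])

def mode_score(midi_pitches):
    """Return the best correlation of the melody's pitch-class profile with the
    major and minor key profiles across all 12 possible tonics."""
    pcs = np.bincount(np.mod(midi_pitches, 12), minlength=12).astype(float)
    best = {}
    for name, profile in (("major", MAJOR), ("minor", MINOR)):
        corrs = [np.corrcoef(pcs, np.roll(profile, k))[0, 1] for k in range(12)]
        best[name] = max(corrs)
    return best

# First ten tones of a hypothetical minor-mode opening (roughly A minor).
opening = [69, 71, 72, 74, 76, 72, 69, 68, 69, 71]
print(mode_score(opening))  # a higher "minor" value suggests a minor reading
```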
APA, Harvard, Vancouver, ISO, and other styles
39

Soraghan, Sean, Felix Faire, Alain Renaud, and Ben Supper. "A New Timbre Visualization Technique Based on Semantic Descriptors." Computer Music Journal 42, no. 1 (April 2018): 23–36. http://dx.doi.org/10.1162/comj_a_00449.

Full text
Abstract:
This article introduces the concept of Sound Signature audio visualization, a new form of amplitude waveform that also visualizes perceptually salient spectral features and their evolution over time. A brief review of existing research into timbre description and visualization is given. This is followed by an in-depth description of the algorithm. Rationale is given for the various visual mappings with reference to existing literature. The results of an online subjective evaluation survey are reported and discussed. The survey examined user preferences for the visual mappings used in Sound Signature visualizations. Results show a preference for inverse mapping of spectral centroid to the first component of the hue, saturation, value (HSV) color space.
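A minimal sketch of the mapping the survey favored, assuming a simple min-max normalization of the spectral centroid and a plain inverse mapping onto HSV hue; the exact scaling used in Sound Signature is not reproduced here, and the test signal is synthetic.

```python
# Per-frame spectral centroid mapped inversely to HSV hue, then converted to RGB.
import numpy as np
import librosa
import colorsys

sr = 22050
y = librosa.chirp(fmin=200, fmax=4000, sr=sr, duration=2.0)   # synthetic test signal
centroid = librosa.feature.spectral_centroid(y=y, sr=sr)[0]

# Normalise to 0..1, then invert so that brighter (higher-centroid) frames
# receive lower hue values.
norm = (centroid - centroid.min()) / (centroid.max() - centroid.min() + 1e-9)
hue = 1.0 - norm

rgb_per_frame = np.array([colorsys.hsv_to_rgb(h, 1.0, 1.0) for h in hue])
print(rgb_per_frame[:5])   # one RGB colour per analysis frame
```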
APA, Harvard, Vancouver, ISO, and other styles
40

Leite, Harlei Miguel de Arruda, Sarah Negreiros de Carvalho, Thiago Bulhões da Silva Costa, Romis Attux, Heiko Horst Hornung, and Dalton Soares Arantes. "Analysis of User Interaction with a Brain-Computer Interface Based on Steady-State Visually Evoked Potentials: Case Study of a Game." Computational Intelligence and Neuroscience 2018 (2018): 1–10. http://dx.doi.org/10.1155/2018/4920132.

Full text
Abstract:
This paper presents a systematic analysis of a game controlled by a Brain-Computer Interface (BCI) based on Steady-State Visually Evoked Potentials (SSVEP). The objective is to understand BCI systems from the Human-Computer Interface (HCI) point of view, by observing how the users interact with the game and evaluating how the interface elements influence the system performance. The interactions of 30 volunteers with our computer game, named “Get Coins,” through a BCI based on SSVEP, have generated a database of brain signals and the corresponding responses to a questionnaire about various perceptual parameters, such as visual stimulation, acoustic feedback, background music, visual contrast, and visual fatigue. Each one of the volunteers played one match using the keyboard and four matches using the BCI, for comparison. In all matches using the BCI, the volunteers achieved the goals of the game. Eight of them achieved a perfect score in at least one of the four matches, showing the feasibility of the direct communication between the brain and the computer. Despite this successful experiment, adaptations and improvements should be implemented to make this innovative technology accessible to the end user.
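The abstract does not detail the SSVEP classifier, so the sketch below shows a common canonical-correlation (CCA) approach to SSVEP target detection on synthetic EEG; the frequencies, sampling rate, and channel count are assumptions, not the game's actual settings.

```python
# CCA between an EEG window and sine/cosine references at each candidate
# flicker frequency; the best-correlated frequency is the predicted target.
import numpy as np
from sklearn.cross_decomposition import CCA

FS = 250                               # sampling rate (Hz), hypothetical
STIM_FREQS = [6.0, 7.5, 10.0, 12.0]    # candidate flicker frequencies

def reference_signals(freq, n_samples, fs, harmonics=2):
    t = np.arange(n_samples) / fs
    refs = []
    for h in range(1, harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    return np.column_stack(refs)

def classify_ssvep(eeg_window):
    """eeg_window: (n_samples, n_channels). Returns the most likely frequency."""
    scores = []
    for f in STIM_FREQS:
        refs = reference_signals(f, eeg_window.shape[0], FS)
        u, v = CCA(n_components=1).fit_transform(eeg_window, refs)
        scores.append(np.corrcoef(u[:, 0], v[:, 0])[0, 1])
    return STIM_FREQS[int(np.argmax(scores))]

# Synthetic 2-second, 8-channel window dominated by a 10 Hz response.
rng = np.random.default_rng(1)
t = np.arange(2 * FS) / FS
eeg = rng.normal(0, 1, (2 * FS, 8)) + np.sin(2 * np.pi * 10.0 * t)[:, None]
print(classify_ssvep(eeg))   # expected: 10.0
```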
APA, Harvard, Vancouver, ISO, and other styles
41

Collins, Tom, Robin Laney, Alistair Willis, and Paul H. Garthwaite. "Developing and evaluating computational models of musical style." Artificial Intelligence for Engineering Design, Analysis and Manufacturing 30, no. 1 (April 30, 2015): 16–43. http://dx.doi.org/10.1017/s0890060414000687.

Full text
Abstract:
Stylistic composition is a creative musical activity, in which students as well as renowned composers write according to the style of another composer or period. We describe and evaluate two computational models of stylistic composition, called Racchman-Oct2010 (random constrained chain of Markovian nodes, October 2010) and Racchmaninof-Oct2010 (Racchman with inheritance of form). The former is a constrained Markov model, and the latter embeds this model in an analogy-based design system. Racchmaninof-Oct2010 applies a pattern discovery algorithm called SIACT and a perceptually validated formula for rating pattern importance, to guide the generation of a new target design from an existing source design. A listening study is reported concerning human judgments of music excerpts that are, to varying degrees, in the style of mazurkas by Frédéric Chopin (1810–1849). The listening study acts as an evaluation of the two computational models and a third, benchmark system, called Experiments in Musical Intelligence. Judges' responses indicate that some aspects of musical style, such as phrasing and rhythm, are being modeled effectively by our algorithms. Judgments are also used to identify areas for future improvements. We discuss the broader implications of this work for the fields of engineering and design, where there is potential to make use of our models of hierarchical repetitive structure.
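A drastically simplified sketch of the constrained-Markov idea behind Racchman-Oct2010: learn first-order pitch transitions from a toy corpus and accept only continuations that satisfy a constraint (here, a pitch range). The real system uses richer state definitions, additional constraints, and inheritance of form.

```python
# Constrained first-order Markov generation over MIDI pitches (toy example).
import random
from collections import defaultdict

corpus = [
    [60, 62, 64, 65, 67, 65, 64, 62, 60],
    [60, 64, 67, 72, 67, 64, 60],
    [62, 64, 65, 67, 69, 67, 65, 64],
]

transitions = defaultdict(list)
for melody in corpus:
    for a, b in zip(melody, melody[1:]):
        transitions[a].append(b)

def generate(start, length, low=58, high=74, seed=0):
    random.seed(seed)
    out = [start]
    while len(out) < length:
        # Only continuations inside the allowed range are admissible.
        options = [p for p in transitions.get(out[-1], []) if low <= p <= high]
        if not options:            # dead end under the constraint: restart pitch
            options = [start]
        out.append(random.choice(options))
    return out

print(generate(60, 12))
```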
APA, Harvard, Vancouver, ISO, and other styles
42

Simonetta, Federico, Federico Avanzini, and Stavros Ntalampiras. "A perceptual measure for evaluating the resynthesis of automatic music transcriptions." Multimedia Tools and Applications, April 13, 2022. http://dx.doi.org/10.1007/s11042-022-12476-0.

Full text
Abstract:
This study focuses on the perception of music performances when contextual factors, such as room acoustics and instrument, change. We propose to distinguish the concept of "performance" from that of "interpretation", which expresses the "artistic intention". Towards assessing this distinction, we carried out an experimental evaluation in which 91 subjects were invited to listen to various audio recordings created by resynthesizing MIDI data obtained through Automatic Music Transcription (AMT) systems and a sensorized acoustic piano. During the resynthesis, we simulated different contexts and asked listeners to evaluate how much the interpretation changes when the context changes. Results show that: (1) the MIDI format alone is not able to completely capture the artistic intention of a music performance; (2) the usual objective evaluation measures based on MIDI data show low correlations with the average subjective evaluation. To bridge this gap, we propose a novel measure that is meaningfully correlated with the outcome of the tests. In addition, we investigate multimodal machine learning by providing a new score-informed AMT method and propose an approximation algorithm for the p-dispersion problem.
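A small sketch of the correlation check the abstract alludes to: comparing an objective transcription-quality measure against the mean subjective rating per excerpt. The numbers below are invented for illustration only.

```python
# Correlate an objective measure with mean subjective ratings per excerpt.
import numpy as np
from scipy.stats import pearsonr, spearmanr

objective_measure = np.array([0.91, 0.85, 0.78, 0.88, 0.70, 0.66, 0.95, 0.80])
mean_subjective   = np.array([4.1, 3.8, 3.0, 4.0, 2.6, 2.4, 4.5, 3.2])

print("Pearson r:", pearsonr(objective_measure, mean_subjective)[0])
print("Spearman rho:", spearmanr(objective_measure, mean_subjective)[0])
```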
APA, Harvard, Vancouver, ISO, and other styles
43

Gupta, Chitralekha, Haizhou Li, and Ye Wang. "A technical framework for automatic perceptual evaluation of singing quality." APSIPA Transactions on Signal and Information Processing 7 (2018). http://dx.doi.org/10.1017/atsip.2018.10.

Full text
Abstract:
Human experts evaluate singing quality based on many perceptual parameters such as intonation, rhythm, and vibrato, with reference to music theory. We previously proposed the Perceptual Evaluation of Singing Quality (PESnQ) framework, which incorporates acoustic features related to these perceptual parameters in combination with the cognitive modeling concept of the telecommunication standard Perceptual Evaluation of Speech Quality. In this study, we present a further study of the PESnQ framework aimed at approximating human judgments. First, we find that a linear combination of the individual perceptual-parameter human scores can predict the overall singing-quality judgment, providing a human parametric judgment equation. Next, predictions of the individual perceptual-parameter scores from the PESnQ acoustic features show high correlations with the respective human scores, which enables more meaningful feedback to learners. Finally, we compare early fusion and late fusion of the acoustic features in predicting the overall human scores, and find that late fusion is superior to early fusion. This work underlines the importance of modeling human perception in automatic singing quality assessment.
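A minimal sketch of the "human parametric judgment equation" idea: fit a linear combination of per-parameter scores (late fusion) to predict the overall judgment. The data and weights below are synthetic, not the PESnQ coefficients.

```python
# Late fusion: combine per-parameter scores linearly to predict the overall score.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
n_singers = 100
X = rng.uniform(1, 5, (n_singers, 3))        # columns: intonation, rhythm, vibrato
true_weights = np.array([0.5, 0.3, 0.2])     # made-up ground-truth weighting
overall = X @ true_weights + rng.normal(0, 0.1, n_singers)

model = LinearRegression().fit(X, overall)
print("learned weights:", model.coef_)
print("R^2:", model.score(X, overall))
```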
APA, Harvard, Vancouver, ISO, and other styles
44

Merrill, Julia. "Auditory perceptual assessment of voices: Examining perceptual ratings as a function of voice experience." Current Psychology, January 27, 2022. http://dx.doi.org/10.1007/s12144-022-02734-7.

Full text
Abstract:
Understanding voice usage is vital to our understanding of human interaction. What is known about the auditory perceptual evaluation of voices comes mainly from studies of voice professionals, who evaluate operatic/lyrical singing in specific contexts. This is surprising, as recordings of singing voices from different musical styles are an omnipresent phenomenon, evoking reactions in listeners with various levels of expertise. Understanding how untrained listeners perceive and describe voices will open up new research possibilities and enhance vocal communication between listeners. Here, three studies with a mixed-methods approach aimed to: (1) evaluate the ability of untrained listeners to describe voices, and (2) determine which auditory features were most salient in participants' discrimination of voices. In an interview (N = 20) and a questionnaire study (N = 48), free voice descriptions by untrained listeners of 23 singing voices, primarily from popular music, were compared with terms used by voice professionals, revealing that participants were able to describe voices using vocal characteristics from essential categories indicating sound quality, pitch changes, articulation, and variability in expression. Nine items were derived and used in an online survey for the evaluation of six voices by trained and untrained listeners in a German (N = 216) and an English (N = 50) sample, revealing that neither language nor expertise affected the assessment of the singers. A discriminant analysis showed that roughness and tension were important features for voice discrimination. The measure of vocal expression created in the current study will be informative for studying voice perception and evaluation more generally.
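A sketch of the kind of discriminant analysis reported, asking whether perceptual ratings separate the rated singers; the ratings, item names, and sample sizes below are synthetic stand-ins for the study's nine items and data.

```python
# Linear discriminant analysis on simulated per-singer rating profiles.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(7)
n_raters, n_singers = 60, 6
singer_ids = np.repeat(np.arange(n_singers), n_raters)

# Each singer gets its own mean profile on three illustrative items.
profiles = rng.uniform(1, 7, (n_singers, 3))   # roughness, tension, pitch variability
ratings = profiles[singer_ids] + rng.normal(0, 0.8, (n_singers * n_raters, 3))

lda = LinearDiscriminantAnalysis().fit(ratings, singer_ids)
print("classification accuracy:", lda.score(ratings, singer_ids))
print("discriminant weights:\n", lda.coef_)
```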
APA, Harvard, Vancouver, ISO, and other styles
45

Bernardo, Gonçalo, and Gilberto Bernardes. "Leveraging compatibility and diversity in computer-aided music mashup creation." Personal and Ubiquitous Computing, December 23, 2022. http://dx.doi.org/10.1007/s00779-022-01702-z.

Full text
Abstract:
We advance Mixmash-AIS, a multimodal optimization model for music mashup creation from loop recombination at scale. Our motivation is to (1) tackle current scalability limitations in state-of-the-art (brute-force) computational mashup models while enforcing (2) the compatibility of audio loops and (3) a pool of diverse mashups that can accommodate user preferences. To this end, we adopt the artificial immune system (AIS) opt-aiNet algorithm to efficiently compute a population of compatible and diverse music mashups from loop recombinations. Optimal mashups correspond to local minima in a feature space representing harmonic, rhythmic, and spectral musical audio compatibility. We objectively assess the compatibility, diversity, and computational performance of Mixmash-AIS-generated mashups compared with a standard genetic algorithm (GA) and a brute-force (BF) approach. Furthermore, we conducted a perceptual test to validate the objective evaluation function within Mixmash-AIS in capturing user enjoyment of the computer-generated loop mashups. Our results show that, while the GA stands as the most efficient algorithm, the AIS opt-aiNet outperforms both the GA and BF approaches in terms of compatibility and diversity. Our listening test showed that the Mixmash-AIS objective evaluation function significantly captures the perceptual compatibility of loop mashups (p < .001).
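A heavily simplified, opt-aiNet-inspired sketch (not Mixmash-AIS itself): clone and mutate a population, then suppress near-duplicates so the survivors spread across several distinct local minima of a toy "incompatibility" function standing in for the harmonic/rhythmic/spectral feature space.

```python
# Simplified immune-inspired multimodal optimization on a 1-D toy objective.
import numpy as np

rng = np.random.default_rng(3)

def incompatibility(x):
    # Toy multimodal objective standing in for loop-compatibility distance.
    return np.sin(3 * x) + 0.1 * x ** 2

def ais_optimize(pop_size=20, clones=5, sigma=0.3, suppress_dist=0.5, iters=200):
    pop = rng.uniform(-5, 5, pop_size)
    for it in range(iters):
        # Clone and mutate each individual; keep the best clone if it improves.
        offspring = pop[:, None] + rng.normal(0, sigma, (pop.size, clones))
        best = offspring[np.arange(pop.size),
                         np.argmin(incompatibility(offspring), axis=1)]
        pop = np.where(incompatibility(best) < incompatibility(pop), best, pop)
        # Suppression: keep only one individual per neighbourhood (diversity)...
        survivors = []
        for x in sorted(pop, key=incompatibility):
            if all(abs(x - y) > suppress_dist for y in survivors):
                survivors.append(x)
        # ...and refill with random newcomers, except on the final iteration.
        if it < iters - 1:
            pop = np.concatenate([survivors,
                                  rng.uniform(-5, 5, pop_size - len(survivors))])
        else:
            pop = np.array(survivors)
    return sorted(round(float(x), 2) for x in pop)

print(ais_optimize())   # several distinct candidates, not just one global optimum
```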
APA, Harvard, Vancouver, ISO, and other styles
46

Lippolis, Mariangela, Daniel Müllensiefen, Klaus Frieler, Benedetta Matarrelli, Peter Vuust, Rosalinda Cassibba, and Elvira Brattico. "Learning to play a musical instrument in the middle school is associated with superior audiovisual working memory and fluid intelligence: A cross-sectional behavioral study." Frontiers in Psychology 13 (October 13, 2022). http://dx.doi.org/10.3389/fpsyg.2022.982704.

Full text
Abstract:
Music training, in all its forms, is known to have an impact on behavior both in childhood and in aging. In the delicate life period of transition from childhood to adulthood, music training might play a special role in behavioral and cognitive maturation. Among the several kinds of music training programs implemented in educational communities, we focused on the instrumental training incorporated in the public middle school curriculum in Italy, which includes individual, group, and collective (orchestral) lessons several times a week. At three middle schools, we tested 285 preadolescent children (aged 10–14 years) with a test and questionnaire battery including adaptive tests for visuo-spatial working memory skills (the Jack and Jill test), fluid intelligence (a matrix reasoning test), and music-related perceptual and memory abilities (listening tests). Of these children, 163 belonged to a music curriculum within the school and 122 to a standard curriculum. Significant differences between students of the music and standard curricula were found in both perceptual and cognitive domains, even when controlling for pre-existing individual differences in musical sophistication. The music children attending the third and last grade of middle school performed best and showed the largest advantage over the control group on both audiovisual working memory and fluid intelligence. Furthermore, some gender differences were found for several tests and across groups in favor of females. The present results indicate that learning to play a musical instrument as part of the middle school curriculum represents a resource for preadolescent education. Even though the current evidence is not sufficient to establish the causality of the observed effects, it can still guide future research using longitudinal data.
APA, Harvard, Vancouver, ISO, and other styles
47

Urbaniak, Olivia, and Helen F. Mitchell. "How to dress to impress: The effect of concert dress type on perceptions of female classical pianists." Psychology of Music, May 3, 2021, 030573562110011. http://dx.doi.org/10.1177/03057356211001120.

Full text
Abstract:
Audiences expect music performers to follow tacit dress codes for the concert stage. In classical music performance, audiences favor performers in formal dress over casual dress, but it is unclear what constitutes appropriate formal attire. A perceptual study was designed to test for different interpretations of suitable concert dress. Four female pianists in three contrasting black outfits (long dress, short dress, and suit) were video-recorded performing three musical pieces, and the audio was dubbed throughout for audio consistency. Thirty listener/viewers rated the clips on musicality, technical proficiency, overall performance, and appropriateness of dress. Performances in the long dress were rated significantly higher than in the short dress or suit. The short dress was consistently rated lowest, whereas the suit received more complex responses. Follow-up interviews confirmed listener/viewers’ unconscious bias toward untraditional formal attire and their tendency to objectify the performers. They were unblinded to the purpose of the task and were able to reflect on the tangible implications of concert dress, stage manner, and physical appearance on their evaluations. Future studies should harness the potential for experiential learning, or “learning by doing,” to expand future music professionals’ critical evaluation skills.
APA, Harvard, Vancouver, ISO, and other styles
48

Chandna, Pritish, Helena Cuesta, Darius Petermann, and Emilia Gómez. "A Deep-Learning Based Framework for Source Separation, Analysis, and Synthesis of Choral Ensembles." Frontiers in Signal Processing 2 (April 5, 2022). http://dx.doi.org/10.3389/frsip.2022.808594.

Full text
Abstract:
Choral singing in the soprano, alto, tenor and bass (SATB) format is a widely practiced and studied art form with significant cultural importance. Despite the popularity of the choral setting, it has received little attention in the field of Music Information Retrieval. However, the recent publication of high-quality choral singing datasets as well as recent developments in deep learning based methodologies applied to the field of music and speech processing, have opened new avenues for research in this field. In this paper, we use some of the publicly available choral singing datasets to train and evaluate state-of-the-art source separation algorithms from the speech and music domains for the case of choral singing. Furthermore, we evaluate existing monophonic F0 estimators on the separated unison stems and propose an approximation of the perceived F0 of a unison signal. Additionally, we present a set of applications combining the proposed methodologies, including synthesizing a single singer voice from the unison, and transposing and remixing the separated stems into a synthetic multi-singer choral signal. We finally conduct a set of listening tests to perform a perceptual evaluation of the results we obtain with the proposed methodologies.
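A sketch of the monophonic F0 step described in the abstract, run on a (here, synthesized) unison-like stem with librosa's pYIN implementation; treating the single pYIN track as the perceived unison F0 is an illustrative simplification, not the paper's exact approximation.

```python
# Monophonic F0 estimation on a synthetic "unison" of two slightly detuned voices.
import numpy as np
import librosa

sr = 22050
t = np.arange(0, 2.0, 1 / sr)
voice_a = np.sin(2 * np.pi * 219.0 * t)   # two "singers" detuned around A3 (220 Hz)
voice_b = np.sin(2 * np.pi * 221.5 * t)
unison = 0.5 * (voice_a + voice_b)

f0, voiced_flag, voiced_prob = librosa.pyin(
    unison, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
)
print("median estimated F0 (Hz):", np.nanmedian(f0))   # roughly 220 Hz
```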
APA, Harvard, Vancouver, ISO, and other styles
49

Guinamard, Antoine, Sylvain Clément, Sophie Goemaere, Alice Mary, Audrey Riquet, and Delphine Dellacherie. "Musical abilities in children with developmental cerebellar anomalies." Frontiers in Systems Neuroscience 16 (August 18, 2022). http://dx.doi.org/10.3389/fnsys.2022.886427.

Full text
Abstract:
Developmental Cerebellar Anomalies (DCA) are rare diseases (e.g., Joubert syndrome) that affect various motor and non-motor functions during childhood. The present study examined whether music perception and production are affected in children with DCA. Sixteen children with DCA and 37 healthy matched control children were tested with the Montreal Battery for Evaluation of Musical Abilities (MBEMA) to assess music perception. Music production was assessed using two singing tasks: a pitch-matching task and a melodic reproduction task. Mixed-model analyses showed that children with DCA were impaired on the MBEMA rhythm perception subtest, whereas there was no difference between the two groups on the melodic perception subtest. Children with DCA were also impaired in the melodic reproduction task. In both groups, singing performance was positively correlated with rhythmic and melodic perception scores, and a strong correlation was found between singing ability and oro-bucco-facial praxis in children with DCA. Overall, children with DCA showed impairments in both music perception and production, although individual analyses highlighted heterogeneity in cerebellar patients' profiles. These results confirm the role of the cerebellum in rhythm processing as well as in the vocal sensorimotor loop from a developmental perspective. Rhythmic deficits in cerebellar patients are discussed in light of recent work on predictive timing networks including the cerebellum. Our results open innovative remediation perspectives aimed at improving musical perception and/or production abilities while taking the heterogeneity of patients' clinical profiles into account when designing music-based therapies.
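A minimal sketch of the mixed-model comparison described in the abstract: subtest scores modeled with fixed effects of group and subtest and a random intercept per child. All data below are simulated, and the subtest names and scales are assumptions.

```python
# Mixed-effects model of simulated MBEMA-style scores (group x subtest).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)
rows = []
for pid in range(53):                          # 16 DCA + 37 controls, as reported
    group = "DCA" if pid < 16 else "control"
    subject_effect = rng.normal(0, 1)          # random intercept per child
    for subtest in ("melody", "rhythm", "memory"):
        base = 24 if group == "control" else (21 if subtest == "rhythm" else 23)
        rows.append({"pid": pid, "group": group, "subtest": subtest,
                     "score": base + subject_effect + rng.normal(0, 1.5)})
df = pd.DataFrame(rows)

model = smf.mixedlm("score ~ group * subtest", df, groups=df["pid"]).fit()
print(model.summary())
```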
APA, Harvard, Vancouver, ISO, and other styles
50

Loutrari, Ariadne, Kathryn Ansell, C. Philip Beaman, Cunmei Jiang, and Fang Liu. "Auditory imagery in congenital amusia." Musicae Scientiae, September 13, 2022, 102986492211228. http://dx.doi.org/10.1177/10298649221122870.

Full text
Abstract:
Congenital amusia is a neurogenetic disorder affecting various aspects of music and speech processing. Although perception and auditory imagery in the general population may share mechanisms, it is not known whether previously documented perceptual impairments in amusia are coupled with difficulties in imaging auditory objects. We employed the Bucknell Auditory Imagery Scale (BAIS) to assess participants’ self-perceived voluntary imagery and a short earworm questionnaire to gauge their subjective experience of involuntary musical imagery. A total of 32 participants with amusia and 34 matched controls, recruited based on their performance on the Montreal Battery of Evaluation of Amusia (MBEA), filled out the questionnaires in their own time. The earworm scores of amusic participants were not statistically significantly different from those of controls. By contrast, their scores on vividness and control of auditory imagery were significantly lower relative to controls. Overall, results suggest that the presence of amusia may not have an adverse effect on generating involuntary musical imagery—at the level of self-report—but still significantly reduces the individual’s self-rated voluntary imagery of musical, vocal, and environmental sounds. We discuss the findings in the light of previous research on explicit musical judgments and implicit engagement with music, while also touching on some statistical power considerations.
APA, Harvard, Vancouver, ISO, and other styles