Journal articles on the topic 'Music information processing'

Consult the top 50 journal articles on the topic 'Music information processing' listed below; abstracts are included where they were available in the source metadata.

1. Zhao, Tian, and Patricia K. Kuhl. "Music, speech, and temporal information processing." Journal of the Acoustical Society of America 144, no. 3 (September 2018): 1760. http://dx.doi.org/10.1121/1.5067789.

2. Goto, Masataka, and Keiji Hirata. "Recent studies on music information processing." Acoustical Science and Technology 25, no. 6 (2004): 419–25. http://dx.doi.org/10.1250/ast.25.419.

3. Tsuboi, Kuniharu. "Computer music and musical information processing." Journal of the Institute of Television Engineers of Japan 42, no. 1 (1988): 49–55. http://dx.doi.org/10.3169/itej1978.42.49.

4. Katayose, Haruhiro. "The Dawn of Kansei Information Processing. Application of Kansei Information Processing. Music Performance." Journal of the Institute of Image Information and Television Engineers 52, no. 1 (1998): 53–55. http://dx.doi.org/10.3169/itej.52.53.

5. Bugos, Jennifer, and Wendy Mostafa. "Musical Training Enhances Information Processing Speed." Bulletin of the Council for Research in Music Education, no. 187 (January 1, 2011): 7–18. http://dx.doi.org/10.2307/41162320.

Abstract:
The purpose of this research is to examine the effects of music instruction on information processing speed. We examined music’s role in information processing speed in musicians (N = 14) and non-musicians (N = 16) using standardized neuropsychological measures, the Paced Auditory Serial Addition Task (PASAT) and the Trail Making Test (TMT). Results of a one-way ANOVA indicate significantly (p < .05) enhanced performance by musicians compared to non-musicians on the PASAT and TMT (Parts A and B). These results suggest that musical training has the capacity to enhance processing speed of auditory and visual content. Implications for music educators stemming from these findings include the need for inclusion of rhythmic sight-reading exercises and improvisational activities to reinforce processing speed.

6. Fukayama, Satoru. "Music Information Processing for Visualization with Musical Notations." Journal of the Visualization Society of Japan 40, no. 158 (2020): 19–22. http://dx.doi.org/10.3154/jvs.40.158_19.

7. Atherton, Ryan P., Quin M. Chrobak, Frances H. Rauscher, Aaron T. Karst, Matt D. Hanson, Steven W. Steinert, and Kyra L. Bowe. "Shared Processing of Language and Music." Experimental Psychology 65, no. 1 (January 2018): 40–48. http://dx.doi.org/10.1027/1618-3169/a000388.

Abstract:
The present study sought to explore whether musical information is processed by the phonological loop component of the working memory model of immediate memory. Original instantiations of this model primarily focused on the processing of linguistic information. However, the model was less clear about how acoustic information lacking phonological qualities is actively processed. Although previous research has generally supported shared processing of phonological and musical information, these studies were limited as a result of a number of methodological concerns (e.g., the use of simple tones as musical stimuli). In order to further investigate this issue, an auditory interference task was employed. Specifically, participants heard an initial stimulus (musical or linguistic) followed by an intervening stimulus (musical, linguistic, or silence) and were then asked to indicate whether a final test stimulus was the same as or different from the initial stimulus. Results indicated that mismatched interference conditions (i.e., musical – linguistic; linguistic – musical) resulted in greater interference than silence conditions, with matched interference conditions producing the greatest interference. Overall, these results suggest that processing of linguistic and musical information draws on at least some of the same cognitive resources.

8. Rammsayer, Thomas, and Eckart Altenmüller. "Temporal Information Processing in Musicians and Nonmusicians." Music Perception 24, no. 1 (September 1, 2006): 37–48. http://dx.doi.org/10.1525/mp.2006.24.1.37.

Abstract:
The present study was designed to examine the general notion that temporal information processing is more accurate in musicians than in nonmusicians. For this purpose, 36 academically trained musicians and 36 nonmusicians performed seven different auditory temporal tasks. Superior temporal acuity for musicians compared to nonmusicians was shown for auditory fusion, rhythm perception, and three temporal discrimination tasks. The two groups did not differ, however, in terms of their performance on two tasks of temporal generalization. Musicians’ superior performance appeared to be limited to aspects of timing which are considered to be automatically and immediately derived from online perceptual processing of temporal information. Unlike immediate online processing of temporal information, temporal generalizations, which involve a reference memory of sorts, seemed not to be influenced by extensive music training.

9. Achkar, Charbel El, and Talar Atechian. "MEI2JSON: a pre-processing music scores converter." International Journal of Intelligent Information and Database Systems 1, no. 1 (2021): 1. http://dx.doi.org/10.1504/ijiids.2021.10040316.

10. Achkar, Charbel El, and Talar Atéchian. "MEI2JSON: a pre-processing music scores converter." International Journal of Intelligent Information and Database Systems 15, no. 1 (2022): 57. http://dx.doi.org/10.1504/ijiids.2022.120130.

11. Li, Yi. "Digital Development for Music Appreciation of Information Resources Using Big Data Environment." Mobile Information Systems 2022 (September 10, 2022): 1–12. http://dx.doi.org/10.1155/2022/7873636.

Abstract:
With the continuous development of information technology and the arrival of the era of big data, music appreciation has also gone digital. The essence of big data is highlighted by comparison with traditional data management and processing technologies; different requirements call for different processing time ranges. Music appreciation is an essential part of music lessons: it can enrich people’s emotional experience, improve aesthetic ability, and cultivate noble sentiments. Data processing of music information resources greatly facilitates the management, dissemination, and big data analysis of music resources and improves the ability of music lovers to appreciate music. This paper studies the digital development of music in a big data environment, making music appreciation more convenient and intelligent, and proposes an intelligent music recognition and appreciation model based on a deep neural network (DNN), improved over the traditional algorithm by applying the Dropout method to the traditional DNN model. The model was trained and tested on the same database. The results show that the perplexity (PPL) of the traditional DNN model is 114, that of the RNN model is 120, and that of the improved DNN model is 98, the lowest value, with faster convergence, which indicates that the model has stronger music recognition ability and is more conducive to the digital development of music appreciation.
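
As a concrete illustration of the modification this abstract describes, applying standard Dropout regularization to a feed-forward DNN, here is a minimal PyTorch sketch; the layer sizes and dropout rate are illustrative assumptions, not the paper's configuration.

```python
# Minimal feed-forward DNN with Dropout (a sketch, not the paper's model).
import torch.nn as nn

class DropoutDNN(nn.Module):
    def __init__(self, n_features, n_classes, p_drop=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 256), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(256, 128), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(128, n_classes),  # logits; pair with nn.CrossEntropyLoss
        )

    def forward(self, x):
        # Dropout is active in train() mode and disabled in eval() mode.
        return self.net(x)
```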

12. Geake, John G. "An Information Processing Account of Audiational Abilities." Research Studies in Music Education 12, no. 1 (June 1999): 10–23. http://dx.doi.org/10.1177/1321103x9901200102.

13. Unyk, Anna M. "An information-processing analysis of expectancy in music cognition." Psychomusicology: A Journal of Research in Music Cognition 9, no. 2 (1990): 229–40. http://dx.doi.org/10.1037/h0094146.

14. Rohrmeier, Martin A., and Stefan Koelsch. "Predictive information processing in music cognition. A critical review." International Journal of Psychophysiology 83, no. 2 (February 2012): 164–75. http://dx.doi.org/10.1016/j.ijpsycho.2011.12.010.

15. Osaka, Naotoshi. "Electroacoustic Music Linked with Information Processing Research in Japan." Contemporary Music Review 37, no. 1-2 (March 4, 2018): 67–85. http://dx.doi.org/10.1080/07494467.2018.1453337.

16. Moreno, Alberto. "Elements of Music Based on Artificial Intelligence." Acta Informatica Malaysia 4, no. 2 (July 13, 2020): 30–32. http://dx.doi.org/10.26480/aim.02.2020.30.32.

Abstract:
Given the current state of research and the practical needs of music audio processing, this paper argues that music element analysis is the key technology in this field and, on this basis, proposes a new music processing framework, a music computation system. Its core objective is to intelligently and automatically identify the various elements of music information, analyze how this information is used in constructing musical content, and translate it into intelligent retrieval methods. To achieve these core research objectives, the paper advocates closely integrating music theory with computational methods, promoting the combined use of music theory, cognitive psychology, music cognitive science, neuroscience, artificial intelligence, and signal processing theory to solve the problem of analyzing and identifying music signals.

17. Zhang, Shenghuan, and Ye Cheng. "Masking and noise reduction processing of music signals in reverberant music." Journal of Intelligent Systems 31, no. 1 (January 1, 2022): 420–27. http://dx.doi.org/10.1515/jisys-2022-0024.

Abstract:
Noise is inevitably mixed into music signals during recording. To improve the quality of music signals, it is necessary to reduce noise as much as possible. This article briefly introduces noise, the masking effect, and the spectral subtraction method for reducing noise in reverberant music. The spectral subtraction method was improved using the human-ear masking effect to enhance its noise reduction performance. Simulation experiments were carried out on the traditional and improved spectral subtraction methods. The results showed that the improved spectral subtraction method could reduce the noise in reverberant music more effectively: under an objective evaluation criterion, the signal-to-noise ratio, the de-reverberated music signal processed by the improved method scored higher; under a subjective evaluation criterion, the mean opinion score (MOS), it also received a better evaluation.
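
For orientation, the classic spectral subtraction this abstract builds on can be sketched in a few lines of Python; this is the baseline technique only, not the authors' masking-based improvement, and the frame length, over-subtraction factor alpha, and spectral floor beta are assumptions.

```python
# Baseline spectral subtraction sketch (NumPy/SciPy), assuming the first few
# frames of the recording contain noise only.
import numpy as np
from scipy.signal import stft, istft

def spectral_subtract(noisy, fs, noise_frames=10, alpha=2.0, beta=0.01):
    f, t, X = stft(noisy, fs=fs, nperseg=512)
    mag, phase = np.abs(X), np.angle(X)
    noise_mag = mag[:, :noise_frames].mean(axis=1, keepdims=True)
    # Over-subtract (alpha) and clamp to a spectral floor (beta) to limit
    # "musical noise" artifacts, then resynthesize with the noisy phase.
    clean_mag = np.maximum(mag - alpha * noise_mag, beta * mag)
    _, clean = istft(clean_mag * np.exp(1j * phase), fs=fs, nperseg=512)
    return clean
```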

18. Colby, Michael, Sarah J. Shaw, and Lauralee Shiere. "Sheet Music Cataloging and Processing: A Manual." Notes 42, no. 4 (June 1986): 779. http://dx.doi.org/10.2307/897789.

19. Tsuboi, Kuniharu, and Mitsuru Ishizuka. "Describing method of music information toward its advanced computer processing." Systems and Computers in Japan 17, no. 7 (1986): 60–62. http://dx.doi.org/10.1002/scj.4690170707.

20. Fritz, Thomas, and Stefan Koelsch. "The role of semantic association and emotional contagion for the induction of emotion with music." Behavioral and Brain Sciences 31, no. 5 (October 2008): 579–80. http://dx.doi.org/10.1017/s0140525x08005347.

Abstract:
We suggest that semantic association may be a further mechanism by which music may elicit emotion. Furthermore, we note that emotional contagion is not always an immediate process requiring little prior information processing; rather, emotional contagion contributing to music processing may constitute a more complex decoding mechanism for information inherent in the music, which may be subject to a time course of activation.

21. Ognjenovic, Predrag. "Processing of Aesthetic Information." Empirical Studies of the Arts 9, no. 1 (January 1991): 1–9. http://dx.doi.org/10.2190/kc25-jwtn-nrx4-c7a1.

22. Edworthy, Judy. "Interval and Contour in Melody Processing." Music Perception 2, no. 3 (1985): 375–88. http://dx.doi.org/10.2307/40285305.

Abstract:
Musician subjects were required to detect interval and contour changes in transposed versions of standard melodies of 3, 5, 7, 9, 11, 13, and 15 notes. Subjects were significantly better at detecting contour alterations for melodies of up to 11 notes but significantly better at detecting interval alterations in the 15-note melodies. Serial position effects for 5-, 7-, and 9-note melodies showed contour to be immediately precise after transposition, whereas the ability to detect interval alterations improved as the melodies progressed. These results suggest that, on transposition, contour information is immediately precise but is lost as melody length increases. Interval information is initially less precise but is more resistant to forgetting in longer melodies. The implication of this is that contour can be encoded independently of tonal context, whereas interval information becomes more precise as a tonal framework is established. Some musical implications of the finding are discussed.
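
The two melody codes contrasted in this study are easy to make concrete: intervals are exact semitone differences between successive notes, while contour keeps only their signs, and both survive transposition. A small illustrative sketch (MIDI note numbers are assumed as input):

```python
# Interval vs. contour codes for a melody given as MIDI note numbers.
def intervals(notes):
    return [b - a for a, b in zip(notes, notes[1:])]   # exact semitone steps

def contour(notes):
    return [(d > 0) - (d < 0) for d in intervals(notes)]  # +1 up, -1 down, 0 same

melody     = [60, 62, 64, 62, 67]   # C4 D4 E4 D4 G4
transposed = [65, 67, 69, 67, 72]   # the same melody, up a perfect fourth
assert intervals(melody) == intervals(transposed)
assert contour(melody) == contour(transposed)
```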

23. Terhardt, Ernst. "Music Perception and Sensory Information Acquisition: Relationships and Low-Level Analogies." Music Perception 8, no. 3 (1991): 217–39. http://dx.doi.org/10.2307/40285500.

Abstract:
Information processing is characterized by conditional decisions on hierarchically organized levels. In biological systems, this principle is manifest in the phenomena of contourization and categorization, which are more or less synonymous. Primary contourization—such as in the visual system—is regarded as the first step of abstraction. Its auditory equivalent is formation of spectral pitches. Hierarchical processing is characterized by the principles of immediate processing, open end, recursion, distributed knowledge, forward processing, autonomy, and viewback. In that concept, perceptual phenomena such as illusion, ambiguity, and similarity turn out to be essential and typical. With respect to perception of musical sound, those principles and phenomena readily explain pitch categorization, tone affinity, octave equivalence (chroma), root, and tonality. As a particular example, an explanation of the tritone paradox is suggested.

24. Geake, John G. "Why Mozart? Information Processing Abilities of Gifted Young Musicians." Research Studies in Music Education 7, no. 1 (December 1996): 28–45. http://dx.doi.org/10.1177/1321103x9600700103.

25. Mao, Nan. "Analysis on the Application of Dependent Information System Optimization Algorithm in Music Education in Colleges and Universities." Mobile Information Systems 2022 (May 11, 2022): 1–12. http://dx.doi.org/10.1155/2022/4102280.

Abstract:
Although the influence of pop music is huge and pop music talent keeps emerging, pop music education has serious problems; both theoretical research and professional discipline construction remain backward. To improve the efficiency of music teaching in colleges and universities, this paper applies an information system optimization algorithm to the intelligent analysis of music education in colleges and universities, selects an appropriate method for music information processing, and builds a college music education system on this basis. At the same time, it uses a questionnaire administered to the two main groups who use the music system (teachers and students) to understand their needs. The paper then analyzes the feasibility of the system, constructs an intelligent music education system for colleges and universities, and verifies the system's effect experimentally. In addition, it evaluates the system's music information processing and educational effect against the actual situation and reports the test results, which verify the system's reliability.

26. Huron, David. "Music Information Processing Using the Humdrum Toolkit: Concepts, Examples, and Lessons." Computer Music Journal 26, no. 2 (June 2002): 11–26. http://dx.doi.org/10.1162/014892602760137158.

27. Goolsby, Thomas W. "Profiles of Processing: Eye Movements during Sightreading." Music Perception 12, no. 1 (1994): 97–123. http://dx.doi.org/10.2307/40285757.

Abstract:
Temporal and sequential components of the eye movements used by a skilled and a less-skilled sightreader were used to construct six profiles of processing. Each subject read three melodies of varying levels of concentration of visual detail. The profiles indicate the order, duration, and location of each fixation while the subjects sightread the melodies. Results indicate that music readers do not fixate on note stems or the bar lines that connect eighth notes when sightreading. The less-skilled music reader progressed through the melody virtually note-by-note using long fixations, whereas the skilled sightreader directed fixations to all areas of the notation (using more regressions than the less-skilled reader) to perform the music accurately. Results support earlier findings that skilled sightreaders look farther ahead in the notation, then back to the point of performance (Goolsby, 1994), and have a larger perceptual span than less-skilled sightreaders. Findings support Sloboda's (1984) contention that music reading (i.e., sightreading) is indeed music perception, because music notation is processed before performance. Support was found for Sloboda's (1977, 1984, 1985, 1988) hypotheses on the effects of physical and structural boundaries on visual musical perception. The profiles indicate a number of differences between music perception from processing visual notation and perception resulting from language reading. These differences include: (1) opposite trends in the control of eye movement (i.e., the better music reader fixates in blank areas of the visual stimuli and not directly on each item of the information that was performed), (2) a perceptual span that is vertical as well as horizontal, (3) more eye movement associated with the better reader, and (4) greater attention used for processing language than for music, although the latter task requires an "exact realization."

28. Menkin, A. V. "Development of a Music Recommender System Based on Content Metadata Processing." Vestnik NSU. Series: Information Technologies 17, no. 3 (2019): 43–60. http://dx.doi.org/10.25205/1818-7900-2019-17-3-43-60.

Abstract:
Music recommender systems (MRS) help users of music streaming services find interesting music in large catalogs. The sparsity problem is an essential problem in MRS research: a user usually rates only a tiny fraction of the items, so an MRS often does not have enough data to make a recommendation. To address the sparsity problem, this paper proposes a new approach that uses related items’ ratings. A hybrid MRS based on this approach is described. It uses normalized ratings of tracks, albums, artists, and genres, along with information about relations between items of different types in the music catalog. The proposed MRS is evaluated and compared to a collaborative method for predicting users’ preferences.

29. Carpentier, Sarah M., Andrea R. McCulloch, Tanya M. Brown, Sarah E. M. Faber, Petra Ritter, Zheng Wang, Valorie Salimpoor, Kelly Shen, and Anthony R. McIntosh. "Complexity Matching: Brain Signals Mirror Environment Information Patterns during Music Listening and Reward." Journal of Cognitive Neuroscience 32, no. 4 (April 2020): 734–45. http://dx.doi.org/10.1162/jocn_a_01508.

Abstract:
Understanding how the human brain integrates information from the environment with intrinsic brain signals to produce individual perspectives is an essential element of understanding the human mind. Brain signal complexity, measured with multiscale entropy, has been employed as a measure of information processing in the brain, and we propose that it can also be used to measure the information available from a stimulus. We can directly assess the correspondence between brain signal complexity and stimulus complexity as an indication of how well the brain reflects the content of the environment in an analysis that we term “complexity matching.” Music is an ideal stimulus because it is a multidimensional signal with a rich temporal evolution and because of its emotion- and reward-inducing potential. When participants focused on acoustic features of music, we found that EEG complexity was lower and more closely resembled the musical complexity compared to an emotional task that asked them to monitor how the music made them feel. Music-derived reward scores on the Barcelona Music Reward Questionnaire correlated with less complexity matching but higher EEG complexity. Compared with perceptual-level processing, emotional and reward responses are associated with additional internal information processes above and beyond those linked to the external stimulus. In other words, the brain adds something when judging the emotional valence of music.
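
Multiscale entropy, the measure used here, coarse-grains a signal at successive time scales and computes sample entropy at each scale. A compact NumPy sketch of the standard procedure follows; m = 2 and r = 0.2·SD are conventional defaults, not the authors' settings.

```python
# Multiscale entropy sketch: coarse-graining plus sample entropy.
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    x = np.asarray(x, float)
    def matches(mm):
        # All overlapping templates of length mm, compared pairwise with a
        # Chebyshev distance; self-matches on the diagonal are excluded.
        emb = np.lib.stride_tricks.sliding_window_view(x, mm)
        d = np.abs(emb[:, None, :] - emb[None, :, :]).max(axis=-1)
        return (np.sum(d <= r) - len(emb)) / 2
    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, scales=range(1, 6), m=2):
    x = np.asarray(x, float)
    r = 0.2 * x.std()                       # tolerance fixed on the raw series
    return [sample_entropy(x[:len(x) // s * s].reshape(-1, s).mean(axis=1),
                           m=m, r=r)
            for s in scales]
```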

30. Sanfilippo, Dario. "Time-Domain Adaptive Algorithms for Low- and High-Level Audio Information Processing." Computer Music Journal 45, no. 1 (2021): 24–38. http://dx.doi.org/10.1162/comj_a_00592.

Abstract:
In this paper, we present a set of time-domain algorithms for the low- and high-level analysis of audio streams. These include spectral centroid, noisiness, and spectral spread for the low level, and dynamicity, heterogeneity, and complexity for the high level. The low-level algorithms provide a continuous measure of the features and can operate with short analysis frames. The high-level algorithms, on the other hand, are original designs informed both perceptually and by complexity theory for the analysis of musically meaningful information, both in short sounds or articulated streams with long-term nontrivial variations. These algorithms are suitable for the implementation of real-time audio analysis in diverse live performance setups that require the extraction of information from several streams at the same time. For example, the low-level algorithms can be deployed in large audio networks of adaptive agents, or in small-to-large ensembles for the analysis of various characteristics of the instruments for computer-assisted performance. Furthermore, the high-level algorithms can be implemented as part of fitness functions in music systems based on evolutionary algorithms that follow musically-informed criteria, or as analysis tools to assess the quality of some of the characteristics of a musical output. Musical applications of these algorithms can be found in a companion paper in this issue of Computer Music Journal: "Complex Adaptation in Audio Feedback Networks for the Synthesis of Music and Sounds."

31. Zhao, T. Christina, and Patricia K. Kuhl. "Musical intervention enhances infants’ neural processing of temporal structure in music and speech." Proceedings of the National Academy of Sciences 113, no. 19 (April 25, 2016): 5212–17. http://dx.doi.org/10.1073/pnas.1603984113.

Abstract:
Individuals with music training in early childhood show enhanced processing of musical sounds, an effect that generalizes to speech processing. However, the conclusions drawn from previous studies are limited due to the possible confounds of predisposition and other factors affecting musicians and nonmusicians. We used a randomized design to test the effects of a laboratory-controlled music intervention on young infants’ neural processing of music and speech. Nine-month-old infants were randomly assigned to music (intervention) or play (control) activities for 12 sessions. The intervention targeted temporal structure learning using triple meter in music (e.g., waltz), which is difficult for infants, and it incorporated key characteristics of typical infant music classes to maximize learning (e.g., multimodal, social, and repetitive experiences). Controls had similar multimodal, social, repetitive play, but without music. Upon completion, infants’ neural processing of temporal structure was tested in both music (tones in triple meter) and speech (foreign syllable structure). Infants’ neural processing was quantified by the mismatch response (MMR) measured with a traditional oddball paradigm using magnetoencephalography (MEG). The intervention group exhibited significantly larger MMRs in response to music temporal structure violations in both auditory and prefrontal cortical regions. Identical results were obtained for temporal structure changes in speech. The intervention thus enhanced temporal structure processing not only in music, but also in speech, at 9 mo of age. We argue that the intervention enhanced infants’ ability to extract temporal structure information and to predict future events in time, a skill affecting both music and speech processing.

32. Li, Pengfei. "Design of Meyer’s Theory-based High Quality Piano Multi-media System." International Journal of Emerging Technologies in Learning (iJET) 12, no. 01 (January 31, 2017): 95. http://dx.doi.org/10.3991/ijet.v12i01.6486.

Abstract:
Based on the current state of multi-media technology and its application in colleges and universities, this paper selected high-quality multi-media hardware suitable for the piano curriculum, followed Meyer’s principles for multi-media instruction design, and designed a multi-media system that improves information presentation and delivery strategies for teaching software through the visualization of multi-media information, the promotion of necessary cognitive processing, the reduction of external cognitive processing, the stimulation of generative cognitive processing, and other information processing methods. To a certain extent, the system prevents teachers from neglecting learners’ cognitive mechanisms when preparing their multi-media teaching software, and it also improves the multi-media teaching effect of music-oriented courses, thereby offering guidance for multi-media teaching and for the design and production of teaching software. Meanwhile, it provides theoretical and statistical support for applying a high-quality multi-media teaching system in music-oriented courses and college education.

33. Siddiquee, Md Mahfuzur Rahman, Md Saifur Rahman, Shahnewaz Ul Islam Chowdhury, and Rashedur M. Rahman. "Association Rule Mining and Audio Signal Processing for Music Discovery and Recommendation." International Journal of Software Innovation 4, no. 2 (April 2016): 71–87. http://dx.doi.org/10.4018/ijsi.2016040105.

Abstract:
In this research, the authors propose an intelligent system that can recommend songs to a user according to his choice. They predict the next song a user might prefer to listen to, based on previous listening patterns, the currently played song, and similar music identified from music data. To calculate music similarity, the authors used a Matlab toolbox that analyzes audio signals. They used association rule mining to find users’ listening patterns and predict the next song the user might prefer. As they propose a music discovery service as well, the authors use the information on music listening patterns and music data similarity to recommend new songs. Later, in the results section, they replaced the audio-based similarity with the last.fm API for similar-song listing and analyzed the behaviour of their system with the new list of songs.
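
The core recommendation step, classic association-rule mining over listening sessions, can be sketched without any library: count frequent song pairs, then rank "X -> Y" rules by confidence. The sessions and thresholds below are made-up illustrations, not the paper's data.

```python
# Toy association-rule mining over listening sessions (illustrative data).
from collections import Counter
from itertools import combinations

sessions = [
    {"songA", "songB", "songC"},
    {"songA", "songB"},
    {"songB", "songC"},
    {"songA", "songB", "songD"},
]
min_support, min_confidence = 0.5, 0.6

n = len(sessions)
item_count = Counter(s for sess in sessions for s in sess)
pair_count = Counter(p for sess in sessions
                     for p in combinations(sorted(sess), 2))

rules = []
for (x, y), c in pair_count.items():
    if c / n < min_support:
        continue                      # pair is not frequent enough
    for ante, cons in ((x, y), (y, x)):
        conf = c / item_count[ante]   # estimate of P(cons | ante)
        if conf >= min_confidence:
            rules.append((ante, cons, conf))

for ante, cons, conf in sorted(rules, key=lambda r: -r[2]):
    print(f"after {ante}, recommend {cons} (confidence {conf:.2f})")
```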

34. Swinney, David, and Tracy Love. "The Processing of Discontinuous Dependencies in Language and Music." Music Perception 16, no. 1 (1998): 63–78. http://dx.doi.org/10.2307/40285778.

Abstract:
This article examines the nature and time course of the processing of discontinuous dependency relationships in language and draws suggestive parallels to similar issues in music perception. The on-line language comprehension data presented demonstrate that discontinuous structural dependencies cause reactivation of the misordered or "stranded" sentential material at its underlying canonical position in the sentence during ongoing comprehension. Further, this process is demonstrated to be driven by structural knowledge, independent of pragmatic information, aided by prosodic cues, and dependent on rate of input. Issues of methodology and of theory that are equally relevant to language and music are detailed.

35. Gedik, Ali C., and Barış Bozkurt. "Pitch-frequency histogram-based music information retrieval for Turkish music." Signal Processing 90, no. 4 (April 2010): 1049–63. http://dx.doi.org/10.1016/j.sigpro.2009.06.017.

36. Berz, William L. "Working Memory in Music: A Theoretical Model." Music Perception 12, no. 3 (1995): 353–64. http://dx.doi.org/10.2307/40286188.

Abstract:
Many psychologists have accepted a dual memory system with separate short- and long-term storage components. More recently, the concept of working memory, where short-term memory is composed of both storage and processing segments, has been considered. Baddeley (1990) proposes a model for working memory that includes a central executive controller along with two slave systems: the phonological loop and the visuospatial sketch pad. The model allows for both storage and manipulation of information. However, this model does not seem to account adequately for musical memory (Clarke, 1993). Through a review of relevant literature, a new model is proposed in which an additional slave system is added to the Baddeley model to account for musical information. Consideration of this kind of cognitive processing is important in understanding the significant demands placed on working memory in such activities as taking music dictation, where there would be a tradeoff between storage and processing functions.

37. Hutka, Stefanie, Sarah M. Carpentier, Gavin M. Bidelman, Sylvain Moreno, and Anthony R. McIntosh. "Musicianship and Tone Language Experience Are Associated with Differential Changes in Brain Signal Variability." Journal of Cognitive Neuroscience 28, no. 12 (December 2016): 2044–58. http://dx.doi.org/10.1162/jocn_a_01021.

Abstract:
Musicianship has been associated with auditory processing benefits. It is unclear, however, whether pitch processing experience in nonmusical contexts, namely, speaking a tone language, has comparable associations with auditory processing. Studies comparing the auditory processing of musicians and tone language speakers have shown varying degrees of between-group similarity with regard to perceptual processing benefits and, particularly, nonlinguistic pitch processing. To test whether the auditory abilities honed by musicianship or speaking a tone language differentially impact the neural networks supporting nonlinguistic pitch processing (relative to timbral processing), we employed a novel application of brain signal variability (BSV) analysis. BSV is a metric of information processing capacity and holds great potential for understanding the neural underpinnings of experience-dependent plasticity. Here, we measured BSV in electroencephalograms of musicians, tone language-speaking nonmusicians, and English-speaking nonmusicians (controls) during passive listening of music and speech sound contrasts. Although musicians showed greater BSV across the board, each group showed a unique spatiotemporal distribution in neural network engagement: Controls had greater BSV for speech than music; tone language-speaking nonmusicians showed the opposite effect; musicians showed similar BSV for both domains. Collectively, results suggest that musical and tone language pitch experience differentially affect auditory processing capacity within the cerebral cortex. However, information processing capacity is graded: More experience with pitch is associated with greater BSV when processing this cue. Higher BSV in musicians may suggest increased information integration within the brain networks subserving speech and music, which may be related to their well-documented advantages on a wide variety of speech-related tasks.

38. Maidhof, Clemens, and Stefan Koelsch. "Effects of Selective Attention on Syntax Processing in Music and Language." Journal of Cognitive Neuroscience 23, no. 9 (September 2011): 2252–67. http://dx.doi.org/10.1162/jocn.2010.21542.

Abstract:
The present study investigated the effects of auditory selective attention on the processing of syntactic information in music and speech using event-related potentials. Spoken sentences or musical chord sequences were either presented in isolation, or simultaneously. When presented simultaneously, participants had to focus their attention either on speech, or on music. Final words of sentences and final harmonies of chord sequences were syntactically either correct or incorrect. Irregular chords elicited an early right anterior negativity (ERAN), whose amplitude was decreased when music was simultaneously presented with speech, compared to when only music was presented. However, the amplitude of the ERAN-like waveform elicited when music was ignored did not differ from the conditions in which participants attended the chord sequences. Irregular sentences elicited an early left anterior negativity (ELAN), regardless of whether speech was presented in isolation, was attended, or was to be ignored. These findings suggest that the neural mechanisms underlying the processing of syntactic structure of music and speech operate partially automatically, and, in the case of music, are influenced by different attentional conditions. Moreover, the ERAN was slightly reduced when irregular sentences were presented, but only when music was ignored. Therefore, these findings provide no clear support for an interaction of neural resources for syntactic processing already at these early stages.

39. Gupta, Ashish, Braj Bhushan, and Laxmidhar Behera. "Neural response to sad autobiographical recall and sad music listening post recall reveals distinct brain activation in alpha and gamma bands." PLOS ONE 18, no. 1 (January 6, 2023): e0279814. http://dx.doi.org/10.1371/journal.pone.0279814.

Abstract:
Although apparently paradoxical, sad music has been effective in coping with sad life experiences. The underpinning brain neural correlates of this are not well explored. We performed electroencephalography (EEG) source-level analysis of the brain during a sad autobiographical recall (SAR) and upon exposure to sad music. We specifically investigated the cingulate cortex complex and the parahippocampus (PHC), areas prominently involved in emotion and memory processing. Results show enhanced alpha-band lag phase-synchronization in the brain during sad music listening, especially within and between the posterior cingulate cortex (PCC) and the PHC, compared to SAR. This enhancement was lateralized for the alpha1 and alpha2 bands in the left and right hemispheres, respectively. We also observed a significant increase in alpha2 brain current source density (CSD) during sad music listening compared to SAR and the baseline resting state in the region of interest (ROI). The brain in the SAR condition had enhanced right-hemisphere-lateralized functional connectivity and CSD in the gamma band compared to sad music listening and the baseline resting state. Our findings show that the brain in the SAR state had enhanced gamma-band activity, signifying increased content-binding capacity, while it showed enhanced alpha-band activity during sad music listening, signifying increased content-specific information processing. Thus, the results suggest that the brain's neural correlates during sad music listening are distinct from the SAR state as well as the baseline resting state and facilitate enhanced content-specific information processing, potentially through three neural pathways: (1) by enhancing network connectivity in the ROI, (2) by enhancing local cortical integration of areas in the ROI, and (3) by enhancing sustained attention. We argue that enhanced content-specific information processing possibly supports the positive experience during sad music listening after a sad experience in a healthy population. Finally, we propose that sadness has two different characteristics under the SAR state and sad music listening.

40. Rege, Amit, and Ravi Sindal. "Audio classification for music information retrieval of Hindustani vocal music." Indonesian Journal of Electrical Engineering and Computer Science 24, no. 3 (December 1, 2021): 1481. http://dx.doi.org/10.11591/ijeecs.v24.i3.pp1481-1490.

Abstract:
An important task in music information retrieval of Indian art music is the recognition of the larger musicological frameworks, called ragas, on which the performances are based. Ragas are characterized by prominent musical notes, motifs, general sequences of notes used, and embellishments improvised by the performers. In this work, we propose a convolutional neural network-based model that works on mel-spectrograms to classify steady note regions and note transition regions in vocal melodies, which can be used for finding prominent musical notes. It is demonstrated that good classification accuracy is obtained using the proposed model.
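
The general setup, a CNN classifying fixed-size mel-spectrogram patches as steady-note or note-transition regions, might look as follows in PyTorch; the architecture and the 64-band by 32-frame patch size are assumptions, not the authors' model.

```python
# Sketch of a CNN over mel-spectrogram patches (steady vs. transition).
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 8, n_classes)

    def forward(self, x):               # x: (batch, 1, 64 mel bands, 32 frames)
        return self.classifier(self.features(x).flatten(1))

logits = PatchCNN()(torch.randn(4, 1, 64, 32))   # -> shape (4, 2)
```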

41. Pardo, B. "Finding structure in audio for music information retrieval." IEEE Signal Processing Magazine 23, no. 3 (May 2006): 126–32. http://dx.doi.org/10.1109/msp.2006.1628889.

42. Schwabe, Markus, Omar Elaiashy, and Fernando Puente León. "Incorporation of phase information for improved time-dependent instrument recognition." tm - Technisches Messen 87, s1 (September 25, 2020): s62–s67. http://dx.doi.org/10.1515/teme-2020-0031.

Abstract:
Time-dependent estimation of playing instruments in music recordings is an important preprocessing step for several music signal processing algorithms. In this approach, instrument recognition is realized by neural networks with a two-dimensional input of short-time Fourier transform (STFT) magnitudes and a time-frequency representation based on phase information. The modified group delay (MODGD) function and the product spectrum (PS), which is based on MODGD, are analysed as phase representations. Training and evaluation are carried out on the MusicNet dataset. By incorporating the PS in the input, instrument recognition can be improved by about 2% in F1-score.
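
For reference, the product spectrum is conventionally defined as PS(w) = X_R·Y_R + X_I·Y_I, where Y is the spectrum of the time-weighted frame y(n) = n·x(n); dividing instead by |X(w)|^2 gives the group-delay function. A per-frame NumPy sketch under that standard definition (windowing and framing omitted, and not necessarily the paper's exact variant):

```python
# Product spectrum of a single frame: Re{X(w) * conj(Y(w))}, y(n) = n * x(n).
import numpy as np

def product_spectrum(frame):
    x = np.asarray(frame, float)
    n = np.arange(len(x))
    X = np.fft.rfft(x)
    Y = np.fft.rfft(n * x)        # spectrum of the time-weighted frame
    return (X * np.conj(Y)).real  # = X_R*Y_R + X_I*Y_I
```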

43. Koelsch, Stefan, Tobias Grossmann, Thomas C. Gunter, Anja Hahne, Erich Schröger, and Angela D. Friederici. "Children Processing Music: Electric Brain Responses Reveal Musical Competence and Gender Differences." Journal of Cognitive Neuroscience 15, no. 5 (July 2003): 683–93. http://dx.doi.org/10.1162/jocn.2003.15.5.683.

Abstract:
Numerous studies have investigated physiological correlates of the processing of musical information in adults. How these correlates develop during childhood is poorly understood. In the present study, we measured event-related electric brain potentials elicited in 5- and 9-year-old children while they listened to (major–minor tonal) music. Stimuli were chord sequences, infrequently containing harmonically inappropriate chords. Our results demonstrate that the degree of (in)appropriateness of the chords modified the brain responses in both groups according to music-theoretical principles. This suggests that 5-year-old children already process music according to a well-established cognitive representation of the major–minor tonal system and according to music-syntactic regularities. Moreover, we show that, in contrast to adults, an early negative brain response was left-predominant in boys, whereas it was bilateral in girls, indicating a gender difference in children's music processing and revealing that children process music with a hemispheric weighting different from that of adults. Because children, in contrast to adults, process music in the same hemispheres as they process language, the results indicate that children process music and language more similarly than adults do. This finding might support the notion of a common origin of music and language in the human brain, and it concurs with findings that demonstrate the importance of musical features of speech for the acquisition of language.

44. Koike, Takashi. "Information processing apparatus and method for reproducing an output audio signal from MIDI music playing information and audio information." Journal of the Acoustical Society of America 112, no. 1 (2002): 23. http://dx.doi.org/10.1121/1.1500930.

45. Kızrak, Merve Ayyüce, and Bülent Bolat. "A musical information retrieval system for Classical Turkish Music makams." SIMULATION 93, no. 9 (May 24, 2017): 749–57. http://dx.doi.org/10.1177/0037549717708615.

Abstract:
Musical information retrieval (MIR) applications have become an interesting topic both for researchers and for commercial applications. The majority of the current knowledge on MIR is based on Western music. However, traditional genres, such as Classical Turkish Music (CTM), have great structural differences compared with Western music, so the validity of the current knowledge must be checked on such genres. In this work, a MIR application that simulates the human music processing system, based on CTM, is proposed. To achieve this goal, mel-frequency cepstral coefficients (MFCCs) and delta-MFCCs, which are among the features most frequently used in audio applications, were used as features. In the last few years, deep belief networks (DBNs) have become promising classifiers for sound classification problems. To confirm this, the classification accuracies of four probability-theory-based classifiers, namely radial basis function networks, generalized regression neural networks, probabilistic neural networks, and support vector machines, were compared to the DBN. Our results show that the DBN outperforms the others.
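
The feature front end named here, MFCCs plus their deltas, is straightforward to reproduce with librosa; the file name and coefficient count below are placeholders, and the DBN classifier itself is not shown.

```python
# MFCC + delta-MFCC frame features (sketch; "recording.wav" is a placeholder).
import numpy as np
import librosa

y, sr = librosa.load("recording.wav", sr=None)      # keep the native sample rate
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
delta = librosa.feature.delta(mfcc)                 # first-order differences
features = np.vstack([mfcc, delta]).T               # shape: (n_frames, 26)
```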

46. Sun, Lingyang, and Mishal Sohail. "Machine Learning-Based Improvement of Musical Digital Processing Technology on Musical Performance." Security and Communication Networks 2022 (May 17, 2022): 1–9. http://dx.doi.org/10.1155/2022/8318928.

Abstract:
The modern stage focuses more on structural changes, and numerical control technology realizes complex changes of stage scenes and the precise movement of stage props. The study object of this article is musical drama, and digital technologies are used to process its soundtrack. To address the inadequate resolution improvement, excessive complexity, poor real-time performance, and low resilience of standard minimum-variance (MV) methods, the study offers a low-complexity feature-space minimum-variance algorithm combined with the power method. The method has high resolution, low complexity, and great resilience, and it may be utilized on a variety of stages. In addition, this paper combines digital technology to process music, enhancing the role of music in musical performances and allowing performers to integrate more effectively. Finally, experimental research shows that the music digital processing technology proposed in this paper can play a good role in promoting musical performances.

47. Schaefer, Rebecca S. "Mental Representations in Musical Processing and their Role in Action-Perception Loops." Empirical Musicology Review 9, no. 3-4 (January 5, 2015): 161. http://dx.doi.org/10.18061/emr.v9i3-4.4291.

Abstract:
Music is created in the listener as it is perceived and interpreted, its meaning derived from our unique sense of it, likely driving the range of interpersonal differences found in music processing. Person-specific mental representations of music are thought to unfold on multiple levels as we listen, spanning from an entire piece of music to regularities detected across notes. As we track incoming auditory information, predictions are generated at different levels for different musical aspects, leading to specific percepts and behavioral outputs, illustrating a tight coupling of cognition, perception and action. This coupling, together with a prominent role of prediction in music processing, fits well with recently described ideas about the role of predictive processing in cognitive function, which appears to be especially suitable to account for the role of mental models in musical perception and action. Investigating the cerebral correlates of constructive music imagination offers an experimentally tractable approach to clarifying how mental models of music are represented in the brain. I suggest here that mental representations underlying imagery are multimodal, informed and modulated by the body and its in- and outputs, while perception and action are informed and modulated by predictions based on mental models.

48. Chen, Chen. "Design of Deep Learning Network Model for Personalized Music Emotional Recommendation." Security and Communication Networks 2022 (May 2, 2022): 1–8. http://dx.doi.org/10.1155/2022/4443277.

Abstract:
Music is a way for people to express their inner thoughts, an art form for conveying feelings and emotions. In modern society, people increasingly listen to music as a form of leisure and entertainment, and different types of music carry different feelings and trigger different emotional resonances in listeners. In this study, we propose an algorithmic model based on a two-layer attention mechanism. It includes a textual convolutional neural network that processes music name and music label text data, and a two-layer attention mechanism in which the first layer learns the user’s preference for each music feature at the feature level and the second layer learns the user’s preference for each piece of music in the listening history at the item level. The experiments show that the NDCG value of this method is improved by about 0.08 and the overall quality of the recommendation list is improved, which indicates that the user interest model constructed by this fusion has good characterization ability and helps alleviate data sparsity.
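
A generic sketch of the attention-pooling idea used at both levels: a learned scorer weights each element (music feature, or item in the listening history), and the softmax-weighted sum becomes the preference representation. All dimensions and names here are illustrative assumptions, not the paper's design.

```python
# Generic attention pooling, applied at the feature level and the item level.
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)           # learned relevance scorer

    def forward(self, x):                        # x: (batch, n_elements, dim)
        w = torch.softmax(self.score(x), dim=1)  # attention weights over elements
        return (w * x).sum(dim=1)                # weighted sum -> (batch, dim)

feat_attn, item_attn = AttentionPool(32), AttentionPool(32)
features = torch.randn(8, 10, 32)   # 10 feature embeddings per music item
item_vec = feat_attn(features)      # layer 1: preference over features
history = torch.randn(8, 50, 32)    # 50 items in the listening history
user_vec = item_attn(history)       # layer 2: preference over history items
```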

49. Ding, Yu, Chunmei Cao, and Keying Zhang. "Different Effects of Shooting Training and Music Training on Auditory Information Processing Ability." Medicine & Science in Sports & Exercise 54, no. 9S (September 2022): 60–61. http://dx.doi.org/10.1249/01.mss.0000875788.45183.e1.

50. Hirai, Shigeyuki. "Editor's Message to Special Issue on Extensions and Advances in Music Information Processing." Journal of Information Processing 24, no. 3 (2016): 469. http://dx.doi.org/10.2197/ipsjjip.24.469.
