Journal articles on the topic 'Time in music Data processing'

Consult the top 50 journal articles for your research on the topic 'Time in music Data processing.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of each publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Schwabe, Markus, and Michael Heizmann. "Influence of input data representations for time-dependent instrument recognition." tm - Technisches Messen 88, no. 5 (February 25, 2021): 274–81. http://dx.doi.org/10.1515/teme-2020-0100.

Abstract:
An important preprocessing step for several music signal processing algorithms is the estimation of the playing instruments in music recordings. To this aim, time-dependent instrument recognition is realized here by a neural network with residual blocks. Since music signal processing tasks use diverse time-frequency representations as input matrices, the influence of different input representations on instrument recognition is analyzed in this work. Three-dimensional inputs of short-time Fourier transform (STFT) magnitudes with an additional time-frequency representation based on phase information are investigated, as well as two-dimensional STFT or constant-Q transform (CQT) magnitudes. As additional phase representations, the product spectrum (PS), based on the modified group delay, and the frequency error (FE) matrix, related to the instantaneous frequency, are used. Training and evaluation are performed on the MusicNet dataset, which enables the estimation of seven instruments. With a higher number of frequency bins in the input representations, an improvement of about 2% in F1-score can be achieved. Compared to the literature, frame-level instrument recognition is improved for the different input representations.
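For readers unfamiliar with these representations, the kind of STFT magnitude matrix such a network consumes can be sketched in a few lines of NumPy (the window length, hop size, and test tone below are illustrative choices, not the paper's settings):

```python
import numpy as np

def stft_magnitude(signal, n_fft=512, hop=256):
    """Magnitude of a short-time Fourier transform, Hann-windowed."""
    window = np.hanning(n_fft)
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([signal[i * hop : i * hop + n_fft] * window
                       for i in range(n_frames)])
    # rfft keeps only the non-negative frequency bins: n_fft // 2 + 1 of them
    return np.abs(np.fft.rfft(frames, axis=1))

# Example: a one-second 440 Hz tone sampled at 16 kHz
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
mag = stft_magnitude(tone)
peak_bin = int(mag.mean(axis=0).argmax())
print(mag.shape)                 # (frames, n_fft // 2 + 1)
print(peak_bin * sr / 512)       # the FFT bin frequency nearest the 440 Hz tone
```

A CQT input differs mainly in that its bins are geometrically spaced to match musical pitch, which is why the paper treats it as an alternative two-dimensional representation.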
2

Tchana Tankeu, Bachir, Vincent Baltazart, Yide Wang, and David Guilbert. "PUMA Applied to Time Delay Estimation for Processing GPR Data over Debonded Pavement Structures." Remote Sensing 13, no. 17 (August 31, 2021): 3456. http://dx.doi.org/10.3390/rs13173456.

Abstract:
In this paper, principal-singular-vector utilization for modal analysis (PUMA) was adapted to perform time delay estimation on ground-penetrating radar (GPR) data by taking into account the shape of the transmitted GPR signal. The super-resolution capability of PUMA was used to separate overlapping backscattered echoes from a layered pavement structure with some embedded debondings. The well-known root-MUSIC algorithm was selected as a benchmark for performance assessment. The simulation results showed that the proposed PUMA performs very well, especially in the case where the sources are totally coherent, and it requires much less computational time than the root-MUSIC algorithm.
3

Li, Yi. "Digital Development for Music Appreciation of Information Resources Using Big Data Environment." Mobile Information Systems 2022 (September 10, 2022): 1–12. http://dx.doi.org/10.1155/2022/7873636.

Abstract:
With the continuous development of information technology and the arrival of the era of big data, music appreciation has also entered a stage of digital development. The essence of big data is highlighted by comparison with traditional data management and processing technologies; under different requirements, the required time-processing range differs. Music appreciation is an essential part of music lessons that can enrich people's emotional experience, improve aesthetic ability, and cultivate noble sentiments. Data processing of music information resources will greatly facilitate the management, dissemination, and big data analysis of music resources and improve the ability of music lovers to appreciate music. This paper studies the digital development of music in the big data environment, making music appreciation more convenient and intelligent, and proposes an intelligent music recognition and appreciation model based on a deep neural network (DNN). The improvement over the traditional algorithm consists of applying the Dropout method to the traditional DNN model. The DNN is trained and tested on a database. The results show that, on the same database, the perplexity (PPL) is 114 for the traditional DNN model and 120 for the RNN model, while the improved DNN model achieves a PPL of 98, the lowest value, and converges faster, indicating stronger music recognition ability that is more conducive to the digital development of music appreciation.
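The Dropout method the paper applies to its DNN can be illustrated with the standard inverted-dropout operation (a generic sketch, not the paper's actual network):

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(activations, rate=0.5, training=True):
    """Inverted dropout: zero a fraction `rate` of units and rescale the rest,
    so the expected activation is unchanged and inference needs no rescaling."""
    if not training or rate == 0.0:
        return activations
    mask = rng.random(activations.shape) >= rate
    return activations * mask / (1.0 - rate)

h = np.ones((4, 8))            # a batch of hidden-layer activations
out = dropout(h, rate=0.5)
# Surviving units are scaled by 1 / (1 - rate) = 2; the rest are zero
print(sorted(set(out.flatten())))
```

At test time the layer is simply an identity, which is what makes the trick cheap to deploy.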
4

Zhong, Kai, Shangqian Liu, Yue Li, and Yanling Xu. "Music Network Data Analysis Based on ISOMAP Algorithm Model." Journal of Physics: Conference Series 2066, no. 1 (November 1, 2021): 012073. http://dx.doi.org/10.1088/1742-6596/2066/1/012073.

Abstract:
The development of music is a tortuous process, and the network of relationships between genres and artists is intricate. To better understand the history of music, this paper tells the stories hidden in that history by means of data processing. First, a model evaluating the similarity between pieces of music is established using the ISOMAP algorithm. At the same time, a forest evolution model is established to mark the most revolutionary musical figures. Finally, using the PageRank algorithm, the founders of several music genres are obtained. It turns out that the figures who led the development of music do not coincide with the figures who revolutionized it. Through this analysis, we can more clearly understand the development of music and the evolution of genres.
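The PageRank step used to surface genre founders can be illustrated with a plain power iteration over a toy influence graph (the graph and damping factor are illustrative, not the paper's data):

```python
import numpy as np

def pagerank(adj, damping=0.85, iters=100):
    """Power-iteration PageRank on an adjacency matrix.
    adj[i, j] = 1 means node i links to (here: influences) node j."""
    n = adj.shape[0]
    out_deg = adj.sum(axis=1, keepdims=True)
    # Dangling nodes (no outgoing links) spread their rank uniformly
    trans = np.where(out_deg > 0, adj / np.maximum(out_deg, 1), 1.0 / n)
    rank = np.full(n, 1.0 / n)
    for _ in range(iters):
        rank = (1 - damping) / n + damping * trans.T @ rank
    return rank

# Toy influence graph: artist 0 influences 1 and 2; both influence artist 3
adj = np.array([[0, 1, 1, 0],
                [0, 0, 0, 1],
                [0, 0, 0, 1],
                [0, 0, 0, 0]])
rank = pagerank(adj)
print(rank.argmax())   # the node that accumulates the most influence
```

Note that in an influence analysis the edge direction matters: ranking "who is influenced by whom" versus "who influences whom" gives different founders.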
5

Qin, Zeng. "A Data Mining-Based Evaluation Technique for Music Teaching." Mobile Information Systems 2022 (April 19, 2022): 1–7. http://dx.doi.org/10.1155/2022/2470777.

Abstract:
Musical data mining covers a number of methodologies for successfully applying data mining techniques to music processing, drawing together a multidisciplinary array of experts. The field of music data acquisition has grown over time to address the difficulties of obtaining and engaging with enormous amounts of music and associated data, such as styles, artists, lyrics, and reviews. To improve the quality of music teaching, a music teaching evaluation based on data mining is proposed. Data mining is increasingly accepted as a viable form of inquiry for analyzing data obtained in natural settings, and music teaching is receiving more and more attention. Real data is frequently incomplete, unreliable, and/or lacking in specific behaviors or patterns, and contains numerous inaccuracies; preprocessing is a tried-and-true means of resolving such problems. After preprocessing, music teaching data is handled in three steps: "object and object type," "music teaching data normalization," and "data integration." A model is built with a high-dimensional characteristic distribution and the essential parameters of convergent teaching capacity. The experimental results show that the data mining method can be used for music teaching evaluation and has the advantages of short evaluation time, high accuracy, and clear indicators.
6

Raglio, A., D. Traficante, and O. Oasi. "Comparison of the Music Therapy Coding Scheme with the Music Therapy Checklist." Psychological Reports 101, no. 3 (December 2007): 875–80. http://dx.doi.org/10.2466/pr0.101.3.875-880.

Abstract:
The Music Therapy Checklist is useful for music therapists to monitor and evaluate the music therapeutic process. A list of different types of behaviors was selected based on results derived from applying the Music Therapy Coding Scheme. Using a checklist to code events, with a recording method based on 1-min. intervals, allows observation without data-processing systems and drastically reduces coding time. At the same time, the checklist tags the main factors in musical interaction.
7

Sudarma, Made, and I. Gede Harsemadi. "Design and Analysis System of KNN and ID3 Algorithm for Music Classification based on Mood Feature Extraction." International Journal of Electrical and Computer Engineering (IJECE) 7, no. 1 (February 1, 2017): 486. http://dx.doi.org/10.11591/ijece.v7i1.pp486-495.

Abstract:
Each piece of music conveys its own mood; accordingly, much research in the Music Information Retrieval (MIR) field has addressed the recognition of mood in music. This research produced software to classify music by mood using the K-Nearest Neighbor (KNN) and ID3 algorithms. Accuracy and average classification time are compared, based on the values produced by the music feature extraction process, which uses 9 types of spectral analysis with 400 training samples and 400 testing samples. The system outputs one of four mood labels: contentment, exuberance, depression, and anxiety. Classification using KNN is good, at 86.55% accuracy for k = 3 with an average processing time of 0.01021 s, whereas ID3 yields an accuracy of 59.33% with an average processing time of 0.05091 s.
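The KNN side of the comparison can be sketched as a majority vote over Euclidean distances (the toy 2-D features and mood labels below are illustrative stand-ins for the paper's 9 spectral features):

```python
import numpy as np
from collections import Counter

def knn_predict(train_x, train_y, query, k=3):
    """Classify `query` by majority vote among its k nearest training points."""
    dists = np.linalg.norm(train_x - query, axis=1)
    nearest = np.argsort(dists)[:k]
    return Counter(train_y[i] for i in nearest).most_common(1)[0][0]

# Toy 2-D "spectral features" for two moods (not the paper's data)
train_x = np.array([[0.10, 0.20], [0.20, 0.10], [0.15, 0.25],
                    [0.90, 0.80], [0.80, 0.90], [0.85, 0.75]])
train_y = ["depression", "depression", "depression",
           "exuberance", "exuberance", "exuberance"]
print(knn_predict(train_x, train_y, np.array([0.12, 0.18])))
```

KNN's per-query cost is a scan over the whole training set, which is consistent with its fast training but non-trivial classification time in the reported measurements.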
8

Chen, Xu, and Jun Tang. "Research on Piano Music Signal Recognition Based on Short-Time Fourier Analysis." Advanced Materials Research 853 (December 2013): 680–85. http://dx.doi.org/10.4028/www.scientific.net/amr.853.680.

Abstract:
This paper starts from the basic process of music recognition and studies the extraction and realization of seven musical characteristics for feature characterization, with in-depth treatment of the pitch, duration, and tonality feature-extraction units. A method based on short-time Fourier analysis uses computer programming for automatic analysis and processing of the audio signal and implements recognition of the characteristics of piano playing. Experimental data show that the algorithm's average recognition rate is above 95%, a strong recognition ability that provides core technical support for developing a piano performance evaluation system.
9

Balaban, Mira, and Michael Elhadad. "On the Need for Visual Formalisms in Music Processing." Leonardo 32, no. 2 (April 1999): 127–34. http://dx.doi.org/10.1162/002409499553109.

Abstract:
Computer music environments (CMEs) are notoriously difficult to design and implement. As computer programs, they reflect the complex nature of music ontology and must support real-time manipulation of multimedia data. In addition, these programs must be usable by naive users, supporting their creative process without obstructing it through technical difficulties. To achieve these goals, the authors argue, CMEs must be provided with a well-defined methodology relying on techniques from the fields of software engineering, artificial intelligence, and knowledge representation. This paper contributes an aspect of this methodology, concentrating on the role of visualizations in CMEs. The authors state that visualization deserves a specialized theory that is based on music ontology and that is independent of the concrete, implemented graphical interface.
10

Swinney, David, and Tracy Love. "The Processing of Discontinuous Dependencies in Language and Music." Music Perception 16, no. 1 (1998): 63–78. http://dx.doi.org/10.2307/40285778.

Abstract:
This article examines the nature and time course of the processing of discontinuous dependency relationships in language and draws suggestive parallels to similar issues in music perception. The on-line language comprehension data presented demonstrate that discontinuous structural dependencies cause reactivation of the misordered or "stranded" sentential material at its underlying canonical position in the sentence during ongoing comprehension. Further, this process is demonstrated to be driven by structural knowledge, independent of pragmatic information, aided by prosodic cues, and dependent on rate of input. Issues of methodology and of theory that are equally relevant to language and music are detailed.
11

Jin, Lin, and Qiang Liu. "Music Spectrum Display System Based on MCU." Advanced Materials Research 852 (January 2014): 353–56. http://dx.doi.org/10.4028/www.scientific.net/amr.852.353.

Abstract:
Spectrum analysis examines a signal from the frequency domain and is widely used in voice processing, image processing, digital audio, and seismic exploration; the FFT algorithm provides an effective solution for real-time frequency-domain analysis. A music spectrum display system is developed on the STC12C5A60S2 microcontroller. The paper introduces the STC12C5A60S2 chip, the display principle of the 32*64 LED dot matrix, and the use of the 74LS595 and 74LS164 chips. On this basis, the system's hardware and software are designed, implementing an STC12C5A60S2-based music spectrum display system that is simple in structure and easy to use, with the MCU's data-processing capability supporting spectrum analysis and display of the sound signal.
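The core of such a display, collapsing an FFT spectrum into a fixed number of LED bar heights, can be sketched in NumPy (the sample rate, frame size, and 32-column/8-row layout are illustrative assumptions loosely following the 32*64 matrix, not the firmware's actual parameters):

```python
import numpy as np

def led_columns(samples, n_cols=32, n_rows=8):
    """Collapse an FFT magnitude spectrum into n_cols bands of n_rows levels,
    as a dot-matrix music spectrum display would."""
    spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
    bands = np.array_split(spectrum[1:], n_cols)       # drop DC, group bins
    energy = np.array([b.mean() for b in bands])
    scaled = energy / max(energy.max(), 1e-12)         # normalise to [0, 1]
    return np.round(scaled * (n_rows - 1)).astype(int) # LED bar heights

sr = 8000
t = np.arange(256) / sr
tone = np.sin(2 * np.pi * 1000 * t)                    # a pure 1 kHz tone
heights = led_columns(tone)
print(len(heights), heights.max())                     # 32 bars; one at full height
```

On the MCU the same banding would be done in fixed point on the FFT output before driving the shift registers.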
12

Zhao, Pin-Jiao, Guo-Bing Hu, and Li-Wei Wang. "A Sliding Window Data Compression Method for Spatial-Time DOA Estimation." International Journal of Antennas and Propagation 2021 (October 28, 2021): 1–8. http://dx.doi.org/10.1155/2021/9705617.

Abstract:
This paper presents a sliding window data compression method for spatial-time direction-of-arrival (DOA) estimation using a coprime array. The signal model is first formulated by jointly using the temporal and spatial information of the impinging sources. Then, sliding window data compression is performed on the array output matrix to realize fast calculation of the time average function, reducing the computational burden accordingly. Based on the concept of the sum and difference co-array (SDCA), the vectorized conjugate augmented MUSIC is adopted, with which more sources than twice the number of physical sensors can be resolved. Additionally, the sparse array's robustness to sensor failure is evaluated by introducing the concept of essential sensors. Theoretical analysis and numerical simulations confirm the effectiveness of the proposed method.
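The sliding-window time averaging that cuts the computational burden can be illustrated with a cumulative-sum trick on a 1-D series (a generic sketch; the paper applies the idea to the array output matrix):

```python
import numpy as np

def sliding_average(x, win):
    """Sliding-window mean via a cumulative sum: O(n) total work
    instead of O(n * win) for recomputing each window from scratch."""
    c = np.cumsum(np.insert(x.astype(float), 0, 0.0))
    return (c[win:] - c[:-win]) / win

x = np.arange(10, dtype=float)
print(sliding_average(x, 4))   # means of each length-4 window
```

Each new window reuses the previous sum, which is exactly the saving a sliding-window time-average buys over direct averaging.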
13

Wang, Xiaohua, Lei Cheng, Ding Cheng, and Qinlin Zhou. "Theater Music Data Acquisition and Genre Recognition Using Edge Computing and Deep Brief Network." Scientific Programming 2022 (August 25, 2022): 1–6. http://dx.doi.org/10.1155/2022/8543443.

Abstract:
Artificial intelligence (AI) and the Internet of Things (IoT) make it urgent to push the frontier of AI to the network edge and release the potential of edge big data. Based on theater music data acquisition, the model's accuracy in data acquisition and music genre classification (MGC) is further improved. First, machine learning and AI algorithms are used to collect data on various devices and automatically identify music genres. The data collected by edge devices remain safe and private, which shortens the delay of data processing and response. In addition, the deep belief network (DBN)-based MGC algorithm has a better overall recognition and classification effect on music genres: the MGC accuracy of the proposed improved DBN algorithm is nearly 80%, compared to 30%-40% for the traditional algorithms. The research has important reference value for developing Internet technology and establishing a music recognition model.
14

Liu, Xiangli. "Music Trend Prediction Based on Improved LSTM and Random Forest Algorithm." Journal of Sensors 2022 (March 22, 2022): 1–10. http://dx.doi.org/10.1155/2022/6450469.

Abstract:
As an entertainment consumption product, pop music attracts more and more attention. In the context of big data, the many listeners of pop music largely determine its development trend; to predict that trend, the audience's preferences can be mined and analyzed deeply from massive user data. This paper proposes a music trend prediction method based on an improved LSTM and the random forest algorithm. The algorithm first performs abnormal-data processing and normalization on the test data set. Then the important features are selected by the random forest algorithm and corrected by a rough set compensation system. Finally, prediction is made by the improved LSTM. In the experiments, RMSE and MAE are used as the performance evaluation indexes, and the results show that the proposed algorithm better predicts the music popularity trend, with clear improvements in both root-mean-square error and mean absolute error.
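The abnormal-data handling and normalization step can be sketched with one common recipe, median replacement of z-score outliers followed by min-max scaling (an assumption for illustration; the paper does not specify its exact rules):

```python
import numpy as np

def clean_and_normalise(x, z_thresh=2.0):
    """Replace outliers (|z| > z_thresh) with the median, then min-max
    scale to [0, 1] -- one common preprocessing recipe."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    x = np.where(np.abs(z) > z_thresh, np.median(x), x)
    lo, hi = x.min(), x.max()
    return (x - lo) / (hi - lo)

plays = np.array([10, 12, 11, 13, 500, 12, 11])   # 500 is an outlier
out = clean_and_normalise(plays)
print(out.min(), out.max())   # scaled into [0, 1], outlier tamed
```

Taming outliers before min-max scaling matters because a single extreme value would otherwise squash every other sample toward zero.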
15

Patil, Ramesh. "Song Recommendation System Based on Real-Time Facial Expressions." International Journal for Research in Applied Science and Engineering Technology 10, no. 11 (November 30, 2022): 1302–6. http://dx.doi.org/10.22214/ijraset.2022.47572.

Abstract:
Traditional methods of playing music according to a person's mood required human interaction; migrating to computer vision technology enables the automation of such a system. This article describes the implementation details of a real-time facial feature extraction and emotion recognition system. One way to do this is to compare selected facial features from an image against a face database. Recognizing emotions from images has become an active research topic in image processing and in applications based on human-computer interaction. The facial expression is detected using a convolutional neural network (CNN) that classifies human emotions from dynamic facial expressions in real time. The FER dataset, which contains approximately 30,000 facial RGB images of different expressions, is used to train the model. An expression-based music player aims to scan and interpret this data and build playlists accordingly: it suggests a playlist of songs based on the user's facial expressions, as an additional feature of an existing music player.
16

Qi, Linlin, and Na Liu. "Music Singing Based on Computer Analog Piano Accompaniment and Digital Processing for 5G Industrial Internet of Things." Mobile Information Systems 2022 (May 11, 2022): 1–10. http://dx.doi.org/10.1155/2022/4489301.

Abstract:
At present, most professional music colleges and universities have invested heavily in computer software and hardware and continuously offer relevant courses; however, how to exploit the advantages of computer music at the software level and make software and hardware systems suitable for music education is still being explored. The industrial Internet of Things turns every link and device in the production process into a data terminal, comprehensively collects the underlying basic data, and carries out deeper data analysis and mining to improve efficiency. Digitization is the inevitable trend in the development of music accompaniment, and deep integration with the computer is an important way to realize it. Combining basic music theory with digital processing methods, this paper designs a virtual piano auxiliary accompaniment and realizes digital music accompaniment with artificial intelligence through digital signal processing. Starting from the auditory effect of auxiliary singing, and taking the rhythm and style of piano auxiliary accompaniment as the research object, an accompaniment generation tool is built on a piano music simulation system model, generating accompaniment chords for the main melody by extracting its style, treble, and other characteristics. The simulation results show that the frequency response is evenly distributed, the model effectively uses harmonic structure information, and the extracted feature dimension is only 88; this reduction in feature dimension saves subsequent processing time, and the system is real-time, automated, embeddable in software, secure, and interoperable.
17

Kai, Hong. "Optimization of Music Feature Recognition System for Internet of Things Environment Based on Dynamic Time Regularization Algorithm." Complexity 2021 (May 24, 2021): 1–11. http://dx.doi.org/10.1155/2021/9562579.

Abstract:
Because music feature recognition is made difficult by complex and varied music theory knowledge shaped by musical specialization, we designed a music feature recognition system based on Internet of Things (IoT) technology. The physical sensing layer places sound sensors at different locations to collect the original music signals and uses a digital signal processor for music signal analysis and processing. The network transmission layer transmits the processed music signals to the music signal database in the system's application layer. The music feature analysis module of the application layer uses a dynamic time regularization algorithm to obtain the maximum similarity between the test template and the reference template, realizing feature recognition of the music signal and determining the music pattern and music emotion corresponding to the recognized features. The experimental results show that the system operates stably, captures high-quality music signals, and correctly identifies music style and emotion features. The results can meet the needs of composers' assisted creation and of music researchers analyzing large amounts of music data, and can be further transferred to deep music learning research, human-computer interaction music creation, application-based music creation, and other fields.
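Assuming the "dynamic time regularization" here refers to the standard dynamic time warping (DTW) measure, the template-matching step can be sketched as follows (the pitch sequences are illustrative, not the system's data):

```python
import math

def dtw_distance(a, b):
    """Classic dynamic-time-warping cost between two sequences."""
    n, m = len(a), len(b)
    cost = [[math.inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

melody    = [60, 62, 64, 65]             # a reference pitch template
stretched = [60, 60, 62, 64, 64, 65]     # the same contour, played slower
other     = [72, 40, 71, 39]
print(dtw_distance(melody, stretched))   # 0.0: time-warped but identical
print(dtw_distance(melody, other) > 0)
```

DTW's tolerance to tempo differences is exactly why it suits comparing a performed test template against a stored reference template.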
18

Wang, Tuntun, Junke Li, Jincheng Zhou, Mingjiang Li, and Yong Guo. "Music Recommendation Based on “User-Points-Music” Cascade Model and Time Attenuation Analysis." Electronics 11, no. 19 (September 28, 2022): 3093. http://dx.doi.org/10.3390/electronics11193093.

Abstract:
Music has an increasing impact on people's daily lives, and a sterling music recommendation algorithm can help users find their habitual music accurately. Recent research on music recommendation directly recommends the same type of music according to specific items in the user's historical favorites list. However, a user's behavior towards a particular piece cannot reflect a preference for that whole type of music and may yield recommendations the listener dislikes. A recommendation model, MCTA, based on a "User-Points-Music" structure is proposed: by clustering users' historical behavior, different interest points are obtained, and high-quality music is then recommended under each interest point. Furthermore, users' interest points decay over time; combined with the number of pieces corresponding to each interest point and the degree of liking for each piece, a multi-interest-point attenuation model is constructed. On real, desensitized and encoded data covering 100,000 users and 12,028 pieces of music, a series of experiments shows that the proposed MCTA model improves accuracy by seven percentage points over existing works, leading to the conclusion that the multi-interest-point attenuation model more accurately simulates users' actual music consumption behavior and recommends music better.
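The time-attenuation idea, an interest point's weight decaying since the user last engaged with it, can be sketched with a simple exponential half-life model (the half-life, scores, and genre names are illustrative assumptions, not the paper's formulation):

```python
def interest_score(base_score, days_since_last_listen, half_life=30.0):
    """Exponentially decay an interest point's weight over time.
    half_life: days until the weight halves (an illustrative choice)."""
    return base_score * 0.5 ** (days_since_last_listen / half_life)

# Hypothetical interest points: (degree of liking, days since last listened)
points = {"jazz": (0.9, 90), "lo-fi": (0.6, 5)}
ranked = sorted(points, key=lambda p: interest_score(*points[p]), reverse=True)
print(ranked[0])   # a weaker but fresher interest outranks a stale one
```

Ranking interest points by decayed rather than raw liking is what lets the recommender follow a user's drifting tastes.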
19

Chen, Yunfei. "Construction and Application of Music Style Intelligent Learning System Based on Situational Awareness." Mathematical Problems in Engineering 2022 (September 25, 2022): 1–11. http://dx.doi.org/10.1155/2022/2689233.

Abstract:
Contextual representation recommendation directly applies contextual prefiltering when processing user context data, which does not truly integrate context and model. To this end, this paper proposes a context-aware recommendation model based on probability matrix factorization and designs a music genre style recognition and generation network. In this network, all the genre sub-networks share the explanation layer, which greatly reduces the number of model parameters to learn and improves learning efficiency; each genre sub-network analyzes music of a different genre, achieving multitask simultaneous processing. A music style recognition method combining the scattering transform and an independent recurrent neural network is proposed. The relevant characteristics of traditional audio processing methods are analyzed, along with their suitable application scenarios and their inapplicability to this task, and the superiority and rationality of the scattering transform are explained starting from its principle. For the case where the incremental data set is fully labeled, this paper introduces a convex hull vector solution that reduces the training time on the initial sample and, following an error-push strategy, proposes an incremental learning algorithm based on convex hull vectors and error push, which effectively retains useful historical information while eliminating useless information in new samples. Experiments show that this method improves the accuracy of music style recognition to a certain extent, and that music style recognition based on an independent recurrent neural network achieves better performance.
20

Zhang, Na. "Research on the Difference between Environmental Music Perception and Innovation Ability Based on EEG Data." Journal of Environmental and Public Health 2022 (November 17, 2022): 1–9. http://dx.doi.org/10.1155/2022/9441697.

Abstract:
Practicing and exploring music creation is of great significance for training creative talents. Perception comprises sensation and perception proper; sensation is a reflection of the individual attributes of objective things acting directly on the sensory organs. This paper studies the difference between environmental music perception and innovation ability based on EEG data. First, noise reduction and artifact preprocessing were applied to the EEG signals of subjects at different levels of consciousness under musical stimulation, and tensor decomposition was performed to obtain the EEG tensor components. The time-domain components were analyzed together with five musical features (fluctuation centroid, fluctuation entropy, pulse clarity, key clarity, and mode); the EEG tensor components related to these music characteristics were identified, their power spectra and the distribution of responsive brain regions were analyzed, and finally the differences in the processing of music characteristics at different levels of consciousness were explored.
21

Barrett, Frederick S., Katrin H. Preller, Marcus Herdener, Petr Janata, and Franz X. Vollenweider. "Serotonin 2A Receptor Signaling Underlies LSD-induced Alteration of the Neural Response to Dynamic Changes in Music." Cerebral Cortex 28, no. 11 (September 28, 2017): 3939–50. http://dx.doi.org/10.1093/cercor/bhx257.

Abstract:
Classic psychedelic drugs (serotonin 2A, or 5HT2A, receptor agonists) have notable effects on music listening. In the current report, blood oxygen level-dependent (BOLD) signal was collected during music listening in 25 healthy adults after administration of placebo, lysergic acid diethylamide (LSD), and LSD pretreated with the 5HT2A antagonist ketanserin, to investigate the role of 5HT2A receptor signaling in the neural response to the time-varying tonal structure of music. Tonality-tracking analysis of BOLD data revealed that 5HT2A receptor signaling alters the neural response to music in brain regions supporting basic and higher-level musical and auditory processing, and areas involved in memory, emotion, and self-referential processing. This suggests a critical role of 5HT2A receptor signaling in supporting the neural tracking of dynamic tonal structure in music, as well as in supporting the associated increases in emotionality, connectedness, and meaningfulness in response to music that are commonly observed after the administration of LSD and other psychedelics. Together, these findings inform the neuropsychopharmacology of music perception and cognition, meaningful music listening experiences, and altered perception of music during psychedelic experiences.
22

Li, Tianjiao. "Visual Classification of Music Style Transfer Based on PSO-BP Rating Prediction Model." Complexity 2021 (May 13, 2021): 1–9. http://dx.doi.org/10.1155/2021/9959082.

Abstract:
In this paper, based on computer reading and processing of music frequency, amplitude, timbre, image pixels, color filling, and so forth, a method of image style transfer guided by music feature data is implemented in real-time playback. Using existing music files and image files, it processes and reconstructs the fluent relationship between the two in auditory and visual terms, generating a dynamic, musical visualization that changes in real time. Although recommendation systems are well developed in real applications, the limitations of collaborative filtering (CF) algorithms are surfacing as user numbers grow, such as the data sparsity problem caused by the scarcity of rated items and the cold start problem caused by new items and new users. The work is dynamic, with real-time changes in music and sound. Portraits are taken as an experimental case, but users can supply both music and image files as input, so this new visualization offers a personalized, mass-customized service and generates personalized portraits according to personal preferences. At the same time, we exploit the BP neural network's ability to handle complex nonlinear problems and, combining it with the global optimization of the particle swarm optimization algorithm, construct a rating prediction model between user and item attribute features, referred to as the PSO-BP rating prediction model, further improving on the traditional collaborative filtering algorithm.
23

Hao, Jianhong. "Optimizing the Design of a Vocal Teaching Platform Based on Big Data Feature Analysis of the Audio Spectrum." Wireless Communications and Mobile Computing 2022 (May 9, 2022): 1–9. http://dx.doi.org/10.1155/2022/9972223.

Full text
Abstract:
With the development of electronics and communication technology, digital audio processing technologies such as digital audio broadcasting and multimedia communication have been widely used in society, and their influence on people’s lives has become increasingly profound. At present, the real-time performance and accuracy of musical instrument tuners on the market need to be improved, which hinders the design of vocal music teaching systems. Based on the BP neural network algorithm and the fast Fourier transform algorithm implemented on an FPGA, this paper designs a real-time and efficient audio spectrum analysis system, which realizes the spectrum analysis function for music signals. The two methods for calculating the fast discrete Fourier transform are the decimation-in-time FFT algorithm and the decimation-in-frequency FFT algorithm. The characteristic of the BP neural network algorithm is that it can not only obtain estimation results by forward propagation of the input data but also carry out back propagation from the output layer according to the error between the estimated and actual results, so as to optimize the connection weights between the layers. This paper proposes to add a Nios II system to the FPGA processor and to adopt a Cyclone IV device in the hardware design, which is well suited to the system designed in this paper. In the software part, the WM8731 codec is used to process the audio data; it draws very little power, which effectively improves the processing efficiency of the system. Compared with the original system, the data model obtained after screening and processing by the system designed in this paper achieves an algorithm accuracy of more than 90%, and the audio spectrum clarity for vocal music can reach 95%. On this basis, the circuit of each module is tested, and in the experiments the audio spectrum under different conditions is analyzed and the data processed.
The system can complete the collection and analysis of various music signals in real time, overcome the limitation of the single function of traditional tuners, improve tuner utilization and timbre clarity, tune a variety of musical instruments, and greatly improve their intonation, which has practical value.
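The decimation-in-time recursion mentioned in the abstract can be sketched in pure Python. This is an illustrative version for clarity only; the paper's system implements the FFT in FPGA hardware, and the tone frequency and transform length here are arbitrary choices:

```python
import math
import cmath

def fft_dit(x):
    """Radix-2 decimation-in-time FFT; input length must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    if n & (n - 1):
        raise ValueError("length must be a power of two")
    even = fft_dit(x[0::2])  # DFT of the even-indexed samples
    odd = fft_dit(x[1::2])   # DFT of the odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        tw = cmath.exp(-2j * cmath.pi * k / n) * odd[k]  # twiddle factor * odd term
        out[k] = even[k] + tw
        out[k + n // 2] = even[k] - tw
    return out

# A pure tone at bin 5: its spectral energy concentrates in that bin.
N = 64
tone = [math.cos(2 * math.pi * 5 * t / N) for t in range(N)]
mags = [abs(c) for c in fft_dit(tone)]
peak_bin = max(range(N // 2), key=mags.__getitem__)  # search positive-frequency bins
```

The decimation-in-frequency variant mentioned alongside it differs only in where the even/odd split is applied (output bins rather than input samples).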
APA, Harvard, Vancouver, ISO, and other styles
24

Shi, Junyan, and Qinliang Ning. "Research on the Innovation of Music Teaching in Universities Based on Artificial Intelligence Technology." Security and Communication Networks 2022 (May 30, 2022): 1–9. http://dx.doi.org/10.1155/2022/3140370.

Full text
Abstract:
At present, music education in colleges is in a period of rapid development in China. At the same time, music education in universities is facing innovation and reform of teaching modes. How to improve music education in colleges and universities has become an important issue for music teachers in colleges. Music teaching and management activities for students in general universities can enrich students’ talents and expand their knowledge. It can also help them develop a positive emotional psychology and develop positive and healthy character characteristics, both of which are vital for college students’ healthy development. Innovation and modification in music teaching and management activities for students is the only way to increase the quality and effectiveness of music teaching for nonarts students and to promote the overall quality of students. Based on this, this paper proposes an innovative research method of college music teaching based on artificial intelligence technology. The method introduces a fuzzy evaluation algorithm to establish a two-level teaching evaluation index system and calculates the weights of each index based on fuzzy mathematical theory. In data processing, the SVM algorithm in the field of data mining is used to classify all collected teaching evaluation data in advance through supervised learning, which significantly improves the efficiency of data processing. The experimental results show that the model in this paper can well assess the quality of music teaching in colleges and universities and play a role in promoting the progress of music teaching in colleges and universities.
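The fuzzy evaluation step described in this abstract reduces, at each level of the index system, to weighting membership vectors. A minimal sketch follows; the weights, grade set, and membership numbers are illustrative assumptions, not values taken from the paper:

```python
def fuzzy_evaluate(weights, R):
    """Fuzzy comprehensive evaluation: B = W . R (weighted membership per grade)."""
    n_grades = len(R[0])
    B = [sum(w * row[g] for w, row in zip(weights, R)) for g in range(n_grades)]
    total = sum(B) or 1.0
    return [b / total for b in B]  # normalize so grade memberships sum to 1

# Three teaching indicators, four grades (excellent, good, fair, poor).
W = [0.5, 0.3, 0.2]                 # indicator weights from fuzzy-math theory
R = [[0.6, 0.3, 0.1, 0.0],          # membership of indicator 1 in each grade
     [0.4, 0.4, 0.2, 0.0],
     [0.2, 0.5, 0.2, 0.1]]
B = fuzzy_evaluate(W, R)
grade = max(range(4), key=B.__getitem__)  # maximum-membership principle
```

In a two-level system, the same operation is applied first within each group of sub-indicators and then across the group-level results.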
APA, Harvard, Vancouver, ISO, and other styles
25

LEBRUN-GUILLAUD, GÉRALDINE, BARBARA TILLMANN, and TIMOTHY JUSTUS. "PERCEPTION OF TONAL AND TEMPORAL STRUCTURES IN CHORD SEQUENCES BY PATIENTS WITH CEREBELLAR DAMAGE." Music Perception 25, no. 4 (April 1, 2008): 271–83. http://dx.doi.org/10.1525/mp.2008.25.4.271.

Full text
Abstract:
OUR STUDY INVESTIGATED THE PERCEPTION of pitch and time dimensions in chord sequences by patients with cerebellar damage. In eight-chord sequences, tonal relatedness and temporal regularity of the chords were manipulated and their processing was tested with indirect and direct investigation methods (i.e., priming paradigm in Experiment 1; subjective judgments of completion and temporal regularity in Experiments 2 and 3). Experiment 1 replicated a musical relatedness effect despite cerebellar damage (see Tillmann, Justus, & Bigand, 2008) and Experiment 2 extended it to completion judgments. This outcome suggests that an intact cerebellum is not mandatory to access tonal knowledge. However, data on temporal manipulations suggest that the cerebellum is involved in the processing of temporal regularities in music. The comparison between task performances obtained for the same sequences further suggests that the altered processing of temporal structures in patients impairs the rapid development of musical expectations on the time dimension.
APA, Harvard, Vancouver, ISO, and other styles
26

Magne, Cyrille, Daniele Schön, and Mireille Besson. "Musician Children Detect Pitch Violations in Both Music and Language Better than Nonmusician Children: Behavioral and Electrophysiological Approaches." Journal of Cognitive Neuroscience 18, no. 2 (February 1, 2006): 199–211. http://dx.doi.org/10.1162/jocn.2006.18.2.199.

Full text
Abstract:
The idea that extensive musical training can influence processing in cognitive domains other than music has received considerable attention from the educational system and the media. Here we analyzed behavioral data and recorded event-related brain potentials (ERPs) from 8-year-old children to test the hypothesis that musical training facilitates pitch processing not only in music but also in language. We used a parametric manipulation of pitch so that the final notes or words of musical phrases or sentences were congruous, weakly incongruous, or strongly incongruous. Musician children outperformed nonmusician children in the detection of the weak incongruity in both music and language. Moreover, the greatest differences in the ERPs of musician and nonmusician children were also found for the weak incongruity: whereas for musician children, early negative components developed in music and late positive components in language, no such components were found for nonmusician children. Finally, comparison of these results with previous ones from adults suggests that some aspects of pitch processing are in effect earlier in music than in language. Thus, the present results reveal positive transfer effects between cognitive domains and shed light on the time course and neural basis of the development of prosodic and melodic processing.
APA, Harvard, Vancouver, ISO, and other styles
27

Bolya, Mátyás. "AI-SUPPORTED PROCESSING OF HANDWRITTEN TRANSCRIPTIONS FOR HUNGARIAN FOLK SONGS IN A DIGITAL ENVIRONMENT." Ethnomusic 18, no. 1 (December 2022): 65–82. http://dx.doi.org/10.33398/2523-4846-2022-18-1-65-82.

Full text
Abstract:
My research focuses on creating an AI-supported Digital Research Environment (DRE) that helps analysing and systematizing folk music tunes with the help of the latest information theory and database management results. The study may be extended to the entire source material accumulated by researchers so far, thus integrating Hungarian ethnomusicology results of the last hundred years. In this way, new dimensions of structural analysis open up and a large amount of information can be processed that already exceeds the limits of human musical memory. Previous computerized music analysis experiments in Hungary have inadequately defined the role of artificial intelligence. In our case, the AI-supported digital environment that is the subject of the research does not work independently, because the researcher’s scientifically abstract thinking, preferences, and the recognition of characteristic melodic elements cannot yet be replaced by computer data processing. A crucial goal of the research is to precisely define the researcher’s role in musical data processing. Thus the attitude of researchers rejecting software support may change in favour of actually using our digital framework. For the first time in Hungarian folk music research history, a detailed and documented digital research environment can be created, integrating the useful, relevant software tools. We can map out data entry problems and define the standard format of the musical data suitable for mass input and analysis. If possible, we will replace the previously widely used optional data with scalable data to have a broader range of parametrization and search options, and their free combination allows us to study new scientific models. With DRE, the validity range of the previous scientific musical classification can be more precisely specified and the processing as well as classification of unreported melodies and the process of type creating can be significantly accelerated. The most significant debate in the previous research has been the dataset specification of analyses. I am convinced that only similarly processed tune-data-elements can be compared, so one of the most critical tasks is to determine the input data’s standard format and information density. As a first step, the digital conversion of the musical manuscript needs to be solved. International research has mainly led to results in the recognition of printed music, some of which can be used in the project, but many new developments are also needed. Keywords: AI-supported Digital Research Environment (DRE), Optical Music Recognition (OMR), Musical Manuscripts, Hungarian Folk Songs, scientific musical classification, ethnomusicology, digital archives, folklore database.
APA, Harvard, Vancouver, ISO, and other styles
28

Bolya, Mátyás. "AI-SUPPORTED PROCESSING OF HANDWRITTEN TRANSCRIPTIONS FOR HUNGARIAN FOLK SONGS IN A DIGITAL ENVIRONMENT." Ethnomusic 18, no. 1 (December 2022): 65–82. http://dx.doi.org/10.33398/2523-4846-2022-18-2-65-82.

Full text
Abstract:
My research focuses on creating an AI-supported Digital Research Environment (DRE) that helps analysing and systematizing folk music tunes with the help of the latest information theory and database management results. The study may be extended to the entire source material accumulated by researchers so far, thus integrating Hungarian ethnomusicology results of the last hundred years. In this way, new dimensions of structural analysis open up and a large amount of information can be processed that already exceeds the limits of human musical memory. Previous computerized music analysis experiments in Hungary have inadequately defined the role of artificial intelligence. In our case, the AI-supported digital environment that is the subject of the research does not work independently, because the researcher’s scientifically abstract thinking, preferences, and the recognition of characteristic melodic elements cannot yet be replaced by computer data processing. A crucial goal of the research is to precisely define the researcher’s role in musical data processing. Thus the attitude of researchers rejecting software support may change in favour of actually using our digital framework. For the first time in Hungarian folk music research history, a detailed and documented digital research environment can be created, integrating the useful, relevant software tools. We can map out data entry problems and define the standard format of the musical data suitable for mass input and analysis. If possible, we will replace the previously widely used optional data with scalable data to have a broader range of parametrization and search options, and their free combination allows us to study new scientific models. With DRE, the validity range of the previous scientific musical classification can be more precisely specified and the processing as well as classification of unreported melodies and the process of type creating can be significantly accelerated. The most significant debate in the previous research has been the dataset specification of analyses. I am convinced that only similarly processed tune-data-elements can be compared, so one of the most critical tasks is to determine the input data’s standard format and information density. As a first step, the digital conversion of the musical manuscript needs to be solved. International research has mainly led to results in the recognition of printed music, some of which can be used in the project, but many new developments are also needed. Keywords: AI-supported Digital Research Environment (DRE), Optical Music Recognition (OMR), Musical Manuscripts, Hungarian Folk Songs, scientific musical classification, ethnomusicology, digital archives, folklore database.
APA, Harvard, Vancouver, ISO, and other styles
29

Jin, Lin, and Qiu Lei Du. "The Systematic Study of Music Spectrum Display Based on FFT." Advanced Materials Research 945-949 (June 2014): 1764–67. http://dx.doi.org/10.4028/www.scientific.net/amr.945-949.1764.

Full text
Abstract:
Spectrum analysis decomposes a signal into the superposition of different frequency components and complex exponential components, analysing the signal from the frequency point of view. It is widely used in voice processing, image processing, digital audio, and seismic exploration. The FFT algorithm provides an effective solution for real-time processing in frequency domain analysis. For the music spectrum display system based on the FFT, the main work is as follows: First, the idea of the fast Fourier transform is briefly described, and, combined with the specific hardware and software implementations, the fast Fourier transform algorithm used by the system is described in detail. Then, the main hardware devices used in the system are described, including the STC12C5A60S2 chip, the display principle of the 32 × 64 LED dot matrix screen, and the use of 74LS595 and 74LS164 chips. The system is simple and easy to use; the data processing functions of the STC12C5A60S2 MCU can be used for spectrum analysis of the sound signal.
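Driving a 32-column dot-matrix display from an FFT magnitude spectrum, as this abstract describes, amounts to banding the bins and quantizing to bar heights. The sketch below is illustrative only; the band count, maximum bar height, and test spectrum are assumptions, and the actual system does this on the STC12C5A60S2 MCU:

```python
def band_levels(mags, n_cols=32, max_h=63):
    """Group spectrum magnitudes into display columns and quantize to bar heights."""
    per = max(1, len(mags) // n_cols)        # FFT bins averaged into each column
    bands = [sum(mags[i * per:(i + 1) * per]) / per for i in range(n_cols)]
    top = max(bands) or 1.0                  # avoid division by zero on silence
    return [round(b / top * max_h) for b in bands]

# 128 magnitude bins with a single dominant peak; bin 14 falls in column 3.
mags = [0.0] * 128
mags[14] = 8.0
levels = band_levels(mags)
```

Each returned value would then be drawn as a lit column of that height on the 32 × 64 matrix.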
APA, Harvard, Vancouver, ISO, and other styles
30

Zhang, Qi. "Network Audio Data and Music Composition Teaching Based on Heterogeneous Cellular Network." Computational Intelligence and Neuroscience 2022 (June 13, 2022): 1–12. http://dx.doi.org/10.1155/2022/9329856.

Full text
Abstract:
With the rapid development of services such as Industry 4.0 and the Internet of Vehicles, it is difficult for traditional cellular networks to meet network users’ future needs for quantification, diversification, and greenness. Heterogeneous cellular networks expand multiple micro-cell nodes and relay nodes under macro-cells to form a multilayer network architecture. Based on this, in the process of data transmission, the links have been repeatedly reduced, and at the same time, terminal power consumption has been reduced and the running system improved. This article uses the ratio of the capacity, energy consumption, and resource allocation of different cellular networks as the main means to optimize cost. Using graph theory, auction theory, and multi-objective optimization algorithms, we have conducted in-depth research on upstream and downstream wireless resource allocation, network relay deployment and transmission scheduling, MMW large-scale multi-antenna transmission technology, and base station energy management. A series of optimization schemes and algorithms are proposed. This dissertation is based on the research of educational system design theory in the field of educational technology so as to carry out the research of music education system design theory suitable for the nature of music subjects and learning and education characteristics. Based on the necessity and importance of music education system design theory, the research framework of music education system design theory is constructed in advance. The voice data acquisition system collects voice data through a network grabber and real-time recording and uses signal processing and pattern recognition technology to automatically classify the collected voice data into three categories: voice, environmental sound, and music.
After establishing the audio data deployment strategy, simulation method, and architecture design based on heterogeneous cellular network, this paper designs the corresponding music composition teaching system, mainly including score editing, viewing, and content display of the composition teaching system, and the final test shows that the system designed in this paper can be effectively used in various music school teaching combined with heterogeneous cellular networks.
APA, Harvard, Vancouver, ISO, and other styles
31

Gong, Tianzhuo, and Sibing Sun. "Feature Extraction of Music Signal Based on Adaptive Wave Equation Inversion." Advances in Mathematical Physics 2021 (October 22, 2021): 1–12. http://dx.doi.org/10.1155/2021/8678853.

Full text
Abstract:
The digitization, analysis, and processing technology of music signals are the core of digital music technology. There is generally a preprocessing process before the music signal processing. The preprocessing process usually includes antialiasing filtering, digitization, preemphasis, windowing, and framing. Songs in the popular wav format and MP3 format on the Internet are all songs that have been processed by digital technology and do not need to be digitalized. Preprocessing can affect the effectiveness and reliability of the feature parameter extraction of music signals. Since the music signal is a kind of voice signal, the processing of the voice is also applicable to the music signal. In the study of adaptive wave equation inversion, the traditional full-wave equation inversion uses the minimum mean square error between real data and simulated data as the objective function. The gradient direction is determined by the cross-correlation of the back propagation residual wave field and the forward simulation wave field with respect to the second derivative of time. When there is a big gap between the initial model and the formal model, the phenomenon of cycle jumping will inevitably appear. In this paper, adaptive wave equation inversion is used. This method adopts the idea of penalty function and introduces the Wiener filter to establish a dual objective function for the phase difference that appears in the inversion. This article discusses the calculation formulas of the accompanying source, gradient, and iteration step length and uses the conjugate gradient method to iteratively reduce the phase difference. In the test function group and the recorded music signal library, a large number of simulation experiments and comparative analysis of the music signal recognition experiment were performed on the extracted features, which verified the time-frequency analysis performance of the wave equation inversion and the improvement of the decomposition algorithm. 
The features extracted by the wave equation inversion have a higher recognition rate than the features extracted based on the standard decomposition algorithm, which verifies that the wave equation inversion has a better decomposition ability.
APA, Harvard, Vancouver, ISO, and other styles
32

McDonald, Thomas, Mark Robinson, and GuiYun Tian. "Spatial resolution enhancement of rotational-radar subsurface datasets using combined processing method." Journal of Physics: Conference Series 2090, no. 1 (November 1, 2021): 012001. http://dx.doi.org/10.1088/1742-6596/2090/1/012001.

Full text
Abstract:
Effective visualisation of railway tunnel subsurface features (e.g. voids, utilities) provides critical insight into structural health and underpins planning of essential targeted predictive maintenance. Subsurface visualisation here utilises a rotating ground penetrating radar antenna system for 360° point cloud data capture. This technology has been constructed by our industry partner Railview Ltd, and requires the development of complementary signal processing algorithms to improve feature localisation. The main novelty of this work is extension of Shrestha and Arai’s Combined Processing Method (CPM) to 360° Ground Penetrating Radar (360GPR) datasets, for first-time application in the context of railway tunnel structural health inspection. Initial experimental acquisition of a sample rotational transect for CPM enhancement is achieved by scanning a test section of tunnel sidewall - featuring predefined target geometry - with the rotating antenna. Next, frequency data separately undergo Inverse Fast Fourier Transform (IFFT) and Multiple Signal Classification (MUSIC) processing to recover temporal responses. Numerical implementation steps are explicitly provided for both MUSIC and two associated spatial smoothing algorithms, addressing an identified information deficit in the field. The described IFFT amplitude is combined with the high spatial resolution of MUSIC via the CPM relation. Finally, temporal responses are compared qualitatively and quantitatively, evidencing the significant enhancement capabilities of CPM.
APA, Harvard, Vancouver, ISO, and other styles
33

Deswarni, Deswarni, and Budiwirman Budiwirman. "MENINGKATKAN KEMAMPUAN SISWA MEMBACA NOTASI MUSIK DENGAN MENGGUNAKAN METODE DEMONSTRASI DALAM PEMBELAJARAN SENI MUSIK." Gorga : Jurnal Seni Rupa 8, no. 2 (November 22, 2019): 374. http://dx.doi.org/10.24114/gr.v8i2.15419.

Full text
Abstract:
Abstrak: Kemampuan membaca notasi musik dalam aktivitas siswa meliputi kegiatan memperhatikan guru dalam menerangkan pelajaran, bertanya, dan keberanian siswa maju ke depan kelas (Nasional, D.P., 2002); itu semua sangat kurang sekali ditemui di kelas XI IPS 1. Untuk itu perlu dilakukan suatu penelitian untuk mengetahui penyebabnya. Penelitian ini bertujuan untuk meningkatkan kemampuan siswa membaca notasi musik dalam pembelajaran seni musik, dengan target yang ingin dicapai adalah 75%. Tindakan yang diterapkan adalah dengan menggunakan metode demonstrasi. Data dikumpulkan dengan bantuan instrumen serta dilengkapi dengan observasi, selanjutnya diolah dengan teknik persentase untuk melihat kecenderungan-kecenderungan data setelah perlakuan diberikan. Jenis penelitian yang digunakan adalah penelitian tindakan kelas, sedang subjek penelitian ini adalah siswa kelas XI IPS 1 SMA Negeri 5 Pariaman dan perlakuan yang diberikan yaitu dengan menggunakan metode demonstrasi. Prosedur penelitian ini dilaksanakan dalam dua siklus; siklus I dilaksanakan 4 kali pertemuan dan siklus II dilaksanakan dalam 2 kali pertemuan dengan alokasi waktu satu kali pertemuan adalah 2 × 45 menit. Hasil penelitian dari data siklus I dalam meningkatkan kemampuan siswa membaca notasi musik: dalam kegiatan aktivitas siswa, memperhatikan guru yang aktif 79,41%, mengajukan pertanyaan yang aktif mencapai 29,41%, berani maju ke depan mencapai 44,11%. Sedangkan kemampuan siswa membaca notasi musik mencapai 59,32%. Data siklus II meningkatkan kemampuan siswa membaca notasi musik dalam kegiatan aktivitas siswa: memperhatikan guru 82,35%, mengajukan pertanyaan 64,71%, berani maju ke depan 70,59%. Sedangkan kemampuan siswa dalam membaca notasi musik mencapai 75,50%. Berdasarkan hasil pengolahan data dapat disimpulkan bahwa ada pengaruh tindakan yang diberikan terhadap peningkatan kemampuan siswa dalam membaca notasi musik dari siklus I ke siklus II, meningkat sebesar 16,18%.
Kata Kunci: notasi musik, metode demonstrasi. Abstract: The ability to read music notation in student activities includes paying attention to the teacher explaining the lesson, asking questions, and the courage of students to come forward in the classroom (Nasional, D.P., 2002); all of these were found to be very lacking in class XI IPS 1. For this reason, a study was needed to find out the cause. This study aims to improve students' ability to read music notation in learning music, with a target of 75%. The measure applied was the demonstration method. Data were collected with the help of instruments, complemented by observation, and then processed with percentage techniques to see data trends after the treatment was given. This research is classroom action research; the subjects were class XI IPS 1 students at SMA Negeri 5 Pariaman, and the treatment given was the demonstration method. The procedure was carried out in two cycles: cycle I was held in 4 meetings and cycle II in 2 meetings, with a time allocation of 2 × 45 minutes per meeting. In cycle I, active attention to the teacher reached 79.41%, active questioning reached 29.41%, and the courage to come forward reached 44.11%, while the students' ability to read music notation reached 32.17%. In cycle II, attention to the teacher reached 82.35%, questioning 64.71%, and the courage to come forward 70.59%, while the ability to read music notation reached 75.50%. Based on the results of data processing, it can be concluded that the treatment affected the improvement of students' ability to read music notation, which increased by 16.18% from cycle I to cycle II. Keywords: music notation, demonstration method.
APA, Harvard, Vancouver, ISO, and other styles
34

Tan, Si-hao, Hui-lin Zhou, Lei Xiang, and Zhen Shu. "An Automatic Framework Using Space-Time Processing and TR-MUSIC for Subsurface and Through-Wall Multitarget Imaging." International Journal of Antennas and Propagation 2012 (2012): 1–10. http://dx.doi.org/10.1155/2012/392308.

Full text
Abstract:
We present an automatic framework combining space-time signal processing with Time Reversal electromagnetic (EM) inversion for subsurface and through-wall multitarget imaging using electromagnetic waves. This framework is composed of a frequency-wavenumber (FK) filter to suppress the direct wave and medium bounce, an FK migration algorithm to automatically estimate the number of targets and identify target regions, which can be used to reduce the computational complexity of the subsequent imaging algorithm, and an EM inversion algorithm using Time Reversal Multiple Signal Classification (TR-MUSIC) to reconstruct hidden objects. The feasibility of the framework is demonstrated with simulated data generated by GPRMAX.
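The MUSIC step at the core of TR-MUSIC builds a pseudospectrum from the noise subspace of a covariance matrix. Below is a conventional narrowband MUSIC sketch for a uniform linear array; the array geometry, SNR, source angle, and grid are illustrative assumptions, and the paper's TR-MUSIC operates on EM multistatic data rather than a sensor array:

```python
import numpy as np

def music_spectrum(R, steering, n_sources):
    """MUSIC pseudospectrum: project steering vectors onto the noise subspace."""
    vals, vecs = np.linalg.eigh(R)            # eigenvalues in ascending order
    En = vecs[:, : R.shape[0] - n_sources]    # noise-subspace eigenvectors
    proj = En.conj().T @ steering             # (n_noise, n_angles)
    return 1.0 / np.sum(np.abs(proj) ** 2, axis=0)

# Simulated uniform linear array: one source at 20 degrees, light noise.
m, d, n_snap = 8, 0.5, 200                    # sensors, spacing (wavelengths), snapshots
rng = np.random.default_rng(0)
a_true = np.exp(-2j * np.pi * d * np.arange(m) * np.sin(np.radians(20.0)))
sig = rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap)
snaps = a_true[:, None] * sig
snaps += 0.01 * (rng.standard_normal((m, n_snap)) + 1j * rng.standard_normal((m, n_snap)))
R = snaps @ snaps.conj().T / n_snap           # sample covariance matrix

grid = np.arange(-90, 90.5, 0.5)              # candidate angles in degrees
A = np.exp(-2j * np.pi * d * np.arange(m)[:, None] * np.sin(np.radians(grid))[None, :])
est_deg = grid[np.argmax(music_spectrum(R, A, n_sources=1))]
```

The pseudospectrum peaks where a steering vector is (nearly) orthogonal to the noise subspace, which is what gives MUSIC its super-resolution compared with plain beamforming.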
APA, Harvard, Vancouver, ISO, and other styles
35

Legrady, George. "Culture, Data and Algorithmic Organization." Leonardo 45, no. 3 (June 2012): 286. http://dx.doi.org/10.1162/leon_a_00378.

Full text
Abstract:
The author presents his interactive digital installations of the past decade, featured in museums, media arts festivals and galleries, that engage the audience to contribute data that is then transformed into content and visually projected large scale in the exhibition space. Collected over time, the data occasions further data-mining, algorithmic processing, with visualization of the results.
APA, Harvard, Vancouver, ISO, and other styles
36

Aljanaki, Anna, Stefano Kalonaris, Gianluca Micchi, and Eric Nichols. "MCMA: A Symbolic Multitrack Contrapuntal Music Archive." Empirical Musicology Review 16, no. 1 (December 10, 2021): 99–105. http://dx.doi.org/10.18061/emr.v16i1.7637.

Full text
Abstract:
We present Multitrack Contrapuntal Music Archive (MCMA, available at https://mcma.readthedocs.io), a symbolic dataset of pieces specifically curated to comprise, for any given polyphonic work, independent voices. So far, MCMA consists only of pieces from the Baroque repertoire but we aim to extend it to other contrapuntal music. MCMA is FAIR-compliant and it is geared towards musicological tasks such as (computational) analysis or education, as it brings to the fore contrapuntal interactions by explicit and independent representation. Furthermore, it affords for a more apt usage of recent advances in the field of natural language processing (e.g., neural machine translation). For example, MCMA can be particularly useful in the context of language-based machine learning models for music generation. Despite its current modest size, we believe MCMA to be an important addition to online contrapuntal music databases, and we thus open it to contributions from the wider community, in the hope that MCMA can continue to grow beyond our efforts. In this article, we provide the rationale for this corpus, suggest possible use cases, offer an overview of the compiling process (data sourcing and processing), and present a brief statistical analysis of the corpus at the time of writing. Finally, future work that we endeavor to undertake is discussed.
APA, Harvard, Vancouver, ISO, and other styles
37

Dwisaputra, Indra, and Ocsirendi Ocsirendi. "Teknik Pengenalan Suara Musik Pada Robot Seni Tari." Manutech : Jurnal Teknologi Manufaktur 10, no. 02 (May 20, 2019): 35–39. http://dx.doi.org/10.33504/manutech.v10i02.66.

Full text
Abstract:
The dancing robot has become an annual competition in Indonesia that needs to be developed to improve robot performance. The dancing robot is a humanoid robot with 24 degrees of freedom. For 2018 the theme raised was "Remo Dancer Robot". Sound processing plays a very important role in dance robots: the robot dances in time with the rhythm of the music and stops dancing when the music is muted. The sound signal produced is still analog, so voice signals must be converted to digital data before they can be processed, using an analog-to-digital converter (ADC). ADC data are sampled at 254 samples per second. The sampled data are stored and grouped per second to classify the parts of the Remo Dance music. The result of this classification is a set of digital values that then serve as a reference to determine the movement of the robot, so the robot can recognize whether the music is in a mute or a play condition.
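The per-second grouping and play/mute classification described in this abstract can be sketched as an energy threshold over one-second blocks of ADC readings. The ADC midpoint and threshold values below are assumptions for illustration, not values from the paper:

```python
import math

RATE = 254  # ADC samples per second, as described in the abstract

def classify_seconds(adc, mid=512, thresh=200.0):
    """Label each 1-second block of ADC readings as 'play' or 'mute' by energy."""
    labels = []
    for start in range(0, len(adc) - RATE + 1, RATE):
        block = adc[start:start + RATE]
        # Mean squared deviation from the ADC midpoint approximates signal energy.
        energy = sum((s - mid) ** 2 for s in block) / RATE
        labels.append("play" if energy > thresh else "mute")
    return labels

# One second of a loud tone followed by one second of silence at the midpoint.
loud = [int(512 + 100 * math.sin(2 * math.pi * 8 * i / RATE)) for i in range(RATE)]
quiet = [512] * RATE
labels = classify_seconds(loud + quiet)
```

A real robot would poll this label each second and freeze its dance routine whenever the label switches to mute.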
APA, Harvard, Vancouver, ISO, and other styles
38

Węgrzyn, Justyna. "Granting of Consent by a Child for the Processing of Their Personal Data Within the Framework of Information Society Services." Przegląd Prawa Konstytucyjnego 67, no. 3 (June 30, 2022): 363–72. http://dx.doi.org/10.15804/ppk.2022.03.27.

Full text
Abstract:
For a long time, it has been observed that services available in the virtual world, such as social networks, gaming platforms, and music streaming services, attract the interest of internet users of all ages. These include children, who require special protection with regard to the processing of their personal data. These issues have been addressed by the EU legislator in Art. 8 of the GDPR. The purpose of this paper is to analyze the solutions adopted in Article 8 GDPR and to assess their application in practice.
39

Zhang, Hao-Lan, and Peng-Jiang Yu. "Design of Multimedia Vocal Music Education Data Integration System Based on Adaptive Genetic Algorithm." Security and Communication Networks 2021 (August 2, 2021): 1–9. http://dx.doi.org/10.1155/2021/3897696.

Full text
Abstract:
In order to solve the problems of low accuracy of data integration results, low integration efficiency, and easy confusion between different types of data in traditional methods, a multimedia vocal music education data integration system based on an adaptive genetic algorithm was designed. The system is divided into three parts: a data source management module, a system administrator module, and a database management module. The synchronized multimedia vocal education data are first handled by the data processing module and then integrated by the adaptive genetic algorithm. The experimental results show that the longest data transmission time of the system is 2.3 s, much lower than that of the traditional method, while the accuracy of the integration results is higher and the probability of data integration confusion is lower, all of which indicate that the designed system has better application performance.
40

Xu, Caibin, Hao Zuo, and Mingxi Deng. "Dispersive MUSIC algorithm for Lamb wave phased array." Smart Materials and Structures 31, no. 2 (January 21, 2022): 025033. http://dx.doi.org/10.1088/1361-665x/ac4874.

Full text
Abstract:
By controlling the excitation time delay on each element, a conventional phased array can physically focus signals transmitted by different elements on a desired point in turn. An alternative and time-saving strategy is that every element takes turns transmitting the excitation while the remaining elements receive the corresponding response signals — known as the full matrix capture (FMC) method for data acquisition — and the signals are then virtually focused on every desired point by post-processing. In this study, based on FMC, a dispersive multiple signal classification (MUSIC) algorithm for a Lamb wave phased array is developed to locate defects. Virtual time reversal is implemented to back-propagate the wave packets corresponding to the desired focusing point, and a window function is adopted to adaptively isolate the desired packets from the other components. Those wave packets are then forward-propagated to the original focusing point at a constant velocity. For every potential focusing point and all receivers, the virtual array focuses the signals from all transmitters so as to obtain the focusing signals. The MUSIC algorithm with the obtained focusing signals is adopted to achieve Lamb wave imaging. Benefiting from the post-processing operations, baseline subtraction and estimation of the number of scattering sources are no longer required in the proposed algorithm. Experiments on an aluminum plate with three artificial defects and a compact circular PZT array demonstrate the efficacy of the proposed algorithm.
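The core of any MUSIC-based imaging step is projecting candidate steering vectors onto the noise subspace of a covariance matrix. The sketch below is the generic narrowband MUSIC pseudospectrum for a uniform linear array — not the paper's dispersive Lamb-wave variant — with all array parameters chosen for illustration.

```python
import numpy as np

def music_spectrum(R, steering, n_sources):
    """Generic narrowband MUSIC pseudospectrum (illustrative, not the paper's
    dispersive variant).

    R: (M, M) sample covariance of the array snapshots.
    steering: (M, K) matrix whose columns are candidate steering vectors.
    """
    # Eigendecomposition; noise subspace = eigenvectors of the smallest eigenvalues.
    eigvals, eigvecs = np.linalg.eigh(R)
    En = eigvecs[:, :R.shape[0] - n_sources]        # noise subspace
    proj = En.conj().T @ steering                   # project steering vectors
    return 1.0 / np.sum(np.abs(proj) ** 2, axis=0)  # peaks at source directions

# Illustrative 8-element half-wavelength ULA with one source at 20 degrees.
M, d = 8, 0.5
angles_deg = np.arange(-90, 91)
A = np.exp(-2j * np.pi * d * np.outer(np.arange(M), np.sin(np.deg2rad(angles_deg))))
a_true = A[:, list(angles_deg).index(20)]           # steering vector of the source
R = np.outer(a_true, a_true.conj()) + 0.01 * np.eye(M)
P = music_spectrum(R, A, n_sources=1)
print(angles_deg[np.argmax(P)])  # 20
```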
41

Chen, Bo, Heung Kou, Bowen Hou, and Yanbing Zhou. "Music Feature Extraction Method Based on Internet of Things Technology and Its Application." Computational Intelligence and Neuroscience 2022 (April 18, 2022): 1–10. http://dx.doi.org/10.1155/2022/8615152.

Full text
Abstract:
Due to factors such as strong music specialization, complex music theory knowledge, and many variations, it is difficult to identify music features. We have developed a music feature identification system based on Internet of Things technology. The physical sensing layer of the system deploys audio sensors at various locations to capture the raw audio signal and performs audio signal processing and analysis using the TMS320VC5402 digital signal processor. The network transport layer transmits the processed audio signal to the music signal database in the application layer. The music feature analysis block in the application layer adopts a dynamic time warping algorithm and mel-frequency cepstrum coefficients to find the maximal similarity between the test template and the reference template, achieving music signal feature recognition, and identifies music tunes and music modes based on the recognition results. We have verified the system through experiments, and the results show that it operates consistently, can obtain high-quality music samples, and can extract good music features.
42

Xiang, Yuehua. "Analysis of Psychological Shaping Function of Music Education under the Background of Artificial Intelligence." Journal of Environmental and Public Health 2022 (September 9, 2022): 1–14. http://dx.doi.org/10.1155/2022/7162069.

Full text
Abstract:
In order to integrate intelligent technology into music teaching, this paper puts forward methods of using intelligent technology to optimize the music teaching system: enhancing the effectiveness of music psychological guidance and intelligent music creation, developing the Yi Guzheng platform, integrating online sparring technology, making rational use of the MOOC platform, and building a loop curriculum system around the "intelligent piano." The RBF algorithm has strong data processing ability, which ensures the operating quality of the music score and performance learning modules and effectively strengthens the training effect on musical psychological function. The artificial intelligence platform is more advanced in evaluation: it can use emotion to evaluate courses, grasp the course direction in advance, ensure the construction quality of psychological function, and improve the effect of music teaching. Taking 100 students as the sample, the proportion of time spent on music learning in the intelligent system is measured. The intelligent music platforms have collected various course resources with difficulty coefficients from 1 to 5, with ample picture and video resources, so they can provide students with comprehensive music psychological education and give full play to the teaching advantages of intelligent technology. 
The data show that the music score and performance modules viewed in March 2021 had high operation and resource download counts, indicating good operating condition, and the error correction accuracy of the system is greater than 99%, indicating strong error correction ability.
43

Takeda, Ryu, and Kazunori Komatani. "Noise-Robust MUSIC-Based Sound Source Localization Using Steering Vector Transformation for Small Humanoids." Journal of Robotics and Mechatronics 29, no. 1 (February 20, 2017): 26–36. http://dx.doi.org/10.20965/jrm.2017.p0026.

Full text
Abstract:
We focus on the problem of localizing soft/weak voices recorded by small humanoid robots, such as NAO. Sound source localization (SSL) for such robots requires fast processing and noise robustness owing to the restricted resources and the internal noise close to the microphones. Multiple signal classification using generalized eigenvalue decomposition (GEVD-MUSIC) is a promising method for SSL. It achieves noise robustness by whitening robot internal noise using prior noise information. However, whitening increases the computational cost and creates a direction-dependent bias in the localization score, which degrades the localization accuracy. We have thus developed a new implementation of GEVD-MUSIC based on steering vector transformation (TSV-MUSIC). Applying a transformation equivalent to whitening to the steering vectors in advance reduces the real-time computational cost of TSV-MUSIC. Moreover, normalization of the transformed vectors cancels the direction-dependent bias and improves the localization accuracy. Experiments using simulated data showed that TSV-MUSIC had the highest accuracy of the methods tested. An experiment using real recorded data showed that TSV-MUSIC outperformed GEVD-MUSIC and other MUSIC methods in terms of localization by about 4 points under low signal-to-noise-ratio conditions.
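The "transformation equivalent to whitening applied to steering vectors in advance, followed by normalization" can be sketched as below. This is only the linear-algebra idea as stated in the abstract, with an assumed noise covariance; the paper's actual microphone-array geometry and measured noise statistics are not reproduced here.

```python
import numpy as np

def transform_steering(K_noise, steering):
    """Pre-whiten steering vectors with the noise covariance, then normalize.

    K_noise: (M, M) Hermitian noise covariance (prior noise information).
    steering: (M, K) candidate steering vectors, one column per direction.
    """
    # K_noise^(-1/2) via eigendecomposition of the Hermitian noise covariance.
    w, V = np.linalg.eigh(K_noise)
    K_inv_sqrt = V @ np.diag(1.0 / np.sqrt(w)) @ V.conj().T
    T = K_inv_sqrt @ steering                 # whitened steering vectors
    # Normalization cancels the direction-dependent bias in the score.
    return T / np.linalg.norm(T, axis=0)

# With an identity noise covariance the transform reduces to normalization.
steering = np.exp(-1j * np.outer(np.arange(4), [0.0, 0.5, 1.0]))
T = transform_steering(np.eye(4), steering)
print(np.linalg.norm(T, axis=0))  # [1. 1. 1.]
```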
44

Lai, Wen. "Automatic Music Classification Model Based on Instantaneous Frequency and CNNs in High Noise Environment." Journal of Environmental and Public Health 2022 (September 21, 2022): 1–10. http://dx.doi.org/10.1155/2022/1317439.

Full text
Abstract:
Automatic music classification has significant research implications because it is the foundation for quick and efficient music resource retrieval and has a wide range of possible applications. In this study, DL is used to extract and categorize musical features, and a DL-based model for music feature extraction and classification is created. The instantaneous frequency and the short-time Fourier transform are used to estimate the sinusoidal components of a mixed music signal. Based on peak-frequency pairs, the DL algorithm then calculates multiple candidate pitch estimates for each frame, and the melody pitch sequence is obtained in accordance with the pitch profile duration and continuity characteristics. With this approach, the pitch can be calculated without reference to the fundamental frequency component. In light of CNNs' strengths in image processing, a music feature classification approach using the spectrogram as input data and a CNN as classifier is also proposed. Studies reveal that the model's classification and music feature extraction accuracies reach 94.18 percent and 95.66 percent, respectively. The outcomes demonstrate the efficiency of this technique for the extraction and classification of musical features, and it is well suited to music information retrieval.
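The spectrogram-as-image input that the CNN classifier consumes is just a log-magnitude STFT. A minimal sketch follows; the window type, FFT size, and hop length are illustrative defaults, not the paper's settings.

```python
import numpy as np

def spectrogram(signal, n_fft=512, hop=256):
    """Log-magnitude STFT spectrogram, the image-like input a CNN consumes."""
    window = np.hanning(n_fft)
    frames = [
        np.abs(np.fft.rfft(window * signal[i:i + n_fft]))
        for i in range(0, len(signal) - n_fft + 1, hop)
    ]
    # log1p compresses the dynamic range, as is common for CNN inputs.
    return np.log1p(np.array(frames).T)  # shape: (freq_bins, time_frames)

sr = 8000
t = np.arange(sr) / sr
spec = spectrogram(np.sin(2 * np.pi * 440 * t))  # one second of a 440 Hz tone
print(spec.shape)  # (257, 30)
```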
45

He, Xin. "A Random Matrix Network Model for the Network Teaching System of College Music Education Courses." Mathematical Problems in Engineering 2022 (September 17, 2022): 1–11. http://dx.doi.org/10.1155/2022/1827731.

Full text
Abstract:
How to improve the teaching management model has always been an important part of research on university music teaching management. Based on random matrix network theory, this paper builds a random matrix network model and uses an Internet/Intranet random matrix network to realize electronic, networked multimedia information management, making data queries more flexible and convenient. In the random matrix network environment, the security of the model network is greatly improved, which solves the expansion problem of the random matrix network. During the simulation process, the model integrates relevant image, audio, video, animation, and other multimedia processing and storage technologies. The system adopts object-oriented analysis and design ideas for the requirements, adopts the random matrix network architecture, uses Tomcat as the server with Microsoft's SQL Server database as back-end data support, and designs the Struts architecture in the MVC development mode to ensure good maintainability and enhanced data processing capabilities. The experimental results show that the response times for system login verification, adding users, downloading assignments, and checking results are each less than 2 seconds, and the response time for uploading courseware is less than 3 seconds. The separation of data and presentation effectively improves the scalability and maintainability of the system.
46

Li, Yanjing, and Xinyuan Liu. "An Intelligent Music Production Technology Based on Generation Confrontation Mechanism." Computational Intelligence and Neuroscience 2022 (February 10, 2022): 1–10. http://dx.doi.org/10.1155/2022/5083146.

Full text
Abstract:
In recent years, with deep neural networks becoming more and more mature, and especially after the proposal of the generative adversarial mechanism, academia has made many achievements in image, video, and text generation, and scholars have begun similar attempts in music generation. Based on existing theory and prior work, this paper studies music production and proposes an intelligent music production technology based on the generative adversarial mechanism to enrich research in the field of computer music generation. Taking generative-adversarial music generation as the research topic, the paper mainly studies the following: after examining existing music generation models based on generative adversarial networks (GANs), a time structure model for maintaining music coherence is proposed, which avoids manual input during generation and ensures the interdependence between tracks. The paper also studies and implements a generation method for discrete music events based on multiple tracks, including a multi-track correlation model and discrete processing. The Lakh MIDI dataset is studied and pre-processed to obtain the LMD piano-roll dataset, which is used in the music generation experiments of MCT-GAN. For multi-track music generation based on GANs, three models are studied and analyzed, and a multi-track music generation method based on CT-GAN is put forward, which mainly improves the existing GAN-based music generation model. Finally, the generation results of MCT-GAN are compared with those of Muse-GAN to reflect the improvement. Twenty listeners were selected to listen to the generated music and real music and distinguish between them; 
the evaluation concluded that the multi-track music generation based on CT-GAN is improved.
47

Jing, Zhao. "Design of Flute Music Remote Teaching System Based on Multi-Pass Scheduling Optimization." Computational Intelligence and Neuroscience 2022 (September 29, 2022): 1–12. http://dx.doi.org/10.1155/2022/1126785.

Full text
Abstract:
With the gradual development of the Internet industry, every aspect of people's life has been affected by the Internet, which plays an increasingly irreplaceable role in entertainment, office work, and other areas. Judging from the current situation, older Internet digital teaching systems have many problems, such as low artificial intelligence, weak information processing ability, and lack of effective learning ability. This paper designs a flute music remote teaching system that realizes remote music teaching and supports it in real time. The music learning system records users' access histories, operations and test completion data, online discussion and communication, users' interests, specialties and operating methods, learning progress and scoring, and so on. In addition, it explores and explains all the key steps required by the current distance education model and presents a sample of that model. On this basis, Internet algorithm programs are used for all key processing functions of the system; their use is interactive and automated, which greatly enhances the role of the education system. This article first discusses the system's unique and automated teaching modes, laying the cornerstone for further reform in this field.
48

Fan, Yue Bin. "Study on the Realization of Music Education and Teaching Management System of Universities." Advanced Materials Research 926-930 (May 2014): 4618–21. http://dx.doi.org/10.4028/www.scientific.net/amr.926-930.4618.

Full text
Abstract:
Aiming at developing a music education and teaching management system for universities, this study analyzes users' needs for such a system. The development process follows object-oriented analysis and design of the requirements, combined with extensive application of computer technology and the features of object-oriented design, ultimately yielding a teaching management system tailored to university music education. The system uses the J2EE architecture, with Tomcat as the server and Microsoft's SQL Server database as back-end support. The Struts MVC architecture design ensures good maintainability of the system while enhancing data processing capabilities, and the separation of data and view keeps the system extensible and easy to maintain.
49

Young, John P. "Networked music: bridging real and virtual space." Organised Sound 6, no. 2 (August 2001): 107–10. http://dx.doi.org/10.1017/s1355771801002059.

Full text
Abstract:
This paper describes an exploration of utilising the World Wide Web for interactive music. The origin of this investigation was the intermedia work Telemusic #1, by Randall Packer, which combined live performers with live public participation via the Web. During the event, visitors to the site navigated through a virtual interface, and while manipulating elements, projected their actions in the form of triggered sounds into the physical space. Simultaneously, the live audio performance was streamed back out to the Internet participants. Thus, anyone could take part in the collective realisation of the work and hear the musical results in real time. The underlying technology is, to our knowledge, the first standards-based implementation linking the Web with Cycling '74 MAX. Using only ECMAScript/JavaScript, Java, and the OTUDP external from UC Berkeley CNMAT, virtually any conceivable interaction with a Web page can send data to a MAX patch for processing. The code can also be readily adapted to work with Pd, jMAX and other network-enabled applications.
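The bridge the abstract describes — interface events serialized and sent as UDP datagrams to a patch listening via a network external such as OTUDP — can be sketched in a loopback form. The port number and the plain-text message format below are illustrative assumptions; the actual wire format used with MAX/OTUDP is not specified in the abstract.

```python
import socket

def send_trigger(event_name, value, host="127.0.0.1", port=7400):
    """Serialize an interface event and send it as one UDP datagram.

    Hypothetical message format: "<event_name> <value>" as plain text.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(f"{event_name} {value}".encode(), (host, port))
    sock.close()

# Loopback demonstration: a local listener standing in for the MAX patch.
listener = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
listener.bind(("127.0.0.1", 7400))
send_trigger("slider1", 0.75)
print(listener.recvfrom(1024)[0].decode())  # slider1 0.75
listener.close()
```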
50

Shahin, Antoine J., Laurel J. Trainor, Larry E. Roberts, Kristina C. Backer, and Lee M. Miller. "Development of Auditory Phase-Locked Activity for Music Sounds." Journal of Neurophysiology 103, no. 1 (January 2010): 218–29. http://dx.doi.org/10.1152/jn.00402.2009.

Full text
Abstract:
The auditory cortex undergoes functional and anatomical development that reflects specialization for learned sounds. In humans, auditory maturation is evident in transient auditory-evoked potentials (AEPs) elicited by speech or music. However, neural oscillations at specific frequencies are also known to play an important role in perceptual processing. We hypothesized that, if oscillatory activity in different frequency bands reflects different aspects of sound processing, the development of phase-locking to stimulus attributes at these frequencies may have different trajectories. We examined the development of phase-locking of oscillatory responses to music sounds and to pure tones matched to the fundamental frequency of the music sounds. Phase-locking for theta (4–8 Hz), alpha (8–14 Hz), lower-to-mid beta (14–25 Hz), and upper-beta and gamma (25–70 Hz) bands strengthened with age. Phase-locking in the upper-beta and gamma range matured later than in lower frequencies and was stronger for music sounds than for pure tones, likely reflecting the maturation of neural networks that code spectral complexity. Phase-locking for theta, alpha, and lower-to-mid beta was sensitive to temporal onset (rise time) sound characteristics. The data were also consistent with phase-locked oscillatory effects of acoustic (spectrotemporal) complexity and timbre familiarity. Future studies are called for to evaluate developmental trajectories for oscillatory activity, using stimuli selected to address hypotheses related to familiarity and spectral and temporal encoding suggested by the current findings.
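Phase-locking of the kind measured above is commonly quantified as the magnitude of the trial-averaged unit phasor at each frequency (a phase-locking value). The sketch below uses a plain FFT per trial; the paper's exact time-frequency decomposition and band definitions are not reproduced here.

```python
import numpy as np

def phase_locking_value(trials):
    """Inter-trial phase-locking per frequency bin.

    trials: (n_trials, n_samples) stimulus-locked epochs. Each trial's spectrum
    is reduced to unit phasors (phase kept, amplitude discarded); the magnitude
    of their mean is 1 for perfect phase-locking and near 0 for random phase.
    """
    spectra = np.fft.rfft(trials, axis=1)
    unit_phasors = spectra / np.abs(spectra)      # keep phase, discard amplitude
    return np.abs(unit_phasors.mean(axis=0))      # in [0, 1] per frequency bin

rng = np.random.default_rng(0)
n_trials, n = 50, 256
t = np.arange(n) / n
# An 8-cycle component with fixed phase across trials, buried in noise.
trials = np.sin(2 * np.pi * 8 * t) + rng.normal(0.0, 1.0, (n_trials, n))
plv = phase_locking_value(trials)
print(plv[8] > plv[20])  # True: the locked bin beats an unlocked one
```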
