A selection of scholarly literature on the topic "Music auto-tagging"
Format your source in the APA, MLA, Chicago, Harvard, and other citation styles
Consult the lists of current articles, books, dissertations, theses, and other scholarly sources on the topic "Music auto-tagging".
Next to every work in the list there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, if these are available in the metadata.
Journal articles on the topic "Music auto-tagging"
Akama, Taketo, Hiroaki Kitano, Katsuhiro Takematsu, Yasushi Miyajima, and Natalia Polouliakh. "Auxiliary self-supervision to metric learning for music similarity-based retrieval and auto-tagging." PLOS ONE 18, no. 11 (November 30, 2023): e0294643. http://dx.doi.org/10.1371/journal.pone.0294643.
Bengani, Shaleen, S. Vadivel, and J. Angel Arul Jothi. "Efficient Music Auto-Tagging with Convolutional Neural Networks." Journal of Computer Science 15, no. 8 (August 1, 2019): 1203–8. http://dx.doi.org/10.3844/jcssp.2019.1203.1208.
Song, Guangxiao, Zhijie Wang, Fang Han, Shenyi Ding, and Muhammad Ather Iqbal. "Music auto-tagging using deep Recurrent Neural Networks." Neurocomputing 292 (May 2018): 104–10. http://dx.doi.org/10.1016/j.neucom.2018.02.076.
Ellis, Katherine, Emanuele Coviello, Antoni B. Chan, and Gert Lanckriet. "A Bag of Systems Representation for Music Auto-Tagging." IEEE Transactions on Audio, Speech, and Language Processing 21, no. 12 (December 2013): 2554–69. http://dx.doi.org/10.1109/tasl.2013.2279318.
Lee, Jaehwan, Daekyeong Moon, Jik-Soo Kim, and Minkyoung Cho. "ATOSE: Audio Tagging with One-Sided Joint Embedding." Applied Sciences 13, no. 15 (August 6, 2023): 9002. http://dx.doi.org/10.3390/app13159002.
Shao, Xi, Zhiyong Cheng, and Mohan S. Kankanhalli. "Music auto-tagging based on the unified latent semantic modeling." Multimedia Tools and Applications 78, no. 1 (January 20, 2018): 161–76. http://dx.doi.org/10.1007/s11042-018-5632-2.
Song, Guangxiao, Zhijie Wang, Fang Han, Shenyi Ding, and Xiaochun Gu. "Music auto-tagging using scattering transform and convolutional neural network with self-attention." Applied Soft Computing 96 (November 2020): 106702. http://dx.doi.org/10.1016/j.asoc.2020.106702.
Lee, Jongpil, and Juhan Nam. "Multi-Level and Multi-Scale Feature Aggregation Using Pretrained Convolutional Neural Networks for Music Auto-Tagging." IEEE Signal Processing Letters 24, no. 8 (August 2017): 1208–12. http://dx.doi.org/10.1109/lsp.2017.2713830.
Yu, Yong-bin, Min-hui Qi, Yi-fan Tang, Quan-xin Deng, Feng Mai, and Nima Zhaxi. "A sample-level DCNN for music auto-tagging." Multimedia Tools and Applications, January 6, 2021. http://dx.doi.org/10.1007/s11042-020-10330-9.
Lin, Yi-Hsun, and Homer Chen. "Tag Propagation and Cost-Sensitive Learning for Music Auto-Tagging." IEEE Transactions on Multimedia, 2020, 1. http://dx.doi.org/10.1109/tmm.2020.3001521.
Повний текст джерелаДисертації з теми "Music auto-tagging"
Ibrahim, Karim M. "Personalized audio auto-tagging as proxy for contextual music recommendation." Electronic Thesis or Diss., Institut polytechnique de Paris, 2021. http://www.theses.fr/2021IPPAT039.
The exponential growth of online services and user data has changed how we interact with these services and how we explore and select new products. Hence, there is a growing need for methods that recommend the appropriate items to each user. In the case of music, it is even more important to recommend the right items at the right moment. It is well documented that the context, i.e. the listening situation of the user, strongly influences listening preferences. Hence, there has been increasing attention to developing context-aware recommendation systems. State-of-the-art approaches are sequence-based models that predict the tracks of the next session from the available contextual information. However, these approaches lack interpretability and operate as hit-or-miss systems with no room for user involvement. Additionally, few previous approaches have studied how the audio content relates to these situational influences, and fewer still make use of the audio content to provide contextual recommendations.
In this dissertation, we study the potential of using the audio content primarily to disambiguate listening situations, providing a pathway to interpretable recommendations based on the situation. First, we study the listening situations that influence or change users' listening preferences. We developed a semi-automated approach that links listened tracks to listening situations using playlist titles as a proxy. Through this approach, we collected datasets of music tracks labelled with their situational use. We then studied the use of music auto-taggers to identify potential listening situations from the audio content. These studies led to the conclusion that the situational use of a track is highly user-dependent.
Hence, we extended music auto-taggers to a user-aware model that makes personalized predictions. Our studies showed that including the user in the loop significantly improves the performance of predicting the situations. This user-aware music auto-tagger enabled us to tag a given track, through its audio content, with potential situational uses for a given user by leveraging that user's listening history. Finally, to employ this approach for a recommendation task, we needed a different method to predict the potential current situations of a given user. To this end, we developed a model that predicts the situation from the data transmitted from the user's device to the service and from the user's demographic information. Our evaluations show that the models successfully learn to discriminate the potential situations and rank them accordingly. By combining the two models, the auto-tagger and the situation predictor, we developed a framework that generates situational sessions in real time and proposes them to the user. This framework provides an alternative pathway to recommending situational sessions, alongside the service's primary sequential recommendation system, that is both interpretable and addresses the cold-start problem of recommending tracks based on their content.
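The way the abstract combines the two models can be sketched in outline. The following is a minimal, purely illustrative Python sketch: the situation scores, track tag scores, and function names are invented stand-ins, not the dissertation's actual learned models, which are trained on listening histories and device data.

```python
def rank_situations(situation_scores):
    """Rank candidate listening situations by predicted likelihood."""
    return sorted(situation_scores, key=situation_scores.get, reverse=True)

def situational_session(situation_scores, track_tag_scores, top_k=3):
    """Combine a situation predictor with a user-aware auto-tagger:
    pick the most likely current situation, then select the tracks whose
    personalized tag score for that situation is highest."""
    top_situation = rank_situations(situation_scores)[0]
    ranked_tracks = sorted(
        track_tag_scores,
        key=lambda t: track_tag_scores[t].get(top_situation, 0.0),
        reverse=True,
    )
    return top_situation, ranked_tracks[:top_k]

# Hypothetical model outputs for one user at one moment:
situations = {"workout": 0.7, "commute": 0.2, "sleep": 0.1}
tracks = {
    "track_a": {"workout": 0.9, "sleep": 0.1},
    "track_b": {"workout": 0.4, "commute": 0.8},
    "track_c": {"workout": 0.6, "sleep": 0.2},
}
print(situational_session(situations, tracks, top_k=2))
# → ('workout', ['track_a', 'track_c'])
```

The split into two models is what makes the recommendation interpretable: the system can state which situation it predicted and which situational tags justified each track.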
Semela, René. "Automatické tagování hudebních děl pomocí metod strojového učení" [Automatic tagging of musical works using machine learning methods]. Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2020. http://www.nusl.cz/ntk/nusl-413253.
Chiang, Yen-Lin, and 江衍霖. "Sketch-based Music Retrieval Based on Frame-level Auto-tagging Predictions." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/687mkw.
National Tsing Hua University, Department of Computer Science, academic year 105.
We proposed a novel and intuitive music retrieval interface that allows users to precisely search for music containing multiple localized social tags using simple sketches. For example, one may search for a "classical" music clip that also includes a segment with "violin", followed by another segment that simultaneously includes "slow" and "guitar"; such complex conditions can be expressed simply and correctly in the query. We also proposed a segment-level database of thousands of songs and its preprocessing algorithms for our music retrieval method, which leverages the predictions of Liu and Yang's deep-learning-based frame-level auto-tagging model. To assess how users feel about this system, we conducted a user study with a questionnaire and a demo website. Experimental results show that: i) the proposed sketch-based system outperforms the two non-sketch-based baselines we implemented in "interestingness" and "satisfaction in user experience"; ii) our proposed method is especially beneficial to multimedia content creators.
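The ordered, multi-tag matching the abstract describes can be sketched as follows. This is a minimal Python illustration against hypothetical frame-level tag probabilities; the data, function name, and threshold are invented, not the thesis's actual system.

```python
def matches_query(frame_tags, query, threshold=0.5):
    """Check whether a song's frame-level tag predictions contain the
    query's tag segments in order.  `frame_tags` is a list of
    {tag: probability} dicts, one per frame; `query` is an ordered list
    of tag sets, each of which must be active somewhere, in sequence."""
    frame = 0
    for required in query:
        # Advance until every tag of this segment exceeds the threshold.
        while frame < len(frame_tags) and not all(
            frame_tags[frame].get(tag, 0.0) >= threshold for tag in required
        ):
            frame += 1
        if frame == len(frame_tags):
            return False  # ran out of frames before satisfying this segment
        frame += 1  # the next segment must start after this match
    return True

# Hypothetical frame-level predictions for one clip:
song = [
    {"classical": 0.9},
    {"classical": 0.8, "violin": 0.7},
    {"slow": 0.9, "guitar": 0.8},
]
print(matches_query(song, [{"violin"}, {"slow", "guitar"}]))  # → True
print(matches_query(song, [{"slow", "guitar"}, {"violin"}]))  # → False
```

The second query fails because the segment order in the sketch does not match the order of the predicted tag segments in the clip, which is exactly the localized, ordered semantics a sketch-based query expresses.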
Syu, Sheng-Wei, and 徐陞瑋. "A Method of Music Auto-tagging Based on Audio and Lyric." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/9e8723.
National Cheng Kung University, Institute of Information Management, academic year 107.
With the development of the Internet and technology, online music platforms and music streaming services are booming, and the sheer volume of digital music confronts users with information overload. To address this problem, these platforms need comprehensive recommendation systems that use user information and metadata to help users search for, query, and discover new music. Social tags are known to help music recommendation systems make better recommendations. However, social tags suffer from tag sparsity and cold start, which limits their usefulness to a recommendation system. To overcome these problems, the shortage of tags must be supplemented by a music auto-tagging system. In the past, most research on auto-tagging analyzed audio only. However, many studies have shown that lyrics provide additional information to a music classification system and improve classification accuracy. This study proposes a music auto-tagging method that analyzes both audio and lyrics. We also experimented with different tag-classification architectures; the results show that the structure combining a late-fusion model with multi-task classification performs best.
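The late-fusion design the abstract refers to can be illustrated with a minimal sketch: each modality is classified separately and only the resulting per-tag scores are combined. The tags, scores, and equal weighting below are invented placeholders, not the thesis's trained model.

```python
def late_fusion(audio_scores, lyric_scores, weight=0.5):
    """Late fusion: the audio branch and the lyrics branch each produce
    per-tag scores, which are merged afterwards (here, a weighted average).
    Tags missing from one modality default to 0.0."""
    tags = set(audio_scores) | set(lyric_scores)
    return {
        tag: weight * audio_scores.get(tag, 0.0)
        + (1 - weight) * lyric_scores.get(tag, 0.0)
        for tag in tags
    }

# Hypothetical per-tag outputs of the two branches for one song:
audio = {"rock": 0.8, "sad": 0.3}
lyrics = {"rock": 0.6, "sad": 0.9}
fused = late_fusion(audio, lyrics)
# The "sad" tag is rescued by the strong lyrics signal even though the
# audio branch alone is unsure -- the benefit late fusion aims for.
```

In contrast to early fusion, which concatenates raw features before a single classifier, this arrangement lets each branch use an architecture suited to its modality.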
Yang, Jia-Hong, and 楊佳虹. "A Robust Music Auto-Tagging Technique Using Audio Fingerprinting and Deep Convolutional Neural Networks." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/vagbse.
National Chung Hsing University, Department of Computer Science and Engineering, academic year 106.
Music tags are a set of descriptive keywords that convey high-level information about a music clip, such as emotions (sadness, happiness), genres (jazz, classical), and instruments (guitar, vocal). Since tags provide high-level information from the listener's perspective, they can be used for music discovery and recommendation. However, in music information retrieval (MIR), researchers traditionally needed expertise in acoustics or engineering design to analyze and organize music information, classify it by musical form, and then provide retrieval. In recent years, attention has shifted to feature learning and deep architectures, reducing the required engineering work and the need for prior knowledge. Deep convolutional neural networks have been explored successfully in the image, text, and speech fields. However, previous methods for music auto-tagging cannot accurately discriminate the type of music when the audio is distorted or noisy, which leads to poor tagging results. We therefore propose a robust method for music auto-tagging. First, the music is converted into a spectrogram, and the important information, i.e. the audio fingerprint, is extracted from it. The fingerprint is then used as the input to a convolutional neural network to learn features, yielding good music-search results. Experimental results demonstrate the robustness of the proposed method.
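The spectrogram-peak style of audio fingerprinting mentioned in the abstract can be sketched as follows. This is a toy pure-Python illustration; the neighborhood size and the tiny spectrogram are assumptions, and a real system would compute the spectrogram from audio and feed the resulting peak map to the CNN.

```python
def fingerprint_peaks(spectrogram, neighborhood=1):
    """Keep only the local-maximum bins of a (time x frequency) magnitude
    spectrogram -- a common, noise-robust "constellation" fingerprint.
    Returns a binary map of the same shape."""
    rows, cols = len(spectrogram), len(spectrogram[0])
    peaks = [[0] * cols for _ in range(rows)]
    for t in range(rows):
        for f in range(cols):
            value = spectrogram[t][f]
            is_peak = value > 0  # ignore silent bins
            for dt in range(-neighborhood, neighborhood + 1):
                for df in range(-neighborhood, neighborhood + 1):
                    if (dt, df) == (0, 0):
                        continue
                    tt, ff = t + dt, f + df
                    if 0 <= tt < rows and 0 <= ff < cols \
                            and spectrogram[tt][ff] >= value:
                        is_peak = False  # a neighbor is at least as loud
            if is_peak:
                peaks[t][f] = 1
    return peaks

# Toy 4x4 magnitude spectrogram with two clear peaks:
spec = [
    [0.1, 0.2, 0.1, 0.0],
    [0.2, 0.9, 0.2, 0.1],
    [0.1, 0.2, 0.1, 0.8],
    [0.0, 0.1, 0.2, 0.3],
]
peaks = fingerprint_peaks(spec)
print(peaks[1][1], peaks[2][3])  # → 1 1
```

Because moderate distortion or additive noise rarely moves the loudest bins, such a peak map is a far more stable CNN input than the raw spectrogram, which is the robustness the abstract claims.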
Arjannikov, Tom. "Positive unlabeled learning applications in music and healthcare." Thesis, 2021. http://hdl.handle.net/1828/13376.
Book chapters on the topic "Music auto-tagging"
Yu, Yongbin, Yifan Tang, Minhui Qi, Feng Mai, Quanxin Deng, and Zhaxi Nima. "Music Auto-Tagging with Capsule Network." In Communications in Computer and Information Science, 292–98. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-7981-3_20.
Dabral, Tanmaya Shekhar, Amala Sanjay Deshmukh, and Aruna Malapati. "A Multi-scale Convolutional Neural Network Architecture for Music Auto-Tagging." In Advances in Intelligent Systems and Computing, 757–64. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-1592-3_60.
Ju, Chen, Lixin Han, and Guozheng Peng. "Music Auto-tagging Based on Attention Mechanism and Multi-label Classification." In Lecture Notes in Electrical Engineering, 245–55. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-6963-7_23.
Nguyen Cao Minh, Khanh, Thinh Dang An, Vu Tran Quang, and Van Hoai Tran. "Comparative Study on Different Approaches in Optimizing Threshold for Music Auto-Tagging." In Future Data and Security Engineering, 237–50. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-03192-3_18.
Повний текст джерелаТези доповідей конференцій з теми "Music auto-tagging"
Liu, Jen-Yu, and Yi-Hsuan Yang. "Event Localization in Music Auto-tagging." In MM '16: ACM Multimedia Conference. New York, NY, USA: ACM, 2016. http://dx.doi.org/10.1145/2964284.2964292.
Joung, Haesun, and Kyogu Lee. "Music Auto-Tagging with Robust Music Representation Learned via Domain Adversarial Training." In ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2024. http://dx.doi.org/10.1109/icassp48485.2024.10447318.
Ibrahim, Karim M., Jimena Royo-Letelier, Elena V. Epure, Geoffroy Peeters, and Gael Richard. "Audio-Based Auto-Tagging With Contextual Tags for Music." In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2020. http://dx.doi.org/10.1109/icassp40776.2020.9054352.
Yang, Yi-Hsuan. "Towards real-time music auto-tagging using sparse features." In 2013 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2013. http://dx.doi.org/10.1109/icme.2013.6607505.
Lin, Yi-Hsun, Chia-Hao Chung, and Homer H. Chen. "Playlist-Based Tag Propagation for Improving Music Auto-Tagging." In 2018 26th European Signal Processing Conference (EUSIPCO). IEEE, 2018. http://dx.doi.org/10.23919/eusipco.2018.8553318.
Yeh, Chin-Chia Michael, Ju-Chiang Wang, Yi-Hsuan Yang, and Hsin-Min Wang. "Improving music auto-tagging by intra-song instance bagging." In ICASSP 2014 - 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2014. http://dx.doi.org/10.1109/icassp.2014.6853977.
Yan, Qin, Cong Ding, Jingjing Yin, and Yong Lv. "Improving music auto-tagging with trigger-based context model." In ICASSP 2015 - 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2015. http://dx.doi.org/10.1109/icassp.2015.7178006.
Silva, Diego Furtado, Angelo Cesar Mendes da Silva, Luís Felipe Ortolan, and Ricardo Marcondes Marcacini. "On Generalist and Domain-Specific Music Classification Models and Their Impacts on Brazilian Music Genre Recognition." In Simpósio Brasileiro de Computação Musical. Sociedade Brasileira de Computação - SBC, 2021. http://dx.doi.org/10.5753/sbcm.2021.19427.
Yin, Jingjing, Qin Yan, Yong Lv, and Qiuyu Tao. "Music auto-tagging with variable feature sets and probabilistic annotation." In 2014 9th International Symposium on Communication Systems, Networks & Digital Signal Processing (CSNDSP). IEEE, 2014. http://dx.doi.org/10.1109/csndsp.2014.6923816.
Wang, Shuo-Yang, Ju-Chiang Wang, Yi-Hsuan Yang, and Hsin-Min Wang. "Towards time-varying music auto-tagging based on CAL500 expansion." In 2014 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2014. http://dx.doi.org/10.1109/icme.2014.6890290.