Selection of scientific literature on the topic "Music auto-tagging"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the lists of current articles, books, theses, reports, and other scientific sources on the topic "Music auto-tagging".
Next to each work in the bibliography, the option "Add to bibliography" is available. Use it, and the bibliographic reference for the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scientific publication as a PDF and read an online annotation of the work, provided the relevant parameters are available in the metadata.
Journal articles on the topic "Music auto-tagging"
Akama, Taketo, Hiroaki Kitano, Katsuhiro Takematsu, Yasushi Miyajima, and Natalia Polouliakh. "Auxiliary self-supervision to metric learning for music similarity-based retrieval and auto-tagging". PLOS ONE 18, no. 11 (November 30, 2023): e0294643. http://dx.doi.org/10.1371/journal.pone.0294643.
Bengani, Shaleen, S. Vadivel, and J. Angel Arul Jothi. "Efficient Music Auto-Tagging with Convolutional Neural Networks". Journal of Computer Science 15, no. 8 (August 1, 2019): 1203–8. http://dx.doi.org/10.3844/jcssp.2019.1203.1208.
Song, Guangxiao, Zhijie Wang, Fang Han, Shenyi Ding, and Muhammad Ather Iqbal. "Music auto-tagging using deep Recurrent Neural Networks". Neurocomputing 292 (May 2018): 104–10. http://dx.doi.org/10.1016/j.neucom.2018.02.076.
Ellis, Katherine, Emanuele Coviello, Antoni B. Chan, and Gert Lanckriet. "A Bag of Systems Representation for Music Auto-Tagging". IEEE Transactions on Audio, Speech, and Language Processing 21, no. 12 (December 2013): 2554–69. http://dx.doi.org/10.1109/tasl.2013.2279318.
Lee, Jaehwan, Daekyeong Moon, Jik-Soo Kim, and Minkyoung Cho. "ATOSE: Audio Tagging with One-Sided Joint Embedding". Applied Sciences 13, no. 15 (August 6, 2023): 9002. http://dx.doi.org/10.3390/app13159002.
Shao, Xi, Zhiyong Cheng, and Mohan S. Kankanhalli. "Music auto-tagging based on the unified latent semantic modeling". Multimedia Tools and Applications 78, no. 1 (January 20, 2018): 161–76. http://dx.doi.org/10.1007/s11042-018-5632-2.
Song, Guangxiao, Zhijie Wang, Fang Han, Shenyi Ding, and Xiaochun Gu. "Music auto-tagging using scattering transform and convolutional neural network with self-attention". Applied Soft Computing 96 (November 2020): 106702. http://dx.doi.org/10.1016/j.asoc.2020.106702.
Lee, Jongpil, and Juhan Nam. "Multi-Level and Multi-Scale Feature Aggregation Using Pretrained Convolutional Neural Networks for Music Auto-Tagging". IEEE Signal Processing Letters 24, no. 8 (August 2017): 1208–12. http://dx.doi.org/10.1109/lsp.2017.2713830.
Yu, Yong-bin, Min-hui Qi, Yi-fan Tang, Quan-xin Deng, Feng Mai, and Nima Zhaxi. "A sample-level DCNN for music auto-tagging". Multimedia Tools and Applications, January 6, 2021. http://dx.doi.org/10.1007/s11042-020-10330-9.
Lin, Yi-Hsun, and Homer Chen. "Tag Propagation and Cost-Sensitive Learning for Music Auto-Tagging". IEEE Transactions on Multimedia, 2020, 1. http://dx.doi.org/10.1109/tmm.2020.3001521.
Dissertations and theses on the topic "Music auto-tagging"
Ibrahim, Karim M. "Personalized audio auto-tagging as proxy for contextual music recommendation". Electronic thesis or dissertation, Institut polytechnique de Paris, 2021. http://www.theses.fr/2021IPPAT039.
The exponential growth of online services and user data has changed how we interact with those services and how we explore and select new products. Hence, there is a growing need for methods that recommend the appropriate items to each user. In the case of music, it is even more important to recommend the right items at the right moment. It is well documented that the context, i.e. the listening situation of the users, strongly influences their listening preferences. Hence, there has been increasing attention to developing contextual recommendation systems. State-of-the-art approaches are sequence-based models that aim to predict the tracks of the next session from the available contextual information. However, these approaches lack interpretability and operate hit-or-miss, with no room for user involvement. Additionally, few previous approaches have studied how the audio content relates to these situational influences, and even fewer make use of the audio content to provide contextual recommendations.
In this dissertation, we study the potential of using the audio content primarily to disambiguate listening situations, providing a pathway to interpretable, situation-based recommendations.
First, we study the potential listening situations that influence or change the listening preferences of users. We developed a semi-automated approach that links listened tracks to listening situations, using playlist titles as a proxy. Through this approach, we collected datasets of music tracks labelled with their situational use. We then studied the use of music auto-taggers to identify potential listening situations from the audio content. These studies led to the conclusion that the situational use of a track is highly user-dependent. Hence, we extended the music auto-taggers to a user-aware model that makes personalized predictions. Our studies showed that including the user in the loop significantly improves the performance of predicting the situations. This user-aware music auto-tagger enabled us to tag a given track, through its audio content, with its potential situational use for a given user by leveraging that user's listening history.
Finally, to employ this approach in a recommendation task, we needed a further method to predict the potential current situation of a given user. To this end, we developed a model that predicts the situation from the data transmitted from the user's device to the service and from the user's demographic information. Our evaluations show that the models successfully learn to discriminate the potential situations and rank them accordingly. By combining the two models, the auto-tagger and the situation predictor, we developed a framework that generates situational sessions in real time and proposes them to the user. This framework provides an alternative, interpretable pathway to recommending situational sessions alongside the primary sequential recommendation system deployed by the service, and it addresses the cold-start problem of recommending tracks based on their content.
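As a rough illustration of the user-aware auto-tagger described in this abstract, the following minimal PyTorch sketch combines an audio embedding with a learned user embedding for multi-label situation prediction. All layer choices, dimensions, and names are hypothetical placeholders, not the dissertation's actual architecture.

import torch
import torch.nn as nn

class UserAwareAutoTagger(nn.Module):
    # Minimal sketch: fuse an audio clip embedding with a user embedding
    # to predict situational tags (multi-label). Dimensions are hypothetical.
    def __init__(self, n_users=1000, user_dim=64, n_situations=10):
        super().__init__()
        self.audio_encoder = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 4)),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> (batch, 64)
        )
        # The embedding stands in for a summary of the user's listening history.
        self.user_embedding = nn.Embedding(n_users, user_dim)
        self.classifier = nn.Linear(64 + user_dim, n_situations)

    def forward(self, mel, user_id):
        a = self.audio_encoder(mel)            # (batch, 64)
        u = self.user_embedding(user_id)       # (batch, user_dim)
        logits = self.classifier(torch.cat([a, u], dim=1))
        return torch.sigmoid(logits)           # per-situation probabilities

# Usage: one 96-band mel-spectrogram of 256 frames, tagged for user 42.
model = UserAwareAutoTagger()
probs = model(torch.randn(1, 1, 96, 256), torch.tensor([42]))

Such a model would typically be trained with a binary cross-entropy loss per situation tag, since several situations can apply to the same track at once.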
Semela, René. "Automatické tagování hudebních děl pomocí metod strojového učení" [Automatic tagging of musical works using machine learning methods]. Master's thesis, Vysoké učení technické v Brně, Fakulta elektrotechniky a komunikačních technologií, 2020. http://www.nusl.cz/ntk/nusl-413253.
Chiang, Yen-Lin (江衍霖). "Sketch-based Music Retrieval Based on Frame-level Auto-tagging Predictions". Thesis, 2017. http://ndltd.ncl.edu.tw/handle/687mkw.
Der volle Inhalt der Quelle國立清華大學
資訊工程學系所
105
We propose a novel, intuitive music retrieval interface that lets users precisely search for music containing multiple localized social tags using nothing more than simple sketches. For example, one may search for a "classical" music clip that also includes a segment with "violin", followed by another segment that simultaneously includes "slow" and "guitar"; such complex conditions can be expressed simply and correctly in the query. We also propose a segment-level database of thousands of songs, together with preprocessing algorithms for our retrieval method, which leverages the predictions of Liu and Yang's deep learning-based frame-level auto-tagging model. To assess how users feel about the system, we conducted a user study with a questionnaire and a demo website. Experimental results show that: (i) the proposed sketch-based system outperforms the two non-sketch-based baselines we implemented in "interestingness" and "satisfaction with the user experience"; (ii) our proposed method is especially beneficial to multimedia content creators.
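The abstract does not detail how a sketched query is matched against the frame-level tag predictions; the sketch below shows one plausible scoring scheme, assuming each song comes with a precomputed (frames x tags) probability matrix from a frame-level auto-tagger. The function name, the uniform segmentation of the song, and the max-over-frames scoring are assumptions for illustration only.

import numpy as np

def segment_score(frame_probs, query, tag_index):
    # frame_probs : (n_frames, n_tags) per-frame tag probabilities,
    #               e.g. from a frame-level auto-tagging model.
    # query       : ordered list of segments, each a set of tags that must
    #               co-occur, e.g. [{"classical"}, {"violin"}, {"slow", "guitar"}].
    # tag_index   : maps tag name -> column index in frame_probs.
    n_frames = frame_probs.shape[0]
    # Split the song uniformly into as many segments as the query has.
    bounds = np.linspace(0, n_frames, len(query) + 1, dtype=int)
    score = 1.0
    for seg, (lo, hi) in zip(query, zip(bounds[:-1], bounds[1:])):
        cols = [tag_index[t] for t in seg]
        # A segment matches if some frame inside it has all its tags active
        # at once; take the best frame's joint probability.
        joint = frame_probs[lo:hi, cols].prod(axis=1)
        score *= joint.max()
    return score

# Usage with random stand-in predictions for a 1000-frame song.
tags = {"classical": 0, "violin": 1, "slow": 2, "guitar": 3}
probs = np.random.rand(1000, 4)
print(segment_score(probs, [{"classical"}, {"violin"}, {"slow", "guitar"}], tags))

Ranking all songs in the segment-level database by this score would then yield the retrieval list for the sketched query.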
Syu, Sheng-Wei (徐陞瑋). "A Method of Music Auto-tagging Based on Audio and Lyric". Thesis, 2019. http://ndltd.ncl.edu.tw/handle/9e8723.
Der volle Inhalt der Quelle國立成功大學
資訊管理研究所
107
With the development of the Internet and related technology, online music platforms and music streaming services are booming, and the sheer volume of digital music confronts users with information overload. To address this, platforms construct comprehensive recommendation systems from user information and metadata that help users search for, query, and discover new music. Social tags are known to help music recommendation systems make better recommendations. However, social tags suffer from tag sparsity and the cold-start problem, which limit their usefulness to recommenders. To mitigate these problems, the missing tags must be supplied by a music auto-tagging system. Most past research on auto-tagging analyzed audio alone, yet many studies have shown that lyrics give a music classification system additional information and improve classification accuracy. This study proposes a music auto-tagging method that analyzes both audio and lyrics. We also experimented with different tag-classification architectures; the results show that the architecture combining a late-fusion model with multi-task classification performs best.
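To make the late-fusion, multi-task idea concrete, here is a minimal PyTorch sketch in which audio and lyrics are encoded separately and fused only at classification time, ahead of two task-specific heads. The encoders, dimensions, and the choice of genre/mood as the two tasks are hypothetical stand-ins, not the thesis architecture.

import torch
import torch.nn as nn

class LateFusionTagger(nn.Module):
    # Sketch of late fusion with multi-task heads: audio and lyrics are
    # encoded separately and merged just before classification.
    def __init__(self, vocab_size=20000, emb_dim=128, n_genres=10, n_moods=8):
        super().__init__()
        self.audio_branch = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> (batch, 32)
        )
        # Mean-pooled bag-of-words embedding of the lyric tokens.
        self.lyric_branch = nn.EmbeddingBag(vocab_size, emb_dim)
        self.genre_head = nn.Linear(32 + emb_dim, n_genres)
        self.mood_head = nn.Linear(32 + emb_dim, n_moods)

    def forward(self, mel, lyric_token_ids):
        fused = torch.cat([self.audio_branch(mel),
                           self.lyric_branch(lyric_token_ids)], dim=1)
        # One multi-label output per task, trained jointly (e.g. with BCE).
        return (torch.sigmoid(self.genre_head(fused)),
                torch.sigmoid(self.mood_head(fused)))

# Usage: two clips, each a 96x256 mel-spectrogram plus 64 lyric token ids.
model = LateFusionTagger()
genre_p, mood_p = model(torch.randn(2, 1, 96, 256), torch.randint(0, 20000, (2, 64)))

The design point of late fusion is that each modality keeps its own encoder until the very end, so either branch can be swapped or pretrained independently.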
Yang, Jia-Hong (楊佳虹). "A Robust Music Auto-Tagging Technique Using Audio Fingerprinting and Deep Convolutional Neural Networks". Thesis, 2018. http://ndltd.ncl.edu.tw/handle/vagbse.
Der volle Inhalt der Quelle國立中興大學
資訊科學與工程學系
106
Music tags are descriptive keywords that convey high-level information about a music clip, such as emotions (sadness, happiness), genres (jazz, classical), and instruments (guitar, vocals). Because tags capture high-level information from the listener's perspective, they can be used for music discovery and recommendation. In music information retrieval (MIR), however, researchers have traditionally needed expertise in acoustics or engineering design to analyze and organize music information, classify it by musical form, and then support retrieval. In recent years, attention has shifted to feature learning and deep architectures, reducing the engineering effort and prior knowledge required. Deep convolutional neural networks have been explored successfully in the image, text, and speech fields. However, previous music auto-tagging methods cannot accurately discriminate the type of music when the audio is distorted or noisy, which leads to poor tagging results. We therefore propose a robust method for music auto-tagging. First, the music is converted into a spectrogram and the salient information, the audio fingerprint, is extracted from it. The fingerprint then serves as the input from which a convolutional neural network learns features, yielding good music retrieval results. Experimental results demonstrate the robustness of the proposed method.
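The pipeline described here (spectrogram, then audio fingerprint, then CNN) suggests a peak-based fingerprint in the spirit of classic audio fingerprinting; the sketch below extracts such a binary peak map with SciPy, which could then be fed to a CNN as in the sketch after the Ibrahim abstract above. The neighborhood size and the mean-energy threshold are arbitrary placeholders, not the thesis's values.

import numpy as np
from scipy.ndimage import maximum_filter

def peak_fingerprint(spectrogram, neighborhood=15):
    # Keep only local maxima (salient time-frequency peaks) and zero out
    # everything else; peaks tend to survive distortion and added noise.
    local_max = maximum_filter(spectrogram, size=neighborhood) == spectrogram
    # Suppress low-energy peaks so silence does not produce spurious maxima.
    threshold = spectrogram.mean()
    peaks = local_max & (spectrogram > threshold)
    return np.where(peaks, 1.0, 0.0)   # sparse binary map fed to the CNN

# Usage: a random stand-in for a (freq bins x frames) magnitude spectrogram.
spec = np.abs(np.random.randn(96, 256))
fingerprint = peak_fingerprint(spec)   # same shape as the input spectrogram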
Arjannikov, Tom. "Positive unlabeled learning applications in music and healthcare". Thesis, University of Victoria, 2021. http://hdl.handle.net/1828/13376.
Book chapters on the topic "Music auto-tagging"
Yu, Yongbin, Yifan Tang, Minhui Qi, Feng Mai, Quanxin Deng, and Zhaxi Nima. "Music Auto-Tagging with Capsule Network". In Communications in Computer and Information Science, 292–98. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-7981-3_20.
Dabral, Tanmaya Shekhar, Amala Sanjay Deshmukh, and Aruna Malapati. "A Multi-scale Convolutional Neural Network Architecture for Music Auto-Tagging". In Advances in Intelligent Systems and Computing, 757–64. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-1592-3_60.
Ju, Chen, Lixin Han, and Guozheng Peng. "Music Auto-tagging Based on Attention Mechanism and Multi-label Classification". In Lecture Notes in Electrical Engineering, 245–55. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-6963-7_23.
Nguyen Cao Minh, Khanh, Thinh Dang An, Vu Tran Quang, and Van Hoai Tran. "Comparative Study on Different Approaches in Optimizing Threshold for Music Auto-Tagging". In Future Data and Security Engineering, 237–50. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-03192-3_18.
Conference papers on the topic "Music auto-tagging"
Liu, Jen-Yu, and Yi-Hsuan Yang. "Event Localization in Music Auto-tagging". In MM '16: ACM Multimedia Conference. New York, NY, USA: ACM, 2016. http://dx.doi.org/10.1145/2964284.2964292.
Joung, Haesun, and Kyogu Lee. "Music Auto-Tagging with Robust Music Representation Learned via Domain Adversarial Training". In ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2024. http://dx.doi.org/10.1109/icassp48485.2024.10447318.
Ibrahim, Karim M., Jimena Royo-Letelier, Elena V. Epure, Geoffroy Peeters, and Gael Richard. "Audio-Based Auto-Tagging With Contextual Tags for Music". In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2020. http://dx.doi.org/10.1109/icassp40776.2020.9054352.
Yang, Yi-Hsuan. "Towards real-time music auto-tagging using sparse features". In 2013 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2013. http://dx.doi.org/10.1109/icme.2013.6607505.
Lin, Yi-Hsun, Chia-Hao Chung, and Homer H. Chen. "Playlist-Based Tag Propagation for Improving Music Auto-Tagging". In 2018 26th European Signal Processing Conference (EUSIPCO). IEEE, 2018. http://dx.doi.org/10.23919/eusipco.2018.8553318.
Yeh, Chin-Chia Michael, Ju-Chiang Wang, Yi-Hsuan Yang, and Hsin-Min Wang. "Improving music auto-tagging by intra-song instance bagging". In ICASSP 2014 - 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2014. http://dx.doi.org/10.1109/icassp.2014.6853977.
Yan, Qin, Cong Ding, Jingjing Yin, and Yong Lv. "Improving music auto-tagging with trigger-based context model". In ICASSP 2015 - 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2015. http://dx.doi.org/10.1109/icassp.2015.7178006.
Silva, Diego Furtado, Angelo Cesar Mendes da Silva, Luís Felipe Ortolan, and Ricardo Marcondes Marcacini. "On Generalist and Domain-Specific Music Classification Models and Their Impacts on Brazilian Music Genre Recognition". In Simpósio Brasileiro de Computação Musical. Sociedade Brasileira de Computação - SBC, 2021. http://dx.doi.org/10.5753/sbcm.2021.19427.
Yin, Jingjing, Qin Yan, Yong Lv, and Qiuyu Tao. "Music auto-tagging with variable feature sets and probabilistic annotation". In 2014 9th International Symposium on Communication Systems, Networks & Digital Signal Processing (CSNDSP). IEEE, 2014. http://dx.doi.org/10.1109/csndsp.2014.6923816.
Wang, Shuo-Yang, Ju-Chiang Wang, Yi-Hsuan Yang, and Hsin-Min Wang. "Towards time-varying music auto-tagging based on CAL500 expansion". In 2014 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2014. http://dx.doi.org/10.1109/icme.2014.6890290.