Ready-made bibliography on the topic "Auto-tagging musical"
Create correct references in APA, MLA, Chicago, Harvard, and many other styles
Browse lists of current articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "Auto-tagging musical".
Doctoral dissertations on the topic "Auto-tagging musical"
Ibrahim, Karim M. "Personalized audio auto-tagging as proxy for contextual music recommendation". Electronic thesis or dissertation, Institut polytechnique de Paris, 2021. http://www.theses.fr/2021IPPAT039.
The exponential growth of online services and user data has changed how we interact with various services and how we explore and select new products. Hence, there is a growing need for methods that recommend the appropriate items to each user. In the case of music, it is especially important to recommend the right items at the right moment. It has been well documented that the context, i.e. the listening situation of the user, strongly influences their listening preferences. Hence, there has been increasing attention to developing context-aware recommendation systems. State-of-the-art approaches are sequence-based models that aim to predict the tracks in the next session using available contextual information. However, these approaches lack interpretability and operate as hit-or-miss, with no room for user involvement. Additionally, few previous approaches have studied how the audio content relates to these situational influences, and fewer still have made use of the audio content to provide contextual recommendations. Hence, these approaches suffer from a lack of both interpretability and content-awareness.

In this dissertation, we study the potential of using the audio content primarily to disambiguate listening situations, providing a pathway to interpretable, situation-based recommendations. First, we study the listening situations that influence or change users' listening preferences. We developed a semi-automated approach that links listened tracks to listening situations, using playlist titles as a proxy. Through this approach, we collected datasets of music tracks labelled with their situational use. We then studied the use of music auto-taggers to identify potential listening situations from the audio content. These studies led to the conclusion that the situational use of a track is highly user-dependent.
Hence, we extended music auto-taggers to a user-aware model that makes personalized predictions. Our studies showed that including the user in the loop significantly improves the performance of predicting situations. This user-aware music auto-tagger enabled us to tag a given track, through its audio content, with its potential situational uses for a given user by leveraging that user's listening history. Finally, to employ this approach for a recommendation task, we needed a different method to predict the potential current situations of a given user. To this end, we developed a model that predicts the situation from the data transmitted from the user's device to the service and from the user's demographic information. Our evaluations show that the models can successfully learn to discriminate the potential situations and rank them accordingly. By combining the two models, the auto-tagger and the situation predictor, we developed a framework that generates situational sessions in real time and proposes them to the user. This framework provides an alternative pathway to recommending situational sessions, alongside the primary sequential recommendation system deployed by the service, that is both interpretable and addresses the cold-start problem of recommending tracks based on their content.
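The abstract above describes a two-component framework: a situation predictor ranks a user's likely current listening situations, and a user-aware auto-tagger scores tracks for a given situation; combining the two yields a situational session. The following is a minimal sketch of how those pieces might fit together. All function names, situation labels, and scores are invented for illustration; in the dissertation the scores come from learned models over audio, device, and demographic data, not hand-set values.

```python
from typing import Dict, List, Tuple


def rank_situations(context_scores: Dict[str, float]) -> List[str]:
    """Situation-predictor stand-in: rank candidate situations by
    a (here hypothetical) likelihood score for the current context."""
    return sorted(context_scores, key=context_scores.get, reverse=True)


def tag_tracks(track_tags: Dict[str, Dict[str, float]],
               situation: str, k: int = 2) -> List[str]:
    """Auto-tagger stand-in: pick the k tracks whose per-user tag
    score for the given situation is highest."""
    return sorted(track_tags,
                  key=lambda t: track_tags[t].get(situation, 0.0),
                  reverse=True)[:k]


def situational_session(context_scores: Dict[str, float],
                        track_tags: Dict[str, Dict[str, float]],
                        k: int = 2) -> Tuple[str, List[str]]:
    """Combine the two models: predict the top situation, then build
    a session of tracks tagged for that situation."""
    top_situation = rank_situations(context_scores)[0]
    return top_situation, tag_tracks(track_tags, top_situation, k)
```

The split mirrors the interpretability argument in the abstract: because the situation is predicted explicitly before tracks are selected, the system can surface the predicted situation to the user rather than recommending tracks as an opaque sequence model would.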
Conference abstracts on the topic "Auto-tagging musical"
Silva, Diego Furtado, Angelo Cesar Mendes da Silva, Luís Felipe Ortolan, and Ricardo Marcondes Marcacini. "On Generalist and Domain-Specific Music Classification Models and Their Impacts on Brazilian Music Genre Recognition". In Simpósio Brasileiro de Computação Musical. Sociedade Brasileira de Computação - SBC, 2021. http://dx.doi.org/10.5753/sbcm.2021.19427.
Liu, Jen-Yu, and Yi-Hsuan Yang. "Event Localization in Music Auto-tagging". In MM '16: ACM Multimedia Conference. New York, NY, USA: ACM, 2016. http://dx.doi.org/10.1145/2964284.2964292.
Ibrahim, Karim M., Jimena Royo-Letelier, Elena V. Epure, Geoffroy Peeters, and Gael Richard. "Audio-Based Auto-Tagging With Contextual Tags for Music". In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2020. http://dx.doi.org/10.1109/icassp40776.2020.9054352.
Yang, Yi-Hsuan. "Towards real-time music auto-tagging using sparse features". In 2013 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2013. http://dx.doi.org/10.1109/icme.2013.6607505.
Lin, Yi-Hsun, Chia-Hao Chung, and Homer H. Chen. "Playlist-Based Tag Propagation for Improving Music Auto-Tagging". In 2018 26th European Signal Processing Conference (EUSIPCO). IEEE, 2018. http://dx.doi.org/10.23919/eusipco.2018.8553318.
Yeh, Chin-Chia Michael, Ju-Chiang Wang, Yi-Hsuan Yang, and Hsin-Min Wang. "Improving music auto-tagging by intra-song instance bagging". In ICASSP 2014 - 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2014. http://dx.doi.org/10.1109/icassp.2014.6853977.
Yan, Qin, Cong Ding, Jingjing Yin, and Yong Lv. "Improving music auto-tagging with trigger-based context model". In ICASSP 2015 - 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2015. http://dx.doi.org/10.1109/icassp.2015.7178006.
Joung, Haesun, and Kyogu Lee. "Music Auto-Tagging with Robust Music Representation Learned via Domain Adversarial Training". In ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2024. http://dx.doi.org/10.1109/icassp48485.2024.10447318.
Yin, Jingjing, Qin Yan, Yong Lv, and Qiuyu Tao. "Music auto-tagging with variable feature sets and probabilistic annotation". In 2014 9th International Symposium on Communication Systems, Networks & Digital Signal Processing (CSNDSP). IEEE, 2014. http://dx.doi.org/10.1109/csndsp.2014.6923816.
Wang, Shuo-Yang, Ju-Chiang Wang, Yi-Hsuan Yang, and Hsin-Min Wang. "Towards time-varying music auto-tagging based on CAL500 expansion". In 2014 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2014. http://dx.doi.org/10.1109/icme.2014.6890290.