Academic literature on the topic 'Deep supervised learning'
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Deep supervised learning.'
Journal articles on the topic "Deep supervised learning"
Kim, Taeheon, Jaewon Hur, and Youkyung Han. "Very High-Resolution Satellite Image Registration Based on Self-supervised Deep Learning." Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography 41, no. 4 (August 31, 2023): 217–25. http://dx.doi.org/10.7848/ksgpc.2023.41.4.217.
AlZuhair, Mona Suliman, Mohamed Maher Ben Ismail, and Ouiem Bchir. "Soft Semi-Supervised Deep Learning-Based Clustering." Applied Sciences 13, no. 17 (August 27, 2023): 9673. http://dx.doi.org/10.3390/app13179673.
Wei, Xiang, Xiaotao Wei, Xiangyuan Kong, Siyang Lu, Weiwei Xing, and Wei Lu. "FMixCutMatch for semi-supervised deep learning." Neural Networks 133 (January 2021): 166–76. http://dx.doi.org/10.1016/j.neunet.2020.10.018.
Zhou, Shusen, Hailin Zou, Chanjuan Liu, Mujun Zang, Zhiwang Zhang, and Jun Yue. "Deep extractive networks for supervised learning." Optik 127, no. 20 (October 2016): 9008–19. http://dx.doi.org/10.1016/j.ijleo.2016.07.007.
Fong, A. C. M., and G. Hong. "Boosted Supervised Intensional Learning Supported by Unsupervised Learning." International Journal of Machine Learning and Computing 11, no. 2 (March 2021): 98–102. http://dx.doi.org/10.18178/ijmlc.2021.11.2.1020.
Hu, Yu, and Hongmin Cai. "Hypergraph-Supervised Deep Subspace Clustering." Mathematics 9, no. 24 (December 15, 2021): 3259. http://dx.doi.org/10.3390/math9243259.
Fu, Zheren, Yan Li, Zhendong Mao, Quan Wang, and Yongdong Zhang. "Deep Metric Learning with Self-Supervised Ranking." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 2 (May 18, 2021): 1370–78. http://dx.doi.org/10.1609/aaai.v35i2.16226.
Dutta, Ujjal Kr, Mehrtash Harandi, and C. Chandra Shekhar. "Semi-Supervised Metric Learning: A Deep Resurrection." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 7279–87. http://dx.doi.org/10.1609/aaai.v35i8.16894.
Bharati, Aparna, Richa Singh, Mayank Vatsa, and Kevin W. Bowyer. "Detecting Facial Retouching Using Supervised Deep Learning." IEEE Transactions on Information Forensics and Security 11, no. 9 (September 2016): 1903–13. http://dx.doi.org/10.1109/tifs.2016.2561898.
Caron, Mathilde. "Self-supervised learning of deep visual representations." Bulletin 1024, no. 21 (April 2023): 171–72. http://dx.doi.org/10.48556/sif.1024.21.171.
Dissertations / Theses on the topic "Deep supervised learning"
Tran, Khanh-Hung. "Semi-supervised dictionary learning and Semi-supervised deep neural network." Thesis, université Paris-Saclay, 2021. http://www.theses.fr/2021UPASP014.
Since the 2010s, machine learning (ML) has been one of the topics attracting the most attention from scientific researchers. Many ML models have demonstrated their ability to produce excellent results in various fields such as computer vision, natural language processing, and robotics. However, most of these models use supervised learning, which requires massive annotation. The objective of this thesis is therefore to study and propose semi-supervised learning approaches, which have many advantages over supervised learning. Instead of directly applying a semi-supervised classifier to the original representation of the data, we use models that integrate a representation learning stage before the classification stage, to better adapt to the non-linearity of the data. In the first part, we revisit the tools that allow us to build our semi-supervised models. We present two types of model that include representation learning in their architecture, dictionary learning and neural networks, together with the optimization methods for each type, and, in the case of neural networks, we discuss the problem of adversarial examples. We then present techniques that often accompany semi-supervised learning, such as manifold learning and pseudo-labeling. In the second part, we work on dictionary learning. We outline three general steps for building a semi-supervised model from a supervised one, then propose a semi-supervised model for classification problems with a low number of training samples (both labelled and unlabelled). On the one hand, we preserve the data structure from the original space in the sparse-code space (manifold learning), which acts as a regularizer on the sparse codes; on the other hand, we integrate a semi-supervised classifier in the sparse-code space.
In addition, we perform sparse coding for test samples while also taking the preservation of the data structure into account. This method improves accuracy compared to other existing methods. In the third part, we work on neural network models. We propose an approach called "manifold attack" that reinforces manifold learning. Inspired by adversarial learning, it finds virtual points that disrupt the manifold learning cost function (by maximizing it) while the model parameters are fixed; the model parameters are then updated by minimizing this cost function while the virtual points are fixed. We also provide criteria for limiting the space to which the virtual points belong and a method for initializing them. This approach improves not only accuracy but also robustness to adversarial examples. Finally, we analyze the similarities and differences, as well as the advantages and disadvantages, of dictionary learning and neural network models, and we propose perspectives on both: for semi-supervised dictionary learning, techniques inspired by neural network models; for neural networks, integrating the manifold attack into generative models.
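The alternating "manifold attack" scheme described above can be sketched in NumPy with a toy linear embedding and a simplified anchor/virtual-point cost. The loss, step sizes, and the eps-ball projection here are illustrative assumptions, not the thesis's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def manifold_loss(W, X, Xv):
    # Discrepancy between anchors and their virtual points in the embedded space.
    D = (X - Xv) @ W
    return float(np.sum(D * D))

def manifold_attack_step(W, X, Xv, eps=0.5, lr_v=0.1, lr_w=0.005):
    # Step 1: maximize the cost over the virtual points (model parameters fixed),
    # then project them back into an eps-ball around their anchors.
    grad_v = -2.0 * (X - Xv) @ W @ W.T      # d loss / d Xv
    Xv = Xv + lr_v * grad_v                 # gradient *ascent* on the virtual points
    Xv = X + np.clip(Xv - X, -eps, eps)     # limit the space of the virtual points
    # Step 2: minimize the same cost over the model parameters (points fixed).
    grad_w = 2.0 * (X - Xv).T @ (X - Xv) @ W
    W = W - lr_w * grad_w
    return W, Xv

X = rng.normal(size=(32, 8))                # anchor samples
W = rng.normal(size=(8, 4))                 # toy linear embedding model
Xv = X + 0.01 * rng.normal(size=X.shape)    # virtual points initialized near anchors
for _ in range(50):
    W, Xv = manifold_attack_step(W, X, Xv)
```

In the thesis the cost is a full manifold-regularization term on a deep network; the min-max structure and the projection of virtual points are the part this sketch tries to convey.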
Roychowdhury, Soumali. "Supervised and Semi-Supervised Learning in Vision using Deep Neural Networks." Thesis, IMT Alti Studi Lucca, 2019. http://e-theses.imtlucca.it/273/1/Roychowdhury_phdthesis.pdf.
Geiler, Louis. "Deep learning for churn prediction." Electronic Thesis or Diss., Université Paris Cité, 2022. http://www.theses.fr/2022UNIP7333.
The problem of churn prediction has traditionally been a field of study for marketing. However, in the wake of technological advancements, more and more data can be collected to analyze customer behavior. This manuscript was built in that frame, with a particular focus on machine learning. We first looked at the supervised learning problem and demonstrated that logistic regression, random forest, and XGBoost, taken as an ensemble, offer the best results in terms of Area Under the Curve (AUC) among a wide range of traditional machine learning approaches. We also showed that re-sampling approaches are only effective in a local setting, not a global one. Subsequently, we aimed at fine-tuning our predictions by relying on customer segmentation: some customers leave a service because of a cost they deem too high, others because of a problem with customer service. Our approach was enriched with a novel deep neural network architecture that combines auto-encoders with the k-means approach. Going further, we focused on self-supervised learning in the tabular domain. More precisely, the proposed architecture was inspired by the SimCLR approach, altered with the Mean-Teacher model from semi-supervised learning. We showcased, through the win matrix, the superiority of our approach with respect to the state of the art. Ultimately, we applied what was built in this manuscript in an industrial setting, that of Brigad: we alleviated the company's churn problem with a random forest optimized through grid search and threshold optimization, and we proposed to interpret the results with SHAP (SHapley Additive exPlanations).
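As a toy illustration of the ensemble idea in the abstract, probabilities from three base models can be averaged (soft voting) and scored with a rank-based AUC. The three probability vectors below are synthetic stand-ins, not the thesis's models or data:

```python
import numpy as np

def auc(y_true, scores):
    # Rank-based AUC: probability that a random positive outranks a random negative.
    order = np.argsort(scores)
    ranks = np.empty_like(order, dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = y_true == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=200)  # synthetic churn labels

# Synthetic probabilities standing in for logistic regression, random forest, XGBoost.
p_lr  = np.clip(y * 0.60 + rng.normal(0.20, 0.2, 200), 0, 1)
p_rf  = np.clip(y * 0.50 + rng.normal(0.25, 0.2, 200), 0, 1)
p_xgb = np.clip(y * 0.55 + rng.normal(0.22, 0.2, 200), 0, 1)

p_ens = (p_lr + p_rf + p_xgb) / 3.0  # soft-voting ensemble
```

Averaging probabilities is the simplest ensembling scheme; weighted averaging or stacking are common refinements when the base models are not equally calibrated.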
Khan, Umair. "Self-supervised deep learning approaches to speaker recognition." Doctoral thesis, Universitat Politècnica de Catalunya, 2021. http://hdl.handle.net/10803/671496.
Recent advances in Deep Learning (DL) for speaker recognition are improving the performance of traditional i-vector-based systems. In i-vector-based speaker recognition, cosine distance and probabilistic linear discriminant analysis (PLDA) are the two most widely used scoring techniques. The former is unsupervised, but the latter requires speaker-labeled data, which is not always readily available in practice. This creates a large performance gap between the two scoring techniques. The question is: how can this gap be closed without using speaker labels in the background data? In this thesis, this problem is addressed using DL techniques that avoid, or limit, the use of labeled data. Three DL-based proposals are made. In the first, a vector representation of speech based on the restricted Boltzmann machine (RBM) is proposed for speaker clustering and speaker tracking in television programs. Experiments on the AGORA database show that, in speaker clustering, the RBM vectors yield a relative improvement of 12%; in speaker tracking, the RBM vectors, used only in the speaker identification stage, show relative improvements of 11% (cosine) and 7% (PLDA). In the second, DL is used to increase the discriminative power of i-vectors in speaker verification, and autoencoders are employed in several ways. First, an autoencoder is used to pre-train a deep neural network (DNN) on a large amount of unlabeled background data, after which a DNN classifier is trained on a small labeled subset. Second, an autoencoder is trained to transform i-vectors into a new, more discriminative representation. Training is based on each i-vector's nearest neighbors, chosen in an unsupervised way. Evaluation on the VoxCeleb-1 database shows a relative improvement of 21% over i-vectors with the first system and 42% with the second; moreover, if the background data are used at test time, the relative improvement reaches 53%. In the third, a self-supervised end-to-end speaker verification system is trained. Impostors are used together with nearest neighbors to form client/impostor pairs without supervision. The architecture is based on a convolutional neural network (CNN) encoder trained as a two-branch siamese network; in addition, a three-branch network is trained with the triplet loss to extract speaker embeddings. The results show that both the end-to-end system and the speaker embeddings, despite being unsupervised, perform comparably to a supervised baseline. Each of the proposed approaches has its pros and cons. The best result was obtained with the nearest-neighbor autoencoder, with the drawback that it needs the background i-vectors at test time. Autoencoder pre-training for the DNN does not have this problem, but it is a semi-supervised approach, i.e., it requires speaker labels for a small part of the background data. The third proposal has neither of these two limitations and performs reasonably well.
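The unsupervised client/impostor pair selection described above can be sketched with cosine similarity over background i-vectors. This is a schematic reconstruction under simple assumptions (random vectors, most/least similar neighbor as the pairing rule), not the thesis's exact procedure:

```python
import numpy as np

def cosine_sim(A, B):
    # Cosine similarity between every row of A and every row of B.
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    return A @ B.T

rng = np.random.default_rng(2)
ivectors = rng.normal(size=(100, 20))   # unlabeled background i-vectors (synthetic)

S = cosine_sim(ivectors, ivectors)

S_pos = S.copy()
np.fill_diagonal(S_pos, -np.inf)        # exclude self-matches
nearest = np.argmax(S_pos, axis=1)      # assumed same-speaker ("client") partners

S_neg = S.copy()
np.fill_diagonal(S_neg, np.inf)
farthest = np.argmin(S_neg, axis=1)     # assumed different-speaker ("impostor") partners
```

Pairs formed this way carry label noise, which is why the thesis combines them with a siamese/triplet training objective rather than treating them as ground truth.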
Zhang, Kun. "Supervised and Self-Supervised Learning for Video Object Segmentation in the Compressed Domain." Thesis, The University of Sydney, 2022. https://hdl.handle.net/2123/29361.
Liu, Dongnan. "Supervised and Unsupervised Deep Learning-based Biomedical Image Segmentation." Thesis, The University of Sydney, 2021. https://hdl.handle.net/2123/24744.
Full textHan, Kun. "Supervised Speech Separation And Processing." The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1407865723.
Nasrin, Mst Shamima. "Pathological Image Analysis with Supervised and Unsupervised Deep Learning Approaches." University of Dayton / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1620052562772676.
Full textKarlsson, Erik, and Gilbert Nordhammar. "Naive semi-supervised deep learning med sammansättning av pseudo-klassificerare." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-17177.
Örnberg, Oscar. "Semi-Supervised Methods for Classification of Hyperspectral Images with Deep Learning." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-288726.
Full textHyperspektrala bilder (HSI) kan avslöja fler mönster än vanliga bilder. Dimensionaliteten är hög med ett bredare spektrum för varje pixel. Få dataset som är etiketter finns, medan rådata finns i överflöd. Detta gör att semi-vägledd inlärning är väl anpassad för HSI klassificering. Genom att utnyttja nya rön inom djupinlärning och semi-vägledda methods, två modeller kallade FixMatch och Mean Teacher adapterades för att mäta effektiviteten hos konsekvens regularisering metoder inom semi-vägledd inlärning på HSI klassifikation. Traditionella maskininlärnings metoder så som SVM, Random Forest och XGBoost jämfördes i samband med two semi-vägledda maskininlärnings metoder, TSVM och QN-S3VM, som basnivå. De semi-vägledda djupinlärnings metoderna testades med två olika nätverk, en 3D och 1D CNN. För att kunna använda konsekvens regularisering, flera nya data augmenterings metoder adapterades till HSI data. Nuvarande metoder är få och förlitar sig på att datan har etiketter, vilket inte är tillgängligt i detta scenariot. Data augmenterings metoderna som presenterades visade sig vara användbara och adapterades i ett automatiskt augmenteringssystem. Noggrannheten av basnivå och de semi-vägledda metoderna visade att SVM var bäst i alla fall. Ingen av de semi-vägledda metoderna visade konsekvent bättre resultat än deras vägledda motsvarigheter.
Books on the topic "Deep supervised learning"
Pal, Sujit, Amita Kapoor, Antonio Gulli, and François Chollet. Deep Learning with TensorFlow and Keras: Build and Deploy Supervised, Unsupervised, Deep, and Reinforcement Learning Models. Packt Publishing, Limited, 2022.
Sawarkar, Kunal, and Dheeraj Arremsetty. Deep Learning with PyTorch Lightning: Build and Train High-Performance Artificial Intelligence and Self-Supervised Models Using Python. Packt Publishing, Limited, 2021.
Rajakumar, P. S., S. Geetha, and T. V. Ananthan. Fundamentals of Image Processing. Jupiter Publications Consortium, 2023. http://dx.doi.org/10.47715/jpc.b.978-93-91303-80-8.
Book chapters on the topic "Deep supervised learning"
Jo, Taeho. "Supervised Learning." In Deep Learning Foundations, 29–55. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-32879-4_2.
Cerulli, Giovanni. "Deep Learning." In Fundamentals of Supervised Machine Learning, 323–64. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-41337-7_7.
Ros, Frederic, and Rabia Riad. "Deep clustering techniques." In Unsupervised and Semi-Supervised Learning, 151–58. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-48743-9_9.
Ros, Frederic, and Rabia Riad. "Deep Feature selection." In Unsupervised and Semi-Supervised Learning, 131–49. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-48743-9_8.
Wani, M. Arif, Farooq Ahmad Bhat, Saduf Afzal, and Asif Iqbal Khan. "Supervised Deep Learning Architectures." In Studies in Big Data, 53–75. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-6794-6_4.
Shanthamallu, Uday Shankar, and Andreas Spanias. "Semi-Supervised Learning." In Machine and Deep Learning Algorithms and Applications, 33–41. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-03758-0_4.
Ros, Frederic, and Rabia Riad. "Deep clustering techniques: synthesis." In Unsupervised and Semi-Supervised Learning, 243–52. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-48743-9_13.
Ros, Frederic, and Rabia Riad. "Chapter 6: Deep learning architectures." In Unsupervised and Semi-Supervised Learning, 81–103. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-48743-9_6.
Pratama, Mahardhika, Andri Ashfahani, and Edwin Lughofer. "Unsupervised Continual Learning via Self-adaptive Deep Clustering Approach." In Continual Semi-Supervised Learning, 48–61. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-17587-9_4.
Zemmari, Akka, and Jenny Benois-Pineau. "Supervised Learning Problem Formulation." In Deep Learning in Mining of Visual Content, 5–11. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-34376-7_2.
Conference papers on the topic "Deep supervised learning"
Hailat, Zeyad, Artem Komarichev, and Xue-Wen Chen. "Deep Semi-Supervised Learning." In 2018 24th International Conference on Pattern Recognition (ICPR). IEEE, 2018. http://dx.doi.org/10.1109/icpr.2018.8546327.
Baucum, Michael, Daniel Belotto, Sayre Jeannet, Eric Savage, Prannoy Mupparaju, and Carlos W. Morato. "Semi-supervised Deep Continuous Learning." In the 2017 International Conference. New York, New York, USA: ACM Press, 2017. http://dx.doi.org/10.1145/3094243.3094247.
Rottmann, Matthias, Karsten Kahl, and Hanno Gottschalk. "Deep Bayesian Active Semi-Supervised Learning." In 2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA). IEEE, 2018. http://dx.doi.org/10.1109/icmla.2018.00031.
Weston, Jason, Frédéric Ratle, and Ronan Collobert. "Deep learning via semi-supervised embedding." In the 25th international conference. New York, New York, USA: ACM Press, 2008. http://dx.doi.org/10.1145/1390156.1390303.
Pathapati, Aravind Ganesh, Nakka Chakradhar, P. N. V. S. S. K. Havish, Sai Ashish Somayajula, and Saidhiraj Amuru. "Supervised Deep Learning for MIMO Precoding." In 2020 IEEE 3rd 5G World Forum (5GWF). IEEE, 2020. http://dx.doi.org/10.1109/5gwf49715.2020.9221261.
Zhang, Junbo, Guangjian Tian, Yadong Mu, and Wei Fan. "Supervised deep learning with auxiliary networks." In KDD '14: The 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York, NY, USA: ACM, 2014. http://dx.doi.org/10.1145/2623330.2623618.
Alves de Lima, Bruno Vicente, Adriao Duarte Doria Neto, Lucia Emilia Soares Silva, Vinicius Ponte Machado, and Joao Guilherme Cavalcanti Costa. "Semi-supervised Classification Using Deep Learning." In 2019 8th Brazilian Conference on Intelligent Systems (BRACIS). IEEE, 2019. http://dx.doi.org/10.1109/bracis.2019.00158.
Reader, Andrew J. "Self-supervised and supervised deep learning for PET image reconstruction." In International Workshop on Machine Learning and Quantum Computing Applications in Medicine and Physics: WMLQ2022. AIP Publishing, 2024. http://dx.doi.org/10.1063/5.0203321.
Chen, Dong-Dong, Wei Wang, Wei Gao, and Zhi-Hua Zhou. "Tri-net for Semi-Supervised Deep Learning." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/278.
Chen, Hung-Yu, and Jen-Tzung Chien. "Deep semi-supervised learning for domain adaptation." In 2015 IEEE 25th International Workshop on Machine Learning for Signal Processing (MLSP). IEEE, 2015. http://dx.doi.org/10.1109/mlsp.2015.7324325.
Reports on the topic "Deep supervised learning"
Lin, Youzuo. Physics-guided Machine Learning: from Supervised Deep Networks to Unsupervised Lightweight Models. Office of Scientific and Technical Information (OSTI), August 2023. http://dx.doi.org/10.2172/1994110.
Tran, Anh, Theron Rodgers, and Timothy Wildey. Reification of latent microstructures: On supervised, unsupervised, and semi-supervised deep learning applications for microstructures in materials informatics. Office of Scientific and Technical Information (OSTI), September 2020. http://dx.doi.org/10.2172/1673174.
Full textMbani, Benson, Timm Schoening, and Jens Greinert. Automated and Integrated Seafloor Classification Workflow (AI-SCW). GEOMAR, May 2023. http://dx.doi.org/10.3289/sw_2_2023.