Theses on the topic "Learning algorithms"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 theses for your research on the topic "Learning algorithms".
Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.
Explore theses on a wide variety of disciplines and organise your bibliography correctly.
Andersson, Viktor. "Machine Learning in Logistics: Machine Learning Algorithms : Data Preprocessing and Machine Learning Algorithms". Thesis, Luleå tekniska universitet, Datavetenskap, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-64721.
Full text
Data Ductus is a Swedish IT consulting company whose customer base ranges from small startups to large, already established companies. The company has grown steadily since the 1980s and has offices in both Sweden and the USA. Using machine learning, this project presents a possible solution to the errors, caused by the human factor, that can arise in logistics operations. A way of preprocessing data before it is applied to a machine learning algorithm, as well as a couple of algorithms to use, will be presented.
Ameur, Foued ben Fredj. "Space-bounded learning algorithms /". Paderborn : Heinz Nixdorf Inst, 1996. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=007171235&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.
Full text
Hsu, Daniel Joseph. "Algorithms for active learning". Diss., [La Jolla] : University of California, San Diego, 2010. http://wwwlib.umi.com/cr/ucsd/fullcit?p3404377.
Texto completoTitle from first page of PDF file (viewed June 10, 2010). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (leaves 97-101).
CESARI, TOMMASO RENATO. "ALGORITHMS, LEARNING, AND OPTIMIZATION". Doctoral thesis, Università degli Studi di Milano, 2020. http://hdl.handle.net/2434/699354.
Full text
Janagam, Anirudh and Saddam Hossen. "Analysis of Network Intrusion Detection System with Machine Learning Algorithms (Deep Reinforcement Learning Algorithm)". Thesis, Blekinge Tekniska Högskola, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-17126.
Full text
Sentenac, Flore. "Learning and Algorithms for Online Matching". Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAG005.
Full text
This thesis focuses mainly on online matching problems, where sets of resources are sequentially allocated to demand streams. We treat them both from an online learning and a competitive analysis perspective, always in the case when the input is stochastic. On the online learning side, we first study how the specific matching structure influences learning, then how carry-over effects in the system affect its performance. On the competitive analysis side, we study the online matching problem in specific classes of random graphs, in an effort to move away from worst-case analysis. Finally, we explore how learning can be leveraged in the scheduling problem.
Thompson, Simon Giles. "Distributed boosting algorithms". Thesis, University of Portsmouth, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.285529.
Full text
Moon, Gordon Euhyun. "Parallel Algorithms for Machine Learning". The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1561980674706558.
Full text
Atiya, Amir Abu-Mostafa Yaser S. "Learning algorithms for neural networks /". Diss., Pasadena, Calif. : California Institute of Technology, 1991. http://resolver.caltech.edu/CaltechETD:etd-09232005-083502.
Full text
Sanchez Merchante, Luis Francisco. "Learning algorithms for sparse classification". PhD thesis, Université de Technologie de Compiègne, 2013. http://tel.archives-ouvertes.fr/tel-00868847.
Full text
Dalla Libera, Alberto. "Learning algorithms for robotics systems". Doctoral thesis, Università degli studi di Padova, 2019. http://hdl.handle.net/11577/3422839.
Full text
Addanki, Ravichandra. "Learning generalizable device placement algorithms for distributed machine learning". Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122746.
Texto completoCataloged from PDF version of thesis.
Includes bibliographical references (pages 47-50).
We present Placeto, a reinforcement learning (RL) approach to efficiently find device placements for distributed neural network training. Unlike prior approaches that only find a device placement for a specific computation graph, Placeto can learn generalizable device placement policies that can be applied to any graph. We propose two key ideas in our approach: (1) we represent the policy as performing iterative placement improvements, rather than outputting a placement in one shot; (2) we use graph embeddings to capture relevant information about the structure of the computation graph, without relying on node labels for indexing. These ideas allow Placeto to train efficiently and generalize to unseen graphs. Our experiments show that Placeto requires up to 6.1x fewer training steps to find placements that are on par with or better than the best placements found by prior approaches. Moreover, Placeto is able to learn a generalizable placement policy for any given family of graphs, which can then be used without any retraining to predict optimized placements for unseen graphs from the same family. This eliminates the large overhead incurred by prior RL approaches, whose lack of generalizability necessitates re-training from scratch every time a new graph is to be placed.
by Ravichandra Addanki.
S.M.
S.M. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
Philips, Petra Camilla. "Data-Dependent Analysis of Learning Algorithms". The Australian National University, Research School of Information Sciences and Engineering, 2005. http://thesis.anu.edu.au./public/adt-ANU20050901.204523.
Full text
Hadjifaradji, Saeed. "Learning algorithms for restricted neural networks". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0016/NQ48102.pdf.
Full text
Amann, Notker. "Optimal algorithms for iterative learning control". Thesis, University of Exeter, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.337751.
Full text
Nambiar, Raghu. "Learning algorithms for adaptive digital filtering". Thesis, Durham University, 1993. http://etheses.dur.ac.uk/5544/.
Full text
Hatzikos, Vasilis E. "Genetic algorithms into iterative learning control". Thesis, University of Sheffield, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.408314.
Full text
Huang, Qingqing, Ph. D., Massachusetts Institute of Technology. "Efficient algorithms for learning mixture models". Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/107337.
Full text
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 261-274).
We study statistical learning problems for a class of probabilistic models called mixture models. Mixture models are usually used to model settings where the observed data consist of different sub-populations, yet we only have access to a limited number of samples of the pooled data. The class includes many widely used models such as Gaussian mixture models, Hidden Markov Models, and topic models. We focus on parametric learning: given unlabeled data generated according to a mixture model, infer the parameters of the underlying model. The hierarchical structure of the probabilistic model makes the likelihood function non-convex in the model parameters, imposing great challenges in finding statistically efficient and computationally efficient solutions. In the first part we start with a simple, yet general setup of mixture model. We study the problem of estimating a low-rank M x M matrix which represents a discrete distribution over M^2 outcomes, given access to samples drawn according to the distribution. We propose a learning algorithm that accurately recovers the underlying matrix using Θ(M) samples, which immediately leads to improved learning algorithms for various mixture models including topic models and HMMs. We show that this linear sample complexity is actually optimal in the min-max sense. There are "hard" mixture models for which worst-case lower bounds on sample complexity scale exponentially in the model dimensions. In the second part, we study Gaussian mixture models and HMMs. We propose new learning algorithms with polynomial runtime. We leverage techniques in probabilistic analysis to prove that worst-case instances are actually rare, and our algorithm can efficiently handle all the non-worst-case instances. In the third part, we study the problem of super-resolution. Despite the lower bound for any deterministic algorithm, we propose a new randomized algorithm whose complexity scales only quadratically in all dimensions, and show that it can handle any instance with high probability over the randomization.
by Qingqing Huang.
Ph. D.
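The low-rank estimation problem described in the abstract above can be illustrated with a small sketch. This is a hypothetical toy (empirical counts followed by a truncated-SVD projection), not the estimator developed in the thesis; the function name and the 4 x 4 example are invented for illustration.

```python
import numpy as np

def estimate_low_rank_distribution(samples, M, k):
    """Naive sketch: estimate a rank-k M x M distribution matrix from (i, j)
    samples by forming empirical frequencies and projecting via truncated SVD."""
    counts = np.zeros((M, M))
    for i, j in samples:
        counts[i, j] += 1
    P_hat = counts / counts.sum()          # empirical joint distribution
    U, s, Vt = np.linalg.svd(P_hat)
    s[k:] = 0                              # keep only the top-k singular values
    P_k = U @ np.diag(s) @ Vt
    P_k = np.clip(P_k, 0, None)            # project back onto the simplex
    return P_k / P_k.sum()

# toy usage: a rank-1 true distribution over 4 x 4 outcomes
rng = np.random.default_rng(0)
p = np.array([0.4, 0.3, 0.2, 0.1])
true_P = np.outer(p, p)                    # independent marginals -> rank 1
flat = rng.choice(16, size=5000, p=true_P.ravel())
samples = [(f // 4, f % 4) for f in flat]
est = estimate_low_rank_distribution(samples, M=4, k=1)
print(np.abs(est - true_P).sum())          # small total absolute error
```

The rank constraint is what lets the sample complexity scale with M rather than with all M^2 entries; the thesis's actual algorithm and its Θ(M) guarantee are, of course, more sophisticated than this truncation.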
Quattoni, Ariadna J. "Transfer learning algorithms for image classification". Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/53294.
Full text
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 124-128).
An ideal image classifier should be able to exploit complex high-dimensional feature representations even when only a few labeled examples are available for training. To achieve this goal we develop transfer learning algorithms that: 1) leverage unlabeled data annotated with meta-data and 2) exploit labeled data from related categories. In the first part of this thesis we show how to use the structure learning framework (Ando and Zhang, 2005) to learn efficient image representations from unlabeled images annotated with meta-data. In the second part we present a joint sparsity transfer algorithm for image classification. Our algorithm is based on the observation that related categories might be learnable using only a small subset of shared relevant features. To find these features we propose to train classifiers jointly with a shared regularization penalty that minimizes the total number of features involved in the approximation. To solve the joint sparse approximation problem we develop an optimization algorithm whose time and memory complexity is O(n log n), where n is the number of parameters of the joint model. We conduct experiments on news-topic and keyword prediction image classification tasks. We test our method in two settings, transfer learning and multitask learning, and show that in both cases leveraging knowledge from related categories can improve performance when training data per category is scarce. Furthermore, our results demonstrate that our model can successfully recover jointly sparse solutions.
by Ariadna Quattoni.
Ph.D.
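The shared regularization penalty in the abstract above belongs to the family of joint-sparsity (group-type) penalties. As an illustration of the general idea only, not the thesis's algorithm or its O(n log n) solver, here is the proximal step of an L1/L2 penalty that zeroes out a feature's weights across all tasks at once; the function name and numbers are invented for the example.

```python
import numpy as np

def joint_sparse_prox(W, lam):
    """Proximal operator of lam * sum over features of ||W[f, :]||_2.

    W has one row per feature and one column per task.  A row whose
    across-task norm falls below lam is zeroed out entirely, so the
    feature is dropped for every task at once -- the joint-sparsity effect.
    """
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.clip(1 - lam / np.maximum(norms, 1e-12), 0, None)
    return W * scale

W = np.array([[0.9, 1.1],     # feature useful for both tasks: survives
              [0.05, -0.04],  # weak feature: removed for both tasks
              [0.7, 0.0]])    # strong in one task only: survives, shrunk
W_sparse = joint_sparse_prox(W, lam=0.2)
print(W_sparse)
```

Shrinking whole rows rather than individual entries is what makes the selected feature set shared across the related categories.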
Zavaroni, Sofia. "Modulation Classification with Deep Learning Algorithms". Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2022.
Find full text
Ellis, Kevin Ph. D. (Kevin M.), Massachusetts Institute of Technology. "Algorithms for learning to induce programs". Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/130184.
Full text
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 213-224).
The future of machine learning should have a knowledge representation that supports, at a minimum, several features: expressivity, interpretability, the potential for reuse by both humans and machines, and sample-efficient generalization. Here we argue that programs (i.e., source code) are a knowledge representation which can contribute to the project of capturing these elements of intelligence. This research direction, however, requires new program synthesis algorithms which can induce programs solving a range of AI tasks. This program induction challenge confronts two primary obstacles: the space of all programs is infinite, so we need a strong inductive bias or prior to steer us toward the correct programs; and even if we have that prior, effectively searching through the vast combinatorial space of all programs is generally intractable. We introduce algorithms that learn to induce programs, with the goal of addressing these two primary obstacles. Focusing on case studies in vision, computational linguistics, and learning-to-learn, we develop an algorithmic toolkit for learning inductive biases over programs as well as learning to search for programs, drawing on probabilistic, neural, and symbolic methods. Together this toolkit suggests ways in which program induction can contribute to AI, and how we can use learning to improve program synthesis technologies.
by Kevin Ellis.
Ph. D. in Cognitive Science
Ph. D. in Cognitive Science, Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences
Ramakrishnan, Naveen. "Distributed Learning Algorithms for Sensor Networks". The Ohio State University, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=osu1284991632.
Full text
Pratap, Amrit Abu-Mostafa Yaser S. "Adaptive learning algorithms and data cloning /". Diss., Pasadena, Calif. : Caltech, 2008. http://resolver.caltech.edu/CaltechETD:etd-05292008-231048.
Full text
Lee, Jun won. "Relationships Among Learning Algorithms and Tasks". BYU ScholarsArchive, 2011. https://scholarsarchive.byu.edu/etd/2478.
Full text
Gupta, Pramod. "Robust clustering algorithms". Thesis, Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/39553.
Full text
Dong, Lin. "A Comparison of Multi-instance Learning Algorithms". The University of Waikato, 2006. http://hdl.handle.net/10289/2453.
Full text
Golea, Mostefa. "On efficient learning algorithms for neural networks". Thesis, University of Ottawa (Canada), 1993. http://hdl.handle.net/10393/6508.
Full text
Agache, Mariana. "Families of estimator-based stochastic learning algorithms". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0027/MQ52379.pdf.
Full text
Mitchell, Brian. "Prepositional phrase attachment using machine learning algorithms". Thesis, University of Sheffield, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.412729.
Full text
Pasteris, S. U. "Efficient algorithms for online learning over graphs". Thesis, University College London (University of London), 2016. http://discovery.ucl.ac.uk/1516210/.
Full text
Li, Xia. "Travel time prediction using ensemble learning algorithms". Thesis, University of Nottingham, 2018. http://eprints.nottingham.ac.uk/53358/.
Full text
Aslam, Javed A. "Noise tolerant algorithms for learning and searching". Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/36533.
Texto completoIncludes bibliographical references (p. 109-112).
by Javed Alexander Aslam.
Ph.D.
Sloan, Robert Hal. "Computational learning theory : new models and algorithms". Thesis, Massachusetts Institute of Technology, 1989. http://hdl.handle.net/1721.1/38339.
Texto completoIncludes bibliographical references (leaves 116-120).
by Robert Hal Sloan.
Ph.D.
Betke, Margrit. "Learning and vision algorithms for robot navigation". Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/11402.
Texto completoSonntag, Dag. "Chain Graphs : Interpretations, Expressiveness and Learning Algorithms". Doctoral thesis, Linköpings universitet, Databas och informationsteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-125921.
Texto completoJohansson, Samuel y Karol Wojtulewicz. "Machine learning algorithms in a distributed context". Thesis, Linköpings universitet, Institutionen för datavetenskap, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-148920.
Texto completoShen, Chenyang. "Regularized models and algorithms for machine learning". HKBU Institutional Repository, 2015. https://repository.hkbu.edu.hk/etd_oa/195.
Texto completoChoudhury, A. "Fast machine learning algorithms for large data". Thesis, University of Southampton, 2002. https://eprints.soton.ac.uk/45907/.
Texto completoKarlsson, Daniel. "Hyperparameter optimisation using Q-learning based algorithms". Thesis, Karlstads universitet, Fakulteten för hälsa, natur- och teknikvetenskap (from 2013), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-78096.
Full text
Machine learning algorithms have many areas of application, both academic and industrial. Example applications are the classification of diffraction patterns in materials science and the classification of properties of chemical compounds in the pharmaceutical industry. For these algorithms to perform well they need to be optimised. Part of the optimisation happens during training, but there are components that cannot be trained. These hyperparameters must be tuned separately. The focus of this work was the optimisation of hyperparameters for classification algorithms based on convolutional neural networks. The aim of the thesis was to investigate the possibility of using reinforcement learning algorithms, primarily Q-learning, as the optimising algorithm. Three different algorithms were investigated: Q-learning, double Q-learning, and an algorithm inspired by Q-learning that was developed during the course of the work. The algorithms were evaluated on different test problems and compared against results achieved with a random search of the hyperparameter space, which is one of the more common methods of optimising this kind of algorithm. All three algorithms showed some form of learning, but only the Q-learning-inspired algorithm performed better than the random search. An iterative implementation of the Q-learning-inspired algorithm was also developed. The iterative method allowed the available hyperparameter space to be refined between iterations. This brought further improvements in the results, indicating that in some cases the computation time could be reduced by up to 40% compared with the random search, with the same or better results.
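The core idea in the abstract above, treating hyperparameter choice as a reinforcement learning problem, can be sketched with a minimal single-state Q-learning loop. This toy is illustrative only: the learning-rate grid, the stand-in validation_score function, and all constants are assumptions, and the thesis's actual algorithms (including the double and Q-learning-inspired variants) are more involved.

```python
import random

# hypothetical discrete grid for one hyperparameter (the learning rate)
LEARNING_RATES = [1.0, 0.1, 0.01]

def validation_score(lr):
    """Stand-in for 'train a CNN with this lr, return validation accuracy'.
    In this toy problem, lr = 0.1 is the best setting."""
    base = {1.0: 0.5, 0.1: 0.9, 0.01: 0.7}[lr]
    return base + random.uniform(-0.05, 0.05)

def q_learn_hyperparameter(episodes=300, alpha=0.1, eps=0.3, seed=0):
    """Single-state (bandit-style) Q-learning over the grid:
    Q[a] += alpha * (reward - Q[a]), with epsilon-greedy exploration."""
    random.seed(seed)
    Q = {lr: 0.0 for lr in LEARNING_RATES}
    for _ in range(episodes):
        if random.random() < eps:                 # explore a random setting
            lr = random.choice(LEARNING_RATES)
        else:                                     # exploit the current estimate
            lr = max(Q, key=Q.get)
        reward = validation_score(lr)
        Q[lr] += alpha * (reward - Q[lr])         # no next state, no bootstrap
    return Q

Q = q_learn_hyperparameter()
best = max(Q, key=Q.get)
print(best, Q[best])
```

The iterative refinement described in the abstract would correspond to shrinking LEARNING_RATES around the current best setting between runs of this loop.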
Westerlund, Fredrik. "CREDIT CARD FRAUD DETECTION (Machine learning algorithms)". Thesis, Umeå universitet, Statistik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-136031.
Full text
Valiveti, Radhakrishna. Carleton University Dissertation, Engineering, Electrical. "Learning algorithms for data retrieval and storage". Ottawa, 1990.
Find full text
Agache, Mariana. Carleton University Dissertation, Computer Science. "Families of estimator-based stochastic learning algorithms". Ottawa, 2000.
Find full text
Dragone, Paolo. "Coactive Learning Algorithms for Constructive Preference Elicitation". Doctoral thesis, University of Trento, 2019. http://eprints-phd.biblio.unitn.it/3581/1/thesis-final.pdf.
Full text
Li, Xiao. "Regularized adaptation : theory, algorithms, and applications /". Thesis, Connect to this title online; UW restricted, 2007. http://hdl.handle.net/1773/5928.
Full text
Chalup, Stephan Konrad. "Incremental learning with neural networks, evolutionary computation and reinforcement learning algorithms". Thesis, Queensland University of Technology, 2001.
Find full text
Dinh, The Canh. "Distributed Algorithms for Fast and Personalized Federated Learning". Thesis, The University of Sydney, 2023. https://hdl.handle.net/2123/30019.
Full text
Tu, Zhuozhuo. "Towards Robust and Reliable Machine Learning: Theory and Algorithms". Thesis, The University of Sydney, 2022. https://hdl.handle.net/2123/28832.
Full text
Wang, Gang. "Solution path algorithms : an efficient model selection approach /". View abstract or full-text, 2007. http://library.ust.hk/cgi/db/thesis.pl?CSED%202007%20WANGG.
Full text
Si, Si and 斯思. "Cross-domain subspace learning". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2010. http://hub.hku.hk/bib/B44912912.
Full text
Harrington, Edward Francis. "Aspects of online learning /". View thesis entry in Australian Digital Theses Program, 2004. http://thesis.anu.edu.au/public/adt-ANU20060328.160810/index.html.
Full text