Academic literature on the topic 'Machine learning algorithms'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Machine learning algorithms.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Machine learning algorithms"

1

Mahesh, Batta. "Machine Learning Algorithms - A Review." International Journal of Science and Research (IJSR) 9, no. 1 (January 5, 2020): 381–86. http://dx.doi.org/10.21275/art20203995.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Turan, Selin Ceren, and Mehmet Ali Cengiz. "Ensemble Learning Algorithms." Journal of Science and Arts 22, no. 2 (June 30, 2022): 459–70. http://dx.doi.org/10.46939/j.sci.arts-22.2-a18.

Full text
Abstract:
Artificial intelligence is a method that is increasingly becoming widespread in all areas of life and enables machines to imitate human behavior. Machine learning is a subset of artificial intelligence techniques that use statistical methods to enable machines to evolve with experience. As a result of the advancement of technology and developments in the world of science, the interest in and need for machine learning are increasing day by day. Human beings use machine learning techniques in their daily lives without realizing it. This study addresses ensemble learning algorithms, one of the machine learning techniques. The methods used are the Bagging and AdaBoost algorithms, which belong to the family of ensemble learning algorithms. The main purpose of this study is to find the best-performing classifier with the Classification and Regression Trees (CART) base classifier on three different data sets taken from the UCI machine learning repository, and then to obtain, using two different ensemble learning algorithms, ensembles that make this performance better and more consistent. For this purpose, the performance measures of the single base classifier and the ensemble learning algorithms were compared.
APA, Harvard, Vancouver, ISO, and other styles
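The study above pits a single CART classifier against Bagging and AdaBoost ensembles built on the same base learner. A minimal sketch of that comparison, assuming scikit-learn (1.2+ for the `estimator` argument) and using a bundled dataset as a stand-in for the UCI data sets used in the paper:

```python
# Illustrative sketch only: single CART tree vs. Bagging and AdaBoost ensembles,
# mirroring the setup described in the abstract above. The breast-cancer dataset
# is a stand-in for the paper's UCI data sets.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
cart = DecisionTreeClassifier(max_depth=3, random_state=0)  # depth-limited CART base classifier

models = {
    "single CART": cart,
    "Bagging": BaggingClassifier(estimator=cart, n_estimators=100, random_state=0),
    "AdaBoost": AdaBoostClassifier(estimator=cart, n_estimators=100, random_state=0),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10, scoring="accuracy")
    print(f"{name:12s} mean accuracy = {scores.mean():.3f}")
```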
3

Ling, Qingyang. "Machine learning algorithms review." Applied and Computational Engineering 4, no. 1 (June 14, 2023): 91–98. http://dx.doi.org/10.54254/2755-2721/4/20230355.

Full text
Abstract:
Machine learning is a field of study in which the computer can learn by itself without a human explicitly hardcoding the knowledge for it. Its algorithms make up the backbone of the field. This paper aims to study the field of machine learning and its algorithms. It examines different types of machine learning models and introduces their most popular algorithms. The methodology of this paper is a literature review, which examines the most commonly used machine learning algorithms in the current field. Such algorithms include Naïve Bayes, Decision Tree, KNN, and K-Means clustering. Nowadays, machine learning is everywhere, and almost everyone using a technology product is enjoying its convenience. Applications like spam mail classification, image recognition, personalized product recommendations, and natural language processing all use machine learning algorithms. The conclusion is that there is no single algorithm that can solve all problems; the choice of algorithm and model must depend on the specific problem.
APA, Harvard, Vancouver, ISO, and other styles
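The review above names Naïve Bayes, Decision Tree, KNN, and K-Means as the most commonly used algorithms. A minimal sketch, assuming scikit-learn and using the Iris data purely for illustration, that runs the three classifiers and the one clustering method side by side:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Supervised learners: Naive Bayes, Decision Tree, K-Nearest Neighbors
for clf in (GaussianNB(), DecisionTreeClassifier(random_state=0), KNeighborsClassifier(n_neighbors=5)):
    clf.fit(X_train, y_train)
    print(type(clf).__name__, "test accuracy:", round(clf.score(X_test, y_test), 3))

# Unsupervised learner: K-Means groups the samples without using the labels
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("K-Means cluster sizes:", [int((kmeans.labels_ == k).sum()) for k in range(3)])
```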
4

K.M., Umamaheswari. "Road Accident Perusal Using Machine Learning Algorithms." International Journal of Psychosocial Rehabilitation 24, no. 5 (March 31, 2020): 1676–82. http://dx.doi.org/10.37200/ijpr/v24i5/pr201839.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Nair, Prabha Shreeraj. "Analyzing Titanic Disaster using Machine Learning Algorithms." International Journal of Trend in Scientific Research and Development 2, no. 1 (December 31, 2017): 410–16. http://dx.doi.org/10.31142/ijtsrd7003.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Mallika, Madasu, and K. Suresh Babu. "Breast Cancer Prediction using Machine Learning Algorithms." International Journal of Science and Research (IJSR) 12, no. 10 (October 5, 2023): 1235–38. http://dx.doi.org/10.21275/sr231015173828.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Kumar, Nikhil. "Review Paper on Machine Learning Algorithms." International Journal of Scientific Research in Engineering and Management 8, no. 5 (June 2, 2024): 1–5. http://dx.doi.org/10.55041/ijsrem34900.

Full text
Abstract:
This paper comprehensively reviews widely used machine learning algorithms across supervised, unsupervised, and reinforcement learning paradigms. It covers linear models, decision trees, support vector machines, neural networks, clustering techniques, dimensionality reduction methods, and ensemble approaches. For each algorithm, theoretical foundations, mathematical formulations, practical considerations like parameter tuning and computational complexity, and real-world applications across domains like computer vision and finance are discussed. Challenges and limitations such as overfitting and scalability are explored. Recent advancements like deep learning and transfer learning are highlighted. Finally, a comparative analysis evaluating strengths, weaknesses, and suitable problem domains for the algorithms is provided, serving as a guide for effective utilization of machine learning techniques. Keywords: Machine learning, Deep learning, Gradient Descent, Logistic Regression, Support Vector Machine, K-Nearest Neighbor, Predictive analytics.
APA, Harvard, Vancouver, ISO, and other styles
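Of the methods listed in this review, logistic regression trained with gradient descent is the easiest to show end to end. A NumPy-only sketch on synthetic data (learning rate and iteration count are illustrative choices, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary classification data: 200 samples, 2 features
X = rng.normal(size=(200, 2))
true_w = np.array([2.0, -3.0])
y = (X @ true_w + 0.5 + rng.normal(scale=0.5, size=200) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Batch gradient descent on the mean logistic (cross-entropy) loss
w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(2000):
    p = sigmoid(X @ w + b)            # predicted probabilities
    grad_w = X.T @ (p - y) / len(y)   # gradient of the loss w.r.t. the weights
    grad_b = float(np.mean(p - y))    # gradient w.r.t. the bias
    w -= lr * grad_w
    b -= lr * grad_b

accuracy = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print("learned weights:", w.round(2), "bias:", round(b, 2), "training accuracy:", round(float(accuracy), 3))
```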
8

Meena, Munesh, and Ruchi Sehrawat. "Breakdown of Machine Learning Algorithms." Recent Trends in Artificial Intelligence & it's Applications 1, no. 3 (October 16, 2022): 25–29. http://dx.doi.org/10.46610/rtaia.2022.v01i03.005.

Full text
Abstract:
Machine Learning (ML) is a technology that can revolutionize the world. It is based on Artificial Intelligence (AI) and can predict outcomes from previously seen data without being explicitly programmed. Machine learning is a subset of artificial intelligence: a machine can automatically learn from data and get better at what it does. If additional data can be gathered to help a machine perform better, it can learn. Machine learning is an emerging technology that allows computers to learn from historical data and predict outcomes. Machine learning is important today because it makes our work easier, and many companies use it in their products. Google uses it in Google Assistant, which takes a voice command and returns what we ask for, and in Google Lens, with which we can identify anything just by taking a picture. Netflix uses machine learning to recommend movies and series. Machine learning has a deep effect on our lives; for example, we now have self-driving cars.
APA, Harvard, Vancouver, ISO, and other styles
9

Yu, Binyan, and Yuanzheng Zheng. "Research on algorithms of machine learning." Applied and Computational Engineering 39, no. 1 (February 21, 2024): 277–81. http://dx.doi.org/10.54254/2755-2721/39/20230614.

Full text
Abstract:
Machine learning has endless application possibilities, with many algorithms worth studying in depth. Different algorithms can be flexibly applied in a variety of vertical fields: the most common neural network algorithms power image recognition and computer vision applications such as face recognition, garbage classification, and picture classification, and the recently popular natural language processing and recommendation algorithms for different applications also derive from them. In the field of financial analysis, the decision tree algorithm and its derivatives such as random forest are mainstream, along with support vector machines, naive Bayes, and K-nearest neighbor algorithms, ranging from traditional regression algorithms to the currently popular neural network algorithms. This paper discusses the principles behind these algorithms and lists some corresponding applications, covering linear regression, decision trees, supervised learning, and more. While some of these have been replaced by more powerful and flexible algorithms and methods, studying and understanding these foundational algorithms in depth allows neural network models to be better designed and optimized and gives a better understanding of how they work.
APA, Harvard, Vancouver, ISO, and other styles
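The survey above singles out neural networks for image recognition. A minimal sketch of that use case, assuming scikit-learn and using its bundled 8x8 digit images as a small stand-in for a real vision dataset:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

X, y = load_digits(return_X_y=True)   # 1,797 grayscale 8x8 digit images, flattened to 64 features
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Scale the pixel features, then train a small fully connected network
scaler = StandardScaler().fit(X_train)
mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
mlp.fit(scaler.transform(X_train), y_train)

print("test accuracy:", round(mlp.score(scaler.transform(X_test), y_test), 3))
```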
10

Pandey, Arjoo. "Machine Learning." International Journal for Research in Applied Science and Engineering Technology 11, no. 8 (August 31, 2023): 864–69. http://dx.doi.org/10.22214/ijraset.2023.55224.

Full text
Abstract:
Machine learning refers to the study and development of machine learning algorithms and techniques at a conceptual level, focusing on theoretical foundations, algorithmic design, and mathematical analysis rather than specific implementation details or application domains. It aims to provide a deeper understanding of the fundamental principles and limitations of machine learning, enabling researchers to develop novel algorithms and advance the field. In abstract machine learning, the emphasis is on formalizing and analyzing learning tasks, developing mathematical models for learning processes, and studying the properties and behavior of various learning algorithms. This involves investigating topics such as learning theory, statistical learning, optimization, computational complexity, and generalization. The goal is to develop theoretical frameworks and mathematical tools that help explain why certain algorithms work and how they can be improved. Abstract machine learning also explores fundamental questions related to the theoretical underpinnings of machine learning, such as the trade-offs between bias and variance, the existence of optimal learning algorithms, the sample complexity of learning tasks, and the limits of what can be learned from data. It provides a theoretical foundation for understanding the capabilities and limitations of machine learning algorithms, guiding the development of new algorithms and techniques. Moreover, abstract machine learning serves as a bridge between theory and practice, facilitating the transfer of theoretical insights into practical applications. Theoretical advances in abstract machine learning can inspire new algorithmic approaches and inform the design of real-world machine learning systems. Conversely, practical challenges and observations from real-world applications can motivate and guide theoretical investigations in abstract machine learning. Overall, abstract machine learning plays a crucial role in advancing the field of machine learning by providing rigorous theoretical frameworks, mathematical models, and algorithmic principles that deepen our understanding of learning processes and guide the development of more effective and efficient machine learning algorithms.
APA, Harvard, Vancouver, ISO, and other styles
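One of the theoretical trade-offs this abstract mentions, bias versus variance, can be made concrete with a small simulation: averaged over many training sets, low-degree polynomials underfit (high bias) while high-degree polynomials chase the noise (high variance). A NumPy-only sketch under those illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.sin(2 * np.pi * x)        # the "true" function being learned
x_grid = np.linspace(0.0, 1.0, 200)

for degree in (1, 3, 12):                  # underfit, reasonable fit, overfit
    predictions = []
    for _ in range(200):                   # many independent noisy training sets
        x = rng.uniform(0.0, 1.0, 30)
        y = f(x) + rng.normal(scale=0.3, size=30)
        coeffs = np.polyfit(x, y, degree)  # least-squares polynomial fit
        predictions.append(np.polyval(coeffs, x_grid))
    predictions = np.array(predictions)
    bias_sq = np.mean((predictions.mean(axis=0) - f(x_grid)) ** 2)
    variance = np.mean(predictions.var(axis=0))
    print(f"degree {degree:2d}: bias^2 = {bias_sq:.3f}, variance = {variance:.3f}")
```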

Dissertations / Theses on the topic "Machine learning algorithms"

1

Andersson, Viktor. "Machine Learning in Logistics: Machine Learning Algorithms: Data Preprocessing and Machine Learning Algorithms." Thesis, Luleå tekniska universitet, Datavetenskap, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-64721.

Full text
Abstract:
Data Ductus is a Swedish IT consulting company whose customer base ranges from small startups to large-scale corporations. The company has grown steadily since the 80s and has established offices in both Sweden and the US. With the help of machine learning, this project presents a possible solution to the errors caused by the human factor in the logistics business. A way of preprocessing data before applying it to a machine learning algorithm, as well as a couple of algorithms to use, is presented.
APA, Harvard, Vancouver, ISO, and other styles
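The thesis above pairs a data-preprocessing step with the learning algorithm itself. A minimal sketch of that idea as a scikit-learn pipeline; the column names are hypothetical logistics-style placeholders, not fields from the company's data:

```python
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Hypothetical column names for illustration only
numeric_cols = ["weight_kg", "distance_km"]
categorical_cols = ["carrier", "destination"]

preprocess = ColumnTransformer([
    ("num", Pipeline([("impute", SimpleImputer(strategy="median")),
                      ("scale", StandardScaler())]), numeric_cols),
    ("cat", Pipeline([("impute", SimpleImputer(strategy="most_frequent")),
                      ("onehot", OneHotEncoder(handle_unknown="ignore"))]), categorical_cols),
])

model = Pipeline([("preprocess", preprocess),
                  ("classify", RandomForestClassifier(random_state=0))])

# Usage (with a pandas DataFrame `df` holding the columns above):
# model.fit(df[numeric_cols + categorical_cols], df["delivery_error"])
```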
2

Moon, Gordon Euhyun. "Parallel Algorithms for Machine Learning." The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1561980674706558.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Roderus, Jens, Simon Larson, and Eric Pihl. "Hadoop scalability evaluation for machine learning algorithms on physical machines : Parallel machine learning on computing clusters." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-20102.

Full text
Abstract:
The amount of available data has allowed the field of machine learning to flourish. But with growing data set sizes comes an increase in algorithm execution times. Cluster computing frameworks provide tools for distributing data and processing power on several computer nodes and allows for algorithms to run in feasible time frames when data sets are large. Different cluster computing frameworks come with different trade-offs. In this thesis, the scalability of the execution time of machine learning algorithms running on the Hadoop cluster computing framework is investigated. A recent version of Hadoop and algorithms relevant in industry machine learning, namely K-means, latent Dirichlet allocation and naive Bayes are used in the experiments. This paper provides valuable information to anyone choosing between different cluster computing frameworks. The results show everything from moderate scalability to no scalability at all. These results indicate that Hadoop as a framework may have serious restrictions in how well tasks are actually parallelized. Possible scalability improvements could be achieved by modifying the machine learning library algorithms or by Hadoop parameter tuning.
APA, Harvard, Vancouver, ISO, and other styles
4

Romano, Donato. "Machine Learning algorithms for predictive diagnostics applied to automatic machines." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/22319/.

Full text
Abstract:
This thesis work analyzes the advent of Industry 4.0 within the packaging industry. In particular, the importance of predictive diagnostics is discussed, and several approaches for deriving descriptive models of the problem from data are analyzed and tested. In addition, the main machine learning techniques are applied in order to classify the analyzed data into their respective classes.
APA, Harvard, Vancouver, ISO, and other styles
5

Addanki, Ravichandra. "Learning generalizable device placement algorithms for distributed machine learning." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122746.

Full text
Abstract:
We present Placeto, a reinforcement learning (RL) approach to efficiently find device placements for distributed neural network training. Unlike prior approaches that only find a device placement for a specific computation graph, Placeto can learn generalizable device placement policies that can be applied to any graph. We propose two key ideas in our approach: (1) we represent the policy as performing iterative placement improvements, rather than outputting a placement in one shot; (2) we use graph embeddings to capture relevant information about the structure of the computation graph, without relying on node labels for indexing. These ideas allow Placeto to train efficiently and generalize to unseen graphs. Our experiments show that Placeto requires up to 6.1 x fewer training steps to find placements that are on par with or better than the best placements found by prior approaches. Moreover, Placeto is able to learn a generalizable placement policy for any given family of graphs, which can then be used without any retraining to predict optimized placements for unseen graphs from the same family. This eliminates the large overhead incurred by prior RL approaches whose lack of generalizability necessitates re-training from scratch every time a new graph is to be placed.
APA, Harvard, Vancouver, ISO, and other styles
6

Mitchell, Brian. "Prepositional phrase attachment using machine learning algorithms." Thesis, University of Sheffield, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.412729.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Johansson, Samuel, and Karol Wojtulewicz. "Machine learning algorithms in a distributed context." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-148920.

Full text
Abstract:
Interest in distributed approaches to machine learning has increased significantly in recent years due to continuously increasing data sizes for training machine learning models. In this thesis we describe three popular machine learning algorithms: decision trees, Naive Bayes and support vector machines (SVM) and present existing ways of distributing them. We also perform experiments with decision trees distributed with bagging, boosting and hard data partitioning and evaluate them in terms of performance measures such as accuracy, F1 score and execution time. Our experiments show that the execution time of bagging and boosting increases linearly with the number of workers, and that boosting performs significantly better than bagging and hard data partitioning in terms of F1 score. The hard data partitioning algorithm works well for large datasets, where the execution time decreases as the number of workers increases without any significant loss in accuracy or F1 score, while the algorithm performs poorly on small data, with an increase in execution time and a loss in accuracy and F1 score as the number of workers increases.
APA, Harvard, Vancouver, ISO, and other styles
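The "hard data partitioning" scheme evaluated above can be sketched as: split the training data into disjoint slices, train one tree per slice (one per worker), and combine the trees by majority vote. A simplified, single-process sketch with scikit-learn; no actual cluster distribution is performed:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=20000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

n_workers = 4
trees = []
# Hard partitioning: each simulated "worker" sees a disjoint slice of the training data
for part_X, part_y in zip(np.array_split(X_train, n_workers),
                          np.array_split(y_train, n_workers)):
    trees.append(DecisionTreeClassifier(random_state=0).fit(part_X, part_y))

# Combine the per-partition trees by majority vote
votes = np.stack([tree.predict(X_test) for tree in trees])
y_pred = (votes.mean(axis=0) >= 0.5).astype(int)
print("F1 score of the partitioned ensemble:", round(f1_score(y_test, y_pred), 3))
```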
8

Shen, Chenyang. "Regularized models and algorithms for machine learning." HKBU Institutional Repository, 2015. https://repository.hkbu.edu.hk/etd_oa/195.

Full text
Abstract:
Multi-label learning (ML), multi-instance multi-label learning (MIML), large network learning and random under-sampling systems are four active research topics in machine learning which have been studied intensively recently. So far, there are still a lot of open problems to be figured out in these topics, which attract the worldwide attention of researchers. This thesis mainly focuses on several novel methods designed for these research tasks respectively. The main difference between ML learning and the traditional classification task is that in ML learning, one object can be characterized by several different labels (or classes). One important observation is that the labels received by similar objects in ML data are usually highly correlated with each other. In order to explore this correlation of labels between objects, which might be a key issue in ML learning, we require the resulting label indicator to be low rank. In the proposed model, the nuclear norm, which is a famous convex relaxation of the intractable matrix rank, is applied to the label indicator in order to exploit the underlying correlation in the label domain. Motivated by the idea of spectral clustering, we also incorporate information from the feature domain by constructing a graph among objects based on their features. Then, with partial label information available, we integrate them together into a convex low-rank-based model designed for ML learning. The proposed model can be solved efficiently by using the alternating direction method of multipliers (ADMM). We test the performance on several benchmark ML data sets and make comparisons with state-of-the-art algorithms. The classification results demonstrate the efficiency and effectiveness of the proposed low-rank-based methods. Going one step further, we consider the MIML learning problem, which is usually more complicated than ML learning: besides the possibility of having multiple labels, each object can be described by multiple instances simultaneously, which may significantly increase the size of the data. To handle the MIML learning problem we first propose and develop a novel sparsity-based MIML learning algorithm. Our idea is to formulate and construct a transductive objective function for the label indicator to be learned by using the method of random walk with restart that exploits the relationships among instances and labels of objects, and computes the affinities among the objects. Then sparsity can be introduced in the label indicator of the objective function such that relevant and irrelevant objects with respect to a given class can be distinguished. The resulting sparsity-based MIML model can be given as a constrained convex optimization problem, and it can be solved very efficiently by using the augmented Lagrangian method (ALM). Experimental results on benchmark data have shown that the proposed sparse-MIML algorithm is computationally efficient and effective in label prediction for MIML data. We demonstrate that the performance of the proposed method is better than that of the other tested MIML learning algorithms. Moreover, one big concern of an MIML learning algorithm is computational efficiency, especially when solving classification problems for large data sets. Most of the existing methods for solving MIML problems in the literature may take a long computational time and have a huge storage cost for large MIML data sets. In this thesis, our main aim is to propose and develop an efficient Markov chain based learning algorithm for MIML problems. Our idea is to perform label classification among objects and feature identification iteratively through two Markov chains constructed by using objects and features respectively. The classification of objects can be obtained by using label propagation via training data in the iterative method. Because it is not necessary to compute and store a huge affinity matrix among objects/instances, both the storage and computational time can be reduced significantly. For instance, when we handle an MIML image data set of 10,000 objects and 250,000 instances, the proposed algorithm takes about 71 seconds. Experimental results on some benchmark data sets are also reported to illustrate the effectiveness of the proposed method in one-error, ranking loss, coverage and average precision, and show that it is competitive with the other methods. In addition, we consider module identification from large biological networks. Nowadays, the interactions among different genes, proteins and other small molecules are becoming more and more significant and have been studied intensively. One general way that helps people understand these interactions is to analyze networks constructed from genes/proteins. In particular, module structure as a common property of most biological networks has drawn much attention of researchers from different fields. However, biological networks might be corrupted by noise in the data, which often leads to the misidentification of module structure. Besides, some edges in the network might be removed (or some nodes might be misconnected) when improper parameters are selected, which may also significantly affect the modules identified. In conclusion, the module identification results are sensitive to noise as well as to the parameter selection of the network. In this thesis, we consider employing multiple networks for consistent module detection in order to reduce the effect of noise and parameter settings. Instead of studying different networks separately, our idea is to combine multiple networks together by building them into tensor structure data. Then, given any node as prior label information, tensor-based Markov chains are constructed iteratively for identification of the modules shared by the multiple networks. In addition, the proposed tensor-based Markov chain algorithm is capable of simultaneously evaluating the contribution from each network. It would be useful to measure the consistency of modules in the multiple networks. In the experiments, we test our method on two groups of gene co-expression networks from humans. We also validate the biological meaning of the modules identified by the proposed method. Finally, we introduce random under-sampling techniques with application to X-ray computed tomography (CT). Under-sampling techniques are recognized as powerful tools for reducing the scale of a problem, especially for large data analysis. However, information loss seems to be unavoidable, which inspires different under-sampling strategies for preserving more useful information. Here we focus on under-sampling for the real-world CT reconstruction problem. The main motivation is to reduce the total radiation dose delivered to the patient, which has raised significant clinical concern in CT imaging. We compare two popular regular CT under-sampling strategies with ray random under-sampling. The results support the conclusion that random under-sampling always outperforms regular ones, especially for high down-sampling ratio cases.
Moreover, based on the random ray under-sampling strategy, we propose a novel scatter removal method which further improves performance of ray random under-sampling in CT reconstruction.
APA, Harvard, Vancouver, ISO, and other styles
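The nuclear-norm relaxation used in the first model above is typically handled through its proximal operator, singular value soft-thresholding, inside solvers such as ADMM. A NumPy sketch of just that building block (a toy low-rank recovery, not the thesis's full multi-label model):

```python
import numpy as np

def singular_value_soft_threshold(M, tau):
    """Proximal operator of tau * nuclear norm: shrink all singular values by tau."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(0)
# Noisy observation of a rank-2, label-indicator-like matrix (50 objects, 10 labels)
low_rank = rng.normal(size=(50, 2)) @ rng.normal(size=(2, 10))
noisy = low_rank + 0.3 * rng.normal(size=low_rank.shape)

denoised = singular_value_soft_threshold(noisy, tau=3.0)
print("rank before:", np.linalg.matrix_rank(noisy),
      "rank after:", np.linalg.matrix_rank(denoised))
```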
9

Choudhury, A. "Fast machine learning algorithms for large data." Thesis, University of Southampton, 2002. https://eprints.soton.ac.uk/45907/.

Full text
Abstract:
Traditional machine learning has been largely concerned with developing techniques for small or modestly sized datasets. These techniques fail to scale up well for large data problems, a situation becoming increasingly common in today’s world. This thesis is concerned with the problem of learning with large data. In particular, it considers solving the three basic tasks in machine learning, viz., classification, regression and density approximation. We develop fast memory-efficient algorithmics for kernel machine training and deployment. These include considering efficient preprocessing steps for speeding up existing training algorithms as well as developing a general purpose framework for machine learning using kernel methods. Emphasis is placed on the development of computationally efficient greedy schemes which leverage state-of-the-art techniques from the field of numerical linear algebra. The algorithms presented here underline a basic premise that it is possible to efficiently train a kernel machine on large data, which generalizes well and yet has a sparse expansion leading to improved runtime performance. Empirical evidence is provided in support of this premise throughout the thesis.
APA, Harvard, Vancouver, ISO, and other styles
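A standard way to keep kernel machines tractable on large data, in the same low-rank, linear-algebra spirit as the greedy schemes described above, is the Nyström approximation of the kernel matrix. A short sketch assuming scikit-learn; this shows a generic technique, not the specific algorithms developed in the thesis:

```python
from sklearn.datasets import make_classification
from sklearn.kernel_approximation import Nystroem
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=50000, n_features=30, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Low-rank RBF kernel approximation + a linear hinge-loss classifier:
# cost scales with the number of landmark components rather than n^2 kernel entries.
model = make_pipeline(
    Nystroem(kernel="rbf", gamma=0.1, n_components=300, random_state=0),
    SGDClassifier(loss="hinge", random_state=0),
)
model.fit(X_train, y_train)
print("test accuracy:", round(model.score(X_test, y_test), 3))
```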
10

Westerlund, Fredrik. "Credit Card Fraud Detection (Machine Learning Algorithms)." Thesis, Umeå universitet, Statistik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-136031.

Full text
Abstract:
Credit card fraud is a field with perpetrators performing illegal actions that may affect other individuals or companies negatively. For instance, a criminal can steal credit card information from an account holder and then conduct fraudulent transactions. The activities are a potential contributory factor to how illegal organizations such as terrorists and drug traffickers support themselves financially. Within the machine learning area, there are several methods that possess the ability to detect credit card fraud transactions: supervised learning and unsupervised learning algorithms. This essay investigates the supervised approach, where two algorithms (Hellinger Distance Decision Tree (HDDT) and Random Forest) are evaluated on a real life dataset of 284,807 transactions. Under those circumstances, the main purpose is to develop a “well-functioning” model with a reasonable capacity to categorize transactions as fraudulent or legitimate. As the data is heavily unbalanced, reducing the false-positive rate is also an important part when conducting research in the chosen area. In conclusion, the evaluated algorithms present a fairly similar outcome, where both models have the capability to distinguish the classes from each other. However, the Random Forest approach has a better performance than HDDT in all measures of interest.
APA, Harvard, Vancouver, ISO, and other styles
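A minimal sketch of the Random Forest half of the comparison above, run on a heavily imbalanced synthetic stand-in for the transaction data and reporting the quantities behind the false-positive concern (the HDDT classifier is not available in scikit-learn and is omitted here):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

# Synthetic, highly imbalanced data: roughly 0.5% positive ("fraud") class
X, y = make_classification(n_samples=50000, n_features=20, weights=[0.995, 0.005],
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=0)
clf.fit(X_train, y_train)

tn, fp, fn, tp = confusion_matrix(y_test, clf.predict(X_test)).ravel()
print(f"false-positive rate: {fp / (fp + tn):.4f}   recall on the minority class: {tp / (tp + fn):.3f}")
```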

Books on the topic "Machine learning algorithms"

1

Li, Fuwei, Lifeng Lai, and Shuguang Cui. Machine Learning Algorithms. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-16375-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ayyadevara, V. Kishore. Pro Machine Learning Algorithms. Berkeley, CA: Apress, 2018. http://dx.doi.org/10.1007/978-1-4842-3564-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Hutchinson, Alan. Algorithmic learning. Oxford: Clarendon Press, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Grefenstette, John J., ed. Genetic Algorithms for Machine Learning. Boston, MA: Springer US, 1994. http://dx.doi.org/10.1007/978-1-4615-2740-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Mandal, Jyotsna Kumar, Somnath Mukhopadhyay, Paramartha Dutta, and Kousik Dasgupta, eds. Algorithms in Machine Learning Paradigms. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-1041-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Grefenstette, John J., ed. Genetic algorithms for machine learning. Boston: Kluwer Academic Publishers, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Paliouras, Georgios. Scalability of machine learning algorithms. Manchester: University of Manchester, 1993.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Grefenstette, John J. Genetic Algorithms for Machine Learning. Boston, MA: Springer US, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Ertekin, Şeyda. Algorithms for efficient learning systems: Online and active learning approaches. Saarbrücken: VDM Verlag Dr. Müller, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Mohri, Mehryar. Foundations of machine learning. Cambridge, MA: MIT Press, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Machine learning algorithms"

1

Geetha, T. V., and S. Sendhilkumar. "Classification Algorithms." In Machine Learning, 127–51. Boca Raton: Chapman and Hall/CRC, 2023. http://dx.doi.org/10.1201/9781003290100-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Fernandes de Mello, Rodrigo, and Moacir Antonelli Ponti. "Assessing Supervised Learning Algorithms." In Machine Learning, 129–61. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-94989-5_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Pendyala, Vishnu. "Machine Learning Algorithms." In Veracity of Big Data, 87–118. Berkeley, CA: Apress, 2018. http://dx.doi.org/10.1007/978-1-4842-3633-8_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Panesar, Arjun. "Machine Learning Algorithms." In Machine Learning and AI for Healthcare, 119–88. Berkeley, CA: Apress, 2019. http://dx.doi.org/10.1007/978-1-4842-3799-1_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Panesar, Arjun. "Machine Learning Algorithms." In Machine Learning and AI for Healthcare, 85–144. Berkeley, CA: Apress, 2020. http://dx.doi.org/10.1007/978-1-4842-6537-6_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Zhou, Ding-Xuan. "Machine Learning Algorithms." In Encyclopedia of Applied and Computational Mathematics, 839–41. Berlin, Heidelberg: Springer Berlin Heidelberg, 2015. http://dx.doi.org/10.1007/978-3-540-70529-1_301.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Singh, Rajesh, Anita Gehlot, Mahesh Kumar Prajapat, and Bhupendra Singh. "Machine Learning Algorithms." In Artificial Intelligence in Agriculture, 106–36. London: CRC Press, 2021. http://dx.doi.org/10.1201/9781003245759-11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Somogyi, Zoltán. "Machine Learning Algorithms." In The Application of Artificial Intelligence, 17–86. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-60032-7_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Gupta, Pramod, and Naresh K. Sehgal. "Machine Learning Algorithms." In Introduction to Machine Learning in the Cloud with Python, 23–77. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-71270-9_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Hazzan, Orit, and Koby Mike. "Machine Learning Algorithms." In Guide to Teaching Data Science, 225–34. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-24758-3_15.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Machine learning algorithms"

1

Khan, Rehan Ullah, and Saleh Albahli. "Machine Learning Augmentation." In ACAI 2019: 2019 2nd International Conference on Algorithms, Computing and Artificial Intelligence. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3377713.3377726.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Abdullahi, M. I., G. I. O. Aimufua, and U. A. Muhammad. "Application of Sales Forecasting Model Based on Machine Learning Algorithms." In 28th iSTEAMS Multidisciplinary Research Conference AIUWA The Gambia. Society for Multidisciplinary and Advanced Research Techniques - Creative Research Publishers, 2021. http://dx.doi.org/10.22624/aims/isteams-2021/v28p17.

Full text
Abstract:
Machine learning has been a subject undergoing intense study across many different industries, and fortunately, companies are becoming gradually more aware of the various machine learning approaches to solving their problems. However, to fully harvest the potential of different machine learning models and to achieve efficient results, one needs to have a good understanding of the application of the models and the nature of the data. This paper aims to investigate different approaches to obtaining good results from the machine learning algorithms applied to a given forecasting task. To this end, the paper critically analyzes and investigates the applicability of machine learning algorithms in sales forecasting under dynamic conditions, develops a forecasting model based on the regression model, and evaluates the performance of four machine learning regression algorithms (Random Forest, Extreme Gradient Boosting, Support Vector Machine for Regression, and an Ensemble Model) using a data set from Nigerian retail shops for sales forecasting, based on performance metrics such as R-squared, Root Mean Square Error, Mean Absolute Error and Mean Absolute Percentage Error. Keywords: Sales Forecasting, Model-Based, Machine Learning Algorithms
APA, Harvard, Vancouver, ISO, and other styles
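A compact sketch of the evaluation loop described above: four regressors scored with R-squared, RMSE, MAE, and MAPE on a synthetic stand-in for the retail sales data. Extreme Gradient Boosting is replaced here by scikit-learn's GradientBoostingRegressor so the example has no extra dependency:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor, VotingRegressor
from sklearn.metrics import (mean_absolute_error, mean_absolute_percentage_error,
                             mean_squared_error, r2_score)
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

X, y = make_regression(n_samples=3000, n_features=15, noise=10.0, random_state=0)
y = y - y.min() + 1.0            # shift targets positive so MAPE is well defined
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

base = [("rf", RandomForestRegressor(random_state=0)),
        ("gb", GradientBoostingRegressor(random_state=0)),
        ("svr", make_pipeline(StandardScaler(), SVR(C=10.0)))]
models = dict(base)
models["ensemble"] = VotingRegressor(base)   # simple averaging ensemble of the three

for name, model in models.items():
    pred = model.fit(X_train, y_train).predict(X_test)
    print(f"{name:8s} R2={r2_score(y_test, pred):.3f} "
          f"RMSE={mean_squared_error(y_test, pred) ** 0.5:.2f} "
          f"MAE={mean_absolute_error(y_test, pred):.2f} "
          f"MAPE={mean_absolute_percentage_error(y_test, pred):.3f}")
```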
3

Kunz, Philipp, Ilche Georgievski, and Marco Aiello. "Towards a Framework for Learning of Algorithms: The Case of Learned Comparison Sorts." In Thirty-Third International Joint Conference on Artificial Intelligence {IJCAI-24}. California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/481.

Full text
Abstract:
Designing algorithms is cumbersome and error-prone. This, among other things, has increasingly led to efforts to extend or even replace designing algorithms with machine learning models. While previous research has demonstrated that some machine learning models possess Turing-completeness, the findings are largely theoretical, and solutions for specific algorithmic tasks remain unclear. With this in mind, we investigate the feasibility of learning representations of classical algorithms from data on their execution, enabling their application to different inputs. We propose a novel and general framework for algorithm learning consisting of a model of computation that facilitates algorithm analysis across various levels of abstraction. We formalize the problem of learning an algorithm using an algebraic approach for graph traversal. We apply this framework to comparison sorts and evaluate the inferred machine learning models' performance, demonstrating the applicability of the approach in terms of accuracy and sensitivity.
APA, Harvard, Vancouver, ISO, and other styles
4

Grefenstette, John J. "Genetic algorithms and machine learning." In the sixth annual conference. New York, New York, USA: ACM Press, 1993. http://dx.doi.org/10.1145/168304.168305.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Squillero, Giovanni, and Alberto Tonda. "Evolutionary algorithms and machine learning." In GECCO '20: Genetic and Evolutionary Computation Conference. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3377929.3389863.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Rudin, Cynthia. "Algorithms for interpretable machine learning." In KDD '14: The 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York, NY, USA: ACM, 2014. http://dx.doi.org/10.1145/2623330.2630823.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Shaw, Peter. "Combinatorial Algorithms in Machine Learning." In 2018 First International Conference on Artificial Intelligence for Industries (AI4I). IEEE, 2018. http://dx.doi.org/10.1109/ai4i.2018.8665720.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Kearns, Michael. "Fair Algorithms for Machine Learning." In EC '17: ACM Conference on Economics and Computation. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3033274.3084096.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Ghanta, Sindhu, Sriram Subramanian, Lior Khermosh, Swaminathan Sundararaman, Harshil Shah, Yakov Goldberg, Drew Roselli, and Nisha Talagala. "ML health monitor: taking the pulse of machine learning algorithms in production." In Applications of Machine Learning, edited by Michael E. Zelinski, Tarek M. Taha, Jonathan Howe, Abdul A. Awwal, and Khan M. Iftekharuddin. SPIE, 2019. http://dx.doi.org/10.1117/12.2529598.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Arden, Farel, and Cutifa Safitri. "Hyperparameter Tuning Algorithm Comparison with Machine Learning Algorithms." In 2022 6th International Conference on Information Technology, Information Systems and Electrical Engineering (ICITISEE). IEEE, 2022. http://dx.doi.org/10.1109/icitisee57756.2022.10057630.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Machine learning algorithms"

1

Stepp, Robert E., Bradley L. Whitehall, and Lawrence B. Holder. Toward Intelligent Machine Learning Algorithms. Fort Belvoir, VA: Defense Technical Information Center, May 1988. http://dx.doi.org/10.21236/ada197049.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Alwan, Iktimal, Dennis D. Spencer, and Rafeed Alkawadri. Comparison of Machine Learning Algorithms in Sensorimotor Functional Mapping. Progress in Neurobiology, December 2023. http://dx.doi.org/10.60124/j.pneuro.2023.30.03.

Full text
Abstract:
Objective: To compare the performance of popular machine learning (ML) algorithms in mapping the sensorimotor cortex (SM) and identifying the anterior lip of the central sulcus (CS). Methods: We evaluated support vector machines (SVMs), random forest (RF), decision trees (DT), single layer perceptron (SLP), and multilayer perceptron (MLP) against standard logistic regression (LR) to identify the SM cortex, employing validated features from six minutes of NREM sleep icEEG data and applying standard common hyperparameters and 10-fold cross-validation. Each algorithm was tested using vetted features based on the statistical significance of classical univariate analysis (p<0.05) and an extended set of 17 features representing power/coherence of different frequency bands, entropy, and interelectrode-based distance. The analysis was performed before and after weight adjustment for imbalanced data (w). Results: 7 subjects and 376 contacts were included. Before optimization, ML algorithms performed comparably employing conventional features (median CS accuracy: 0.89, IQR [0.88-0.9]). After optimization, neural networks outperformed the others in terms of accuracy (MLP: 0.86), area under the curve (AUC) (SLPw, MLPw, MLP: 0.91), recall (SLPw: 0.82, MLPw: 0.81), precision (SLPw: 0.84), and F1-scores (SLPw: 0.82). SVM achieved the best specificity performance. Extending the number of features and adjusting the weights improved recall, precision, and F1-scores by 48.27%, 27.15%, and 39.15%, respectively, with gains or no significant losses in specificity and AUC across CS and Function (correlation r=0.71 between the two clinical scenarios in all performance metrics, p<0.001). Interpretation: Computational passive sensorimotor mapping is feasible and reliable. Feature extension and weight adjustments improve the performance and counterbalance the accuracy paradox. Optimized neural networks outperform other ML algorithms even in binary classification tasks. The best-performing models and the MATLAB® routine employed in signal processing are available to the public at (Link 1).
APA, Harvard, Vancouver, ISO, and other styles
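A minimal sketch of the protocol above: several classifiers compared under common hyperparameters, 10-fold cross-validation, and balanced class weights for the imbalanced case. The data here are a synthetic stand-in (the icEEG features themselves are not reproduced), sized to echo the 376 contacts and 17 extended features mentioned in the abstract:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Imbalanced two-class stand-in for sensorimotor-vs-other contact labels
X, y = make_classification(n_samples=376, n_features=17, weights=[0.8, 0.2], random_state=0)

models = {
    "LR":  LogisticRegression(class_weight="balanced", max_iter=1000),
    "SVM": SVC(class_weight="balanced"),
    "RF":  RandomForestClassifier(class_weight="balanced", random_state=0),
    "DT":  DecisionTreeClassifier(class_weight="balanced", random_state=0),
    "MLP": MLPClassifier(max_iter=1000, random_state=0),  # scikit-learn's MLP has no class_weight option
}

for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=10, scoring="roc_auc")
    print(f"{name:4s} mean AUC = {auc.mean():.3f}")
```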
3

Caravelli, Francesco. Towards memristor supremacy with novel machine learning algorithms. Office of Scientific and Technical Information (OSTI), September 2021. http://dx.doi.org/10.2172/1822713.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Dim, Odera, Carlos Soto, Yonggang Cui, Lap-Yan Cheng, Maia Gemmill, Thomas Grice, Joseph Rivers, Warren Stern, and Michael Todosow. Verification of TRISO Fuel Burnup Using Machine Learning Algorithms. Office of Scientific and Technical Information (OSTI), August 2021. http://dx.doi.org/10.2172/1813329.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Varastehpour, Soheil, Hamid Sharifzadeh, and Iman Ardekani. A Comprehensive Review of Deep Learning Algorithms. Unitec ePress, 2021. http://dx.doi.org/10.34074/ocds.092.

Full text
Abstract:
Deep learning algorithms are a subset of machine learning algorithms that aim to explore several levels of the distributed representations from the input data. Recently, many deep learning algorithms have been proposed to solve traditional artificial intelligence problems. In this review paper, some of the up-to-date algorithms of this topic in the field of computer vision and image processing are reviewed. Following this, a brief overview of several different deep learning methods and their recent developments are discussed.
APA, Harvard, Vancouver, ISO, and other styles
6

Waldrop, Lauren, Carl Hart, Nancy Parker, Chris Pettit, and Scotland McIntosh. Utility of machine learning algorithms for natural background photo classification. Cold Regions Research and Engineering Laboratory (U.S.), June 2018. http://dx.doi.org/10.21079/11681/27344.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Grechanuk, Pavel, Michael Rising, and Todd Palmer. Application of Machine Learning Algorithms to Identify Problematic Nuclear Data. Office of Scientific and Technical Information (OSTI), January 2021. http://dx.doi.org/10.2172/1906466.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Bissett, W. P. Optimizing Machine Learning Algorithms For Hyperspectral Very Shallow Water (VSW) Products. Fort Belvoir, VA: Defense Technical Information Center, January 2009. http://dx.doi.org/10.21236/ada531071.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Bissett, W. P. Optimizing Machine Learning Algorithms for Hyperspectral Very Shallow Water (VSW) Products. Fort Belvoir, VA: Defense Technical Information Center, June 2009. http://dx.doi.org/10.21236/ada504929.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Bissett, W. P. Optimizing Machine Learning Algorithms for Hyperspectral Very Shallow Water (VSW) Products. Fort Belvoir, VA: Defense Technical Information Center, January 2008. http://dx.doi.org/10.21236/ada516714.

Full text
APA, Harvard, Vancouver, ISO, and other styles