Journal articles on the topic 'Learning algorithms'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic 'Learning algorithms.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Egorova, Irina Konstantinovna. "BASIC ALGORITHMS LEARNING ALGORITHMS." Economy. Business. Computer science, no. 3 (January 1, 2016): 47–58. http://dx.doi.org/10.19075/2500-2074-2016-3-47-58.

2

Xu, Chenyang, and Benjamin Moseley. "Learning-Augmented Algorithms for Online Steiner Tree." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (June 28, 2022): 8744–52. http://dx.doi.org/10.1609/aaai.v36i8.20854.

Abstract:
This paper considers the recently popular beyond-worst-case algorithm analysis model which integrates machine-learned predictions with online algorithm design. We consider the online Steiner tree problem in this model for both directed and undirected graphs. Steiner tree is known to have strong lower bounds in the online setting and any algorithm's worst-case guarantee is far from desirable. This paper considers algorithms that predict which terminals arrive online. The predictions may be incorrect and the algorithms' performance is parameterized by the number of incorrectly predicted terminals. These guarantees ensure that algorithms break through the online lower bounds with good predictions and that the competitive ratio degrades gracefully as the prediction error grows. We then observe that the theory is predictive of what will occur empirically. We show that, on graphs where terminals are drawn from a distribution, the new online algorithms have strong performance even with modestly correct predictions.
3

Luan, Yuxuan, Junjiang He, Jingmin Yang, Xiaolong Lan, and Geying Yang. "Uniformity-Comprehensive Multiobjective Optimization Evolutionary Algorithm Based on Machine Learning." International Journal of Intelligent Systems 2023 (November 10, 2023): 1–21. http://dx.doi.org/10.1155/2023/1666735.

Abstract:
When solving real-world optimization problems, the uniformity of Pareto fronts is an essential goal in multiobjective optimization problems (MOPs). However, it is a common challenge for many existing multiobjective optimization algorithms due to the skewed distribution of solutions and biases towards specific objective functions. This paper proposes a uniformity-comprehensive multiobjective optimization evolutionary algorithm based on machine learning to address this limitation. Our algorithm utilizes uniform initialization and a self-organizing map (SOM) to enhance population diversity and uniformity. We track the IGD value and use K-means and CNN refinement with crossover and mutation techniques during the evolutionary stages. The superiority of our algorithm in uniformity and objective-function balance was verified through comparative analysis with 13 other algorithms, including eight traditional multiobjective optimization algorithms, three machine learning-based enhanced multiobjective optimization algorithms, and two algorithms with objective initialization improvements. Based on these comprehensive experiments, it has been shown that our algorithm outperforms the other existing algorithms in these areas.
4

Zhang, Pingke, and Jinyu Huang. "APTM: Structurally Informative Network Representation Learning." Frontiers in Science and Engineering 3, no. 11 (November 21, 2023): 5–11. http://dx.doi.org/10.54691/fse.v3i11.5701.

Abstract:
Network representation learning algorithms provide a method to map complex network data into low-dimensional real vectors, aiming to capture and preserve structural information within the network. In recent years, these algorithms have found widespread applications in tasks such as link prediction and node classification in graph data mining. In this work, we propose a novel algorithm based on an adaptive transfer probability matrix. We use a deep neural network, comprising an autoencoder, to encode and reduce the dimensionality of the generated matrix, thereby encoding the intricate structural information of the network into low-dimensional real vectors. We evaluate the algorithm's performance through node classification, and in comparison with mainstream network representation learning algorithms, our proposed algorithm demonstrates favorable results. It outperforms baseline models in terms of micro-F1 scores on three datasets: PPI, Citeseer, and Wiki.
5

Bottou, Léon, and Vladimir Vapnik. "Local Learning Algorithms." Neural Computation 4, no. 6 (November 1992): 888–900. http://dx.doi.org/10.1162/neco.1992.4.6.888.

Abstract:
Very rarely are training data evenly distributed in the input space. Local learning algorithms attempt to locally adjust the capacity of the training system to the properties of the training set in each area of the input space. The family of local learning algorithms contains known methods, like the k-nearest neighbors method (kNN) or the radial basis function networks (RBF), as well as new algorithms. A single analysis models some aspects of these algorithms. In particular, it suggests that neither kNN nor RBF, nor nonlocal classifiers, achieve the best compromise between locality and capacity. A careful control of these parameters in a simple local learning algorithm has provided a performance breakthrough for an optical character recognition problem. Both the error rate and the rejection performance have been significantly improved.
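A minimal sketch of the local-learning idea, assuming scikit-learn and the built-in digits data rather than the paper's OCR setup: a small-capacity model is fitted on only the k training points nearest each query, and k plays the role of the locality/capacity trade-off.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

k = 50  # locality parameter: smaller k means more local, lower-capacity models
nn = NearestNeighbors(n_neighbors=k).fit(X_train)

preds = []
for x in X_test:
    idx = nn.kneighbors(x.reshape(1, -1), return_distance=False)[0]
    labels = y_train[idx]
    if len(set(labels)) == 1:          # all neighbours agree: no local model needed
        preds.append(labels[0])
        continue
    local = LogisticRegression(max_iter=500).fit(X_train[idx], labels)
    preds.append(local.predict(x.reshape(1, -1))[0])

print("local-model accuracy:", np.mean(np.array(preds) == y_test))
```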
6

Coe, James, and Mustafa Atay. "Evaluating Impact of Race in Facial Recognition across Machine Learning and Deep Learning Algorithms." Computers 10, no. 9 (September 10, 2021): 113. http://dx.doi.org/10.3390/computers10090113.

Abstract:
The research aims to evaluate the impact of race in facial recognition across two types of algorithms. We give a general insight into facial recognition and discuss four problems related to facial recognition. We review our system design, development, and architectures and give an in-depth evaluation plan for each type of algorithm, dataset, and a look into the software and its architecture. We thoroughly explain the results and findings of our experimentation and provide analysis for the machine learning algorithms and deep learning algorithms. Concluding the investigation, we compare the results of two kinds of algorithms and compare their accuracy, metrics, miss rates, and performances to observe which algorithms mitigate racial bias the most. We evaluate racial bias across five machine learning algorithms and three deep learning algorithms using racially imbalanced and balanced datasets. We evaluate and compare the accuracy and miss rates between all tested algorithms and report that SVC is the superior machine learning algorithm and VGG16 is the best deep learning algorithm based on our experimental study. Our findings conclude the algorithm that mitigates the bias the most is VGG16, and all our deep learning algorithms outperformed their machine learning counterparts.
7

Mu, Tong, Georgios Theocharous, David Arbour, and Emma Brunskill. "Constraint Sampling Reinforcement Learning: Incorporating Expertise for Faster Learning." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 7 (June 28, 2022): 7841–49. http://dx.doi.org/10.1609/aaai.v36i7.20753.

Abstract:
Online reinforcement learning (RL) algorithms are often difficult to deploy in complex human-facing applications as they may learn slowly and have poor early performance. To address this, we introduce a practical algorithm for incorporating human insight to speed learning. Our algorithm, Constraint Sampling Reinforcement Learning (CSRL), incorporates prior domain knowledge as constraints/restrictions on the RL policy. It takes in multiple potential policy constraints to maintain robustness to misspecification of individual constraints while leveraging helpful ones to learn quickly. Given a base RL learning algorithm (ex. UCRL, DQN, Rainbow) we propose an upper confidence with elimination scheme that leverages the relationship between the constraints, and their observed performance, to adaptively switch among them. We instantiate our algorithm with DQN-type algorithms and UCRL as base algorithms, and evaluate our algorithm in four environments, including three simulators based on real data: recommendations, educational activity sequencing, and HIV treatment sequencing. In all cases, CSRL learns a good policy faster than baselines.
8

Yu, Binyan, and Yuanzheng Zheng. "Research on algorithms of machine learning." Applied and Computational Engineering 39, no. 1 (February 21, 2024): 277–81. http://dx.doi.org/10.54254/2755-2721/39/20230614.

Abstract:
Machine learning has endless application possibilities, with many algorithms worth studying in depth. Different algorithms can be flexibly applied to a variety of vertical fields: the most common neural network algorithms are used for image recognition and computer vision scenarios such as face recognition, garbage classification, and picture classification, and the recently popular natural language processing and recommendation algorithms for different applications are also derived from them. In the field of financial analysis, the decision tree algorithm and its derivative algorithms, such as random forest, are mainstream, alongside support vector machines, naive Bayes, and K-nearest neighbor algorithms, ranging from the traditional regression algorithms to the latest neural network algorithms. This paper discusses the application principles of these algorithms, including linear regression, decision trees, and other supervised learning methods, and lists some corresponding applications. While some have been replaced by more powerful and flexible algorithms and methods, by studying and understanding these foundational algorithms in depth, neural network models can be better designed and optimized, and a better understanding of how they work can be obtained.
9

Sun, Yuqin, Songlei Wang, Dongmei Huang, Yuan Sun, Anduo Hu, and Jinzhong Sun. "A multiple hierarchical clustering ensemble algorithm to recognize clusters arbitrarily shaped." Intelligent Data Analysis 26, no. 5 (September 5, 2022): 1211–28. http://dx.doi.org/10.3233/ida-216112.

Abstract:
As a research hotspot in ensemble learning, clustering ensemble obtains robust and highly accurate algorithms by integrating multiple basic clustering algorithms. Most of the existing clustering ensemble algorithms take linear clustering algorithms as the base clusterings. As a typical unsupervised learning technique, clustering algorithms have difficulties properly defining the accuracy of their findings, making it difficult to significantly enhance the performance of the final algorithm. In this article, the AGglomerative NESting (AGNES) method is used to build the base clusterings, and an integration strategy for combining multiple AGNES clusterings is proposed. The algorithm has three main steps: evaluating the credibility of labels, producing multiple base clusterings, and constructing the relation among clusters. The proposed algorithm builds on the original advantages of AGNES and further compensates for its inability to identify arbitrarily shaped clusters. The proposed algorithm's superiority in clustering performance is established by comparing it with existing clustering algorithms on different datasets.
10

Ling, Qingyang. "Machine learning algorithms review." Applied and Computational Engineering 4, no. 1 (June 14, 2023): 91–98. http://dx.doi.org/10.54254/2755-2721/4/20230355.

Abstract:
Machine learning is a field of study where the computer can learn for itself without a human explicitly hardcoding the knowledge for it. These algorithms make up the backbone of machine learning. This paper aims to study the field of machine learning and its algorithms. It will examine different types of machine learning models and introduce their most popular algorithms. The methodology of this paper is a literature review, which examines the most commonly used machine learning algorithms in the current field. Such algorithms include Naïve Bayes, Decision Tree, KNN, and K-Means clustering. Nowadays, machine learning is everywhere and almost everyone using a technology product is enjoying its convenience. Applications like spam mail classification, image recognition, personalized product recommendations, and natural language processing all use machine learning algorithms. The conclusion is that there is no single algorithm that can solve all the problems. The choice of algorithms and models must depend on the specific problem.
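For reference, a minimal sketch of how the algorithms named in the review can be compared, assuming scikit-learn and the built-in Iris data (not part of the paper itself): the supervised models are scored by cross-validation, while K-Means, being unsupervised, is scored against the known labels.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.metrics import adjusted_rand_score
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Supervised algorithms: 5-fold cross-validated accuracy.
for name, clf in [("Naive Bayes", GaussianNB()),
                  ("Decision Tree", DecisionTreeClassifier(random_state=0)),
                  ("KNN", KNeighborsClassifier(n_neighbors=5))]:
    print(name, cross_val_score(clf, X, y, cv=5).mean())

# Unsupervised algorithm: K-Means, scored against the true labels.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("K-Means ARI:", adjusted_rand_score(y, clusters))
```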
11

Note, Johan, and Maaruf Ali. "Comparative Analysis of Intrusion Detection System Using Machine Learning and Deep Learning Algorithms." Annals of Emerging Technologies in Computing 6, no. 3 (July 1, 2022): 19–36. http://dx.doi.org/10.33166/aetic.2022.03.003.

Abstract:
Attacks against computer networks, “cyber-attacks”, are now commonplace, affecting almost every Internet-connected device on a daily basis. Organisations are now using machine learning and deep learning to thwart these types of attacks for their effectiveness without the need for human intervention. Machine learning offers the biggest advantage in its ability to detect, curtail, prevent, recover from and even deal with untrained types of attacks without being explicitly programmed. This research will show the many different types of algorithms that are employed to fight against the different types of cyber-attacks, which are also explained. The classification algorithms, their implementation, accuracy and testing time are presented. The algorithms employed for this experiment were the Gaussian Naïve-Bayes algorithm, Logistic Regression Algorithm, SVM (Support Vector Machine) Algorithm, Stochastic Gradient Descent Algorithm, Decision Tree Algorithm, Random Forest Algorithm, Gradient Boosting Algorithm, K-Nearest Neighbour Algorithm, ANN (Artificial Neural Network) (here we also employed the Multilevel Perceptron Algorithm), Convolutional Neural Network (CNN) Algorithm and the Recurrent Neural Network (RNN) Algorithm. The study concluded that amongst the various machine learning algorithms, the Logistic Regression and Decision Tree classifiers both took a very short time to be implemented, giving an accuracy of over 90% for malware detection inside various test datasets. The Gaussian Naïve-Bayes classifier, though fast to implement, only gave an accuracy between 51% and 88%. The Multilevel Perceptron, non-linear SVM and Gradient Boosting algorithms all took a very long time to be implemented. The algorithm that performed with the greatest accuracy was the Random Forest Classification algorithm.
12

Christakis, Nicholas, and Dimitris Drikakis. "Unsupervised Learning of Particles Dispersion." Mathematics 11, no. 17 (August 23, 2023): 3637. http://dx.doi.org/10.3390/math11173637.

Abstract:
This paper discusses using unsupervised learning in classifying particle-like dispersion. The problem is relevant to various applications, including virus transmission and atmospheric pollution. The Reduce Uncertainty and Increase Confidence (RUN-ICON) algorithm of unsupervised learning is applied to particle spread classification. The algorithm classifies the particles with higher confidence and lower uncertainty than other algorithms. The algorithm's efficiency remains high even when noise is added to the system. Applying unsupervised learning in conjunction with the RUN-ICON algorithm provides a tool for studying particles' dynamics and their impact on air quality, health, and climate.
13

Zhao, Ji-chun, Shi-hong Liu, and Jun-feng Zhang. "Personalized Distance Learning System based on Sequence Analysis Algorithm." International Journal of Online Engineering (iJOE) 11, no. 7 (August 31, 2015): 33. http://dx.doi.org/10.3991/ijoe.v11i7.4764.

Abstract:
A personalized learning system can provide users with the learning resources most valuable to them through intelligent recommendation models and algorithms. This paper applies classical sequence analysis algorithms, and the PrefixSpan algorithm is validated on data from a distance learning platform. When the minimum support threshold is between 0.003% and 0.004%, the test data show that the algorithm's accuracy rate is relatively stable and the recommendation effect is satisfactory.
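A compact illustration of the support threshold at the heart of PrefixSpan-style sequence mining, written as a simplified recursion over single-item events (plain Python; the paper's platform data and exact implementation are not reproduced here):

```python
def prefixspan(sequences, min_support, prefix=()):
    """Mine frequent sequential patterns from item sequences (simplified PrefixSpan)."""
    patterns = []
    counts = {}
    for seq in sequences:                       # support: at most one count per sequence
        for item in set(seq):
            counts[item] = counts.get(item, 0) + 1
    for item, support in sorted(counts.items()):
        if support < min_support:
            continue
        pattern = prefix + (item,)
        patterns.append((pattern, support))
        # Project the database onto the suffixes after the first occurrence of `item`.
        projected = [seq[seq.index(item) + 1:] for seq in sequences if item in seq]
        patterns += prefixspan([s for s in projected if s], min_support, pattern)
    return patterns

logs = [["intro", "video", "quiz"], ["intro", "quiz"], ["video", "quiz", "exam"]]
print(prefixspan(logs, min_support=2))   # e.g. (('intro',), 2), (('intro', 'quiz'), 2), ...
```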
14

Donghua Zhang. "Effectiveness Assessment and Optimization of Cross-Language Comparative Learning Algorithms in English Learning." Journal of Electrical Systems 20, no. 6s (April 29, 2024): 368–73. http://dx.doi.org/10.52783/jes.2657.

Abstract:
This study looks into the usefulness of cross-language comparison learning algorithms for enhancing English language acquisition among adult learners from various linguistic origins. In this study, two different algorithms, Algorithm A and Algorithm B, were systematically assessed to determine their impact on two critical components of language learning: listening comprehension and spoken fluency. A group of people including 100 adult learners participated in the study, taking exams customized to measure their proficiency in listening comprehension and speaking fluency. The evaluation indicated significant disparities in the efficacy of the two algorithms. Algorithm A outperformed Algorithm B, with higher mean scores in both comprehension and fluency evaluations. The results highlight the potential of optimized cross-language comparative learning algorithms to improve language learning outcomes, particularly in the context of English language acquisition. These algorithms show promise in meeting the different requirements and preferences of English language learners by leveraging computational approaches and multilingual data to effectively scaffold language learning processes. Furthermore, the study emphasizes the need for additional research to improve algorithmic designs and assess the long-term competence outcomes related to the usage of cross-language comparative learning algorithms. Embracing new technologies provides promising prospects to improve the effectiveness of English language instruction, encourage linguistic variety, and prepare students to succeed in an interconnected global society.
15

Preethi, B. Meena, R. Gowtham, S. Aishvarya, S. Karthick, and D. G. Sabareesh. "Rainfall Prediction using Machine Learning and Deep Learning Algorithms." International Journal of Recent Technology and Engineering (IJRTE) 10, no. 4 (November 30, 2021): 251–54. http://dx.doi.org/10.35940/ijrte.d6611.1110421.

Abstract:
The project entitled “Rainfall Prediction using Machine Learning & Deep Learning Algorithms” is a research project developed in the Python language, with the dataset stored in Microsoft Excel. This prediction uses various machine learning and deep learning algorithms to find which algorithm predicts most accurately. Rainfall prediction can be achieved by using binary classification under data mining. Predicting rainfall is very important for several aspects of a country and can help prevent serious natural disasters. For this prediction, an Artificial Neural Network using forward and backward propagation, Ada Boost, Gradient Boosting and XGBoost algorithms are used in this model for predicting the rainfall. There are five modules used in this project. The Data Analysis Module analyses the datasets and finds the missing values in the dataset. The Data Pre-processing Module includes Data Cleaning, which is the process of filling the missing values in the dataset. The Feature Transformation Module is used to modify the features of the dataset. The Data Mining Module is used to train the models on the dataset using any algorithm for learning the pattern. The Model Evaluation Module is used to measure the performance of the models and finalize the overall best accuracy for the prediction. The dataset used in this prediction is for the country of Australia. The main aim of the project is to compare the various boosting algorithms with the neural network and find the best algorithm among them. This prediction can be a major advantage to farmers, helping them plant the types of crops according to the water they need. Overall, we analyse which algorithm is feasible for qualitatively predicting the rainfall.
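A minimal sketch of the kind of comparison described, assuming scikit-learn and synthetic stand-in weather features rather than the Australian dataset; an xgboost.XGBClassifier would slot in the same way if installed.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier, GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

# Stand-in weather features and a binary rain-tomorrow label; the paper uses an
# Australian weather dataset instead of synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))                     # humidity, pressure, wind, ...
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

models = {
    "ANN (MLP)": MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0),
    "AdaBoost": AdaBoostClassifier(random_state=0),
    "Gradient Boosting": GradientBoostingClassifier(random_state=0),
    # xgboost.XGBClassifier() would be compared the same way if installed.
}
for name, model in models.items():
    print(name, cross_val_score(model, X, y, cv=5, scoring="accuracy").mean())
```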
16

MØLLER, MARTIN. "SUPERVISED LEARNING ON LARGE REDUNDANT TRAINING SETS." International Journal of Neural Systems 04, no. 01 (March 1993): 15–25. http://dx.doi.org/10.1142/s0129065793000031.

Abstract:
Efficient supervised learning on large redundant training sets requires algorithms where the amount of computation involved in preparing each weight update is independent of the training set size. Off-line algorithms like the standard conjugate gradient algorithms do not have this property while on-line algorithms like the stochastic backpropagation algorithm do. A new algorithm combining the good properties of off-line and on-line algorithms is introduced.
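The contrast the paper builds on can be seen in a small sketch (NumPy assumed, logistic regression as the model; this is not Møller's combined algorithm): on a highly redundant training set, an on-line/mini-batch update costs the same no matter how large the set is, while a full-batch gradient touches every redundant copy.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.repeat(rng.normal(size=(100, 5)), 50, axis=0)      # highly redundant training set
y = (X @ np.array([1.0, -2.0, 0.5, 0.0, 1.5]) > 0).astype(float)
w = np.zeros(5)

def grad(w, Xb, yb):
    p = 1.0 / (1.0 + np.exp(-(Xb @ w)))                    # logistic regression gradient
    return Xb.T @ (p - yb) / len(yb)

# On-line / mini-batch updates: cost per weight update is independent of the set size.
for step in range(2000):
    idx = rng.integers(0, len(X), size=32)
    w -= 0.1 * grad(w, X[idx], y[idx])

# An off-line (full-batch) method pays for every redundant copy on each update.
full_gradient = grad(w, X, y)
print("one full-batch gradient touches", len(X), "rows; norm:", np.linalg.norm(full_gradient))
print("trained accuracy:", ((X @ w > 0) == y).mean())
```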
17

Budura, Georgeta, Corina Botoca, and Nicolae Miclău. "Competitive learning algorithms for data clustering." Facta universitatis - series: Electronics and Energetics 19, no. 2 (2006): 261–69. http://dx.doi.org/10.2298/fuee0602261b.

Abstract:
This paper presents and discusses some competitive learning algorithms for data clustering. A new competitive learning algorithm, named the dynamically penalized rival competitive learning algorithm (DPRCL), is introduced and studied. It is a variant of the rival penalized competitive algorithm [1] and it performs appropriate clustering without knowing the number of clusters, by automatically driving the extra seed points far away from the input data set. It does not have the 'dead units' problem. Simulation results, obtained under different conditions, are presented, showing that the performance of the new DPRCL algorithm is better than that of other competitive algorithms.
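For orientation, a sketch of the classic rival penalized competitive learning (RPCL) update that DPRCL builds on (NumPy assumed; the paper's dynamic penalization rule is not reproduced): the winning unit is attracted to each input while the rival, the second-closest unit, is pushed away with a much smaller rate, which drives surplus seed points out of the data.

```python
import numpy as np

rng = np.random.default_rng(0)
data = np.vstack([rng.normal(loc=c, scale=0.3, size=(200, 2))
                  for c in ([0, 0], [3, 3], [0, 3])])        # three true clusters
units = rng.normal(size=(6, 2))                               # deliberately too many seed points

alpha, beta = 0.05, 0.002    # learning rate for the winner, penalty rate for the rival
for x in rng.permutation(data):
    d = np.linalg.norm(units - x, axis=1)
    winner, rival = np.argsort(d)[:2]
    units[winner] += alpha * (x - units[winner])   # attract the winner towards the input
    units[rival]  -= beta * (x - units[rival])     # push the rival (second winner) away

print(np.round(units, 2))    # extra units drift away; roughly three remain near the clusters
```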
18

Crandall, Jacob, Asad Ahmed, and Michael Goodrich. "Learning in Repeated Games with Minimal Information: The Effects of Learning Bias." Proceedings of the AAAI Conference on Artificial Intelligence 25, no. 1 (August 4, 2011): 650–56. http://dx.doi.org/10.1609/aaai.v25i1.7871.

Abstract:
Automated agents for electricity markets, social networks, and other distributed networks must repeatedly interact with other intelligent agents, often without observing associates' actions or payoffs (i.e., minimal information). Given this reality, our goal is to create algorithms that learn effectively in repeated games played with minimal information. As in other applications of machine learning, the success of a learning algorithm in repeated games depends on its learning bias. To better understand what learning biases are most successful, we analyze the learning biases of previously published multi-agent learning (MAL) algorithms. We then describe a new algorithm that adapts a successful learning bias from the literature to minimal information environments. Finally, we compare the performance of this algorithm with ten other algorithms in repeated games played with minimal information.
19

TURAN, SELIN CEREN, and MEHMET ALI CENGIZ. "ENSEMBLE LEARNING ALGORITHMS." Journal of Science and Arts 22, no. 2 (June 30, 2022): 459–70. http://dx.doi.org/10.46939/j.sci.arts-22.2-a18.

Abstract:
Artificial intelligence is a method that is increasingly becoming widespread in all areas of life and enables machines to imitate human behavior. Machine learning is a subset of artificial intelligence techniques that use statistical methods to enable machines to evolve with experience. As a result of the advancement of technology and developments in the world of science, the interest and need for machine learning is increasing day by day. Human beings use machine learning techniques in their daily life without realizing it. In this study, ensemble learning algorithms, one of the machine learning techniques, are mentioned. The methods used in this study are Bagging and Adaboost algorithms which are from Ensemble Learning Algorithms. The main purpose of this study is to find the best performing classifier with the Classification and Regression Trees (CART) basic classifier on three different data sets taken from the UCI machine learning database and then to obtain the ensemble learning algorithms that can make this performance better and more determined using two different ensemble learning algorithms. For this purpose, the performance measures of the single basic classifier and the ensemble learning algorithms were compared
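A minimal sketch of the setup described, assuming scikit-learn, a CART-style decision tree as the base classifier, and one built-in dataset standing in for the paper's UCI data (parameter names follow recent scikit-learn; older versions use base_estimator):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
cart = DecisionTreeClassifier(random_state=0)          # CART-style base classifier

models = {
    "single CART": cart,
    "Bagging(CART)": BaggingClassifier(estimator=cart, n_estimators=100, random_state=0),
    "AdaBoost(CART)": AdaBoostClassifier(estimator=cart, n_estimators=100, random_state=0),
}
for name, model in models.items():
    print(name, cross_val_score(model, X, y, cv=5).mean())
```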
20

Omohundro, Stephen M. "Geometric learning algorithms." Physica D: Nonlinear Phenomena 42, no. 1-3 (June 1990): 307–21. http://dx.doi.org/10.1016/0167-2789(90)90085-4.

21

Smale, Steve, and Yuan Yao. "Online Learning Algorithms." Foundations of Computational Mathematics 6, no. 2 (September 23, 2005): 145–70. http://dx.doi.org/10.1007/s10208-004-0160-z.

22

Javed, Abbas, Hadi Larijani, Ali Ahmadinia, and Rohinton Emmanuel. "RANDOM NEURAL NETWORK LEARNING HEURISTICS." Probability in the Engineering and Informational Sciences 31, no. 4 (May 22, 2017): 436–56. http://dx.doi.org/10.1017/s0269964817000201.

Abstract:
The random neural network (RNN) is a probabilistic queueing theory-based model for artificial neural networks, and it requires the use of optimization algorithms for training. Commonly used gradient descent learning algorithms may get trapped in local minima; evolutionary algorithms can also be used to avoid local minima. Other techniques such as artificial bee colony (ABC), particle swarm optimization (PSO), and differential evolution algorithms also perform well in finding the global minimum but they converge slowly. The sequential quadratic programming (SQP) optimization algorithm can find the optimum neural network weights, but can also get stuck in local minima. We propose to overcome the shortcomings of these various approaches by using hybridized ABC/PSO and SQP. The resulting algorithm is shown to compare favorably with other known techniques for training the RNN. The results show that hybrid ABC learning with SQP outperforms other training algorithms in terms of mean-squared error and normalized root-mean-squared error.
23

K.M., Umamaheswari. "Road Accident Perusal Using Machine Learning Algorithms." International Journal of Psychosocial Rehabilitation 24, no. 5 (March 31, 2020): 1676–82. http://dx.doi.org/10.37200/ijpr/v24i5/pr201839.

24

Nair, Dr Prabha Shreeraj. "Analyzing Titanic Disaster using Machine Learning Algorithms." International Journal of Trend in Scientific Research and Development Volume-2, Issue-1 (December 31, 2017): 410–16. http://dx.doi.org/10.31142/ijtsrd7003.

25

Mallika, Madasu, and K. Suresh Babu. "Breast Cancer Prediction using Machine Learning Algorithms." International Journal of Science and Research (IJSR) 12, no. 10 (October 5, 2023): 1235–38. http://dx.doi.org/10.21275/sr231015173828.

26

Balcan, Maria-Florina, Tuomas Sandholm, and Ellen Vitercik. "Learning to Optimize Computational Resources: Frugal Training with Generalization Guarantees." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 3227–34. http://dx.doi.org/10.1609/aaai.v34i04.5721.

Abstract:
Algorithms typically come with tunable parameters that have a considerable impact on the computational resources they consume. Too often, practitioners must hand-tune the parameters, a tedious and error-prone task. A recent line of research provides algorithms that return nearly-optimal parameters from within a finite set. These algorithms can be used when the parameter space is infinite by providing as input a random sample of parameters. This data-independent discretization, however, might miss pockets of nearly-optimal parameters: prior research has presented scenarios where the only viable parameters lie within an arbitrarily small region. We provide an algorithm that learns a finite set of promising parameters from within an infinite set. Our algorithm can help compile a configuration portfolio, or it can be used to select the input to a configuration algorithm for finite parameter spaces. Our approach applies to any configuration problem that satisfies a simple yet ubiquitous structure: the algorithm's performance is a piecewise constant function of its parameters. Prior research has exhibited this structure in domains from integer programming to clustering.
27

Yue, Yiqun, Yang Zhou, Lijuan Xu, and Dawei Zhao. "Optimal Defense Strategy Selection Algorithm Based on Reinforcement Learning and Opposition-Based Learning." Applied Sciences 12, no. 19 (September 24, 2022): 9594. http://dx.doi.org/10.3390/app12199594.

Abstract:
Industrial control systems (ICS) are facing increasing cybersecurity issues, leading to enormous threats and risks to numerous industrial infrastructures. In order to resist such threats and risks, it is particularly important to scientifically construct security strategies before an attack occurs. The characteristics of evolutionary algorithms are very suitable for finding optimal strategies. However, the more common evolutionary algorithms currently used have relatively large limitations in convergence accuracy and convergence speed, such as PSO, DE, GA, etc. Therefore, this paper proposes a hybrid strategy differential evolution algorithm based on reinforcement learning and opposition-based learning to construct the optimal security strategy. It greatly improved the common problems of evolutionary algorithms. This paper first scans the vulnerabilities of the water distribution system and generates an attack graph. Then, in order to solve the balance problem of cost and benefit, a cost–benefit-based objective function is constructed. Finally, the optimal security strategy set is constructed using the algorithm proposed in this paper. Through experiments, it is found that in the problem of security strategy construction, the algorithm in this paper has obvious advantages in convergence speed and convergence accuracy compared with some other intelligent strategy selection algorithms.
28

Visca, Jorge, and Javier Baliosian. "rl4dtn: Q-Learning for Opportunistic Networks." Future Internet 14, no. 12 (November 23, 2022): 348. http://dx.doi.org/10.3390/fi14120348.

Abstract:
Opportunistic networks are highly stochastic networks supported by sporadic encounters between mobile devices. To route data efficiently, opportunistic-routing algorithms must capitalize on devices’ movement and data transmission patterns. This work proposes a routing method based on reinforcement learning, specifically Q-learning. As usual in routing algorithms, the objective is to select the best candidate devices to put forward once an encounter occurs. However, there is also the possibility of not forwarding if we know that a better candidate might be encountered in the future. This decision is not usually considered in learning schemes because there is no obvious way to represent the temporal evolution of the network. We propose a novel, distributed, and online method that allows learning both the network’s connectivity and its temporal evolution with the help of a temporal graph. This algorithm allows learning to skip forwarding opportunities to capitalize on future encounters. We show that explicitly representing the action for deferring forwarding increases the algorithm’s performance. The algorithm’s scalability is discussed and shown to perform well in a network of considerable size.
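The tabular Q-learning update underlying such a scheme is small; the sketch below uses hypothetical states, actions and rewards, and includes an explicit defer action, but it does not reproduce the paper's temporal-graph construction.

```python
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
Q = defaultdict(float)                      # Q[(state, action)]

def choose(state, actions):
    if random.random() < EPSILON:
        return random.choice(actions)       # explore
    return max(actions, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state, next_actions):
    best_next = max(Q[(next_state, a)] for a in next_actions) if next_actions else 0.0
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# On each encounter, the carrier either forwards to the encountered node or defers,
# waiting for a possibly better future contact. All names below are illustrative.
state = ("carrier_A", "hour_09")
actions = ["forward_to_B", "defer"]
a = choose(state, actions)
update(state, a, reward=5.0 if a == "forward_to_B" else -1.0,
       next_state=("carrier_A", "hour_10"), next_actions=actions)
```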
29

Shivaswamy, Pannaga, and Thorsten Joachims. "Coactive Learning." Journal of Artificial Intelligence Research 53 (May 27, 2015): 1–40. http://dx.doi.org/10.1613/jair.4539.

Abstract:
We propose Coactive Learning as a model of interaction between a learning system and a human user, where both have the common goal of providing results of maximum utility to the user. Interactions in the Coactive Learning model take the following form: at each step, the system (e.g. search engine) receives a context (e.g. query) and predicts an object (e.g. ranking); the user responds by correcting the system if necessary, providing a slightly improved but not necessarily optimal object as feedback. We argue that such preference feedback can be inferred in large quantity from observable user behavior (e.g., clicks in web search), unlike the optimal feedback required in the expert model or the cardinal valuations required for bandit learning. Despite the relaxed requirements for the feedback, we show that it is possible to adapt many existing online learning algorithms to the coactive framework. In particular, we provide algorithms that achieve square root regret in terms of cardinal utility, even though the learning algorithm never observes cardinal utility values directly. We also provide an algorithm with logarithmic regret in the case of strongly convex loss functions. An extensive empirical study demonstrates the applicability of our model and algorithms on a movie recommendation task, as well as ranking for web search.
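The weight update at the core of the linear preference perceptron for coactive learning is w ← w + φ(x, ȳ) − φ(x, y): move towards the user-improved object and away from the presented one. A toy NumPy sketch with randomly generated candidates and a simulated user (the feedback rule and utility vector are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([1.0, -1.0, 0.5])    # hidden user utility (unknown to the learner)
w = np.zeros(3)

for _ in range(200):
    candidates = rng.normal(size=(10, 3))          # context: feature vectors of 10 objects
    y = int(np.argmax(candidates @ w))             # system presents its current best guess
    scores = candidates @ true_w
    better = np.flatnonzero(scores > scores[y])    # user returns any slightly better object
    y_bar = int(better[np.argmin(scores[better])]) if len(better) else y
    w += candidates[y_bar] - candidates[y]         # preference perceptron update

print("learned direction:", np.round(w / np.linalg.norm(w), 2),
      "true direction:", np.round(true_w / np.linalg.norm(true_w), 2))
```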
30

Grzymala-Busse, Jerzy W. "Selected Algorithms of Machine Learning from Examples." Fundamenta Informaticae 18, no. 2-4 (April 1, 1993): 193–207. http://dx.doi.org/10.3233/fi-1993-182-408.

Abstract:
This paper presents and compares two algorithms of machine learning from examples, ID3 and AQ, and one recent algorithm from the same class, called LEM2. All three algorithms are illustrated using the same example. Production rules induced by these algorithms from the well-known Small Soybean Database are presented. Finally, some advantages and disadvantages of these algorithms are shown.
31

Zhou, Zhili, Zihao Yin, Ruohan Meng, and Fei Peng. "Extensible Steganalysis via Continual Learning." Fractal and Fractional 6, no. 12 (November 28, 2022): 708. http://dx.doi.org/10.3390/fractalfract6120708.

Abstract:
To realize secure communication, steganography is usually implemented by embedding secret information into an image selected from a natural image dataset, in which the fractal images have occupied a considerable proportion. To detect those stego-images generated by existing steganographic algorithms, recent steganalysis models usually train a Convolutional Neural Network (CNN) on the dataset consisting of paired cover/stego-images. However, it is inefficient and impractical for those steganalysis models to completely retrain the CNN model to make it effective for detecting a new emerging steganographic algorithm while maintaining the ability to detect the existing steganographic algorithms. Thus, those steganalysis models usually lack dynamic extensibility for new steganographic algorithms, which limits their application in real-world scenarios. To address this issue, we propose an accurate parameter importance estimation (APIE)-based continual learning scheme for steganalysis. In this scheme, when a steganalysis model is trained on a new image dataset generated by a new emerging steganographic algorithm, its network parameters are effectively and efficiently updated with sufficient consideration of their importance evaluated in the previous training process. This scheme can guide the steganalysis model to learn the patterns of the new steganographic algorithm without significantly degrading the detectability against the previous steganographic algorithms. Experimental results demonstrate the proposed scheme has promising extensibility for new emerging steganographic algorithms.
32

Jabar, Abdul Aziz, and Andi Sofyan Anas. "Aplikasi Belajar Interaktif Algoritma Sorting Berbasis Desktop." JTIM : Jurnal Teknologi Informasi dan Multimedia 1, no. 1 (May 10, 2019): 23–29. http://dx.doi.org/10.35746/jtim.v1i1.10.

Abstract:
Making software requires many algorithms to be implemented, and one of the algorithms used is the sorting algorithm. Algorithms are logical sequences of steps, arranged systematically, for solving problems. Sorting algorithms are generally defined as the process of rearranging a series of objects in a certain order, and the purpose of the sorting process is to facilitate searching. A beginner who wants to learn to make software must go through stages in learning algorithms, and in the learning process there are usually obstacles, such as not fully understanding how an algorithm works (especially the sorting algorithm) or the media used as learning material not being able to maximize knowledge of sorting algorithms. An application serving as a learning aid is therefore required, one that can help in learning and understanding the material of the sorting algorithm more optimally. In building the application that teaches how the steps of the sorting algorithm work, the author uses the Luther-Sutopo method, whose stages are concept, design, material collecting, assembly, testing, and distribution. The application contains general material about sorting algorithms and videos that can help maximize understanding of the sorting algorithm material. The result achieved is a desktop-based learning-aid application used for learning specific sorting algorithms in theory.
33

Priyadarshini, Ishaani. "Dendritic Growth Optimization: A Novel Nature-Inspired Algorithm for Real-World Optimization Problems." Biomimetics 9, no. 3 (February 21, 2024): 130. http://dx.doi.org/10.3390/biomimetics9030130.

Abstract:
In numerous scientific disciplines and practical applications, addressing optimization challenges is a common imperative. Nature-inspired optimization algorithms represent a highly valuable and pragmatic approach to tackling these complexities. This paper introduces Dendritic Growth Optimization (DGO), a novel algorithm inspired by natural branching patterns. DGO offers a novel solution for intricate optimization problems and demonstrates its efficiency in exploring diverse solution spaces. The algorithm has been extensively tested with a suite of machine learning algorithms, deep learning algorithms, and metaheuristic algorithms, and the results, both before and after optimization, unequivocally support the proposed algorithm’s feasibility, effectiveness, and generalizability. Through empirical validation using established datasets like diabetes and breast cancer, the algorithm consistently enhances model performance across various domains. Beyond its working and experimental analysis, DGO’s wide-ranging applications in machine learning, logistics, and engineering for solving real-world problems have been highlighted. The study also considers the challenges and practical implications of implementing DGO in multiple scenarios. As optimization remains crucial in research and industry, DGO emerges as a promising avenue for innovation and problem solving.
34

Liang, Yuan. "Fairness-Aware Dynamic Ride-Hailing Matching Based on Reinforcement Learning." Electronics 13, no. 4 (February 16, 2024): 775. http://dx.doi.org/10.3390/electronics13040775.

Abstract:
The core issue in ridesharing is designing reasonable algorithms to match drivers and passengers. The ridesharing matching problem, influenced by various constraints such as weather, traffic, and supply–demand dynamics in real-world scenarios, requires optimization of multiple objectives like total platform revenue and passenger waiting time. Due to its complexity in terms of constraints and optimization goals, the ridesharing matching problem becomes a central issue in the field of mobile transportation. However, the existing research lacks exploration into the fairness of driver income, and some algorithms are not practically applicable in the industrial context. To address these shortcomings, we have developed a fairness-oriented dynamic matching algorithm for ridesharing, effectively optimizing overall platform efficiency (expected total driver income) and income fairness among drivers (entropy of weighted amortization fairness information between drivers). Firstly, we introduced a temporal dependency of matching outcomes on subsequent matches in the scenario setup and used reinforcement learning to predict these temporal dependencies, overcoming the limitation of traditional matching algorithms that rely solely on historical data and current circumstances for order allocation. Then, we implemented a series of optimization solutions, including the introduction of a time window matching model, pruning operations, and metric representation adjustments, to enhance the algorithm’s adaptability and scalability for large datasets. These solutions also ensure the algorithm’s efficiency. Finally, experiments conducted on real datasets demonstrate that our fairness-oriented algorithm based on reinforcement learning achieves improvements of 81.4%, 28.5%, and 79.7% over traditional algorithms in terms of fairness, platform utility, and matching efficiency, respectively.
35

Todorov, Dimitar Georgiev, and Karova Milena. "Appropriate Conversion of Machine Learning Data." ANNUAL JOURNAL OF TECHNICAL UNIVERSITY OF VARNA, BULGARIA 6, no. 2 (December 31, 2022): 63–76. http://dx.doi.org/10.29114/ajtuv.vol6.iss2.262.

Abstract:
Data is an important part of computer technology, which explains the strong dependence of machine learning algorithms on it. The operation of any such algorithm is directly dependent on the type of data, and a proper data representation increases the productivity of these algorithms. Presented in this article is an algorithm for pre-processing data into a form that is most suitable for machine learning algorithms, with cryptographic secret keys used as the input data. The experimental results were satisfactory: when secret keys with significant differences were used, the recognition obtained is about 100%.
36

Barbosa, Flávio, Arthur Vidal, and Flávio Mello. "Machine Learning for Cryptographic Algorithm Identification." Journal of Information Security and Cryptography (Enigma) 3, no. 1 (September 3, 2016): 3. http://dx.doi.org/10.17648/enig.v3i1.55.

Abstract:
This paper aims to study encrypted text files in order to identify their encoding algorithm. Plain texts were encoded with distinct cryptographic algorithms and then some metadata were extracted from these codifications. Afterward, the algorithm identification is obtained by using data mining techniques. Firstly, texts in Portuguese, English and Spanish were encrypted using the DES, Blowfish, RSA, and RC4 algorithms. Secondly, the encrypted files were submitted to data mining techniques such as the J48, FT, PART, Complement Naive Bayes, and Multilayer Perceptron classifiers. Charts were created using the confusion matrices generated in step two, and it was possible to perceive that the percentage of identification for each of the algorithms is greater than a random guess. There are several scenarios where algorithm identification reaches almost 97.23% correctness.
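A hedged sketch of the general pipeline (scikit-learn assumed, byte-frequency histograms as the extracted metadata, and a hypothetical load_encrypted_corpus() standing in for the encrypted files; the paper itself uses Weka-style classifiers such as J48 and PART):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def byte_histogram(ciphertext: bytes) -> np.ndarray:
    """256-bin byte-frequency histogram used as the feature vector for one file."""
    counts = np.bincount(np.frombuffer(ciphertext, dtype=np.uint8), minlength=256)
    return counts / max(len(ciphertext), 1)

# `samples` is assumed to be a list of (ciphertext_bytes, algorithm_label) pairs,
# e.g. produced by encrypting plain texts with DES, Blowfish, RSA and RC4.
samples = load_encrypted_corpus()          # hypothetical loader, not a real library call
X = np.array([byte_histogram(c) for c, _ in samples])
y = np.array([label for _, label in samples])

clf = RandomForestClassifier(n_estimators=300, random_state=0)
print("identification accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```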
37

Baldi, Pierre, and Yves Chauvin. "Smooth On-Line Learning Algorithms for Hidden Markov Models." Neural Computation 6, no. 2 (March 1994): 307–18. http://dx.doi.org/10.1162/neco.1994.6.2.307.

Abstract:
A simple learning algorithm for Hidden Markov Models (HMMs) is presented together with a number of variations. Unlike other classical algorithms such as the Baum-Welch algorithm, the algorithms described are smooth and can be used on-line (after each example presentation) or in batch mode, with or without the usual Viterbi most likely path approximation. The algorithms have simple expressions that result from using a normalized-exponential representation for the HMM parameters. All the algorithms presented are proved to be exact or approximate gradient optimization algorithms with respect to likelihood, log-likelihood, or cross-entropy functions, and as such are usually convergent. These algorithms can also be cast in the more general EM (Expectation-Maximization) framework where they can be viewed as exact or approximate GEM (Generalized Expectation-Maximization) algorithms. The mathematical properties of the algorithms are derived in the appendix.
38

Yuan, Hongyuan, Jingan Liu, Yu Zhou, and Hailong Pei. "State of Charge Estimation of Lithium Battery Based on Integrated Kalman Filter Framework and Machine Learning Algorithm." Energies 16, no. 5 (February 23, 2023): 2155. http://dx.doi.org/10.3390/en16052155.

Abstract:
Research on batteries’ State of Charge (SOC) estimation for equivalent circuit models based on the Kalman Filter (KF) framework and machine learning algorithms remains relatively limited. Most studies are focused on a few machine learning algorithms and do not present comprehensive analysis and comparison. Furthermore, most of them focus on obtaining the state space parameters of the Kalman filter frame algorithm models using machine learning algorithms and then substituting the state space parameters into the Kalman filter frame algorithm to estimate the SOC. Such algorithms are highly coupled, and present high complexity and low practicability. This study aims to integrate machine learning with the Kalman filter frame algorithm, and to estimate the final SOC by using different combinations of the input, output, and intermediate variable values of five Kalman filter frame algorithms as the input of the machine learning algorithms of six main streams. These are: linear regression, support vector Regression, XGBoost, AdaBoost, random forest, and LSTM; the algorithm coupling is lower for two-way parameter adjustment and is not applied between the machine learning and Kalman filtering framework algorithms. The results demonstrate that the integrated learning algorithm significantly improves the estimation accuracy when compared to the pure Kalman filter framework or the machine learning algorithms. Among the various integrated algorithms, the random forest and Kalman filter framework presents the highest estimation accuracy along with good real-time performance. Therefore, it can be implemented in various engineering applications.
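A schematic of the loose coupling described above, with synthetic data standing in for battery measurements (NumPy and scikit-learn assumed; the battery model, filter tuning and feature set are illustrative, not the paper's): a scalar Kalman filter produces an estimate and an innovation, which are then fed, together with the raw measurement, into a machine learning regressor.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
T = 2000
true_soc = np.clip(1.0 - np.cumsum(rng.uniform(0, 1e-3, T)), 0, 1)   # synthetic discharge
voltage = 3.2 + 0.9 * true_soc + rng.normal(scale=0.02, size=T)      # crude OCV stand-in

# Scalar Kalman filter on a voltage-derived SOC proxy.
q, r = 1e-6, 4e-4
x, p = 1.0, 1.0
kf_est, kf_innov = np.empty(T), np.empty(T)
for t in range(T):
    p += q                                    # predict (SOC assumed slowly varying)
    z = (voltage[t] - 3.2) / 0.9              # measurement mapped to SOC units
    k = p / (p + r)                           # Kalman gain
    innov = z - x
    x += k * innov
    p *= (1 - k)
    kf_est[t], kf_innov[t] = x, innov

# Feed the raw measurement plus the Kalman-filter outputs into an ML regressor.
features = np.column_stack([voltage, kf_est, kf_innov])
model = RandomForestRegressor(n_estimators=200, random_state=0)
print("R^2:", cross_val_score(model, features, true_soc, cv=5).mean())
```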
39

Kuo, Cathleen, Nolan Brown, and Julian Gendreau. "BIOS-04. THE ROLE OF MACHINE LEARNING, PREDICTIVE MODELING, AND DEEP LEARNING IN ASSESSING METABOLIC BIOMARKERS TO IMPROVE PROGNOSTICATION IN GLIOBLASTOMA MULTIFORME." Neuro-Oncology 25, Supplement_5 (November 1, 2023): v21. http://dx.doi.org/10.1093/neuonc/noad179.0081.

Abstract:
Glioblastoma multiforme (GBM) is the most common primary malignant brain tumor in the United States, accounting for approximately 56.6% of all gliomas and 47.7% of all primary malignant CNS tumors. The prognosis of GBM is notably grim, with a 1-year relative survival rate of 41.4% and a 5-year survival rate of 5.8% following diagnosis. Recent efforts to identify potential therapeutic targets have utilized tumor omics data integrated with clinical information that leverages machine learning (ML) algorithms. However, there remains a paucity of studies assessing the value of these ML models as prognostic tools in GBM. A systematic search adhering to PRISMA guidelines was conducted to identify all studies describing the use of a ML algorithm involving GBM metabolic biomarkers and each algorithm's accuracy. Ten studies were included for final analysis. They were diagnostic (n = 3, 30%), prognostic (n = 6, 60%), or both (n = 1, 10%), respectively. Most studies analyzed data from multiple databases, while 50% (n = 5) included additional original samples. At least 2,536 data samples were run through a ML algorithm. 27 ML algorithms were recorded, with a mean of 2.8 algorithms per study. Algorithms were supervised (n = 22, 79%) or unsupervised (n = 6, 21%), and continuous (n = 21, 75%) or categorical (n = 7, 25%). The mean reported accuracy and AUC of ROC was 95.63% and 0.779, respectively. 106 metabolic markers were identified, but only EMP3 was reported in multiple studies. Many studies have identified potential biomarkers for GBM diagnosis and prognostication. These algorithms show promise, although a consensus on even a handful of biomarkers has not been reached. An integration of ML algorithms for biomarker detection combined with radiomics-based tumor imaging will be necessary to ascertain the greatest level of accuracy and precision.
40

Le, Hai S., Brendan Juba, and Roni Stern. "Learning Safe Action Models with Partial Observability." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 18 (March 24, 2024): 20159–67. http://dx.doi.org/10.1609/aaai.v38i18.29995.

Abstract:
A common approach for solving planning problems is to model them in a formal language such as the Planning Domain Definition Language (PDDL), and then use an appropriate PDDL planner. Several algorithms for learning PDDL models from observations have been proposed but plans created with these learned models may not be sound. We propose two algorithms for learning PDDL models that are guaranteed to be safe to use even when given observations that include partially observable states. We analyze these algorithms theoretically, characterizing the sample complexity each algorithm requires to guarantee probabilistic completeness. We also show experimentally that our algorithms are often better than FAMA, a state-of-the-art PDDL learning algorithm.
41

Zou, Shujie, Chiawei Chu, Ning Shen, and Jia Ren. "Healthcare Cost Prediction Based on Hybrid Machine Learning Algorithms." Mathematics 11, no. 23 (November 27, 2023): 4778. http://dx.doi.org/10.3390/math11234778.

Abstract:
Healthcare cost is an issue of concern right now. While many complex machine learning algorithms have been proposed to analyze healthcare cost and address the shortcomings of linear regression and reliance on expert analyses, these algorithms do not take into account whether each characteristic variable contained in the healthcare data has a positive effect on predicting healthcare cost. This paper uses hybrid machine learning algorithms to predict healthcare cost. First, network structure learning algorithms (a score-based algorithm, constraint-based algorithm, and hybrid algorithm) for a Conditional Gaussian Bayesian Network (CGBN) are used to learn the isolated characteristic variables in healthcare data without changing the data properties (i.e., discrete or continuous). Then, the isolated characteristic variables are removed from the original data and the remaining data used to train regression algorithms. Two public healthcare datasets are used to test the performance of the proposed hybrid machine learning algorithm model. Experiments show that when compared to popular single machine learning algorithms (Long Short Term Memory, Random Forest, etc.) the proposed scheme can obtain similar or higher prediction accuracy with a reduced amount of data.
42

Ma, Yindi, Yanhai Li, and Longquan Yong. "Teaching–Learning-Based Optimization Algorithm with Stochastic Crossover Self-Learning and Blended Learning Model and Its Application." Mathematics 12, no. 10 (May 20, 2024): 1596. http://dx.doi.org/10.3390/math12101596.

Abstract:
This paper presents a novel variant of the teaching–learning-based optimization algorithm, termed BLTLBO, which draws inspiration from the blended learning model, specifically designed to tackle high-dimensional multimodal complex optimization problems. Firstly, the perturbation conditions in the “teaching” and “learning” stages of the original TLBO algorithm are interpreted geometrically, based on which the search capability of the TLBO is enhanced by adjusting the range of values of random numbers. Second, a strategic restructuring has been ingeniously implemented, dividing the algorithm into three distinct phases: pre-course self-study, classroom blended learning, and post-course consolidation; this structural reorganization and the random crossover strategy in the self-learning phase effectively enhance the global optimization capability of TLBO. To evaluate its performance, the BLTLBO algorithm was tested alongside seven distinguished variants of the TLBO algorithm on thirteen multimodal functions from the CEC2014 suite. Furthermore, two excellent high-dimensional optimization algorithms were added to the comparison algorithm and tested in high-dimensional mode on five scalable multimodal functions from the CEC2008 suite. The empirical results illustrate the BLTLBO algorithm’s superior efficacy in handling high-dimensional multimodal challenges. Finally, a high-dimensional portfolio optimization problem was successfully addressed using the BLTLBO algorithm, thereby validating the practicality and effectiveness of the proposed method.
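For context, a compact sketch of the canonical teacher and learner phases that BLTLBO restructures (NumPy assumed, sphere function as a toy objective; the paper's three blended-learning phases and stochastic crossover are not reproduced):

```python
import numpy as np

def sphere(x):                      # toy objective to minimise
    return np.sum(x ** 2, axis=-1)

rng = np.random.default_rng(0)
pop = rng.uniform(-5, 5, size=(30, 10))          # 30 learners, 10 dimensions

for _ in range(200):
    fitness = sphere(pop)
    teacher = pop[np.argmin(fitness)]
    mean = pop.mean(axis=0)
    tf = rng.integers(1, 3)                       # teaching factor, 1 or 2
    # Teacher phase: move learners towards the teacher, away from the class mean.
    cand = pop + rng.random(pop.shape) * (teacher - tf * mean)
    improve = sphere(cand) < fitness
    pop[improve] = cand[improve]

    # Learner phase: each learner interacts with a random partner.
    fitness = sphere(pop)
    partners = rng.permutation(len(pop))
    direction = np.where((fitness < fitness[partners])[:, None],
                         pop - pop[partners], pop[partners] - pop)
    cand = pop + rng.random(pop.shape) * direction
    improve = sphere(cand) < fitness
    pop[improve] = cand[improve]

print("best value:", sphere(pop).min())
```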
43

Sigurdson, Devon, and Vadim Bulitko. "Deep Learning for Real-Time Heuristic Search Algorithm Selection." Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment 13, no. 1 (June 25, 2021): 108–14. http://dx.doi.org/10.1609/aiide.v13i1.12927.

Abstract:
Real-time heuristic search algorithms are used for creating agents that rely on local information and move in a bounded amount of time, making them an excellent candidate for video games, where planning time can be controlled. Path finding on video game maps has become the de facto standard for evaluating real-time heuristic search algorithms. Over the years researchers have worked to identify areas where these algorithms perform poorly in an attempt to mitigate their weaknesses. Recent work illustrates the benefits of tailoring algorithms for a given problem, as performance is heavily dependent on the search space. In order to determine which algorithm to select for solving the search problems on a map, the developer would have to run all the algorithms under consideration to obtain the correct choice. Our work extends the previous algorithm selection approach to use a deep learning classifier to select the algorithm to use on new maps without having to evaluate the algorithms on the map. To do so, we select a portfolio of algorithms and train a classifier to predict which portfolio member to use on an unseen new map. Our empirical results show that selecting algorithms dynamically can outperform the single best algorithm from the portfolio on new maps, as well as provide a lower bound on potential improvements to motivate further work on this approach.
APA, Harvard, Vancouver, ISO, and other styles
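To make the portfolio-selection idea concrete, the sketch below trains a classifier on simple map statistics to pick a portfolio member for an unseen map. The portfolio names, the hand-crafted features, and the synthetic labels are assumptions added here for illustration; the paper itself trains a deep convolutional classifier on measured per-map performance.

```python
# Minimal sketch of portfolio-based algorithm selection: train a classifier on map
# features to predict which real-time search algorithm to run on an unseen map.
# The portfolio names, features, and synthetic labels are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier

PORTFOLIO = ["LRTA*", "wLRTA*", "LSS-LRTA*"]  # hypothetical portfolio members

def map_features(grid):
    """Summarise a 0/1 occupancy grid with a few hand-crafted statistics."""
    grid = np.asarray(grid, dtype=float)
    return np.array([grid.mean(),                        # overall obstacle density
                     grid[: grid.shape[0] // 2].mean(),  # density of top half
                     grid[:, : grid.shape[1] // 2].mean()])  # density of left half

rng = np.random.default_rng(0)
maps = [rng.integers(0, 2, size=(32, 32)) for _ in range(200)]
X = np.stack([map_features(m) for m in maps])
y = rng.integers(0, len(PORTFOLIO), size=len(maps))  # stand-in for the measured best algorithm

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0).fit(X, y)
unseen = rng.integers(0, 2, size=(32, 32))
print("suggested algorithm:", PORTFOLIO[clf.predict([map_features(unseen)])[0]])
```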
44

Shah, Kulin, and Naresh Manwani. "Online Active Learning of Reject Option Classifiers." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 5652–59. http://dx.doi.org/10.1609/aaai.v34i04.6019.

Full text
Abstract:
Active learning is an important technique to reduce the number of labeled examples in supervised learning. Active learning for binary classification has been well addressed in machine learning. However, active learning of reject option classifiers remains unaddressed. In this paper, we propose novel algorithms for active learning of reject option classifiers. We develop an active learning algorithm using the double ramp loss function and provide mistake bounds for this algorithm. We also propose a new loss function for the reject option, called the double sigmoid loss function, together with a corresponding active learning algorithm, and we offer a convergence guarantee for it. We provide extensive experimental results to show the effectiveness of the proposed algorithms, which efficiently reduce the number of labeled examples required.
APA, Harvard, Vancouver, ISO, and other styles
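The sketch below illustrates only the problem setting described in the abstract: an online linear classifier with a reject band that queries labels for low-confidence points. The perceptron-style update is a stand-in for the paper's double ramp and double sigmoid loss algorithms, and the threshold rho and learning rate are assumptions.

```python
# Minimal sketch of the setting these algorithms address: an online linear classifier
# with a reject region that queries a label only for low-confidence points.
# The perceptron-style update is a stand-in, not the paper's algorithms.
import numpy as np

def run_online(stream, dim, rho=0.5, lr=0.1):
    w = np.zeros(dim)
    queries = 0
    for x, y in stream:                      # y in {-1, +1}, revealed only if queried
        score = float(np.dot(w, x))
        if abs(score) < rho:                 # inside the reject band: ask for the label
            queries += 1
            if y * score <= 0:               # update on uncertain or mistaken points
                w += lr * y * x
    return w, queries

rng = np.random.default_rng(1)
w_true = np.array([1.0, -1.0])
stream = []
for _ in range(500):
    x = rng.normal(size=2)
    stream.append((x, 1 if np.dot(w_true, x) > 0 else -1))

w, queries = run_online(stream, dim=2)
print("learned weights:", w, "labels queried:", queries)
```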
45

Huai, Mengdi, Di Wang, Chenglin Miao, Jinhui Xu, and Aidong Zhang. "Pairwise Learning with Differential Privacy Guarantees." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 01 (April 3, 2020): 694–701. http://dx.doi.org/10.1609/aaai.v34i01.5411.

Full text
Abstract:
Pairwise learning has received much attention recently because it is better able to model the relative relationship between pairs of samples. Many machine learning tasks can be categorized as pairwise learning, such as AUC maximization and metric learning. Existing techniques for pairwise learning all fail to take into consideration a critical issue in their design, i.e., the protection of sensitive information in the training set. Models learned by such algorithms can implicitly memorize the details of sensitive information, which offers an opportunity for malicious parties to infer it from the learned models. To address this challenging issue, in this paper we propose several differentially private pairwise learning algorithms for both online and offline settings. Specifically, for the online setting, we first introduce a differentially private algorithm (called OnPairStrC) for strongly convex loss functions. We then extend this algorithm to general convex loss functions and give another differentially private algorithm (called OnPairC). For the offline setting, we also present two differentially private algorithms (called OffPairStrC and OffPairC) for strongly convex and general convex loss functions, respectively. These algorithms not only learn the model effectively from the data but also provide a strong privacy protection guarantee for the sensitive information in the training set. Extensive experiments on real-world datasets are conducted to evaluate the proposed algorithms, and the experimental results support our theoretical analysis.
APA, Harvard, Vancouver, ISO, and other styles
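As a hedged illustration of the general mechanism behind such algorithms, the sketch below runs pairwise SGD with per-pair gradient clipping and Gaussian noise. It is not the paper's OnPairStrC/OnPairC/OffPairStrC/OffPairC algorithms, and the noise scale is illustrative rather than calibrated to a specific (epsilon, delta) guarantee.

```python
# Minimal sketch of differentially private pairwise learning: optimise a pairwise
# hinge loss on score differences (as in AUC maximisation) with clipped per-pair
# gradients and Gaussian noise. The noise scale below is an uncalibrated assumption.
import numpy as np

def private_pairwise_sgd(X, y, epochs=200, lr=0.05, clip=1.0, sigma=0.5, seed=0):
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    pos, neg = X[y == 1], X[y == -1]
    for _ in range(epochs):
        i, j = rng.integers(len(pos)), rng.integers(len(neg))
        diff = pos[i] - neg[j]
        # Hinge-style pairwise loss: we want w.(x_pos - x_neg) >= 1.
        grad = -diff if np.dot(w, diff) < 1 else np.zeros_like(diff)
        grad = grad / max(1.0, np.linalg.norm(grad) / clip)            # clip sensitivity
        grad = grad + rng.normal(scale=sigma * clip, size=grad.shape)  # add Gaussian noise
        w -= lr * grad
    return w

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 5))
y = np.where(X[:, 0] + 0.3 * rng.normal(size=200) > 0, 1, -1)
print("weights:", private_pairwise_sgd(X, y))
```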
46

Khan, Dr Rafiqul Zaman, and Haider Allamy. "Training Algorithms for Supervised Machine Learning: Comparative Study." INTERNATIONAL JOURNAL OF MANAGEMENT & INFORMATION TECHNOLOGY 4, no. 3 (July 25, 2013): 354–60. http://dx.doi.org/10.24297/ijmit.v4i3.773.

Full text
Abstract:
Supervised machine learning is an important approach for training artificial neural networks; consequently, there is a demand for selected supervised learning algorithms, such as the back propagation algorithm, the decision tree learning algorithm, and the perceptron algorithm, to perform the learning stage of artificial neural networks. In this paper, a comparative study of the aforementioned algorithms is presented to evaluate their performance along a range of specific parameters, such as speed of learning, overfitting avoidance, and accuracy. Beyond these parameters, we also include their benefits and limitations to unveil their hidden features and provide more detail on their performance. We found the decision tree algorithm to be the best of the compared algorithms, as it can solve complex problems with remarkable speed.
APA, Harvard, Vancouver, ISO, and other styles
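A minimal sketch of this kind of comparison, assuming scikit-learn and a synthetic dataset rather than the study's data or settings: a backpropagation network, a decision tree, and a perceptron are trained on the same split and compared on held-out accuracy and wall-clock training time.

```python
# Minimal sketch of the survey's comparison: a backpropagation network, a decision
# tree, and a perceptron trained on the same data. Dataset and settings are assumptions.
import time
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import Perceptron

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "backpropagation (MLP)": MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "perceptron": Perceptron(random_state=0),
}
for name, model in models.items():
    start = time.perf_counter()
    model.fit(X_tr, y_tr)
    elapsed = time.perf_counter() - start
    print(f"{name}: accuracy={model.score(X_te, y_te):.3f}, train_time={elapsed:.2f}s")
```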
47

Olari, Viktoriya, Kostadin Cvejoski, and Øyvind Eide. "Introduction to Machine Learning with Robots and Playful Learning." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 17 (May 18, 2021): 15630–39. http://dx.doi.org/10.1609/aaai.v35i17.17841.

Full text
Abstract:
Inspired by explanations of machine learning concepts in children’s books, we developed an approach to introduce supervised, unsupervised, and reinforcement learning using a block-based programming language in combination with the benefits of educational robotics. Instead of using blocks as high-end APIs to access AI cloud services or to reproduce the machine learning algorithms, we use them as a means to put the student “in the algorithm’s shoes.” We adapt the training of neural networks, Q-learning, and k-means algorithms to a design and format suitable for children and equip the students with hands-on tools for playful experimentation. The children learn about direct supervision by modifying the weights in the neural networks and immediately observing the effects on the simulated robot. Following the ideas of constructionism, they experience how the algorithms and underlying machine learning concepts work in practice. We conducted and evaluated this approach with students in primary, middle, and high school. All the age groups perceived the topics to be very easy to moderately hard to grasp. Younger students experienced direct supervision as challenging, whereas they found Q-learning and k-means algorithms much more accessible. Most high-school students could cope with all the topics without particular difficulties.
APA, Harvard, Vancouver, ISO, and other styles
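For readers who want to see the kind of Q-learning the students step through, here is a minimal tabular sketch on a toy one-dimensional robot task. The environment and hyperparameters are illustrative assumptions, not the course materials described in the paper.

```python
# Minimal tabular Q-learning sketch: a robot on a short line learns to reach the goal
# cell. The tiny environment and the hyperparameters are illustrative assumptions.
import numpy as np

n_states, goal = 5, 4
actions = [-1, +1]                      # move left / move right
Q = np.zeros((n_states, len(actions)))
rng = np.random.default_rng(3)
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for _ in range(300):
    s = 0
    while s != goal:
        a = rng.integers(2) if rng.random() < epsilon else int(np.argmax(Q[s]))
        s_next = int(np.clip(s + actions[a], 0, n_states - 1))
        reward = 1.0 if s_next == goal else 0.0
        # Core Q-learning update that students can trace step by step.
        Q[s, a] += alpha * (reward + gamma * np.max(Q[s_next]) - Q[s, a])
        s = s_next

print("greedy action per state:",
      [["left", "right"][int(np.argmax(Q[s]))] for s in range(n_states)])
```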
48

Chaulwar, Amit. "Sampling Algorithms Combination with Machine Learning for Efficient Safe Trajectory Planning." International Journal of Machine Learning and Computing 11, no. 1 (January 2021): 1–11. http://dx.doi.org/10.18178/ijmlc.2021.11.1.1007.

Full text
Abstract:
The planning of safe trajectories in critical traffic scenarios using model-based algorithms is a very computationally intensive task. Recently proposed algorithms, namely Hybrid Augmented CL-RRT, Hybrid Augmented CL-RRT+, and GATE-ARRT+, drastically reduce the computation time for safe trajectory planning by combining a deep learning algorithm, 3D-ConvNet, with a vehicle dynamics model. An efficient embedded implementation of these algorithms is required, as the resources of the vehicle's on-board microcontroller are limited. This work proposes methodologies for replacing the computationally intensive modules of these trajectory planning algorithms with different efficient machine learning and analytical methods. The required computational resources are measured by downloading and running the algorithms on various hardware platforms. The results show a significant reduction in computational resources and the potential of the proposed algorithms to run in real time. Alternative architectures for 3D-ConvNet are also presented for further reduction of the required computational resources.
APA, Harvard, Vancouver, ISO, and other styles
49

Li, Xinyu, Xiaoguang Gao, and Chenfeng Wang. "A Novel BN Learning Algorithm Based on Block Learning Strategy." Sensors 20, no. 21 (November 7, 2020): 6357. http://dx.doi.org/10.3390/s20216357.

Full text
Abstract:
Learning accurate Bayesian network (BN) structures from high-dimensional and sparse data is difficult because of the high computational complexity. To learn accurate structures for high-dimensional and sparse data faster, this paper adopts a divide-and-conquer strategy and proposes a block learning algorithm with a mutual-information-based K-means algorithm (the BLMKM algorithm). This method utilizes an improved K-means algorithm to block the nodes of the BN and a max-min parents and children (MMPC) algorithm to obtain the whole skeleton of the BN and find possible graph structures based on the separated blocks. Then, a pruned dynamic programming algorithm is run sequentially over all possible graph structures to obtain candidate BNs and find the best BN according to a scoring function. Experiments show that, for high-dimensional and sparse data, the BLMKM algorithm can achieve the same accuracy in a reasonable time compared with non-blocking classical learning algorithms. Compared to existing block learning algorithms, the BLMKM algorithm has a time advantage while maintaining accuracy. The analysis of a real radar effect mechanism dataset shows that the BLMKM algorithm can quickly establish a global and accurate causality model to find the cause of interference, predict the detection result, and guide parameter optimization. The BLMKM algorithm is efficient for BN learning and has practical application value.
APA, Harvard, Vancouver, ISO, and other styles
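The sketch below illustrates only the blocking step of this approach: pairwise mutual information between variables is computed and K-means clusters the variables into blocks. The per-block structure search (MMPC plus pruned dynamic programming in the paper) is omitted, and the synthetic data and the number of blocks are assumptions.

```python
# Minimal sketch of the blocking idea in BLMKM: estimate pairwise mutual information
# between discrete variables, then cluster variables into blocks with K-means so that
# structure learning can be run per block. The data and number of blocks are assumptions.
import numpy as np
from sklearn.metrics import mutual_info_score
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
n_samples, n_vars = 1000, 12
data = rng.integers(0, 2, size=(n_samples, n_vars))
data[:, 1] = data[:, 0]          # make variable 1 a copy of variable 0 so they share high MI

# Pairwise mutual information matrix over the discrete variables.
mi = np.zeros((n_vars, n_vars))
for i in range(n_vars):
    for j in range(n_vars):
        mi[i, j] = mutual_info_score(data[:, i], data[:, j])

# Treat each variable's MI profile as its feature vector and block with K-means.
blocks = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(mi)
for b in range(3):
    print("block", b, ":", np.where(blocks == b)[0].tolist())
```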
50

Ayuningtyas, Puji, Rahmawati Rahmawati, and Akhmad Miftahusalam. "Comparison of Machine Learning and Deep Learning Algorithms for Classification of Breast Cancer." Journal of Computer Engineering, Electronics and Information Technology 2, no. 2 (October 1, 2023): 89–98. http://dx.doi.org/10.17509/coelite.v2i2.59717.

Full text
Abstract:
Statistical data from the American Cancer Society show that breast cancer ranks first, with the highest number of cases among all types of malignant tumors (cancer) worldwide. Through a data mining process used to extract information and analyze data, a classification process can be carried out to further analyze patterns in the data. The dataset used in this study is the Breast Cancer Wisconsin (Diagnostic) dataset obtained from the UCI Machine Learning Repository. The purpose of this study is to compare five algorithms, namely Logistic Regression, K-Neighbors Classifier (KNN), Decision Tree Classifier, Deep Neural Network, and a Genetic Algorithm. The results show that the deep neural network and the multilayer perceptron–genetic algorithm achieve 96% accuracy, as does logistic regression, followed by KNN with 94% and the decision tree classifier with 92%.
APA, Harvard, Vancouver, ISO, and other styles
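A minimal sketch of this comparison on the Wisconsin diagnostic dataset as shipped with scikit-learn, assuming illustrative hyperparameters and omitting the multilayer perceptron–genetic algorithm hybrid; the accuracies it prints will not exactly match the study's reported numbers.

```python
# Minimal sketch of the study's comparison on the Breast Cancer Wisconsin (Diagnostic)
# dataset via scikit-learn. Hyperparameters are assumptions; the GA hybrid is omitted.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "deep neural network (MLP)": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: test accuracy = {model.score(X_te, y_te):.3f}")
```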