Journal articles on the topic 'Learning algorithm'

1

Mu, Tong, Georgios Theocharous, David Arbour, and Emma Brunskill. "Constraint Sampling Reinforcement Learning: Incorporating Expertise for Faster Learning." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 7 (June 28, 2022): 7841–49. http://dx.doi.org/10.1609/aaai.v36i7.20753.

Abstract:
Online reinforcement learning (RL) algorithms are often difficult to deploy in complex human-facing applications as they may learn slowly and have poor early performance. To address this, we introduce a practical algorithm for incorporating human insight to speed learning. Our algorithm, Constraint Sampling Reinforcement Learning (CSRL), incorporates prior domain knowledge as constraints/restrictions on the RL policy. It takes in multiple potential policy constraints to maintain robustness to misspecification of individual constraints while leveraging helpful ones to learn quickly. Given a base RL learning algorithm (ex. UCRL, DQN, Rainbow) we propose an upper confidence with elimination scheme that leverages the relationship between the constraints, and their observed performance, to adaptively switch among them. We instantiate our algorithm with DQN-type algorithms and UCRL as base algorithms, and evaluate our algorithm in four environments, including three simulators based on real data: recommendations, educational activity sequencing, and HIV treatment sequencing. In all cases, CSRL learns a good policy faster than baselines.
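As an illustration of the "upper confidence with elimination" idea summarised above, the following sketch reduces constraint selection to its bandit skeleton: each candidate constraint is treated like an arm, episode returns act as rewards, and constraints whose upper confidence bound falls below the best lower bound are dropped. The constraint names, simulated returns and hyperparameters are hypothetical stand-ins for episodes of a base RL learner, not the authors' implementation.

```python
import math
import random

def run_episode_under(constraint):
    # Stand-in for one episode of the base RL learner under a given constraint.
    true_value = {"expert_A": 0.8, "expert_B": 0.5, "misspecified": 0.2}[constraint]
    return true_value + random.gauss(0, 0.1)

constraints = ["expert_A", "expert_B", "misspecified"]
stats = {c: {"n": 0, "total": 0.0, "active": True} for c in constraints}

for t in range(1, 501):
    def bounds(c):
        s = stats[c]
        mean = s["total"] / s["n"] if s["n"] else 0.0
        rad = math.sqrt(2 * math.log(t) / s["n"]) if s["n"] else float("inf")
        return mean - rad, mean + rad

    active = [c for c in constraints if stats[c]["active"]]
    chosen = max(active, key=lambda c: bounds(c)[1])   # optimistic selection
    ret = run_episode_under(chosen)
    stats[chosen]["n"] += 1
    stats[chosen]["total"] += ret
    best_lower = max(bounds(c)[0] for c in active)
    for c in active:                                   # elimination step
        if bounds(c)[1] < best_lower:
            stats[c]["active"] = False

print({c: s["active"] for c, s in stats.items()})
```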
2

Note, Johan, and Maaruf Ali. "Comparative Analysis of Intrusion Detection System Using Machine Learning and Deep Learning Algorithms." Annals of Emerging Technologies in Computing 6, no. 3 (July 1, 2022): 19–36. http://dx.doi.org/10.33166/aetic.2022.03.003.

Abstract:
Attacks against computer networks, “cyber-attacks”, are now commonplace, affecting almost every Internet-connected device on a daily basis. Organisations now use machine learning and deep learning to thwart these types of attacks because of their effectiveness without the need for human intervention. Machine learning offers the biggest advantage in its ability to detect, curtail, prevent, recover from and even deal with untrained types of attacks without being explicitly programmed. This research shows the many different types of algorithms that are employed to fight against the different types of cyber-attacks, which are also explained. The classification algorithms, their implementation, accuracy and testing time are presented. The algorithms employed for this experiment were the Gaussian Naïve-Bayes algorithm, Logistic Regression algorithm, SVM (Support Vector Machine) algorithm, Stochastic Gradient Descent algorithm, Decision Tree algorithm, Random Forest algorithm, Gradient Boosting algorithm, K-Nearest Neighbour algorithm, ANN (Artificial Neural Network; here we also employed the Multilevel Perceptron algorithm), Convolutional Neural Network (CNN) algorithm and the Recurrent Neural Network (RNN) algorithm. The study concluded that, amongst the various machine learning algorithms, the Logistic Regression and Decision Tree classifiers both took a very short time to implement, giving an accuracy of over 90% for malware detection on various test datasets. The Gaussian Naïve-Bayes classifier, though fast to implement, only gave an accuracy between 51% and 88%. The Multilevel Perceptron, non-linear SVM and Gradient Boosting algorithms all took a very long time to implement. The algorithm that performed with the greatest accuracy was the Random Forest classification algorithm.
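A minimal harness of the kind this abstract describes can be sketched with scikit-learn: train a subset of the listed classifiers on a labelled dataset and record accuracy and training time. The synthetic data below is a stand-in for an intrusion-detection dataset, and the classifier choices and settings are illustrative assumptions, not the paper's configuration.

```python
import time
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Stand-in data; replace with the intrusion-detection features and labels.
X, y = make_classification(n_samples=5000, n_features=30, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "GaussianNB": GaussianNB(),
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "SGD": SGDClassifier(),
    "DecisionTree": DecisionTreeClassifier(),
    "RandomForest": RandomForestClassifier(),
    "GradientBoosting": GradientBoostingClassifier(),
    "KNN": KNeighborsClassifier(),
    "SVM-RBF": SVC(kernel="rbf"),
}

for name, model in models.items():
    start = time.time()
    model.fit(X_tr, y_tr)                 # training time is part of the comparison
    elapsed = time.time() - start
    acc = model.score(X_te, y_te)
    print(f"{name:18s} accuracy={acc:.3f} train_time={elapsed:.2f}s")
```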
3

Li, Xinyu, Xiaoguang Gao, and Chenfeng Wang. "A Novel BN Learning Algorithm Based on Block Learning Strategy." Sensors 20, no. 21 (November 7, 2020): 6357. http://dx.doi.org/10.3390/s20216357.

Abstract:
Learning accurate Bayesian Network (BN) structures of high-dimensional and sparse data is difficult because of high computation complexity. To learn the accurate structure for high-dimensional and sparse data faster, this paper adopts a divide and conquer strategy and proposes a block learning algorithm with a mutual information based K-means algorithm (BLMKM algorithm). This method utilizes an improved K-means algorithm to block the nodes in BN and a maximum minimum parents and children (MMPC) algorithm to obtain the whole skeleton of BN and find possible graph structures based on separated blocks. Then, a pruned dynamic programming algorithm is performed sequentially for all possible graph structures to get possible BNs and find the best BN by scoring function. Experiments show that for high-dimensional and sparse data, the BLMKM algorithm can achieve the same accuracy in a reasonable time compared with non-blocking classical learning algorithms. Compared to the existing block learning algorithms, the BLMKM algorithm has a time advantage on the basis of ensuring accuracy. The analysis of the real radar effect mechanism dataset proves that BLMKM algorithm can quickly establish a global and accurate causality model to find the cause of interference, predict the detecting result, and guide the parameters optimization. BLMKM algorithm is efficient for BN learning and has practical application value.
4

Kumar Jitender Kumar, Yogesh. "Facemask Detection using Deep Learning Algorithm." International Journal of Science and Research (IJSR) 12, no. 5 (May 5, 2023): 1520–24. http://dx.doi.org/10.21275/sr23518151522.

5

Lin, Ying Jian, and Xiao Ji Chen. "Simulated Annealing Algorithm Improved BP Learning Algorithm." Applied Mechanics and Materials 513-517 (February 2014): 734–37. http://dx.doi.org/10.4028/www.scientific.net/amm.513-517.734.

Abstract:
The BP learning algorithm has the advantages of a simple structure and ease of implementation, and it has gained wide application in malfunction diagnosis, pattern recognition and similar tasks. Because the BP algorithm easily falls into local minima, the simulated annealing algorithm is introduced to address this shortcoming. Firstly, the study reviews the basic idea of the BP learning algorithm and its simple mathematical representation; then it examines the theory of the simulated annealing algorithm and its annealing process; finally, it combines the BP algorithm with the simulated annealing algorithm to form a hybrid optimization algorithm of simulated annealing based on the genetic and improved BP algorithm, and gives the specific calculation steps. The results show that the approach gives full play to the respective advantages of the two algorithms, making the best use of their strengths while bypassing their weaknesses, which is of significance both academically and in application.
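The hybrid idea, gradient updates interleaved with simulated-annealing moves that may accept a temporarily worse solution and so escape local minima, can be shown on a toy problem. The multi-minima loss, step sizes and cooling schedule below are illustrative assumptions, not the paper's network or settings.

```python
import numpy as np

rng = np.random.default_rng(0)
loss = lambda w: float(np.sum(np.sin(3 * w) + 0.1 * w ** 2))  # multi-minima stand-in for a network loss
grad = lambda w: 3 * np.cos(3 * w) + 0.2 * w

w = rng.uniform(-3, 3, size=5)
temperature, lr = 1.0, 0.05
for step in range(2000):
    w = w - lr * grad(w)                       # gradient-descent (BP-like) move
    if step % 20 == 0:                         # periodic simulated-annealing move
        candidate = w + rng.normal(scale=0.5, size=w.shape)
        delta = loss(candidate) - loss(w)
        if delta < 0 or rng.random() < np.exp(-delta / temperature):
            w = candidate                      # Metropolis acceptance criterion
        temperature *= 0.99                    # cooling schedule
print(loss(w))
```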
6

Ma, Jian Hua, and Fa Zhong Tian. "Intelligent Learning Ant Colony Algorithm." Applied Mechanics and Materials 48-49 (February 2011): 625–31. http://dx.doi.org/10.4028/www.scientific.net/amm.48-49.625.

Abstract:
The ant colony algorithm is an effective algorithm for NP-hard problems, but, like other evolutionary algorithms, it tends to converge prematurely. One improvement to the ant colony algorithm is studied in this paper. An intelligent learning ant colony algorithm with pheromone differences and a positive-negative learning mechanism is put forward to solve the TSP. The basic approach of the ant colony algorithm is introduced first; then the individual pheromone matrix and the positive-negative learning mechanism are introduced into the ant colony algorithm. Next, the steps of the intelligent learning ant colony algorithm are given. Finally, the effectiveness of this algorithm is verified on random numerical examples and typical numerical examples. It is also shown that the intelligent ants and the learning mechanism affect the concentration degree of the pheromone.
7

Coe, James, and Mustafa Atay. "Evaluating Impact of Race in Facial Recognition across Machine Learning and Deep Learning Algorithms." Computers 10, no. 9 (September 10, 2021): 113. http://dx.doi.org/10.3390/computers10090113.

Abstract:
The research aims to evaluate the impact of race in facial recognition across two types of algorithms. We give a general insight into facial recognition and discuss four problems related to facial recognition. We review our system design, development, and architectures and give an in-depth evaluation plan for each type of algorithm, dataset, and a look into the software and its architecture. We thoroughly explain the results and findings of our experimentation and provide analysis for the machine learning algorithms and deep learning algorithms. Concluding the investigation, we compare the results of two kinds of algorithms and compare their accuracy, metrics, miss rates, and performances to observe which algorithms mitigate racial bias the most. We evaluate racial bias across five machine learning algorithms and three deep learning algorithms using racially imbalanced and balanced datasets. We evaluate and compare the accuracy and miss rates between all tested algorithms and report that SVC is the superior machine learning algorithm and VGG16 is the best deep learning algorithm based on our experimental study. Our findings conclude the algorithm that mitigates the bias the most is VGG16, and all our deep learning algorithms outperformed their machine learning counterparts.
8

Barbosa, Flávio, Arthur Vidal, and Flávio Mello. "Machine Learning for Cryptographic Algorithm Identification." Journal of Information Security and Cryptography (Enigma) 3, no. 1 (September 3, 2016): 3. http://dx.doi.org/10.17648/enig.v3i1.55.

Abstract:
This paper aims to study encrypted text files in order to identify their encoding algorithm. Plain texts were encoded with distinct cryptographic algorithms and then some metadata were extracted from these codifications. Afterward, the algorithm identification is obtained by using data mining techniques. Firstly, texts in Portuguese, English and Spanish were encrypted using DES, Blowfish, RSA, and RC4 algorithms. Secondly, the encrypted files were submitted to data mining techniques such as J48, FT, PART, Complement Naive Bayes, and Multilayer Perceptron classifiers. Charts were created using the confusion matrices generated in step two and it was possible to perceive that the percentage of identification for each of the algorithms is greater than a probabilistic bid. There are several scenarios where algorithm identification reaches almost 97.23% of correctness.
9

Yao, Jiajun. "RRT algorithm learning and optimization." Applied and Computational Engineering 53, no. 1 (March 28, 2024): 296–302. http://dx.doi.org/10.54254/2755-2721/53/20241614.

Abstract:
With the increasing maturity of the RRT algorithm, more and more fields are starting to use it. For example, in path planning problems, this algorithm has been widely applied because of its good performance and real-time behaviour. The RRT algorithm is a path planning algorithm based on a tree structure. It continuously explores unknown regions, finds feasible paths, and ultimately connects the starting point and target point by randomly expanding the nodes of the tree. The RRT algorithm has good fast exploration ability and low computational complexity, making it suitable for path planning problems in various environments. This article focuses on studying the parameters of various RRT algorithms. Through analysis and comparison, more reasonable parameters were ultimately found. This article also covers optimizing and improving the RRT algorithm using the RRT* algorithm. The research in this article furthers the understanding of the application scenarios of the RRT algorithm. It is expected that this algorithm will be better applied in the field of autonomous driving in the future.
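The tree-expansion idea the abstract summarises fits in a short sketch: sample a random point, extend the nearest tree node toward it by a fixed step, and stop once the goal is reached. The workspace bounds, step size, goal bias and the obstacle check are illustrative placeholders.

```python
import math
import random

def rrt(start, goal, step=0.5, goal_tol=0.5, max_iters=5000, is_free=lambda p: True):
    nodes = [start]
    parents = {0: None}
    for _ in range(max_iters):
        # Goal-biased random sampling in a 10 x 10 workspace (an assumption).
        sample = goal if random.random() < 0.05 else (random.uniform(0, 10), random.uniform(0, 10))
        nearest = min(range(len(nodes)), key=lambda i: math.dist(nodes[i], sample))
        nx, ny = nodes[nearest]
        theta = math.atan2(sample[1] - ny, sample[0] - nx)
        new = (nx + step * math.cos(theta), ny + step * math.sin(theta))
        if not is_free(new):                 # skip points that land in obstacles
            continue
        parents[len(nodes)] = nearest
        nodes.append(new)
        if math.dist(new, goal) < goal_tol:  # goal reached: backtrack the path
            path, i = [], len(nodes) - 1
            while i is not None:
                path.append(nodes[i])
                i = parents[i]
            return path[::-1]
    return None

print(rrt((0.0, 0.0), (9.0, 9.0)))
```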
10

Crandall, Jacob, Asad Ahmed, and Michael Goodrich. "Learning in Repeated Games with Minimal Information: The Effects of Learning Bias." Proceedings of the AAAI Conference on Artificial Intelligence 25, no. 1 (August 4, 2011): 650–56. http://dx.doi.org/10.1609/aaai.v25i1.7871.

Abstract:
Automated agents for electricity markets, social networks, and other distributed networks must repeatedly interact with other intelligent agents, often without observing associates' actions or payoffs (i.e., minimal information). Given this reality, our goal is to create algorithms that learn effectively in repeated games played with minimal information. As in other applications of machine learning, the success of a learning algorithm in repeated games depends on its learning bias. To better understand what learning biases are most successful, we analyze the learning biases of previously published multi-agent learning (MAL) algorithms. We then describe a new algorithm that adapts a successful learning bias from the literature to minimal information environments. Finally, we compare the performance of this algorithm with ten other algorithms in repeated games played with minimal information.
11

Preethi, B. Meena, R. Gowtham, S. Aishvarya, S. Karthick, and D. G. Sabareesh. "Rainfall Prediction using Machine Learning and Deep Learning Algorithms." International Journal of Recent Technology and Engineering (IJRTE) 10, no. 4 (November 30, 2021): 251–54. http://dx.doi.org/10.35940/ijrte.d6611.1110421.

Abstract:
The project entitled “Rainfall Prediction using Machine Learning & Deep Learning Algorithms” is a research project developed in Python, with the dataset stored in Microsoft Excel. The prediction uses various machine learning and deep learning algorithms to find which algorithm predicts most accurately. Rainfall prediction can be achieved using binary classification under data mining. Predicting rainfall is very important for several aspects of a country and can help prevent serious natural disasters. For this prediction, an Artificial Neural Network using forward and backward propagation, AdaBoost, Gradient Boosting and XGBoost algorithms are used in this model. There are five modules in this project. The Data Analysis Module analyses the datasets and finds the missing values in the dataset. Data Pre-processing includes Data Cleaning, which is the process of filling the missing values in the dataset. The Feature Transformation Module is used to modify the features of the dataset. The Data Mining Module is used to train models on the dataset so that they learn the underlying patterns. The Model Evaluation Module is used to measure the performance of the models and determine the overall best accuracy for the prediction. The dataset used in this prediction is for Australia. The main aim of the project is to compare the various boosting algorithms with the neural network and find the best algorithm among them. This prediction can be of major advantage to farmers in planting the types of crops suited to the available water. Overall, we analyse which algorithm is feasible for qualitatively predicting the rainfall.
12

Sears, Connie, Rachel Tandias, and Jorge Arroyo. "Deep learning algorithm." Survey of Ophthalmology 63, no. 3 (May 2018): 448–49. http://dx.doi.org/10.1016/j.survophthal.2017.12.003.

13

Yu, Binyan, and Yuanzheng Zheng. "Research on algorithms of machine learning." Applied and Computational Engineering 39, no. 1 (February 21, 2024): 277–81. http://dx.doi.org/10.54254/2755-2721/39/20230614.

Abstract:
Machine learning has endless application possibilities, with many algorithms worth learning in depth. Different algorithms can be flexibly applied to a variety of vertical fields: the most common neural network algorithms serve image recognition and computer vision scenarios such as face recognition, garbage classification and picture classification, and the recently popular natural language processing and recommendation applications also derive from them. In the field of financial analysis, the decision tree algorithm and its derivatives such as random forest are the mainstream, alongside support vector machines, naive Bayes, K-nearest neighbour algorithms and others, ranging from traditional regression algorithms to the currently popular neural network algorithms. This paper discusses the application principles of these algorithms, including linear regression, decision trees and other supervised learning methods, and lists some corresponding applications. Although some have been replaced by more powerful and flexible algorithms and methods, by studying and understanding these foundational algorithms in depth, neural network models can be better designed and optimized, and a better understanding of how they work can be obtained.
14

Donghua Zhang. "Effectiveness Assessment and Optimization of Cross-Language Comparative Learning Algorithms in English Learning." Journal of Electrical Systems 20, no. 6s (April 29, 2024): 368–73. http://dx.doi.org/10.52783/jes.2657.

Abstract:
This study looks into the usefulness of cross-language comparison learning algorithms for enhancing English language acquisition among adult learners from various linguistic origins. In this study, two different algorithms, Algorithm A and Algorithm B, were systematically assessed to determine their impact on two critical components of language learning: listening comprehension and spoken fluency. A group of people including 100 adult learners participated in the study, taking exams customized to measure their proficiency in listening comprehension and speaking fluency. The evaluation indicated significant disparities in the efficacy of the two algorithms. Algorithm A outperformed Algorithm B, with higher mean scores in both comprehension and fluency evaluations. The results highlight the potential of optimized cross-language comparative learning algorithms to improve language learning outcomes, particularly in the context of English language acquisition. These algorithms show promise in meeting the different requirements and preferences of English language learners by leveraging computational approaches and multilingual data to effectively scaffold language learning processes. Furthermore, the study emphasizes the need for additional research to improve algorithmic designs and assess the long-term competence outcomes related to the usage of cross-language comparative learning algorithms. Embracing new technologies provides promising prospects to improve the effectiveness of English language instruction, encourage linguistic variety, and prepare students to succeed in an interconnected global society.
15

Budura, Georgeta, Corina Botoca, and Nicolae Miclău. "Competitive learning algorithms for data clustering." Facta universitatis - series: Electronics and Energetics 19, no. 2 (2006): 261–69. http://dx.doi.org/10.2298/fuee0602261b.

Abstract:
This paper presents and discusses some competitive learning algorithms for data clustering. A new competitive learning algorithm, named the dynamically penalized rival competitive learning algorithm (DPRCL), is introduced and studied. It is a variant of the rival penalized competitive algorithm [1] and performs appropriate clustering without knowing the number of clusters, by automatically driving the extra seed points far away from the input data set. It does not have the 'dead units' problem. Simulation results, obtained under different conditions, are presented, showing that the performance of the new DPRCL algorithm is better compared with other competitive algorithms.
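The rival-penalised update that DPRCL builds on is compact enough to sketch: for each input, the winning seed point is pulled toward the sample while its closest rival is pushed away with a much smaller "de-learning" rate, so superfluous seed points drift out of the data. The learning rates, data and the absence of the dynamic penalty schedule are illustrative simplifications, not the paper's exact settings.

```python
import numpy as np

def rpcl_step(seeds, x, lr=0.05, delearn=0.002):
    d = np.linalg.norm(seeds - x, axis=1)
    winner = np.argmin(d)
    rival = np.argsort(d)[1]                          # second-closest seed point
    seeds[winner] += lr * (x - seeds[winner])         # attract the winner
    seeds[rival] -= delearn * (x - seeds[rival])      # penalise (repel) the rival
    return seeds

rng = np.random.default_rng(0)
data = np.vstack([rng.normal(c, 0.3, size=(200, 2)) for c in ((0, 0), (3, 3))])
seeds = rng.uniform(-1, 4, size=(5, 2))               # more seeds than true clusters
for epoch in range(20):
    for x in rng.permutation(data):
        seeds = rpcl_step(seeds, x)
print(seeds)                                          # extra seeds drift away from the data
```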
16

Ma, Yindi, Yanhai Li, and Longquan Yong. "Teaching–Learning-Based Optimization Algorithm with Stochastic Crossover Self-Learning and Blended Learning Model and Its Application." Mathematics 12, no. 10 (May 20, 2024): 1596. http://dx.doi.org/10.3390/math12101596.

Abstract:
This paper presents a novel variant of the teaching–learning-based optimization algorithm, termed BLTLBO, which draws inspiration from the blended learning model, specifically designed to tackle high-dimensional multimodal complex optimization problems. Firstly, the perturbation conditions in the “teaching” and “learning” stages of the original TLBO algorithm are interpreted geometrically, based on which the search capability of the TLBO is enhanced by adjusting the range of values of random numbers. Second, a strategic restructuring has been ingeniously implemented, dividing the algorithm into three distinct phases: pre-course self-study, classroom blended learning, and post-course consolidation; this structural reorganization and the random crossover strategy in the self-learning phase effectively enhance the global optimization capability of TLBO. To evaluate its performance, the BLTLBO algorithm was tested alongside seven distinguished variants of the TLBO algorithm on thirteen multimodal functions from the CEC2014 suite. Furthermore, two excellent high-dimensional optimization algorithms were added to the comparison algorithm and tested in high-dimensional mode on five scalable multimodal functions from the CEC2008 suite. The empirical results illustrate the BLTLBO algorithm’s superior efficacy in handling high-dimensional multimodal challenges. Finally, a high-dimensional portfolio optimization problem was successfully addressed using the BLTLBO algorithm, thereby validating the practicality and effectiveness of the proposed method.
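For orientation, the original TLBO moves that BLTLBO restructures can be sketched in a few lines: the "teaching" move pulls each learner toward the current best solution and away from the class mean, and the "learning" move lets pairs of learners exchange information. The objective, bounds and population settings below are illustrative, and this is plain TLBO rather than the BLTLBO variant.

```python
import numpy as np

def tlbo_iteration(pop, fitness, objective, lb, ub):
    rng = np.random.default_rng()
    best = pop[np.argmin(fitness)]
    mean = pop.mean(axis=0)
    for i in range(len(pop)):
        tf = rng.integers(1, 3)                                 # teaching factor, 1 or 2
        cand = np.clip(pop[i] + rng.random(pop.shape[1]) * (best - tf * mean), lb, ub)  # teacher phase
        if objective(cand) < fitness[i]:
            pop[i], fitness[i] = cand, objective(cand)
        j = rng.integers(len(pop))                              # learner phase: learn from a peer
        if j != i:
            direction = pop[i] - pop[j] if fitness[i] < fitness[j] else pop[j] - pop[i]
            cand = np.clip(pop[i] + rng.random(pop.shape[1]) * direction, lb, ub)
            if objective(cand) < fitness[i]:
                pop[i], fitness[i] = cand, objective(cand)
    return pop, fitness

sphere = lambda x: float(np.sum(x ** 2))
pop = np.random.uniform(-5, 5, size=(30, 10))
fit = np.array([sphere(x) for x in pop])
for _ in range(100):
    pop, fit = tlbo_iteration(pop, fit, sphere, -5, 5)
print(fit.min())
```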
17

Chen, Zhenpeng, Yuanjie Zheng, Xiaojie Li, Rong Luo, Weikuan Jia, Jian Lian, and Chengjiang Li. "Interactive Trimap Generation for Digital Matting Based on Single-Sample Learning." Electronics 9, no. 4 (April 17, 2020): 659. http://dx.doi.org/10.3390/electronics9040659.

Abstract:
Image matting refers to the task of estimating the foreground of images, which is an important problem in image processing. Recently, trimap generation has attracted considerable attention because designing a trimap for every image is labor-intensive. In this paper, a two-step algorithm is proposed to generate trimaps. To use the proposed algorithm, users must only provide some clicks (foreground clicks and background clicks), which are employed as the input to generate a binary mask. Since the one-shot learning technique has achieved remarkable progress on semantic segmentation, we extend this technique to perform the binary mask prediction task. The mask is further used to predict the trimap using image dilation. Extensive experiments were performed to evaluate the proposed algorithm. Experimental results show that the trimaps generated using the proposed algorithm are visually similar to the user-annotated ones. Compared with interactive matting algorithms, the proposed algorithm is less labor-intensive than trimap-based matting algorithms and achieves more accurate results than scribble-based matting algorithms.
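The second step, turning a binary mask into a trimap with morphological operations, can be illustrated directly: an "unknown" band is carved around the mask boundary by dilating and eroding the mask. The band width and the mask itself are illustrative assumptions rather than the paper's configuration.

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def mask_to_trimap(mask, band=5):
    """mask: boolean array, True = predicted foreground."""
    dilated = binary_dilation(mask, iterations=band)
    eroded = binary_erosion(mask, iterations=band)
    trimap = np.zeros(mask.shape, dtype=np.uint8)
    trimap[dilated] = 128          # uncertain band around the boundary
    trimap[eroded] = 255           # confident foreground
    return trimap                  # background stays 0

mask = np.zeros((64, 64), dtype=bool)
mask[20:44, 20:44] = True          # toy foreground mask
print(np.unique(mask_to_trimap(mask)))
```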
18

Zhang, Pinggai, Ling Wang, Jiaojie Du, Zixiang Fei, Song Ye, Minrui Fei, and Panos M. Pardalos. "Differential Human Learning Optimization Algorithm." Computational Intelligence and Neuroscience 2022 (April 30, 2022): 1–19. http://dx.doi.org/10.1155/2022/5699472.

Abstract:
Human Learning Optimization (HLO) is an efficient metaheuristic algorithm in which three learning operators, i.e., the random learning operator, the individual learning operator, and the social learning operator, are developed to search for optima by mimicking the learning behaviors of humans. In fact, people not only learn from global optimization but also learn from the best solution of other individuals in the real life, and the operators of Differential Evolution are updated based on the optima of other individuals. Inspired by these facts, this paper proposes two novel differential human learning optimization algorithms (DEHLOs), into which the Differential Evolution strategy is introduced to enhance the optimization ability of the algorithm. And the two optimization algorithms, based on improving the HLO from individual and population, are named DEHLO1 and DEHLO2, respectively. The multidimensional knapsack problems are adopted as benchmark problems to validate the performance of DEHLOs, and the results are compared with the standard HLO and Modified Binary Differential Evolution (MBDE) as well as other state-of-the-art metaheuristics. The experimental results demonstrate that the developed DEHLOs significantly outperform other algorithms and the DEHLO2 achieves the best overall performance on various problems.
19

Ouyang, Chengtian, Donglin Zhu, and Fengqi Wang. "A Learning Sparrow Search Algorithm." Computational Intelligence and Neuroscience 2021 (August 6, 2021): 1–23. http://dx.doi.org/10.1155/2021/3946958.

Abstract:
This paper solves the drawbacks of traditional intelligent optimization algorithms relying on 0 and has good results on CEC 2017 and benchmark functions, which effectively improve the problem of algorithms falling into local optimality. The sparrow search algorithm (SSA) has significant optimization performance, but still has the problem of large randomness and is easy to fall into the local optimum. For this reason, this paper proposes a learning sparrow search algorithm, which introduces the lens reverse learning strategy in the discoverer stage. The random reverse learning strategy increases the diversity of the population and makes the search method more flexible. In the follower stage, an improved sine and cosine guidance mechanism is introduced to make the search method of the discoverer more detailed. Finally, a differential-based local search is proposed. The strategy is used to update the optimal solution obtained each time to prevent the omission of high-quality solutions in the search process. LSSA is compared with CSSA, ISSA, SSA, BSO, GWO, and PSO in 12 benchmark functions to verify the feasibility of the algorithm. Furthermore, to further verify the effectiveness and practicability of the algorithm, LSSA is compared with MSSCS, CSsin, and FA-CL in CEC 2017 test function. The simulation results show that LSSA has good universality. Finally, the practicability of LSSA is verified by robot path planning, and LSSA has good stability and safety in path planning.
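The lens-based reverse (opposition-based) learning strategy mentioned above maps each candidate to a point on the opposite side of the interval midpoint, with a scale factor controlling how far. The formula below is the commonly used lens-imaging form and the objective is a stand-in; the paper's exact variant may differ.

```python
import numpy as np

def lens_opposition(x, a, b, k=2.0):
    # Commonly used lens-imaging reverse-learning mapping on the interval [a, b].
    return (a + b) / 2.0 + (a + b) / (2.0 * k) - x / k

rng = np.random.default_rng(1)
pop = rng.uniform(-10, 10, size=(8, 3))
mirrored = lens_opposition(pop, -10, 10)

# Keep whichever of the original or mirrored candidate scores better.
sphere = lambda x: np.sum(x ** 2, axis=1)
keep = np.where((sphere(mirrored) < sphere(pop))[:, None], mirrored, pop)
print(keep)
```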
20

Du, Bingqian, Zhiyi Huang, and Chuan Wu. "Adversarial Deep Learning for Online Resource Allocation." ACM Transactions on Modeling and Performance Evaluation of Computing Systems 6, no. 4 (December 31, 2021): 1–25. http://dx.doi.org/10.1145/3494526.

Abstract:
Online algorithms are an important branch in algorithm design. Designing online algorithms with a bounded competitive ratio (in terms of worst-case performance) can be hard and usually relies on problem-specific assumptions. Inspired by adversarial training from Generative Adversarial Net and the fact that the competitive ratio of an online algorithm is based on worst-case input, we adopt deep neural networks (NNs) to learn an online algorithm for a resource allocation and pricing problem from scratch, with the goal that the performance gap between offline optimum and the learned online algorithm can be minimized for worst-case input. Specifically, we leverage two NNs as the algorithm and the adversary, respectively, and let them play a zero sum game, with the adversary being responsible for generating worst-case input while the algorithm learns the best strategy based on the input provided by the adversary. To ensure better convergence of the algorithm network (to the desired online algorithm), we propose a novel per-round update method to handle sequential decision making to break complex dependency among different rounds so that update can be done for every possible action instead of only sampled actions. To the best of our knowledge, our work is the first using deep NNs to design an online algorithm from the perspective of worst-case performance guarantee. Empirical studies show that our updating methods ensure convergence to Nash equilibrium and the learned algorithm outperforms state-of-the-art online algorithms under various settings.
21

Yasuda, Muneki, and Kazuyuki Tanaka. "Approximate Learning Algorithm in Boltzmann Machines." Neural Computation 21, no. 11 (November 2009): 3130–78. http://dx.doi.org/10.1162/neco.2009.08-08-844.

Abstract:
Boltzmann machines can be regarded as Markov random fields. For binary cases, they are equivalent to the Ising spin model in statistical mechanics. Learning systems in Boltzmann machines are one of the NP-hard problems. Thus, in general we have to use approximate methods to construct practical learning algorithms in this context. In this letter, we propose new and practical learning algorithms for Boltzmann machines by using the belief propagation algorithm and the linear response approximation, which are often referred as advanced mean field methods. Finally, we show the validity of our algorithm using numerical experiments.
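For context, the approximation target in such methods is the standard maximum-likelihood learning rule for Boltzmann machine weights, in which the model expectation is the intractable term that belief propagation and the linear response approximation are used to estimate:

\[
\Delta w_{ij} \propto \langle s_i s_j \rangle_{\mathrm{data}} - \langle s_i s_j \rangle_{\mathrm{model}}
\]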
22

Meng, Xiyan, and Fang Zhuang. "A New Boosting Algorithm for Shrinkage Curve Learning." Mathematical Problems in Engineering 2022 (April 15, 2022): 1–14. http://dx.doi.org/10.1155/2022/6339758.

Abstract:
To a large extent, classical boosting denoising algorithms can improve denoising performance. However, these algorithms can only work well when the denoisers are linear. In this paper, we propose a boosting algorithm that can be used for a nonlinear denoiser. We further implement the proposed algorithm into a shrinkage curve learning denoising algorithm, which is a nonlinear denoiser. Concurrently, the convergence of the proposed algorithm is proved. Experimental results indicate that the proposed algorithm is effective and the dependence of the shrinkage curve learning denoising algorithm on training samples has improved. In addition, the proposed algorithm can achieve better performance in terms of visual quality and peak signal-to-noise ratio (PSNR).
23

Ming, Fangpeng, Liang Tan, and Xiaofan Cheng. "Hybrid Recommendation Scheme Based on Deep Learning." Mathematical Problems in Engineering 2021 (December 22, 2021): 1–12. http://dx.doi.org/10.1155/2021/6120068.

Abstract:
Big data has been developed for nearly a decade, and the information data on the network is exploding. Facing the complex and massive data, it is difficult for people to get the demanded information quickly, and the recommendation algorithm with its characteristics becomes one of the important methods to solve the massive data overload problem at this stage. In particular, the rise of the e-commerce industry has promoted the development of recommendation algorithms. Traditional, single recommendation algorithms often have problems such as cold start, data sparsity, and long-tail items. The hybrid recommendation algorithms at this stage can effectively avoid some of the drawbacks caused by a single algorithm. To address the current problems, this paper makes up for the shortcomings of a single collaborative model by proposing a hybrid recommendation algorithm based on deep learning IA-CN. The algorithm first uses an integrated strategy to fuse user-based and item-based collaborative filtering algorithms to generalize and classify the output results. Then deeper and more abstract nonlinear interactions between users and items are captured by improved deep learning techniques. Finally, we designed experiments to validate the algorithm. The experiments are compared with the benchmark algorithm on (Amazon item rating dataset), and the results show that the IA-CN algorithm proposed in this paper has better performance in rating prediction on the test dataset.
24

Shah, Kulin, and Naresh Manwani. "Online Active Learning of Reject Option Classifiers." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 5652–59. http://dx.doi.org/10.1609/aaai.v34i04.6019.

Abstract:
Active learning is an important technique to reduce the number of labeled examples in supervised learning. Active learning for binary classification has been well addressed in machine learning. However, active learning of the reject option classifier remains unaddressed. In this paper, we propose novel algorithms for active learning of reject option classifiers. We develop an active learning algorithm using the double ramp loss function. We provide mistake bounds for this algorithm. We also propose a new loss function called the double sigmoid loss function for the reject option and a corresponding active learning algorithm. We offer a convergence guarantee for this algorithm. We provide extensive experimental results to show the effectiveness of the proposed algorithms. The proposed algorithms efficiently reduce the number of labeled examples required.
25

Wang, Yaoying, Shudong Sun, and Zhiqiang Cai. "The Research of Short-term Electric Load Forecasting based on Machine Learning Algorithm." Advances in Engineering Technology Research 4, no. 1 (March 22, 2023): 334. http://dx.doi.org/10.56028/aetr.4.1.334.2023.

Abstract:
The ability to forecast electric load has become key to electric power planning and dispatching, and improving the accuracy of electric load forecasting has become a hot research topic in recent years. Traditional electric load forecasting algorithms are mainly statistical learning algorithms. Electric load forecasting algorithms based on traditional forecasting methods, machine learning algorithms and neural network algorithms have greatly improved convergence, approximation and accuracy. In recent years, electric load forecasting algorithms based on deep learning, such as RNN, GRU, LSTM, DBN and TCN, as well as combination algorithms based on deep learning, have greatly reduced the error and improved measurement accuracy and robustness. This paper introduces the classical deep learning algorithms used in electric load forecasting. The main purpose is to explore the error and goodness of fit of models established by different algorithms, and to provide a reference for the selection of forecasting algorithms for various types of electric load forecasting.
26

MØLLER, MARTIN. "SUPERVISED LEARNING ON LARGE REDUNDANT TRAINING SETS." International Journal of Neural Systems 04, no. 01 (March 1993): 15–25. http://dx.doi.org/10.1142/s0129065793000031.

Abstract:
Efficient supervised learning on large redundant training sets requires algorithms where the amount of computation involved in preparing each weight update is independent of the training set size. Off-line algorithms like the standard conjugate gradient algorithms do not have this property while on-line algorithms like the stochastic backpropagation algorithm do. A new algorithm combining the good properties of off-line and on-line algorithms is introduced.
27

Qian, Yufeng. "Exploration of machine algorithms based on deep learning model and feature extraction." Mathematical Biosciences and Engineering 18, no. 6 (2021): 7602–18. http://dx.doi.org/10.3934/mbe.2021376.

Abstract:
The study expects to solve the problems of insufficient labeling, high input dimension, and inconsistent task input distribution in traditional lifelong machine learning. A new deep learning model is proposed by combining feature representation with a deep learning algorithm. First, building on the theoretical basis of the deep learning model and feature extraction, the study analyzes several representative machine learning algorithms and compares the performance of the optimized deep learning model with other algorithms in a practical application. By explaining the machine learning system, the study introduces two typical algorithms in machine learning, namely ELLA (Efficient Lifelong Learning Algorithm) and HLLA (Hierarchical Lifelong Learning Algorithm). Second, the flow of the genetic algorithm is described and combined with mutual information feature extraction to form the composite HLLA algorithm. Finally, the deep learning model is optimized and a deep learning model based on the HLLA algorithm is constructed. When K = 1200, the classification error rate reaches 0.63%, which reflects the excellent performance of the unsupervised database algorithm based on this model. Adding the feature model to the updating iteration process of lifelong learning deepens the knowledge base ability of lifelong machine learning, which is of great value in reducing the number of labels required for subsequent model learning and improving the efficiency of lifelong learning.
28

Wu, Zhong Yong, and Li Li Gan. "Improved Isomap Algorithm Based on Supervision." Applied Mechanics and Materials 427-429 (September 2013): 1896–99. http://dx.doi.org/10.4028/www.scientific.net/amm.427-429.1896.

Abstract:
This paper focuses on the Isomap isometric embedding algorithm and proposes an improved, supervised isometric embedding algorithm (SIsomap). As a supervised manifold learning algorithm, it introduces adjustable parameters so that class information in classification problems can be used effectively, giving manifold learning algorithms a stronger effect on classification problems. Finally, a series of experiments fully illustrates the effectiveness of the proposed improvement: the proposed supervised manifold learning algorithm can more effectively enhance manifold learning algorithms for classification problems.
29

Yue, Yiqun, Yang Zhou, Lijuan Xu, and Dawei Zhao. "Optimal Defense Strategy Selection Algorithm Based on Reinforcement Learning and Opposition-Based Learning." Applied Sciences 12, no. 19 (September 24, 2022): 9594. http://dx.doi.org/10.3390/app12199594.

Abstract:
Industrial control systems (ICS) are facing increasing cybersecurity issues, leading to enormous threats and risks to numerous industrial infrastructures. In order to resist such threats and risks, it is particularly important to scientifically construct security strategies before an attack occurs. The characteristics of evolutionary algorithms are very suitable for finding optimal strategies. However, the evolutionary algorithms in common use, such as PSO, DE and GA, have relatively large limitations in convergence accuracy and convergence speed. Therefore, this paper proposes a hybrid-strategy differential evolution algorithm based on reinforcement learning and opposition-based learning to construct the optimal security strategy, which greatly alleviates these common problems of evolutionary algorithms. This paper first scans the vulnerabilities of the water distribution system and generates an attack graph. Then, in order to solve the balance problem of cost and benefit, a cost–benefit-based objective function is constructed. Finally, the optimal security strategy set is constructed using the algorithm proposed in this paper. Through experiments, it is found that in the problem of security strategy construction, the algorithm in this paper has obvious advantages in convergence speed and convergence accuracy compared with some other intelligent strategy selection algorithms.
30

Yang, Junfang, Yi Ma, Yabin Hu, Zongchen Jiang, Jie Zhang, Jianhua Wan, and Zhongwei Li. "Decision Fusion of Deep Learning and Shallow Learning for Marine Oil Spill Detection." Remote Sensing 14, no. 3 (January 30, 2022): 666. http://dx.doi.org/10.3390/rs14030666.

Abstract:
Marine oil spills are an emergency of great harm and have become a hot topic in marine environmental monitoring research. Optical remote sensing is an important means to monitor marine oil spills. Clouds, weather, and light conditions control the amount of available data, which often limits feature characterization with a single classifier and therefore makes accurate monitoring of marine oil spills difficult. In this paper, we develop a decision fusion algorithm to integrate deep learning methods and shallow learning methods based on multi-scale features for improving oil spill detection accuracy in the case of limited samples. Based on the multi-scale features after wavelet transform, two deep learning methods and two classical shallow learning algorithms are used to extract oil slick information from hyperspectral oil spill images. The decision fusion algorithm based on fuzzy membership degree is introduced to fuse multi-source oil spill information. The research shows that oil spill detection accuracy using the decision fusion algorithm is higher than that of the single detection algorithms. It is worth noting that oil spill detection accuracy is affected by different scale features. The decision fusion algorithm under the first-level scale features can further improve the accuracy of oil spill detection. The overall classification accuracy of the proposed method is 91.93%, which is 2.03%, 2.15%, 1.32%, and 0.43% higher than that of the SVM, DBN, 1D-CNN, and MRF-CNN algorithms, respectively.
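Decision-level fusion of the kind described above can be sketched simply: each detector outputs per-pixel class-membership scores, the scores are treated as fuzzy membership degrees, and the fused label maximises their weighted combination. The detector outputs and weights below are random stand-ins, not the paper's fuzzy-membership construction.

```python
import numpy as np

def fuse(memberships, weights):
    """memberships: list of (H, W, n_classes) score maps in [0, 1]."""
    stacked = np.stack(memberships)                      # (n_detectors, H, W, C)
    w = np.asarray(weights, dtype=float)[:, None, None, None]
    fused = (w * stacked).sum(axis=0) / w.sum()          # weighted membership per class
    return fused.argmax(axis=-1)                         # fused class label per pixel

rng = np.random.default_rng(0)
cnn_scores = rng.random((4, 4, 2))     # stand-in for a deep detector's soft output
svm_scores = rng.random((4, 4, 2))     # stand-in for a shallow detector's soft output
print(fuse([cnn_scores, svm_scores], weights=[0.6, 0.4]))
```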
31

Corazza, Jan, Ivan Gavran, and Daniel Neider. "Reinforcement Learning with Stochastic Reward Machines." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (June 28, 2022): 6429–36. http://dx.doi.org/10.1609/aaai.v36i6.20594.

Abstract:
Reward machines are an established tool for dealing with reinforcement learning problems in which rewards are sparse and depend on complex sequences of actions. However, existing algorithms for learning reward machines assume an overly idealized setting where rewards have to be free of noise. To overcome this practical limitation, we introduce a novel type of reward machines, called stochastic reward machines, and an algorithm for learning them. Our algorithm, based on constraint solving, learns minimal stochastic reward machines from the explorations of a reinforcement learning agent. This algorithm can easily be paired with existing reinforcement learning algorithms for reward machines and guarantees to converge to an optimal policy in the limit. We demonstrate the effectiveness of our algorithm in two case studies and show that it outperforms both existing methods and a naive approach for handling noisy reward functions.
32

Ibrahim, Dr Abdul-Wahab Sami, and Dr Baidaa Abdul khaliq Atya. "Detection of Diseases in Rice Leaf Using Deep Learning and Machine Learning Techniques." Webology 19, no. 1 (January 20, 2022): 1493–503. http://dx.doi.org/10.14704/web/v19i1/web19100.

Abstract:
Plant diseases have a negative impact on the agricultural sector. The diseases lower productivity and cause huge losses to farmers. For the betterment of agriculture, it is essential to detect diseases in plants to protect the crop yield, while it is also important to reduce the use of pesticides to improve the quality of the yield. Image processing and data mining algorithms together help in the analysis and detection of diseases. Using these techniques, disease detection can be performed on rice leaves. In this research, image processing techniques are used to extract features from the leaf images. Further, for the classification of diseases, various machine learning algorithms such as random forest, J48 and support vector machine are used, and the results are compared among the different machine learning algorithms. After model evaluation, classification accuracy is verified using the n-fold cross-validation technique.
33

Wu, Xidong, Feihu Huang, Zhengmian Hu, and Heng Huang. "Faster Adaptive Federated Learning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (June 26, 2023): 10379–87. http://dx.doi.org/10.1609/aaai.v37i9.26235.

Abstract:
Federated learning has attracted increasing attention with the emergence of distributed data. While extensive federated learning algorithms have been proposed for the non-convex distributed problem, the federated learning in practice still faces numerous challenges, such as the large training iterations to converge since the sizes of models and datasets keep increasing, and the lack of adaptivity by SGD-based model updates. Meanwhile, the study of adaptive methods in federated learning is scarce and existing works either lack a complete theoretical convergence guarantee or have slow sample complexity. In this paper, we propose an efficient adaptive algorithm (i.e., FAFED) based on the momentum-based variance reduced technique in cross-silo FL. We first explore how to design the adaptive algorithm in the FL setting. By providing a counter-example, we prove that a simple combination of FL and adaptive methods could lead to divergence. More importantly, we provide a convergence analysis for our method and prove that our algorithm is the first adaptive FL algorithm to reach the best-known sample complexity of O(ε⁻³) and O(ε⁻²) communication rounds to find an ε-stationary point without large batches. The experimental results on the language modeling task and image classification task with heterogeneous data demonstrate the efficiency of our algorithms.
34

Feng, Zixin, Teligeng Yun, Yu Zhou, Ruirui Zheng, and Jianjun He. "Kernel Geometric Mean Metric Learning." Applied Sciences 13, no. 21 (November 6, 2023): 12047. http://dx.doi.org/10.3390/app132112047.

Abstract:
Geometric mean metric learning (GMML) algorithm is a novel metric learning approach proposed recently. It has many advantages such as unconstrained convex objective function, closed form solution, faster computational speed, and interpretability over other existing metric learning technologies. However, addressing the nonlinear problem is not effective enough. The kernel method is an effective method to solve nonlinear problems. Therefore, a kernel geometric mean metric learning (KGMML) algorithm is proposed. The basic idea is to transform the input space into a high-dimensional feature space through nonlinear transformation, and use the integral representation of the weighted geometric mean and the Woodbury matrix identity in new feature space to generalize the analytical solution obtained in the GMML algorithm as a form represented by a kernel matrix, and then the KGMML algorithm is obtained through operations. Experimental results on 15 datasets show that the proposed algorithm can effectively improve the accuracy of the GMML algorithm and other metric algorithms.
35

Sporea, Ioana, and André Grüning. "Supervised Learning in Multilayer Spiking Neural Networks." Neural Computation 25, no. 2 (February 2013): 473–509. http://dx.doi.org/10.1162/neco_a_00396.

Abstract:
We introduce a supervised learning algorithm for multilayer spiking neural networks. The algorithm overcomes a limitation of existing learning algorithms: it can be applied to neurons firing multiple spikes in artificial neural networks with hidden layers. It can also, in principle, be used with any linearizable neuron model and allows different coding schemes of spike train patterns. The algorithm is applied successfully to classic linearly nonseparable benchmarks such as the XOR problem and the Iris data set, as well as to more complex classification and mapping problems. The algorithm has been successfully tested in the presence of noise, requires smaller networks than reservoir computing, and results in faster convergence than existing algorithms for similar tasks such as SpikeProp.
36

Fern, A., R. Givan, and J. M. Siskind. "Specific-to-General Learning for Temporal Events with Application to Learning Event Definitions from Video." Journal of Artificial Intelligence Research 17 (December 1, 2002): 379–449. http://dx.doi.org/10.1613/jair.1050.

Abstract:
We develop, analyze, and evaluate a novel, supervised, specific-to-general learner for a simple temporal logic and use the resulting algorithm to learn visual event definitions from video sequences. First, we introduce a simple, propositional, temporal, event-description language called AMA that is sufficiently expressive to represent many events yet sufficiently restrictive to support learning. We then give algorithms, along with lower and upper complexity bounds, for the subsumption and generalization problems for AMA formulas. We present a positive-examples--only specific-to-general learning method based on these algorithms. We also present a polynomial-time--computable ``syntactic'' subsumption test that implies semantic subsumption without being equivalent to it. A generalization algorithm based on syntactic subsumption can be used in place of semantic generalization to improve the asymptotic complexity of the resulting learning algorithm. Finally, we apply this algorithm to the task of learning relational event definitions from video and show that it yields definitions that are competitive with hand-coded ones.
37

Gao, Wen, Rong Yu, Zhaolei Yu, Zhuang Ma, and Md Masum. "Auxiliary Diagnosis Method of Chest Pain Based on Machine Learning." International Journal of Engineering and Technology 14, no. 4 (November 2022): 79–83. http://dx.doi.org/10.7763/ijet.2022.v14.1207.

Abstract:
Chest pain arises suddenly and its pathological causes are complex and varied, whether fatal or non-fatal, so improving diagnostic accuracy is extremely important in prehospital and hospital emergency systems. Therefore, we propose a method that introduces the decision tree, support vector machine, and KNN algorithms from machine learning into the auxiliary diagnosis of chest pain. We first select the better-performing algorithms among the decision tree, support vector machine, and KNN algorithms; we then compare the classification performance of the CART algorithm, the support vector machine using a Gaussian kernel function, and the K-nearest neighbor algorithm using the Euclidean distance to select the best; finally, through analysis of the experimental results, the support vector machine algorithm with a Gaussian kernel function is selected. Its detection time and diagnostic accuracy are the best among the three algorithms, and it can assist medical staff in the emergency system to carry out targeted chest pain diagnosis.
38

Yang, Feidiao, Jiaqing Jiang, Jialin Zhang, and Xiaoming Sun. "Revisiting Online Quantum State Learning." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 6607–14. http://dx.doi.org/10.1609/aaai.v34i04.6136.

Abstract:
In this paper, we study the online quantum state learning problem which is recently proposed by Aaronson et al. (2018). In this problem, the learning algorithm sequentially predicts quantum states based on observed measurements and losses and the goal is to minimize the regret. In the previous work, the existing algorithms may output mixed quantum states. However, in many scenarios, the prediction of a pure quantum state is required. In this paper, we first propose a Follow-the-Perturbed-Leader (FTPL) algorithm that can guarantee to predict pure quantum states. Theoretical analysis shows that our algorithm can achieve an O(√T) expected regret under some reasonable settings. In the case that the pure state prediction is not mandatory, we propose another deterministic learning algorithm which is simpler and more efficient. The algorithm is based on the online gradient descent (OGD) method and can also achieve an O(√T) regret bound. The main technical contribution of this result is an algorithm of projecting an arbitrary Hermitian matrix onto the set of density matrices with respect to the Frobenius norm. We think this subroutine is of independent interest and can be widely used in many other problems in the quantum computing area. In addition to the theoretical analysis, we evaluate the algorithms with a series of simulation experiments. The experimental results show that our FTPL method and OGD method outperform the existing RFTL approach proposed by Aaronson et al. (2018) in almost all settings. In the implementation of the RFTL approach, we give a closed-form solution to the algorithm. This provides an efficient, accurate, and completely executable solution to the RFTL method.
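The projection subroutine highlighted above, the Frobenius-norm projection of a Hermitian matrix onto the set of density matrices (positive semidefinite, unit trace), follows a textbook recipe: eigendecompose and project the eigenvalues onto the probability simplex. The sketch below is that standard construction, not the authors' closed-form solution or code.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of a real vector onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / np.arange(1, len(v) + 1) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)

def project_to_density_matrix(H):
    H = (H + H.conj().T) / 2.0                 # symmetrise to guard against noise
    eigvals, eigvecs = np.linalg.eigh(H)
    lam = project_simplex(eigvals)             # eigenvalues become a probability vector
    return (eigvecs * lam) @ eigvecs.conj().T  # reassemble V diag(lam) V^H

A = np.random.randn(4, 4) + 1j * np.random.randn(4, 4)
rho = project_to_density_matrix(A)
print(np.trace(rho).real, np.linalg.eigvalsh(rho).min())   # trace 1, eigenvalues >= 0
```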
39

Khan, Dr Rafiqul Zaman, and Haider Allamy. "Training Algorithms for Supervised Machine Learning: Comparative Study." INTERNATIONAL JOURNAL OF MANAGEMENT & INFORMATION TECHNOLOGY 4, no. 3 (July 25, 2013): 354–60. http://dx.doi.org/10.24297/ijmit.v4i3.773.

Abstract:
Supervised machine learning is an important task for training artificial neural networks; therefore, a demand for selected supervised learning algorithms such as the back propagation algorithm, the decision tree learning algorithm and the perceptron algorithm has arisen in order to perform the learning stage of artificial neural networks. In this paper, a comparative study is presented for the aforementioned algorithms to evaluate their performance within a range of specific parameters such as speed of learning, overfitting avoidance, and accuracy. Besides these parameters, we include their benefits and limitations to unveil their hidden features and provide more details regarding their performance. We found the decision tree algorithm to be the best compared with the other algorithms, as it can solve complex problems with remarkable speed.
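Since the perceptron is one of the algorithms compared, its learning rule is worth showing in miniature: misclassified points pull the weight vector toward themselves until the data are separated. The toy data and learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)          # linearly separable toy labels

w, b, lr = np.zeros(2), 0.0, 0.1
for epoch in range(50):
    errors = 0
    for xi, yi in zip(X, y):
        if yi * (np.dot(w, xi) + b) <= 0:            # misclassified point
            w += lr * yi * xi                        # perceptron update rule
            b += lr * yi
            errors += 1
    if errors == 0:                                  # converged on separable data
        break
print(w, b, epoch)
```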
40

Haneef, Farah, and Muddassar A. Sindhu. "DLIQ: A Deterministic Finite Automaton Learning Algorithm through Inverse Queries." Information Technology and Control 51, no. 4 (December 12, 2022): 611–24. http://dx.doi.org/10.5755/j01.itc.51.4.31394.

Abstract:
Automaton learning has attained renewed interest in many areas of software engineering, including formal verification, software testing and model inference. An automaton learning algorithm typically learns the regular language of a DFA with the help of queries. These queries are posed by the learner (learning algorithm) to a Minimally Adequate Teacher (MAT). The MAT can generally answer two types of queries asked by the learning algorithm: membership queries and equivalence queries. Learning algorithms can be categorized into two broad categories: incremental and complete learning algorithms. Likewise, they can be designed for 1-bit learning or k-bit learning. Existing automaton learning algorithms have polynomial (at least cubic) time complexity in the presence of a MAT. Therefore, these algorithms sometimes even fail to learn large, complex software systems. In this research work, we reduce the complexity of Deterministic Finite Automaton (DFA) learning to a lower bound (from cubic to quadratic form). For this, we introduce an efficient complete DFA learning algorithm through inverse queries (DLIQ), based on the concept of inverse queries introduced by John Hopcroft for state minimization of a DFA. The DLIQ algorithm takes O(|Ps||F|+|Σ|N) complexity in the presence of a MAT which is also equipped to answer inverse queries. We give a theoretical analysis of the proposed algorithm along with a proof of correctness and termination of the DLIQ algorithm. We also compare the performance of DLIQ with the ID algorithm by implementing an evaluation framework. Our results show that DLIQ is more efficient than the ID algorithm in terms of time complexity.
APA, Harvard, Vancouver, ISO, and other styles
41

Andrecut, M., and M. K. Ali. "Deep-Sarsa: A Reinforcement Learning Algorithm for Autonomous Navigation." International Journal of Modern Physics C 12, no. 10 (December 2001): 1513–23. http://dx.doi.org/10.1142/s0129183101002851.

Full text
Abstract:
In this paper we discuss the application of reinforcement learning algorithms to the problem of autonomous robot navigation. We show that autonomous navigation using standard delayed reinforcement learning algorithms is an ill-posed problem, and we present a more efficient algorithm with greatly improved convergence speed. The proposed algorithm (Deep-Sarsa) is based on a combination of Depth-First Search (a graph search algorithm) and Sarsa (a delayed reinforcement learning algorithm).
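For reference, the Sarsa component combined here is the standard tabular update Q(s,a) += alpha*(r + gamma*Q(s',a') - Q(s,a)); the following is a generic sketch of one Sarsa episode, not the paper's Deep-Sarsa implementation, and the environment interface (reset, step, actions) is an assumption.

import random
from collections import defaultdict

def sarsa_episode(env, Q, alpha=0.1, gamma=0.95, epsilon=0.1):
    # One episode of tabular Sarsa with an epsilon-greedy policy.
    # Usage: Q = defaultdict(float); sarsa_episode(env, Q)
    def policy(s):
        if random.random() < epsilon:
            return random.choice(env.actions(s))
        return max(env.actions(s), key=lambda a: Q[(s, a)])

    s = env.reset()
    a = policy(s)
    done = False
    while not done:
        s_next, r, done = env.step(a)              # assumed to return (next state, reward, done)
        a_next = policy(s_next) if not done else None
        target = r + (gamma * Q[(s_next, a_next)] if not done else 0.0)
        Q[(s, a)] += alpha * (target - Q[(s, a)])  # on-policy temporal-difference update
        s, a = s_next, a_next
    return Q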
APA, Harvard, Vancouver, ISO, and other styles
42

Chen, Dongming, Mingshuo Nie, Jiarui Yan, Jiangnan Meng, and Dongqi Wang. "Network Representation Learning Algorithm Based on Community Folding." Journal of Internet Technology (網際網路技術學刊) 23, no. 2 (March 2022): 415–23. http://dx.doi.org/10.53106/160792642022032302020.

Full text
Abstract:
Network representation learning is a machine learning method that maps network topology and node information into a low-dimensional vector space, which can reduce the time and space complexity of downstream network data mining tasks such as node classification and graph clustering. This paper addresses the problem that neighborhood-information-based network representation learning algorithms ignore the global topological information of the network. We propose the Network Representation Learning Algorithm Based on Community Folding (CF-NRL), which considers the influence of community structure on the global topology of the network. Each community of the target network is regarded as a folding unit; the same network representation learning algorithm is used to learn vector representations of the nodes on both the folded network and the target network, and the two representations are then concatenated to obtain the final vector representation of each node. Experimental results show the excellent performance of the proposed algorithm.
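A rough sketch of the folding idea described above follows: detect communities, collapse each into a single node to form a folded graph, embed both graphs with the same method, and concatenate each node's vector with its community's folded-node vector. The community detector, the toy spectral embedder, and all function names are assumptions for illustration, not CF-NRL's actual pipeline.

import numpy as np
import networkx as nx

def spectral_embed(G, dim=8):
    # Toy stand-in for a representation learner: leading eigenvectors of the adjacency matrix.
    nodes = list(G.nodes())
    A = nx.to_numpy_array(G, nodelist=nodes)
    _, V = np.linalg.eigh(A)
    emb = V[:, -dim:]
    return {n: emb[i] for i, n in enumerate(nodes)}

def community_folding_embeddings(G, dim=8):
    communities = list(nx.algorithms.community.greedy_modularity_communities(G))
    node_to_comm = {n: i for i, c in enumerate(communities) for n in c}
    # Folded graph: one node per community, an edge wherever two communities are linked.
    F = nx.Graph()
    F.add_nodes_from(range(len(communities)))
    for u, v in G.edges():
        cu, cv = node_to_comm[u], node_to_comm[v]
        if cu != cv:
            F.add_edge(cu, cv)
    node_emb = spectral_embed(G, dim)
    fold_emb = spectral_embed(F, min(dim, len(communities)))
    # Final representation: node vector concatenated with its community's folded vector.
    return {n: np.concatenate([node_emb[n], fold_emb[node_to_comm[n]]]) for n in G.nodes()}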
APA, Harvard, Vancouver, ISO, and other styles
43

Damodharan, P., K. Veena, and N. Suguna. "Optimized Intrusion Detection System using Deep Learning Algorithm." International Journal of Trend in Scientific Research and Development 3, no. 2 (February 28, 2019): 528–34. http://dx.doi.org/10.31142/ijtsrd21447.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Kruglov, Artem V. "The Unsupervised Learning Algorithm for Detecting Ellipsoid Objects." International Journal of Machine Learning and Computing 9, no. 3 (June 2019): 255–60. http://dx.doi.org/10.18178/ijmlc.2019.9.3.795.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Rathi Vijay Gadicha, Sarita. "Cardiovascular Disease (CVD) Prediction using Deep Learning Algorithm." International Journal of Science and Research (IJSR) 12, no. 6 (June 5, 2023): 780–86. http://dx.doi.org/10.21275/sr23603180631.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Dubey, Harshita. "High Security Machine Learning Algorithm for Industrial IoT." International Journal of Science and Research (IJSR) 12, no. 4 (April 5, 2023): 1794–99. http://dx.doi.org/10.21275/sr23428074556.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Christakis, Nicholas, and Dimitris Drikakis. "Unsupervised Learning of Particles Dispersion." Mathematics 11, no. 17 (August 23, 2023): 3637. http://dx.doi.org/10.3390/math11173637.

Full text
Abstract:
This paper discusses the use of unsupervised learning for classifying particle-like dispersion. The problem is relevant to various applications, including virus transmission and atmospheric pollution. The Reduce Uncertainty and Increase Confidence (RUN-ICON) unsupervised learning algorithm is applied to particle spread classification. The algorithm classifies the particles with higher confidence and lower uncertainty than other algorithms, and its efficiency remains high even when noise is added to the system. Applying unsupervised learning in conjunction with the RUN-ICON algorithm provides a tool for studying particle dynamics and their impact on air quality, health, and climate.
APA, Harvard, Vancouver, ISO, and other styles
48

Ma, Chunzhu. "Comparison of machine learning algorithms over prediction of Titanic database." Applied and Computational Engineering 5, no. 1 (June 14, 2023): 340–44. http://dx.doi.org/10.54254/2755-2721/5/20230593.

Full text
Abstract:
With the popularization of artificial intelligence, machine learning algorithms play an important role in many fields. For data prediction, it is important to know which machine learning algorithm best fits a specific dataset. Comparing machine learning algorithms helps identify their advantages and disadvantages, and thus saves time and money during the working process. This paper compares the K-nearest neighbours, Random Forest, and Support Vector Machine algorithms, focusing on their performance on survival prediction for the Titanic dataset. The results show that algorithms with more adjustable hyperparameters can produce more accurate results; however, finding suitable hyperparameters is time-consuming. Future work may explore ensemble algorithms for better results.
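A comparison of the three classifiers of this kind can be sketched with scikit-learn as follows; the file name "titanic.csv", the chosen feature columns, and the hyperparameter values are assumptions, not the paper's exact setup.

import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

# Assumed local copy of the Titanic dataset with these columns.
df = pd.read_csv("titanic.csv")
df["Sex"] = df["Sex"].map({"male": 0, "female": 1})
df = df[["Survived", "Pclass", "Sex", "Age", "Fare"]].dropna()
X, y = df.drop(columns="Survived"), df["Survived"]

models = {
    "KNN": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM (RBF)": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)      # 5-fold cross-validated accuracy
    print(f"{name}: mean accuracy = {scores.mean():.3f}")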
APA, Harvard, Vancouver, ISO, and other styles
49

Jabar, Abdul Aziz, and Andi Sofyan Anas. "Aplikasi Belajar Interaktif Algoritma Sorting Berbasis Desktop." JTIM : Jurnal Teknologi Informasi dan Multimedia 1, no. 1 (May 10, 2019): 23–29. http://dx.doi.org/10.35746/jtim.v1i1.10.

Full text
Abstract:
Building software requires many algorithms; one of the algorithms commonly used is the sorting algorithm. An algorithm is a systematically arranged, logical sequence of steps for solving a problem. Sorting algorithms are generally defined as the process of rearranging a series of objects into a certain order, and the purpose of sorting is to facilitate searching. A beginner who wants to learn to build software must go through stages of learning algorithms, and the learning process usually involves obstacles, such as not fully understanding how an algorithm works (especially sorting algorithms) or learning media that cannot maximize knowledge of sorting algorithms. Therefore, a learning-aid application is needed to help learners understand sorting algorithm material more effectively. To build the learning-aid application that demonstrates how the steps of a sorting algorithm work, the author uses the Luther-Sutopo method, whose stages are concept, design, material collecting, assembly, testing, and distribution. The application contains general material about sorting algorithms as well as videos that help maximize understanding of the sorting algorithm material. The result is a desktop-based learning-aid application for studying specific sorting algorithms in theory.
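As an example of the kind of step-by-step walkthrough such a learning aid might present, here is a generic bubble sort that prints the list after each pass; this sketch is not taken from the application itself.

def bubble_sort(items):
    # Bubble sort: repeatedly swap adjacent out-of-order elements,
    # printing each pass so a learner can follow how the order emerges.
    data = list(items)
    for i in range(len(data) - 1):
        for j in range(len(data) - 1 - i):
            if data[j] > data[j + 1]:
                data[j], data[j + 1] = data[j + 1], data[j]
        print(f"pass {i + 1}: {data}")
    return data

bubble_sort([5, 1, 4, 2, 8])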
APA, Harvard, Vancouver, ISO, and other styles
50

Shivaswamy, Pannaga, and Thorsten Joachims. "Coactive Learning." Journal of Artificial Intelligence Research 53 (May 27, 2015): 1–40. http://dx.doi.org/10.1613/jair.4539.

Full text
Abstract:
We propose Coactive Learning as a model of interaction between a learning system and a human user, where both have the common goal of providing results of maximum utility to the user. Interactions in the Coactive Learning model take the following form: at each step, the system (e.g. search engine) receives a context (e.g. query) and predicts an object (e.g. ranking); the user responds by correcting the system if necessary, providing a slightly improved but not necessarily optimal object as feedback. We argue that such preference feedback can be inferred in large quantity from observable user behavior (e.g., clicks in web search), unlike the optimal feedback required in the expert model or the cardinal valuations required for bandit learning. Despite the relaxed requirements for the feedback, we show that it is possible to adapt many existing online learning algorithms to the coactive framework. In particular, we provide algorithms that achieve square root regret in terms of cardinal utility, even though the learning algorithm never observes cardinal utility values directly. We also provide an algorithm with logarithmic regret in the case of strongly convex loss functions. An extensive empirical study demonstrates the applicability of our model and algorithms on a movie recommendation task, as well as ranking for web search.
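A preference-perceptron-style update of the kind used in this setting can be sketched as follows: after the user returns an improved object, the weight vector moves toward its features and away from the presented object's features. The linear utility model, the callables, and their names are assumptions for illustration rather than the paper's exact algorithm.

import numpy as np

def coactive_preference_perceptron(contexts, features, present, user_feedback, dim):
    # Coactive-learning-style loop (sketch):
    #   features(x, y)     -> joint feature vector of context x and object y (assumed callable)
    #   present(x, w)      -> object predicted under the current utility w . features(x, y)
    #   user_feedback(x, y)-> a slightly improved, not necessarily optimal, object
    w = np.zeros(dim)
    for x in contexts:                              # stream of contexts (e.g. search queries)
        y = present(x, w)                           # system's prediction
        y_bar = user_feedback(x, y)                 # user's improved object
        w += features(x, y_bar) - features(x, y)    # move toward the preferred object
    return w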
APA, Harvard, Vancouver, ISO, and other styles