Ready-made bibliography on the topic "Learning algorithms"

Create an accurate reference in APA, MLA, Chicago, Harvard, and many other citation styles


Consult the lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Learning algorithms".

An "Add to bibliography" button is available next to every work in the bibliography. Use it, and we will automatically create a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read the online annotation of the work, provided the relevant parameters are available in the metadata.

Journal articles on the topic "Learning algorithms"

1

Egorova, Irina Konstantinovna. "BASIC ALGORITHMS LEARNING ALGORITHMS". Economy. Business. Computer science, no. 3 (January 1, 2016): 47–58. http://dx.doi.org/10.19075/2500-2074-2016-3-47-58.

2

Xu, Chenyang, and Benjamin Moseley. "Learning-Augmented Algorithms for Online Steiner Tree". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (June 28, 2022): 8744–52. http://dx.doi.org/10.1609/aaai.v36i8.20854.

Abstract:
This paper considers the recently popular beyond-worst-case algorithm analysis model which integrates machine-learned predictions with online algorithm design. We consider the online Steiner tree problem in this model for both directed and undirected graphs. Steiner tree is known to have strong lower bounds in the online setting and any algorithm’s worst-case guarantee is far from desirable. This paper considers algorithms that predict which terminal arrives online. The predictions may be incorrect and the algorithms’ performance is parameterized by the number of incorrectly predicted terminals. These guarantees ensure that algorithms break through the online lower bounds with good predictions and the competitive ratio gracefully degrades as the prediction error grows. We then observe that the theory is predictive of what will occur empirically. We show on graphs where terminals are drawn from a distribution, the new online algorithms have strong performance even with modestly correct predictions.
3

Luan, Yuxuan, Junjiang He, Jingmin Yang, Xiaolong Lan, and Geying Yang. "Uniformity-Comprehensive Multiobjective Optimization Evolutionary Algorithm Based on Machine Learning". International Journal of Intelligent Systems 2023 (November 10, 2023): 1–21. http://dx.doi.org/10.1155/2023/1666735.

Abstract:
When solving real-world optimization problems, the uniformity of Pareto fronts is an essential strategy in multiobjective optimization problems (MOPs). However, it is a common challenge for many existing multiobjective optimization algorithms due to the skewed distribution of solutions and biases towards specific objective functions. This paper proposes a uniformity-comprehensive multiobjective optimization evolutionary algorithm based on machine learning to address this limitation. Our algorithm utilizes uniform initialization and self-organizing map (SOM) to enhance population diversity and uniformity. We track the IGD value and use K-means and CNN refinement with crossover and mutation techniques during evolutionary stages. Our algorithm’s uniformity and objective function balance superiority were verified through comparative analysis with 13 other algorithms, including eight traditional multiobjective optimization algorithms, three machine learning-based enhanced multiobjective optimization algorithms, and two algorithms with objective initialization improvements. Based on these comprehensive experiments, it has been proven that our algorithm outperforms other existing algorithms in these areas.
4

Zhang, Pingke, and Jinyu Huang. "APTM: Structurally Informative Network Representation Learning". Frontiers in Science and Engineering 3, no. 11 (November 21, 2023): 5–11. http://dx.doi.org/10.54691/fse.v3i11.5701.

Abstract:
Network representation learning algorithms provide a method to map complex network data into low-dimensional real vectors, aiming to capture and preserve structural information within the network. In recent years, these algorithms have found widespread applications in tasks such as link prediction and node classification in graph data mining. In this work, we propose a novel algorithm based on an adaptive transfer probability matrix. We use a deep neural network, comprising an autoencoder, to encode and reduce the dimensionality of the generated matrix, thereby encoding the intricate structural information of the network into low-dimensional real vectors. We evaluate the algorithm's performance through node classification, and in comparison with mainstream network representation learning algorithms, our proposed algorithm demonstrates favorable results. It outperforms baseline models in terms of micro-F1 scores on three datasets: PPI, Citeseer, and Wiki.
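The general recipe sketched in this abstract (build a transition-probability matrix for the graph, then compress its rows with an autoencoder to obtain node vectors) can be illustrated as follows. This is a minimal Python sketch using NetworkX and PyTorch on a toy graph, not the authors' APTM implementation; the single-step transition matrix, the 16-dimensional embedding size, and the training schedule are illustrative assumptions.

    import networkx as nx
    import torch
    from torch import nn

    # Toy graph standing in for a real network such as PPI, Citeseer, or Wiki.
    G = nx.karate_club_graph()
    A = nx.to_numpy_array(G)                      # adjacency matrix
    P = A / A.sum(axis=1, keepdims=True)          # row-normalized transition probabilities
    X = torch.tensor(P, dtype=torch.float32)

    n, dim = X.shape[0], 16                       # assumed embedding dimensionality
    model = nn.Sequential(nn.Linear(n, dim), nn.ReLU(), nn.Linear(dim, n))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.MSELoss()

    for epoch in range(200):                      # learn to reconstruct each row of P
        optimizer.zero_grad()
        loss = loss_fn(model(X), X)
        loss.backward()
        optimizer.step()

    with torch.no_grad():                         # hidden activations are the node embeddings
        embeddings = torch.relu(model[0](X)).numpy()
    print(embeddings.shape)                       # (34, 16); feed these to a node classifier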
5

Bottou, Léon, and Vladimir Vapnik. "Local Learning Algorithms". Neural Computation 4, no. 6 (November 1992): 888–900. http://dx.doi.org/10.1162/neco.1992.4.6.888.

Abstract:
Very rarely are training data evenly distributed in the input space. Local learning algorithms attempt to locally adjust the capacity of the training system to the properties of the training set in each area of the input space. The family of local learning algorithms contains known methods, like the k-nearest neighbors method (kNN) or the radial basis function networks (RBF), as well as new algorithms. A single analysis models some aspects of these algorithms. In particular, it suggests that neither kNN or RBF, nor nonlocal classifiers, achieve the best compromise between locality and capacity. A careful control of these parameters in a simple local learning algorithm has provided a performance breakthrough for an optical character recognition problem. Both the error rate and the rejection performance have been significantly improved.
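The local-learning idea described here (adjust capacity locally by fitting a simple model only on the training points nearest to each query) can be sketched in Python with scikit-learn as below. This is not the authors' 1992 procedure; the dataset, the neighbourhood size k, and the choice of a small linear model per neighbourhood are assumptions made for illustration.

    import numpy as np
    from sklearn.datasets import make_moons
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import NearestNeighbors

    X, y = make_moons(n_samples=600, noise=0.25, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    k = 40                                  # neighbourhood size trades locality against capacity
    index = NearestNeighbors(n_neighbors=k).fit(X_train)

    def local_predict(x):
        """Fit a small linear classifier on the k training points nearest to x."""
        _, idx = index.kneighbors(x.reshape(1, -1))
        X_local, y_local = X_train[idx[0]], y_train[idx[0]]
        if len(np.unique(y_local)) == 1:    # pure neighbourhood: no model needed
            return y_local[0]
        return LogisticRegression().fit(X_local, y_local).predict(x.reshape(1, -1))[0]

    y_pred = np.array([local_predict(x) for x in X_test])
    print("local-model accuracy:", (y_pred == y_test).mean())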
6

Coe, James, and Mustafa Atay. "Evaluating Impact of Race in Facial Recognition across Machine Learning and Deep Learning Algorithms". Computers 10, no. 9 (September 10, 2021): 113. http://dx.doi.org/10.3390/computers10090113.

Abstract:
The research aims to evaluate the impact of race in facial recognition across two types of algorithms. We give a general insight into facial recognition and discuss four problems related to facial recognition. We review our system design, development, and architectures and give an in-depth evaluation plan for each type of algorithm, dataset, and a look into the software and its architecture. We thoroughly explain the results and findings of our experimentation and provide analysis for the machine learning algorithms and deep learning algorithms. Concluding the investigation, we compare the results of two kinds of algorithms and compare their accuracy, metrics, miss rates, and performances to observe which algorithms mitigate racial bias the most. We evaluate racial bias across five machine learning algorithms and three deep learning algorithms using racially imbalanced and balanced datasets. We evaluate and compare the accuracy and miss rates between all tested algorithms and report that SVC is the superior machine learning algorithm and VGG16 is the best deep learning algorithm based on our experimental study. Our findings conclude the algorithm that mitigates the bias the most is VGG16, and all our deep learning algorithms outperformed their machine learning counterparts.
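The core of such a bias evaluation is comparing accuracy and miss rates per demographic group for each trained model. A minimal sketch of that bookkeeping step on synthetic predictions is given below; it is not the authors' pipeline, and the group labels, the 90% base accuracy, and the column names are placeholder assumptions.

    import numpy as np
    import pandas as pd

    # Synthetic stand-in for one classifier's output on a group-annotated test set.
    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "group": rng.choice(["A", "B"], size=1000),     # demographic group of each sample
        "y_true": rng.integers(0, 2, size=1000),        # ground-truth match / non-match
    })
    df["y_pred"] = np.where(rng.random(1000) < 0.9, df["y_true"], 1 - df["y_true"])

    def per_group_metrics(g):
        accuracy = (g["y_pred"] == g["y_true"]).mean()
        positives = g[g["y_true"] == 1]
        miss_rate = (positives["y_pred"] == 0).mean()   # false-negative rate
        return pd.Series({"accuracy": accuracy, "miss_rate": miss_rate})

    print(df.groupby("group")[["y_true", "y_pred"]].apply(per_group_metrics))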
7

Mu, Tong, Georgios Theocharous, David Arbour, and Emma Brunskill. "Constraint Sampling Reinforcement Learning: Incorporating Expertise for Faster Learning". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 7 (June 28, 2022): 7841–49. http://dx.doi.org/10.1609/aaai.v36i7.20753.

Abstract:
Online reinforcement learning (RL) algorithms are often difficult to deploy in complex human-facing applications as they may learn slowly and have poor early performance. To address this, we introduce a practical algorithm for incorporating human insight to speed learning. Our algorithm, Constraint Sampling Reinforcement Learning (CSRL), incorporates prior domain knowledge as constraints/restrictions on the RL policy. It takes in multiple potential policy constraints to maintain robustness to misspecification of individual constraints while leveraging helpful ones to learn quickly. Given a base RL learning algorithm (ex. UCRL, DQN, Rainbow) we propose an upper confidence with elimination scheme that leverages the relationship between the constraints, and their observed performance, to adaptively switch among them. We instantiate our algorithm with DQN-type algorithms and UCRL as base algorithms, and evaluate our algorithm in four environments, including three simulators based on real data: recommendations, educational activity sequencing, and HIV treatment sequencing. In all cases, CSRL learns a good policy faster than baselines.
8

Yu, Binyan, and Yuanzheng Zheng. "Research on algorithms of machine learning". Applied and Computational Engineering 39, no. 1 (February 21, 2024): 277–81. http://dx.doi.org/10.54254/2755-2721/39/20230614.

Abstract:
Machine learning has endless application possibilities, and many of its algorithms are worth studying in depth. Different algorithms can be applied flexibly to a variety of vertical fields: neural network algorithms are the most common choice for image recognition and computer vision scenarios such as face recognition, garbage classification, and picture classification, and they also underpin the recent boom in natural language processing and recommendation systems. In the field of financial analysis, the decision tree algorithm and its derivatives such as random forest are mainstream, alongside support vector machines, naive Bayes, K-nearest neighbor algorithms, and others, ranging from traditional regression algorithms to the latest neural network algorithms. This paper discusses the principles behind these algorithms and lists some corresponding applications, covering linear regression, decision trees, and other supervised learning methods. Although some of them have been replaced by more powerful and flexible algorithms and methods, studying and understanding these foundational algorithms in depth gives a better understanding of how they work and supports better design and optimization of neural network models.
9

Sun, Yuqin, Songlei Wang, Dongmei Huang, Yuan Sun, Anduo Hu, and Jinzhong Sun. "A multiple hierarchical clustering ensemble algorithm to recognize clusters arbitrarily shaped". Intelligent Data Analysis 26, no. 5 (September 5, 2022): 1211–28. http://dx.doi.org/10.3233/ida-216112.

Abstract:
As a research hotspot in ensemble learning, clustering ensemble obtains robust and highly accurate algorithms by integrating multiple basic clustering algorithms. Most existing clustering ensemble algorithms take linear clustering algorithms as the base clusterings. As a typical unsupervised learning technique, clustering has difficulty properly defining the accuracy of its findings, which makes it hard to significantly enhance the performance of the final algorithm. In this article, the AGglomerative NESting (AGNES) method is used to build base clusters, and an integration strategy for combining multiple AGglomerative NESting clusterings is proposed. The algorithm has three main steps: evaluating the credibility of labels, producing multiple base clusters, and constructing the relation among clusters. The proposed algorithm builds on the original advantages of AGglomerative NESting and further compensates for its inability to identify arbitrarily shaped clusters. Comparing the proposed algorithm's clustering performance to that of existing clustering algorithms on different datasets establishes its superiority in terms of clustering performance.
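The mechanics of a clustering ensemble built from agglomerative (AGNES-style) base clusterings can be sketched as follows: several base partitions vote into a co-association matrix, and a final agglomerative pass clusters that matrix. This is a generic sketch rather than the algorithm proposed in the paper; the toy data, the four base settings, and the linkage choices are assumptions, and it relies on scikit-learn >= 1.2 for the metric="precomputed" parameter.

    import numpy as np
    from sklearn.cluster import AgglomerativeClustering
    from sklearn.datasets import make_blobs

    X, _ = make_blobs(n_samples=300, centers=4, random_state=0)
    n = len(X)

    # Several AGNES-style base clusterings with different linkages and cluster counts.
    co_assoc = np.zeros((n, n))
    base_settings = [("ward", 3), ("average", 4), ("complete", 5), ("ward", 6)]
    for linkage, k in base_settings:
        labels = AgglomerativeClustering(n_clusters=k, linkage=linkage).fit_predict(X)
        co_assoc += (labels[:, None] == labels[None, :]).astype(float)
    co_assoc /= len(base_settings)          # fraction of base clusterings agreeing on each pair

    # Consensus clustering on the co-association "distance" 1 - co_assoc.
    consensus = AgglomerativeClustering(
        n_clusters=4, metric="precomputed", linkage="average"
    ).fit_predict(1.0 - co_assoc)
    print(np.bincount(consensus))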
10

Ling, Qingyang. "Machine learning algorithms review". Applied and Computational Engineering 4, no. 1 (June 14, 2023): 91–98. http://dx.doi.org/10.54254/2755-2721/4/20230355.

Abstract:
Machine learning is a field of study where the computer can learn for itself without a human explicitly hardcoding the knowledge for it. These algorithms make up the backbone of machine learning. This paper aims to study the field of machine learning and its algorithms. It will examine different types of machine learning models and introduce their most popular algorithms. The methodology of this paper is a literature review, which examines the most commonly used machine learning algorithms in the current field. Such algorithms include Naïve Bayes, Decision Tree, KNN, and K-Means clustering. Nowadays, machine learning is everywhere, and almost everyone using a technology product is enjoying its convenience. Applications like spam mail classification, image recognition, personalized product recommendations, and natural language processing all use machine learning algorithms. The conclusion is that there is no single algorithm that can solve all the problems; the choice of algorithms and models must depend on the specific problem.
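For readers who want to try the algorithms named in this review, the short Python sketch below runs them on a toy dataset with scikit-learn; the dataset and hyperparameters are arbitrary choices for illustration, not anything prescribed by the paper.

    from sklearn.cluster import KMeans
    from sklearn.datasets import load_iris
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)

    # Supervised algorithms mentioned in the review, compared with 5-fold cross-validation.
    for name, clf in [("Naive Bayes", GaussianNB()),
                      ("Decision Tree", DecisionTreeClassifier(random_state=0)),
                      ("KNN", KNeighborsClassifier(n_neighbors=5))]:
        scores = cross_val_score(clf, X, y, cv=5)
        print(f"{name}: mean accuracy {scores.mean():.3f}")

    # K-Means is unsupervised: it groups the same data without looking at the labels.
    clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
    print("K-Means cluster sizes:", [int((clusters == c).sum()) for c in range(3)])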

Doctoral dissertations on the topic "Learning algorithms"

1

Andersson, Viktor. "Machine Learning in Logistics: Machine Learning Algorithms : Data Preprocessing and Machine Learning Algorithms". Thesis, Luleå tekniska universitet, Datavetenskap, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-64721.

Abstract:
Data Ductus is a Swedish IT-consultant company whose customer base ranges from small startups to large, already established corporations. The company has grown steadily since the 80s and has established offices in both Sweden and the US. With the help of machine learning, this project will present a possible solution to the errors caused by the human factor in the logistics business. A way of preprocessing data before applying it to a machine learning algorithm, as well as a couple of algorithms to use, will be presented.
2

Ameur, Foued ben Fredj. "Space-bounded learning algorithms /". Paderborn : Heinz Nixdorf Inst, 1996. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=007171235&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.

3

Hsu, Daniel Joseph. "Algorithms for active learning". Diss., [La Jolla] : University of California, San Diego, 2010. http://wwwlib.umi.com/cr/ucsd/fullcit?p3404377.

Abstract:
Thesis (Ph. D.)--University of California, San Diego, 2010.
Title from first page of PDF file (viewed June 10, 2010). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (leaves 97-101).
4

CESARI, TOMMASO RENATO. "ALGORITHMS, LEARNING, AND OPTIMIZATION". Doctoral thesis, Università degli Studi di Milano, 2020. http://hdl.handle.net/2434/699354.

Abstract:
This thesis covers some algorithmic aspects of online machine learning and optimization. In Chapter 1 we design algorithms with state-of-the-art regret guarantees for the problem of dynamic pricing. In Chapter 2 we move on to an asynchronous online learning setting in which only some of the agents in the network are active at each time step. We show that when information is shared among neighbors, knowledge about the graph structure might have a significantly different impact on learning rates depending on how agents are activated. In Chapter 3 we investigate the online problem of multivariate non-concave maximization under weak assumptions on the regularity of the objective function. In Chapter 4 we introduce a new performance measure and design an efficient algorithm to learn optimal policies in repeated A/B testing.
5

Janagam, Anirudh, and Saddam Hossen. "Analysis of Network Intrusion Detection System with Machine Learning Algorithms (Deep Reinforcement Learning Algorithm)". Thesis, Blekinge Tekniska Högskola, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-17126.

6

Sentenac, Flore. "Learning and Algorithms for Online Matching". Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAG005.

Abstract:
This thesis focuses mainly on online matching problems, where sets of resources are sequentially allocated to demand streams. We treat them both from an online learning and a competitive analysis perspective, always in the case when the input is stochastic. On the online learning side, we first study how the specific matching structure influences learning, and then how carry-over effects in the system affect its performance. On the competitive analysis side, we study the online matching problem in specific classes of random graphs, in an effort to move away from worst-case analysis. Finally, we explore how learning can be leveraged in the scheduling problem.
7

Thompson, Simon Giles. "Distributed boosting algorithms". Thesis, University of Portsmouth, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.285529.

8

Moon, Gordon Euhyun. "Parallel Algorithms for Machine Learning". The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1561980674706558.

9

Atiya, Amir Abu-Mostafa Yaser S. "Learning algorithms for neural networks /". Diss., Pasadena, Calif. : California Institute of Technology, 1991. http://resolver.caltech.edu/CaltechETD:etd-09232005-083502.

10

Sanchez, Merchante Luis Francisco. "Learning algorithms for sparse classification". Phd thesis, Université de Technologie de Compiègne, 2013. http://tel.archives-ouvertes.fr/tel-00868847.

Abstract:
This thesis deals with the development of estimation algorithms with embedded feature selection in the context of high dimensional data, in the supervised and unsupervised frameworks. The contributions of this work are materialized by two algorithms, GLOSS for the supervised domain and Mix-GLOSS for the unsupervised counterpart. Both algorithms are based on the resolution of optimal scoring regression regularized with a quadratic formulation of the group-Lasso penalty, which encourages the removal of uninformative features. The theoretical foundations proving that a group-Lasso penalized optimal scoring regression can be used to solve a linear discriminant analysis were first developed in this work. The theory that adapts this technique to the unsupervised domain by means of the EM algorithm is not new, but it has never been clearly exposed for a sparsity-inducing penalty. This thesis solidly demonstrates that the utilization of group-Lasso penalized optimal scoring regression inside an EM algorithm is possible. Our algorithms have been tested with real and artificial high dimensional databases, with impressive results from the point of view of parsimony, without compromising prediction performance.

Books on the topic "Learning algorithms"

1

Celebi, M. Emre, and Kemal Aydin, eds. Unsupervised Learning Algorithms. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-24211-8.

2

Li, Fuwei, Lifeng Lai, and Shuguang Cui. Machine Learning Algorithms. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-16375-3.

3

Szepesvári, Csaba. Algorithms for Reinforcement Learning. Cham: Springer International Publishing, 2010. http://dx.doi.org/10.1007/978-3-031-01551-9.

4

Ayyadevara, V. Kishore. Pro Machine Learning Algorithms. Berkeley, CA: Apress, 2018. http://dx.doi.org/10.1007/978-1-4842-3564-5.

5

Szepesvári, Csaba. Algorithms for reinforcement learning. San Rafael, Calif. (1537 Fourth Street, San Rafael, CA 94901 USA): Morgan & Claypool, 2010.

6

Hutchinson, Alan. Algorithmic learning. Oxford: Clarendon Press, 1994.

7

Learning kernel classifiers: Theory and algorithms. Cambridge, Mass: MIT Press, 2002.

8

The design and analysis of efficient learning algorithms. Cambridge, Mass: MIT Press, 1992.

9

Schapire, Robert E. Boosting: Foundations and algorithms. Cambridge, MA: MIT Press, 2012.

10

Pereira, Ana I., Florbela P. Fernandes, João P. Coelho, João P. Teixeira, Maria F. Pacheco, Paulo Alves, and Rui P. Lopes, eds. Optimization, Learning Algorithms and Applications. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-91885-9.


Book chapters on the topic "Learning algorithms"

1

Grollman, Daniel, and Aude Billard. "Learning Algorithms". In Encyclopedia of the Sciences of Learning, 1766–69. Boston, MA: Springer US, 2012. http://dx.doi.org/10.1007/978-1-4419-1428-6_759.

2

Aizenberg, Igor N., Naum N. Aizenberg, and Joos Vandewalle. "Learning Algorithms". In Multi-Valued and Universal Binary Neurons, 139–67. Boston, MA: Springer US, 2000. http://dx.doi.org/10.1007/978-1-4757-3115-6_4.

3

Haken, Hermann. "Learning Algorithms". In Springer Series in Synergetics, 88–124. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-662-10182-7_10.

4

Haken, Hermann. "Learning Algorithms". In Springer Series in Synergetics, 84–120. Berlin, Heidelberg: Springer Berlin Heidelberg, 1991. http://dx.doi.org/10.1007/978-3-662-22450-2_10.

5

Bisong, Ekaba. "Learning Algorithms". In Building Machine Learning and Deep Learning Models on Google Cloud Platform, 209–11. Berkeley, CA: Apress, 2019. http://dx.doi.org/10.1007/978-1-4842-4470-8_17.

6

Björklund, Henrik, Johanna Björklund, and Wim Martens. "Learning algorithms". In Handbook of Automata Theory, 375–409. Zuerich, Switzerland: European Mathematical Society Publishing House, 2021. http://dx.doi.org/10.4171/automata-1/11.

7

Berkemer, Rainer, and Markus Grottke. "Learning Algorithms". In AI - Limits and Prospects of Artificial Intelligence, 9–42. Bielefeld, Germany: transcript Verlag, 2023. http://dx.doi.org/10.14361/9783839457320-003.

8

Aved’yan, Eduard. "Deterministic Algorithms". In Learning Systems, 16–39. London: Springer London, 1995. http://dx.doi.org/10.1007/978-1-4471-3089-5_2.

9

Aved’yan, Eduard. "Stochastic Algorithms". In Learning Systems, 62–71. London: Springer London, 1995. http://dx.doi.org/10.1007/978-1-4471-3089-5_5.

10

Geetha, T. V., and S. Sendhilkumar. "Classification Algorithms". In Machine Learning, 127–51. Boca Raton: Chapman and Hall/CRC, 2023. http://dx.doi.org/10.1201/9781003290100-6.


Conference papers on the topic "Learning algorithms"

1

Albeanu, Grigore, Henrik Madsen, and Florin Popentiu-Vladicescu. "LEARNING FROM NATURE: NATURE-INSPIRED ALGORITHMS". In eLSE 2016. Carol I National Defence University Publishing House, 2016. http://dx.doi.org/10.12753/2066-026x-16-158.

Abstract:
During the last decade, nature has inspired researchers to develop new algorithms [1, 2, 3]. The largest collection of nature-inspired algorithms is biology-inspired: swarm intelligence (particle swarm optimization, ant colony optimization, cuckoo search, the bees algorithm, the bat algorithm, the firefly algorithm, etc.), genetic and evolutionary strategies, artificial immune systems, etc. Well-known examples include aircraft wing design, wind turbine design, the bionic car, the bullet train, optimal decisions related to traffic, and appropriate strategies to survive under a well-adapted immune system. Based on the collective social behavior of organisms, researchers have developed optimization strategies that take into account not only individuals, but also groups and the environment [1]. However, learning from nature, new classes of approaches can be identified, tested and compared against already available algorithms. After a short introduction, the second section reviews the nature-inspired algorithms that are most effective according to their performance. The third section is dedicated to learning strategies based on nature-oriented thinking. Examples and the benefits obtained from applying nature-inspired strategies in problem solving are given in the fourth section. Concluding remarks are given in the final section. References: 1. G. Albeanu, B. Burtschy, Fl. Popentiu-Vladicescu, Soft Computing Strategies in Multiobjective Optimization, Ann. Spiru Haret Univ., Mat-Inf Ser., 2013, 2, http://anale-mi.spiruharet.ro/upload/full_2013_2_a4.pdf 2. H. Madsen, G. Albeanu, and Fl. Popentiu-Vladicescu, BIO Inspired Algorithms in Reliability, In H. Pham (ed.) Proceedings of the 20th ISSAT International Conference on Reliability and Quality in Design, Reliability and Quality in Design, August 7-9, 2014, Seattle, WA, U.S.A. 3. N. Shadbolt, Nature-Inspired Computing, http://www.agent.ai/doc/upload/200402/shad04_1.pdf
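As one concrete instance of the swarm-intelligence family listed above, the following minimal particle swarm optimization sketch in Python minimizes a simple benchmark function; the swarm size, inertia, and acceleration coefficients are standard textbook values and are not taken from the cited works.

    import numpy as np

    def sphere(x):                          # benchmark objective to minimize
        return float(np.sum(x ** 2))

    rng = np.random.default_rng(0)
    dim, n_particles, iterations = 5, 30, 200
    w, c1, c2 = 0.7, 1.5, 1.5               # inertia and acceleration coefficients

    pos = rng.uniform(-5, 5, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()                      # best position found by each particle
    pbest_val = np.array([sphere(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()  # best position found by the whole swarm

    for _ in range(iterations):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([sphere(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()

    print("best value found:", sphere(gbest))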
2

Kamalaruban, Parameswaran, Rati Devidze, Volkan Cevher, and Adish Singla. "Interactive Teaching Algorithms for Inverse Reinforcement Learning". In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/374.

Abstract:
We study the problem of inverse reinforcement learning (IRL) with the added twist that the learner is assisted by a helpful teacher. More formally, we tackle the following algorithmic question: How could a teacher provide an informative sequence of demonstrations to an IRL learner to speed up the learning process? We present an interactive teaching framework where a teacher adaptively chooses the next demonstration based on learner's current policy. In particular, we design teaching algorithms for two concrete settings: an omniscient setting where a teacher has full knowledge about the learner's dynamics and a blackbox setting where the teacher has minimal knowledge. Then, we study a sequential variant of the popular MCE-IRL learner and prove convergence guarantees of our teaching algorithm in the omniscient setting. Extensive experiments with a car driving simulator environment show that the learning progress can be speeded up drastically as compared to an uninformative teacher.
3

Pratuzaitė, Greta, and Nijolė Maknickienė. "Investigation of credit cards fraud detection by using deep learning and classification algorithms". In 11th International Scientific Conference "Business and Management 2020". VGTU Technika, 2020. http://dx.doi.org/10.3846/bm.2020.558.

Abstract:
Criminal financial behaviour is a problem for both banks and newly created fintech companies. Credit card fraud detection becomes a challenge for any such company. The aim of this paper is to compare the ability to detect credit card fraud of four algorithmic methods: the generalized method of moments, K-nearest neighbour, naive Bayes classification, and deep learning. The deep learning algorithm has been tuned to select key parameters so that fraud detection accuracy is the best. Five recognition accuracy parameters and cost calculations showed that the deep learning algorithm is the best fraud detection method compared to the other classification algorithms. A financial company reduces losses and increases customer confidence by using fraud prevention technologies.
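A minimal sketch of this kind of comparison on a synthetic, imbalanced "fraud" dataset is shown below; k-nearest neighbours, naive Bayes, and a small multilayer perceptron (standing in for the deep learning model) are assumptions rather than the authors' exact setups, and the paper's generalized-method-of-moments step is omitted.

    from sklearn.datasets import make_classification
    from sklearn.metrics import classification_report
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.neural_network import MLPClassifier

    # Synthetic imbalanced data: roughly 2% "fraud" cases.
    X, y = make_classification(n_samples=5000, n_features=20, weights=[0.98, 0.02],
                               random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    models = {
        "KNN": KNeighborsClassifier(n_neighbors=5),
        "Naive Bayes": GaussianNB(),
        "MLP (deep learning stand-in)": MLPClassifier(hidden_layer_sizes=(64, 32),
                                                      max_iter=500, random_state=0),
    }
    for name, model in models.items():
        y_pred = model.fit(X_tr, y_tr).predict(X_te)
        print(name)
        print(classification_report(y_te, y_pred, digits=3))  # watch recall on the rare class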
4

Xia, Ruiqi, Manman Li, and Shaozhen Chen. "Cryptographic Algorithms Identification based on Deep Learning". In 3rd International Conference on Artificial Intelligence and Machine Learning (CAIML 2022). Academy and Industry Research Collaboration Center (AIRCC), 2022. http://dx.doi.org/10.5121/csit.2022.121217.

Abstract:
The identification of cryptographic algorithms is a prerequisite of cryptanalysis, which can help recover keys effectively. This paper focuses on the construction of cryptographic identification classifiers based on a residual neural network and feature engineering. We select 6 algorithms, including block ciphers and public-key ciphers, for the experiments. The results show that the accuracy is generally over 90% for each algorithm. Our work successfully combines deep learning with cryptanalysis, which is also very meaningful for the development of modern cryptography and pattern recognition.
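The classifier side of such a pipeline (a small residual network over engineered ciphertext features) can be sketched as below. The random feature vectors, the six-class setup, and the layer widths are placeholders, since producing real inputs requires the feature-engineering step the paper describes; this is not the authors' model.

    import torch
    from torch import nn

    class ResidualBlock(nn.Module):
        """Fully connected block with a skip connection, as in residual networks."""
        def __init__(self, width):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(width, width), nn.ReLU(),
                                     nn.Linear(width, width))

        def forward(self, x):
            return torch.relu(x + self.net(x))

    n_features, n_classes = 64, 6      # six candidate ciphers, 64 engineered features (assumed)
    model = nn.Sequential(nn.Linear(n_features, 128), nn.ReLU(),
                          ResidualBlock(128), ResidualBlock(128),
                          nn.Linear(128, n_classes))

    # Placeholder data standing in for labelled ciphertext feature vectors.
    X = torch.randn(512, n_features)
    y = torch.randint(0, n_classes, (512,))

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for epoch in range(20):
        optimizer.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        optimizer.step()
    print("training accuracy:", (model(X).argmax(dim=1) == y).float().mean().item())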
5

Dong, Li-yan, Guang-yuan Liu, Sen-miao Yuan, Yong-li Li, and Zhen Li. "Classifier Learning Algorithm Based on Genetic Algorithms". In Second International Conference on Innovative Computing, Information and Control (ICICIC 2007). IEEE, 2007. http://dx.doi.org/10.1109/icicic.2007.214.

6

Wilamowski, Bogdan M. "Advanced learning algorithms". In 2009 International Conference on Intelligent Engineering Systems (INES). IEEE, 2009. http://dx.doi.org/10.1109/ines.2009.4924730.

7

Hasan, Mohammed A. "Orthonormalization Learning Algorithms". In 2007 International Joint Conference on Neural Networks. IEEE, 2007. http://dx.doi.org/10.1109/ijcnn.2007.4371247.

8

Hurlbert, Anya C., and Tomaso A. Poggio. "Learning Lightness Algorithms". In 1988 Robotics Conferences, edited by David P. Casasent. SPIE, 1989. http://dx.doi.org/10.1117/12.960280.

9

Paun, Alexandru. "EXPERIMENTING WITH DECISION TREES AND INSTANCE-BASED LEARNING ALGORITHMS". In eLSE 2014. Editura Universitatii Nationale de Aparare "Carol I", 2014. http://dx.doi.org/10.12753/2066-026x-14-113.

Abstract:
The use of e-learning systems has gradually increased in recent years, becoming more and more prevalent in universities as well as in self-study. This is because the amount of information is rapidly growing and people often need it available at a click's distance. The plentiful information can be seen as a true blessing, but unclassified information can also be a real hassle, leaving the user without any articulate knowledge about the searched subject. Data mining can be seen as the solution for this kind of problem, being used already in various research areas such as medicine, business, and market research with immense chunks of provided data. Simply put, data mining techniques can be used to "mine" knowledge from e-learning systems by analyzing information and detecting patterns of teachers and students and, maybe most importantly, learning the students' assimilation processes and learning patterns, information that can't be seen easily with the naked eye. The results can then be applied so that the learning process is more effective, by adding functionalities such as a personalized learning process, feedback for professors on the didactic contents, or intrusion detection tools. Data mining classification algorithms can be used, ultimately, to predict and classify students (student success, grades). Also, students can be grouped to predict or analyze their response to different teaching techniques. The objective of this study is to improve the performance of one data mining classification algorithm by using another one, keeping in mind their strengths and weaknesses. Their performance was tested with the same training sets for all algorithms, first testing the "clean" algorithms taken separately and then the combination of the algorithms. The algorithms used in the study are decision trees and one instance-based algorithm, k-NN. I used the C4.5 decision tree implementation (both pruned and unpruned trees) and the 1-NN and 3-NN algorithms for the instance-based one. We will see in the study whether the combined algorithm performed better than the solitary algorithms and which combination of algorithms outperforms the others.
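The combination experiment described above (pairing a decision tree with 1-NN / 3-NN and testing each algorithm alone on the same training sets) can be approximated in Python as below; scikit-learn's tree is CART rather than C4.5, and the dataset and soft-voting scheme are assumptions made purely for illustration.

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import VotingClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_breast_cancer(return_X_y=True)

    candidates = {
        "pruned tree (max_depth=4)": DecisionTreeClassifier(max_depth=4, random_state=0),
        "unpruned tree": DecisionTreeClassifier(random_state=0),
        "1-NN": KNeighborsClassifier(n_neighbors=1),
        "3-NN": KNeighborsClassifier(n_neighbors=3),
        "tree + 3-NN (soft vote)": VotingClassifier(
            estimators=[("tree", DecisionTreeClassifier(max_depth=4, random_state=0)),
                        ("knn", KNeighborsClassifier(n_neighbors=3))],
            voting="soft",               # average the two models' predicted probabilities
        ),
    }

    # Same 10 folds for every candidate, mirroring the "same training sets" protocol.
    for name, clf in candidates.items():
        scores = cross_val_score(clf, X, y, cv=10)
        print(f"{name}: mean accuracy {scores.mean():.3f}")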
10

Brown, Jason, Riley O'Neill, Jeff Calder, and Andrea L. Bertozzi. "Utilizing contrastive learning for graph-based active learning of SAR data". In Algorithms for Synthetic Aperture Radar Imagery XXX, edited by Edmund Zelnio and Frederick D. Garber. SPIE, 2023. http://dx.doi.org/10.1117/12.2663099.


Organizational reports on the topic "Learning algorithms"

1

Stepp, Robert E., Bradley L. Whitehall, and Lawrence B. Holder. Toward Intelligent Machine Learning Algorithms. Fort Belvoir, VA: Defense Technical Information Center, May 1988. http://dx.doi.org/10.21236/ada197049.

2

Moody, John. Statistical Learning Theory and Algorithms. Fort Belvoir, VA: Defense Technical Information Center, February 1993. http://dx.doi.org/10.21236/ada270209.

3

Haussler, David, and Manfred K. Warmuth. Analyzing the Performance of Learning Algorithms. Fort Belvoir, VA: Defense Technical Information Center, August 1993. http://dx.doi.org/10.21236/ada327588.

4

Bertsekas, Dimitri P. Algorithms for Learning and Decision Making. Fort Belvoir, VA: Defense Technical Information Center, December 2013. http://dx.doi.org/10.21236/ada591909.

5

Merzlykin, Pavlo, Natalia Kharadzjan, Dmytro Medvedev, Irina Zakarljuka, and Liliia Fadeeva. Scheduling Algorithms Exploring via Robotics Learning. [n.p.], 2018. http://dx.doi.org/10.31812/123456789/2877.

Abstract:
A new approach to learning schedule-related problems with the use of robotics is reported. The materials are based on the authors' teaching experience within the framework of the Robotics School at Kryvyi Rih State Pedagogical University. The proposed learning problem may be used both for exploring scheduling algorithms and for robotics competitions.
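To make "scheduling algorithms" concrete for readers outside the field, here is a tiny Python comparison of two classic policies, first-come-first-served and shortest-job-first, on a made-up set of job durations; it is a generic illustration and not material from the report.

    def average_waiting_time(durations):
        """Mean time each job waits before it starts when jobs run in the given order."""
        waits, elapsed = [], 0
        for d in durations:
            waits.append(elapsed)
            elapsed += d
        return sum(waits) / len(waits)

    jobs = [8, 2, 5, 1, 9, 3]                  # hypothetical task durations

    fcfs = jobs                                # first-come, first-served: keep arrival order
    sjf = sorted(jobs)                         # shortest-job-first: run quick tasks first

    print("FCFS average wait:", average_waiting_time(fcfs))
    print("SJF average wait:", average_waiting_time(sjf))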
6

Varastehpour, Soheil, Hamid Sharifzadeh, and Iman Ardekani. A Comprehensive Review of Deep Learning Algorithms. Unitec ePress, 2021. http://dx.doi.org/10.34074/ocds.092.

Abstract:
Deep learning algorithms are a subset of machine learning algorithms that aim to explore several levels of the distributed representations from the input data. Recently, many deep learning algorithms have been proposed to solve traditional artificial intelligence problems. In this review paper, some of the up-to-date algorithms of this topic in the field of computer vision and image processing are reviewed. Following this, a brief overview of several different deep learning methods and their recent developments are discussed.
7

DeJong, Kenneth A., and William M. Spears. Learning Concept Classification Rules using Genetic Algorithms. Fort Belvoir, VA: Defense Technical Information Center, January 1990. http://dx.doi.org/10.21236/ada294470.

8

Moody, John E. New Neural Algorithms for Self-Organized Learning. Fort Belvoir, VA: Defense Technical Information Center, November 1991. http://dx.doi.org/10.21236/ada249816.

9

Moody, John E. New Neural Algorithms for Self-Organized Learning. Fort Belvoir, VA: Defense Technical Information Center, July 1991. http://dx.doi.org/10.21236/ada251771.

10

Alwan, Iktimal, Dennis D. Spencer, and Rafeed Alkawadri. Comparison of Machine Learning Algorithms in Sensorimotor Functional Mapping. Progress in Neurobiology, December 2023. http://dx.doi.org/10.60124/j.pneuro.2023.30.03.

Abstract:
Objective: To compare the performance of popular machine learning (ML) algorithms in mapping the sensorimotor cortex (SM) and identifying the anterior lip of the central sulcus (CS). Methods: We evaluated support vector machines (SVMs), random forest (RF), decision trees (DT), single layer perceptron (SLP), and multilayer perceptron (MLP) against standard logistic regression (LR) to identify the SM cortex, employing validated features from six minutes of NREM sleep icEEG data and applying standard common hyperparameters and 10-fold cross-validation. Each algorithm was tested using vetted features based on the statistical significance of classical univariate analysis (p<0.05) and an extended set of 17 features representing power/coherence of different frequency bands, entropy, and interelectrode-based distance. The analysis was performed before and after weight adjustment for imbalanced data (w). Results: 7 subjects and 376 contacts were included. Before optimization, ML algorithms performed comparably employing conventional features (median CS accuracy: 0.89, IQR [0.88-0.9]). After optimization, neural networks outperformed the others in terms of accuracy (MLP: 0.86), the area under the curve (AUC) (SLPw, MLPw, MLP: 0.91), recall (SLPw: 0.82, MLPw: 0.81), precision (SLPw: 0.84), and F1-scores (SLPw: 0.82). SVM achieved the best specificity performance. Extending the number of features and adjusting the weights improved recall, precision, and F1-scores by 48.27%, 27.15%, and 39.15%, respectively, with gains or no significant losses in specificity and AUC across CS and Function (correlation r=0.71 between the two clinical scenarios in all performance metrics, p<0.001). Interpretation: Computational passive sensorimotor mapping is feasible and reliable. Feature extension and weight adjustments improve the performance and counterbalance the accuracy paradox. Optimized neural networks outperform other ML algorithms even in binary classification tasks. The best-performing models and the MATLAB® routine employed in signal processing are available to the public at (Link 1).
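The comparison protocol summarized above (several standard classifiers, 10-fold cross-validation, class weighting for imbalanced data) can be sketched generically as follows; the synthetic data stands in for the study's icEEG features, the hyperparameters are defaults rather than the authors' choices, and MLPClassifier has no class_weight option, so weighting is applied only where scikit-learn supports it.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import StratifiedKFold, cross_val_score
    from sklearn.neural_network import MLPClassifier
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier

    # Synthetic imbalanced stand-in for contacts labelled sensorimotor vs. non-sensorimotor.
    X, y = make_classification(n_samples=400, n_features=17, weights=[0.8, 0.2],
                               random_state=0)

    models = {
        "LR": LogisticRegression(max_iter=1000, class_weight="balanced"),
        "SVM": SVC(class_weight="balanced"),
        "RF": RandomForestClassifier(class_weight="balanced", random_state=0),
        "DT": DecisionTreeClassifier(class_weight="balanced", random_state=0),
        "MLP": MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
    }
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    for name, model in models.items():
        accuracy = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
        f1 = cross_val_score(model, X, y, cv=cv, scoring="f1")
        print(f"{name}: accuracy {accuracy.mean():.3f}, F1 {f1.mean():.3f}")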