Scientific literature on the topic "Machine learning, Global Optimization"

Create accurate references in APA, MLA, Chicago, Harvard, and many other citation styles

Choose a source:

Consult thematic lists of journal articles, books, theses, conference reports, and other academic sources on the topic "Machine learning, Global Optimization."

Next to each source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever this information is included in the metadata.

Journal articles on the topic "Machine learning, Global Optimization"

1

Cassioli, A., D. Di Lorenzo, M. Locatelli, F. Schoen, and M. Sciandrone. "Machine learning for global optimization." Computational Optimization and Applications 51, no. 1 (May 5, 2010): 279–303. http://dx.doi.org/10.1007/s10589-010-9330-x.

2

Kudyshev, Zhaxylyk A., Alexander V. Kildishev, Vladimir M. Shalaev, and Alexandra Boltasseva. "Machine learning–assisted global optimization of photonic devices." Nanophotonics 10, no. 1 (October 28, 2020): 371–83. http://dx.doi.org/10.1515/nanoph-2020-0376.

Abstract:
Over the past decade, artificially engineered optical materials and nanostructured thin films have revolutionized the area of photonics by employing novel concepts of metamaterials and metasurfaces where spatially varying structures yield tailorable "by design" effective electromagnetic properties. The current state-of-the-art approach to designing and optimizing such structures relies heavily on simplistic, intuitive shapes for their unit cells or metaatoms. Such an approach cannot provide the global solution to a complex optimization problem where metaatom shape, in-plane geometry, out-of-plane architecture, and constituent materials have to be properly chosen to yield the maximum performance. In this work, we present a novel machine learning–assisted global optimization framework for photonic metadevice design. We demonstrate that using an adversarial autoencoder (AAE) coupled with a metaheuristic optimization framework significantly enhances the optimization search efficiency of the metadevice configurations with complex topologies. We showcase the concept of physics-driven compressed design space engineering that introduces advanced regularization into the compressed space of an AAE based on the optical responses of the devices. Beyond the significant advancement of the global optimization schemes, our approach can assist in gaining comprehensive design "intuition" by revealing the underlying physics of the optical performance of metadevices with complex topologies and material compositions.
3

Abdul Salam, Mustafa, Ahmad Taher Azar, and Rana Hussien. "Swarm-Based Extreme Learning Machine Models for Global Optimization." Computers, Materials & Continua 70, no. 3 (2022): 6339–63. http://dx.doi.org/10.32604/cmc.2022.020583.

4

TAKAMATSU, Ryosuke, and Wataru YAMAZAKI. "Global topology optimization of supersonic airfoil using machine learning technologies." Proceedings of The Computational Mechanics Conference 2021.34 (2021): 112. http://dx.doi.org/10.1299/jsmecmd.2021.34.112.

5

Tsoulos, Ioannis G., Alexandros Tzallas, Evangelos Karvounis, and Dimitrios Tsalikakis. "NeuralMinimizer: A Novel Method for Global Optimization." Information 14, no. 2 (January 25, 2023): 66. http://dx.doi.org/10.3390/info14020066.

Abstract:
Finding the global minimum of multidimensional functions arises in a wide range of problems. An innovative method for doing so is presented here. The method first builds an approximation of the objective function from only a few real samples, fitting a machine learning model to them. Sampling is then performed on this approximation rather than on the true objective. The approximation is further refined at each step by adding the local minima found so far to the training set of the machine learning model. As a termination criterion, the technique uses a widely adopted rule from the relevant literature, evaluated after each run of the local minimizer. The technique was applied to a number of well-known benchmark problems, and the comparative results against modern global minimization techniques are extremely promising.
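The loop this abstract describes (fit a cheap model to a few true samples, sample the model instead of the objective, and feed each newly found local minimum back into the training set) can be sketched in a few lines. The sketch below is illustrative only: it substitutes a k-nearest-neighbour regressor for the paper's neural network, and the test objective, bounds, and all parameter values are invented.

```python
import random
import math

def sphere(x):
    # Toy test objective: global minimum 0 at the origin.
    return sum(xi * xi for xi in x)

def knn_surrogate(train, x, k=3):
    # Predict f(x) as the mean of the k nearest training samples.
    nearest = sorted(train, key=lambda p: math.dist(p[0], x))[:k]
    return sum(fv for _, fv in nearest) / len(nearest)

def local_search(f, x, step=0.1, iters=100):
    # Crude coordinate-descent local minimizer (stand-in for a real one).
    x = list(x)
    for _ in range(iters):
        for i in range(len(x)):
            for d in (-step, step):
                y = list(x)
                y[i] += d
                if f(y) < f(x):
                    x = y
    return x

def surrogate_minimize(f, dim=2, n_init=10, rounds=5, n_cand=200, seed=0):
    rng = random.Random(seed)
    sample = lambda: [rng.uniform(-5, 5) for _ in range(dim)]
    # A few true samples seed the surrogate's training set.
    train = [(x, f(x)) for x in (sample() for _ in range(n_init))]
    best = min(train, key=lambda p: p[1])
    for _ in range(rounds):
        # Cheap sampling on the surrogate picks a promising start point...
        cands = [sample() for _ in range(n_cand)]
        x0 = min(cands, key=lambda x: knn_surrogate(train, x))
        # ...which is refined by true local minimization; the local minimum
        # is fed back into the surrogate's training set.
        xloc = local_search(f, x0)
        train.append((xloc, f(xloc)))
        best = min(best, (xloc, f(xloc)), key=lambda p: p[1])
    return best
```

In a real setting the budget of true objective evaluations, not wall-clock time, is the scarce resource; the sketch ignores that bookkeeping.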
6

Honda, M., and E. Narita. "Machine-learning assisted steady-state profile predictions using global optimization techniques." Physics of Plasmas 26, no. 10 (October 2019): 102307. http://dx.doi.org/10.1063/1.5117846.

7

Wu, Shaohua, Yong Hu, Wei Wang, Xinyong Feng, and Wanneng Shu. "Application of Global Optimization Methods for Feature Selection and Machine Learning." Mathematical Problems in Engineering 2013 (2013): 1–8. http://dx.doi.org/10.1155/2013/241517.

Abstract:
Feature selection is a commonly encountered global combinatorial optimization problem: it reduces the number of features by removing irrelevant and redundant data. This paper proposes a novel immune clonal genetic algorithm, based on the immune clonal algorithm, designed to solve the feature selection problem. The algorithm gains exploration and exploitation ability from clonal selection theory, and each antibody in the search space encodes a subset of the possible features. Experimental results show that the proposed algorithm simplifies the feature selection process effectively and obtains higher classification accuracy than other feature selection algorithms.
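As a rough illustration of the clonal-selection idea (antibodies encode feature subsets; the fittest are cloned and hypermutated), here is a toy sketch. The dataset, the leave-one-out 1-NN fitness, and every parameter are invented for illustration and are not taken from the paper.

```python
import random

def make_data(rng, n=60, d=6):
    # Toy dataset: only features 0 and 1 carry class signal; the rest is noise.
    data = []
    for _ in range(n):
        label = rng.randrange(2)
        x = [rng.gauss(0, 1) for _ in range(d)]
        x[0] += 3 * label
        x[1] -= 3 * label
        data.append((x, label))
    return data

def accuracy(data, mask):
    # Leave-one-out 1-nearest-neighbour accuracy on the selected features.
    feats = [i for i, m in enumerate(mask) if m]
    if not feats:
        return 0.0
    def dist(a, b):
        return sum((a[i] - b[i]) ** 2 for i in feats)
    hits = 0
    for j, (x, y) in enumerate(data):
        nn = min((p for k, p in enumerate(data) if k != j),
                 key=lambda p: dist(p[0], x))
        hits += nn[1] == y
    return hits / len(data)

def clonal_select(data, d=6, pop=8, gens=15, seed=1):
    rng = random.Random(seed)
    # Fitness rewards accuracy and lightly penalizes large subsets.
    fitness = lambda m: accuracy(data, m) - 0.01 * sum(m)
    antibodies = [[rng.randrange(2) for _ in range(d)] for _ in range(pop)]
    for _ in range(gens):
        antibodies.sort(key=fitness, reverse=True)
        clones = []
        for m in antibodies[:pop // 2]:        # clone the best antibodies...
            for _ in range(2):
                c = m[:]
                c[rng.randrange(d)] ^= 1       # ...with hypermutation
                clones.append(c)
        antibodies = sorted(antibodies + clones, key=fitness, reverse=True)[:pop]
    return antibodies[0]
```

The elitist truncation keeps the best antibody monotonically improving, which is the property the assertion below relies on.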
8

Ma, Sicong, Cheng Shang, Chuan-Ming Wang, and Zhi-Pan Liu. "Thermodynamic rules for zeolite formation from machine learning based global optimization." Chemical Science 11, no. 37 (2020): 10113–18. http://dx.doi.org/10.1039/d0sc03918g.

Abstract:
Machine learning based atomic simulation explores more than one million minima on the global potential energy surface of the SiAlPO system and identifies thermodynamic rules on energetics, framework, and composition for stable zeolites.
9

Huang, Si-Da, Cheng Shang, Pei-Lin Kang, and Zhi-Pan Liu. "Atomic structure of boron resolved using machine learning and global sampling." Chemical Science 9, no. 46 (2018): 8644–55. http://dx.doi.org/10.1039/c8sc03427c.

10

Barkalov, Konstantin, Ilya Lebedev, and Evgeny Kozinov. "Acceleration of Global Optimization Algorithm by Detecting Local Extrema Based on Machine Learning." Entropy 23, no. 10 (September 28, 2021): 1272. http://dx.doi.org/10.3390/e23101272.

Abstract:
This paper features the study of global optimization problems and numerical methods for their solution. Such problems are computationally expensive since the objective function can be multi-extremal, nondifferentiable, and, as a rule, given in the form of a "black box". This study used a deterministic algorithm for finding the global extremum, based neither on the multistart concept nor on nature-inspired algorithms. The article provides the computational rules of the one-dimensional algorithm and the nested optimization scheme that can be applied to multidimensional problems. Note that the solution complexity of global optimization problems depends essentially on the presence of multiple local extrema. In this paper, we apply machine learning methods to identify the regions of attraction of local minima. Using local optimization algorithms in the selected regions can significantly accelerate the convergence of global search, as it reduces the number of search trials in the vicinity of local minima. The results of computational experiments carried out on several hundred global optimization problems of different dimensionalities confirm the effect of accelerated convergence (in terms of the number of search trials required to solve a problem with a given accuracy).
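The acceleration idea (use previously observed trials to predict which basin of attraction a new start point belongs to, and skip redundant local searches) can be caricatured in one dimension. This is not the authors' deterministic algorithm; the 1-NN "classifier", the test function, and all thresholds below are assumptions made for illustration.

```python
import math

def f(x):
    # Multi-extremal 1-D test function with several local minima.
    return math.sin(3 * x) + 0.1 * x * x

def descend(x, lr=0.01, steps=2000):
    # Plain gradient descent with a central-difference derivative.
    for _ in range(steps):
        g = (f(x + 1e-6) - f(x - 1e-6)) / 2e-6
        x -= lr * g
    return round(x, 2)     # rounding groups starts that reach the same minimum

def global_search(lo=-4.0, hi=4.0, n=81):
    labeled = []           # (start point, local minimum it descends to)
    minima = {}
    for i in range(n):
        x = lo + (hi - lo) * i / (n - 1)
        if labeled:
            # 1-NN "classifier": the nearest start already tried predicts the
            # basin; skip the costly descent when the start is very close.
            x_near, _ = min(labeled, key=lambda p: abs(p[0] - x))
            if abs(x_near - x) < 0.15:
                continue
        m = descend(x)
        labeled.append((x, m))
        minima[m] = f(m)
    return min(minima.items(), key=lambda kv: kv[1])
```

With the 0.15 skip radius on a 0.1-spaced grid, roughly every other descent is avoided while each basin is still sampled.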

Theses on the topic "Machine learning, Global Optimization"

1

Nowak, Hans II (Hans Antoon). "Strategic capacity planning using data science, optimization, and machine learning." Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/126914.

Abstract:
Thesis: M.B.A., Massachusetts Institute of Technology, Sloan School of Management, in conjunction with the Leaders for Global Operations Program at MIT, May, 2020
Thesis: S.M., Massachusetts Institute of Technology, Department of Mechanical Engineering, in conjunction with the Leaders for Global Operations Program at MIT, May, 2020
Cataloged from the official PDF of the thesis.
Includes bibliographical references (pages 101-104).
Raytheon's Circuit Card Assembly (CCA) factory in Andover, MA is Raytheon's largest factory and the largest Department of Defense (DOD) CCA manufacturer in the world. With over 500 operations, it manufactures over 7000 unique parts with a high degree of complexity and varying levels of demand. Recently, the factory has seen an increase in demand, making the ability to continuously analyze factory capacity and strategically plan for future operations much needed. This study seeks to develop a sustainable strategic capacity optimization model and capacity visualization tool that integrates demand data with historical manufacturing data. Through automated data mining algorithms of factory data sources, capacity utilization and overall equipment effectiveness (OEE) for factory operations are evaluated. Machine learning methods are then assessed to gain an accurate estimate of cycle time (CT) throughout the factory. Finally, a mixed-integer nonlinear program (MINLP) integrates the capacity utilization framework and machine learning predictions to compute the optimal strategic capacity planning decisions. Capacity utilization and OEE models are shown to be able to be generated through automated data mining algorithms. Machine learning models are shown to have a mean absolute error (MAE) of 1.55 on predictions for new data, which is 76.3% lower than the current CT prediction error. Finally, the MINLP is solved to optimality within a tolerance of 1.00e-04 and generates resource and production decisions that can be acted upon.
2

Veluscek, Marco. "Global supply chain optimization: a machine learning perspective to improve Caterpillar's logistics operations." Thesis, Brunel University, 2016. http://bura.brunel.ac.uk/handle/2438/13050.

Abstract:
Supply chain optimization is one of the key components of the effective management of a company with a complex manufacturing process and distribution network. Companies with a global presence in particular are motivated to optimize their distribution plans in order to keep their operating costs low and competitive. Changing conditions in the global market and volatile energy prices increase the need for an automatic decision and optimization tool. In recent years, many techniques and applications have been proposed to address the problem of supply chain optimization. However, such techniques are often too problem-specific or too knowledge-intensive to be implemented as an inexpensive, easy-to-use computer system. The effort required to implement an optimization system for a new instance of the problem is quite significant: the development process necessitates the involvement of expert personnel, and the level of automation is low. The aim of this project is to develop a set of strategies capable of increasing the level of automation when developing a new optimization system. An increased level of automation is achieved by focusing on three areas: multi-objective optimization, optimization algorithm usability, and optimization model design. A literature review highlighted the great level of interest in the problem of multi-objective optimization in the research community, but emphasized a lack of standardization in the area and insufficient understanding of the relationship between multi-objective strategies and problems. Experts in optimization and artificial intelligence are interested in improving the usability of the most recent optimization algorithms. They stated the concern that the large number of variants and parameters that characterizes such algorithms affects their potential applicability in real-world environments. Such characteristics are seen as the root cause of the low success of the most recent optimization algorithms in industrial applications. A crucial task in the development of an optimization system is the design of the optimization model. This task is among the most complex in the development process, yet it is still performed mostly manually. The importance and complexity of the task strongly suggest the development of tools to aid the design of optimization models. To address these challenges, the problem of multi-objective optimization is first considered and the most widely adopted techniques to solve it are identified. Such techniques are analyzed and described in detail to increase the level of standardization in the area. Empirical evidence is highlighted to suggest what type of relationship exists between strategies and problem instances. Regarding the optimization algorithm, a classification method is proposed to improve its usability and computational requirements by automatically tuning one of its key parameters, the termination condition. The algorithm assesses the problem complexity and automatically assigns the best termination condition to minimize runtime. The runtime of the optimization system has been reduced by more than 60%. Arguably, the usability of the algorithm has been improved as well, as one of the key configuration tasks can now be completed automatically. Finally, a system is presented to aid the definition of the optimization model through regression analysis. The purpose of the method is to gather as much knowledge about the problem as possible so that defining the optimization model requires less user involvement. The application of the proposed algorithm is estimated to have been able to save almost 1,000 man-weeks in completing the project. The developed strategies have been applied to the problem of Caterpillar's global supply chain optimization. This thesis also describes the process of developing an optimization system for Caterpillar and highlights the challenges and research opportunities identified while undertaking this work. It presents the optimization model designed for Caterpillar's supply chain and the implementation details of the Ant Colony System, the algorithm selected to optimize the supply chain. The system is now used to design the distribution plans of more than 7,000 products, and it has improved Caterpillar's marginal profit on these products by 4.6% on average.
3

Schweidtmann, Artur M. [Verfasser], Alexander [Akademischer Betreuer] Mitsos, and Andreas [Akademischer Betreuer] Schuppert. "Global optimization of processes through machine learning / Artur M. Schweidtmann; Alexander Mitsos, Andreas Schuppert." Aachen: Universitätsbibliothek der RWTH Aachen, 2021. http://d-nb.info/1240690924/34.

4

Taheri, Mehdi. "Machine Learning from Computer Simulations with Applications in Rail Vehicle Dynamics and System Identification." Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/81417.

Abstract:
The application of stochastic modeling for learning the behavior of multibody dynamics models is investigated. The stochastic modeling technique is also known as Kriging or the random function approach. Post-processing data from a simulation run is used to train the stochastic model that estimates the relationship between model inputs, such as the suspension relative displacement and velocity, and the output, for example, the sum of suspension forces. The computational efficiency of Multibody Dynamics (MBD) models can be improved by replacing their computationally intensive subsystems with stochastic predictions. The stochastic modeling technique is able to learn the behavior of a physical system and integrate it into MBD models, resulting in improved real-time simulations and reduced computational effort in models with repeated substructures (for example, modeling a train with a large number of rail vehicles). Since the sampling plan greatly influences the overall accuracy and efficiency of the stochastic predictions, various sampling plans are investigated, and a space-filling Latin Hypercube sampling plan based on the traveling salesman problem (TSP) is suggested for efficiently representing the entire parameter space. The simulation results confirm the expected increase in modeling efficiency, although further research is needed to improve the accuracy of the predictions. The prediction accuracy is expected to improve by employing a sampling strategy that considers the discrete nature of the training data and uses infill criteria that consider the shape of the output function and detect sample regions with high prediction errors. It is recommended that future efforts consider quantifying the computational efficiency of the proposed approach by overcoming the inefficiencies associated with transferring data between multiple software packages, which proved to be a limiting factor in this study.
These limitations can be overcome by using the user subroutine functionality of SIMPACK and adding the stochastic modeling technique to its force library.
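As a sketch of the surrogate idea described above (replace a computationally intensive force subsystem with a trained interpolant), the snippet below uses a Gaussian radial-basis interpolant as a simple stand-in for Kriging. The "expensive" force law, the node grid, and the shape parameter are all invented for illustration.

```python
import math

def expensive_force(x, v):
    # Stand-in for a costly suspension-force subroutine:
    # nonlinear spring plus linear damping.
    return -50.0 * x - 8.0 * x ** 3 - 2.0 * v

def solve(A, b):
    # Gaussian elimination with partial pivoting (dense, small systems only).
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

class RBFSurrogate:
    # Gaussian radial-basis interpolant: a simple stand-in for Kriging.
    def __init__(self, points, values, eps=1.0):
        self.points, self.eps = points, eps
        A = [[self._phi(p, q) for q in points] for p in points]
        self.w = solve(A, values)     # interpolation weights

    def _phi(self, p, q):
        return math.exp(-self.eps * sum((a - b) ** 2 for a, b in zip(p, q)))

    def __call__(self, x, v):
        return sum(w * self._phi((x, v), p) for w, p in zip(self.w, self.points))
```

Training it on a 5×5 grid of (displacement, velocity) samples reproduces the training data exactly (interpolation) and approximates the force between nodes; a real Kriging model would additionally provide a prediction variance.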
Ph. D.
5

Gabere, Musa Nur. "Prediction of antimicrobial peptides using hyperparameter optimized support vector machines." Thesis, University of the Western Cape, 2011. http://etd.uwc.ac.za/index.php?module=etd&action=viewtitle&id=gen8Srv25Nme4_7345_1330684697.

Abstract:

Antimicrobial peptides (AMPs) play a key role in the innate immune response. They can be found ubiquitously in a wide range of eukaryotes including mammals, amphibians, insects, plants, and protozoa. In lower organisms, AMPs function merely as antibiotics by permeabilizing cell membranes and lysing invading microbes. Prediction of antimicrobial peptides is important because the experimental methods used in characterizing AMPs are costly, time-consuming, and resource-intensive, and identification of AMPs in insects can serve as a template for the design of novel antibiotics. To this end, data on antimicrobial peptides are first extracted from UniProt, manually curated, and stored in a centralized database called the Dragon Antimicrobial Peptide Database (DAMPD). Secondly, based on the curated data, models to predict antimicrobial peptides are created using support vector machines with optimized hyperparameters. In particular, global optimization methods such as grid search, pattern search, and derivative-free methods are utilised to optimize the SVM hyperparameters. These models are useful in characterizing unknown antimicrobial peptides. Finally, a web server is created that will be used to predict antimicrobial peptides in haematophagous insects such as Glossina morsitans and Anopheles gambiae.
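The grid-search part of such hyperparameter optimization can be sketched without any ML library. Below, a pure-Python linear SVM trained with the Pegasos sub-gradient method stands in for the real SVM, cross-validated accuracy scores a small grid over the regularization strength, and a toy numeric dataset stands in for peptide features; everything here is an illustrative assumption, not the thesis's setup.

```python
import random

def make_data(rng, n=80):
    # Linearly separable toy data: label = sign(x0 + x1), with a margin.
    data = []
    while len(data) < n:
        x = [rng.uniform(-1, 1), rng.uniform(-1, 1)]
        s = x[0] + x[1]
        if abs(s) > 0.2:                  # keep a gap between the classes
            data.append((x, 1 if s > 0 else -1))
    return data

def train_svm(data, lam, epochs=50, seed=0):
    # Pegasos: stochastic sub-gradient descent on the hinge loss.
    rng = random.Random(seed)
    w = [0.0, 0.0]
    t = 0
    for _ in range(epochs):
        for x, y in rng.sample(data, len(data)):   # shuffled pass
            t += 1
            eta = 1.0 / (lam * t)
            margin = y * (w[0] * x[0] + w[1] * x[1])
            w = [(1 - eta * lam) * wi for wi in w]  # regularization shrinkage
            if margin < 1:                          # hinge sub-gradient step
                w = [wi + eta * y * xi for wi, xi in zip(w, x)]
    return w

def cv_accuracy(data, lam, folds=4):
    hits = 0
    for f in range(folds):
        test = data[f::folds]
        train = [p for i, p in enumerate(data) if i % folds != f]
        w = train_svm(train, lam)
        hits += sum((w[0] * x[0] + w[1] * x[1] > 0) == (y > 0) for x, y in test)
    return hits / len(data)

def grid_search(data, grid=(0.001, 0.01, 0.1, 1.0)):
    # Exhaustive search over the regularization strength, scored by CV.
    return max(grid, key=lambda lam: cv_accuracy(data, lam))
```

Pattern search or other derivative-free methods would replace the exhaustive `grid` loop with an adaptive sequence of trial points; the scoring function stays the same.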

6

Belkhir, Nacim. "Per Instance Algorithm Configuration for Continuous Black Box Optimization." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS455/document.

Abstract:
This PhD thesis focuses on automated algorithm configuration, which aims at finding the best parameter setting for a given problem or class of problems. The algorithm configuration problem thus amounts to a meta-optimization problem in the space of parameters, whose meta-objective is the performance measure of the given algorithm with a given parameter configuration. However, in the continuous domain, such a method can only be empirically assessed at the cost of running the algorithm on some problem instances. More recent approaches rely on a description of problems in some feature space, and try to learn a mapping from this feature space onto the space of parameter configurations of the algorithm at hand. Along these lines, this thesis focuses on Per Instance Algorithm Configuration (PIAC) for solving continuous black-box optimization problems, where only a limited budget of function evaluations is available. We first survey evolutionary algorithms for continuous optimization, with a focus on the two algorithms that we have used as target algorithms for PIAC, DE and CMA-ES. Next, we review the state of the art of algorithm configuration approaches, and the different features that have been proposed in the literature to describe continuous black-box optimization problems. We then introduce a general methodology to empirically study PIAC for the continuous domain, so that all the components of PIAC can be explored in real-world conditions. To this end, we also introduce a new continuous black-box test bench, distinct from the famous BBOB benchmark, composed of several multi-dimensional test functions with different problem properties, gathered from the literature. The methodology is finally applied to two EAs. First we use Differential Evolution as the target algorithm and explore all the components of PIAC so as to empirically identify the best ones. Second, based on the results on DE, we empirically investigate PIAC with the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) as the target algorithm. Both use cases empirically validate the proposed methodology on the new black-box test bench for dimensions up to 100.
7

Liu, Liu. "Stochastic Optimization in Machine Learning." Thesis, The University of Sydney, 2019. http://hdl.handle.net/2123/19982.

Abstract:
Stochastic optimization has received extensive attention in recent years due to its great potential for solving large-scale optimization problems. However, classical optimization algorithms and the original stochastic methods may prove inefficient because 1) the cost per iteration is a computational challenge, and 2) their convergence and complexity guarantees are poor. In this thesis, we exploit stochastic optimization across three "orders" of optimization to address the problem. For stochastic zero-order optimization, we introduce a novel variance-reduction-based method under Gaussian smoothing and establish its complexity for optimizing non-convex problems. With variance reduction on both the sample space and the search space, the complexity of our algorithm is sublinear in d and strictly better than current approaches, in both smooth and non-smooth cases. Moreover, we extend the proposed method to the mini-batch version. For stochastic first-order optimization, we consider two kinds of functions, with one finite sum and with two finite sums. For the first structure, we apply dual coordinate ascent and acceleration to propose a general scheme for a doubly accelerated stochastic method that deals with ill-conditioned problems. For the second structure, we apply the variance-reduction technique to stochastic composition problems, whose inner and outer finite-sum functions have a large number of component functions; variance reduction significantly improves the query complexity when the number of inner component functions is sufficiently large.
For the stochastic second-order optimization, we study a family of stochastic trust region and cubic regularization methods when gradient, Hessian and function values are computed inexactly, and show the iteration complexity to achieve $\epsilon$-approximate second-order optimality is in the same order with previous work for which gradient and function values are computed exactly. The mild conditions on inexactness can be achieved in finite-sum minimization using random sampling.
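The zero-order ingredient mentioned above (estimating a gradient from function values alone via Gaussian smoothing) admits a compact sketch. This shows only the basic smoothing estimator, without the thesis's variance-reduction machinery, and all step sizes and sample counts are invented.

```python
import random

def zo_gradient(f, x, mu=1e-4, samples=20, rng=random):
    # Gaussian-smoothing gradient estimate:
    #   g ≈ E_u[ (f(x + mu*u) - f(x)) / mu * u ],  u ~ N(0, I)
    d = len(x)
    g = [0.0] * d
    fx = f(x)
    for _ in range(samples):
        u = [rng.gauss(0, 1) for _ in range(d)]
        scale = (f([xi + mu * ui for xi, ui in zip(x, u)]) - fx) / mu
        g = [gi + scale * ui / samples for gi, ui in zip(g, u)]
    return g

def zo_minimize(f, x0, lr=0.05, iters=300, seed=0):
    # Gradient descent driven entirely by function evaluations.
    rng = random.Random(seed)
    x = list(x0)
    for _ in range(iters):
        g = zo_gradient(f, x, rng=rng)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x
```

Each iteration costs `samples + 1` function evaluations; variance-reduced schemes aim to cut that query count for the same accuracy.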
8

Leblond, Rémi. "Asynchronous optimization for machine learning." Thesis, Paris Sciences et Lettres (ComUE), 2018. http://www.theses.fr/2018PSLEE057/document.

Abstract:
The impressive breakthroughs of the last two decades in the field of machine learning can be attributed in large part to the explosion of computing power and available data. These two limiting factors have been replaced by a new bottleneck: algorithms. The focus of this thesis is thus on introducing novel methods that can take advantage of high data quantity and computing power. We present two independent contributions. First, we develop and analyze novel fast optimization algorithms which take advantage of advances in parallel computing architecture and can handle vast amounts of data. We introduce a new framework of analysis for asynchronous parallel incremental algorithms, which enables correct and simple proofs. We then demonstrate its usefulness by performing the convergence analysis for several methods, including two novel algorithms. Asaga is a sparse asynchronous parallel variant of the variance-reduced algorithm Saga which enjoys fast linear convergence rates on smooth and strongly convex objectives. We prove that it can be linearly faster than its sequential counterpart, even without sparsity assumptions. ProxAsaga is an extension of Asaga to the more general setting where the regularizer can be non-smooth. We prove that it can also achieve a linear speedup. We provide extensive experiments comparing our new algorithms to the current state of the art. Second, we introduce new methods for complex structured prediction tasks. We focus on recurrent neural networks (RNNs), whose traditional training algorithm, based on maximum likelihood estimation (MLE), suffers from several issues. The associated surrogate training loss notably ignores the information contained in structured losses and introduces discrepancies between train and test time that may hurt performance. To alleviate these problems, we propose SeaRNN, a novel training algorithm for RNNs inspired by the "learning to search" approach to structured prediction. SeaRNN leverages test-like search space exploration to introduce global-local losses that are closer to the test error than the MLE objective. We demonstrate improved performance over MLE on three challenging tasks, and provide several subsampling strategies to enable SeaRNN to scale to large-scale tasks, such as machine translation. Finally, after contrasting the behavior of SeaRNN models to MLE models, we conduct an in-depth comparison of our new approach to the related work.
9

Bai, Hao. « Machine learning assisted probabilistic prediction of long-term fatigue damage and vibration reduction of wind turbine tower using active damping system ». Thesis, Normandie, 2021. http://www.theses.fr/2021NORMIR01.

Full text
Abstract:
This dissertation is devoted to the development of an active damping system for vibration reduction of wind turbine towers under gusty and turbulent wind. The presence of vibrations often leads to either an ultimate deflection at the top of the wind tower or a failure due to material fatigue near the bottom of the tower. Furthermore, given the random nature of wind conditions, it is indispensable to look at this problem from a probabilistic point of view. In this work, a probabilistic framework for fatigue analysis is developed and improved by using a residual neural network. A damping system employing an active damper, the Twin Rotor Damper, is designed for the NREL 5MW reference wind turbine. The design is optimized by an evolutionary algorithm with an automatic parameter tuning method based on exploitation and exploration.
10

Chang, Allison An. « Integer optimization methods for machine learning ». Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/72643.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2012.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (p. 129-137).
In this thesis, we propose new mixed integer optimization (MIO) methods to address problems in machine learning. The first part develops methods for supervised bipartite ranking, which arises in prioritization tasks in diverse domains such as information retrieval, recommender systems, natural language processing, bioinformatics, and preventative maintenance. The primary advantage of using MIO for ranking is that it allows for direct optimization of ranking quality measures, as opposed to current state-of-the-art algorithms that use heuristic loss functions. We demonstrate using a number of datasets that our approach can outperform other ranking methods. The second part of the thesis focuses on reverse-engineering ranking models. This is an application of a more general ranking problem than the bipartite case. Quality rankings affect business for many organizations, and knowing the ranking models would allow these organizations to better understand the standards by which their products are judged and help them to create higher quality products. We introduce an MIO method for reverse-engineering such models and demonstrate its performance in a case-study with real data from a major ratings company. We also devise an approach to find the most cost-effective way to increase the rank of a certain product. In the final part of the thesis, we develop MIO methods to first generate association rules and then use the rules to build an interpretable classifier in the form of a decision list, which is an ordered list of rules. These are both combinatorially challenging problems because even a small dataset may yield a large number of rules and a small set of rules may correspond to many different orderings. We show how to use MIO to mine useful rules, as well as to construct a classifier from them. We present results in terms of both classification accuracy and interpretability for a variety of datasets.
by Allison An Chang.
Ph.D.
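The decision-list classifier described in the final part can be illustrated independently of the MIO machinery: an ordered list of rules in which the first rule that fires decides the label. The rules and records below are hypothetical examples, not drawn from the thesis; the thesis selects and orders such rules via mixed integer optimization rather than by hand:

```python
def classify(decision_list, default, record):
    """Apply an ordered rule list: the first condition that fires
    determines the label; otherwise fall back to the default."""
    for condition, label in decision_list:
        if condition(record):
            return label
    return default

# Hypothetical mined rules, ordered by priority (earlier rules win).
rules = [
    (lambda r: r["income"] < 20 and r["debt"] > 10, "deny"),
    (lambda r: r["income"] > 80, "approve"),
    (lambda r: r["debt"] > 50, "deny"),
]
label = classify(rules, "review", {"income": 90, "debt": 5})
```

Because evaluation stops at the first matching rule, the ordering itself carries information, which is why choosing it is a combinatorial problem.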

Books on the topic « Machine learning, Global Optimization »

1

Sra, Suvrit, Sebastian Nowozin et Stephen J. Wright, dir. Optimization for Machine Learning. Cambridge, Mass. : MIT Press, 2012.

Find full text
2

Lin, Zhouchen, Huan Li et Cong Fang. Accelerated Optimization for Machine Learning. Singapore : Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-2910-8.

Full text
3

Agrawal, Tanay. Hyperparameter Optimization in Machine Learning. Berkeley, CA : Apress, 2021. http://dx.doi.org/10.1007/978-1-4842-6579-6.

Full text
4

Fazelnia, Ghazal. Optimization for Probabilistic Machine Learning. [New York, N.Y.?] : [publisher not identified], 2019.

Find full text
5

Nicosia, Giuseppe, Varun Ojha, Emanuele La Malfa, Gabriele La Malfa, Giorgio Jansen, Panos M. Pardalos, Giovanni Giuffrida et Renato Umeton, dir. Machine Learning, Optimization, and Data Science. Cham : Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-95470-3.

Full text
6

Nicosia, Giuseppe, Varun Ojha, Emanuele La Malfa, Gabriele La Malfa, Giorgio Jansen, Panos M. Pardalos, Giovanni Giuffrida et Renato Umeton, dir. Machine Learning, Optimization, and Data Science. Cham : Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-95467-3.

Full text
7

Jiang, Jiawei, Bin Cui et Ce Zhang. Distributed Machine Learning and Gradient Optimization. Singapore : Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-3420-8.

Full text
8

Pardalos, Panos, Mario Pavone, Giovanni Maria Farinella et Vincenzo Cutello, dir. Machine Learning, Optimization, and Big Data. Cham : Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-27926-8.

Full text
9

Nicosia, Giuseppe, Panos Pardalos, Renato Umeton, Giovanni Giuffrida et Vincenzo Sciacca, dir. Machine Learning, Optimization, and Data Science. Cham : Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-37599-7.

Full text
10

Kulkarni, Anand J., et Suresh Chandra Satapathy, dir. Optimization in Machine Learning and Applications. Singapore : Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-0994-0.

Full text

Book chapters on the topic « Machine learning, Global Optimization »

1

Kearfott, Ralph Baker. « Mathematically Rigorous Global Optimization and Fuzzy Optimization ». Dans Black Box Optimization, Machine Learning, and No-Free Lunch Theorems, 169–94. Cham : Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-66515-9_7.

Full text
2

de Winter, Roy, Bas van Stein, Matthys Dijkman et Thomas Bäck. « Designing Ships Using Constrained Multi-objective Efficient Global Optimization ». Dans Machine Learning, Optimization, and Data Science, 191–203. Cham : Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-13709-0_16.

Full text
3

Cocola, Jorio, et Paul Hand. « Global Convergence of Sobolev Training for Overparameterized Neural Networks ». Dans Machine Learning, Optimization, and Data Science, 574–86. Cham : Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-64583-0_51.

Full text
4

Zabinsky, Zelda B., Giulia Pedrielli et Hao Huang. « A Framework for Multi-fidelity Modeling in Global Optimization Approaches ». Dans Machine Learning, Optimization, and Data Science, 335–46. Cham : Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-37599-7_28.

Full text
5

Griewank, Andreas, et Ángel Rojas. « Treating Artificial Neural Net Training as a Nonsmooth Global Optimization Problem ». Dans Machine Learning, Optimization, and Data Science, 759–70. Cham : Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-37599-7_64.

Full text
6

Issa, Mohamed, Aboul Ella Hassanien et Ibrahim Ziedan. « Performance Evaluation of Sine-Cosine Optimization Versus Particle Swarm Optimization for Global Sequence Alignment Problem ». Dans Machine Learning Paradigms : Theory and Application, 375–91. Cham : Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-02357-7_18.

Full text
7

Wang, Yong-Jun, Jiang-She Zhang et Yu-Fen Zhang. « An Effective and Efficient Two Stage Algorithm for Global Optimization ». Dans Advances in Machine Learning and Cybernetics, 487–96. Berlin, Heidelberg : Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11739685_51.

Full text
8

Kiranyaz, Serkan, Turker Ince et Moncef Gabbouj. « Improving Global Convergence ». Dans Multidimensional Particle Swarm Optimization for Machine Learning and Pattern Recognition, 101–49. Berlin, Heidelberg : Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-37846-1_5.

Full text
9

Consoli, Sergio, Luca Tiozzo Pezzoli et Elisa Tosetti. « Using the GDELT Dataset to Analyse the Italian Sovereign Bond Market ». Dans Machine Learning, Optimization, and Data Science, 190–202. Cham : Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-64583-0_18.

Full text
Abstract:
The Global Data on Events, Location, and Tone (GDELT) is a real-time, large-scale database of global human society for open research, which monitors the world’s broadcast, print, and web news, creating a free open platform for computing on the entire world’s media. In this work, we first describe a data crawler, which collects metadata of the GDELT database in real-time and stores them in a big data management system based on Elasticsearch, a popular and efficient search engine relying on the Lucene library. Then, by exploiting and engineering the detailed information of each news item encoded in GDELT, we build indicators capturing investors’ emotions which are useful to analyse the sovereign bond market in Italy. By using regression analysis and by exploiting the power of Gradient Boosting models from machine learning, we find that the features extracted from GDELT improve the forecast of country government yield spread, relative to that of a baseline regression where only conventional regressors are included. The improvement in fit is particularly relevant during the period of government crisis in May–December 2018.
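The modeling idea in the abstract (fit a gradient-boosted ensemble of weak learners and compare its fit against a baseline) can be sketched with a stdlib-only toy. The stump learner, step-shaped data, and hyperparameters below are illustrative assumptions, not the paper's setup:

```python
def fit_stump(xs, res):
    """Least-squares regression stump on 1-D inputs: pick the threshold
    and left/right means that minimize squared error on the residuals."""
    best = None
    for thr in sorted(set(xs))[:-1]:
        left = [r for x, r in zip(xs, res) if x <= thr]
        right = [r for x, r in zip(xs, res) if x > thr]
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = sum((r - (lm if x <= thr else rm)) ** 2 for x, r in zip(xs, res))
        if best is None or sse < best[0]:
            best = (sse, thr, lm, rm)
    return best[1:]

def boost(xs, ys, rounds=60, lr=0.1):
    """Gradient boosting for squared loss: each round fits a stump to the
    current residuals and adds a damped copy of it to the model."""
    base = sum(ys) / len(ys)
    pred = [base] * len(ys)
    model = [base]
    for _ in range(rounds):
        res = [y - p for y, p in zip(ys, pred)]
        thr, lm, rm = fit_stump(xs, res)
        model.append((thr, lr * lm, lr * rm))
        pred = [p + (lr * lm if x <= thr else lr * rm) for x, p in zip(xs, pred)]
    return model

def predict(model, x):
    return model[0] + sum(l if x <= thr else r for thr, l, r in model[1:])

# Toy step-shaped target; the paper instead uses many engineered GDELT features.
xs = [0, 1, 2, 3, 4, 5]
ys = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
model = boost(xs, ys)
```

Each round shrinks the remaining residual by roughly the learning rate, which is why many damped weak learners outperform one strong fit.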
10

Rodrigues, Douglas, Gustavo Henrique de Rosa, Leandro Aparecido Passos et João Paulo Papa. « Adaptive Improved Flower Pollination Algorithm for Global Optimization ». Dans Nature-Inspired Computation in Data Mining and Machine Learning, 1–21. Cham : Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28553-1_1.

Full text

Conference papers on the topic « Machine learning, Global Optimization »

1

He, Yi-chao, et Kun-qi Liu. « A Modified Particle Swarm Optimization for Solving Global Optimization Problems ». Dans 2006 International Conference on Machine Learning and Cybernetics. IEEE, 2006. http://dx.doi.org/10.1109/icmlc.2006.258615.

Full text
2

Tamura, Kenichi, et Keiichiro Yasuda. « Spiral Multipoint Search for Global Optimization ». Dans 2011 Tenth International Conference on Machine Learning and Applications (ICMLA). IEEE, 2011. http://dx.doi.org/10.1109/icmla.2011.131.

Full text
3

Wang, Yong-Jun, Jiang-She Zhang et Yu-Fen Zhang. « A fast hybrid algorithm for global optimization ». Dans Proceedings of 2005 International Conference on Machine Learning and Cybernetics. IEEE, 2005. http://dx.doi.org/10.1109/icmlc.2005.1527462.

Full text
4

Sun, Gao-Ji. « A new evolutionary algorithm for global numerical optimization ». Dans 2010 International Conference on Machine Learning and Cybernetics (ICMLC). IEEE, 2010. http://dx.doi.org/10.1109/icmlc.2010.5580961.

Full text
5

Nacef, Abdelhakim, Miloud Bagaa, Youcef Aklouf, Abdellah Kaci, Diego Leonel Cadette Dutra et Adlen Ksentini. « Self-optimized network : When Machine Learning Meets Optimization ». Dans GLOBECOM 2021 - 2021 IEEE Global Communications Conference. IEEE, 2021. http://dx.doi.org/10.1109/globecom46510.2021.9685681.

Full text
6

Li, Xue-Qiang, Zhi-Feng Hao et Han Huang. « An evolutionary algorithm with sorted race mechanism for global optimization ». Dans 2010 International Conference on Machine Learning and Cybernetics (ICMLC). IEEE, 2010. http://dx.doi.org/10.1109/icmlc.2010.5580810.

Full text
7

Injadat, MohammadNoor, Fadi Salo, Ali Bou Nassif, Aleksander Essex et Abdallah Shami. « Bayesian Optimization with Machine Learning Algorithms Towards Anomaly Detection ». Dans GLOBECOM 2018 - 2018 IEEE Global Communications Conference. IEEE, 2018. http://dx.doi.org/10.1109/glocom.2018.8647714.

Full text
8

Candelieri, Antonio, et Francesco Archetti. « Sequential model based optimization with black-box constraints : Feasibility determination via machine learning ». Dans PROCEEDINGS LEGO – 14TH INTERNATIONAL GLOBAL OPTIMIZATION WORKSHOP. Author(s), 2019. http://dx.doi.org/10.1063/1.5089977.

Full text
9

Chen, Chang-Huang. « Bare bone particle swarm optimization with integration of global and local learning strategies ». Dans 2011 International Conference on Machine Learning and Cybernetics (ICMLC). IEEE, 2011. http://dx.doi.org/10.1109/icmlc.2011.6016781.

Full text
10

Soroush, H. M. « Bicriteria single machine scheduling with setup times and learning effects ». Dans PROCEEDINGS OF THE SIXTH GLOBAL CONFERENCE ON POWER CONTROL AND OPTIMIZATION. AIP, 2012. http://dx.doi.org/10.1063/1.4769005.

Full text

Organizational reports on the topic « Machine learning, Global Optimization »

1

Saenz, Juan Antonio, Ismael Djibrilla Boureima, Vitaliy Gyrya et Susan Kurien. Machine-Learning for Rapid Optimization of Turbulence Models. Office of Scientific and Technical Information (OSTI), juillet 2020. http://dx.doi.org/10.2172/1638623.

Full text
2

Gu, Xiaofeng, A. Fedotov et D. Kayran. Application of a machine learning algorithm (XGBoost) to offline RHIC luminosity optimization. Office of Scientific and Technical Information (OSTI), avril 2021. http://dx.doi.org/10.2172/1777441.

Full text
3

Rolf, Esther, Jonathan Proctor, Tamma Carleton, Ian Bolliger, Vaishaal Shankar, Miyabi Ishihara, Benjamin Recht et Solomon Hsiang. A Generalizable and Accessible Approach to Machine Learning with Global Satellite Imagery. Cambridge, MA : National Bureau of Economic Research, novembre 2020. http://dx.doi.org/10.3386/w28045.

Full text
4

Scheinberg, Katya. Derivative Free Optimization of Complex Systems with the Use of Statistical Machine Learning Models. Fort Belvoir, VA : Defense Technical Information Center, septembre 2015. http://dx.doi.org/10.21236/ada622645.

Full text
5

Pilania, Ghanshyam, Kenneth James McClellan, Christopher Richard Stanek et Blas P. Uberuaga. Physics-Informed Machine Learning for Discovery and Optimization of Materials : A Case Study of Scintillators. Office of Scientific and Technical Information (OSTI), août 2018. http://dx.doi.org/10.2172/1463529.

Full text
6

Bao, Jie, Chao Wang, Zhijie Xu et Brian J. Koeppel. Physics-Informed Machine Learning with Application to Solid Oxide Fuel Cell System Modeling and Optimization. Office of Scientific and Technical Information (OSTI), septembre 2019. http://dx.doi.org/10.2172/1569289.

Full text
7

Gabelmann, Jeffrey, et Eduardo Gildin. A Machine Learning-Based Geothermal Drilling Optimization System Using EM Short-Hop Bit Dynamics Measurements. Office of Scientific and Technical Information (OSTI), avril 2020. http://dx.doi.org/10.2172/1842454.

Full text
8

Qi, Fei, Zhaohui Xia, Gaoyang Tang, Hang Yang, Yu Song, Guangrui Qian, Xiong An, Chunhuan Lin et Guangming Shi. A Graph-based Evolutionary Algorithm for Automated Machine Learning. Web of Open Science, décembre 2020. http://dx.doi.org/10.37686/ser.v1i2.77.

Full text
Abstract:
As an emerging field, Automated Machine Learning (AutoML) aims to reduce or eliminate manual operations that require expertise in machine learning. In this paper, a graph-based architecture is employed to represent flexible combinations of ML models, which provides a large search space compared to tree-based and stacking-based architectures. Based on this, an evolutionary algorithm is proposed to search for the best architecture, where the mutation and heredity operators are the key for architecture evolution. With Bayesian hyper-parameter optimization, the proposed approach can automate the workflow of machine learning. On the PMLB dataset, the proposed approach shows state-of-the-art performance compared with TPOT, Autostacker, and auto-sklearn. Some of the optimized models have complex structures that are difficult to obtain by manual design.
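The evolutionary loop the abstract describes (mutate candidate architectures, keep the fittest) can be sketched without any ML machinery. This is a toy mutation-only search over a hypothetical two-parameter space, not the paper's graph-based representation or its heredity operator:

```python
import random

def evolve(fitness, mutate, init, generations=100, pop_size=12, seed=1):
    """Minimal elitist evolutionary loop: keep the best half of the
    population each generation and refill it with mutated survivors."""
    rng = random.Random(seed)
    pop = [init(rng) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(rng.choice(survivors), rng)
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=fitness)

# Hypothetical search space: a made-up score peaking at depth=3, width=64
# stands in for a cross-validated pipeline fitness.
def fitness(cfg):
    return -(cfg["depth"] - 3) ** 2 - ((cfg["width"] - 64) / 16.0) ** 2

def init(rng):
    return {"depth": rng.randint(1, 10), "width": rng.choice([16, 32, 64, 128])}

def mutate(cfg, rng):
    new = dict(cfg)
    if rng.random() < 0.5:
        new["depth"] = max(1, min(10, new["depth"] + rng.choice([-1, 1])))
    else:
        new["width"] = rng.choice([16, 32, 64, 128])
    return new

best = evolve(fitness, mutate, init)
```

Because survivors are carried over unchanged, the best fitness in the population never decreases, so the search can be stopped at any generation and still return its best candidate so far.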
9

Vittorio, Alan, et Kate Calvin. Using machine learning to improve land use/cover characterization and projection for scenario-based global modeling. Office of Scientific and Technical Information (OSTI), avril 2021. http://dx.doi.org/10.2172/1769796.

Full text
10

Wu, S. Boiler Optimization Using Advance Machine Learning Techniques. Final Report for period September 30, 1995 - September 29, 2000. Office of Scientific and Technical Information (OSTI), août 2005. http://dx.doi.org/10.2172/877237.

Full text