Journal articles on the topic "Machine learning, Global Optimization"

To see the other types of publications on this topic, follow the link: Machine learning, Global Optimization.

Create a correct reference in APA, MLA, Chicago, Harvard, and several other styles.

Choose a source:

Consult the top 50 journal articles for your research on the topic "Machine learning, Global Optimization".

Next to each source in the list of references there is an "Add to bibliography" button. Click on this button, and we will automatically generate the bibliographic reference for the chosen work in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication as a PDF and read its abstract online whenever this information is included in the metadata.

Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Cassioli, A., D. Di Lorenzo, M. Locatelli, F. Schoen, and M. Sciandrone. "Machine learning for global optimization". Computational Optimization and Applications 51, no. 1 (May 5, 2010): 279–303. http://dx.doi.org/10.1007/s10589-010-9330-x.

2

Kudyshev, Zhaxylyk A., Alexander V. Kildishev, Vladimir M. Shalaev, and Alexandra Boltasseva. "Machine learning–assisted global optimization of photonic devices". Nanophotonics 10, no. 1 (October 28, 2020): 371–83. http://dx.doi.org/10.1515/nanoph-2020-0376.

Abstract:
Over the past decade, artificially engineered optical materials and nanostructured thin films have revolutionized the area of photonics by employing novel concepts of metamaterials and metasurfaces where spatially varying structures yield tailorable "by design" effective electromagnetic properties. The current state-of-the-art approach to designing and optimizing such structures relies heavily on simplistic, intuitive shapes for their unit cells or metaatoms. Such an approach cannot provide the global solution to a complex optimization problem where metaatom shape, in-plane geometry, out-of-plane architecture, and constituent materials have to be properly chosen to yield the maximum performance. In this work, we present a novel machine learning–assisted global optimization framework for photonic metadevice design. We demonstrate that using an adversarial autoencoder (AAE) coupled with a metaheuristic optimization framework significantly enhances the optimization search efficiency of the metadevice configurations with complex topologies. We showcase the concept of physics-driven compressed design space engineering that introduces advanced regularization into the compressed space of an AAE based on the optical responses of the devices. Beyond the significant advancement of the global optimization schemes, our approach can assist in gaining comprehensive design "intuition" by revealing the underlying physics of the optical performance of metadevices with complex topologies and material compositions.
3

Abdul Salam, Mustafa, Ahmad Taher Azar, and Rana Hussien. "Swarm-Based Extreme Learning Machine Models for Global Optimization". Computers, Materials & Continua 70, no. 3 (2022): 6339–63. http://dx.doi.org/10.32604/cmc.2022.020583.

4

TAKAMATSU, Ryosuke, and Wataru YAMAZAKI. "Global topology optimization of supersonic airfoil using machine learning technologies". Proceedings of The Computational Mechanics Conference 2021.34 (2021): 112. http://dx.doi.org/10.1299/jsmecmd.2021.34.112.

5

Tsoulos, Ioannis G., Alexandros Tzallas, Evangelos Karvounis, and Dimitrios Tsalikakis. "NeuralMinimizer: A Novel Method for Global Optimization". Information 14, no. 2 (January 25, 2023): 66. http://dx.doi.org/10.3390/info14020066.

Abstract:
The problem of finding the global minimum of multidimensional functions arises in a wide range of applications. An innovative method for finding the global minimum of multidimensional functions is presented here. The method first generates an approximation of the objective function using only a few real samples from it; from these samples, the approximation is constructed using a machine learning model. Next, the required sampling is performed on the approximation function rather than the objective. Furthermore, the approximation is improved at each step by using the local minima found as additional samples for the training set of the machine learning model. As a termination criterion, the proposed technique uses a widely used criterion from the relevant literature, which is evaluated after each execution of the local minimization. The proposed technique was applied to a number of well-known problems from the relevant literature, and the comparative results with respect to modern global minimization techniques are extremely promising.
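The loop this abstract describes (fit a cheap model to a few true samples, search the model instead of the objective, and feed each local minimum found back into the training set) can be sketched as follows. Everything concrete here is an illustrative assumption, not the paper's method: the paper trains a neural network, whereas this sketch uses a simple inverse-distance-weighted interpolant as the machine learning model, a Rastrigin-style test objective, and a crude 1-D local search.

```python
import math, random

def objective(x):
    # Rastrigin-style 1-D test function with many local minima (illustrative).
    return x * x - 10.0 * math.cos(2.0 * math.pi * x) + 10.0

def surrogate(samples, x):
    # Inverse-distance-weighted interpolation of the stored (x, f) samples:
    # a cheap stand-in for the paper's machine learning model.
    num = den = 0.0
    for xs, fs in samples:
        d = abs(x - xs)
        if d < 1e-12:
            return fs
        w = 1.0 / d ** 2
        num += w * fs
        den += w
    return num / den

def local_minimize(f, x, step=0.1, iters=200):
    # Crude 1-D local descent on the true objective.
    for _ in range(iters):
        for cand in (x - step, x + step):
            if f(cand) < f(x):
                x = cand
                break
        else:
            step *= 0.5
    return x

def neural_minimizer_sketch(lo=-5.12, hi=5.12, rounds=5, seed=0):
    rng = random.Random(seed)
    samples = [(x, objective(x)) for x in (rng.uniform(lo, hi) for _ in range(8))]
    best = min(samples, key=lambda s: s[1])
    for _ in range(rounds):
        # Sample the cheap surrogate instead of the expensive objective ...
        cands = [rng.uniform(lo, hi) for _ in range(200)]
        start = min(cands, key=lambda x: surrogate(samples, x))
        # ... then refine the promising start on the true objective and
        # enrich the training set with the local minimum found.
        xmin = local_minimize(objective, start)
        samples.append((xmin, objective(xmin)))
        best = min(best, samples[-1], key=lambda s: s[1])
    return best

x, fx = neural_minimizer_sketch()
```

The key economy is that the 200 candidate points per round are scored only by the surrogate; the true objective is evaluated just a handful of times.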
6

Honda, M., and E. Narita. "Machine-learning assisted steady-state profile predictions using global optimization techniques". Physics of Plasmas 26, no. 10 (October 2019): 102307. http://dx.doi.org/10.1063/1.5117846.

7

Wu, Shaohua, Yong Hu, Wei Wang, Xinyong Feng, and Wanneng Shu. "Application of Global Optimization Methods for Feature Selection and Machine Learning". Mathematical Problems in Engineering 2013 (2013): 1–8. http://dx.doi.org/10.1155/2013/241517.

Abstract:
The feature selection process constitutes a commonly encountered problem of global combinatorial optimization: it reduces the number of features by removing irrelevant and redundant data. This paper proposes a novel immune clonal genetic algorithm, based on the immune clonal algorithm, designed to solve the feature selection problem. The proposed algorithm has stronger exploration and exploitation abilities owing to clonal selection theory, and each antibody in the search space specifies a subset of the possible features. Experimental results show that the proposed algorithm simplifies the feature selection process effectively and obtains higher classification accuracy than other feature selection algorithms.
8

Ma, Sicong, Cheng Shang, Chuan-Ming Wang, and Zhi-Pan Liu. "Thermodynamic rules for zeolite formation from machine learning based global optimization". Chemical Science 11, no. 37 (2020): 10113–18. http://dx.doi.org/10.1039/d0sc03918g.

Abstract:
Machine-learning-based atomic simulation explores more than one million minima on the global potential energy surface of the SiAlPO system, and identifies thermodynamic rules on the energetics, framework, and composition of stable zeolites.
9

Huang, Si-Da, Cheng Shang, Pei-Lin Kang, and Zhi-Pan Liu. "Atomic structure of boron resolved using machine learning and global sampling". Chemical Science 9, no. 46 (2018): 8644–55. http://dx.doi.org/10.1039/c8sc03427c.

10

Barkalov, Konstantin, Ilya Lebedev, and Evgeny Kozinov. "Acceleration of Global Optimization Algorithm by Detecting Local Extrema Based on Machine Learning". Entropy 23, no. 10 (September 28, 2021): 1272. http://dx.doi.org/10.3390/e23101272.

Abstract:
This paper features the study of global optimization problems and numerical methods of their solution. Such problems are computationally expensive since the objective function can be multi-extremal, nondifferentiable, and, as a rule, given in the form of a "black box". This study used a deterministic algorithm for finding the global extremum; the algorithm is based neither on the concept of multistart nor on nature-inspired algorithms. The article provides the computational rules of the one-dimensional algorithm and the nested optimization scheme which can be applied to solve multidimensional problems. Note that the solution complexity of global optimization problems essentially depends on the presence of multiple local extrema. In this paper, we apply machine learning methods to identify the regions of attraction of local minima. The use of local optimization algorithms in the selected regions can significantly accelerate the convergence of the global search, as it reduces the number of search trials in the vicinity of local minima. The results of computational experiments carried out on several hundred global optimization problems of different dimensionalities presented in the paper confirm the effect of accelerated convergence (in terms of the number of search trials required to solve a problem with a given accuracy).
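One hedged way to picture the acceleration idea, namely using already-labeled search trials to predict which basin of attraction a new point falls into and skipping descents predicted to be redundant, is the toy sketch below. The nearest-neighbour "classifier", the piecewise test function, and the skip radius are illustrative assumptions and not the deterministic algorithm of the paper.

```python
import random

def f(x):
    # 1-D multi-extremal test function (illustrative): local minima near
    # x = 1, 3 and 5 with values 0.0, 0.5 and 1.0.
    return min((x - 1) ** 2, (x - 3) ** 2 + 0.5, (x - 5) ** 2 + 1.0)

def descend(x, step=0.05, iters=400):
    # Crude local descent; returns the local minimizer rounded for labeling.
    for _ in range(iters):
        if f(x - step) < f(x):
            x -= step
        elif f(x + step) < f(x):
            x += step
        else:
            step *= 0.5
    return round(x, 2)

rng = random.Random(1)
labeled = []          # (start point, local minimizer it descended to)
minima = set()
trials_saved = 0
for _ in range(60):
    x0 = rng.uniform(0.0, 6.0)
    if labeled:
        # Nearest-neighbour "classifier": if a labeled start lies close by,
        # predict that x0 shares its basin of attraction and skip the descent.
        nearest = min(labeled, key=lambda p: abs(p[0] - x0))
        if abs(nearest[0] - x0) < 0.5:
            trials_saved += 1
            continue
    m = descend(x0)
    labeled.append((x0, m))
    minima.add(m)
```

The point of the sketch is the bookkeeping, not the descent itself: most of the 60 trial points are classified as redundant and never trigger a local search.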
11

Wang, Wei-Ching. "Sound localization via deep learning, generative modeling, and global optimization". Journal of the Acoustical Society of America 151, no. 4 (April 2022): A255. http://dx.doi.org/10.1121/10.0011240.

Abstract:
An acoustic lens is capable of focusing incident plane waves at the focal point. This talk will discuss a new approach to designing metamaterials with a focusing effect using machine learning. Specifically, the physics simulations use multiple scattering theory together with machine learning techniques such as deep learning and generative modeling. The 2-D Global Optimization Networks (2-D-GLOnets) model [1], developed initially for acoustic cloak design, is adapted and generalized to design and optimize the acoustic lens. We supply the absolute pressure amplitude and gradient to the deep learning algorithms to discover the optimal scatterer positions that maximize the absolute pressure at the focal point in the confined region. We examine and evaluate the performance of the generative network, searching for an optimal configuration of scatterers over a range of parameters to produce desired features, such as broadband focusing and localization effects. The model will be illustrated with examples of planar configurations of cylindrical structures producing a focusing effect. [1] L. Zhuo and F. Amirkulova, "Design of acoustic cloak using generative modeling and gradient-based optimization," in INTER-NOISE and NOISE-CON Congress and Conference Proceedings (Institute of Noise Control Engineering, 2021), Vol. 3.
12

Zhang, Ao, Yan Liu, Jinguang Yang, Zhi Li, Chuang Zhang, and Yiwen Li. "Machine learning based design optimization of centrifugal impellers". Journal of the Global Power and Propulsion Society 6 (July 25, 2022): 124–34. http://dx.doi.org/10.33737/jgpps/150663.

Abstract:
Big data and machine learning are developing rapidly, and their applications in the aerodynamic design of centrifugal impellers and other turbomachinery have attracted wide attention. In this paper, centrifugal impellers with a large flow coefficient (0.18–0.22) are taken as the research objects. Firstly, through one-dimensional design and optimization, the main one-dimensional geometric parameters of these centrifugal impellers are obtained. Subsequently, hundreds of samples of centrifugal impellers are obtained by using an in-house parameterization program and the Latin hypercube sampling method. The NUMECA software is used for CFD calculations to build a sample library of centrifugal impellers. Then, applying an artificial neural network (ANN) to the data in the sample library, a nonlinear model between the flow coefficients, the geometric parameters of these centrifugal impellers, and the aerodynamic performance is constructed, which can replace CFD calculations. Lastly, with the help of a multi-objective genetic algorithm, a global optimization is carried out to achieve rapid design optimization for centrifugal impellers with flow coefficients in the range of 0.18–0.22. Three examples provided in the paper show that the design and optimization method described above is faster and more reliable than the traditional design method. This method provides a new way for the rapid design of centrifugal impellers.
13

Ha, Seung-Yeal, Shi Jin, and Doheon Kim. "Convergence of a first-order consensus-based global optimization algorithm". Mathematical Models and Methods in Applied Sciences 30, no. 12 (September 19, 2020): 2417–44. http://dx.doi.org/10.1142/s0218202520500463.

Abstract:
Global optimization of a non-convex objective function often appears in large-scale machine learning and artificial intelligence applications. Recently, consensus-based optimization (CBO) methods have been introduced as one of the gradient-free optimization methods. In this paper, we provide a convergence analysis for the first-order CBO method in [J. A. Carrillo, S. Jin, L. Li and Y. Zhu, A consensus-based global optimization method for high dimensional machine learning problems, https://arxiv.org/abs/1909.09249v1 ]. Prior to this work, the convergence study was carried out for CBO methods on the corresponding mean-field limit, a Fokker–Planck equation, which does not imply the convergence of the CBO method per se. Based on the consensus estimate directly on the first-order CBO model, we provide a convergence analysis of the first-order CBO method without resorting to the corresponding mean-field model. Our convergence analysis consists of two steps. In the first step, we show that the CBO model reaches a global consensus asymptotically for any initial data, and in the second step, we provide a sufficient condition on the system parameters, which is dimension independent, and on the initial data which guarantees that the converged consensus state lies in a small neighborhood of the global minimum almost surely.
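For readers unfamiliar with the scheme being analyzed, a minimal 1-D sketch of first-order consensus-based optimization is given below: particles drift toward a softmin-weighted consensus point and carry a multiplicative noise term that vanishes as consensus forms. The double-well objective and all parameter values are illustrative assumptions; the paper's contribution is the convergence analysis itself, not this code.

```python
import math, random

def f(x):
    # Non-convex double well: global minimum at x = 1, local one near x = -1.
    return (x * x - 1.0) ** 2 + 0.25 * (x - 1.0) ** 2

def cbo(n=50, steps=500, lam=1.0, sigma=0.7, beta=30.0, dt=0.01, seed=3):
    rng = random.Random(seed)
    xs = [rng.uniform(-3.0, 3.0) for _ in range(n)]
    for _ in range(steps):
        # Consensus point: a softmin-weighted average of particle positions;
        # the weights exp(-beta * f) concentrate on the best particles.
        fmin = min(f(x) for x in xs)
        ws = [math.exp(-beta * (f(x) - fmin)) for x in xs]
        xbar = sum(w * x for w, x in zip(ws, xs)) / sum(ws)
        # Drift toward consensus plus noise proportional to the distance
        # from consensus; no gradient of f is ever evaluated.
        xs = [x - lam * (x - xbar) * dt
                + sigma * (x - xbar) * math.sqrt(dt) * rng.gauss(0.0, 1.0)
              for x in xs]
    return xbar

x_star = cbo()
```

Subtracting `fmin` before exponentiating keeps the softmin weights numerically stable without changing `xbar`.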
14

Spillard, Samuel, Christopher J. Turner, and Konstantinos Meichanetzidis. "Machine learning entanglement freedom". International Journal of Quantum Information 16, no. 08 (December 2018): 1840002. http://dx.doi.org/10.1142/s0219749918400026.

Abstract:
Quantum many-body systems realize many different phases of matter characterized by their exotic emergent phenomena. While some simple versions of these properties can occur in systems of free fermions, their occurrence generally implies that the physics is dictated by an interacting Hamiltonian. The interaction distance has been successfully used to quantify the effect of interactions in a variety of states of matter via the entanglement spectrum [C. J. Turner, K. Meichanetzidis, Z. Papic and J. K. Pachos, Nat. Commun. 8 (2017) 14926, Phys. Rev. B 97 (2018) 125104]. The computation of the interaction distance reduces to a global optimization problem whose goal is to search for the free-fermion entanglement spectrum closest to the given entanglement spectrum. In this work, we employ techniques from machine learning in order to perform this same task. In a supervised learning setting, we use labeled data obtained by computing the interaction distance and predict its value via linear regression. Moving to a semi-supervised setting, we train an autoencoder to estimate an alternative measure to the interaction distance, and we show that it behaves in a similar manner.
15

Candelieri, Antonio, and Francesco Archetti. "Global optimization in machine learning: the design of a predictive analytics application". Soft Computing 23, no. 9 (November 1, 2018): 2969–77. http://dx.doi.org/10.1007/s00500-018-3597-8.

16

Li, Shijin, and Fucai Wang. "Research on Optimization of Improved Gray Wolf Optimization-Extreme Learning Machine Algorithm in Vehicle Route Planning". Discrete Dynamics in Nature and Society 2020 (October 6, 2020): 1–7. http://dx.doi.org/10.1155/2020/8647820.

Abstract:
With the rapid development of intelligent transportation, intelligent algorithms and path planning have become effective means of relieving traffic pressure. Intelligent algorithms can realize priority selection and thus improve traffic optimization efficiency; however, they are prone to local optima, and global optimization is difficult to achieve. In this paper, an anti-learning model is used to address the gray wolf algorithm's tendency to fall into local optima. The positions of the different wolves are updated, and when the search falls into a local optimum, the current position is adjusted so that global optimization can be achieved. The Extreme Learning Machine (ELM) algorithm is introduced to accelerate Improved Gray Wolf Optimization (IGWO) and improve its convergence speed. Finally, experiments on path planning show that the IGWO-ELM algorithm is effective and highly efficient.
17

Kramer, Oliver. "On Machine Symbol Grounding and Optimization". International Journal of Cognitive Informatics and Natural Intelligence 5, no. 3 (July 2011): 73–85. http://dx.doi.org/10.4018/ijcini.2011070105.

Abstract:
From the point of view of an autonomous agent, the world consists of high-dimensional dynamic sensorimotor data. Interface algorithms translate this data into symbols that are easier to handle for cognitive processes. Symbol grounding is about whether these systems can, based on this data, construct symbols that serve as a vehicle for higher symbol-oriented cognitive processes. Machine learning and data mining techniques are geared towards finding structures and input-output relations in this data by providing appropriate interface algorithms that translate raw data into symbols. This work formulates interface design as a global optimization problem with the objective of maximizing the success of the overlying symbolic algorithm. For its implementation, various known algorithms from data mining and machine learning turn out to be adequate methods that not only exploit the intrinsic structure of the subsymbolic data, but also allow flexible adaptation to the objectives of the symbolic process. Furthermore, this work discusses the optimization formulation as a functional perspective on symbol grounding that does not hurt the zero semantical commitment condition. A case study illustrates technical details of the machine symbol grounding approach.
18

Kubwimana, Benjamin, and Hamidreza Najafi. "A Novel Approach for Optimizing Building Energy Models Using Machine Learning Algorithms". Energies 16, no. 3 (January 17, 2023): 1033. http://dx.doi.org/10.3390/en16031033.

Abstract:
The current practice with building energy simulation software tools requires the manual entry of a large list of detailed inputs pertaining to the building characteristics, geographical region, schedule of operation, end users, occupancy, control aspects, and more. While these software tools allow the energy consumption of a building to be evaluated for various combinations of building parameters, the manual data entry and the large number of parameters related to building design and operation make global optimization extremely challenging. In the present paper, a novel approach is developed for the global optimization of building energy models (BEMs) using Python EnergyPlus. A Python-based script is developed to automate the data entry into the building energy modeling tool (EnergyPlus), and numerous possible designs that cover the desired ranges of multiple variables are simulated. The resulting datasets are then used to establish a surrogate BEM using an artificial neural network (ANN), which is optimized through two different approaches: Bayesian optimization and a genetic algorithm. To demonstrate the proposed approach, a case study is performed for a building on the campus of the Florida Institute of Technology, located in Melbourne, FL, USA. Eight parameters are selected, 200 variations of them are supplied to EnergyPlus, and the results produced by the simulations are used to train an ANN-based surrogate model. The surrogate model achieved a maximum of 90% R2 through hyperparameter tuning. The two optimization approaches, the genetic algorithm and the Bayesian method, were applied to the surrogate model, and the optimal designs achieved annual energy consumptions of 11.3 MWh and 12.7 MWh, respectively. The approach presented was shown to bridge physics-based building energy models and the powerful optimization tools available in Python, allowing global optimization to be achieved in a computationally efficient fashion.
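The genetic-algorithm half of the workflow can be sketched as follows, with a plain analytic function standing in for the trained ANN surrogate. The two normalized design parameters, their ranges, and all GA settings are invented for illustration and have nothing to do with the paper's actual building model.

```python
import random

def surrogate(params):
    # Stand-in for the trained ANN surrogate: "predicted annual energy"
    # as a smooth function of two normalized design parameters in [0, 1].
    x, y = params
    return 10.0 + 6.0 * (x - 0.3) ** 2 + 4.0 * (y - 0.7) ** 2 + x * y

def genetic_minimize(fitness, pop_size=30, gens=40, seed=7):
    rng = random.Random(seed)
    pop = [[rng.random(), rng.random()] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        nxt = pop[:2]                      # elitism: keep the two best designs
        while len(nxt) < pop_size:
            # Tournament selection of two parents.
            a = min(rng.sample(pop, 3), key=fitness)
            b = min(rng.sample(pop, 3), key=fitness)
            # Blend crossover plus Gaussian mutation, clipped to [0, 1].
            child = [min(1.0, max(0.0,
                     (ai + bi) / 2.0 + rng.gauss(0.0, 0.05)))
                     for ai, bi in zip(a, b)]
            nxt.append(child)
        pop = nxt
    return min(pop, key=fitness)

best = genetic_minimize(surrogate)
```

Because every fitness call hits the cheap surrogate rather than an EnergyPlus run, the GA can afford thousands of evaluations; this is the economy the paper exploits.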
19

Li, Yibo, Chao Liu, Senyue Zhang, Wenan Tan, and Yanyan Ding. "Reproducing Polynomial Kernel Extreme Learning Machine". Journal of Advanced Computational Intelligence and Intelligent Informatics 21, no. 5 (September 20, 2017): 795–802. http://dx.doi.org/10.20965/jaciii.2017.p0795.

Abstract:
The conventional kernel support vector machine (KSVM) suffers from slow training, and the single-kernel extreme learning machine (KELM) also has performance limitations. This paper therefore proposes a new combined KELM model built from the polynomial kernel and a reproducing kernel on Sobolev Hilbert space. The model combines the advantages of global and local kernel functions and trains quickly. At the same time, an efficient optimization algorithm, the cuckoo search algorithm, is adopted to avoid blindness and inaccuracy in parameter selection. Experiments performed on the bi-spiral benchmark dataset, the Banana dataset, and a number of classification and regression datasets from the UCI benchmark repository illustrate the feasibility of the proposed model. It achieves better robustness and generalization performance than other conventional KELM and KSVM models, demonstrating its effectiveness and usefulness.
20

Barkalov, Konstantin, Ilya Lebedev, Marina Usova, Daria Romanova, Daniil Ryazanov, and Sergei Strijhak. "Optimization of Turbulence Model Parameters Using the Global Search Method Combined with Machine Learning". Mathematics 10, no. 15 (July 31, 2022): 2708. http://dx.doi.org/10.3390/math10152708.

Abstract:
The paper considers the slope flow simulation and the problem of finding the optimal parameter values of this mathematical model. The slope flow is modeled using the finite volume method applied to the Reynolds-averaged Navier–Stokes equations with closure in the form of the k-ω SST turbulence model. The optimal values of the turbulence model coefficients for free-surface gravity multiphase flows were found using a global search algorithm. Calibration was performed to increase the similarity of the experimental and calculated velocity profiles. The root mean square error (RMSE) of the deviation between the calculated flow velocity profile and the experimental one is used as the objective function in the optimization problem. The calibration of the turbulence model coefficients for calculating free-surface flows on test slopes using the multiphase model for interphase tracking had not been performed previously. To solve the multi-extremal optimization problem arising from the search for the minimum of the loss function for the flow velocity profile, we apply a new optimization approach that uses a Peano curve to reduce the dimensionality of the problem. To speed up the optimization procedure, the objective function was approximated using an artificial neural network. Thus, an interdisciplinary approach was applied which allowed the optimal values of six turbulence model parameters to be found using the OpenFOAM and Globalizer software.
21

Özöğür Akyüz, Süreyya, Gürkan Üstünkar, and Gerhard Wilhelm Weber. "Adapted Infinite Kernel Learning by Multi-Local Algorithm". International Journal of Pattern Recognition and Artificial Intelligence 30, no. 04 (April 12, 2016): 1651004. http://dx.doi.org/10.1142/s0218001416510046.

Abstract:
The interplay of machine learning (ML) and optimization methods is an emerging field of artificial intelligence. Both ML and optimization are concerned with modeling systems related to real-world problems. Parameter selection for classification models is an important task for ML algorithms. In statistical learning theory, cross-validation (CV), the most well-known model selection method, can be very time consuming for large data sets. One of the recent model selection techniques developed for support vector machines (SVMs) is based on the observed test point margins. In this study, the observed margin strategy is integrated into our novel infinite kernel learning (IKL) algorithm together with the multi-local procedure (MLP), an optimization technique for finding a global solution. The experimental results show improvements in accuracy and speed when compared with multiple kernel learning (MKL) and semi-infinite linear programming (SILP) with CV.
22

Smithies, Rob, Said Salhi, and Nat Queen. "Adaptive Hybrid Learning for Neural Networks". Neural Computation 16, no. 1 (January 1, 2004): 139–57. http://dx.doi.org/10.1162/08997660460734038.

Abstract:
A robust locally adaptive learning algorithm is developed via two enhancements of the Resilient Propagation (RPROP) method. Remaining drawbacks of the gradient-based approach are addressed by hybridization with gradient-independent Local Search. Finally, a global optimization method based on recursion of the hybrid is constructed, making use of tabu neighborhoods to accelerate the search for minima through diversification. Enhanced RPROP is shown to be faster and more accurate than the standard RPROP in solving classification tasks based on natural data sets taken from the UCI repository of machine learning databases. Furthermore, the use of Local Search is shown to improve Enhanced RPROP by solving the same classification tasks as part of the global optimization method.
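Since the article builds on RPROP, a minimal sketch of the standard RPROP update may help: each weight keeps its own step size, grown when successive partial derivatives agree in sign and shrunk when they flip. This is the textbook scheme (an iRPROP-minus-style variant), not the authors' enhanced version, and the quadratic test gradient is an illustrative assumption.

```python
def rprop_minimize(grad, w, steps=100, d0=0.1,
                   eta_plus=1.2, eta_minus=0.5, dmax=1.0, dmin=1e-6):
    # Resilient propagation: per-weight step sizes adapted from the SIGN of
    # successive partial derivatives only; gradient magnitudes are ignored.
    n = len(w)
    delta = [d0] * n       # individual step sizes
    prev = [0.0] * n       # previous partial derivatives
    for _ in range(n * 0 + steps):
        g = grad(w)
        for i in range(n):
            if prev[i] * g[i] > 0:        # same sign: accelerate
                delta[i] = min(delta[i] * eta_plus, dmax)
            elif prev[i] * g[i] < 0:      # sign flip: overshot, slow down
                delta[i] = max(delta[i] * eta_minus, dmin)
                g[i] = 0.0                # and skip one update after the flip
            if g[i] > 0:
                w[i] -= delta[i]
            elif g[i] < 0:
                w[i] += delta[i]
            prev[i] = g[i]
    return w

# Quadratic bowl with a 100x difference in curvature between coordinates.
grad = lambda w: [2.0 * w[0], 200.0 * w[1]]
w = rprop_minimize(grad, [3.0, 3.0])
```

Because only gradient signs enter the update, both coordinates follow identical trajectories despite the large curvature difference; this scale invariance is RPROP's main selling point.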
23

Yi, Dokkyun, Sangmin Ji, and Sunyoung Bu. "An Enhanced Optimization Scheme Based on Gradient Descent Methods for Machine Learning". Symmetry 11, no. 7 (July 20, 2019): 942. http://dx.doi.org/10.3390/sym11070942.

Abstract:
The learning process of machine learning consists of finding values of unknown weights in a cost function by minimizing the cost function based on learning data. However, since the cost function is not convex, it is difficult to find its minimum value. The existing methods used to find the minimum usually use the first derivative of the cost function. When even a local minimum (but not a global minimum) is reached, since the first derivative of the cost function becomes zero there, these methods return the local minimum value, so that the desired global minimum cannot be found. To overcome this problem, in this paper we modified one of the existing schemes, the adaptive momentum estimation scheme, by adding a new term, so that the new optimizer is prevented from staying at a local minimum. The convergence condition for the proposed scheme and the convergence value are also analyzed, and further explained through several numerical experiments whose cost function is non-convex.
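A minimal sketch of the baseline adaptive momentum estimation (Adam) scheme that the paper modifies is shown below; the paper's additional escape term is not reproduced here. On the illustrative non-convex cost, plain Adam started in the wrong basin settles in the nearby local minimum, which is exactly the failure mode the proposed modification targets.

```python
import math

def adam_minimize(grad, x, steps=2000, lr=0.01,
                  beta1=0.9, beta2=0.999, eps=1e-8):
    # Baseline Adam update; the paper adds a further term to this scheme
    # to push the iterate out of local minima.
    m = v = 0.0
    for t in range(1, steps + 1):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g          # first-moment estimate
        v = beta2 * v + (1 - beta2) * g * g      # second-moment estimate
        m_hat = m / (1 - beta1 ** t)             # bias corrections
        v_hat = v / (1 - beta2 ** t)
        x -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return x

# Gradient of the non-convex cost f(x) = x**4 - 3*x**2 + x, which has a
# global minimum near x = -1.30 and a shallower local minimum near x = 1.13.
grad = lambda x: 4.0 * x ** 3 - 6.0 * x + 1.0
x = adam_minimize(grad, 2.0)   # started inside the local minimum's basin
```

The run converges to the local minimum near 1.13 rather than the global one near -1.30, illustrating why a first-derivative method stalls wherever the gradient vanishes.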
24

Song, Tao, Jiarong Wang, Danya Xu, Wei Wei, Runsheng Han, Fan Meng, Ying Li, and Pengfei Xie. "Unsupervised Machine Learning for Improved Delaunay Triangulation". Journal of Marine Science and Engineering 9, no. 12 (December 7, 2021): 1398. http://dx.doi.org/10.3390/jmse9121398.

Abstract:
Physical oceanography models rely heavily on grid discretization. It is known that unstructured grids perform well in dealing with boundary-fitting problems in complex nearshore regions. However, it is time-consuming to find a good set of unstructured grids for a specific ocean area, particularly where land areas are frequently changed by human construction. In this work, an attempt was made to use machine learning to optimize the unstructured triangular meshes formed with Delaunay triangulation in the global ocean field, so that the triangles in the mesh become closer to equilateral, the number of long, narrow triangles is reduced, and the mesh quality is improved. Specifically, we used Delaunay triangulation to generate the unstructured grid, and then developed a K-means clustering-based algorithm to optimize it. With the proposed method, unstructured meshes were generated and optimized for the global ocean, small sea areas, and the South China Sea estuary to carry out data experiments. The results suggested that the proportion of triangles with a triangle shape factor greater than 0.7 amounted to 77.80%, 79.78%, and 79.78%, respectively, in the unstructured mesh, while the proportion of long, narrow triangles was decreased to 8.99%, 3.46%, and 4.12%, respectively.
25

Li, Yang, Zhichuan Zhu, Alin Hou, Qingdong Zhao, Liwei Liu, and Lijuan Zhang. "Pulmonary Nodule Recognition Based on Multiple Kernel Learning Support Vector Machine-PSO". Computational and Mathematical Methods in Medicine 2018 (2018): 1–10. http://dx.doi.org/10.1155/2018/1461470.

Abstract:
Pulmonary nodule recognition is the core module of lung CAD. The Support Vector Machine (SVM) algorithm has been widely used in pulmonary nodule recognition, and the Multiple Kernel Learning Support Vector Machine (MKL-SVM) algorithm has achieved good results therein. Based on grid search, however, the MKL-SVM algorithm needs a long optimization time for parameter tuning, and its identification accuracy depends on the fineness of the grid. In this paper, swarm intelligence is introduced, and Particle Swarm Optimization (PSO) is combined with the MKL-SVM algorithm to form the MKL-SVM-PSO algorithm, which realizes rapid global optimization of the parameters. In order to obtain the global optimal solution, different inertia weights, such as constant, linear, and nonlinear inertia weights, are applied to pulmonary nodule recognition. The experimental results show that the model training time of the proposed MKL-SVM-PSO algorithm is only 1/7 of the training time of the MKL-SVM grid search algorithm, while achieving a better recognition effect. Moreover, the Euclidean norm of the normalized error vector is proposed to measure the proximity between the average fitness curve and the optimal fitness curve after convergence. Statistical analysis of the averages of 20 runs with different inertia weights shows that the dynamic inertia weights are superior to the constant inertia weight in the MKL-SVM-PSO algorithm. Among the dynamic inertia weights, the nonlinear inertia weight has a shorter parameter optimization time, and its average fitness value after convergence is much closer to the optimal fitness value, outperforming the linear inertia weight. A better nonlinear inertia weight is also verified.
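A hedged sketch of PSO with a dynamic inertia weight, the ingredient this paper compares, is given below. The quadratic (nonlinear) inertia schedule, the sphere objective standing in for classifier fitness, and all parameter values are illustrative assumptions; the actual method wraps PSO around MKL-SVM training rather than a toy function.

```python
import random

def f(p):
    # Sphere function as a stand-in for the classifier fitness (illustrative).
    return sum(x * x for x in p)

def pso_minimize(fitness, dim=3, n=20, iters=100, seed=5,
                 w_start=0.9, w_end=0.4, c1=2.0, c2=2.0):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=fitness)[:]
    for t in range(iters):
        # Nonlinear (quadratic) inertia weight decreasing from w_start to
        # w_end; the paper compares constant, linear, and nonlinear schedules.
        w = w_end + (w_start - w_end) * (1.0 - t / iters) ** 2
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if fitness(pos[i]) < fitness(pbest[i]):
                pbest[i] = pos[i][:]
                if fitness(pbest[i]) < fitness(gbest):
                    gbest = pbest[i][:]
    return gbest

best = pso_minimize(f)
```

A large early inertia favors exploration while the small final inertia favors exploitation, which is why a decreasing schedule tends to beat a constant weight.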
26

Kawaguchi, Kenji, Jiaoyang Huang, and Leslie Pack Kaelbling. "Every Local Minimum Value Is the Global Minimum Value of Induced Model in Nonconvex Machine Learning". Neural Computation 31, no. 12 (December 2019): 2293–323. http://dx.doi.org/10.1162/neco_a_01234.

Texte intégral
Résumé :
For nonconvex optimization in machine learning, this article proves that every local minimum achieves the globally optimal value of the perturbable gradient basis model at any differentiable point. As a result, nonconvex machine learning is theoretically as supported as convex machine learning with a handcrafted basis in terms of the loss at differentiable local minima, except in the case when a preference is given to the handcrafted basis over the perturbable gradient basis. The proofs of these results are derived under mild assumptions. Accordingly, the proven results are directly applicable to many machine learning models, including practical deep neural networks, without any modification of practical methods. Furthermore, as special cases of our general results, this article improves or complements several state-of-the-art theoretical results on deep neural networks, deep residual networks, and overparameterized deep neural networks with a unified proof technique and novel geometric insights. A special case of our results also contributes to the theoretical foundation of representation learning.
Styles APA, Harvard, Vancouver, ISO, etc.
27

Khan, Waqar Ahmed, S. H. Chung, Muhammad Usman Awan et Xin Wen. « Machine learning facilitated business intelligence (Part II) ». Industrial Management & Data Systems 120, no 1 (27 novembre 2019) : 128–63. http://dx.doi.org/10.1108/imds-06-2019-0351.

Texte intégral
Résumé :
Purpose The purpose of this paper is three-fold: to review the categories of optimization algorithms (techniques) needed to improve the generalization performance and learning speed of the Feedforward Neural Network (FNN); to discover the change in research trends by analyzing all six categories (i.e. gradient learning algorithms for network training, gradient free learning algorithms, optimization algorithms for learning rate, bias and variance (underfitting and overfitting) minimization algorithms, constructive topology neural networks, metaheuristic search algorithms) collectively; and to recommend new research directions for researchers and help users understand the algorithms' real-world applications in solving complex management, engineering and health sciences problems. Design/methodology/approach The FNN has gained much attention from researchers seeking to make more informed decisions over the last few decades. The literature survey is focused on the learning algorithms and the optimization techniques proposed in the last three decades. This paper (Part II) is an extension of Part I. For the sake of simplicity, the paper entitled “Machine learning facilitated business intelligence (Part I): Neural networks learning algorithms and applications” is referred to as Part I. To keep the study consistent with Part I, the approach and survey methodology in this paper are kept similar to those in Part I. Findings Combining the work performed in Part I, the authors studied a total of 80 articles found through popular keyword searches. The FNN learning algorithms and optimization techniques identified in the selected literature are classified into six categories based on their problem identification, mathematical model, technical reasoning and proposed solution. Previously, in Part I, the two categories focusing on the learning algorithms (i.e. 
gradient learning algorithms for network training, gradient free learning algorithms) were reviewed with their real-world applications in management, engineering, and health sciences. Therefore, in the current paper, Part II, the remaining four categories, exploring optimization techniques (i.e. optimization algorithms for learning rate, bias and variance (underfitting and overfitting) minimization algorithms, constructive topology neural networks, metaheuristic search algorithms), are studied in detail. The algorithm explanations are enriched by discussing their technical merits, limitations, and applications in their respective categories. Finally, the authors recommend future research directions which can contribute to strengthening the literature. Research limitations/implications The FNN's contributions are rapidly increasing because of its ability to support reliably informed decisions. As with the learning algorithms reviewed in Part I, the focus is to enrich the comprehensive study by reviewing the remaining categories covering the optimization techniques. However, future efforts may be needed to incorporate other algorithms into the identified six categories, or to suggest new categories, in order to continuously monitor the shift in research trends. Practical implications The authors studied the shift in research trends over three decades by collectively analyzing the learning algorithms and optimization techniques with their applications. This may help researchers to identify future research gaps for improving the generalization performance and learning speed, and users to understand the application areas of the FNN. For instance, research contributions in FNN over the last three decades have shifted from complex gradient-based algorithms to gradient-free algorithms, from trial-and-error fixed-topology approaches for hidden units to cascade topologies, from initial guesses of hyperparameters to their analytical calculation, and from algorithms converging to a local minimum to those converging to a global minimum. 
Originality/value The existing literature surveys include comparative studies of the algorithms, identification of algorithms' application areas and a focus on specific techniques, such that they may not be able to identify algorithm categories, the shift in research trends over time, frequently analyzed application areas, common research gaps and collective future directions. Parts I and II attempt to overcome these limitations by classifying articles into six categories covering a wide range of algorithms proposed to improve the FNN's generalization performance and convergence rate. The classification of algorithms into six categories helps to analyze the shift in research trends, which makes the classification scheme significant and innovative.
Styles APA, Harvard, Vancouver, ISO, etc.
28

Guo, Xiaohua. « Optimization of English Machine Translation by Deep Neural Network under Artificial Intelligence ». Computational Intelligence and Neuroscience 2022 (21 avril 2022) : 1–10. http://dx.doi.org/10.1155/2022/2003411.

Texte intégral
Résumé :
To improve machine translation so that it adapts to global language translation, this work takes the deep neural network (DNN) as its theoretical basis, carries out transfer learning and neural network translation modeling, and optimizes the word alignment function in machine translation. First, the work implements a deep learning translation network model for English translation. On this basis, a neural machine translation model is designed under transfer learning. A random masking method is introduced to implement the language training model, and the machine translation model is fine-tuned as the goal of transfer learning, thereby improving semantic understanding in translation. Meanwhile, the work introduces a word alignment optimization method and optimizes word alignment performance in the transformer system by using a word corpus. The experimental results show that the proposed method reduces the average alignment error rate by 8.1%, 24.4%, and 22.1% on EnRo (English-Romanian), EnGe (English-German), and EnFr (English-French), respectively, compared with previous algorithms. With the designed optimization method, the word alignment error rate is lower than that of traditional methods. The modeling and optimization method is feasible and can effectively solve the problems of insufficient information utilization, large parameter scale, and difficult storage in machine translation. Additionally, it provides a feasible idea and direction for the optimization and improvement of neural machine translation (NMT) systems.
Styles APA, Harvard, Vancouver, ISO, etc.
29

Fan, Yanyan, Yu Zhang, Baosu Guo, Xiaoyuan Luo, Qingjin Peng et Zhenlin Jin. « A Hybrid Sparrow Search Algorithm of the Hyperparameter Optimization in Deep Learning ». Mathematics 10, no 16 (22 août 2022) : 3019. http://dx.doi.org/10.3390/math10163019.

Texte intégral
Résumé :
Deep learning has been widely used in different fields such as computer vision and speech processing. The performance of deep learning algorithms is greatly affected by their hyperparameters. For complex machine learning models such as deep neural networks, it is difficult to determine these hyperparameters. In addition, existing hyperparameter optimization algorithms easily converge to a local optimal solution. This paper proposes a method for hyperparameter optimization that combines the Sparrow Search Algorithm and Particle Swarm Optimization, called the Hybrid Sparrow Search Algorithm. This method combines the Sparrow Search Algorithm's ability to avoid local optimal solutions with the search efficiency of Particle Swarm Optimization to achieve global optimization. Experiments verified the proposed algorithm on simple and complex networks. The results show that the Hybrid Sparrow Search Algorithm has strong global search capability, avoiding local optimal solutions, and satisfactory search efficiency in both low- and high-dimensional spaces. The proposed method provides a new solution for hyperparameter optimization problems in deep learning models.
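As a concrete reference for the PSO component that several of the hybrid methods in this list build on, here is a minimal PSO sketch for minimizing an objective standing in for a validation loss. The particle count, coefficients and bounds are illustrative assumptions; the sparrow-search hybridization itself is not reproduced:

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0), seed=0):
    """Minimal particle swarm optimization of f over a box-bounded space."""
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]              # personal bests
    pbest_f = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] = min(hi, max(lo, xs[i][d] + vs[i][d]))
            fx = f(xs[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = xs[i][:], fx
                if fx < gbest_f:
                    gbest, gbest_f = xs[i][:], fx
    return gbest, gbest_f
```

In a hyperparameter setting, `f` would train a model with the candidate hyperparameters and return its validation error.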
Styles APA, Harvard, Vancouver, ISO, etc.
30

Papakonstantinou, Charalampos, Ioannis Daramouskas, Vaios Lappas, Vassilis C. Moulianitis et Vassilis Kostopoulos. « A Machine Learning Approach for Global Steering Control Moment Gyroscope Clusters ». Aerospace 9, no 3 (17 mars 2022) : 164. http://dx.doi.org/10.3390/aerospace9030164.

Texte intégral
Résumé :
This paper addresses the problem of singularity avoidance for a 4-Control Moment Gyroscope (CMG) pyramid cluster, as used for the attitude control of a satellite using machine learning (ML) techniques. A data-set, generated using a heuristic algorithm, relates the initial gimbal configuration and the desired maneuver—inputs—to a number of null space motions the gimbals have to execute—output. Two ML techniques—Deep Neural Network (DNN) and Random Forest Classifier (RFC)—are utilized to predict the required null motion for trajectories that are not included in the training set. The principal advantage of this approach is the exploitation of global information gathered from the whole maneuver compared to conventional steering laws that consider only some local information, near the current gimbal configuration for optimization and are prone to local extrema. The data-set generation and the predictions of the ML systems can be made offline, so no further calculations are needed on board, providing the possibility to inspect the way the system responds to any commanded maneuver before its execution. The RFC technique demonstrates enhanced accuracy for the test data compared to the DNN, validating that it is possible to correctly predict the null motion even for maneuvers that are not included in the training data.
Styles APA, Harvard, Vancouver, ISO, etc.
31

Belmahdi, Brahim, Mohamed Louzazni et Abdelmajid El Bouardi. « Comparative optimization of global solar radiation forecasting using machine learning and time series models ». Environmental Science and Pollution Research 29, no 10 (8 octobre 2021) : 14871–88. http://dx.doi.org/10.1007/s11356-021-16760-8.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
32

Meldgaard, Søren A., Esben L. Kolsbjerg et Bjørk Hammer. « Machine learning enhanced global optimization by clustering local environments to enable bundled atomic energies ». Journal of Chemical Physics 149, no 13 (7 octobre 2018) : 134104. http://dx.doi.org/10.1063/1.5048290.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
33

Zhou, Shuchen, Waqas Jadoon et Junaid Shuja. « Machine Learning-Based Offloading Strategy for Lightweight User Mobile Edge Computing Tasks ». Complexity 2021 (8 juin 2021) : 1–11. http://dx.doi.org/10.1155/2021/6455617.

Texte intégral
Résumé :
This paper presents an in-depth study and analysis of offloading strategies for lightweight user mobile edge computing tasks using a machine learning approach. Firstly, a scheme for multiuser frequency division multiplexing approach in mobile edge computing offloading is proposed, and a mixed-integer nonlinear optimization model for energy consumption minimization is developed. Then, based on the analysis of the concave-convex properties of this optimization model, this paper uses variable relaxation and nonconvex optimization theory to transform the problem into a convex optimization problem. Subsequently, two optimization algorithms are designed: for the relaxation optimization problem, an iterative optimization algorithm based on the Lagrange dual method is designed; based on the branch-and-bound integer programming method, the iterative optimization algorithm is used as the basic algorithm for each step of the operation, and a global optimization algorithm is designed for transmitting power allocation, computational offloading strategy, dynamic adjustment of local computing power, and receiving energy channel selection strategy. Finally, the simulation results verify that the scheduling strategy of the frequency division technique proposed in this paper has good energy consumption minimization performance in mobile edge computation offloading. Our model is highly efficient and has a high degree of accuracy. The anomaly detection method based on a decision tree combined with deep learning proposed in this paper, unlike traditional IoT attack detection methods, overcomes the drawbacks of rule-based security detection methods and enables them to adapt to both established and unknown hostile environments. Experimental results show that the attack detection system based on the model achieves good detection results in the detection of multiple attacks.
Styles APA, Harvard, Vancouver, ISO, etc.
34

Sun, Qian, William Ampomah, Junyu You, Martha Cather et Robert Balch. « Practical CO2—WAG Field Operational Designs Using Hybrid Numerical-Machine-Learning Approaches ». Energies 14, no 4 (17 février 2021) : 1055. http://dx.doi.org/10.3390/en14041055.

Texte intégral
Résumé :
Machine-learning technologies have exhibited robust competence in solving many petroleum engineering problems. Their accurate predictions and fast computational speed make large-volume, time-consuming engineering processes such as history matching and field development optimization tractable. The Southwest Regional Partnership on Carbon Sequestration (SWP) project requires rigorous history-matching and multi-objective optimization processes, which fit the strengths of machine-learning approaches. Although machine-learning proxy models are trained and validated before being applied to practical problems, their error margins inevitably introduce uncertainty into the results. In this paper, a hybrid numerical machine-learning workflow for solving various optimization problems is presented. By coupling expert machine-learning proxies with a global optimizer, the workflow successfully solves the history-matching and CO2 water-alternating-gas (WAG) design problem with low computational overhead. The history-matching work considers the heterogeneity of multiphase relative permeability characteristics, and the CO2-WAG injection design takes multiple techno-economic objective functions into account. This work trained an expert response surface, a support vector machine, and a multi-layer neural network as proxy models to effectively learn the high-dimensional nonlinear data structure. The proposed workflow suggests revisiting the high-fidelity numerical simulator for validation purposes. The experience gained from this work provides valuable guiding insights for similar CO2 enhanced oil recovery (EOR) projects.
Styles APA, Harvard, Vancouver, ISO, etc.
35

Zong Chen, Dr Joy Iong, et Kong-Long Lai. « Machine Learning based Energy Management at Internet of Things Network Nodes ». Journal of Trends in Computer Science and Smart Technology 2, no 3 (17 juillet 2020) : 127–33. http://dx.doi.org/10.36548/jtcsst.2020.3.001.

Texte intégral
Résumé :
Internet of Things networks comprising wireless sensors and controllers or IoT gateways offer extremely rich functionality. However, little attention has been paid to energy optimization of these nodes and to enabling lossless networks. Wireless sensor networks and their applications have been industrialized and scaled up gradually with the development of artificial intelligence and the popularization of machine learning. Uneven node energy consumption and convergence of the protocol to local optima arise from routing strategies with high energy consumption. A smart ant colony optimization algorithm is used to obtain energy-balanced routing in the required regions. A neighbor selection strategy is proposed by combining the wireless sensor network nodes and energy factors in the smart ant colony optimization algorithm. Termination conditions for the algorithm and an adaptive perturbation strategy are established to improve convergence speed and ant searchability, enabling the algorithm to find the global optimal solution. The proposed route planning methodology improves performance, network life cycle, energy distribution, node equilibrium, network delay and network energy consumption. Around 10% energy saving is achieved compared to existing state-of-the-art algorithms.
Styles APA, Harvard, Vancouver, ISO, etc.
36

Roncaglia, Cesare, Daniele Rapetti et Riccardo Ferrando. « Regression and clustering algorithms for AgCu nanoalloys : from mixing energy predictions to structure recognition ». Physical Chemistry Chemical Physics 23, no 40 (2021) : 23325–35. http://dx.doi.org/10.1039/d1cp02143e.

Texte intégral
Résumé :
The lowest-energy structures of AgCu nanoalloys are searched for by global optimization algorithms for sizes of 100 and 200 atoms as a function of composition, and their structures and mixing energies are analyzed by machine learning tools.
Styles APA, Harvard, Vancouver, ISO, etc.
37

Li, Xiguang, Shoufei Han, Liang Zhao, Changqing Gong et Xiaojing Liu. « New Dandelion Algorithm Optimizes Extreme Learning Machine for Biomedical Classification Problems ». Computational Intelligence and Neuroscience 2017 (2017) : 1–13. http://dx.doi.org/10.1155/2017/4523754.

Texte intégral
Résumé :
Inspired by the sowing behavior of dandelions, a novel swarm intelligence algorithm, the dandelion algorithm (DA), is proposed in this paper for the global optimization of complex functions. In DA, the dandelion population is divided into two subpopulations, and different subpopulations undergo different sowing behaviors. Moreover, another sowing method is designed to jump out of local optima. To demonstrate the validity of DA, we compare the proposed algorithm with existing algorithms, including the bat algorithm, particle swarm optimization, and the enhanced fireworks algorithm. Simulations show that the proposed algorithm is superior to the other algorithms. The proposed algorithm can also be applied to optimize the extreme learning machine (ELM) for biomedical classification problems, with considerable effect. Finally, we use different fusion methods to form different fusion classifiers, which achieve higher accuracy and better stability to some extent.
Styles APA, Harvard, Vancouver, ISO, etc.
38

Gao, H., L. Jézéquel, E. Cabrol et B. Vitry. « Robust Design of Suspension System with Polynomial Chaos Expansion and Machine Learning ». Science & Technique 19, no 1 (5 février 2020) : 43–54. http://dx.doi.org/10.21122/2227-1031-2020-19-1-43-54.

Texte intégral
Résumé :
During the early development of a new vehicle project, parameter uncertainty should be taken into consideration because the design may be perturbed by real components' complexity and manufacturing tolerances. Thus, the numerical validation of critical suspension specifications, such as durability and ride comfort, should be carried out with random factors. In this article a multi-objective optimization methodology is proposed which involves the specifications' robustness as one of the optimization objectives. To predict the output variation from a given set of uncertain-but-bounded parameters proposed by optimization iterations, an adaptive polynomial chaos expansion (PCE) is applied to combine a local design of experiments with global response surfaces. Furthermore, in order to reduce the additional tests required for PCE construction, a machine learning algorithm based on an inter-design correlation matrix first classifies the current design points through data mining and clustering. It then learns to predict the robustness of future optimized solutions with no extra simulations. At the end of the optimization, a Pareto front between the specifications and their robustness can be obtained, representing the best compromises among objectives. The optimum set on the front is classified and can serve as a reference for future design. An example of a quarter-car model has been tested, for which the target is to optimize global durability based on real road excitations. The statistical distribution of parameters such as trajectories and speeds is also taken into account. The result shows the natural incompatibility between the durability of the chassis and the robustness of this durability. Here the term robustness does not mean “strength”, but that the performance is less sensitive to perturbations. 
In addition, stochastic sampling verifies the good robustness prediction of the PCE method and machine learning, based on a greatly reduced number of tests. This example demonstrates the effectiveness of the approach, in particular its ability to save computational costs for full-vehicle simulation.
Styles APA, Harvard, Vancouver, ISO, etc.
39

Ma, Yun Jie, Zi Hui Ren et Ping Zhu. « A Layer Hybrid Intelligent Algorithm for Solving Resources Scheduling Problem ». Applied Mechanics and Materials 644-650 (septembre 2014) : 1506–9. http://dx.doi.org/10.4028/www.scientific.net/amm.644-650.1506.

Texte intégral
Résumé :
A new hybrid intelligent algorithm is proposed to solve the resource scheduling problem. The algorithm combines an Adaptive Particle Swarm Optimization (APSO) algorithm, a Modified Genetic Algorithm (MGA) and a Machine Learning (ML) component: MGA is used for global search and APSO for local search. The selection process depends on the information defined in the ant algorithm. A machine learning principle is applied: after some iterations, part of the optimal solution is obtained, and the optimal solution is then searched for in each layer. Simulation results based on well-known benchmark suites in the literature show that the algorithm has better optimization performance.
Styles APA, Harvard, Vancouver, ISO, etc.
40

Feigl, Moritz, Katharina Lebiedzinski, Mathew Herrnegger et Karsten Schulz. « Machine-learning methods for stream water temperature prediction ». Hydrology and Earth System Sciences 25, no 5 (31 mai 2021) : 2951–77. http://dx.doi.org/10.5194/hess-25-2951-2021.

Texte intégral
Résumé :
Abstract. Water temperature in rivers is a crucial environmental factor with the ability to alter hydro-ecological as well as socio-economic conditions within a catchment. The development of modelling concepts for predicting river water temperature is and will be essential for effective integrated water management and the development of adaptation strategies to future global changes (e.g. climate change). This study tests the performance of six different machine-learning models: step-wise linear regression, random forest, eXtreme Gradient Boosting (XGBoost), feed-forward neural networks (FNNs), and two types of recurrent neural networks (RNNs). All models are applied using different data inputs for daily water temperature prediction in 10 Austrian catchments ranging from 200 to 96 000 km² and exhibiting a wide range of physiographic characteristics. The evaluated input data sets include combinations of daily means of air temperature, runoff, precipitation and global radiation. Bayesian optimization is applied to optimize the hyperparameters of all applied machine-learning models. To make the results comparable to previous studies, two widely used benchmark models are applied additionally: linear regression and air2stream. With a mean root mean squared error (RMSE) of 0.55 °C, the tested models could significantly improve water temperature prediction compared to linear regression (1.55 °C) and air2stream (0.98 °C). In general, the results show a very similar performance of the tested machine-learning models, with a median RMSE difference of 0.08 °C between the models. From the six tested machine-learning models both FNNs and XGBoost performed best in 4 of the 10 catchments. RNNs are the best-performing models in the largest catchment, indicating that RNNs mainly perform well when processes with long-term dependencies are important. 
Furthermore, a wide range of performance was observed across different hyperparameter sets for the tested models, showing the importance of hyperparameter optimization. In particular, the FNN results showed an extremely large RMSE standard deviation of 1.60 °C across the chosen hyperparameters. This study evaluates different sets of input variables, machine-learning models and training characteristics for daily stream water temperature prediction, providing a basis for the future development of regional multi-catchment water temperature prediction models. All preprocessing steps and models are implemented in the open-source R package wateRtemp to provide easy access to these modelling approaches and to facilitate further research.
Styles APA, Harvard, Vancouver, ISO, etc.
41

Kawaguchi, Kenji, Yu Maruyama et Xiaoyu Zheng. « Global Continuous Optimization with Error Bound and Fast Convergence ». Journal of Artificial Intelligence Research 56 (15 juin 2016) : 153–95. http://dx.doi.org/10.1613/jair.4742.

Texte intégral
Résumé :
This paper considers global optimization with a black-box unknown objective function that can be non-convex and non-differentiable. Such difficult optimization problems arise in many real-world applications, such as parameter tuning in machine learning, engineering design problems, and planning with complex physics simulators. This paper proposes a new global optimization algorithm, called Locally Oriented Global Optimization (LOGO), aiming for both fast convergence in practice and a finite-time error bound in theory. The advantages and usage of the new algorithm are illustrated via theoretical analysis and experiments conducted on 11 benchmark test functions. Further, we modify the LOGO algorithm to solve a planning problem via policy search with continuous state/action spaces and a long time horizon while maintaining its finite-time error bound. We apply the proposed planning method to accident management of a nuclear power plant. The result of the application study demonstrates the practical utility of our method.
Styles APA, Harvard, Vancouver, ISO, etc.
42

Han, Shoufei, Kun Zhu et Ran Wang. « Improvement of evolution process of dandelion algorithm with extreme learning machine for global optimization problems ». Expert Systems with Applications 163 (janvier 2021) : 113803. http://dx.doi.org/10.1016/j.eswa.2020.113803.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
43

Yu, Xi, Li Li, Xin He, Shengbo Chen et Lei Jiang. « Federated Learning Optimization Algorithm for Automatic Weight Optimal ». Computational Intelligence and Neuroscience 2022 (9 novembre 2022) : 1–19. http://dx.doi.org/10.1155/2022/8342638.

Texte intégral
Résumé :
Federated learning (FL), a distributed machine-learning framework, can effectively protect data privacy and security, and it has been widely applied in a variety of fields in recent years. However, the system heterogeneity and statistical heterogeneity of FL pose serious obstacles to the global model's quality. This study investigates server and client resource allocation in the context of FL system resource efficiency and proposes the FedAwo optimization algorithm. This approach combines adaptive learning with federated learning and makes full use of the server's computing resources to calculate the optimal weight value for each client. The global model is aggregated according to these optimal weight values, which significantly mitigates the detrimental effects of statistical and system heterogeneity. In traditional FL, we found that many clients' training converges earlier than the specified epoch. However, under traditional FL, such a client still needs to train for the specified number of epochs, rendering a large number of client-side computations meaningless. To further lower the training cost, the augmented FedAwo ∗ algorithm is proposed. The FedAwo ∗ algorithm takes the heterogeneity of clients into account and sets criteria for local convergence: when a client's local model reaches the criteria, it is returned to the server immediately. In this way, the client's number of epochs can be adjusted dynamically and adaptively. Extensive experiments on the MNIST and Fashion-MNIST public datasets reveal that the global model converges faster and achieves higher accuracy with the FedAwo and FedAwo ∗ algorithms than with the FedAvg, FedProx, and FedAdp baselines.
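The weighted server-side aggregation step at the heart of such FL schemes can be sketched as below. The per-client weights here are placeholders for the optimal weights that FedAwo computes on the server; the weight-learning step itself is not reproduced:

```python
def aggregate(client_models, weights):
    """Weighted average of client parameter vectors (FedAvg-style step).

    client_models: list of flat parameter lists, one per client.
    weights: per-client aggregation weights (e.g. learned by the server).
    """
    total = sum(weights)
    n_params = len(client_models[0])
    return [sum(w * m[d] for w, m in zip(weights, client_models)) / total
            for d in range(n_params)]
```

With uniform weights this reduces to plain FedAvg; non-uniform weights let the server down-weight clients whose updates are skewed by statistical heterogeneity.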
Styles APA, Harvard, Vancouver, ISO, etc.
44

Alqahtani, Abdulwahab, Xupeng He, Bicheng Yan et Hussein Hoteit. « Uncertainty Analysis of CO2 Storage in Deep Saline Aquifers Using Machine Learning and Bayesian Optimization ». Energies 16, no 4 (8 février 2023) : 1684. http://dx.doi.org/10.3390/en16041684.

Texte intégral
Résumé :
Geological CO2 sequestration (GCS) has been proposed as an effective approach to mitigating carbon emissions into the atmosphere. Uncertainty and sensitivity analysis of the fate of CO2 dynamics and storage are essential aspects of large-scale reservoir simulations. This work presents a rigorous machine learning-assisted (ML) workflow for the uncertainty and global sensitivity analysis of CO2 storage prediction in deep saline aquifers. The proposed workflow comprises three main steps. The first step concerns dataset generation, in which we identify the uncertainty parameters impacting CO2 flow and transport and then determine their corresponding ranges and distributions. The training data samples are generated by combining the Latin Hypercube Sampling (LHS) technique with high-resolution simulations. The second step involves ML model development: a data-driven ML model is built to map the nonlinear relationship between the input parameters and the corresponding outputs of interest from the previous step. We show that using Bayesian optimization significantly accelerates the tuning of hyper-parameters and is vastly superior to traditional trial-and-error analysis. In the third step, uncertainty and global sensitivity analysis are performed using Monte Carlo simulations applied to the optimized surrogate, exploring the time-dependent uncertainty propagation of the model outputs. The key uncertainty parameters are then identified by calculating the Sobol indices from the global sensitivity analysis. The proposed workflow is accurate and efficient and could readily be implemented in field-scale CO2 sequestration in deep saline aquifers.
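The Latin Hypercube Sampling step used for dataset generation can be sketched as follows, assuming independent per-dimension stratification over box-bounded parameters. The parameter ranges below are placeholders; the study's actual parameters and distributions are not reproduced:

```python
import random

def latin_hypercube(n_samples, bounds, seed=0):
    """Latin Hypercube Sampling over a box.

    Each parameter range is split into n_samples equal strata; one point
    is drawn per stratum, and strata are shuffled independently per
    dimension so every 1-D projection covers the whole range.
    """
    rng = random.Random(seed)
    dims = len(bounds)
    samples = [[0.0] * dims for _ in range(n_samples)]
    for d, (lo, hi) in enumerate(bounds):
        strata = list(range(n_samples))
        rng.shuffle(strata)
        for i in range(n_samples):
            u = (strata[i] + rng.random()) / n_samples  # point in stratum
            samples[i][d] = lo + u * (hi - lo)
    return samples
```

Unlike plain random sampling, every marginal is evenly covered, which is why LHS needs far fewer expensive simulator runs to train a surrogate.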
45

Ren, Bin, and Huanfei Ma. "Global optimization of hyper-parameters in reservoir computing." Electronic Research Archive 30, no. 7 (2022): 2719–29. http://dx.doi.org/10.3934/era.2022139.

Full text
Abstract:
Reservoir computing (RC) has emerged as a powerful and efficient machine learning tool, especially for reconstructing many complex systems, even chaotic ones, based only on observational data. Although fruitful advances have been extensively studied, how to choose hyper-parameter settings to construct an efficient RC remains a long-standing and urgent problem. In contrast to the local manner of many works, which optimize one hyper-parameter while keeping the others constant, in this work we propose a global optimization framework using the simulated annealing technique to find the optimal architecture of the randomly generated networks for a successful RC. Based on the optimized results, we further study several important properties of some hyper-parameters. In particular, we find that the globally optimized reservoir network has a largest singular value significantly larger than one, which is contrary to the sufficient condition reported in the literature to guarantee the echo state property. We further reveal the mechanism of this phenomenon with a simplified model and the theory of nonlinear dynamical systems.
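The simulated annealing search the authors describe can be illustrated generically. The toy objective below stands in for an RC validation error and is an assumption for demonstration, not the paper's actual setup.

```python
import math
import random

def simulated_annealing(objective, x0, step, n_iter=2000, t0=1.0, cooling=0.995, seed=0):
    """Generic simulated annealing: accept worse moves with prob exp(-delta/T)."""
    rng = random.Random(seed)
    x, fx = list(x0), objective(x0)
    best, fbest = list(x), fx
    t = t0
    for _ in range(n_iter):
        # Propose a Gaussian perturbation of every coordinate.
        cand = [xi + rng.gauss(0.0, s) for xi, s in zip(x, step)]
        fc = objective(cand)
        # Always accept improvements; accept worsenings with Boltzmann probability.
        if fc < fx or rng.random() < math.exp(-(fc - fx) / max(t, 1e-12)):
            x, fx = cand, fc
        if fx < fbest:
            best, fbest = list(x), fx
        t *= cooling  # geometric cooling schedule
    return best, fbest

# Toy stand-in for an RC validation error with a known minimum at (1.2, 0.5),
# e.g. two hyper-parameters such as spectral radius and input scaling.
err = lambda p: (p[0] - 1.2) ** 2 + (p[1] - 0.5) ** 2
best, fbest = simulated_annealing(err, [0.0, 0.0], [0.1, 0.1])
```

In the paper's setting, `objective` would train an RC with the candidate hyper-parameters and return its reconstruction error.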
46

Li, Shuang, and Qiuwei Li. "Local and Global Convergence of General Burer-Monteiro Tensor Optimizations." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 9 (28 June 2022): 10266–74. http://dx.doi.org/10.1609/aaai.v36i9.21267.

Full text
Abstract:
Tensor optimization is crucial to many large-scale machine learning and signal processing tasks. In this paper, we consider tensor optimization with a convex and well-conditioned objective function and reformulate it into a nonconvex optimization using the Burer-Monteiro type parameterization. We analyze the local convergence of applying vanilla gradient descent to the factored formulation and establish a local regularity condition under mild assumptions. We also provide a linear convergence analysis of the gradient descent algorithm started in a neighborhood of the true tensor factors. Complementary to the local analysis, this work also characterizes the global geometry of the best rank-one tensor approximation problem and demonstrates that for orthogonally decomposable tensors the problem has no spurious local minima and all saddle points are strict, except for the one at zero, which is a third-order saddle point.
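For the rank-one case, the Burer-Monteiro idea amounts to running plain gradient descent on the factored objective f(x) = ||T - x⊗x⊗x||_F², whose gradient for a symmetric T is -6 T(x, x, ·) + 6 ||x||⁴ x. The tensor below is a hypothetical orthogonally decomposable example, not from the paper.

```python
import numpy as np

def rank1_tensor_gd(T, x0, lr=0.02, n_iter=2000):
    """Gradient descent on the factored objective f(x) = ||T - x (x) x (x) x||_F^2
    for a symmetric order-3 tensor T."""
    x = x0.copy()
    for _ in range(n_iter):
        Txx = np.einsum('ijk,j,k->i', T, x, x)        # the vector T(x, x, .)
        grad = -6.0 * Txx + 6.0 * (x @ x) ** 2 * x    # gradient of f at x
        x -= lr * grad
    return x

# Hypothetical unit-norm ground-truth factor; T = a (x) a (x) a is
# (trivially) orthogonally decomposable, so no spurious local minima.
a = np.array([0.6, 0.8, 0.0])
T = np.einsum('i,j,k->ijk', a, a, a)
x_hat = rank1_tensor_gd(T, x0=a + 0.05)
```

Started in a neighborhood of the true factor, the iterates converge linearly, consistent with the local analysis in the abstract.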
47

Al-Mashhadani, Firas, Ibrahim Al-Jadir and Qusay Alsaffar. "An enhanced krill herd optimization technique used for classification problem." Przegląd Naukowy Inżynieria i Kształtowanie Środowiska 30, no. 2 (5 July 2021): 354–64. http://dx.doi.org/10.22630/pniks.2021.30.2.30.

Full text
Abstract:
This paper aims to improve the optimization of classification problems in machine learning. The enhanced krill herd (EKH) algorithm is a global search optimization method: it allocates the best representation of the solution (a krill individual) and uses simulated annealing (SA) to modify the generated krill individuals (each individual represents a set of bits). Test results showed that the EKH outperformed other methods on both external and internal evaluation measures.
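The SA refinement of bit-string individuals described above can be sketched generically. OneMax (count of ones) is a toy fitness standing in for the paper's classification objective; the details of the EKH update itself are not reproduced here.

```python
import math
import random

def sa_refine(bits, fitness, n_iter=500, t0=1.0, cooling=0.99, seed=0):
    """SA refinement of a bit-string individual (maximization):
    flip one bit per step, accept worse strings with prob exp(delta/T)."""
    rng = random.Random(seed)
    cur, fcur = list(bits), fitness(bits)
    t = t0
    for _ in range(n_iter):
        cand = list(cur)
        i = rng.randrange(len(cand))
        cand[i] ^= 1                     # flip one randomly chosen bit
        fc = fitness(cand)
        # delta = fc - fcur <= 0 in the probabilistic branch, so exp() <= 1.
        if fc >= fcur or rng.random() < math.exp((fc - fcur) / max(t, 1e-12)):
            cur, fcur = cand, fc
        t *= cooling
    return cur, fcur

# Toy maximization: OneMax, i.e. the number of ones in the string.
best, fbest = sa_refine([0] * 12, sum)
```

In EKH this step would be applied to krill individuals between herd updates, with the classification quality measure as `fitness`.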
48

Liang, Chunyu, Xin Xu, Heping Chen, Wensheng Wang, Kunkun Zheng, Guojin Tan, Zhengwei Gu and Hao Zhang. "Machine Learning Approach to Develop a Novel Multi-Objective Optimization Method for Pavement Material Proportion." Applied Sciences 11, no. 2 (17 January 2021): 835. http://dx.doi.org/10.3390/app11020835.

Full text
Abstract:
Asphalt mixture proportion design is one of the most important steps in asphalt pavement design and application. This study proposes a novel multi-objective particle swarm optimization (MOPSO) algorithm employing the Gaussian process regression (GPR)-based machine learning (ML) method for multi-variable, multi-level optimization problems with multiple constraints. First, the GPR-based ML method is proposed to model the objective and constraint functions without explicit relationships between variables and objectives. In the optimization step, the metaheuristic algorithm based on adaptive weight multi-objective particle swarm optimization (AWMOPSO) is used to reach the global optimal solution, which is very efficient for objectives and constraints without mathematical relationships. The results showed that the optimal GPR model could describe the relationship between variables and objectives well in terms of root-mean-square error (RMSE) and R2. After optimization by the proposed GPR-AWMOPSO algorithm, the comprehensive pavement performances were enhanced in terms of permanent deformation resistance at high temperature, crack resistance at low temperature, and moisture stability. Therefore, the proposed GPR-AWMOPSO algorithm is an efficient option for maximizing the performances of composite modified asphalt mixture. The GPR-AWMOPSO algorithm requires less computational time and fewer samples, and achieves higher accuracy, than traditional laboratory-based experimental methods, and can serve as guidance for the proportion optimization design of asphalt pavement.
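The GPR surrogate at the heart of the method can be sketched in plain NumPy as the posterior-mean predictor with an RBF kernel. The kernel choice, its hyper-parameters, and the 1-D toy data are assumptions for illustration, not the study's actual model of mixture performance.

```python
import numpy as np

def gpr_fit_predict(X, y, X_new, length_scale=0.3, noise=1e-8):
    """Minimal Gaussian process regression (posterior mean only) with an
    RBF kernel, standing in for the surrogate mapping mix variables to
    pavement performance."""
    def rbf(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length_scale ** 2)
    K = rbf(X, X) + noise * np.eye(len(X))   # jitter for numerical stability
    alpha = np.linalg.solve(K, y)            # weights K^{-1} y
    return rbf(X_new, X) @ alpha             # posterior mean at X_new

# Toy 1-D check: 8 samples of a smooth function, prediction at x = 0.5.
X_train = np.linspace(0, 1, 8)[:, None]
y_train = np.sin(2 * np.pi * X_train[:, 0])
y_mid = gpr_fit_predict(X_train, y_train, np.array([[0.5]]))
```

In the full pipeline, AWMOPSO would query this surrogate (rather than the laboratory) when evaluating candidate mix proportions.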
49

Arrinda, Mikel, Gorka Vertiz, Denis Sanchéz, Aitor Makibar and Haritz Macicior. "Surrogate Model of the Optimum Global Battery Pack Thermal Management System Control." Energies 15, no. 5 (24 February 2022): 1695. http://dx.doi.org/10.3390/en15051695.

Full text
Abstract:
The control of the battery thermal management system (BTMS) is key to preventing catastrophic events and ensuring long battery lifespans. Nonetheless, high-quality BTMS control faces several technical challenges: safe and homogeneous control of a multi-element system with just one actuator, limited computational resources, and energy consumption restrictions. To address those challenges and restrictions, we propose a surrogate BTMS control model consisting of a classification machine-learning model that defines the optimum cooling-heating power of the actuator according to several temperature measurements. The labelled data required to build the control model are generated from a simulation environment that integrates model predictive control and linear optimization concepts. As a result, a controller is constructed that optimally drives the actuator from multi-input temperature signals in a multi-objective optimization problem. This paper benchmarks the response of the proposal using different classification machine-learning models and compares them with the responses of a state diagram controller and a PID controller. The results show that the proposed surrogate model consumes 35% less energy than the evaluated state diagram controller, and 60% less than a traditional PID controller, while dealing with multi-input and multi-objective systems.
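A heavily simplified sketch of the surrogate idea: a classifier trained on controller-labelled data maps temperature readings to an actuator command. The nearest-centroid classifier and the threshold "teacher" rule below are assumptions for illustration; the paper uses labels generated by a model-predictive controller, not this rule.

```python
import numpy as np

def fit_centroids(X, labels):
    """Nearest-centroid classifier: one mean vector per actuator command."""
    return {c: X[labels == c].mean(axis=0) for c in np.unique(labels)}

def predict(centroids, x):
    """Pick the command whose centroid is closest to the temperature reading."""
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

rng = np.random.default_rng(0)
X = rng.uniform(15, 45, size=(200, 4))         # four cell temperatures (deg C)
labels = np.where(X.mean(axis=1) > 30, 1, 0)   # toy teacher: 1 = cool, 0 = idle
cent = fit_centroids(X, labels)
```

The trained classifier then replaces the expensive online optimization at runtime, which is what makes the surrogate cheap enough for an embedded BTMS controller.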
50

Feng, Yi, Mengru Liu, Yuqian Zhang and Jinglin Wang. "A Dynamic Opposite Learning Assisted Grasshopper Optimization Algorithm for the Flexible Job Scheduling Problem." Complexity 2020 (30 December 2020): 1–19. http://dx.doi.org/10.1155/2020/8870783.

Full text
Abstract:
The job shop scheduling problem (JSP) is one of the most difficult optimization problems in the manufacturing industry, and the flexible job shop scheduling problem (FJSP) is an extension of the classical JSP that further challenges algorithm performance. In FJSP, a machine must be selected for each operation from a given set, which introduces another decision element within the job path, making FJSP more difficult than the traditional JSP. In this paper, a variant of the grasshopper optimization algorithm (GOA) named dynamic opposite learning assisted GOA (DOLGOA) is proposed to solve FJSP. The recently proposed dynamic opposite learning (DOL) strategy adopts an asymmetric search space to improve the exploitation ability of the algorithm and increase the possibility of finding the global optimum. Various popular benchmarks from CEC 2014 and FJSP are used to evaluate the performance of DOLGOA. Numerical results, with comparisons against other classic algorithms, show that DOLGOA achieves clear improvements in solving global optimization problems and performs well on FJSP.
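The dynamic opposite learning step can be sketched as follows. This follows the commonly cited DOL formulation (classic opposite point plus a randomly scaled move toward it, yielding an asymmetric search region) and is an approximation for illustration; the paper's exact update may differ.

```python
import numpy as np

def dynamic_opposite(X, lo, hi, w=1.0, rng=None):
    """Dynamic opposite points for a population X within bounds [lo, hi].

    Classic opposition reflects X to lo + hi - X; the dynamic variant moves
    each candidate a random fraction toward a randomly scaled opposite,
    giving an asymmetric, changing search region around the population.
    """
    rng = np.random.default_rng(rng)
    opp = lo + hi - X                                           # classic opposite point
    Xdo = X + w * rng.random(X.shape) * (rng.random(X.shape) * opp - X)
    return np.clip(Xdo, lo, hi)                                 # keep inside the bounds

# Hypothetical population of 20 candidates in a 3-D search space.
pop = np.random.default_rng(0).uniform(-5, 5, size=(20, 3))
pop_do = dynamic_opposite(pop, -5.0, 5.0, rng=1)
```

In DOLGOA, the better of each candidate and its dynamic opposite would be kept before the regular grasshopper position update.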