
Journal articles on the topic "Machine learning, Global Optimization"


Consult the top 50 journal articles for your research on the topic "Machine learning, Global Optimization".

Next to each source in the list of references there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.

1. Cassioli, A., D. Di Lorenzo, M. Locatelli, F. Schoen, and M. Sciandrone. "Machine learning for global optimization". Computational Optimization and Applications 51, no. 1 (May 5, 2010): 279–303. http://dx.doi.org/10.1007/s10589-010-9330-x.

2. Kudyshev, Zhaxylyk A., Alexander V. Kildishev, Vladimir M. Shalaev, and Alexandra Boltasseva. "Machine learning–assisted global optimization of photonic devices". Nanophotonics 10, no. 1 (October 28, 2020): 371–83. http://dx.doi.org/10.1515/nanoph-2020-0376.

Over the past decade, artificially engineered optical materials and nanostructured thin films have revolutionized the area of photonics by employing novel concepts of metamaterials and metasurfaces where spatially varying structures yield tailorable “by design” effective electromagnetic properties. The current state-of-the-art approach to designing and optimizing such structures relies heavily on simplistic, intuitive shapes for their unit cells or metaatoms. Such an approach cannot provide the global solution to a complex optimization problem where metaatom shape, in-plane geometry, out-of-plane architecture, and constituent materials have to be properly chosen to yield the maximum performance. In this work, we present a novel machine learning–assisted global optimization framework for photonic metadevice design. We demonstrate that using an adversarial autoencoder (AAE) coupled with a metaheuristic optimization framework significantly enhances the optimization search efficiency of the metadevice configurations with complex topologies. We showcase the concept of physics-driven compressed design space engineering that introduces advanced regularization into the compressed space of an AAE based on the optical responses of the devices. Beyond the significant advancement of the global optimization schemes, our approach can assist in gaining comprehensive design “intuition” by revealing the underlying physics of the optical performance of metadevices with complex topologies and material compositions.
3. Abdul Salam, Mustafa, Ahmad Taher Azar, and Rana Hussien. "Swarm-Based Extreme Learning Machine Models for Global Optimization". Computers, Materials & Continua 70, no. 3 (2022): 6339–63. http://dx.doi.org/10.32604/cmc.2022.020583.

4. TAKAMATSU, Ryosuke, and Wataru YAMAZAKI. "Global topology optimization of supersonic airfoil using machine learning technologies". Proceedings of The Computational Mechanics Conference 2021.34 (2021): 112. http://dx.doi.org/10.1299/jsmecmd.2021.34.112.

5. Tsoulos, Ioannis G., Alexandros Tzallas, Evangelos Karvounis, and Dimitrios Tsalikakis. "NeuralMinimizer: A Novel Method for Global Optimization". Information 14, no. 2 (January 25, 2023): 66. http://dx.doi.org/10.3390/info14020066.

The problem of finding the global minimum of multidimensional functions arises in a wide range of applications. An innovative method for finding the global minimum of multidimensional functions is presented here. The method first generates an approximation of the objective function using only a few real samples from it; these samples are used to construct the approximation with a machine learning model. Next, the required sampling is performed on the approximation function. Furthermore, the approximation is improved at each step by adding the local minima found to the training set of the machine learning model. As a termination criterion, the proposed technique uses a widely used criterion from the relevant literature, which is evaluated after each execution of the local minimization. The proposed technique was applied to a number of well-known problems from the relevant literature, and the comparative results with respect to modern global minimization techniques are extremely promising.
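To make the general workflow described in this abstract concrete, the sketch below shows a generic surrogate-assisted global minimizer: fit a cheap ML approximation from a few real samples, pick promising points on the surrogate, refine them with a local optimizer, and feed the refined points back into the training set. It is only an illustration of that idea, not the NeuralMinimizer algorithm itself; the regressor, sample sizes, and local optimizer are arbitrary choices.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.neural_network import MLPRegressor

def surrogate_global_min(f, bounds, n_init=20, n_rounds=10, seed=None):
    """Generic surrogate-assisted global minimization (illustrative only)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    X = rng.uniform(lo, hi, size=(n_init, len(bounds)))      # a few real samples
    y = np.array([f(x) for x in X])
    for _ in range(n_rounds):
        surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000).fit(X, y)
        # cheap sampling is done on the surrogate, not on the real objective
        cand = rng.uniform(lo, hi, size=(2000, len(bounds)))
        x0 = cand[np.argmin(surrogate.predict(cand))]
        # local refinement on the real objective; the minimum found is fed back
        res = minimize(f, x0, bounds=bounds)
        X, y = np.vstack([X, res.x]), np.append(y, res.fun)
    return X[np.argmin(y)], y.min()

# Example: surrogate_global_min(lambda x: float(np.sum(x**2)), [(-5.0, 5.0)] * 4)
```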
6. Honda, M., and E. Narita. "Machine-learning assisted steady-state profile predictions using global optimization techniques". Physics of Plasmas 26, no. 10 (October 2019): 102307. http://dx.doi.org/10.1063/1.5117846.

7. Wu, Shaohua, Yong Hu, Wei Wang, Xinyong Feng, and Wanneng Shu. "Application of Global Optimization Methods for Feature Selection and Machine Learning". Mathematical Problems in Engineering 2013 (2013): 1–8. http://dx.doi.org/10.1155/2013/241517.

The feature selection process constitutes a commonly encountered problem of global combinatorial optimization. The process reduces the number of features by removing irrelevant and redundant data. This paper proposed a novel immune clonal genetic algorithm based on immune clonal algorithm designed to solve the feature selection problem. The proposed algorithm has more exploration and exploitation abilities due to the clonal selection theory, and each antibody in the search space specifies a subset of the possible features. Experimental results show that the proposed algorithm simplifies the feature selection process effectively and obtains higher classification accuracy than other feature selection algorithms.
8. Ma, Sicong, Cheng Shang, Chuan-Ming Wang, and Zhi-Pan Liu. "Thermodynamic rules for zeolite formation from machine learning based global optimization". Chemical Science 11, no. 37 (2020): 10113–18. http://dx.doi.org/10.1039/d0sc03918g.

Machine learning based atomic simulation explores more than one million minima on the global potential energy surface of the SiAlPO system and identifies thermodynamic rules on energetics, framework, and composition for stable zeolites.
9. Huang, Si-Da, Cheng Shang, Pei-Lin Kang, and Zhi-Pan Liu. "Atomic structure of boron resolved using machine learning and global sampling". Chemical Science 9, no. 46 (2018): 8644–55. http://dx.doi.org/10.1039/c8sc03427c.

10. Barkalov, Konstantin, Ilya Lebedev, and Evgeny Kozinov. "Acceleration of Global Optimization Algorithm by Detecting Local Extrema Based on Machine Learning". Entropy 23, no. 10 (September 28, 2021): 1272. http://dx.doi.org/10.3390/e23101272.

This paper features the study of global optimization problems and numerical methods of their solution. Such problems are computationally expensive since the objective function can be multi-extremal, nondifferentiable, and, as a rule, given in the form of a “black box”. This study used a deterministic algorithm for finding the global extremum. This algorithm is based neither on the concept of multistart, nor nature-inspired algorithms. The article provides computational rules of the one-dimensional algorithm and the nested optimization scheme which could be applied for solving multidimensional problems. Please note that the solution complexity of global optimization problems essentially depends on the presence of multiple local extrema. In this paper, we apply machine learning methods to identify regions of attraction of local minima. The use of local optimization algorithms in the selected regions can significantly accelerate the convergence of global search as it could reduce the number of search trials in the vicinity of local minima. The results of computational experiments carried out on several hundred global optimization problems of different dimensionalities presented in the paper confirm the effect of accelerated convergence (in terms of the number of search trials required to solve a problem with a given accuracy).
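As a rough, hedged illustration of the general idea of combining global search trials with ML-detected regions of local minima (this is not the deterministic algorithm or the ML models used in the cited paper), one could cluster the best trial points found so far and launch a single local refinement per detected region:

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.cluster import KMeans

def refine_detected_regions(f, X_trials, y_trials, bounds, n_regions=5, top_frac=0.2):
    """Cluster the best global-search trials and refine one point per cluster."""
    n_keep = max(int(top_frac * len(y_trials)), n_regions)
    best_idx = np.argsort(y_trials)[:n_keep]                 # most promising trials
    labels = KMeans(n_clusters=n_regions, n_init=10).fit_predict(X_trials[best_idx])
    results = []
    for c in range(n_regions):
        members = X_trials[best_idx][labels == c]
        if len(members) == 0:
            continue
        x0 = members[np.argmin([f(x) for x in members])]     # best point of the region
        results.append(minimize(f, x0, bounds=bounds))       # local search in the region
    best = min(results, key=lambda r: r.fun)
    return best.x, best.fun
```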
11. Wang, Wei-Ching. "Sound localization via deep learning, generative modeling, and global optimization". Journal of the Acoustical Society of America 151, no. 4 (April 2022): A255. http://dx.doi.org/10.1121/10.0011240.

An acoustic lens is capable of focusing incident plane waves at the focal point. This talk will discuss a new approach to design metamaterials with a focusing effect using effective and innovative methods by using machine learning. Specifically, the physics simulations use multiple scattering theory and machine learning techniques such as deep learning and generative modeling. The 2-D-Global Optimization Networks (2-D-GLOnets) model [1] developed initially for acoustic cloak design is adapted and generalized to design and optimize the acoustic lens. We supply the absolute pressure amplitude and gradient into the deep learning algorithms to discover the optimal scatterer positions that maximize the absolute pressure at the focal point in the confined region. We examine and evaluate the performance of the generative network, searching for an optimal configuration of scatterers over a range of parameters to produce desired features, such as broadband focusing and localization effects. The model will show examples of planar configurations of cylindrical structures producing focusing effect. [1] L. Zhuo and F. Amirkulova, “Design of acoustic cloak using generative modeling and gradient-based optimization,” in INTER-NOISE and NOISE-CON Congress and Conference Proceedings (Institute of Noise Control Engineering, 2021), Vol. 3.
12. Zhang, Ao, Yan Liu, Jinguang Yang, Zhi Li, Chuang Zhang, and Yiwen Li. "Machine learning based design optimization of centrifugal impellers". Journal of the Global Power and Propulsion Society 6 (July 25, 2022): 124–34. http://dx.doi.org/10.33737/jgpps/150663.

Big data and machine learning are developing rapidly, and their applications in the aerodynamic design of centrifugal impellers and other turbomachinery have attracted wide attention. In this paper, centrifugal impellers with a large flow coefficient (0.18–0.22) are taken as research objects. Firstly, through one-dimensional design and optimization, the main one-dimensional geometric parameters of those centrifugal impellers are obtained. Subsequently, hundreds of samples of centrifugal impellers are obtained by using an in-house parameterization program and the Latin hypercube sampling method. The NUMECA software is used for CFD calculations to build a sample library of centrifugal impellers. Then, applying an artificial neural network (ANN) to the data in the sample library, a nonlinear model between the flow coefficients, the geometric parameters of these centrifugal impellers, and the aerodynamic performance is constructed, which can replace CFD calculations. Lastly, with the help of a multi-objective genetic algorithm, a global optimization is carried out to fulfill a rapid design optimization for centrifugal impellers with flow coefficients in the range of 0.18–0.22. Three examples provided in the paper show that the design and optimization method described above is faster and more reliable compared with the traditional design method. This method provides a new way for the rapid design of centrifugal impellers.
13. Ha, Seung-Yeal, Shi Jin, and Doheon Kim. "Convergence of a first-order consensus-based global optimization algorithm". Mathematical Models and Methods in Applied Sciences 30, no. 12 (September 19, 2020): 2417–44. http://dx.doi.org/10.1142/s0218202520500463.

Global optimization of a non-convex objective function often appears in large-scale machine learning and artificial intelligence applications. Recently, consensus-based optimization (CBO) methods have been introduced as one of the gradient-free optimization methods. In this paper, we provide a convergence analysis for the first-order CBO method in [J. A. Carrillo, S. Jin, L. Li and Y. Zhu, A consensus-based global optimization method for high dimensional machine learning problems, https://arxiv.org/abs/1909.09249v1 ]. Prior to this work, the convergence study was carried out for CBO methods on corresponding mean-field limit, a Fokker–Planck equation, which does not imply the convergence of the CBO method per se. Based on the consensus estimate directly on the first-order CBO model, we provide a convergence analysis of the first-order CBO method [J. A. Carrillo, S. Jin, L. Li and Y. Zhu, A consensus-based global optimization method for high dimensional machine learning problems, https://arxiv.org/abs/1909.09249v1 ] without resorting to the corresponding mean-field model. Our convergence analysis consists of two steps. In the first step, we show that the CBO model exhibits a global consensus time asymptotically for any initial data, and in the second step, we provide a sufficient condition on system parameters — which is dimension independent — and initial data which guarantee that the converged consensus state lies in a small neighborhood of the global minimum almost surely.
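For orientation, the first-order CBO dynamics discussed here can be written in a standard discretized form, where each particle drifts toward a weighted consensus point and is perturbed by noise scaled by its distance from that point. The sketch below follows that standard form from the CBO literature with illustrative parameter values and an anisotropic noise variant; it is not claimed to be the exact scheme analyzed in the cited paper.

```python
import numpy as np

def cbo_step(X, f, lam=1.0, sigma=0.7, beta=30.0, dt=0.01, rng=None):
    """One explicit Euler step of a standard first-order CBO scheme (illustrative)."""
    rng = rng or np.random.default_rng()
    fx = np.array([f(x) for x in X])
    w = np.exp(-beta * (fx - fx.min()))              # Gibbs weights, shifted for stability
    x_bar = (w[:, None] * X).sum(axis=0) / w.sum()   # weighted consensus point
    drift = -lam * (X - x_bar) * dt                  # contraction toward the consensus
    noise = sigma * np.sqrt(dt) * (X - x_bar) * rng.standard_normal(X.shape)
    return X + drift + noise                         # anisotropic (componentwise) noise

# Example: iterate cbo_step on X of shape (n_particles, dim) until the particles collapse.
```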
14. Spillard, Samuel, Christopher J. Turner, and Konstantinos Meichanetzidis. "Machine learning entanglement freedom". International Journal of Quantum Information 16, no. 08 (December 2018): 1840002. http://dx.doi.org/10.1142/s0219749918400026.

Quantum many-body systems realize many different phases of matter characterized by their exotic emergent phenomena. While some simple versions of these properties can occur in systems of free fermions, their occurrence generally implies that the physics is dictated by an interacting Hamiltonian. The interaction distance has been successfully used to quantify the effect of interactions in a variety of states of matter via the entanglement spectrum [C. J. Turner, K. Meichanetzidis, Z. Papic and J. K. Pachos, Nat. Commun. 8 (2017) 14926, Phys. Rev. B 97 (2018) 125104]. The computation of the interaction distance reduces to a global optimization problem whose goal is to search for the free-fermion entanglement spectrum closest to the given entanglement spectrum. In this work, we employ techniques from machine learning in order to perform this same task. In a supervised learning setting, we use labeled data obtained by computing the interaction distance and predict its value via linear regression. Moving to a semi-supervised setting, we train an autoencoder to estimate an alternative measure to the interaction distance, and we show that it behaves in a similar manner.
15. Candelieri, Antonio, and Francesco Archetti. "Global optimization in machine learning: the design of a predictive analytics application". Soft Computing 23, no. 9 (November 1, 2018): 2969–77. http://dx.doi.org/10.1007/s00500-018-3597-8.

16. Li, Shijin, and Fucai Wang. "Research on Optimization of Improved Gray Wolf Optimization-Extreme Learning Machine Algorithm in Vehicle Route Planning". Discrete Dynamics in Nature and Society 2020 (October 6, 2020): 1–7. http://dx.doi.org/10.1155/2020/8647820.

With the rapid development of intelligent transportation, intelligent algorithms and path planning have become effective methods to relieve traffic pressure. Intelligent algorithms can prioritize route selection and improve traffic optimization efficiency. However, such algorithms tend to get trapped in local optima, making it difficult to reach the global optimum. In this paper, an anti-learning model is used to keep the gray wolf algorithm from stagnating in local optima: the positions of different wolves are updated, and when the search falls into a local optimum, the current position is further optimized to pursue the global optimum. An Extreme Learning Machine (ELM) model is introduced to accelerate the Improved Gray Wolf Optimization (IGWO) and improve its convergence speed. Finally, experiments comparing the IGWO-ELM algorithm in path planning show that the algorithm is effective and highly efficient.
17. Kramer, Oliver. "On Machine Symbol Grounding and Optimization". International Journal of Cognitive Informatics and Natural Intelligence 5, no. 3 (July 2011): 73–85. http://dx.doi.org/10.4018/ijcini.2011070105.

From the point of view of an autonomous agent the world consists of high-dimensional dynamic sensorimotor data. Interface algorithms translate this data into symbols that are easier to handle for cognitive processes. Symbol grounding is about whether these systems can, based on this data, construct symbols that serve as a vehicle for higher symbol-oriented cognitive processes. Machine learning and data mining techniques are geared towards finding structures and input-output relations in this data by providing appropriate interface algorithms that translate raw data into symbols. This work formulates the interface design as global optimization problem with the objective to maximize the success of the overlying symbolic algorithm. For its implementation various known algorithms from data mining and machine learning turn out to be adequate methods that do not only exploit the intrinsic structure of the subsymbolic data, but that also allow to flexibly adapt to the objectives of the symbolic process. Furthermore, this work discusses the optimization formulation as a functional perspective on symbol grounding that does not hurt the zero semantical commitment condition. A case study illustrates technical details of the machine symbol grounding approach.
18. Kubwimana, Benjamin, and Hamidreza Najafi. "A Novel Approach for Optimizing Building Energy Models Using Machine Learning Algorithms". Energies 16, no. 3 (January 17, 2023): 1033. http://dx.doi.org/10.3390/en16031033.

The current practice with building energy simulation software tools requires the manual entry of a large list of detailed inputs pertaining to the building characteristics, geographical region, schedule of operation, end users, occupancy, control aspects, and more. While these software tools allow the evaluation of the energy consumption of a building with various combinations of building parameters, with the manual information entry and considering the large number of parameters related to building design and operation, global optimization is extremely challenging. In the present paper, a novel approach is developed for the global optimization of building energy models (BEMs) using Python EnergyPlus. A Python-based script is developed to automate the data entry into the building energy modeling tool (EnergyPlus) and numerous possible designs that cover the desired ranges of multiple variables are simulated. The resulting datasets are then used to establish a surrogate BEM using an artificial neural network (ANN) which is optimized through two different approaches, including Bayesian optimization and a genetic algorithm. To demonstrate the proposed approach, a case study is performed for a building on the campus of the Florida Institute of Technology, located in Melbourne, FL, USA. Eight parameters are selected and 200 variations of them are supplied to EnergyPlus, and the produced results from the simulations are used to train an ANN-based surrogate model. The surrogate model achieved a maximum of 90% R2 through hyperparameter tuning. The two optimization approaches, including the genetic algorithm and the Bayesian method, were applied to the surrogate model, and the optimal designs achieved annual energy consumptions of 11.3 MWh and 12.7 MWh, respectively. It was shown that the approach presented bridges between the physics-based building energy models and the strong optimization tools available in Python, which can allow the achievement of global optimization in a computationally efficient fashion.
19. Li, Yibo, Chao Liu, Senyue Zhang, Wenan Tan, and Yanyan Ding. "Reproducing Polynomial Kernel Extreme Learning Machine". Journal of Advanced Computational Intelligence and Intelligent Informatics 21, no. 5 (September 20, 2017): 795–802. http://dx.doi.org/10.20965/jaciii.2017.p0795.

The conventional kernel support vector machine (KSVM) suffers from slow training, and the single-kernel extreme learning machine (KELM) also has performance limitations. This paper therefore proposes a new combined KELM model built from the polynomial kernel and a reproducing kernel on a Sobolev Hilbert space. The model combines the advantages of global and local kernel functions and trains quickly. At the same time, an efficient optimization algorithm, the cuckoo search algorithm, is adopted to avoid blindness and inaccuracy in parameter selection. Experiments performed on the bi-spiral benchmark dataset, the Banana dataset, and a number of classification and regression datasets from the UCI benchmark repository illustrate the feasibility of the proposed model. It achieves better robustness and generalization performance than other conventional KELM and KSVM models, which demonstrates its effectiveness and usefulness.
20. Barkalov, Konstantin, Ilya Lebedev, Marina Usova, Daria Romanova, Daniil Ryazanov, and Sergei Strijhak. "Optimization of Turbulence Model Parameters Using the Global Search Method Combined with Machine Learning". Mathematics 10, no. 15 (July 31, 2022): 2708. http://dx.doi.org/10.3390/math10152708.

The paper considers the slope flow simulation and the problem of finding the optimal parameter values of this mathematical model. The slope flow is modeled using the finite volume method applied to the Reynolds-averaged Navier–Stokes equations with closure in the form of the k−ω SST turbulence model. The optimal values of the turbulence model coefficients for free surface gravity multiphase flows were found using the global search algorithm. Calibration was performed to increase the similarity of the experimental and calculated velocity profiles. The Root Mean Square Error (RMSE) of deviation between the calculated flow velocity profile and the experimental one is considered as the objective function in the optimization problem. The calibration of the turbulence model coefficients for calculating the free surface flows on test slopes using the multiphase model for interphase tracking has not been performed previously. To solve the multi-extremal optimization problem arising from the search for the minimum of the loss function for the flow velocity profile, we apply a new optimization approach using a Peano curve to reduce the dimensionality of the problem. To speed up the optimization procedure, the objective function was approximated using an artificial neural network. Thus, an interdisciplinary approach was applied which allowed the optimal values of six turbulence model parameters to be found using OpenFOAM and Globalizer software.
21. Özöğür Akyüz, Süreyya, Gürkan Üstünkar, and Gerhard Wilhelm Weber. "Adapted Infinite Kernel Learning by Multi-Local Algorithm". International Journal of Pattern Recognition and Artificial Intelligence 30, no. 04 (April 12, 2016): 1651004. http://dx.doi.org/10.1142/s0218001416510046.

The interplay of machine learning (ML) and optimization methods is an emerging field of artificial intelligence. Both ML and optimization are concerned with modeling of systems related to real-world problems. Parameter selection for classification models is an important task for ML algorithms. In statistical learning theory, cross-validation (CV) which is the most well-known model selection method can be very time consuming for large data sets. One of the recent model selection techniques developed for support vector machines (SVMs) is based on the observed test point margins. In this study, observed margin strategy is integrated into our novel infinite kernel learning (IKL) algorithm together with multi-local procedure (MLP) which is an optimization technique to find global solution. The experimental results show improvements in accuracy and speed when comparing with multiple kernel learning (MKL) and semi-infinite linear programming (SILP) with CV.
22. Smithies, Rob, Said Salhi, and Nat Queen. "Adaptive Hybrid Learning for Neural Networks". Neural Computation 16, no. 1 (January 1, 2004): 139–57. http://dx.doi.org/10.1162/08997660460734038.

A robust locally adaptive learning algorithm is developed via two enhancements of the Resilient Propagation (RPROP) method. Remaining drawbacks of the gradient-based approach are addressed by hybridization with gradient-independent Local Search. Finally, a global optimization method based on recursion of the hybrid is constructed, making use of tabu neighborhoods to accelerate the search for minima through diversification. Enhanced RPROP is shown to be faster and more accurate than the standard RPROP in solving classification tasks based on natural data sets taken from the UCI repository of machine learning databases. Furthermore, the use of Local Search is shown to improve Enhanced RPROP by solving the same classification tasks as part of the global optimization method.
23. Yi, Dokkyun, Sangmin Ji, and Sunyoung Bu. "An Enhanced Optimization Scheme Based on Gradient Descent Methods for Machine Learning". Symmetry 11, no. 7 (July 20, 2019): 942. http://dx.doi.org/10.3390/sym11070942.

The learning process of machine learning consists of finding the values of unknown weights in a cost function by minimizing the cost function based on learning data. However, since the cost function is not convex, finding its minimum value is difficult. The existing methods used to find the minimum values usually use the first derivative of the cost function. When a local minimum (but not a global minimum) is reached, the first derivative of the cost function becomes zero, so the methods return the local minimum values and the desired global minimum cannot be found. To overcome this problem, in this paper we modified one of the existing schemes—the adaptive momentum estimation scheme—by adding a new term, so that it can prevent the new optimizer from staying at a local minimum. The convergence condition for the proposed scheme and the convergence value are also analyzed, and further explained through several numerical experiments whose cost functions are non-convex.
24. Song, Tao, Jiarong Wang, Danya Xu, Wei Wei, Runsheng Han, Fan Meng, Ying Li, and Pengfei Xie. "Unsupervised Machine Learning for Improved Delaunay Triangulation". Journal of Marine Science and Engineering 9, no. 12 (December 7, 2021): 1398. http://dx.doi.org/10.3390/jmse9121398.

Physical oceanography models rely heavily on grid discretization. It is known that unstructured grids perform well in dealing with boundary fitting problems in complex nearshore regions. However, it is time-consuming to find a set of unstructured grids in specific ocean areas, particularly in the case of land areas that are frequently changed by human construction. In this work, an attempt was made to use machine learning for the optimization of the unstructured triangular meshes formed with Delaunay triangulation in the global ocean field, so that the triangles in the triangular mesh were closer to equilateral triangles, the long, narrow triangles in the triangular mesh were reduced, and the mesh quality was improved. Specifically, we used Delaunay triangulation to generate the unstructured grid, and then developed a K-means clustering-based algorithm to optimize the unstructured grid. With the proposed method, unstructured meshes were generated and optimized for global oceans, small sea areas, and the South China Sea estuary to carry out data experiments. The results suggested that the proportion of triangles with a triangle shape factor greater than 0.7 amounted to 77.80%, 79.78%, and 79.78%, respectively, in the unstructured mesh. Meanwhile, the proportion of long, narrow triangles in the unstructured mesh was decreased to 8.99%, 3.46%, and 4.12%, respectively.
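As a small illustration of the kind of mesh-quality measure discussed in this abstract, the snippet below builds a Delaunay triangulation with SciPy and computes a common triangle shape factor (4√3 times the area divided by the sum of squared edge lengths, equal to 1 for an equilateral triangle); the paper's exact definition of its shape factor and its K-means-based optimization step may differ.

```python
import numpy as np
from scipy.spatial import Delaunay

def triangle_shape_factors(points):
    """Shape factor per triangle of a Delaunay mesh; 1.0 means equilateral."""
    tri = Delaunay(points)
    p = points[tri.simplices]                        # (n_triangles, 3, 2) vertex coords
    a = np.linalg.norm(p[:, 1] - p[:, 0], axis=1)
    b = np.linalg.norm(p[:, 2] - p[:, 1], axis=1)
    c = np.linalg.norm(p[:, 0] - p[:, 2], axis=1)
    s = (a + b + c) / 2.0
    area = np.sqrt(np.maximum(s * (s - a) * (s - b) * (s - c), 0.0))   # Heron's formula
    return 4.0 * np.sqrt(3.0) * area / (a**2 + b**2 + c**2)

# e.g. the share of "good" triangles reported in the abstract:
# good_fraction = (triangle_shape_factors(points) > 0.7).mean()
```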
25. Li, Yang, Zhichuan Zhu, Alin Hou, Qingdong Zhao, Liwei Liu, and Lijuan Zhang. "Pulmonary Nodule Recognition Based on Multiple Kernel Learning Support Vector Machine-PSO". Computational and Mathematical Methods in Medicine 2018 (2018): 1–10. http://dx.doi.org/10.1155/2018/1461470.

Pulmonary nodule recognition is the core module of lung CAD. The Support Vector Machine (SVM) algorithm has been widely used in pulmonary nodule recognition, and the algorithm of Multiple Kernel Learning Support Vector Machine (MKL-SVM) has achieved good results therein. Based on grid search, however, the MKL-SVM algorithm needs long optimization time in course of parameter optimization; also its identification accuracy depends on the fineness of grid. In the paper, swarm intelligence is introduced and the Particle Swarm Optimization (PSO) is combined with MKL-SVM algorithm to be MKL-SVM-PSO algorithm so as to realize global optimization of parameters rapidly. In order to obtain the global optimal solution, different inertia weights such as constant inertia weight, linear inertia weight, and nonlinear inertia weight are applied to pulmonary nodules recognition. The experimental results show that the model training time of the proposed MKL-SVM-PSO algorithm is only 1/7 of the training time of the MKL-SVM grid search algorithm, achieving better recognition effect. Moreover, Euclidean norm of normalized error vector is proposed to measure the proximity between the average fitness curve and the optimal fitness curve after convergence. Through statistical analysis of the average of 20 times operation results with different inertial weights, it can be seen that the dynamic inertial weight is superior to the constant inertia weight in the MKL-SVM-PSO algorithm. In the dynamic inertial weight algorithm, the parameter optimization time of nonlinear inertia weight is shorter; the average fitness value after convergence is much closer to the optimal fitness value, which is better than the linear inertial weight. Besides, a better nonlinear inertial weight is verified.
26. Kawaguchi, Kenji, Jiaoyang Huang, and Leslie Pack Kaelbling. "Every Local Minimum Value Is the Global Minimum Value of Induced Model in Nonconvex Machine Learning". Neural Computation 31, no. 12 (December 2019): 2293–323. http://dx.doi.org/10.1162/neco_a_01234.

For nonconvex optimization in machine learning, this article proves that every local minimum achieves the globally optimal value of the perturbable gradient basis model at any differentiable point. As a result, nonconvex machine learning is theoretically as supported as convex machine learning with a handcrafted basis in terms of the loss at differentiable local minima, except in the case when a preference is given to the handcrafted basis over the perturbable gradient basis. The proofs of these results are derived under mild assumptions. Accordingly, the proven results are directly applicable to many machine learning models, including practical deep neural networks, without any modification of practical methods. Furthermore, as special cases of our general results, this article improves or complements several state-of-the-art theoretical results on deep neural networks, deep residual networks, and overparameterized deep neural networks with a unified proof technique and novel geometric insights. A special case of our results also contributes to the theoretical foundation of representation learning.
27. Khan, Waqar Ahmed, S. H. Chung, Muhammad Usman Awan, and Xin Wen. "Machine learning facilitated business intelligence (Part II)". Industrial Management & Data Systems 120, no. 1 (November 27, 2019): 128–63. http://dx.doi.org/10.1108/imds-06-2019-0351.

Purpose The purpose of this paper is three-fold: to review the categories explaining mainly optimization algorithms (techniques) in that needed to improve the generalization performance and learning speed of the Feedforward Neural Network (FNN); to discover the change in research trends by analyzing all six categories (i.e. gradient learning algorithms for network training, gradient free learning algorithms, optimization algorithms for learning rate, bias and variance (underfitting and overfitting) minimization algorithms, constructive topology neural networks, metaheuristic search algorithms) collectively; and recommend new research directions for researchers and facilitate users to understand algorithms real-world applications in solving complex management, engineering and health sciences problems. Design/methodology/approach The FNN has gained much attention from researchers to make a more informed decision in the last few decades. The literature survey is focused on the learning algorithms and the optimization techniques proposed in the last three decades. This paper (Part II) is an extension of Part I. For the sake of simplicity, the paper entitled “Machine learning facilitated business intelligence (Part I): Neural networks learning algorithms and applications” is referred to as Part I. To make the study consistent with Part I, the approach and survey methodology in this paper are kept similar to those in Part I. Findings Combining the work performed in Part I, the authors studied a total of 80 articles through popular keywords searching. The FNN learning algorithms and optimization techniques identified in the selected literature are classified into six categories based on their problem identification, mathematical model, technical reasoning and proposed solution. Previously, in Part I, the two categories focusing on the learning algorithms (i.e. gradient learning algorithms for network training, gradient free learning algorithms) are reviewed with their real-world applications in management, engineering, and health sciences. Therefore, in the current paper, Part II, the remaining four categories, exploring optimization techniques (i.e. optimization algorithms for learning rate, bias and variance (underfitting and overfitting) minimization algorithms, constructive topology neural networks, metaheuristic search algorithms) are studied in detail. The algorithm explanation is made enriched by discussing their technical merits, limitations, and applications in their respective categories. Finally, the authors recommend future new research directions which can contribute to strengthening the literature. Research limitations/implications The FNN contributions are rapidly increasing because of its ability to make reliably informed decisions. Like learning algorithms, reviewed in Part I, the focus is to enrich the comprehensive study by reviewing remaining categories focusing on the optimization techniques. However, future efforts may be needed to incorporate other algorithms into identified six categories or suggest new category to continuously monitor the shift in the research trends. Practical implications The authors studied the shift in research trend for three decades by collectively analyzing the learning algorithms and optimization techniques with their applications. This may help researchers to identify future research gaps to improve the generalization performance and learning speed, and user to understand the applications areas of the FNN. 
For instance, research contribution in FNN in the last three decades has changed from complex gradient-based algorithms to gradient free algorithms, trial and error hidden units fixed topology approach to cascade topology, hyperparameters initial guess to analytically calculation and converging algorithms at a global minimum rather than the local minimum. Originality/value The existing literature surveys include comparative study of the algorithms, identifying algorithms application areas and focusing on specific techniques in that it may not be able to identify algorithms categories, a shift in research trends over time, application area frequently analyzed, common research gaps and collective future directions. Part I and II attempts to overcome the existing literature surveys limitations by classifying articles into six categories covering a wide range of algorithm proposed to improve the FNN generalization performance and convergence rate. The classification of algorithms into six categories helps to analyze the shift in research trend which makes the classification scheme significant and innovative.
28. Guo, Xiaohua. "Optimization of English Machine Translation by Deep Neural Network under Artificial Intelligence". Computational Intelligence and Neuroscience 2022 (April 21, 2022): 1–10. http://dx.doi.org/10.1155/2022/2003411.

To improve the function of machine translation to adapt to global language translation, the work takes deep neural network (DNN) as the basic theory, carries out transfer learning and neural network translation modeling, and optimizes the word alignment function in machine translation performance. First, the work implements a deep learning translation network model for English translation. On this basis, the neural machine translation model is designed under transfer learning. The random shielding method is introduced to implement the language training model, and the machine translation is slightly adjusted as the goal of transfer learning, thereby improving the semantic understanding ability in translation performance. Meanwhile, the work design introduces the method of word alignment optimization and optimizes the performance of word alignment in the transformer system by using word corpus. The experimental results show that the proposed method reduces the average alignment error rate by 8.1%, 24.4%, and 22.1% in EnRo (English-Roman), EnGe (English-German), and EnFr (English-French), respectively, compared with the previous algorithms. Compared with the designed optimization method, the word alignment error rate is lower than that of traditional methods. The modeling and optimization method is feasible, which can effectively solve the problems of insufficient information utilization, large parameter scale, and difficult storage in the process of machine translation. Additionally, it provides a feasible idea and direction for the optimization and improvement in neural machine translation (NMT) system.
29. Fan, Yanyan, Yu Zhang, Baosu Guo, Xiaoyuan Luo, Qingjin Peng, and Zhenlin Jin. "A Hybrid Sparrow Search Algorithm of the Hyperparameter Optimization in Deep Learning". Mathematics 10, no. 16 (August 22, 2022): 3019. http://dx.doi.org/10.3390/math10163019.

Deep learning has been widely used in different fields such as computer vision and speech processing. The performance of deep learning algorithms is greatly affected by their hyperparameters. For complex machine learning models such as deep neural networks, it is difficult to determine their hyperparameters. In addition, existing hyperparameter optimization algorithms easily converge to a local optimal solution. This paper proposes a method for hyperparameter optimization that combines the Sparrow Search Algorithm and Particle Swarm Optimization, called the Hybrid Sparrow Search Algorithm. This method takes advantages of avoiding the local optimal solution in the Sparrow Search Algorithm and the search efficiency of Particle Swarm Optimization to achieve global optimization. Experiments verified the proposed algorithm in simple and complex networks. The results show that the Hybrid Sparrow Search Algorithm has the strong global search capability to avoid local optimal solutions and satisfactory search efficiency in both low and high-dimensional spaces. The proposed method provides a new solution for hyperparameter optimization problems in deep learning models.
30. Papakonstantinou, Charalampos, Ioannis Daramouskas, Vaios Lappas, Vassilis C. Moulianitis, and Vassilis Kostopoulos. "A Machine Learning Approach for Global Steering Control Moment Gyroscope Clusters". Aerospace 9, no. 3 (March 17, 2022): 164. http://dx.doi.org/10.3390/aerospace9030164.

This paper addresses the problem of singularity avoidance for a 4-Control Moment Gyroscope (CMG) pyramid cluster, as used for the attitude control of a satellite using machine learning (ML) techniques. A data-set, generated using a heuristic algorithm, relates the initial gimbal configuration and the desired maneuver—inputs—to a number of null space motions the gimbals have to execute—output. Two ML techniques—Deep Neural Network (DNN) and Random Forest Classifier (RFC)—are utilized to predict the required null motion for trajectories that are not included in the training set. The principal advantage of this approach is the exploitation of global information gathered from the whole maneuver compared to conventional steering laws that consider only some local information, near the current gimbal configuration for optimization and are prone to local extrema. The data-set generation and the predictions of the ML systems can be made offline, so no further calculations are needed on board, providing the possibility to inspect the way the system responds to any commanded maneuver before its execution. The RFC technique demonstrates enhanced accuracy for the test data compared to the DNN, validating that it is possible to correctly predict the null motion even for maneuvers that are not included in the training data.
31. Belmahdi, Brahim, Mohamed Louzazni, and Abdelmajid El Bouardi. "Comparative optimization of global solar radiation forecasting using machine learning and time series models". Environmental Science and Pollution Research 29, no. 10 (October 8, 2021): 14871–88. http://dx.doi.org/10.1007/s11356-021-16760-8.

32. Meldgaard, Søren A., Esben L. Kolsbjerg, and Bjørk Hammer. "Machine learning enhanced global optimization by clustering local environments to enable bundled atomic energies". Journal of Chemical Physics 149, no. 13 (October 7, 2018): 134104. http://dx.doi.org/10.1063/1.5048290.

33. Zhou, Shuchen, Waqas Jadoon, and Junaid Shuja. "Machine Learning-Based Offloading Strategy for Lightweight User Mobile Edge Computing Tasks". Complexity 2021 (June 8, 2021): 1–11. http://dx.doi.org/10.1155/2021/6455617.

This paper presents an in-depth study and analysis of offloading strategies for lightweight user mobile edge computing tasks using a machine learning approach. Firstly, a scheme for multiuser frequency division multiplexing approach in mobile edge computing offloading is proposed, and a mixed-integer nonlinear optimization model for energy consumption minimization is developed. Then, based on the analysis of the concave-convex properties of this optimization model, this paper uses variable relaxation and nonconvex optimization theory to transform the problem into a convex optimization problem. Subsequently, two optimization algorithms are designed: for the relaxation optimization problem, an iterative optimization algorithm based on the Lagrange dual method is designed; based on the branch-and-bound integer programming method, the iterative optimization algorithm is used as the basic algorithm for each step of the operation, and a global optimization algorithm is designed for transmitting power allocation, computational offloading strategy, dynamic adjustment of local computing power, and receiving energy channel selection strategy. Finally, the simulation results verify that the scheduling strategy of the frequency division technique proposed in this paper has good energy consumption minimization performance in mobile edge computation offloading. Our model is highly efficient and has a high degree of accuracy. The anomaly detection method based on a decision tree combined with deep learning proposed in this paper, unlike traditional IoT attack detection methods, overcomes the drawbacks of rule-based security detection methods and enables them to adapt to both established and unknown hostile environments. Experimental results show that the attack detection system based on the model achieves good detection results in the detection of multiple attacks.
34. Sun, Qian, William Ampomah, Junyu You, Martha Cather, and Robert Balch. "Practical CO2—WAG Field Operational Designs Using Hybrid Numerical-Machine-Learning Approaches". Energies 14, no. 4 (February 17, 2021): 1055. http://dx.doi.org/10.3390/en14041055.

Machine-learning technologies have exhibited robust competences in solving many petroleum engineering problems. The accurate predictivity and fast computational speed enable a large volume of time-consuming engineering processes such as history-matching and field development optimization. The Southwest Regional Partnership on Carbon Sequestration (SWP) project desires rigorous history-matching and multi-objective optimization processes, which fits the superiorities of the machine-learning approaches. Although the machine-learning proxy models are trained and validated before imposing to solve practical problems, the error margin would essentially introduce uncertainties to the results. In this paper, a hybrid numerical machine-learning workflow solving various optimization problems is presented. By coupling the expert machine-learning proxies with a global optimizer, the workflow successfully solves the history-matching and CO2 water alternative gas (WAG) design problem with low computational overheads. The history-matching work considers the heterogeneities of multiphase relative characteristics, and the CO2-WAG injection design takes multiple techno-economic objective functions into accounts. This work trained an expert response surface, a support vector machine, and a multi-layer neural network as proxy models to effectively learn the high-dimensional nonlinear data structure. The proposed workflow suggests revisiting the high-fidelity numerical simulator for validation purposes. The experience gained from this work would provide valuable guiding insights to similar CO2 enhanced oil recovery (EOR) projects.
35. Zong Chen, Dr Joy Iong, and Kong-Long Lai. "Machine Learning based Energy Management at Internet of Things Network Nodes". Journal of Trends in Computer Science and Smart Technology 2, no. 3 (July 17, 2020): 127–33. http://dx.doi.org/10.36548/jtcsst.2020.3.001.

Internet of Things networks comprising wireless sensors and controllers or IoT gateways offer extremely rich functionality. However, little attention is paid to the energy optimization of these nodes and to enabling lossless networks. Wireless sensor networks and their applications have gradually industrialized and scaled up with the development of artificial intelligence and the popularization of machine learning. Routing strategies with high energy consumption lead to uneven energy consumption across network nodes, and the routing protocols tend to get stuck in local optima. A smart ant colony optimization algorithm is used to obtain energy-balanced routing in the required regions. A neighbor selection strategy is proposed by combining the wireless sensor network nodes and the energy factors based on the smart ant colony optimization algorithm. Termination conditions for the algorithm as well as an adaptive perturbation strategy are established to improve the convergence speed and ant searchability, which makes it possible to find the global optimal solution. The performance, network life cycle, energy distribution, node equilibrium, network delay, and network energy consumption are improved using the proposed route planning methodology, with around 10% energy savings compared to existing state-of-the-art algorithms.
36. Roncaglia, Cesare, Daniele Rapetti, and Riccardo Ferrando. "Regression and clustering algorithms for AgCu nanoalloys: from mixing energy predictions to structure recognition". Physical Chemistry Chemical Physics 23, no. 40 (2021): 23325–35. http://dx.doi.org/10.1039/d1cp02143e.

The lowest-energy structures of AgCu nanoalloys are searched for by global optimization algorithms for sizes 100 and 200 atoms depending on composition, and their structures and mixing energy are analyzed by machine learning tools.
37. Li, Xiguang, Shoufei Han, Liang Zhao, Changqing Gong, and Xiaojing Liu. "New Dandelion Algorithm Optimizes Extreme Learning Machine for Biomedical Classification Problems". Computational Intelligence and Neuroscience 2017 (2017): 1–13. http://dx.doi.org/10.1155/2017/4523754.

Inspired by the behavior of dandelion sowing, a novel swarm intelligence algorithm, namely the dandelion algorithm (DA), is proposed in this paper for the global optimization of complex functions. In DA, the dandelion population is divided into two subpopulations, and the different subpopulations undergo different sowing behaviors. Moreover, an additional sowing method is designed to jump out of local optima. To demonstrate the validity of DA, we compare the proposed algorithm with other existing algorithms, including the bat algorithm, particle swarm optimization, and the enhanced fireworks algorithm. Simulations show that the proposed algorithm is clearly superior to the other algorithms. At the same time, the proposed algorithm can be applied to optimize the extreme learning machine (ELM) for biomedical classification problems, with considerable effect. Finally, we use different fusion methods to form different fusion classifiers, which achieve higher accuracy and better stability to some extent.
38. Gao, H., L. Jézéque, E. Cabrol, and B. Vitry. "Robust Design of Suspension System with Polynomial Chaos Expansion and Machine Learning". Science & Technique 19, no. 1 (February 5, 2020): 43–54. http://dx.doi.org/10.21122/2227-1031-2020-19-1-43-54.

During the early development of a new vehicle project, the uncertainty of parameters should be taken into consideration because the design may be perturbed due to real components’ complexity and manufacturing tolerances. Thus, the numerical validation of critical suspension specifications, such as durability and ride comfort should be carried out with random factors. In this article a multi-objective optimization methodology is proposed which involves the specification’s robustness as one of the optimization objectives. To predict the output variation from a given set of uncertain-but-bounded parameters proposed by optimization iterations, an adaptive chaos polynomial expansion (PCE) is applied to combine a local design of experiments with global response surfaces. Furthermore, in order to reduce the additional tests required for PCE construction, a machine learning algorithm based on inter-design correlation matrix firstly classifies the current design points through data mining and clustering. Then it learns how to predict the robustness of future optimized solutions with no extra simulations. At the end of the optimization, a Pareto front between specifications and their robustness can be obtained which represents the best compromises among objectives. The optimum set on the front is classified and can serve as a reference for future design. An example of a quarter car model has been tested for which the target is to optimize the global durability based on real road excitations. The statistical distribution of the parameters such as the trajectories and speeds is also taken into account. The result shows the natural incompatibility between the durability of the chassis and the robustness of this durability. Here the term robustness does not mean “strength”, but means that the performance is less sensitive to perturbations. In addition, a stochastic sampling verifies the good robustness prediction of PCE method and machine learning, based on a greatly reduced number of tests. This example demonstrates the effectiveness of the approach, in particular its ability to save computational costs for full vehicle simulation.
39. Ma, Yun Jie, Zi Hui Ren, and Ping Zhu. "A Layer Hybrid Intelligent Algorithm for Solving Resources Scheduling Problem". Applied Mechanics and Materials 644-650 (September 2014): 1506–9. http://dx.doi.org/10.4028/www.scientific.net/amm.644-650.1506.

A new hybrid intelligent algorithm is used to solve the resource scheduling problem. The algorithm combines Adaptive Particle Swarm Optimization (APSO), a Modified Genetic Algorithm (MGA), and a Machine Learning (ML) component: the MGA performs the global search, while APSO performs the local search. The selection process depends on the definiteness of the information in the ant algorithm. A machine learning principle is applied so that, after some iterations, part of the optimal solution is obtained, and the optimal solution is then searched for within each layer. Simulation results based on well-known benchmark suites from the literature show that the algorithm has better optimization performance.
40. Feigl, Moritz, Katharina Lebiedzinski, Mathew Herrnegger, and Karsten Schulz. "Machine-learning methods for stream water temperature prediction". Hydrology and Earth System Sciences 25, no. 5 (May 31, 2021): 2951–77. http://dx.doi.org/10.5194/hess-25-2951-2021.

Abstract. Water temperature in rivers is a crucial environmental factor with the ability to alter hydro-ecological as well as socio-economic conditions within a catchment. The development of modelling concepts for predicting river water temperature is and will be essential for effective integrated water management and the development of adaptation strategies to future global changes (e.g. climate change). This study tests the performance of six different machine-learning models: step-wise linear regression, random forest, eXtreme Gradient Boosting (XGBoost), feed-forward neural networks (FNNs), and two types of recurrent neural networks (RNNs). All models are applied using different data inputs for daily water temperature prediction in 10 Austrian catchments ranging from 200 to 96 000 km2 and exhibiting a wide range of physiographic characteristics. The evaluated input data sets include combinations of daily means of air temperature, runoff, precipitation and global radiation. Bayesian optimization is applied to optimize the hyperparameters of all applied machine-learning models. To make the results comparable to previous studies, two widely used benchmark models are applied additionally: linear regression and air2stream. With a mean root mean squared error (RMSE) of 0.55 ∘C, the tested models could significantly improve water temperature prediction compared to linear regression (1.55 ∘C) and air2stream (0.98 ∘C). In general, the results show a very similar performance of the tested machine-learning models, with a median RMSE difference of 0.08 ∘C between the models. From the six tested machine-learning models both FNNs and XGBoost performed best in 4 of the 10 catchments. RNNs are the best-performing models in the largest catchment, indicating that RNNs mainly perform well when processes with long-term dependencies are important. Furthermore, a wide range of performance was observed for different hyperparameter sets for the tested models, showing the importance of hyperparameter optimization. Especially the FNN model results showed an extremely large RMSE standard deviation of 1.60 ∘C due to the chosen hyperparameters. This study evaluates different sets of input variables, machine-learning models and training characteristics for daily stream water temperature prediction, acting as a basis for future development of regional multi-catchment water temperature prediction models. All preprocessing steps and models are implemented in the open-source R package wateRtemp to provide easy access to these modelling approaches and facilitate further research.
Los estilos APA, Harvard, Vancouver, ISO, etc.
41

Kawaguchi, Kenji, Yu Maruyama y Xiaoyu Zheng. "Global Continuous Optimization with Error Bound and Fast Convergence". Journal of Artificial Intelligence Research 56 (15 de junio de 2016): 153–95. http://dx.doi.org/10.1613/jair.4742.

Texto completo
Resumen
This paper considers global optimization with a black-box unknown objective function that can be non-convex and non-differentiable. Such a difficult optimization problem arises in many real-world applications, such as parameter tuning in machine learning, engineering design problems, and planning with a complex physics simulator. This paper proposes a new global optimization algorithm, called Locally Oriented Global Optimization (LOGO), which aims for both fast convergence in practice and a finite-time error bound in theory. The advantage and usage of the new algorithm are illustrated via theoretical analysis and an experiment conducted with 11 benchmark test functions. Further, we modify the LOGO algorithm to specifically solve a planning problem via policy search with continuous state/action space and long time horizon while maintaining its finite-time error bound. We apply the proposed planning method to accident management of a nuclear power plant. The result of the application study demonstrates the practical utility of our method.
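
LOGO itself carefully balances local and global divisions of the search space and comes with a finite-time error bound; the sketch below is only a naive greedy box-trisection search, included to illustrate the division-based black-box optimization setting rather than to reproduce the authors' algorithm. The test function, bounds and evaluation budget are arbitrary assumptions.

import numpy as np

def objective(x):
    # Illustrative non-convex, non-differentiable-looking test function (not from the paper).
    return float(np.sum(x ** 2) + 10 * np.sum(1 - np.cos(2 * np.pi * x)))

def greedy_trisection_search(lower, upper, budget=200):
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    center = (lower + upper) / 2
    boxes = [(objective(center), lower, upper, center)]
    evals = 1
    while evals < budget:
        # Greedily refine the box whose center currently has the best value.
        boxes.sort(key=lambda b: b[0])
        value, lo, hi, c = boxes.pop(0)
        d = int(np.argmax(hi - lo))            # split the longest side into thirds
        third = (hi[d] - lo[d]) / 3
        for k in range(3):
            new_lo, new_hi = lo.copy(), hi.copy()
            new_lo[d] = lo[d] + k * third
            new_hi[d] = lo[d] + (k + 1) * third
            new_c = (new_lo + new_hi) / 2
            if k == 1:                          # the middle child keeps the old center and value
                boxes.append((value, new_lo, new_hi, c))
            else:
                boxes.append((objective(new_c), new_lo, new_hi, new_c))
                evals += 1
    best = min(boxes, key=lambda b: b[0])
    return best[3], best[0]

x_best, f_best = greedy_trisection_search([-5, -5], [5, 5])
print("best point:", x_best, "value:", f_best)
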
Los estilos APA, Harvard, Vancouver, ISO, etc.
42

Han, Shoufei, Kun Zhu y Ran Wang. "Improvement of evolution process of dandelion algorithm with extreme learning machine for global optimization problems". Expert Systems with Applications 163 (enero de 2021): 113803. http://dx.doi.org/10.1016/j.eswa.2020.113803.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
43

Yu, Xi, Li Li, Xin He, Shengbo Chen y Lei Jiang. "Federated Learning Optimization Algorithm for Automatic Weight Optimal". Computational Intelligence and Neuroscience 2022 (9 de noviembre de 2022): 1–19. http://dx.doi.org/10.1155/2022/8342638.

Texto completo
Resumen
Federated learning (FL), a distributed machine-learning framework, can effectively protect data privacy and security and has been widely applied in a variety of fields in recent years. However, the system heterogeneity and statistical heterogeneity of FL pose serious obstacles to the quality of the global model. This study investigates server and client resource allocation in the context of FL system resource efficiency and proposes the FedAwo optimization algorithm. The approach combines adaptive learning with federated learning and makes full use of the computing resources of the server to calculate the optimal weight value corresponding to each client. The global model is aggregated according to these optimal weight values, which significantly reduces the detrimental effects of statistical and system heterogeneity. In traditional FL, many client training runs converge earlier than the specified number of local epochs; nevertheless, the clients must still train for the full number of epochs, which wastes a large amount of client computation. To further lower the training cost, the augmented FedAwo* algorithm is proposed. FedAwo* takes the heterogeneity of clients into account and sets a criterion for local convergence: when a client's local model reaches the criterion, it is returned to the server immediately, so the number of local epochs is adjusted adaptively. Extensive experiments on the MNIST and Fashion-MNIST public datasets show that the global model converges faster and reaches higher accuracy with FedAwo and FedAwo* than with the FedAvg, FedProx, and FedAdp baseline algorithms.
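
The core mechanism, server-side aggregation of client models with per-client weights plus early local stopping, can be sketched in a few lines of numpy. The weight rule below (inverse local loss) and the convergence threshold are placeholders chosen for illustration and are not the FedAwo weighting scheme.

import numpy as np

rng = np.random.default_rng(0)

def local_train(global_w, data, max_epochs=50, tol=1e-4, lr=0.1):
    """Plain linear-regression gradient descent on one client; stops early once the loss plateaus."""
    X, y = data
    w = global_w.copy()
    prev_loss = np.inf
    for epoch in range(max_epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
        loss = float(np.mean((X @ w - y) ** 2))
        if prev_loss - loss < tol:          # local convergence: return to the server early
            break
        prev_loss = loss
    return w, loss, epoch + 1

# Synthetic heterogeneous clients: same true weights, different noise levels and sizes.
true_w = np.array([2.0, -1.0, 0.5])
clients = []
for noise, n in [(0.1, 200), (1.0, 50), (0.3, 120)]:
    X = rng.normal(size=(n, 3))
    y = X @ true_w + rng.normal(scale=noise, size=n)
    clients.append((X, y))

global_w = np.zeros(3)
for rnd in range(5):                                   # federated rounds
    results = [local_train(global_w, c) for c in clients]
    losses = np.array([r[1] for r in results])
    epochs_used = [r[2] for r in results]
    weights = 1 / (losses + 1e-8)                      # assumed rule: weight clients by model quality
    weights /= weights.sum()
    global_w = sum(w * r[0] for w, r in zip(weights, results))
    print(f"round {rnd}: epochs={epochs_used}, weights={np.round(weights, 3)}, "
          f"global_w={np.round(global_w, 3)}")
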
Los estilos APA, Harvard, Vancouver, ISO, etc.
44

Alqahtani, Abdulwahab, Xupeng He, Bicheng Yan y Hussein Hoteit. "Uncertainty Analysis of CO2 Storage in Deep Saline Aquifers Using Machine Learning and Bayesian Optimization". Energies 16, n.º 4 (8 de febrero de 2023): 1684. http://dx.doi.org/10.3390/en16041684.

Texto completo
Resumen
Geological CO2 sequestration (GCS) has been proposed as an effective approach to mitigate carbon emissions in the atmosphere. Uncertainty and sensitivity analysis of the fate of CO2 dynamics and storage are essential aspects of large-scale reservoir simulations. This work presents a rigorous machine learning-assisted (ML) workflow for the uncertainty and global sensitivity analysis of CO2 storage prediction in deep saline aquifers. The proposed workflow comprises three main steps: The first step concerns dataset generation, in which we identify the uncertainty parameters impacting CO2 flow and transport and then determine their corresponding ranges and distributions. The training data samples are generated by combining the Latin Hypercube Sampling (LHS) technique with high-resolution simulations. The second step involves ML model development based on a data-driven ML model, which is generated to map the nonlinear relationship between the input parameters and corresponding output interests from the previous step. We show that using Bayesian optimization significantly accelerates the tuning process of hyper-parameters, which is vastly superior to a traditional trial-and-error analysis. In the third step, uncertainty and global sensitivity analysis are performed using Monte Carlo simulations applied to the optimized surrogate. This step is performed to explore the time-dependent uncertainty propagation of model outputs. The key uncertainty parameters are then identified by calculating the Sobol indices based on the global sensitivity analysis. The proposed workflow is accurate and efficient and could be readily implemented in field-scale CO2 sequestration in deep saline aquifers.
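
The core of the workflow (Latin hypercube sampling of the uncertain inputs, a data-driven surrogate of the simulator, Monte Carlo propagation on the surrogate) can be sketched as follows. The toy simulator, parameter ranges and sample sizes are illustrative stand-ins for the reservoir model, and the Bayesian hyperparameter tuning and Sobol-index steps are omitted here.

import numpy as np
from scipy.stats import qmc
from sklearn.ensemble import GradientBoostingRegressor

def expensive_simulator(perm, poro, inj_rate):
    """Stand-in for a high-resolution CO2 simulation run (purely illustrative)."""
    return np.log(perm) * poro + 0.05 * inj_rate + 0.1 * np.sin(perm * poro)

# Step 1: Latin hypercube design over the uncertain parameters.
bounds_lo, bounds_hi = [10.0, 0.05, 1.0], [1000.0, 0.35, 10.0]   # permeability (mD), porosity, rate
sampler = qmc.LatinHypercube(d=3, seed=0)
X_train = qmc.scale(sampler.random(n=100), bounds_lo, bounds_hi)
y_train = np.array([expensive_simulator(*x) for x in X_train])

# Step 2: fit a cheap data-driven surrogate of the simulator.
surrogate = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

# Step 3: Monte Carlo on the surrogate to propagate the input uncertainty.
X_mc = qmc.scale(qmc.LatinHypercube(d=3, seed=1).random(n=20000), bounds_lo, bounds_hi)
y_mc = surrogate.predict(X_mc)
print(f"output mean={y_mc.mean():.3f}, std={y_mc.std():.3f}, "
      f"P10={np.percentile(y_mc, 10):.3f}, P90={np.percentile(y_mc, 90):.3f}")
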
Los estilos APA, Harvard, Vancouver, ISO, etc.
45

Ren, Bin y Huanfei Ma. "Global optimization of hyper-parameters in reservoir computing". Electronic Research Archive 30, n.º 7 (2022): 2719–29. http://dx.doi.org/10.3934/era.2022139.

Texto completo
Resumen
Reservoir computing (RC) has emerged as a powerful and efficient machine learning tool, especially for reconstructing many complex systems, even chaotic ones, based only on observational data. Although fruitful advances have been extensively studied, how to capture the art of hyper-parameter settings to construct an efficient RC is still a long-standing and urgent problem. In contrast to the local manner of many works, which aim to optimize one hyper-parameter while keeping others constant, in this work we propose a global optimization framework using a simulated annealing technique to find the optimal architecture of the randomly generated networks for a successful RC. Based on the optimized results, we further study several important properties of some hyper-parameters. In particular, we find that the globally optimized reservoir network has a largest singular value significantly larger than one, which is contrary to the sufficient condition reported in the literature to guarantee the echo state property. We further reveal the mechanism of this phenomenon with a simplified model and the theory of nonlinear dynamical systems.
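
A minimal version of the hyper-parameter search described here is a simulated-annealing loop over a black-box validation error. In the sketch below the two tuned quantities (spectral radius and leak rate) and the error surface are stand-ins; in the paper the objective would be the reconstruction error of a trained reservoir.

import numpy as np

rng = np.random.default_rng(0)

def validation_error(spectral_radius, leak_rate):
    """Stand-in for training a reservoir and returning its validation error (assumed surface)."""
    return (spectral_radius - 1.2) ** 2 + (leak_rate - 0.3) ** 2 \
        + 0.05 * np.sin(10 * spectral_radius)

bounds = np.array([[0.1, 2.0],      # spectral radius
                   [0.0, 1.0]])     # leak rate

x = np.array([1.0, 0.5])            # initial hyper-parameters
f = validation_error(*x)
best_x, best_f = x.copy(), f
T = 1.0                             # initial temperature

for step in range(500):
    candidate = np.clip(x + rng.normal(0, 0.1, size=2), bounds[:, 0], bounds[:, 1])
    f_cand = validation_error(*candidate)
    # Accept downhill moves always, uphill moves with Boltzmann probability.
    if f_cand < f or rng.random() < np.exp(-(f_cand - f) / T):
        x, f = candidate, f_cand
        if f < best_f:
            best_x, best_f = x.copy(), f
    T *= 0.99                        # geometric cooling schedule

print("best hyper-parameters:", best_x, "validation error:", best_f)
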
Los estilos APA, Harvard, Vancouver, ISO, etc.
46

Li, Shuang y Qiuwei Li. "Local and Global Convergence of General Burer-Monteiro Tensor Optimizations". Proceedings of the AAAI Conference on Artificial Intelligence 36, n.º 9 (28 de junio de 2022): 10266–74. http://dx.doi.org/10.1609/aaai.v36i9.21267.

Texto completo
Resumen
Tensor optimization is crucial to massive machine learning and signal processing tasks. In this paper, we consider tensor optimization with a convex and well-conditioned objective function and reformulate it into a nonconvex optimization using the Burer-Monteiro type parameterization. We analyze the local convergence of applying vanilla gradient descent to the factored formulation and establish a local regularity condition under mild assumptions. We also provide a linear convergence analysis of the gradient descent algorithm started in a neighborhood of the true tensor factors. Complementary to the local analysis, this work also characterizes the global geometry of the best rank-one tensor approximation problem and demonstrates that for orthogonally decomposable tensors the problem has no spurious local minima and all saddle points are strict except for the one at zero which is a third-order saddle point.
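
For the rank-one approximation problem discussed in this abstract, the Burer-Monteiro parameterization replaces the tensor variable by its factors and runs gradient descent on them. The numpy sketch below minimizes ||T - a⊗b⊗c||² over the factors; the tensor size, step size and initialization scale are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def unit(v):
    return v / np.linalg.norm(v)

# Ground truth: a rank-one tensor of weight 3 plus a little noise.
a_true, b_true, c_true = unit(rng.normal(size=4)), unit(rng.normal(size=5)), unit(rng.normal(size=6))
T = 3.0 * np.einsum('i,j,k->ijk', a_true, b_true, c_true) + 0.01 * rng.normal(size=(4, 5, 6))

# Burer-Monteiro parameterization: optimize the factors (a, b, c) directly with gradient descent.
a = 0.3 * rng.normal(size=4)
b = 0.3 * rng.normal(size=5)
c = 0.3 * rng.normal(size=6)
lr = 0.02

for it in range(5000):
    R = T - np.einsum('i,j,k->ijk', a, b, c)          # residual tensor
    grad_a = -2 * np.einsum('ijk,j,k->i', R, b, c)    # gradient of ||R||_F^2 w.r.t. a
    grad_b = -2 * np.einsum('ijk,i,k->j', R, a, c)
    grad_c = -2 * np.einsum('ijk,i,j->k', R, a, b)
    a, b, c = a - lr * grad_a, b - lr * grad_b, c - lr * grad_c

approx = np.einsum('i,j,k->ijk', a, b, c)
print("relative error:", np.linalg.norm(T - approx) / np.linalg.norm(T))
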
Los estilos APA, Harvard, Vancouver, ISO, etc.
47

Al-Mashhadani, Firas, Ibrahim Al-Jadir y Qusay Alsaffar. "An enhanced krill herd optimization technique used for classification problem". Przegląd Naukowy Inżynieria i Kształtowanie Środowiska 30, n.º 2 (5 de julio de 2021): 354–64. http://dx.doi.org/10.22630/pniks.2021.30.2.30.

Texto completo
Resumen
This paper aims to improve the optimization of the classification problem in machine learning. The enhanced krill herd (EKH) algorithm, used as a global search optimization method, locates the best representation of the solution (the krill individual), while simulated annealing (SA) is used to modify the generated krill individuals (each individual represents a set of bits). The test results showed that the KH approach outperformed other methods on both the external and internal evaluation measures.
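
The two ingredients named in the abstract, bit-string individuals and simulated-annealing modification, can be illustrated with a plain SA search over feature-selection bit strings. This is not the EKH algorithm itself, and the k-NN cross-validation score used as the fitness function is an assumed objective.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)
n_features = X.shape[1]

def fitness(bits):
    """Classification accuracy using only the selected features (assumed objective)."""
    if bits.sum() == 0:
        return 0.0
    scores = cross_val_score(KNeighborsClassifier(), X[:, bits.astype(bool)], y, cv=3)
    return float(scores.mean())

bits = rng.integers(0, 2, size=n_features)     # each individual is a set of bits
score = fitness(bits)
best_bits, best_score = bits.copy(), score
T = 0.05

for step in range(100):
    candidate = bits.copy()
    flip = rng.integers(n_features)            # flip one randomly chosen bit
    candidate[flip] ^= 1
    cand_score = fitness(candidate)
    # SA acceptance: always keep improvements, sometimes keep worse solutions.
    if cand_score > score or rng.random() < np.exp((cand_score - score) / T):
        bits, score = candidate, cand_score
        if score > best_score:
            best_bits, best_score = bits.copy(), score
    T *= 0.97

print("selected features:", int(best_bits.sum()), "accuracy:", round(best_score, 4))
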
Los estilos APA, Harvard, Vancouver, ISO, etc.
48

Liang, Chunyu, Xin Xu, Heping Chen, Wensheng Wang, Kunkun Zheng, Guojin Tan, Zhengwei Gu y Hao Zhang. "Machine Learning Approach to Develop a Novel Multi-Objective Optimization Method for Pavement Material Proportion". Applied Sciences 11, n.º 2 (17 de enero de 2021): 835. http://dx.doi.org/10.3390/app11020835.

Texto completo
Resumen
Asphalt mixture proportion design is one of the most important steps in asphalt pavement design and application. This study proposes a novel multi-objective particle swarm optimization (MOPSO) algorithm employing the Gaussian process regression (GPR)-based machine learning (ML) method for multi-variable, multi-level optimization problems with multiple constraints. First, the GPR-based ML method is proposed to model the objective and constraint functions without the explicit relationships between variables and objectives. In the optimization step, the metaheuristic algorithm based on adaptive weight multi-objective particle swarm optimization (AWMOPSO) is used to achieve the global optimal solution, which is very efficient for the objectives and constraints without mathematical relationships. The results showed that the optimal GPR model could describe the relationship between variables and objectives well in terms of root-mean-square error (RMSE) and R2. After the optimization by the proposed GPR-AWMOPSO algorithm, the comprehensive pavement performances were enhanced in terms of the permanent deformation resistance at high temperature, crack resistance at low temperature as well as moisture stability. Therefore, the proposed GPR-AWMOPSO algorithm is the best option and efficient for maximizing the performances of composite modified asphalt mixture. The GPR-AWMOPSO algorithm has advantages of less computational time and fewer samples, higher accuracy, etc. over traditional laboratory-based experimental methods, which can serve as guidance for the proportion optimization design of asphalt pavement.
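
A simplified single-objective version of the GPR-plus-swarm idea is sketched below: two Gaussian-process surrogates are fitted to assumed performance data and a standard PSO then minimizes a fixed weighted sum of their predictions. The training data, bounds and equal weighting are illustrative; the paper uses an adaptive-weight multi-objective PSO rather than a weighted sum.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

# Assumed training data: two mixture-proportion variables -> two performance indices.
X_train = rng.uniform(0.0, 1.0, size=(40, 2))
rutting = 1.0 - 0.8 * X_train[:, 0] + 0.3 * X_train[:, 1] + 0.05 * rng.normal(size=40)
cracking = 0.6 + 0.5 * X_train[:, 0] - 0.7 * X_train[:, 1] + 0.05 * rng.normal(size=40)

kernel = ConstantKernel() * RBF(length_scale=0.3)
gpr_rut = GaussianProcessRegressor(kernel=kernel, alpha=1e-2, normalize_y=True).fit(X_train, rutting)
gpr_crack = GaussianProcessRegressor(kernel=kernel, alpha=1e-2, normalize_y=True).fit(X_train, cracking)

def cost(points):
    """Weighted sum of the two surrogate-predicted objectives (both to be minimized)."""
    return 0.5 * gpr_rut.predict(points) + 0.5 * gpr_crack.predict(points)

# Standard PSO over the surrogate (cheap to evaluate, so many iterations are affordable).
n, iters = 30, 100
pos = rng.uniform(0.0, 1.0, size=(n, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), cost(pos)
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n, 2))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 1.0)
    vals = cost(pos)
    better = vals < pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("optimal proportions (surrogate):", np.round(gbest, 3), "cost:", float(cost(gbest[None])))
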
Los estilos APA, Harvard, Vancouver, ISO, etc.
49

Arrinda, Mikel, Gorka Vertiz, Denis Sanchéz, Aitor Makibar y Haritz Macicior. "Surrogate Model of the Optimum Global Battery Pack Thermal Management System Control". Energies 15, n.º 5 (24 de febrero de 2022): 1695. http://dx.doi.org/10.3390/en15051695.

Texto completo
Resumen
The control of the battery-thermal-management-system (BTMS) is key to prevent catastrophic events and to ensure long lifespans of the batteries. Nonetheless, to achieve a high-quality control of BTMS, several technical challenges must be faced: safe and homogeneous control in a multi element system with just one actuator, limited computational resources, and energy consumption restrictions. To address those challenges and restrictions, we propose a surrogate BTMS control model consisting of a classification machine-learning model that defines the optimum cooling-heating power of the actuator according to several temperature measurements. The labelled data required to build the control model is generated from a simulation environment that integrates model-predictive-control and linear optimization concepts. As a result, a controller that optimally controls the actuator with multi-input temperature signals in a multi-objective optimization problem is constructed. This paper benchmarks the response of the proposal using different classification machine-learning models and compares them with the responses of a state diagram controller and a PID controller. The results show that the proposed surrogate model has 35% less energy consumption than the evaluated state diagram, and 60% less energy consumption than a traditional PID controller, while dealing with multi-input and multi-objective systems.
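
The surrogate controller amounts to a classifier that maps temperature measurements to a discrete actuator command, trained on labels produced offline by the optimization-based controller. In the sketch below the labelling rule is a hand-written stand-in for that offline stage, and the temperature ranges and power levels are assumptions.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic training set: 6 cell-temperature readings per sample.
n_samples, n_cells = 5000, 6
temps = rng.uniform(10.0, 45.0, size=(n_samples, n_cells))

def optimal_power_level(t):
    """Stand-in for the offline MPC/optimization stage that labels each state
    with the optimum actuator command (0 = off, 1/2 = cooling, 3 = heating)."""
    if t.max() > 40.0:
        return 2          # strong cooling
    if t.max() > 32.0:
        return 1          # mild cooling
    if t.min() < 15.0:
        return 3          # heating
    return 0              # actuator off

labels = np.array([optimal_power_level(t) for t in temps])

X_tr, X_te, y_tr, y_te = train_test_split(temps, labels, test_size=0.2, random_state=0)
controller = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("surrogate controller accuracy:", round(accuracy_score(y_te, controller.predict(X_te)), 3))

# Online use: one prediction per control step from the latest temperature vector.
current_temps = np.array([[21.0, 22.5, 23.1, 24.8, 36.2, 30.4]])
print("commanded power level:", int(controller.predict(current_temps)[0]))
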
Los estilos APA, Harvard, Vancouver, ISO, etc.
50

Feng, Yi, Mengru Liu, Yuqian Zhang y Jinglin Wang. "A Dynamic Opposite Learning Assisted Grasshopper Optimization Algorithm for the Flexible Job Scheduling Problem". Complexity 2020 (30 de diciembre de 2020): 1–19. http://dx.doi.org/10.1155/2020/8870783.

Texto completo
Resumen
Job shop scheduling problem (JSP) is one of the most difficult optimization problems in the manufacturing industry, and the flexible job shop scheduling problem (FJSP) is an extension of the classical JSP that further challenges algorithm performance. In FJSP, a machine must be selected for each process from a given set, which introduces another decision element within the job path and makes FJSP more difficult than traditional JSP. In this paper, a variant of the grasshopper optimization algorithm (GOA) named the dynamic opposite learning assisted GOA (DOLGOA) is proposed to solve FJSP. The recently proposed dynamic opposite learning (DOL) strategy adopts an asymmetric search space to improve the exploitation ability of the algorithm and increase the possibility of finding the global optimum. Various popular benchmarks from CEC 2014 and FJSP are used to evaluate the performance of DOLGOA. Numerical results, with comparisons against other classic algorithms, show that DOLGOA achieves clear improvements when solving global optimization problems and performs well on FJSP.
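
The dynamic opposite learning step can be written in a few lines. One commonly used formulation (an assumption here; the paper should be consulted for its exact update and for how it is embedded in GOA) perturbs each individual toward a randomly scaled opposite point and keeps the better of the two. The sphere test function and population settings below are illustrative.

import numpy as np

rng = np.random.default_rng(0)

def dynamic_opposite(population, lower, upper, w=1.0):
    """Dynamic opposite learning step (one common formulation, assumed here):
    move each individual toward a randomly scaled opposite point, then clip to bounds."""
    r1 = rng.random(population.shape)
    r2 = rng.random(population.shape)
    opposite = lower + upper - population                 # classical opposite point
    candidates = population + w * r1 * (r2 * opposite - population)
    return np.clip(candidates, lower, upper)

def sphere(x):
    return float(np.sum(x ** 2))

lower, upper = -5.0, 5.0
pop = rng.uniform(lower, upper, size=(20, 4))

for gen in range(50):
    # Greedy selection between each individual and its dynamic-opposite candidate.
    cand = dynamic_opposite(pop, lower, upper)
    for i in range(len(pop)):
        if sphere(cand[i]) < sphere(pop[i]):
            pop[i] = cand[i]

best = min(pop, key=sphere)
print("best individual:", np.round(best, 4), "fitness:", sphere(best))
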
Los estilos APA, Harvard, Vancouver, ISO, etc.