Academic literature on the topic "Machine learning, Global Optimization"
Create a precise citation in APA, MLA, Chicago, Harvard, and other styles
Consult the topical lists of articles, books, theses, conference proceedings, and other academic sources on the topic "Machine learning, Global Optimization".
Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Journal articles on the topic "Machine learning, Global Optimization"
Cassioli, A., D. Di Lorenzo, M. Locatelli, F. Schoen, and M. Sciandrone. "Machine learning for global optimization". Computational Optimization and Applications 51, no. 1 (May 5, 2010): 279–303. http://dx.doi.org/10.1007/s10589-010-9330-x.
Kudyshev, Zhaxylyk A., Alexander V. Kildishev, Vladimir M. Shalaev, and Alexandra Boltasseva. "Machine learning–assisted global optimization of photonic devices". Nanophotonics 10, no. 1 (October 28, 2020): 371–83. http://dx.doi.org/10.1515/nanoph-2020-0376.
Abdul Salam, Mustafa, Ahmad Taher Azar, and Rana Hussien. "Swarm-Based Extreme Learning Machine Models for Global Optimization". Computers, Materials & Continua 70, no. 3 (2022): 6339–63. http://dx.doi.org/10.32604/cmc.2022.020583.
Takamatsu, Ryosuke, and Wataru Yamazaki. "Global topology optimization of supersonic airfoil using machine learning technologies". Proceedings of The Computational Mechanics Conference 2021.34 (2021): 112. http://dx.doi.org/10.1299/jsmecmd.2021.34.112.
Tsoulos, Ioannis G., Alexandros Tzallas, Evangelos Karvounis, and Dimitrios Tsalikakis. "NeuralMinimizer: A Novel Method for Global Optimization". Information 14, no. 2 (January 25, 2023): 66. http://dx.doi.org/10.3390/info14020066.
Honda, M., and E. Narita. "Machine-learning assisted steady-state profile predictions using global optimization techniques". Physics of Plasmas 26, no. 10 (October 2019): 102307. http://dx.doi.org/10.1063/1.5117846.
Wu, Shaohua, Yong Hu, Wei Wang, Xinyong Feng, and Wanneng Shu. "Application of Global Optimization Methods for Feature Selection and Machine Learning". Mathematical Problems in Engineering 2013 (2013): 1–8. http://dx.doi.org/10.1155/2013/241517.
Ma, Sicong, Cheng Shang, Chuan-Ming Wang, and Zhi-Pan Liu. "Thermodynamic rules for zeolite formation from machine learning based global optimization". Chemical Science 11, no. 37 (2020): 10113–18. http://dx.doi.org/10.1039/d0sc03918g.
Huang, Si-Da, Cheng Shang, Pei-Lin Kang, and Zhi-Pan Liu. "Atomic structure of boron resolved using machine learning and global sampling". Chemical Science 9, no. 46 (2018): 8644–55. http://dx.doi.org/10.1039/c8sc03427c.
Barkalov, Konstantin, Ilya Lebedev, and Evgeny Kozinov. "Acceleration of Global Optimization Algorithm by Detecting Local Extrema Based on Machine Learning". Entropy 23, no. 10 (September 28, 2021): 1272. http://dx.doi.org/10.3390/e23101272.
Theses on the topic "Machine learning, Global Optimization"
Nowak, Hans, II (Hans Antoon). "Strategic capacity planning using data science, optimization, and machine learning". Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/126914.
Thesis: S.M., Massachusetts Institute of Technology, Department of Mechanical Engineering, in conjunction with the Leaders for Global Operations Program at MIT, May 2020
Cataloged from the official PDF of thesis.
Includes bibliographical references (pages 101-104).
Raytheon's Circuit Card Assembly (CCA) factory in Andover, MA is Raytheon's largest factory and the largest Department of Defense (DOD) CCA manufacturer in the world. With over 500 operations, it manufactures over 7000 unique parts with a high degree of complexity and varying levels of demand. Recently, the factory has seen an increase in demand, making it essential to continuously analyze factory capacity and plan strategically for future operations. This study seeks to develop a sustainable strategic capacity optimization model and a capacity visualization tool that integrate demand data with historical manufacturing data. Using automated data-mining algorithms over factory data sources, capacity utilization and overall equipment effectiveness (OEE) for factory operations are evaluated. Machine learning methods are then assessed to obtain an accurate estimate of cycle time (CT) throughout the factory. Finally, a mixed-integer nonlinear program (MINLP) integrates the capacity utilization framework and the machine learning predictions to compute the optimal strategic capacity planning decisions. Capacity utilization and OEE models are shown to be obtainable through automated data-mining algorithms. The machine learning models achieve a mean absolute error (MAE) of 1.55 on predictions for new data, which is 76.3% lower than the current CT prediction error. Finally, the MINLP is solved to optimality within a tolerance of 1.00e-04 and generates actionable resource and production decisions.
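To make the optimization step in this abstract concrete, here is a deliberately miniature sketch of the idea: integer "add machine" decisions are chosen so that predicted workload (demand times an ML-predicted cycle time) fits capacity at minimum cost. All operation names, numbers, and costs below are invented for illustration, and brute-force enumeration stands in for a real MINLP solver.

```python
from itertools import product

# Invented predicted cycle times (hours/part), e.g. coming from an ML model.
CT_PRED = {"smt": 2.0, "test": 3.5}
DEMAND = {"smt": 900, "test": 500}          # parts per period
BASE_MACHINES = {"smt": 4, "test": 3}
HOURS_PER_MACHINE = 400                     # available hours per machine per period
MACHINE_COST = {"smt": 10.0, "test": 15.0}  # cost of adding one machine

def feasible(extra):
    # Required processing hours must fit within total machine hours.
    return all(
        DEMAND[op] * CT_PRED[op] <= (BASE_MACHINES[op] + extra[op]) * HOURS_PER_MACHINE
        for op in CT_PRED
    )

def optimize(max_extra=5):
    # Enumerate all integer machine-addition decisions; keep the cheapest feasible one.
    best, best_cost = None, float("inf")
    for counts in product(range(max_extra + 1), repeat=len(CT_PRED)):
        extra = dict(zip(sorted(CT_PRED), counts))
        cost = sum(MACHINE_COST[op] * c for op, c in extra.items())
        if cost < best_cost and feasible(extra):
            best, best_cost = extra, cost
    return best, best_cost

plan, cost = optimize()  # e.g. {"smt": 1, "test": 2} at cost 40.0
```

A production-scale model with hundreds of operations and nonlinear terms would use a dedicated MINLP solver rather than enumeration, but the structure of the decision is the same.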
by Hans Nowak II.
M.B.A.
S.M.
M.B.A. Massachusetts Institute of Technology, Sloan School of Management
S.M. Massachusetts Institute of Technology, Department of Mechanical Engineering
Veluscek, Marco. "Global supply chain optimization: a machine learning perspective to improve Caterpillar's logistics operations". Thesis, Brunel University, 2016. http://bura.brunel.ac.uk/handle/2438/13050.
Schweidtmann, Artur M. [Verfasser], Alexander [Akademischer Betreuer] Mitsos, and Andreas [Akademischer Betreuer] Schuppert. "Global optimization of processes through machine learning / Artur M. Schweidtmann ; Alexander Mitsos, Andreas Schuppert". Aachen: Universitätsbibliothek der RWTH Aachen, 2021. http://d-nb.info/1240690924/34.
Taheri, Mehdi. "Machine Learning from Computer Simulations with Applications in Rail Vehicle Dynamics and System Identification". Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/81417.
Texto completoPh. D.
Gabere, Musa Nur. "Prediction of antimicrobial peptides using hyperparameter optimized support vector machines". Thesis, University of the Western Cape, 2011. http://etd.uwc.ac.za/index.php?module=etd&action=viewtitle&id=gen8Srv25Nme4_7345_1330684697.
Antimicrobial peptides (AMPs) play a key role in the innate immune response. They can be found ubiquitously in a wide range of eukaryotes including mammals, amphibians, insects, plants, and protozoa. In lower organisms, AMPs function merely as antibiotics by permeabilizing cell membranes and lysing invading microbes. Predicting antimicrobial peptides is important because the experimental methods used to characterize AMPs are costly, time-consuming, and resource-intensive, and the identification of AMPs in insects can serve as a template for the design of novel antibiotics. To this end, data on antimicrobial peptides are first extracted from UniProt, manually curated, and stored in a centralized database called the Dragon Antimicrobial Peptide Database (DAMPD). Secondly, based on the curated data, models to predict antimicrobial peptides are created using support vector machines with optimized hyperparameters. In particular, global optimization methods such as grid search, pattern search, and derivative-free methods are used to optimize the SVM hyperparameters. These models are useful for characterizing unknown antimicrobial peptides. Finally, a web server is created that will be used to predict antimicrobial peptides in haematophagous insects such as Glossina morsitans and Anopheles gambiae.
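The grid-search step described in this abstract can be sketched in a few lines. In the real workflow, each (C, gamma) pair would be scored by cross-validating an SVM on the curated AMP data; here `validation_error` is a made-up stand-in whose optimum sits at C = 10, gamma = 0.1, purely to show the search loop.

```python
import math
from itertools import product

def validation_error(C, gamma):
    # Toy surrogate for a cross-validation error surface; invented for
    # illustration, with a unique minimum at C = 10, gamma = 0.1.
    return (math.log10(C) - 1.0) ** 2 + (math.log10(gamma) + 1.0) ** 2

def grid_search(Cs, gammas):
    # Exhaustively evaluate every hyperparameter combination on the grid
    # and return the one with the lowest validation error.
    return min(product(Cs, gammas), key=lambda cg: validation_error(*cg))

best_C, best_gamma = grid_search([0.1, 1, 10, 100], [0.001, 0.01, 0.1, 1])
```

Pattern search and derivative-free methods, also mentioned in the abstract, replace the exhaustive grid with adaptive sampling of the same error surface.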
Belkhir, Nacim. "Per Instance Algorithm Configuration for Continuous Black Box Optimization". Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS455/document.
This PhD thesis focuses on automated algorithm configuration, which aims at finding the best parameter setting for a given problem or class of problems. The algorithm configuration problem thus amounts to a meta-optimization problem in the space of parameters, whose meta-objective is the performance measure of the algorithm at hand under a given parameter configuration. In the continuous domain, however, such a method can only be assessed empirically, at the cost of running the algorithm on some problem instances. More recent approaches rely on a description of problems in some feature space and try to learn a mapping from this feature space onto the space of parameter configurations of the algorithm at hand. Along these lines, this PhD thesis focuses on Per Instance Algorithm Configuration (PIAC) for solving continuous black-box optimization problems, where only a limited computational budget is available. We first survey evolutionary algorithms for continuous optimization, with a focus on the two algorithms that we have used as target algorithms for PIAC: DE and CMA-ES. Next, we review the state of the art of algorithm configuration approaches and the different features that have been proposed in the literature to describe continuous black-box optimization problems. We then introduce a general methodology for empirically studying PIAC in the continuous domain, so that all the components of PIAC can be explored in real-world conditions. To this end, we also introduce a new continuous black-box test bench, distinct from the well-known BBOB benchmark, composed of several multi-dimensional test functions with different problem properties gathered from the literature. The methodology is finally applied to two EAs. First, we use Differential Evolution as the target algorithm and explore all the components of PIAC in order to empirically assess the best ones.
Second, based on the results for DE, we empirically investigate PIAC with the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) as the target algorithm. Both use cases empirically validate the proposed methodology on the new black-box test bench for dimensions up to 100.
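A toy version of the PIAC mapping discussed in this abstract: each problem instance is summarized by a feature vector, and a new instance receives the parameter configuration that performed best on the nearest known instance (1-nearest-neighbor in feature space, one of the simplest possible mappings). The feature values and the DE parameter settings (F, CR) below are invented for illustration.

```python
import math

# (feature vector of a seen instance, best known DE configuration for it);
# both columns are made-up illustrative values.
train = [
    ((0.1, 0.9), {"F": 0.5, "CR": 0.9}),
    ((0.8, 0.2), {"F": 0.9, "CR": 0.1}),
    ((0.5, 0.5), {"F": 0.7, "CR": 0.5}),
]

def predict_config(features):
    # Return the configuration of the nearest training instance.
    _, config = min(train, key=lambda fc: math.dist(fc[0], features))
    return config
```

In the thesis itself the mapping is learned with regression models over far richer feature sets, but the input/output contract — features in, configuration out — is the same.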
Liu, Liu. "Stochastic Optimization in Machine Learning". Thesis, The University of Sydney, 2019. http://hdl.handle.net/2123/19982.
Leblond, Rémi. "Asynchronous optimization for machine learning". Thesis, Paris Sciences et Lettres (ComUE), 2018. http://www.theses.fr/2018PSLEE057/document.
The impressive breakthroughs of the last two decades in the field of machine learning can be in large part attributed to the explosion of computing power and available data. These two limiting factors have been replaced by a new bottleneck: algorithms. The focus of this thesis is thus on introducing novel methods that can take advantage of high data quantity and computing power. We present two independent contributions. First, we develop and analyze novel fast optimization algorithms which take advantage of the advances in parallel computing architectures and can handle vast amounts of data. We introduce a new framework of analysis for asynchronous parallel incremental algorithms, which enables correct and simple proofs. We then demonstrate its usefulness by performing the convergence analysis for several methods, including two novel algorithms. Asaga is a sparse asynchronous parallel variant of the variance-reduced algorithm Saga which enjoys fast linear convergence rates on smooth and strongly convex objectives. We prove that it can be linearly faster than its sequential counterpart, even without sparsity assumptions. ProxAsaga is an extension of Asaga to the more general setting where the regularizer can be non-smooth. We prove that it can also achieve a linear speedup. We provide extensive experiments comparing our new algorithms to the current state of the art. Second, we introduce new methods for complex structured-prediction tasks. We focus on recurrent neural networks (RNNs), whose traditional training algorithm – based on maximum likelihood estimation (MLE) – suffers from several issues. The associated surrogate training loss notably ignores the information contained in structured losses and introduces discrepancies between train and test times that may hurt performance. To alleviate these problems, we propose SeaRNN, a novel training algorithm for RNNs inspired by the “learning to search” approach to structured prediction.
SeaRNN leverages test-like search-space exploration to introduce global-local losses that are closer to the test error than the MLE objective. We demonstrate improved performance over MLE on three challenging tasks, and provide several subsampling strategies to enable SeaRNN to scale to large-scale tasks such as machine translation. Finally, after contrasting the behavior of SeaRNN models with that of MLE models, we conduct an in-depth comparison of our new approach to the related work.
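The sequential SAGA update that Asaga parallelizes can be sketched compactly. This toy version minimizes the average of f_i(w) = 0.5·(a_i·w − b_i)² in one dimension, keeping a table of past per-example gradients so that each step is variance-reduced; the data, step size, and iteration count are synthetic, and the asynchronous, sparse machinery of Asaga is deliberately omitted.

```python
import random

def saga(a, b, step=0.02, iters=5000, seed=0):
    rng = random.Random(seed)
    n = len(a)
    w = 0.0
    # Table of the last gradient seen for each example, plus its average.
    memory = [a[i] * (a[i] * w - b[i]) for i in range(n)]
    avg = sum(memory) / n
    for _ in range(iters):
        j = rng.randrange(n)
        g = a[j] * (a[j] * w - b[j])        # fresh gradient of f_j at current w
        w -= step * (g - memory[j] + avg)   # SAGA variance-reduced step
        avg += (g - memory[j]) / n          # keep the running average exact
        memory[j] = g
    return w

w_hat = saga([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])  # true minimizer is w = 2
```

Because the correction term g − memory[j] + avg has the full gradient as its expectation and vanishing variance at the optimum, the iterate converges linearly on this strongly convex objective, which is the property Asaga preserves under asynchrony.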
Bai, Hao. "Machine learning assisted probabilistic prediction of long-term fatigue damage and vibration reduction of wind turbine tower using active damping system". Thesis, Normandie, 2021. http://www.theses.fr/2021NORMIR01.
This dissertation is devoted to the development of an active damping system for reducing the vibration of wind turbine towers under gusty and turbulent wind. These vibrations often lead either to excessive deflection at the top of the tower or to fatigue failure of the material near its base. Furthermore, given the random nature of wind conditions, it is indispensable to look at this problem from a probabilistic point of view. In this work, a probabilistic framework for fatigue analysis is developed and improved by using a residual neural network. A damping system employing an active damper, the Twin Rotor Damper, is designed for the NREL 5MW reference wind turbine. The design is optimized by an evolutionary algorithm with an automatic parameter-tuning method that balances exploitation and exploration.
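As a hedged sketch of the kind of self-tuning evolutionary step this abstract alludes to (the dissertation's actual algorithm is not specified here), consider a (1+1)-ES whose mutation strength sigma adapts by a 1/5th-success-style rule: grow sigma after a successful mutation (exploration), shrink it after a failure (exploitation). The real damper-design objective is replaced by a toy sphere function.

```python
import random

def one_plus_one_es(f, x0, sigma=1.0, iters=400, seed=1):
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    for _ in range(iters):
        # Mutate every coordinate with Gaussian noise of strength sigma.
        y = [xi + sigma * rng.gauss(0.0, 1.0) for xi in x]
        fy = f(y)
        if fy <= fx:
            x, fx = y, fy
            sigma *= 1.22   # success: take bolder, more exploratory steps
        else:
            sigma *= 0.95   # failure: search more locally
    return x, fx

sphere = lambda v: sum(vi * vi for vi in v)  # stand-in objective
best_x, best_f = one_plus_one_es(sphere, [3.0, -2.0])
```

The two multipliers are balanced so that sigma stays constant at roughly a 20% success rate, the classical operating point of the 1/5th rule.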
Chang, Allison An. "Integer optimization methods for machine learning". Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/72643.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (p. 129-137).
In this thesis, we propose new mixed integer optimization (MIO) methods to address problems in machine learning. The first part develops methods for supervised bipartite ranking, which arises in prioritization tasks in diverse domains such as information retrieval, recommender systems, natural language processing, bioinformatics, and preventative maintenance. The primary advantage of using MIO for ranking is that it allows for direct optimization of ranking quality measures, as opposed to current state-of-the-art algorithms that use heuristic loss functions. We demonstrate on a number of datasets that our approach can outperform other ranking methods. The second part of the thesis focuses on reverse-engineering ranking models; this is an application of a more general ranking problem than the bipartite case. Quality rankings affect business for many organizations, and knowing the ranking models would allow these organizations to better understand the standards by which their products are judged and help them create higher-quality products. We introduce an MIO method for reverse-engineering such models and demonstrate its performance in a case study with real data from a major ratings company. We also devise an approach to find the most cost-effective way to increase the rank of a certain product. In the final part of the thesis, we develop MIO methods to first generate association rules and then use the rules to build an interpretable classifier in the form of a decision list, which is an ordered list of rules. Both are combinatorially challenging problems, because even a small dataset may yield a large number of rules, and a small set of rules may correspond to many different orderings. We show how to use MIO to mine useful rules as well as to construct a classifier from them. We present results in terms of both classification accuracy and interpretability for a variety of datasets.
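To illustrate what "direct optimization of ranking quality measures" means for bipartite ranking, here is a standard pairwise measure, AUC: the fraction of (positive, negative) example pairs that a scoring function orders correctly, with ties counted as half. The brute-force evaluation below uses invented scores; heuristic surrogate losses approximate this quantity, whereas an MIO formulation can target it directly.

```python
def auc(pos_scores, neg_scores):
    # Compare every positive score against every negative score.
    pairs = [(p, n) for p in pos_scores for n in neg_scores]
    correct = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p, n in pairs)
    return correct / len(pairs)

score = auc([0.9, 0.8, 0.4], [0.7, 0.3, 0.2])  # 8 of 9 pairs ordered correctly
```

Because this count is discontinuous in the scores, gradient methods need smooth surrogates; the discrete pairwise structure is exactly what mixed integer formulations can encode with indicator variables.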
by Allison An Chang.
Ph.D.
Books on the topic "Machine learning, Global Optimization"
Optimization for machine learning. Cambridge, Mass: MIT Press, 2012.
Lin, Zhouchen, Huan Li, and Cong Fang. Accelerated Optimization for Machine Learning. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-2910-8.
Agrawal, Tanay. Hyperparameter Optimization in Machine Learning. Berkeley, CA: Apress, 2021. http://dx.doi.org/10.1007/978-1-4842-6579-6.
Fazelnia, Ghazal. Optimization for Probabilistic Machine Learning. [New York, N.Y.?]: [publisher not identified], 2019.
Nicosia, Giuseppe, Varun Ojha, Emanuele La Malfa, Gabriele La Malfa, Giorgio Jansen, Panos M. Pardalos, Giovanni Giuffrida, and Renato Umeton, eds. Machine Learning, Optimization, and Data Science. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-95470-3.
Nicosia, Giuseppe, Varun Ojha, Emanuele La Malfa, Gabriele La Malfa, Giorgio Jansen, Panos M. Pardalos, Giovanni Giuffrida, and Renato Umeton, eds. Machine Learning, Optimization, and Data Science. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-95467-3.
Jiang, Jiawei, Bin Cui, and Ce Zhang. Distributed Machine Learning and Gradient Optimization. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-3420-8.
Pardalos, Panos, Mario Pavone, Giovanni Maria Farinella, and Vincenzo Cutello, eds. Machine Learning, Optimization, and Big Data. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-27926-8.
Nicosia, Giuseppe, Panos Pardalos, Renato Umeton, Giovanni Giuffrida, and Vincenzo Sciacca, eds. Machine Learning, Optimization, and Data Science. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-37599-7.
Kulkarni, Anand J., and Suresh Chandra Satapathy, eds. Optimization in Machine Learning and Applications. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-0994-0.
Texto completoCapítulos de libros sobre el tema "Machine learning, Global Optimization"
Kearfott, Ralph Baker. "Mathematically Rigorous Global Optimization and Fuzzy Optimization". In Black Box Optimization, Machine Learning, and No-Free Lunch Theorems, 169–94. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-66515-9_7.
de Winter, Roy, Bas van Stein, Matthys Dijkman, and Thomas Bäck. "Designing Ships Using Constrained Multi-objective Efficient Global Optimization". In Machine Learning, Optimization, and Data Science, 191–203. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-13709-0_16.
Cocola, Jorio, and Paul Hand. "Global Convergence of Sobolev Training for Overparameterized Neural Networks". In Machine Learning, Optimization, and Data Science, 574–86. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-64583-0_51.
Zabinsky, Zelda B., Giulia Pedrielli, and Hao Huang. "A Framework for Multi-fidelity Modeling in Global Optimization Approaches". In Machine Learning, Optimization, and Data Science, 335–46. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-37599-7_28.
Griewank, Andreas, and Ángel Rojas. "Treating Artificial Neural Net Training as a Nonsmooth Global Optimization Problem". In Machine Learning, Optimization, and Data Science, 759–70. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-37599-7_64.
Issa, Mohamed, Aboul Ella Hassanien, and Ibrahim Ziedan. "Performance Evaluation of Sine-Cosine Optimization Versus Particle Swarm Optimization for Global Sequence Alignment Problem". In Machine Learning Paradigms: Theory and Application, 375–91. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-02357-7_18.
Wang, Yong-Jun, Jiang-She Zhang, and Yu-Fen Zhang. "An Effective and Efficient Two Stage Algorithm for Global Optimization". In Advances in Machine Learning and Cybernetics, 487–96. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11739685_51.
Kiranyaz, Serkan, Turker Ince, and Moncef Gabbouj. "Improving Global Convergence". In Multidimensional Particle Swarm Optimization for Machine Learning and Pattern Recognition, 101–49. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-37846-1_5.
Consoli, Sergio, Luca Tiozzo Pezzoli, and Elisa Tosetti. "Using the GDELT Dataset to Analyse the Italian Sovereign Bond Market". In Machine Learning, Optimization, and Data Science, 190–202. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-64583-0_18.
Rodrigues, Douglas, Gustavo Henrique de Rosa, Leandro Aparecido Passos, and João Paulo Papa. "Adaptive Improved Flower Pollination Algorithm for Global Optimization". In Nature-Inspired Computation in Data Mining and Machine Learning, 1–21. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28553-1_1.
Conference papers on the topic "Machine learning, Global Optimization"
He, Yi-chao, and Kun-qi Liu. "A Modified Particle Swarm Optimization for Solving Global Optimization Problems". In 2006 International Conference on Machine Learning and Cybernetics. IEEE, 2006. http://dx.doi.org/10.1109/icmlc.2006.258615.
Tamura, Kenichi, and Keiichiro Yasuda. "Spiral Multipoint Search for Global Optimization". In 2011 Tenth International Conference on Machine Learning and Applications (ICMLA). IEEE, 2011. http://dx.doi.org/10.1109/icmla.2011.131.
Wang, Yong-Jun, Jiang-She Zhang, and Yu-Fen Zhang. "A fast hybrid algorithm for global optimization". In Proceedings of 2005 International Conference on Machine Learning and Cybernetics. IEEE, 2005. http://dx.doi.org/10.1109/icmlc.2005.1527462.
Sun, Gao-Ji. "A new evolutionary algorithm for global numerical optimization". In 2010 International Conference on Machine Learning and Cybernetics (ICMLC). IEEE, 2010. http://dx.doi.org/10.1109/icmlc.2010.5580961.
Nacef, Abdelhakim, Miloud Bagaa, Youcef Aklouf, Abdellah Kaci, Diego Leonel Cadette Dutra, and Adlen Ksentini. "Self-optimized network: When Machine Learning Meets Optimization". In GLOBECOM 2021 - 2021 IEEE Global Communications Conference. IEEE, 2021. http://dx.doi.org/10.1109/globecom46510.2021.9685681.
Li, Xue-Qiang, Zhi-Feng Hao, and Han Huang. "An evolutionary algorithm with sorted race mechanism for global optimization". In 2010 International Conference on Machine Learning and Cybernetics (ICMLC). IEEE, 2010. http://dx.doi.org/10.1109/icmlc.2010.5580810.
Injadat, MohammadNoor, Fadi Salo, Ali Bou Nassif, Aleksander Essex, and Abdallah Shami. "Bayesian Optimization with Machine Learning Algorithms Towards Anomaly Detection". In GLOBECOM 2018 - 2018 IEEE Global Communications Conference. IEEE, 2018. http://dx.doi.org/10.1109/glocom.2018.8647714.
Candelieri, Antonio, and Francesco Archetti. "Sequential model based optimization with black-box constraints: Feasibility determination via machine learning". In Proceedings LEGO – 14th International Global Optimization Workshop. AIP, 2019. http://dx.doi.org/10.1063/1.5089977.
Chen, Chang-Huang. "Bare bone particle swarm optimization with integration of global and local learning strategies". In 2011 International Conference on Machine Learning and Cybernetics (ICMLC). IEEE, 2011. http://dx.doi.org/10.1109/icmlc.2011.6016781.
Soroush, H. M. "Bicriteria single machine scheduling with setup times and learning effects". In Proceedings of the Sixth Global Conference on Power Control and Optimization. AIP, 2012. http://dx.doi.org/10.1063/1.4769005.
Reports on the topic "Machine learning, Global Optimization"
Saenz, Juan Antonio, Ismael Djibrilla Boureima, Vitaliy Gyrya, and Susan Kurien. Machine-Learning for Rapid Optimization of Turbulence Models. Office of Scientific and Technical Information (OSTI), July 2020. http://dx.doi.org/10.2172/1638623.
Gu, Xiaofeng, A. Fedotov, and D. Kayran. Application of a machine learning algorithm (XGBoost) to offline RHIC luminosity optimization. Office of Scientific and Technical Information (OSTI), April 2021. http://dx.doi.org/10.2172/1777441.
Rolf, Esther, Jonathan Proctor, Tamma Carleton, Ian Bolliger, Vaishaal Shankar, Miyabi Ishihara, Benjamin Recht, and Solomon Hsiang. A Generalizable and Accessible Approach to Machine Learning with Global Satellite Imagery. Cambridge, MA: National Bureau of Economic Research, November 2020. http://dx.doi.org/10.3386/w28045.
Scheinberg, Katya. Derivative Free Optimization of Complex Systems with the Use of Statistical Machine Learning Models. Fort Belvoir, VA: Defense Technical Information Center, September 2015. http://dx.doi.org/10.21236/ada622645.
Ghanshyam, Pilania, Kenneth James McClellan, Christopher Richard Stanek, and Blas P. Uberuaga. Physics-Informed Machine Learning for Discovery and Optimization of Materials: A Case Study of Scintillators. Office of Scientific and Technical Information (OSTI), August 2018. http://dx.doi.org/10.2172/1463529.
Bao, Jie, Chao Wang, Zhijie Xu, and Brian J. Koeppel. Physics-Informed Machine Learning with Application to Solid Oxide Fuel Cell System Modeling and Optimization. Office of Scientific and Technical Information (OSTI), September 2019. http://dx.doi.org/10.2172/1569289.
Gabelmann, Jeffrey, and Eduardo Gildin. A Machine Learning-Based Geothermal Drilling Optimization System Using EM Short-Hop Bit Dynamics Measurements. Office of Scientific and Technical Information (OSTI), April 2020. http://dx.doi.org/10.2172/1842454.
Qi, Fei, Zhaohui Xia, Gaoyang Tang, Hang Yang, Yu Song, Guangrui Qian, Xiong An, Chunhuan Lin, and Guangming Shi. A Graph-based Evolutionary Algorithm for Automated Machine Learning. Web of Open Science, December 2020. http://dx.doi.org/10.37686/ser.v1i2.77.
Vittorio, Alan, and Kate Calvin. Using machine learning to improve land use/cover characterization and projection for scenario-based global modeling. Office of Scientific and Technical Information (OSTI), April 2021. http://dx.doi.org/10.2172/1769796.
Wu, S. Boiler Optimization Using Advance Machine Learning Techniques. Final Report for period September 30, 1995 - September 29, 2000. Office of Scientific and Technical Information (OSTI), August 2005. http://dx.doi.org/10.2172/877237.