Theses on the topic "Machine learning, Global Optimization"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 theses for your research on the topic "Machine learning, Global Optimization".
Next to each source in the reference list there is an "Add to bibliography" button. Click this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication in PDF format and read its abstract online whenever it is available in the metadata.
Explore theses on a wide variety of disciplines and organize your bibliography correctly.
Nowak, Hans II (Hans Antoon). "Strategic capacity planning using data science, optimization, and machine learning". Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/126914.
Thesis: S.M., Massachusetts Institute of Technology, Department of Mechanical Engineering, in conjunction with the Leaders for Global Operations Program at MIT, May, 2020
Cataloged from the official PDF of thesis.
Includes bibliographical references (pages 101-104).
Raytheon's Circuit Card Assembly (CCA) factory in Andover, MA is Raytheon's largest factory and the largest Department of Defense (DOD) CCA manufacturer in the world. With over 500 operations, it manufactures over 7000 unique parts with a high degree of complexity and varying levels of demand. Recently, the factory has seen an increase in demand, making the ability to continuously analyze factory capacity and strategically plan for future operations much needed. This study seeks to develop a sustainable strategic capacity optimization model and capacity visualization tool that integrates demand data with historical manufacturing data. Through automated data mining algorithms of factory data sources, capacity utilization and overall equipment effectiveness (OEE) for factory operations are evaluated. Machine learning methods are then assessed to gain an accurate estimate of cycle time (CT) throughout the factory. Finally, a mixed-integer nonlinear program (MINLP) integrates the capacity utilization framework and machine learning predictions to compute the optimal strategic capacity planning decisions. Capacity utilization and OEE models are shown to be able to be generated through automated data mining algorithms. Machine learning models are shown to have a mean average error (MAE) of 1.55 on predictions for new data, which is 76.3% lower than the current CT prediction error. Finally, the MINLP is solved to optimality within a tolerance of 1.00e-04 and generates resource and production decisions that can be acted upon.
by Hans Nowak II.
M.B.A.
S.M.
M.B.A. Massachusetts Institute of Technology, Sloan School of Management
S.M. Massachusetts Institute of Technology, Department of Mechanical Engineering
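As a rough, hypothetical illustration of the cycle-time modeling step described in the Nowak abstract above (a regression model for cycle time evaluated by MAE against a baseline prediction), the following sketch uses scikit-learn and synthetic data; none of the features, values, or library choices come from the thesis itself.

```python
# Minimal sketch: fit a regression model for cycle time (CT) and compare its
# mean absolute error (MAE) against a naive baseline, in the spirit of the
# abstract above. Synthetic data and scikit-learn are assumptions, not the thesis's setup.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                     # hypothetical operation features
y = 10 + X @ np.array([2.0, -1.0, 0.5, 0.0, 1.5]) + rng.normal(scale=1.0, size=1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

mae_model = mean_absolute_error(y_te, model.predict(X_te))
mae_baseline = mean_absolute_error(y_te, np.full_like(y_te, y_tr.mean()))  # naive baseline
print(f"model MAE={mae_model:.2f}, baseline MAE={mae_baseline:.2f}, "
      f"reduction={100 * (1 - mae_model / mae_baseline):.1f}%")
```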
Veluscek, Marco. "Global supply chain optimization : a machine learning perspective to improve caterpillar's logistics operations". Thesis, Brunel University, 2016. http://bura.brunel.ac.uk/handle/2438/13050.
Schweidtmann, Artur M. [Verfasser], Alexander [Akademischer Betreuer] Mitsos and Andreas [Akademischer Betreuer] Schuppert. "Global optimization of processes through machine learning / Artur M. Schweidtmann ; Alexander Mitsos, Andreas Schuppert". Aachen : Universitätsbibliothek der RWTH Aachen, 2021. http://d-nb.info/1240690924/34.
Taheri, Mehdi. "Machine Learning from Computer Simulations with Applications in Rail Vehicle Dynamics and System Identification". Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/81417.
Ph. D.
Gabere, Musa Nur. "Prediction of antimicrobial peptides using hyperparameter optimized support vector machines". Thesis, University of the Western Cape, 2011. http://etd.uwc.ac.za/index.php?module=etd&action=viewtitle&id=gen8Srv25Nme4_7345_1330684697.
Antimicrobial peptides (AMPs) play a key role in the innate immune response. They can be ubiquitously found in a wide range of eukaryotes including mammals, amphibians, insects, plants, and protozoa. In lower organisms, AMPs function merely as antibiotics by permeabilizing cell membranes and lysing invading microbes. Prediction of antimicrobial peptides is important because experimental methods used in characterizing AMPs are costly, time consuming and resource intensive and identification of AMPs in insects can serve as a template for the design of novel antibiotic. In order to fulfil this, firstly, data on antimicrobial peptides is extracted from UniProt, manually curated and stored into a centralized database called dragon antimicrobial peptide database (DAMPD). Secondly, based on the curated data, models to predict antimicrobial peptides are created using support vector machine with optimized hyperparameters. In particular, global optimization methods such as grid search, pattern search and derivative-free methods are utilised to optimize the SVM hyperparameters. These models are useful in characterizing unknown antimicrobial peptides. Finally, a webserver is created that will be used to predict antimicrobial peptides in haemotophagous insects such as Glossina morsitan and Anopheles gambiae.
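A minimal sketch of the grid-search hyperparameter optimization for SVMs that the abstract above mentions, assuming scikit-learn and synthetic peptide features; the DAMPD data, the actual feature encoding, and the pattern-search and derivative-free optimizers used in the thesis are not reproduced here.

```python
# Sketch: tune SVM hyperparameters (C, gamma) by grid search with cross-validation,
# one of the optimization strategies the abstract lists. Data are synthetic placeholders.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))           # hypothetical peptide feature vectors
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # hypothetical AMP / non-AMP labels

param_grid = {"C": [0.1, 1, 10, 100], "gamma": [1e-3, 1e-2, 1e-1, 1]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5, scoring="accuracy")
search.fit(X, y)
print("best hyperparameters:", search.best_params_)
print("cross-validated accuracy: %.3f" % search.best_score_)
```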
Belkhir, Nacim. "Per Instance Algorithm Configuration for Continuous Black Box Optimization". Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS455/document.
This PhD thesis focuses on automated algorithm configuration, which aims at finding the best parameter setting for a given problem or class of problems. The Algorithm Configuration problem thus amounts to a meta-optimization problem in the space of parameters, whose meta-objective is the performance measure of the given algorithm with a given parameter configuration. However, in the continuous domain, such methods can only be empirically assessed at the cost of running the algorithm on some problem instances. More recent approaches rely on a description of problems in some feature space, and try to learn a mapping from this feature space onto the space of parameter configurations of the algorithm at hand. Along these lines, this PhD thesis focuses on Per Instance Algorithm Configuration (PIAC) for solving continuous black box optimization problems, where only a limited computational budget is available. We first survey Evolutionary Algorithms for continuous optimization, with a focus on the two algorithms that we have used as target algorithms for PIAC: DE and CMA-ES. Next, we review the state of the art of Algorithm Configuration approaches, and the different features that have been proposed in the literature to describe continuous black box optimization problems. We then introduce a general methodology to empirically study PIAC for the continuous domain, so that all the components of PIAC can be explored in real-world conditions. To this end, we also introduce a new continuous black box test bench, distinct from the famous BBOB benchmark, that is composed of several multi-dimensional test functions with different problem properties, gathered from the literature. The methodology is finally applied to two EAs. First we use Differential Evolution as target algorithm, and explore all the components of PIAC so as to empirically assess the best ones. Second, based on the results on DE, we empirically investigate PIAC with Covariance Matrix Adaptation Evolution Strategy (CMA-ES) as target algorithm. Both use cases empirically validate the proposed methodology on the new black box test bench for dimensions up to 100.
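As a loose illustration of the Per Instance Algorithm Configuration idea described in the abstract above (learning a mapping from problem features to a parameter configuration), this sketch trains a regressor that suggests Differential Evolution parameters from instance features; the feature set, targets, and random-forest choice are assumptions for illustration, not the thesis's pipeline.

```python
# Sketch of PIAC as a supervised learning problem: map black-box problem features
# (e.g., landscape descriptors) to the algorithm parameters that performed best on
# similar training instances. All data here are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
features = rng.normal(size=(200, 8))          # hypothetical per-instance features
best_params = np.column_stack([               # hypothetical best (F, CR) for DE
    0.5 + 0.3 * np.tanh(features[:, 0]),      # scale factor F in (0.2, 0.8)
    0.5 + 0.4 * np.tanh(features[:, 1]),      # crossover rate CR in (0.1, 0.9)
])

piac_model = RandomForestRegressor(n_estimators=200, random_state=0)
piac_model.fit(features, best_params)

new_instance = rng.normal(size=(1, 8))        # features of an unseen problem
F, CR = piac_model.predict(new_instance)[0]
print(f"suggested DE configuration: F={F:.2f}, CR={CR:.2f}")
```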
Liu, Liu. "Stochastic Optimization in Machine Learning". Thesis, The University of Sydney, 2019. http://hdl.handle.net/2123/19982.
Leblond, Rémi. "Asynchronous optimization for machine learning". Thesis, Paris Sciences et Lettres (ComUE), 2018. http://www.theses.fr/2018PSLEE057/document.
The impressive breakthroughs of the last two decades in the field of machine learning can be in large part attributed to the explosion of computing power and available data. These two limiting factors have been replaced by a new bottleneck: algorithms. The focus of this thesis is thus on introducing novel methods that can take advantage of high data quantity and computing power. We present two independent contributions. First, we develop and analyze novel fast optimization algorithms which take advantage of the advances in parallel computing architecture and can handle vast amounts of data. We introduce a new framework of analysis for asynchronous parallel incremental algorithms, which enables correct and simple proofs. We then demonstrate its usefulness by performing the convergence analysis for several methods, including two novel algorithms. Asaga is a sparse asynchronous parallel variant of the variance-reduced algorithm Saga which enjoys fast linear convergence rates on smooth and strongly convex objectives. We prove that it can be linearly faster than its sequential counterpart, even without sparsity assumptions. ProxAsaga is an extension of Asaga to the more general setting where the regularizer can be non-smooth. We prove that it can also achieve a linear speedup. We provide extensive experiments comparing our new algorithms to the current state of the art. Second, we introduce new methods for complex structured prediction tasks. We focus on recurrent neural networks (RNNs), whose traditional training algorithm, based on maximum likelihood estimation (MLE), suffers from several issues. The associated surrogate training loss notably ignores the information contained in structured losses and introduces discrepancies between train and test times that may hurt performance. To alleviate these problems, we propose SeaRNN, a novel training algorithm for RNNs inspired by the "learning to search" approach to structured prediction. SeaRNN leverages test-alike search space exploration to introduce global-local losses that are closer to the test error than the MLE objective. We demonstrate improved performance over MLE on three challenging tasks, and provide several subsampling strategies to enable SeaRNN to scale to large-scale tasks, such as machine translation. Finally, after contrasting the behavior of SeaRNN models to MLE models, we conduct an in-depth comparison of our new approach to the related work.
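For readers unfamiliar with the variance-reduced method that Asaga parallelizes, here is a minimal sequential SAGA sketch on a least-squares objective; the asynchronous, sparse, and proximal variants analyzed in the thesis are not reproduced, and the step size and data are illustrative assumptions.

```python
# Minimal sequential SAGA for f(w) = (1/n) * sum_i 0.5 * (x_i . w - y_i)^2.
# It keeps a table of the last gradient seen per sample and uses it to reduce variance:
#   w <- w - step * (g_i(w) - memory_i + mean(memory)).
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 10
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.01 * rng.normal(size=n)

w = np.zeros(d)
memory = np.zeros((n, d))          # last gradient stored for each sample
grad_mean = memory.mean(axis=0)
step = 0.01

for it in range(50 * n):
    i = rng.integers(n)
    g_i = (X[i] @ w - y[i]) * X[i]             # gradient of sample i at current w
    w -= step * (g_i - memory[i] + grad_mean)  # SAGA update (uses old memory)
    grad_mean += (g_i - memory[i]) / n         # keep the running mean consistent
    memory[i] = g_i

print("distance to w_true:", np.linalg.norm(w - w_true))
```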
Bai, Hao. "Machine learning assisted probabilistic prediction of long-term fatigue damage and vibration reduction of wind turbine tower using active damping system". Thesis, Normandie, 2021. http://www.theses.fr/2021NORMIR01.
This dissertation is devoted to the development of an active damping system for vibration reduction of wind turbine tower under gusty wind and turbulent wind. The presence of vibrations often leads to either an ultimate deflection on the top of wind tower or a failure due to the material’s fatigue near the bottom of wind tower. Furthermore, given the random nature of wind conditions, it is indispensable to look at this problem from a probabilistic point of view. In this work, a probabilistic framework of fatigue analysis is developed and improved by using a residual neural network. A damping system employing an active damper, Twin Rotor Damper, is designed for NREL 5MW reference wind turbine. The design is optimized by an evolutionary algorithm with automatic parameter tuning method based on exploitation and exploration
Chang, Allison An. "Integer optimization methods for machine learning". Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/72643.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (p. 129-137).
In this thesis, we propose new mixed integer optimization (MIO) methods to address problems in machine learning. The first part develops methods for supervised bipartite ranking, which arises in prioritization tasks in diverse domains such as information retrieval, recommender systems, natural language processing, bioinformatics, and preventative maintenance. The primary advantage of using MIO for ranking is that it allows for direct optimization of ranking quality measures, as opposed to current state-of-the-art algorithms that use heuristic loss functions. We demonstrate using a number of datasets that our approach can outperform other ranking methods. The second part of the thesis focuses on reverse-engineering ranking models. This is an application of a more general ranking problem than the bipartite case. Quality rankings affect business for many organizations, and knowing the ranking models would allow these organizations to better understand the standards by which their products are judged and help them to create higher quality products. We introduce an MIO method for reverse-engineering such models and demonstrate its performance in a case-study with real data from a major ratings company. We also devise an approach to find the most cost-effective way to increase the rank of a certain product. In the final part of the thesis, we develop MIO methods to first generate association rules and then use the rules to build an interpretable classifier in the form of a decision list, which is an ordered list of rules. These are both combinatorially challenging problems because even a small dataset may yield a large number of rules and a small set of rules may correspond to many different orderings. We show how to use MIO to mine useful rules, as well as to construct a classifier from them. We present results in terms of both classification accuracy and interpretability for a variety of datasets.
by Allison An Chang.
Ph.D.
Reddi, Sashank Jakkam. "New Optimization Methods for Modern Machine Learning". Research Showcase @ CMU, 2017. http://repository.cmu.edu/dissertations/1116.
Etheve, Marc. "Solving repeated optimization problems by Machine Learning". Thesis, Paris, HESAM, 2021. http://www.theses.fr/2021HESAC040.
This thesis aims at using machine learning techniques in the context of Mixed Integer Linear Programming instances generated by stochastic data. Rather than solve these instances independently using the Branch and Bound algorithm (B&B), we propose to leverage the similarities between instances by learning inner strategies of this algorithm, such as node selection and branching. The main approach developed in this work is to use reinforcement learning to discover, by trial and error, strategies which minimize the B&B tree size. To properly adapt to the B&B environment, we define a new kind of tree-based transitions, and elaborate on different cost models in the corresponding Markov Decision Processes. We prove the optimality of the unitary cost model under both classical and tree-based transitions, either for branching or node selection. However, we experimentally show that it may be beneficial to bias the cost so as to improve the learning stability. Regarding node selection, we formally exhibit an optimal strategy which can be more efficiently learnt directly by supervised learning. In addition, we propose to exploit the structure of the studied problems. To this end, we propose a decomposition-coordination methodology, a branching heuristic based on a graph representation of a B&B node and finally an approach for learning to disrupt the objective function.
Van, Mai Vien. "Large-Scale Optimization With Machine Learning Applications". Licentiate thesis, KTH, Reglerteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-263147.
QC 20191105
Cardamone, Dario. "Support Vector Machine a Machine Learning Algorithm". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017.
Tugnoli, Riccardo. "MVA Calculation and Optimization with Machine Learning Techniques". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018.
Dahlberg, Leslie. "Evolutionary Computation in Continuous Optimization and Machine Learning". Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-35674.
Konečný, Jakub. "Stochastic, distributed and federated optimization for machine learning". Thesis, University of Edinburgh, 2017. http://hdl.handle.net/1842/31478.
Hedberg, Karolina. "Optimization of Insert-Tray Matching using Machine Learning". Thesis, Uppsala universitet, Avdelningen för systemteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-452871.
Singh, Karanpreet. "Accelerating Structural Design and Optimization using Machine Learning". Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/104114.
Doctor of Philosophy
This thesis presents an innovative application of artificial intelligence (AI) techniques for designing aircraft structures. An important objective for the aerospace industry is to design robust and fuel-efficient aerospace structures. State-of-the-art research in the literature shows that the structure of aircraft in the future could mimic organic cellular structure. However, the design of these new panels with arbitrary structures is computationally expensive. For instance, applying the standard optimization methods currently used for aerospace structures to the design of an aircraft can take anywhere from a few days to months. The presented research demonstrates the potential of AI for accelerating the optimization of aircraft structures. This will provide an efficient way for aircraft designers to design futuristic fuel-efficient aircraft, which will have a positive impact on the environment and the world.
Dabert, Geoffrey. "Application of Machine Learning techniques to Optimization algorithms". Thesis, KTH, Optimeringslära och systemteori, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-207471.
Giarimpampa, Despoina. "Blind Image Steganalytic Optimization by using Machine Learning". Thesis, Högskolan i Halmstad, Akademin för informationsteknologi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-38150.
Nguyen, Thanh Tan. "Selected non-convex optimization problems in machine learning". Thesis, Queensland University of Technology, 2020. https://eprints.qut.edu.au/200748/1/Thanh_Nguyen_Thesis.pdf.
Detassis, Fabrizio <1991>. "Methods for integrating machine learning and constrained optimization". Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amsdottorato.unibo.it/10360/1/Detassis_Fabrizio_Thesis_Final.pdf.
Hill, Jerry L. and Randall P. Mora. "An Autonomous Machine Learning Approach for Global Terrorist Recognition". International Foundation for Telemetering, 2012. http://hdl.handle.net/10150/581675.
A major intelligence challenge we face in today's national security environment is the threat of terrorist attack against our national assets, especially our citizens. This paper addresses global reconnaissance which incorporates an autonomous Intelligent Agent/Data Fusion solution for recognizing potential risk of terrorist attack through identifying and reporting imminent persona-oriented terrorist threats based on data reduction/compression of a large volume of low latency data possibly from hundreds, or even thousands of data points.
Kindestam, Anton. "Graph-based features for machine learning driven code optimization". Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-211444.
In this report we present a method for training a support vector regression model which, given an unseen program and a point in the optimization space, can predict how much faster than the baseline the program will execute if the given optimizations are applied. To represent the program we use a graph structure to which we can apply a graph kernel, the Shortest-Path Graph Kernel, which can determine how similar two graphs are to each other. The method is based on an approach presented by Park et al. in Using Graph-Based Program Characterization for Predictive Modeling. The optimization space is obtained through different combinations of command-line parameters to the polyhedral C-to-C compiler PoCC. The test data were obtained by precomputing speedup factors for all optimizations and all programs in the PolyBench benchmark suite. We find that the model by some measures produces "good" results, but because of the large error and the random behavior the method, in its current form, must unfortunately be rejected.
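As a small sketch of the pattern described in the abstract above, support vector regression over a precomputed kernel matrix standing in for the Shortest-Path Graph Kernel; the program graphs are replaced by random feature vectors and an RBF kernel purely for illustration, and PoCC/PolyBench are not involved.

```python
# Sketch: SVR over a precomputed kernel matrix, the pattern used when a graph kernel
# (here faked with an RBF kernel on random vectors) measures program similarity and
# the regression target is the speedup over a baseline.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVR

rng = np.random.default_rng(0)
programs = rng.normal(size=(60, 16))     # stand-in for per-program graph descriptors
speedups = 1.0 + np.abs(programs[:, 0])  # hypothetical speedup factors over baseline

train, test = np.arange(50), np.arange(50, 60)
K_train = rbf_kernel(programs[train], programs[train])   # would be the graph kernel
K_test = rbf_kernel(programs[test], programs[train])

model = SVR(kernel="precomputed").fit(K_train, speedups[train])
print("predicted speedups:", model.predict(K_test).round(2))
```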
Patvarczki, Jozsef. "Layout Optimization for Distributed Relational Databases Using Machine Learning". Digital WPI, 2012. https://digitalcommons.wpi.edu/etd-dissertations/291.
Valenzuela, Michael Lawrence. "Machine Learning, Optimization, and Anti-Training with Sacrificial Data". Diss., The University of Arizona, 2016. http://hdl.handle.net/10150/605111.
Mahajan, Ankush. "Machine learning assisted QoT estimation for optical networks optimization". Doctoral thesis, Universitat Politècnica de Catalunya, 2021. http://hdl.handle.net/10803/672665.
Network operators are pushing the concept of network disaggregation, which decouples the traditional, monolithically built optical transport hardware into independent functional blocks that interoperate with each other. As a result, disaggregation encourages a more open market in which network operators/owners can choose the best devices from different vendors at more competitive prices, eliminating the well-known vendor lock-in. In such a multi-vendor disaggregated context, each piece of equipment contributes its own effect, so uncertainty increases compared with the performance obtained under a more traditional, aggregated, single-vendor model. Efficient planning and optimization of an optical network requires estimating the Quality of Transmission (QoT) of the connections, and network designers are interested in accurate and fast QoT estimation for the services to be established. QoT estimation is normally performed with a Physical Layer Model (PLM) included in the QoT estimation tool, or Qtool. In addition, a design margin is included within the Qtool to account for modeling and parameter inaccuracies and thus ensure acceptable performance. The accuracy of the PLM is very important, since modeling errors translate into a larger design margin, which in turn translates into a loss of capacity. Recently, significant progress has been made on more accurate and faster PLMs for optical networks, based on traditional methods with analytical or numerical solutions. The first part of the thesis focuses on machine learning (ML) assisted techniques for accurate QoT estimation. A model is developed that uses network monitoring information combined with supervised ML regression techniques to understand the network conditions; in particular, the penalties generated by i) the EDFA gain ripple effect and ii) the uncertainties of the filter spectral shape at the ROADM nodes are modeled. Furthermore, to improve the accuracy of the Qtool estimation in networks that include elements from different manufacturers (i.e., multi-vendor networks), extensions of the PLM are proposed: four transponder (TP) vendor-dependent performance factors are introduced that capture the performance variations of TPs from multiple vendors. To verify the potential improvement, two use cases are studied with the proposed PLM: i) optimizing the launch power of the TPs, and ii) reducing the design margin. The last part of this thesis investigates the issue of limited Qtool accuracy in dynamic optimization tasks. To keep models aligned with real conditions, the digital twin (DT) concept is gaining much attention; a DT includes an evolving data set and a means to dynamically adjust the model. Building on the fundamentals of the DT, an iterative closed-control-loop process is devised and implemented that, after several intermediate iterations of the optimization algorithm, configures the network, monitors it, and retrains the Qtool.
For retraining the Qtool, an ML-based nonlinear regression fitting technique is adopted. The main advantage is that, while the network is operating, the Qtool parameters are retrained according to the monitored information with the adopted ML model. The Qtool therefore follows the intermediate projected states computed by the algorithm, which reduces the optimization time compared with direct probing and monitoring.
Teoria del senyal i comunicacions
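A minimal sketch of the ML-assisted QoT idea summarized in the abstract above: a regressor learns the residual penalty between monitored performance and a physics-based Qtool estimate (standing in here for effects such as EDFA gain ripple and ROADM filtering) and corrects it. The features, units, and regressor choice are assumptions for illustration only, not the thesis's model.

```python
# Sketch: correct a physics-based QoT (Qtool) estimate with a regressor trained on
# monitored lightpaths. All data below are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 500
features = np.column_stack([
    rng.integers(1, 20, n),        # number of spans traversed (hypothetical)
    rng.integers(0, 8, n),         # number of ROADM filters traversed
    rng.uniform(-2, 2, n),         # launch power offset [dB]
])
qtool_snr = 20 - 0.4 * features[:, 0]                               # toy physics-based estimate
true_snr = (qtool_snr - 0.15 * features[:, 1]
            - 0.3 * features[:, 2] ** 2 + rng.normal(0, 0.1, n))    # "monitored" SNR

residual_model = GradientBoostingRegressor(random_state=0)
residual_model.fit(features, true_snr - qtool_snr)                  # learn the penalty

corrected = qtool_snr + residual_model.predict(features)
print("mean abs error before/after correction: %.2f / %.2f dB" % (
    np.abs(true_snr - qtool_snr).mean(), np.abs(true_snr - corrected).mean()))
```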
Crouch, Ingrid W. M. "A knowledge-based simulation optimization system with machine learning". Diss., Virginia Tech, 1992. http://hdl.handle.net/10919/37245.
Zhou, Yi. "Nonconvex Optimization in Machine Learning: Convergence, Landscape, and Generalization". The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1533554879269658.
Adurti, Devi Abhiseshu and Mohit Battu. "Optimization of Heterogeneous Parallel Computing Systems using Machine Learning". Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-21834.
Ekman, Björn. "Machine Learning for Beam Based Mobility Optimization in NR". Thesis, Linköpings universitet, Kommunikationssystem, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-136489.
Zhang, Tianfang. "Machine learning multicriteria optimization in radiation therapy treatment planning". Thesis, KTH, Matematisk statistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-257509.
In radiation therapy treatment planning, recent research has used machine learning based on historically delivered plans to automate the process by which clinically acceptable plans are produced. Compared to traditional approaches, such as repeated optimization of a weighted objective function or multicriteria optimization (MCO), automated planning methods generally have the advantages of lower computation times and minimal user interaction, but lack the flexibility of general-purpose frameworks such as MCO. Machine learning methods can be especially sensitive to deviations in the dose prediction step because of particular properties of the optimization functions usually used to reconstruct dose distributions, and moreover suffer from the problem that there is no universally valid causal relationship between prediction accuracy and the quality of the optimized plan. In this work, we present a means of unifying ideas from machine learning based planning methods with the well-established MCO framework. More precisely, given prior knowledge in the form of either a previously optimized plan or a set of historically delivered clinical plans, we can automatically generate Pareto optimal plans spanning a dose region corresponding to plans that are achievable as well as clinically acceptable. In the former case, this is done by introducing dose-volume constraints; in the latter case, by fitting a Gaussian mixture model to weighted data with the expectation-maximization algorithm, modifying it with exponential tilting, and then using specially developed optimization functions to take prediction uncertainties into account. Numerical results for proof of concept are obtained for a prostate cancer case in which the treatment was delivered with volumetric-modulated arc therapy, showing that the methods developed in this work succeed in automatically generating Pareto optimal plans of satisfactory quality and variation while excluding clinically irrelevant dose regions. In the case where historical plans are used as prior knowledge, the computation times are markedly shorter than for conventional MCO.
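As a compact illustration of one ingredient mentioned in the abstract above, fitting a Gaussian mixture to weighted data with the expectation-maximization algorithm, here is a one-dimensional, two-component sketch; the exponential tilting step and the thesis's dose-mimicking optimization functions are not included, and all data are synthetic.

```python
# Weighted-data EM for a 1D, two-component Gaussian mixture (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(5, 0.5, 200)])
w = rng.uniform(0.5, 1.5, x.size)              # per-sample weights

pi = np.array([0.5, 0.5])                      # mixing proportions
mu = np.array([x.min(), x.max()])              # crude initialization
var = np.array([x.var(), x.var()])

def normal_pdf(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

for _ in range(100):
    # E-step: responsibilities of each component for each sample
    dens = pi * normal_pdf(x[:, None], mu, var)       # shape (n, 2)
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: weighted updates (sample weights enter every sufficient statistic)
    wk = (w[:, None] * resp).sum(axis=0)
    mu = (w[:, None] * resp * x[:, None]).sum(axis=0) / wk
    var = (w[:, None] * resp * (x[:, None] - mu) ** 2).sum(axis=0) / wk
    pi = wk / w.sum()

print("means:", mu.round(2), "stds:", np.sqrt(var).round(2), "weights:", pi.round(2))
```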
Sedig, Victoria, Evelina Samuelsson, Nils Gumaelius and Andrea Lindgren. "Greenhouse Climate Optimization using Weather Forecasts and Machine Learning". Thesis, Uppsala universitet, Avdelningen för beräkningsvetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-391045.
Bompaire, Martin. "Machine learning based on Hawkes processes and stochastic optimization". Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLX030/document.
The common thread of this thesis is the study of Hawkes processes. These point processes decrypt the cross-causality that occurs across several event series. Namely, they retrieve the influence that the events of one series have on the future events of all series. For example, in the context of social networks, they describe how likely an action of a certain user (such as a Tweet) will trigger reactions from the others. The first chapter consists in a general introduction on point processes followed by a focus on Hawkes processes and more specifically on the properties of the widely used exponential kernels parametrization. In the following chapter, we introduce an adaptive penalization technique to model, with Hawkes processes, the information propagation on social networks. This penalization is able to take into account the prior knowledge on the social network characteristics, such as the sparse interactions between users or the community structure, to reflect them on the estimated model. Our technique uses data-driven weighted penalties induced by a careful analysis of the generalization error. Next, we focus on convex optimization and recall the recent progresses made with stochastic first order methods using variance reduction techniques. The fourth chapter is dedicated to an adaptation of these techniques to optimize the most commonly used goodness-of-fit of Hawkes processes. Indeed, this goodness-of-fit does not meet the gradient-Lipschitz assumption that is required by the latest first order methods. Thus, we work under another smoothness assumption, and obtain a linear convergence rate for a shifted version of Stochastic Dual Coordinate Ascent that improves the current state-of-the-art. Besides, such objectives include many linear constraints that are easily violated by classic first order algorithms, but in the Fenchel-dual problem these constraints are easier to deal with. Hence, our algorithm's robustness is comparable to second order methods that are very expensive in high dimensions. Finally, the last chapter introduces a new statistical learning library for Python 3 with a particular emphasis on time-dependent models, tools for generalized linear models and survival analysis. Called tick, this library relies on a C++ implementation and state-of-the-art optimization algorithms to provide very fast computations in a single node multi-core setting. Open-sourced and published on Github, this library has been used all along this thesis to perform benchmarks and experiments.
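For context on the exponential-kernel parametrization highlighted in the abstract above, a small numpy sketch of the multivariate Hawkes conditional intensity lambda_i(t) = mu_i + sum_j alpha_ij * beta * sum_{t_k < t} exp(-beta * (t - t_k)); the baselines, adjacency values, and event times below are made up, and the thesis's tick library is not required.

```python
# Conditional intensity of a 2-dimensional Hawkes process with exponential kernels.
import numpy as np

mu = np.array([0.2, 0.1])                 # baseline intensities (hypothetical)
alpha = np.array([[0.5, 0.1],             # alpha[i, j]: influence of series j on series i
                  [0.3, 0.4]])
beta = 1.5                                 # common exponential decay rate

events = [np.array([0.5, 1.2, 3.0]),       # event times of series 0 (made up)
          np.array([0.8, 2.5])]            # event times of series 1 (made up)

def intensity(t):
    lam = mu.copy()
    for j, times in enumerate(events):
        past = times[times < t]
        lam += alpha[:, j] * beta * np.exp(-beta * (t - past)).sum()
    return lam

print("intensity at t=3.5:", intensity(3.5).round(3))
```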
Del Testa, Davide. "Stochastic Optimization and Machine Learning Modeling for Wireless Networking". Doctoral thesis, Università degli studi di Padova, 2017. http://hdl.handle.net/11577/3424825.
The aim of this thesis is to show how the considerable level of uncertainty inherent in modern telecommunication systems can be modeled through different approaches. The first is based on stochastic optimization models and is adopted for the design of transmission policies in particular wireless sensor networks equipped with devices capable of harvesting energy from the environment. The second approach relies on machine learning techniques applied to the estimation of network parameters, the compression of biomedical signals, and the prediction of channel gain in mobile networks.
Wu, Anjian M. B. A. Sloan School of Management. "Performance modeling of human-machine interfaces using machine learning". Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122599.
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019, In conjunction with the Leaders for Global Operations Program at MIT
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 70-71).
As the popularity of online retail expands, world-class electronic commerce (e-commerce) businesses are increasingly adopting collaborative robotics and Internet of Things (IoT) technologies to enhance fulfillment efficiency and operational advantage. E-commerce giants like Alibaba and Amazon are known to have smart warehouses staffed by both machines and human operators. The robotics systems specialize in transporting and maneuvering heavy shelves of goods to and from operators. Operators are left to higher-level cognitive tasks needed to process goods such as identification and complex manipulation of individual objects. Achieving high system throughput in these systems requires harmonized interaction between humans and machines. The robotics systems must minimize time that operators are waiting for new work (idle time) and operators need to minimize time processing items (takt time). Over time, these systems will naturally generate extensive amounts of data. Our research provides insights into both using this data to design a machine learning (ML) model of takt time and exploring methods of interpreting insights from such a model. We start by presenting our iterative approach to developing an ML model that predicts the average takt of a group of operators at hourly intervals. Our final XGBoost model reached an out-of-sample performance of 4.01% mean absolute percent error (MAPE) using over 250,000 hours of historic data across multiple warehouses around the world. Our research will share methods to cross-examine and interpret the relationships learned by the model for business value. This can allow organizations to effectively quantify system trade-offs as well as identify root causes of takt performance deviations. Finally, we will discuss the implications of our empirical findings.
by Anjian Wu.
M.B.A.
S.M.
M.B.A. Massachusetts Institute of Technology, Sloan School of Management
S.M. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
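A minimal sketch of the modeling step the abstract above describes, a gradient-boosted (XGBoost) regressor for hourly takt time evaluated with mean absolute percentage error (MAPE); the features and data are synthetic placeholders, and only the generic xgboost scikit-learn wrapper API is assumed.

```python
# Sketch: fit a gradient-boosted tree model of average operator takt time and
# report out-of-sample MAPE. All data below are synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 6))        # hypothetical hourly features (staffing, item mix, ...)
y = 8 + X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=5000)  # takt time, seconds

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.1)
model.fit(X_tr, y_tr)

mape = np.mean(np.abs((y_te - model.predict(X_te)) / y_te)) * 100
print(f"out-of-sample MAPE: {mape:.2f}%")
```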
Armond, Kenneth C. Jr. "Distributed Support Vector Machine Learning". ScholarWorks@UNO, 2008. http://scholarworks.uno.edu/td/711.
Ouyang, Hua. "Optimal stochastic and distributed algorithms for machine learning". Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/49091.
CESARI, TOMMASO RENATO. "ALGORITHMS, LEARNING, AND OPTIMIZATION". Doctoral thesis, Università degli Studi di Milano, 2020. http://hdl.handle.net/2434/699354.
Bhat, Sooraj. "Syntactic foundations for machine learning". Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/47700.
Alcoverro, Vidal Marcel. "Stochastic optimization and interactive machine learning for human motion analysis". Doctoral thesis, Universitat Politècnica de Catalunya, 2014. http://hdl.handle.net/10803/285337.
The analysis of human motion from visual data is a central topic in computer vision research, on the one hand because it enables a broad range of applications and on the other because it remains an unsolved problem when applied in uncontrolled scenarios. Human motion analysis is used in the entertainment industry for film and video game production, and in medical applications for rehabilitation or biomechanical studies. It is also used in human-computer interaction and in the analysis of large volumes of data from social networks such as Youtube or Flickr, to mention a few examples. This thesis studies techniques for human motion analysis focused on their application in smart room environments, that is, methods that allow the analysis of people's behavior in the room, that enable interaction with devices in a natural way and, in general, methods that embed computers in spaces with human activity in order to enable new services without interfering with that activity. In the first part, a generic framework for the use of hierarchical particle filters (HPF), especially suited to human motion capture tasks, is proposed. Human motion capture generally involves tracking and optimization of very high-dimensional state vectors while also dealing with multi-modal pdfs. HPFs address this problem through multiple passes over subdivisions of the state vector. Building on the HPF framework, a method is proposed for estimating the subject's anthropometry, which in turn yields an accurate model of the subject. Two new methods for human motion capture are also proposed: on the one hand, APO relies on a new strategy for the cost functions based on partitioning the observations; on the other, DD-HPF uses body part detections to improve particle propagation and weight evaluation. Both methods are integrated within the HPF framework. The second part of the thesis focuses on gesture detection, and in particular on reducing the annotation and training effort required to train a detector for a given gesture. To this end, a solution based on online random forests is proposed which allows real-time training as new data arrive sequentially. The main aspect that makes the solution effective is the proposed method for collecting relevant negative samples while the decision trees are being trained: it uses the detector trained so far to gather samples based on the detector's response, so that they are more relevant for training. In this way, training is more effective in terms of the number of annotated samples required.
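Since the abstract above leans on particle filtering, here is a minimal bootstrap particle filter for a one-dimensional random-walk state with Gaussian observations; the hierarchical, high-dimensional body-model variants developed in the thesis (HPF, APO, DD-HPF) are far more involved, so this is only a conceptual sketch with made-up parameters.

```python
# Minimal bootstrap particle filter: predict, weight by the observation likelihood,
# then resample. 1D toy model; purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
T, n_particles = 50, 500
proc_std, obs_std = 0.5, 1.0

# Simulate a hidden random walk and noisy observations of it.
x_true = np.cumsum(rng.normal(0, proc_std, T))
obs = x_true + rng.normal(0, obs_std, T)

particles = rng.normal(0, 1, n_particles)
estimates = []
for z in obs:
    particles += rng.normal(0, proc_std, n_particles)            # predict
    weights = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)    # weight
    weights /= weights.sum()
    idx = rng.choice(n_particles, n_particles, p=weights)        # resample
    particles = particles[idx]
    estimates.append(particles.mean())

print("mean abs tracking error:", np.mean(np.abs(np.array(estimates) - x_true)).round(3))
```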
Shahriari, Bobak. "Practical Bayesian optimization with application to tuning machine learning algorithms". Thesis, University of British Columbia, 2016. http://hdl.handle.net/2429/59104.
Science, Faculty of
Computer Science, Department of
Graduate
Mazzieri, Diego. "Machine Learning for combinatorial optimization: the case of Vehicle Routing". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/24688/.
Zhu, Zhanxing. "Integrating local information for inference and optimization in machine learning". Thesis, University of Edinburgh, 2016. http://hdl.handle.net/1842/20980.
Bergkvist, Markus and Tobias Olandersson. "Machine learning in simulated RoboCup". Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik och datavetenskap, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3827.
Narasimhan, Mukund. "Applications of submodular minimization in machine learning /". Thesis, Connect to this title online; UW restricted, 2007. http://hdl.handle.net/1773/5983.
Casavant, Matt (Matt Stephen). "Predicting competitor restructuring using machine learning methods". Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122595.
Thesis: S.M., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019, In conjunction with the Leaders for Global Operations Program at MIT
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 59-60).
Increasing competition in the defense industry risks contract margin degradation and increases the need for new avenues to margin expansion. One such area of opportunity is take-away bids for under-performing competitor sole-source contracts. Post financial crisis, the government has been more willing to entertain conversation with outside firms about existing contracts in the execution phase if the contracted firm is underperforming on budgetary and schedule terms. The contracted firm has the opportunity to defend its performance though, so in order to maximize the likelihood of a successful take-away, the bid would ideally be submitted when the contracted firm is distracted and cannot put together as strong of a defense as would be typical. Corporate restructuring is an example of such a time; employees are distracted and leadership, communication, and approval chains are disrupted. Because the government contracting process is long and detailed, often taking on the order of one year, if restructuring at competitor firms could be predicted up to a year in advance, resources could be shifted ahead of time to align bid submittal with the public restructuring announcement and therefore increase the likelihood of take-away success. The subject of this thesis is the development of the necessary dataset and the application of various machine learning methods to predict future restructuring. The literature review emphasizes understanding current methods' benefits and shortcomings in relation to forecasting, and the proposed methods seek to fill in the gaps. Depending on the competitor, the resulting models predict future restructuring on blind historical test set data with an accuracy of 80-90%. While blind historical test set data are not necessarily indicative of future data, one of the firms under assessment recently announced a future restructuring in the same quarter that the model predicted.
by Matt Casavant.
M.B.A.
S.M.
M.B.A. Massachusetts Institute of Technology, Sloan School of Management
S.M. Massachusetts Institute of Technology, Department of Mechanical Engineering
Addis, Antonio. "Deep reinforcement learning optimization of video streaming". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019.
Torkamani, MohamadAli. "Robust Large Margin Approaches for Machine Learning in Adversarial Settings". Thesis, University of Oregon, 2016. http://hdl.handle.net/1794/20677.