Theses on the topic "Selection et optimisation d'hyperparamètre"
Create an accurate citation in APA, MLA, Chicago, Harvard and other styles
Consult the 26 best theses for your research on the topic "Selection et optimisation d'hyperparamètre".
Next to each source in the list of references there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse theses on a wide variety of disciplines and organise your bibliography correctly.
Bertrand, Quentin. "Hyperparameter selection for high dimensional sparse learning : application to neuroimaging". Electronic Thesis or Diss., université Paris-Saclay, 2021. http://www.theses.fr/2021UPASG054.
Due to their non-invasiveness and excellent time resolution, magneto- and electroencephalography (M/EEG) have emerged as tools of choice to monitor brain activity. Reconstructing brain signals from M/EEG measurements can be cast as a high-dimensional ill-posed inverse problem. Typical estimators of brain signals involve challenging optimization problems, composed of the sum of a data-fidelity term and a sparsity-promoting term. Because their regularization hyperparameters are notoriously hard to tune, sparsity-based estimators are currently not widely used by practitioners. The goal of this thesis is to provide a simple, fast, and automatic way to calibrate sparse linear models. We first study some properties of coordinate descent: model identification, local linear convergence, and acceleration. Relying on Anderson extrapolation schemes, we propose an effective way to speed up coordinate descent in theory and practice. We then explore a statistical approach to set the regularization parameter of Lasso-type problems. A closed-form formula can be derived for the optimal regularization parameter of L1-penalized linear regressions. Unfortunately, it relies on the true noise level, which is unknown in practice. To remove this dependency, one can resort to estimators whose regularization parameter does not depend on the noise level. However, they require solving challenging "nonsmooth + nonsmooth" optimization problems. We show that partial smoothing preserves their statistical properties, and we propose an application to M/EEG source localization problems. Finally, we investigate hyperparameter optimization, encompassing held-out and cross-validation hyperparameter selection. It requires tackling bilevel optimization with nonsmooth inner problems. Such problems are canonically solved using zeroth-order techniques, such as grid search or random search. We present an efficient technique to solve these challenging bilevel optimization problems using first-order methods.
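As a point of reference for the zeroth-order baselines mentioned above, a minimal sketch of held-out grid-search selection of the Lasso regularization parameter, assuming scikit-learn; the synthetic data, grid and split are illustrative:

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, p = 100, 500                          # high dimensional: p >> n
X = rng.standard_normal((n, p))
beta = np.zeros(p); beta[:5] = 1.0       # sparse ground truth
y = X @ beta + 0.5 * rng.standard_normal(n)

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

# Zeroth-order (grid-search) hyperparameter selection on a held-out set.
alpha_max = np.max(np.abs(X_tr.T @ y_tr)) / len(y_tr)   # smallest alpha giving the null model
grid = alpha_max * np.logspace(0, -3, 30)
val_err = [np.mean((y_val - Lasso(alpha=a).fit(X_tr, y_tr).predict(X_val)) ** 2)
           for a in grid]
best_alpha = grid[int(np.argmin(val_err))]
print(f"selected alpha = {best_alpha:.4f}")
```

Each grid point requires solving the inner Lasso problem from scratch, which is exactly the cost that the first-order bilevel methods of the thesis aim to avoid.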
YACOUB, MEZIANE. "Selection de caracteristiques et optimisation d'architectures dans les systemes d'apprentissage connexionnistes". Paris 13, 1999. http://www.theses.fr/1999PA132014.
PURBA, ABDUL RAZAK. "Optimisation de la selection recurrente reciproque du palmier a huile (elaeis guineensis jacq. ) par l'utilisation conjointe des index de selection et des marqueurs moleculaires". Montpellier, ENSA, 2000. http://www.theses.fr/2000ENSA0018.
Texto completoSarmis, Merdan. "Etude de l'activité neuronale : optimisation du temps de simulation et stabilité des modèles". Thesis, Mulhouse, 2013. http://www.theses.fr/2013MULH3848/document.
Computational neuroscience consists in studying the nervous system through modeling and simulation. The aim is to characterize the laws of biology using mathematical models that integrate all known experimental data. From a practical point of view, the more realistic the model, the larger the required computational resources. The trade-off between complexity and accuracy is a well-known problem in the modeling and identification of models. The research conducted in this thesis aims at improving the simulation of mathematical models representing the physical and chemical behavior of synaptic receptors. Models of synaptic receptors are described by ordinary differential equations (ODEs) and are solved with numerical procedures. In order to optimize the performance of the simulations, I implemented various numerical ODE resolution methods. To facilitate the selection of the best solver, a method requiring a minimum amount of information is proposed; it allows choosing the best solver in order to speed up the simulation. The method demonstrates that the dynamics of a model have a greater influence on solver performance than the kinetic scheme of the model. In addition, to characterize pathogenic behavior, a parameter optimization is performed. However, some parameter values lead to unstable models. A stability study made it possible to determine the stability of the models with parameters provided by the literature, and also to derive the stability constraints as a function of these parameters. Compliance with these constraints ensures the stability of the models studied during the optimization phase, and therefore the success of the procedure used to study pathogenic models.
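A toy illustration of the solver-comparison idea, assuming SciPy's solve_ivp and a generic two-state kinetic scheme rather than the receptor models of the thesis:

```python
import time
import numpy as np
from scipy.integrate import solve_ivp

# Simple two-state binding kinetics: closed <-> open, driven by a transmitter pulse.
def receptor(t, y, kon=5.0, koff=1.0):
    c = 1.0 if t < 1.0 else 0.0          # transmitter concentration pulse
    closed, opened = y
    dclosed = -kon * c * closed + koff * opened
    return [dclosed, -dclosed]

for method in ("RK45", "BDF", "LSODA"):
    t0 = time.perf_counter()
    sol = solve_ivp(receptor, (0.0, 10.0), [1.0, 0.0], method=method, rtol=1e-6)
    print(f"{method:6s}: {sol.nfev:5d} RHS calls, {time.perf_counter() - t0:.4f} s")
```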
Rincent, Renaud. "Optimisation des stratégies de génétique d'association et de sélection génomique pour des populations de diversité variable : Application au maïs". Thesis, Paris, AgroParisTech, 2014. http://www.theses.fr/2014AGPT0018/document.
Major progress has been achieved in genotyping technologies, which makes it easier to decipher the relationship between genotype and phenotype. This has contributed to the understanding of the genetic architecture of traits (genome-wide association studies, GWAS) and to better predictions of genetic value to improve breeding efficiency (genomic selection, GS). The objective of this thesis was to define efficient ways of conducting these approaches. We first derived analytically the power of the classical GWAS mixed model and showed that it is lower for markers with a small minor allele frequency, a strong differentiation among population subgroups, and a strong correlation with the markers used for estimating the kinship matrix K. We therefore considered two alternative estimators of K. Simulations showed that these were as efficient as classical estimators at controlling false positives while providing more power. We confirmed these results on real datasets collected on two maize panels, and could increase by up to 40% the number of detected associations. These panels, genotyped with a 50k SNP array and phenotyped for flowering and biomass traits, were used to characterize the diversity of the Dent and Flint groups and to detect QTLs. In GS, studies have highlighted the importance of the relationship between the calibration set (CS) and the predicted set for the accuracy of predictions. Considering the current low genotyping cost, we proposed a sampling algorithm for the CS based on the G-BLUP model, which resulted in higher accuracies than other sampling strategies for all the traits considered. It could reach the same accuracy as a randomly sampled CS with half the phenotyping effort.
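For context on the kinship matrix K mentioned above, a sketch of one common genomic relationship estimator (VanRaden's G) computed from a 0/1/2 SNP coding; the alternative K estimators proposed in the thesis are not reproduced here:

```python
import numpy as np

def vanraden_kinship(M):
    """M: (n_individuals, n_markers) matrix of 0/1/2 genotype codes."""
    p = M.mean(axis=0) / 2.0                     # allele frequencies
    Z = M - 2.0 * p                              # center each marker by 2p
    return (Z @ Z.T) / (2.0 * np.sum(p * (1.0 - p)))

rng = np.random.default_rng(1)
M = rng.integers(0, 3, size=(50, 1000))          # toy panel: 50 lines, 1000 SNPs
K = vanraden_kinship(M.astype(float))
print(K.shape, round(float(K.diagonal().mean()), 3))
```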
Blanc, Guylaine. "Selection assistee par marqueurs (sam) dans un dispositif multiparental connecte - application au maÏs et approche par simulations". Phd thesis, INAPG (AgroParisTech), 2006. http://pastel.archives-ouvertes.fr/pastel-00003478.
Texto completoHamdi, Faiza. "Optimisation et planification de l'approvisionnement en présence du risque de rupture des fournisseurs". Thesis, Ecole nationale des Mines d'Albi-Carmaux, 2017. http://www.theses.fr/2017EMAC0002/document.
Trade liberalization, the development of means of transport and the economic development of emerging countries, which lead to the globalization of supply chains, are irreversible phenomena. They can reduce costs but, in return, they multiply the risk of disruption from upstream to downstream stages. In this thesis, we focus on the inbound stage of the supply chain. More specifically, we treat the case of a central purchasing body that must select suppliers and allocate orders. Any selected supplier may fail to deliver its orders for internal reasons (poor quality problems) or external reasons (natural disasters, transport problems). Depending on whether the selected suppliers deliver their orders or not, the transaction generates a profit or a loss. The objective of this thesis is to provide decision-support tools for a decision maker facing this problem, taking into account the decision maker's attitude toward risk. We propose stochastic mixed-integer linear programs to model this problem. In the first part, we focus on the development of a visual decision-support tool that allows a decision maker to find a compromise between maximizing the expected profit and minimizing the risk of loss. In the second part, we integrate the risk-estimation techniques VaR and CVaR into this problem. The objective is to help the decision maker minimize the expected cost and the conditional value at risk simultaneously, via the computation of the VaR. Results show that the decision maker must take into account the different disruption scenarios regardless of their probability of occurrence.
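For the risk measures used in the second part, a minimal sketch of empirical VaR and CVaR computed from simulated profit scenarios; the distribution and confidence level are illustrative only:

```python
import numpy as np

def var_cvar(losses, alpha=0.95):
    """Empirical Value-at-Risk and Conditional Value-at-Risk of a loss sample."""
    var = np.quantile(losses, alpha)
    cvar = losses[losses >= var].mean()          # expected loss beyond the VaR
    return var, cvar

rng = np.random.default_rng(0)
profit = rng.normal(100.0, 40.0, size=10_000)    # scenario profits (disruptions => losses)
losses = -profit
var, cvar = var_cvar(losses, alpha=0.95)
print(f"VaR(95%) = {var:.1f}, CVaR(95%) = {cvar:.1f}")
```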
Landru, Didier. "Aides informatisées à la selection des matériaux et des procédés dans la conception des pièces de structure". Grenoble INPG, 2000. http://www.theses.fr/2000INPG0012.
Akkouche, Nourredine. "Optimisation du test de production de circuits analogiques et RF par des techniques de modélisation statistique". Phd thesis, Université de Grenoble, 2011. http://tel.archives-ouvertes.fr/tel-00625469.
Pham, Viet Nga. "Programmation DC et DCA pour l'optimisation non convexe/optimisation globale en variables mixtes entières : Codes et Applications". Phd thesis, INSA de Rouen, 2013. http://tel.archives-ouvertes.fr/tel-00833570.
Texto completoBouchakri, Rima. "Conception physique statique et dynamique des entrepôts de données". Thesis, Chasseneuil-du-Poitou, Ecole nationale supérieure de mécanique et d'aérotechnique, 2015. http://www.theses.fr/2015ESMA0012/document.
Data warehouses (DW) store a huge amount of data in a single location. They are queried by complex decision-support queries called star-join queries. To optimize such queries, several works propose algorithms for selecting optimization techniques, such as binary join indexes and horizontal partitioning, during the DW physical design. However, these works propose static algorithms, select optimization techniques in an isolated way and focus on optimizing a single objective, namely query performance. Our main contribution in this thesis is to propose a new vision of optimization-technique selection. Our first contribution is an incremental selection that continuously updates the optimization scheme implemented on the DW, to ensure the continual optimization of queries. To deal with the increase in query complexity, our second contribution is a joint incremental selection of two optimization techniques, which covers the optimization of a maximum number of queries and respects the optimization constraints. Finally, we note that the incremental selection generates a maintenance cost for updating the optimization schemes. Thus, our third proposition is to formulate and solve a multi-objective selection problem for optimization techniques with two objectives to optimize: query performance and the maintenance cost of the DW.
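As a much simplified illustration of selecting optimization techniques under a constraint, a naive greedy sketch that picks binary join indexes by benefit-per-storage until a budget is exhausted; the candidates, costs and benefits are invented, and the thesis's incremental and multi-objective algorithms are considerably more elaborate:

```python
# Hypothetical candidate join indexes: (name, estimated query-cost saving, storage cost).
candidates = [
    ("idx_sales_time",    120.0, 40.0),
    ("idx_sales_product",  90.0, 25.0),
    ("idx_sales_customer", 60.0, 30.0),
    ("idx_sales_store",    30.0, 10.0),
]
budget = 60.0                                    # storage budget for auxiliary structures

selected, used = [], 0.0
# Greedy pass: best saving per unit of storage first, skip anything that overflows the budget.
for name, saving, storage in sorted(candidates, key=lambda c: c[1] / c[2], reverse=True):
    if used + storage <= budget:
        selected.append(name)
        used += storage
print(selected, used)
```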
Dubois, Amaury. "Optimisation et apprentissage de modèles biologiques : application à lirrigation [sic l'irrigation] de pomme de terre". Thesis, Littoral, 2020. http://www.theses.fr/2020DUNK0560.
The subject of this PhD concerns one of the LISIC themes: modelling and simulation of complex systems, as well as optimization and machine learning for agronomy. The objectives of the thesis are to answer the questions of irrigation management for the potato crop and to develop decision-support tools for farmers. The choice of this crop is motivated by its important share in the Hauts-de-France region. The manuscript is divided into three parts. The first part deals with continuous multimodal optimization in a black-box context, followed by a presentation of a methodology for the automatic calibration of biological model parameters through reformulation as a black-box multimodal optimization problem. The relevance of inverse analysis as a methodology for the automatic parameterisation of large models is then demonstrated. The second part presents two new algorithms, UCB Random with Decreasing Step-size and UCT Random with Decreasing Step-size. These algorithms are designed for continuous multimodal black-box optimization, where the choice of the position of the initial local search is assisted by a reinforcement learning algorithm. The results show that these algorithms perform better than (Quasi) Random with Decreasing Step-size algorithms. Finally, the last part focuses on machine learning principles and methods. A reformulation of the problem of predicting soil water content at one-week intervals into a supervised learning problem has enabled the development of a new decision-support tool to address the problem of crop management.
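As an illustration of the general principle behind the second part (a bandit choosing where to launch local searches with a decreasing step size), a toy sketch with an invented 1-D objective and hand-picked start regions; it is not the UCB/UCT Random with Decreasing Step-size algorithms themselves:

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.sin(3 * x) + 0.1 * x ** 2            # multimodal 1-D objective (minimize)
regions = [(-4, -2), (-2, 0), (0, 2), (2, 4)]          # candidate start regions (bandit arms)
counts, sums = np.zeros(4), np.zeros(4)

def local_search(x, step=0.5, iters=30):
    for _ in range(iters):                             # random search with decreasing step size
        cand = x + step * rng.standard_normal()
        if f(cand) < f(x):
            x = cand
        step *= 0.9
    return x

best = np.inf
for t in range(1, 41):
    ucb = sums / np.maximum(counts, 1) + np.sqrt(2 * np.log(t) / np.maximum(counts, 1))
    a = int(np.argmax(np.where(counts == 0, np.inf, ucb)))   # untried arms first, then UCB
    x0 = rng.uniform(*regions[a])
    reward = -f(local_search(x0))                      # higher reward = lower objective found
    counts[a] += 1; sums[a] += reward
    best = min(best, -reward)
print(f"best objective found: {best:.3f}")
```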
VANDEMOORTELE, JEAN-LUC. "Optimisation de la micropropagation de deux especes legumieres (petroselinum crispum mill. Et brassica oleracea l. Var. Botrytis) en vue de l'obtention de vitroplants conformes pour un programme de selection". Caen, 1997. http://www.theses.fr/1997CAEN2008.
Chopin, Morgan. "Problèmes d'optimisation avec propagation dans les graphes : complexité paramétrée et approximation". Phd thesis, Université Paris Dauphine - Paris IX, 2013. http://tel.archives-ouvertes.fr/tel-00933769.
Texto completoChallita, Nicole. "Contributions à la sélection des attributs de signaux non stationnaires pour la classification". Thesis, Troyes, 2018. http://www.theses.fr/2018TROY0012.
When monitoring the functioning of a system, the number of available measurements and attributes can now be very large. It is, however, desirable to reduce the size of the problem by keeping only the discriminating features, both to learn the monitoring rule and to reduce the processing demand. The problem is therefore to select a subset of attributes that yields the best possible classification performance. This thesis presents different existing methods for feature selection and proposes two new ones. The first, named "EN-ReliefF", is a combination of the sequential ReliefF method and a weighted regression approach, Elastic Net. The second is inspired by neural networks. It is formulated as an optimization problem that simultaneously defines a non-linear regression adapted to the learning data and a parsimonious weighting of the features. The weights are then used to select the relevant features. Both methods are tested on synthetic data and on data from rotating machines. Experimental results show the effectiveness of both methods. Remarkable characteristics are the stability of the selection and the ability to handle linearly correlated attributes for "EN-ReliefF", and the sensitivity and ability to handle non-linear dependencies for the second method.
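A minimal sketch of the Elastic Net half of the "EN-ReliefF" idea (using parsimonious regression weights as feature-relevance scores), assuming scikit-learn; the ReliefF scoring and the exact combination scheme of the thesis are not reproduced:

```python
import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n, p = 200, 40
X = rng.standard_normal((n, p))
y = 2 * X[:, 0] - 3 * X[:, 1] + X[:, 2] + 0.3 * rng.standard_normal(n)   # 3 informative features

Xs = StandardScaler().fit_transform(X)
model = ElasticNet(alpha=0.05, l1_ratio=0.7).fit(Xs, y)

scores = np.abs(model.coef_)                      # parsimonious weights used as relevance scores
ranking = np.argsort(scores)[::-1]
print("top features:", ranking[:5], "weights:", np.round(scores[ranking[:5]], 2))
```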
Apatean, Anca Ioana. "Contributions à la fusion des informations : application à la reconnaissance des obstacles dans les images visible et infrarouge". Phd thesis, INSA de Rouen, 2010. http://tel.archives-ouvertes.fr/tel-00621202.
Monrousseau, Thomas. "Développement du système d'analyse des données recueillies par les capteurs et choix du groupement de capteurs optimal pour le suivi de la cuisson des aliments dans un four". Thesis, Toulouse, INSA, 2016. http://www.theses.fr/2016ISAT0054.
In a world where all personal devices become smart and connected, several French manufacturers launched a project to make ovens able to detect the cooking state of fish and meat without a contact sensor. This thesis takes place in this context and is divided into two major parts. The first is a feature-selection phase aimed at classifying food into three states: underbaked, well baked and overbaked. The point of this selection method, based on fuzzy logic, is to strongly reduce the number of features obtained from laboratory-specific sensors. The second part concerns on-line monitoring of the food cooking state by several methods. These techniques are: a classification algorithm into ten baking states, the use of a discrete version of the heat equation, and the development of a soft sensor based on an artificial neural network model built from cooking experiments to infer the temperature inside the food from available on-line measurements. These algorithms have been implemented on a microcontroller equipping a prototype version of a new oven, in order to be tested and validated on real use cases.
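A sketch of the "discrete heat equation" ingredient: one explicit finite-difference update for 1-D heat conduction, which can be iterated to track the temperature inside the food; the geometry, diffusivity and boundary temperatures are placeholders, not the project's values:

```python
import numpy as np

def heat_step(T, alpha, dx, dt):
    """One explicit Euler step of dT/dt = alpha * d2T/dx2 on a 1-D slab."""
    T_new = T.copy()
    T_new[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    return T_new                                   # boundary nodes kept fixed (oven temperature)

alpha, dx, dt = 1.4e-7, 1e-3, 1.0                  # diffusivity ~ meat, 1 mm grid, 1 s step
assert alpha * dt / dx**2 <= 0.5                   # explicit-scheme stability condition
T = np.full(31, 20.0); T[0] = T[-1] = 180.0        # 3 cm slab at 20 C, surfaces at 180 C
for _ in range(600):                               # 10 minutes of cooking
    T = heat_step(T, alpha, dx, dt)
print(f"core temperature after 10 min: {T[len(T)//2]:.1f} C")
```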
Bérodier, Marie. "Utilisation en ferme des données de génotypage pour une gestion optimisée et durable de l'élevage laitier". Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASA001.
Over the last 10 years, new methods have emerged for farmers to estimate the genetic level of their Montbéliarde cattle. These methods rely on the genotyping of animals, an approach that reads and interprets key parts of their genome. This genomic information can be used during the entire life of the animal in order to find the best mate to produce offspring according to the farmer's expectations. Female genotyping allows for a higher genetic gain, a lower mate co-ancestry and a reduced risk of conceiving an embryo affected by a genetic defect, thanks to more complete and reliable information being available to optimize the matings. Taking farming-system-specific breeding objectives into account when planning the matings further improves these results.
Rommel, Cédric. "Exploration de données pour l'optimisation de trajectoires aériennes". Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLX066/document.
This thesis deals with the use of flight data for the optimization of climb trajectories with respect to fuel consumption. We first focus on methods for identifying the aircraft dynamics, in order to plug them into the trajectory optimization problem. We suggest a static formulation of the identification problem, which we interpret as a structured multi-task regression problem. In this framework, we propose parametric models and use different maximum likelihood approaches to learn the unknown parameters. Furthermore, polynomial models are considered, and an extension of the bootstrap Lasso to the structured multi-task setting is used to make a consistent selection of the monomials despite the high correlations among them. Next, we consider the problem of assessing the optimized trajectories relative to the validity region of the identified models. For this, we propose a probabilistic criterion for quantifying the closeness between an arbitrary curve and a set of trajectories sampled from the same stochastic process. We propose a class of estimators of this quantity and prove their consistency in some sense. A nonparametric implementation based on kernel density estimators, as well as a parametric implementation based on Gaussian mixtures, are presented. We introduce the latter as a penalty term in the trajectory optimization problem, which allows us to control the trade-off between trajectory acceptability and consumption reduction.
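A sketch of the nonparametric variant of the closeness criterion described above: score a candidate trajectory by the average log-density of its points under a kernel density estimate fitted on reference trajectories; it assumes scikit-learn's KernelDensity, and the features, bandwidth and synthetic profiles are illustrative:

```python
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)

# Reference climb profiles (altitude vs time) sampled from the same stochastic process.
reference = np.stack([t + 0.05 * rng.standard_normal(t.size) for _ in range(200)])
points = np.column_stack([np.tile(t, 200), reference.ravel()])     # (time, altitude) pairs

kde = KernelDensity(bandwidth=0.05).fit(points)

def acceptability(trajectory):
    """Average log-density of the candidate's points under the reference KDE."""
    return kde.score_samples(np.column_stack([t, trajectory])).mean()

print("typical  :", round(acceptability(t + 0.02), 2))
print("atypical :", round(acceptability(1.5 * t ** 2), 2))
```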
Akgul, Yeter. "Gestion de la consommation basée sur l’adaptation dynamique de la tension, fréquence et body bias sur les systèmes sur puce en technologie FD-SOI". Thesis, Montpellier 2, 2014. http://www.theses.fr/2014MON20132/document.
Beyond the 28nm CMOS bulk technology node, limits have been reached in terms of performance improvement, mainly because of increasing power consumption. This is one of the reasons why new technologies have been developed, including those based on Silicon-On-Insulator (SOI). Moreover, the widespread adoption of complex architectures such as multi-core architectures emphasizes the problem of fine-grained power management. FD-SOI technologies offer new power-management opportunities by adjusting, in addition to the usual parameters such as supply voltage and clock frequency, the body-bias voltage. In this context, this work explores new opportunities and searches for novel solutions to dynamically manage supply voltage, clock frequency and body-bias voltage in order to optimize the power consumption of Systems on Chip. Adjusting the supply voltage, frequency and body-bias parameters yields multiple operating points, which must satisfy the functionality and performance constraints. This work first focuses on design time, proposing a method to optimize the placement of these operating points. An analytical solution maximizing the power savings achieved through the use of several operating points is provided. The second important contribution of this work is a convexity-based method to dynamically manage the supply voltage, the frequency and the body-bias voltage so as to optimize energy efficiency. Experimental results based on real circuits show average power savings reaching 35%.
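A sketch of the convexity idea behind the run-time management: among candidate (frequency, power) operating points, only those on the lower convex hull are worth keeping, since any point above the hull can be outperformed by time-interleaving two hull points; the frequency and power values below are invented:

```python
import numpy as np

# Hypothetical operating points: (frequency in MHz, power in mW) for various (Vdd, Vbb) settings.
points = np.array([
    [200,  40], [400,  75], [400, 110], [600, 130],
    [600, 180], [800, 210], [800, 260], [1000, 420],
], dtype=float)

def lower_convex_hull(pts):
    """Monotone-chain lower hull: the energy-efficient frontier of the operating points."""
    pts = pts[np.lexsort((pts[:, 1], pts[:, 0]))]
    hull = []
    for p in pts:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) <= 0:   # right turn or collinear
                hull.pop()
            else:
                break
        hull.append(tuple(p))
    return np.array(hull)

print(lower_convex_hull(points))
```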
Ben, Hassine Soumaya. "Évaluation et requêtage de données multisources : une approche guidée par la préférence et la qualité des données : application aux campagnes marketing B2B dans les bases de données de prospection". Thesis, Lyon 2, 2014. http://www.theses.fr/2014LYO22012/document.
In business-to-business (B2B) marketing campaigns, achieving "the highest volume of sales at the lowest cost" and the best return on investment (ROI) is a significant challenge. ROI performance depends on a set of subjective and objective factors such as dialogue strategy, invested budget, marketing technology and organisation, and above all data and, particularly, data quality. However, data issues in marketing databases are overwhelming, leading to insufficient knowledge of the targets, which handicaps B2B salespersons when interacting with prospects. B2B prospection data is indeed mainly structured as a set of independent, heterogeneous, separate and sometimes overlapping files that form a messy multisource prospect-selection environment. Data quality thus appears as a crucial issue when dealing with prospection databases. Moreover, beyond data quality, the ROI metric mainly depends on campaign costs. Given the vagueness of (direct and indirect) cost definitions, we limit our focus to price considerations. Price and quality thus define the fundamental constraints data marketers consider when designing a marketing campaign file, as they typically look for the "best-qualified selection at the lowest price". However, this goal is not always reachable and compromises often have to be made. The compromise must first be modelled and formalized, and then deployed for multisource selection issues. In this thesis, we propose a preference-driven selection approach for multisource environments that aims at: 1) modelling and quantifying decision makers' preferences, and 2) defining and optimizing a selection routine based on these preferences. Concretely, we first deal with the data marketer's quality-preference modelling by appraising multisource data using robust evaluation criteria (quality dimensions) that are rigorously summarized into a global quality score. Based on this global quality score and on data price, we then exploit a preference-based selection algorithm to return "the best-qualified records bearing the lowest possible price". An optimisation algorithm, BrokerACO, is finally run to generate the best selection result.
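A minimal sketch of the two-step idea (aggregate quality dimensions into a global quality score, then rank sources by a quality/price preference); the sources, weights and trade-off parameter are invented, and the actual BrokerACO optimisation is not reproduced:

```python
import numpy as np

# Quality dimensions per source: completeness, freshness, accuracy (all in [0, 1]) and unit price.
sources = {
    "file_A": {"quality": [0.9, 0.6, 0.8], "price": 0.12},
    "file_B": {"quality": [0.7, 0.9, 0.7], "price": 0.08},
    "file_C": {"quality": [0.5, 0.8, 0.9], "price": 0.05},
}
weights = np.array([0.5, 0.2, 0.3])          # decision maker's preference over quality dimensions
trade_off = 0.7                              # 1.0 = quality only, 0.0 = price only

def utility(src):
    q = float(weights @ np.array(src["quality"]))           # global quality score
    cheapness = 1.0 - src["price"] / 0.12                   # normalised against the dearest source
    return trade_off * q + (1 - trade_off) * cheapness

ranked = sorted(sources, key=lambda s: utility(sources[s]), reverse=True)
print(ranked)
```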
Harsan, Horea. "Analyse cyclique de sécurité : approche temps réel et intégration dans la gestion prévisionnelle". Grenoble INPG, 1996. http://www.theses.fr/1996INPG0166.
Gigan, Daniel. "Modélisation des comportements d'un pilote expert en situation de collision en vol vers une nouvelle technologie "voir et éviter" pour les drones : Pour un fonctionnalisme holistique à vocation intégrative". Thesis, Toulouse, ISAE, 2013. http://www.theses.fr/2013ESAE0022.
Texto completoThe aim of this doctoral thesis is the modeling of expert pilot behaviors in flight collisions. This modeling gives the first echnologic steps to elaborate a new "sense and avoid" system allowing the future integration of Unmanned Air Vehicles in eneral air traffic. The proposed modeling is the result of global and holistic way and describes the cognitive process and he architecture of systems allowing the expression of these cognitive processes. This model allows solving the collision problem thanks to an observable and adapted piloted behavior. Besides a generic modeling of cognitive process of ategorization has been built and based on non linear regression theory and numeric methods for the resolution of ptimization problems.hanks to this global modeling, this new "sense and avoid" system is made of a simple passive optic sensor and it emulates he detection process, the recognition process and the and the actions selection process allowing the resolution of collision problem by a adapted piloted behavior. Thanks to the generic categorization modeling, the main technologic result is to be ble to determinate the Time To Collision (ITC) with a passive sensor. The determination of the TTC is essential for the 'sense and avoid" systems to get the level safety certification required to integrate drones in general air traffic
Huang, Changwu. "Kriging-assisted evolution strategy for optimization and application in material parameters identification". Thesis, Normandie, 2017. http://www.theses.fr/2017NORMIR05.
In order to reduce the cost of solving expensive optimization problems, this thesis is devoted to the Kriging-Assisted Covariance Matrix Adaptation Evolution Strategy (KA-CMA-ES). Several KA-CMA-ES algorithms were developed and a comprehensive investigation of KA-CMA-ES was performed. The developed KA-CMA-ES algorithms were then applied to the material parameter identification of an elastic-plastic damage constitutive model. The results of the experimental studies demonstrate that the developed KA-CMA-ES algorithms are generally more efficient than the standard CMA-ES, and that the KA-CMA-ES using ARP-EI has the best performance among all the KA-CMA-ES algorithms investigated in this work. The results of the engineering application of the ARP-EI algorithm to material parameter identification show that the presented elastic-plastic damage model is adequate to describe plastic and ductile damage behavior, and also prove that the proposed KA-CMA-ES algorithm clearly improves the efficiency of the standard CMA-ES. Therefore, KA-CMA-ES is more powerful and efficient than CMA-ES for expensive optimization problems.
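A minimal sketch of the generic surrogate-assisted loop (a Kriging/Gaussian-process model pre-screens CMA-ES candidates so only the most promising ones reach the expensive objective), assuming the `cma` package's ask/tell interface and scikit-learn's GaussianProcessRegressor; the objective, budget and screening rule are illustrative, and this is not the thesis's ARP-EI variant:

```python
import numpy as np
import cma
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expensive(x):                                   # stand-in for a costly FE simulation
    return float(np.sum(x ** 2) + 10 * np.sum(1 - np.cos(x)))

archive_X, archive_y = [], []
es = cma.CMAEvolutionStrategy(5 * np.ones(4), 2.0, {"popsize": 12, "verbose": -9})

while not es.stop() and len(archive_y) < 150:
    cand = np.array(es.ask())
    if len(archive_y) >= 20:                        # enough data: use the Kriging surrogate
        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
        gp.fit(np.array(archive_X), np.array(archive_y))
        mu, sigma = gp.predict(cand, return_std=True)
        keep = np.argsort(mu - sigma)[:4]           # optimistic pre-screening (lower bound)
    else:
        keep = np.arange(len(cand))
    true_vals = {i: expensive(cand[i]) for i in keep}
    archive_X += [cand[i] for i in keep]; archive_y += [true_vals[i] for i in keep]
    # Candidates that were not truly evaluated get the surrogate prediction instead.
    fitness = [true_vals[i] if i in true_vals else float(mu[i]) for i in range(len(cand))]
    es.tell(list(cand), fitness)

print("best evaluated objective:", min(archive_y))
```

Only a fraction of each population reaches the expensive objective; the rest is judged by the surrogate, which is where the computational saving comes from.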
Vaiter, Samuel. "Régularisations de Faible Complexité pour les Problèmes Inverses". Phd thesis, Université Paris Dauphine - Paris IX, 2014. http://tel.archives-ouvertes.fr/tel-01026398.
Belbekkouche, Abdeltouab. "Routage adaptatif et qualité de service dans les réseaux optiques à commutation de rafales". Thèse, 2010. http://hdl.handle.net/1866/4776.
Texto completoOptical Burst Switching (OBS) networks are candidates to play an important role in the context of next generation optical networks. In this thesis, we are interested in adaptive routing and quality of service provisioning for these networks. In the first part of the thesis, we study the capability of multi-path routing and alternative routing (deflection routing) to improve the performance of the OBS network proactively for the former and reactively for the latter. In this context, we propose a reinforcement learning-based approach where learning agents, placed in each OBS node, cooperate to learn, continuously, optimal routing paths and alternative paths according to the current state of the network. Numerical results show that the proposed approach improves the performance of the OBS network compared to existing solutions in the literature. In the second part of the thesis, we consider the problem of absolute quality of service provisioning for OBS networks where worst-case performance of high priority traffic is guaranteed quantitatively. Particularly, we are interested in the loss-free transmission, inside the OBS network, of high priority bursts, while preserving statistical multiplexing gain and high resources utilization of the OBS network. Also, we aim to improve the performance of best effort traffic. Hence, we propose two approaches: (a) the node-based approach; and (b) the path-based approach. In the node-based approach, we propose to assign a set of wavelengths to each OBS edge node that it can use to send its guaranteed traffic. This assignment takes into consideration physical distances between edge nodes. Furthermore, we propose a wavelength selection algorithm to improve the performance of best effort bursts. In the path-based approach, absolute quality of service provisioning is offered at end-to-end path level. To do this, we propose a routing and wavelength assignment approach which aims to reduce the number of wavelengths required to establish contention free paths. Nevertheless, if this objective cannot be reached because of the limited number of wavelengths in each fiber link, we propose an approach to synchronize overlapping paths without the need for additional equipments for synchronization. Here again, we propose a wavelength selection algorithm for best effort bursts. Numerical results show that both the node-based and the path-based approaches successfully provide absolute quality of service provisioning for guaranteed traffic and improve the performance of best effort traffic. Also, path-based approach could accommodate more guaranteed traffic and improve the performance of best effort traffic compared to node-based approach when the number of wavelengths is sufficient.