Dissertations on the topic "Optimisation de boîte noire"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Browse the top 45 dissertations for research on the topic "Optimisation de boîte noire".
Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read an online annotation of the work, provided the relevant parameters are available in its metadata.
Browse dissertations from a wide variety of disciplines and compile your bibliography correctly.
Ros, Raymond. „Optimisation Continue Boîte Noire : Comparaison et Conception d'Algorithmes“. Phd thesis, Université Paris Sud - Paris XI, 2009. http://tel.archives-ouvertes.fr/tel-00595922.
Irfan, Muhammad Naeem. „Analyse et optimisation d'algorithmes pour l'inférence de modèles de composants logiciels“. Phd thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00767894.
Varelas, Konstantinos. „Randomized Derivative Free Optimization via CMA-ES and Sparse Techniques : Applications to Radars“. Thesis, Institut polytechnique de Paris, 2021. http://www.theses.fr/2021IPPAX012.
In this thesis, we investigate aspects of adaptive randomized methods for black-box continuous optimization. The algorithms that we study are based on the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) and focus on large-scale optimization problems. We start with a description of CMA-ES and its relation to the Information Geometric Optimization (IGO) framework, followed by a comparative study of large-scale variants of CMA-ES. We furthermore propose novel methods which integrate tools of high-dimensional analysis within CMA-ES, to obtain more efficient algorithms for large-scale partially separable problems. Additionally, we describe the methodology for algorithm performance evaluation adopted by the Comparing Continuous Optimizers (COCO) platform, and finalize the bbob-largescale test suite, a novel benchmarking suite with problems of increased dimension and low computational cost. Finally, we present the formulation, methodology and results obtained for two applications related to radar problems, the phase code optimization problem and the phased-array pattern design problem.
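As background for readers new to the area: the randomized black-box loop that CMA-ES refines can be illustrated with a much simpler (1+1) evolution strategy using the classic 1/5th success rule. The sketch below is a generic illustration only, not the algorithms studied in the thesis (it performs no covariance matrix adaptation); the objective `sphere` and all constants are assumptions chosen for the example.

```python
import numpy as np

def sphere(x):
    # Illustrative black-box objective: f(x) = sum(x_i^2).
    return float(np.dot(x, x))

def one_plus_one_es(f, x0, sigma=0.5, iters=2000, seed=0):
    """(1+1)-ES with the classic 1/5th success rule for step-size adaptation."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(iters):
        cand = x + sigma * rng.standard_normal(x.size)
        fc = f(cand)
        if fc < fx:                     # success: accept and enlarge the step
            x, fx = cand, fc
            sigma *= np.exp(0.2)
        else:                           # failure: shrink the step
            sigma *= np.exp(-0.2 / 4.0)
    return x, fx

if __name__ == "__main__":
    x_best, f_best = one_plus_one_es(sphere, np.ones(10))
    print(f_best)
```

CMA-ES replaces the isotropic sampling and this scalar step-size rule with a fully adapted covariance matrix and cumulative step-size control.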
Nguyen, Xuan-Nam. „Une approche « boite noire » pour résoudre le problème de placement des règles dans un réseau OpenFlow“. Thesis, Nice, 2016. http://www.theses.fr/2016NICE4012/document.
The massive number of connected devices, combined with increasing traffic, pushes network operators to their limits and reduces their profitability. To tackle this problem, Software-Defined Networking (SDN), which decouples the network control logic from the forwarding devices, has been proposed. An important part of the SDN concepts is implemented by the OpenFlow protocol, which abstracts network communications as flows and processes them using a prioritized list of rules on the network forwarding elements. While the abstraction offered by OpenFlow allows many applications to be implemented, it raises the new problem of how to define the rules and where to place them in the network while respecting all requirements, which we refer to as the OpenFlow Rules Placement Problem (ORPP). In this thesis, we focus on the ORPP and hide the complexity of network management by proposing a black-box abstraction. First, we formalize that problem, and classify and discuss existing solutions. We find that most solutions enforce the routing policy when placing rules, which is not memory efficient in some cases. Second, by trading routing for better resource efficiency, we propose OFFICER and aOFFICER, two frameworks that select OpenFlow rules satisfying policies and network constraints while minimizing overheads. The main idea of OFFICER and aOFFICER is to give high priority to large flows, installed on efficient paths, and let other flows follow default paths. These proposals are evaluated and compared to existing solutions in realistic scenarios. Finally, we study a use case of the black-box abstraction, in which we improve the performance of content delivery services in cellular networks.
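The "large flows on efficient paths, remaining flows on a default path" idea summarised above can be pictured with a toy greedy allocator. Everything below (the `Switch` class, capacities, the flow list) is invented for illustration and does not reproduce the OFFICER/aOFFICER algorithms.

```python
from dataclasses import dataclass, field

@dataclass
class Switch:
    name: str
    capacity: int                      # rule-table size of the switch
    rules: list = field(default_factory=list)

def place_rules(flows, efficient_path):
    """Toy placement: largest flows first; a flow gets rules on the efficient
    path only if every switch on that path still has free table space,
    otherwise it is left to the (rule-free) default path."""
    placement = {}
    for flow_id, volume in sorted(flows, key=lambda f: f[1], reverse=True):
        if all(len(sw.rules) < sw.capacity for sw in efficient_path):
            for sw in efficient_path:
                sw.rules.append(flow_id)
            placement[flow_id] = "efficient"
        else:
            placement[flow_id] = "default"
    return placement

if __name__ == "__main__":
    s1, s2 = Switch("s1", capacity=2), Switch("s2", capacity=2)
    flows = [("f1", 50), ("f2", 300), ("f3", 120)]   # (id, traffic volume)
    print(place_rules(flows, efficient_path=[s1, s2]))
```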
Berthier, Vincent. „Studies on Stochastic Optimisation and applications to the Real-World“. Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS336/document.
A lot of research is being done on stochastic optimisation in general and genetic algorithms in particular. Most new developments are then tested on well-known testbeds like BBOB, CEC, etc., conceived to exhibit as many pitfalls as possible, such as non-separability, multi-modality, valleys with an almost null gradient and so on. Most studies done on such testbeds are pretty straightforward, optimising a given number of variables for the recognized criterion of the testbed. The first contribution made here is to study the impact of some changes in those assumptions, namely the effect of supernumerary variables that do not change anything in a function evaluation on the one hand, and the effect of a change of the studied criterion on the other hand. A second contribution is the modification of the mutation design for the algorithm CMA-ES, where we use quasi-random mutations instead of purely random ones. This almost always results in a very clear improvement of the observed results. This research also introduces the Sieves method, well known in statistics, to stochastic optimisers: by first optimising a small subset of the variables and gradually increasing the number of variables during the optimisation process, we observe on some problems a very clear improvement. While artificial testbeds are of course really useful, they can only be a first step: in almost every case, the testbeds are a collection of purely mathematical functions, from the simplest ones like the sphere to some really complex functions. The goal of designing new optimisers or improving an existing one is however, in fine, to answer some real-world question. It can be the design of a more efficient engine, finding the correct parameters of a physical model, or organising data in clusters. Stochastic optimisers are used on those problems, in research or industry, but in most instances an optimiser is chosen almost arbitrarily. We know how optimisers compare on artificial functions, but almost nothing is known about their performance on real-world problems. One of the main aspects of the research exposed here is to compare some of the most used optimisers in the literature on problems inspired by or coming directly from the real world. On those problems, we additionally test the efficiency of quasi-random mutations in CMA-ES and the Sieves method.
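The Sieves idea mentioned in the abstract, optimising a small subset of variables first and releasing more variables as the search proceeds, can be sketched independently of CMA-ES with a plain random-search inner loop. The schedule, objective and step sizes below are illustrative assumptions, not the thesis's experimental setup.

```python
import numpy as np

def rosenbrock(x):
    return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2))

def random_search(f, x, active, iters, step, rng):
    """Random search that perturbs only the 'active' leading coordinates."""
    best, fbest = x.copy(), f(x)
    for _ in range(iters):
        cand = best.copy()
        cand[:active] += step * rng.standard_normal(active)
        fc = f(cand)
        if fc < fbest:
            best, fbest = cand, fc
    return best, fbest

def sieves_optimize(f, dim=20, schedule=(2, 5, 10, 20), iters_per_stage=2000, seed=0):
    """Sieves-style optimisation: start with few free variables, then add more."""
    rng = np.random.default_rng(seed)
    x = np.zeros(dim)
    for active in schedule:
        x, fx = random_search(f, x, active, iters_per_stage, step=0.1, rng=rng)
    return x, fx

if __name__ == "__main__":
    x, fx = sieves_optimize(rosenbrock)
    print(fx)
```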
Jankovic, Anja. „Towards Online Landscape-Aware Algorithm Selection in Numerical Black-Box Optimization“. Electronic Thesis or Diss., Sorbonne université, 2021. http://www.theses.fr/2021SORUS302.
Black-box optimization algorithms (BBOAs) are conceived for settings in which exact problem formulations are non-existent, inaccessible, or too complex for an analytical solution. BBOAs are essentially the only means of finding a good solution to such problems. Due to their general applicability, BBOAs can exhibit different behaviors when optimizing different types of problems. This yields a meta-optimization problem of choosing the best suited algorithm for a particular problem, called the algorithm selection (AS) problem. By reason of inherent human bias and limited expert knowledge, the vision of automating the selection process has quickly gained traction in the community. One prominent way of doing so is via so-called landscape-aware AS, where the choice of the algorithm is based on predicting its performance by means of numerical problem instance representations called features. A key challenge that landscape-aware AS faces is the computational overhead of extracting the features, a step typically designed to precede the actual optimization. In this thesis, we propose a novel trajectory-based landscape-aware AS approach which incorporates the feature extraction step within the optimization process. We show that the features computed using the search trajectory samples lead to robust and reliable predictions of algorithm performance, and to powerful algorithm selection models built atop. We also present several preparatory analyses, including a novel perspective of combining two complementary regression strategies that outperforms any of the classical, single regression models, to amplify the quality of the final selector.
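A minimal sketch of the trajectory-based idea: derive cheap features from the (x, f(x)) samples already produced by the running optimizer, then predict which algorithm to use from an archive of previously seen problems. The three features and the 1-nearest-neighbour selector below are placeholders, not the ELA-style feature sets or regression models used in the thesis.

```python
import numpy as np

def trajectory_features(X, y):
    """Cheap landscape features from search-trajectory samples.

    X : (n, d) evaluated points, y : (n,) objective values.
    The three features below are illustrative placeholders.
    """
    y = np.asarray(y, dtype=float)
    spread = float(np.std(y))
    skewness = float(np.mean(((y - y.mean()) / (y.std() + 1e-12)) ** 3))
    # Slope of a linear fit of f against the distance to the best point seen.
    d_to_best = np.linalg.norm(X - X[np.argmin(y)], axis=1)
    slope = float(np.polyfit(d_to_best, y, deg=1)[0])
    return np.array([spread, skewness, slope])

def select_algorithm(features, archive_features, archive_perf, algos):
    """1-nearest-neighbour selector: pick the algorithm that performed best
    on the most similar previously seen problem (lower perf = better)."""
    dists = np.linalg.norm(archive_features - features, axis=1)
    nearest = int(np.argmin(dists))
    return algos[int(np.argmin(archive_perf[nearest]))]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 5))
    y = np.sum(X ** 2, axis=1)
    feats = trajectory_features(X, y)
    # Hypothetical archive: 3 past problems x 2 algorithms.
    archive_feats = rng.normal(size=(3, 3))
    archive_perf = np.array([[0.2, 0.5], [0.9, 0.1], [0.4, 0.3]])
    print(select_algorithm(feats, archive_feats, archive_perf, ["CMA-ES", "DE"]))
```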
Samaké, Oumar. „Analyse thermo-économique d'un système de dessalement par thermocompression de vapeur et conception de l'éjecteur“. Thèse, Université de Sherbrooke, 2016. http://hdl.handle.net/11143/8782.
Torossian, Léonard. „Méthodes d'apprentissage statistique pour la régression et l'optimisation globale de mesures de risque“. Thesis, Toulouse 3, 2019. http://www.theses.fr/2019TOU30192.
This thesis presents methods for the estimation and optimization of stochastic black-box functions. Motivated by the necessity to take risk-averse decisions in medicine, agriculture or finance, in this study we focus our interest on indicators able to quantify some characteristics of the output distribution, such as the variance or the size of the tails. These indicators, also known as measures of risk, have received a lot of attention during the last decades. Based on the existing literature on risk measures, we chose to focus this work on quantiles, CVaR and expectiles. First, we compare the following approaches to perform quantile regression on stochastic black-box functions: K-nearest neighbors, random forests, RKHS regression, neural network regression and Gaussian process regression. Then a new regression model is proposed, based on chained Gaussian processes inferred by variational techniques. Though our approach was initially designed for quantile regression, we show that it can easily be applied to expectile regression. This study then focuses on the optimisation of risk measures. We propose a generic approach inspired by the X-armed bandit framework which enables the creation of an optimiser and an upper bound on the simple regret that can be adapted to any risk measure. The importance and relevance of this approach is illustrated by the optimization of quantiles and CVaR. Finally, optimisation algorithms for the conditional quantile and expectile are developed based on Gaussian processes combined with UCB and Thompson sampling strategies.
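For readers unfamiliar with quantile regression: most of the regression methods compared above minimise the pinball (check) loss for a target level tau. A minimal linear quantile regression fitted by subgradient descent on synthetic heteroscedastic data, purely for illustration, could look like this:

```python
import numpy as np

def pinball_loss(residual, tau):
    """Check loss: tau * r if r >= 0, else (tau - 1) * r."""
    return np.where(residual >= 0, tau * residual, (tau - 1) * residual)

def fit_linear_quantile(X, y, tau=0.9, lr=0.05, epochs=2000):
    """Linear model x -> w @ x + b fitted to the tau-quantile by subgradient descent."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        r = y - (X @ w + b)
        # Subgradient of the mean pinball loss w.r.t. the predictions.
        g = np.where(r >= 0, -tau, 1 - tau) / n
        w -= lr * (X.T @ g)
        b -= lr * g.sum()
    return w, b

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.uniform(0, 1, size=(500, 1))
    y = 2 * X[:, 0] + rng.normal(scale=0.5 + X[:, 0], size=500)  # heteroscedastic noise
    w, b = fit_linear_quantile(X, y, tau=0.9)
    print("approx. 0.9-quantile line:", w, b)
    print("mean pinball loss:", pinball_loss(y - (X @ w + b), 0.9).mean())
```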
Bittar, Thomas. „Stochastic optimization of maintenance scheduling : blackbox methods, decomposition approaches - Theoretical and numerical aspects“. Thesis, Marne-la-vallée, ENPC, 2021. http://www.theses.fr/2021ENPC2004.
The aim of the thesis is to develop algorithms for optimal maintenance scheduling. We focus on the specific case of large systems that consist of several components linked by a common stock of spare parts. The numerical experiments are carried out on systems of components from a single hydroelectric power plant. The first part is devoted to blackbox methods, which are commonly used in maintenance scheduling. We focus on a kriging-based algorithm, Efficient Global Optimization (EGO), and on a direct search method, Mesh Adaptive Direct Search (MADS). We present a theoretical and practical review of the algorithms as well as some improvements for the implementation of EGO. MADS and EGO are compared on an academic benchmark and on small industrial maintenance problems, showing the superiority of MADS but also the limitations of the blackbox approach when tackling large-scale problems. In a second part, we want to take into account the fact that the system is composed of several components linked by a common stock in order to address large-scale maintenance optimization problems. For that purpose, we develop a model of the dynamics of the studied system and formulate an explicit stochastic optimal control problem. We set up a scheme of decomposition by prediction, based on the Auxiliary Problem Principle (APP), that turns the resolution of the large-scale problem into the iterative resolution of a sequence of subproblems of smaller size. The decomposition is first applied to synthetic test cases, where it proves to be very efficient. For the industrial case, a "relaxation" of the system is needed and developed to apply the decomposition methodology. In the numerical experiments, we solve a Sample Average Approximation (SAA) of the problem and show that the decomposition leads to substantial gains over the reference algorithm. As we use an SAA method, we have considered the APP in a deterministic setting. In the third part, we study the APP in the stochastic approximation framework in a Banach space. We prove the measurability of the iterates of the algorithm, extend convergence results from Hilbert spaces to Banach spaces and give efficiency estimates.
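The Sample Average Approximation mentioned at the end of the abstract simply replaces the expectation in the objective by an empirical mean over a fixed set of sampled scenarios before optimising. The toy below uses a generic newsvendor-style cost, not the thesis's maintenance model; all costs and distributions are made up.

```python
import numpy as np

def saa_objective(q, demand_samples, c_over=1.0, c_under=4.0):
    """Empirical (sample-average) cost of decision q over fixed demand scenarios."""
    over = np.maximum(q - demand_samples, 0.0)
    under = np.maximum(demand_samples - q, 0.0)
    return float(np.mean(c_over * over + c_under * under))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    demand = rng.lognormal(mean=3.0, sigma=0.4, size=5000)   # fixed scenario set
    grid = np.linspace(demand.min(), demand.max(), 400)
    costs = [saa_objective(q, demand) for q in grid]
    q_star = grid[int(np.argmin(costs))]
    # For this cost the optimum is the c_under/(c_over+c_under) = 0.8 quantile of
    # demand, so the SAA solution should be close to the empirical 0.8-quantile.
    print(q_star, np.quantile(demand, 0.8))
```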
Loshchilov, Ilya. „Surrogate-Assisted Evolutionary Algorithms“. Phd thesis, Université Paris Sud - Paris XI, 2013. http://tel.archives-ouvertes.fr/tel-00823882.
Dubois, Amaury. „Optimisation et apprentissage de modèles biologiques : application à lirrigation [sic l'irrigation] de pomme de terre“. Thesis, Littoral, 2020. http://www.theses.fr/2020DUNK0560.
The subject of this PhD concerns one of the LISIC themes: modelling and simulation of complex systems, as well as optimization and automatic learning for agronomy. The objectives of the thesis are to answer the questions of irrigation management of the potato crop and the development of decision support tools for farmers. The choice of this crop is motivated by its important share in the Hauts-de-France region. The manuscript is divided into three parts. The first part deals with continuous multimodal optimization in a black-box context. This is followed by a presentation of a methodology for the automatic calibration of biological model parameters through reformulation into a black-box multimodal optimization problem. The relevance of the use of inverse analysis as a methodology for automatic parameterisation of large models is then demonstrated. The second part presents two new algorithms, UCB Random with Decreasing Step-size and UCT Random with Decreasing Step-size. These algorithms are designed for continuous multimodal black-box optimization, where the choice of the position of the initial local search is assisted by a reinforcement learning algorithm. The results show that these algorithms have better performance than (Quasi) Random with Decreasing Step-size algorithms. Finally, the last part focuses on machine learning principles and methods. A reformulation of the problem of predicting soil water content at one-week intervals into a supervised learning problem has enabled the development of a new decision support tool to respond to the problem of crop management.
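The "UCB Random with Decreasing Step-size" idea can be pictured as a bandit choosing in which region to restart a local search whose step size decays. The sketch below uses plain UCB1 over a fixed partition of the domain; the partition, reward definition and schedules are illustrative assumptions, not the algorithms defined in the thesis.

```python
import numpy as np

def local_search(f, x0, iters=200, step0=0.5, rng=None):
    """Random local search with geometrically decreasing step size."""
    rng = rng if rng is not None else np.random.default_rng()
    x, fx = np.asarray(x0, dtype=float), f(x0)
    step = step0
    for _ in range(iters):
        cand = x + step * rng.standard_normal(x.size)
        fc = f(cand)
        if fc < fx:
            x, fx = cand, fc
        step *= 0.99
    return fx

def ucb_restarts(f, cells, budget=30, seed=0):
    """UCB1 over domain cells: each pull starts one local search in that cell.
    Reward = -best value found (rewards are not rescaled to [0, 1] here)."""
    rng = np.random.default_rng(seed)
    n = np.zeros(len(cells))           # pull counts
    s = np.zeros(len(cells))           # cumulative rewards
    best = np.inf
    for t in range(1, budget + 1):
        if t <= len(cells):
            i = t - 1                  # play each cell once first
        else:
            ucb = s / n + np.sqrt(2.0 * np.log(t) / n)
            i = int(np.argmax(ucb))
        lo, hi = cells[i]
        x0 = rng.uniform(lo, hi, size=2)
        val = local_search(f, x0, rng=rng)
        best = min(best, val)
        n[i] += 1
        s[i] += -val
    return best

if __name__ == "__main__":
    rastrigin = lambda x: float(10 * len(x) + np.sum(np.asarray(x) ** 2
                                - 10 * np.cos(2 * np.pi * np.asarray(x))))
    cells = [(-5.0, -2.5), (-2.5, 0.0), (0.0, 2.5), (2.5, 5.0)]   # hypothetical partition
    print(ucb_restarts(rastrigin, cells))
```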
Berthou, Thomas. „Développement de modèles de bâtiment pour la prévision de charge de climatisation et l'élaboration de stratégies d'optimisation énergétique et d'effacement“. Phd thesis, Ecole Nationale Supérieure des Mines de Paris, 2013. http://pastel.archives-ouvertes.fr/pastel-00935434.
Ali, Marwan. „Nouvelles architectures intégrées de filtre CEM hybride“. Phd thesis, École normale supérieure de Cachan - ENS Cachan, 2012. http://tel.archives-ouvertes.fr/tel-00847144.
Der volle Inhalt der QuelleAkerma, Mahdjouba. „Impact énergétique de l’effacement dans un entrepôt frigorifique : analyse des approches systémiques : boîte noire / boîte blanche“. Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS187.
Refrigerated warehouses and cold rooms, mainly used for food conservation, constitute available storage cells; they can be considered as a network of "thermal batteries" ready to be used and one of the best existing solutions to store and delay electricity consumption. However, the risk related to temperature fluctuations of products due to periods of demand response (DR) and the risk of energy overconsumption limit the use of this strategy by the food refrigeration industry. The present PhD thesis aims to characterize the electrical DR of warehouses and cold rooms by examining the thermal behavior of those systems, in terms of temperature fluctuation and electrical consumption. An experimental set-up was developed to study several DR scenarios (duration, frequency and operating conditions) and to propose new indicators to characterize the impact of DR periods on the thermal and energy behavior of refrigeration systems. This study has highlighted the importance of the presence of load to limit the temperature rise and thus to reduce the impact on stored products. The potential for DR application in the case of a cold store and a cold room was assessed, based on the development of two modeling approaches: "black box" (machine learning by artificial neural networks using deep learning models) and "white box" (physics). A possibility of interaction between these two approaches has been proposed, based on the use of black-box models for prediction and the use of the white-box model to generate input and output data.
Bout, Erwan David Mickaël. „Poïétique et procédure : pour une esthétique de la boîte noire“. Paris 1, 2009. http://www.theses.fr/2009PA010631.
Barbillon, Pierre. „Méthodes d'interpolation à noyaux pour l'approximation de fonctions type boîte noire coûteuses“. Phd thesis, Université Paris Sud - Paris XI, 2010. http://tel.archives-ouvertes.fr/tel-00559502.
Der volle Inhalt der QuelleBerthou, Thomas. „Développement de modèles de bâtiment pour la prévision de charge de climatisation et l’élaboration de stratégies d’optimisation énergétique et d’effacement“. Thesis, Paris, ENMP, 2013. http://www.theses.fr/2013ENMP0030/document.
To reach the objectives of reducing the energy consumption and increasing the flexibility of buildings' energy demand, it is necessary to have load forecast models that are easy to adapt on site and efficient for the implementation of energy optimization and load shedding strategies. This thesis compares several inverse model architectures ("black box", "grey box"). A 2nd-order semi-physical model (R6C2) has been selected to forecast load curves and the average indoor temperature for heating and cooling. It is also able to simulate unknown situations (load shedding) absent from the learning phase. Three energy optimization and load shedding strategies adapted to operational constraints are studied. The first one optimizes the night set-back to reduce consumption and to reach the comfort temperature in the morning. The second strategy optimizes the set-point temperatures during a day in the context of variable energy prices, thus reducing the energy bill. The third strategy allows load curtailment in buildings by limiting load while meeting specified comfort criteria. The R6C2 model and the strategies have been confronted with a real building (an elementary school). The study shows that it is possible to forecast the electrical power and the average temperature of a complex building with a single-zone model; the developed strategies are assessed and the limitations of the model are identified.
Bensadon, Jérémy. „Applications de la théorie de l'information à l'apprentissage statistique“. Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLS025/document.
We study two different topics, using insight from information theory in both cases: 1) Context Tree Weighting is a text compression algorithm that efficiently computes the Bayesian combination of all visible Markov models: we build a "context tree", with deeper nodes corresponding to more complex models, and the mixture is computed recursively, starting with the leaves. We extend this idea to a more general context, also encompassing density estimation and regression, and we investigate the benefits of replacing regular Bayesian inference with switch distributions, which put a prior on sequences of models instead of models. 2) Information Geometric Optimization (IGO) is a general framework for black-box optimization that recovers several state-of-the-art algorithms, such as CMA-ES and xNES. The initial problem is transferred to a Riemannian manifold, yielding a parametrization-invariant first-order differential equation. However, since in practice time is discretized, this invariance only holds up to first order. We introduce the Geodesic IGO (GIGO) update, which uses this Riemannian manifold structure to define a fully parametrization-invariant algorithm. Thanks to Noether's theorem, we obtain a first-order differential equation satisfied by the geodesics of the statistical manifold of Gaussians, thus allowing us to compute the corresponding GIGO update. Finally, we show that while GIGO and xNES are different in general, it is possible to define a new "almost parametrization-invariant" algorithm, Blockwise GIGO, that recovers xNES from abstract principles.
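To give a flavour of the IGO/NES family mentioned above, here is a deliberately simplified natural-evolution-strategy-style update of an isotropic Gaussian (mean plus a single step size) with rank-based utilities. It omits the full covariance matrix and the geodesic (GIGO) correction discussed in the thesis; learning rates and population size are illustrative assumptions.

```python
import numpy as np

def isotropic_nes(f, x0, sigma0=1.0, lam=12, iters=300, eta_m=1.0, eta_s=0.1, seed=0):
    """Simplified NES-style update of an isotropic Gaussian N(m, sigma^2 I).

    Rank-based utilities, natural-gradient-like updates of the mean and of
    log(sigma). A teaching sketch only, not xNES, CMA-ES or GIGO as defined
    in the literature.
    """
    rng = np.random.default_rng(seed)
    m = np.asarray(x0, dtype=float)
    d = m.size
    sigma = sigma0
    # Rank-based utilities, shifted to sum to zero.
    u = np.log(lam / 2 + 1) - np.log(np.arange(1, lam + 1))
    u = np.maximum(u, 0.0)
    u = u / u.sum() - 1.0 / lam
    for _ in range(iters):
        z = rng.standard_normal((lam, d))
        fitness = np.array([f(m + sigma * zi) for zi in z])
        zs = z[np.argsort(fitness)]                        # best first (minimisation)
        m = m + eta_m * sigma * (u @ zs)
        sigma *= np.exp(eta_s * float(u @ (np.sum(zs ** 2, axis=1) / d - 1.0)))
    return m, f(m), sigma

if __name__ == "__main__":
    sphere = lambda x: float(np.sum(np.asarray(x) ** 2))
    m, fm, s = isotropic_nes(sphere, np.full(10, 3.0))
    print(fm, s)
```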
Saives, Jérémie. „Identification Comportementale "Boîte-noire" des Systèmes à Evénements Discrets par Réseaux de Petri Interprétés“. Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLN018/document.
This thesis proposes a method to identify compact and expressive models of closed-loop reactive Discrete Event Systems (DES), for reverse engineering or certification. The identification is passive and black-box, the accessible knowledge being limited to input/output signals. Interpreted Petri Nets (IPN) represent both the observable behaviour (direct input/output causalities) and the unobservable behaviour (internal state evolutions) of the system. This thesis aims at identifying IPN models from an observed sequence of I/O vectors. The proposed contributions extend previous results towards scalability, to deal with realistic systems which exhibit concurrency. Firstly, the construction of the observable part of the IPN is improved by the addition of a filter limiting the effect of concurrency: it detects and removes spurious synchronizations caused by the controller. Then, a new approach is proposed to improve the discovery of the unobservable part. It is based on the use of projections and guarantees the reproduction of the observed behaviour, despite concurrency. An efficient heuristic is proposed to compute a model adapted to reverse engineering, limiting the computational cost. Finally, a distributed approach is proposed to further reduce the computational cost, by automatically partitioning the system into subsystems. The efficiency of the cumulative effect of these contributions is validated on a system of realistic size.
Muzammil, Shahbaz Muhammad. „Rétro-conception de modèles d'automates étendus de composant logiciels boîte-noire pour le test d'intégration“. Grenoble INPG, 2008. http://www.theses.fr/2008INPG0166.
A challenging issue in component-based software engineering is to deliver quality of service. When components come from third-party sources (aka black boxes), the specifications are often absent or insufficient for their formal analysis. The thesis addresses the problem of uncovering the behaviors of black-box software components to support testing and analysis of the integrated system that is composed of such components. We propose to learn finite state machine models (where transitions are labelled with parameterized inputs/outputs) and provide a framework for testing and analyzing the integrated system using the inferred models. The approach has been validated on various case studies provided by France Telecom and has produced encouraging results.
Graux, François. „Méthodologie de modélisation boîte noire de circuits hyperfréquences non linéaires par réseaux de neurones : applications au radar“. Lille 1, 2001. https://pepite-depot.univ-lille.fr/RESTREINT/Th_Num/2001/50376-2001-47.pdf.
Der volle Inhalt der QuelleBustos, Avila Cecilia. „Optimisation du procédé d'aboutage par entures multiples du bois d'épinette noire“. Thesis, Université Laval, 2003. http://www.theses.ulaval.ca/2003/21006/21006.pdf.
In Eastern Canada, black spruce (Picea mariana (Mill.) B.S.P.) has recently been introduced to finger-joined wood products. While there is a growing economic importance of the species in such applications, little information is available on the manufacturing parameters that influence the finger-jointing process for this species. The purpose of this work was to optimize the finger-jointing process of black spruce wood. Various parameters associated with the finger-jointing process were evaluated. Those included the following: finger-joint configurations, curing time and end-pressure, wood temperature and moisture content and wood machining parameters. Isocyanate adhesive was used for all types of evaluations. Results from configurations evaluation indicated that the feather configuration performs better than male-female and reverse profiles, especially for horizontal structural joints. The effect of moisture content on the mechanical performance of joined black spruce wood was not very conclusive. However, the experiment on wood temperature showed the lowest tensile strength at -5°C. Results on the effect of pressure and curing time showed that curing time and end-pressure have a statistically significant influence on the performance of structural finger-joints. Analysis indicated that finger-joined black spruce has the best performance at an end-pressure of 3.43 MPa (498 psi). For wood machining parameters, results indicated that suitable finger-jointing in black spruce could be achieved within a range of 1676 m/min (5498 feet/min) and 2932 m/min (9621 feet/min) of cutting speed and between 0.86 mm and 1.14 mm of chip-load. The microscopical analysis of damaged cells confirmed the effect of cutting speed on the finger-jointing process. In general, depth of damage was more severe as cutting speed increased. Results obtained in this research could help mills to optimize the process and improve the mechanical performance of finger-joined black spruce product.
Bustos, Cecilia. „Optimisation du procédé d'aboutage par entures multiples du bois d'épinette noire“. Doctoral thesis, Université Laval, 2003. http://hdl.handle.net/20.500.11794/17809.
In Eastern Canada, black spruce (Picea mariana (Mill.) B.S.P.) has recently been introduced to finger-joined wood products. While there is a growing economic importance of the species in such applications, little information is available on the manufacturing parameters that influence the finger-jointing process for this species. The purpose of this work was to optimize the finger-jointing process of black spruce wood. Various parameters associated with the finger-jointing process were evaluated. Those included the following: finger-joint configurations, curing time and end-pressure, wood temperature and moisture content and wood machining parameters. Isocyanate adhesive was used for all types of evaluations. Results from configurations evaluation indicated that the feather configuration performs better than male-female and reverse profiles, especially for horizontal structural joints. The effect of moisture content on the mechanical performance of joined black spruce wood was not very conclusive. However, the experiment on wood temperature showed the lowest tensile strength at -5°C. Results on the effect of pressure and curing time showed that curing time and end-pressure have a statistically significant influence on the performance of structural finger-joints. Analysis indicated that finger-joined black spruce has the best performance at an end-pressure of 3.43 MPa (498 psi). For wood machining parameters, results indicated that suitable finger-jointing in black spruce could be achieved within a range of 1676 m/min (5498 feet/min) and 2932 m/min (9621 feet/min) of cutting speed and between 0.86 mm and 1.14 mm of chip-load. The microscopical analysis of damaged cells confirmed the effect of cutting speed on the finger-jointing process. In general, depth of damage was more severe as cutting speed increased. Results obtained in this research could help mills to optimize the process and improve the mechanical performance of finger-joined black spruce product.
Cool, Julie. „Optimisation de l'usinage de finition du bois d'épinette noire pour fins d'adhésion“. Thesis, Université Laval, 2011. http://www.theses.ulaval.ca/2011/28194/28194.pdf.
The main objective of this research project was to evaluate the effect of surfacing processes on black spruce wood in relation to surface quality and the performance of poly(vinyl acetate) glue and a water-based coating. First, oblique cutting, peripheral planing, face milling and sanding were used prior to gluing with a two-component poly(vinyl acetate) glue. The four machining processes produced surfaces having similar surface roughness and glue line shear strength before and after an accelerated aging treatment. Thus, black spruce wood is relatively easy to glue. Only glue penetration, as well as the level of fibrillation, was affected by the machining processes. In the second part of the project, the objective was to optimize the cutting parameters of sanding, oblique cutting, helical planing, peripheral planing and face milling for surface quality and coating performance. In addition, the four planing processes were evaluated in order to assess their potential as alternatives to sanding. Sanding with a two-stage program (P100-150-grit) at a feed speed of 17 m/min generated low surface roughness and high wettability and pull-off strength after the accelerated aging treatment. On one hand, face milling induced superior surface roughness and wettability when compared with the other studied surfacing processes. These samples were also characterized by the most important coating penetration and level of fibrillation. This contributed to reducing the coating layer protecting the samples during the aging treatment. As a consequence, face-milled samples had a significantly lower pull-off strength compared with those prepared by the other four machining processes. On the other hand, oblique cutting with an oblique angle of 35°, helical planing with a wavelength of 1.43 mm and peripheral planing with a 10 or 20° rake angle all produced surfaces with intermediate surface roughness and wetting properties, but coating performance statistically similar to that of sanded samples. As a result, these three planing processes are to be considered as alternatives to the sanding process.
Cool, Julie. „Optimisation de l'usinage de finition du bois d'épinette noire pour fins d'adhésion“. Doctoral thesis, Université Laval, 2011. http://hdl.handle.net/20.500.11794/22507.
The main objective of this research project was to evaluate the effect of surfacing processes on black spruce wood in relation to surface quality and the performance of poly(vinyl acetate) glue and a water-based coating. First, oblique cutting, peripheral planing, face milling and sanding were used prior to gluing with a two-component poly(vinyl acetate) glue. The four machining processes produced surfaces having similar surface roughness and glue line shear strength before and after an accelerated aging treatment. Thus, black spruce wood is relatively easy to glue. Only glue penetration, as well as the level of fibrillation, was affected by the machining processes. In the second part of the project, the objective was to optimize the cutting parameters of sanding, oblique cutting, helical planing, peripheral planing and face milling for surface quality and coating performance. In addition, the four planing processes were evaluated in order to assess their potential as alternatives to sanding. Sanding with a two-stage program (P100-150-grit) at a feed speed of 17 m/min generated low surface roughness and high wettability and pull-off strength after the accelerated aging treatment. On one hand, face milling induced superior surface roughness and wettability when compared with the other studied surfacing processes. These samples were also characterized by the most important coating penetration and level of fibrillation. This contributed to reducing the coating layer protecting the samples during the aging treatment. As a consequence, face-milled samples had a significantly lower pull-off strength compared with those prepared by the other four machining processes. On the other hand, oblique cutting with an oblique angle of 35°, helical planing with a wavelength of 1.43 mm and peripheral planing with a 10 or 20° rake angle all produced surfaces with intermediate surface roughness and wetting properties, but coating performance statistically similar to that of sanded samples. As a result, these three planing processes are to be considered as alternatives to the sanding process.
Censier, Benjamin. „Étude et optimisation de la voie ionisation dans l'expérience EDELWEISS de détection directe de la matière noire“. Phd thesis, Université Paris Sud - Paris XI, 2006. http://tel.archives-ouvertes.fr/tel-00109453.
Censier, Benjamin. „Etude et optimisation de la voie ionisation dans l'expérience EDELWEISS de détection directe de la matière noire“. Paris 11, 2006. https://tel.archives-ouvertes.fr/tel-00109453.
The EDELWEISS experiment aims at the detection of Weakly Interacting Massive Particles (WIMPs), today's most favoured candidates for solving the dark matter issue. Background ionising particles are identified thanks to the simultaneous measurement of heat and ionisation in the detectors. The main limitation of this method comes from the ionisation measurement, charge collection being less efficient in some parts of the detectors known as "dead" areas. The specificity of the measurement is due to the use of very low temperatures and low collection fields. This thesis is dedicated to the study of carrier trapping. It involves time-resolved charge measurements as well as a simulation code adapted to the specific physical conditions. We first present results concerning charge trapping at the free surfaces of the detectors. Our method allows a surface charge to be built up in a controlled manner by irradiation with a strong radioactive source. This charge is then characterised with a weaker source which acts as a probe. In a second part of the work, bulk-trapping characteristics are deduced from charge collection efficiency measurements, and by an original method based on event localisation in the detector. The results show that a large proportion of the doping impurities are ionised, as indicated independently by the study of degradation by space-charge build-up. In this last part, near-electrode areas are found to contain large densities of charged trapping centres, in connection with dead-layer effects.
Begin, Thomas. „Modélisation et calibrage automatiques de systèmes“. Paris 6, 2008. http://www.theses.fr/2008PA066540.
Navick, Xavier-François. „Étude et optimisation de bolomètres à mesure simultanée de l'ionisation et de la chaleur pour la recherche de la matière noire“. Paris 7, 1997. http://www.theses.fr/1997PA077146.
Cabana, Antoine. „Contribution à l'évaluation opérationnelle des systèmes biométriques multimodaux“. Thesis, Normandie, 2018. http://www.theses.fr/2018NORMC249/document.
The development and spread of connected devices, in particular smartphones, require the implementation of authentication methods. Out of ergonomic concern, manufacturers integrate biometric systems in order to deal with logical access control issues. These biometric systems grant access to critical data and applications (payment, e-banking, privacy-sensitive data such as emails...). Thus, evaluation processes allow estimation of the systems' suitability for these uses. In order to improve recognition performance, manufacturers may perform multimodal fusion. In this thesis, the evaluation of operational biometric systems has been studied, and an implementation is presented. A second contribution studies the quality estimation of speech samples, in order to predict recognition performance.
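Multimodal fusion in biometric systems is frequently performed at the score level. A minimal weighted-sum fusion with min-max normalisation, on made-up matcher scores, is sketched below; the weights, bounds and acceptance threshold are purely illustrative and unrelated to the evaluations reported in the thesis.

```python
import numpy as np

def min_max_normalise(scores, lo, hi):
    """Map raw matcher scores to [0, 1] using bounds estimated on enrolment data."""
    return np.clip((scores - lo) / (hi - lo), 0.0, 1.0)

def fuse_scores(modal_scores, weights):
    """Weighted-sum fusion of already-normalised scores from several modalities."""
    weights = np.asarray(weights, dtype=float)
    return np.asarray(modal_scores, dtype=float) @ (weights / weights.sum())

if __name__ == "__main__":
    # Hypothetical raw scores from a face matcher and a fingerprint matcher
    # for two verification attempts (one genuine, one impostor).
    face_raw, finger_raw = np.array([0.62, 0.10]), np.array([41.0, 12.0])
    face = min_max_normalise(face_raw, lo=0.0, hi=1.0)
    finger = min_max_normalise(finger_raw, lo=5.0, hi=60.0)
    fused = fuse_scores(np.stack([face, finger], axis=1), weights=[0.4, 0.6])
    decision = fused >= 0.5        # illustrative acceptance threshold
    print(fused, decision)
```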
Romero, Ugalde Héctor Manuel. „Identification de systèmes utilisant les réseaux de neurones : un compromis entre précision, complexité et charge de calculs“. Thesis, Paris, ENSAM, 2013. http://www.theses.fr/2013ENAM0001/document.
This report concerns the research topic of black-box nonlinear system identification. Indeed, among all the various and numerous techniques developed in this field over the last decades, it still seems interesting to investigate the neural network approach for complex system model estimation. Even if accurate models have been derived, the main drawbacks of these techniques remain the large number of parameters required and, as a consequence, the important computational cost necessary to obtain the desired level of model accuracy. Hence, motivated by these drawbacks, we developed a complete and efficient system identification methodology providing models balanced in accuracy, complexity and cost by proposing, firstly, new neural network structures particularly adapted to a very wide use in practical nonlinear system modeling, secondly, a simple and efficient model reduction technique, and, thirdly, a computational cost reduction procedure. It is important to notice that these last two reduction techniques can be applied to a very large range of neural network architectures under two simple specific assumptions which are not at all restricting. Finally, the last important contribution of this work is to have shown that this estimation phase can be achieved in a robust framework if the quality of the identification data requires it. In order to validate the proposed system identification procedure, application examples driven in simulation and on a real process satisfactorily validated all the contributions of this thesis, confirming the interest of this work.
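As a point of reference for black-box system identification, the simplest baseline is a linear ARX model fitted by least squares, far simpler than the neural architectures developed in the thesis. The sketch below identifies such a model from input/output records of a made-up first-order system; all orders and coefficients are assumptions for the example.

```python
import numpy as np

def fit_arx(u, y, na=2, nb=2):
    """Least-squares fit of y[k] = sum_i a_i y[k-i] + sum_j b_j u[k-j] (linear ARX)."""
    n = max(na, nb)
    rows = [np.concatenate([y[k - na:k][::-1], u[k - nb:k][::-1]])
            for k in range(n, len(y))]
    phi = np.asarray(rows)
    theta, *_ = np.linalg.lstsq(phi, y[n:], rcond=None)
    return theta[:na], theta[na:]

def simulate_arx(a, b, u, y0):
    """Free-run simulation of the identified model from initial outputs y0."""
    na, nb = len(a), len(b)
    y = list(y0)
    for k in range(len(y0), len(u)):
        y.append(float(np.dot(a, y[k - na:k][::-1]) + np.dot(b, u[k - nb:k][::-1])))
    return np.array(y)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    u = rng.uniform(-1, 1, size=500)
    y = np.zeros(500)
    for k in range(1, 500):        # 'true' system: y[k] = 0.8 y[k-1] + 0.5 u[k-1] + noise
        y[k] = 0.8 * y[k - 1] + 0.5 * u[k - 1] + 0.01 * rng.standard_normal()
    a, b = fit_arx(u, y, na=1, nb=1)
    y_sim = simulate_arx(a, b, u, [y[0]])
    print(a, b)                    # expected close to [0.8] and [0.5]
    print("free-run RMSE:", float(np.sqrt(np.mean((y_sim - y) ** 2))))
```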
Romero Ugalde, Héctor Manuel. „Identification de systèmes utilisant les réseaux de neurones : un compromis entre précision, complexité et charge de calculs“. Phd thesis, Ecole nationale supérieure d'arts et métiers - ENSAM, 2013. http://pastel.archives-ouvertes.fr/pastel-00869428.
Dolgorouky, Youri. „Optimisation du pouvoir de résolution et du rejet du fond radioactif de détecteurs ionisation-chaleur équipés de couches minces thermométriques pour la détection directe de WIMPs“. Phd thesis, Université Paris Sud - Paris XI, 2008. http://tel.archives-ouvertes.fr/tel-00401690.
Ouenzar, Mohammed. „Validation de spécifications de systèmes d'information avec Alloy“. Mémoire, Université de Sherbrooke, 2013. http://hdl.handle.net/11143/6594.
Dominique, Cyril. „Modélisation dynamique des modules actifs à balayage électronique par séries de Volterra et intégration de ces modèles pour une simulation de type système“. Paris 6, 2002. http://www.theses.fr/2002PA066106.
Longuet, Delphine. „Test à partir de spécifications axiomatiques“. Phd thesis, Université d'Evry-Val d'Essonne, 2007. http://tel.archives-ouvertes.fr/tel-00258792.
The selection of the data to submit to the software can be carried out following different approaches. When the selection of a test set is performed from a reference object that more or less formally describes the behaviour of the software, without knowledge of the implementation itself, one speaks of "black-box" testing. One black-box testing approach for which a formal framework has been proposed is the one that uses a logical specification of the system under test as the reference object.
The general framework of testing from logical (or axiomatic) specifications sets out the conditions and hypotheses under which it is possible to test a system. The first hypothesis consists in regarding the system under test as a formal model implementing the operations whose behaviour is described by the specification. The second hypothesis concerns the observability of the system under test: one must fix the form of the formulas that can be interpreted by the system, that is, that can serve as tests; one usually restricts oneself at least to formulas that contain no variables. Once these testing hypotheses are stated, one has an initial test set, namely the set of all observable formulas that are logical consequences of the specification.
The first result to establish is the exhaustiveness of this set, that is, its ability to prove the correctness of the system if it could be submitted in its entirety. Since the exhaustive test set is usually infinite, a selection phase takes place in order to choose a test set of finite, reasonable size to submit to the system. Several approaches are possible. The approach followed in my thesis, called partition testing, consists in dividing the initial exhaustive test set into test subsets according to a selection criterion relating to a functionality or a characteristic of the system one wants to test. Once this partition is sufficiently fine, it suffices to choose one test case in each resulting test subset, by applying the uniformity hypothesis (all test cases of a test subset are equivalent for making the system fail). The second result to establish is that dividing the initial test set neither adds test cases (soundness of the procedure) nor loses any (completeness).
In the setting of algebraic specifications, a much-studied method for partitioning the exhaustive test set, called axiom unfolding, consists in carrying out a case analysis of the specification. Until now, this method relied on equational specifications whose axioms had the characteristic of being positive conditional (a conjunction of equations implies an equation).
The aim of my thesis was to extend and adapt this test selection framework to dynamic systems specified in an axiomatic formalism, first-order modal logic. The first step was to generalise the selection method, defined for positive conditional equational specifications, to first-order specifications. This testing framework was then adapted to first-order modal specifications. The first specification formalism considered is a modal extension of the positive conditional logic for which the testing framework was initially defined. Once the testing framework was adapted to positive conditional modal specifications, the generalisation to first-order modal specifications could be carried out.
In each of these formalisms we carried out two tasks. On the one hand, we studied the conditions that must be imposed on the specification and on the system under test in order to obtain the exhaustiveness of the initial test set. On the other hand, we adapted and extended the selection procedure by axiom unfolding to these formalisms and proved its soundness and completeness. In both general settings, first-order specifications and first-order modal specifications, we showed that the conditions required for exhaustiveness of the target test set are minor, since they are easy to ensure in practice, which guarantees a satisfactory generalisation of the selection to this setting.
Faye, Papa Abdoulaye. „Planification et analyse de données spatio-temporelles“. Thesis, Clermont-Ferrand 2, 2015. http://www.theses.fr/2015CLF22638/document.
Spatio-temporal modeling allows the prediction of a regionalized variable at unobserved points of a given field, based on observations of this variable at some points of the field at different times. In this thesis, we propose an approach which combines numerical and statistical models. Indeed, by using Bayesian methods we combine the different sources of information: spatial information provided by the observations, temporal information provided by the black box, and prior information on the phenomenon of interest. This approach gives a good prediction of the variable of interest and a good quantification of the uncertainty on this prediction. We also propose a new method to construct experimental designs by establishing an optimality criterion based on the uncertainty and the expected value of the phenomenon.
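The spatial part of such an approach typically rests on Gaussian process (kriging) prediction. A self-contained sketch of the GP posterior mean and variance with a squared-exponential kernel on synthetic 1-D data is given below; the kernel, its hyperparameters and the test function are illustrative assumptions, not the models of the thesis.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=0.3, variance=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d2 = (A[:, None] - B[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / length_scale ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-4):
    """Posterior mean and variance of a zero-mean GP at the test points."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_test, x_train)
    Kss = rbf_kernel(x_test, x_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.diag(Kss) - np.sum(v ** 2, axis=0)
    return mean, var

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x_train = rng.uniform(0, 1, size=12)
    y_train = np.sin(2 * np.pi * x_train)      # made-up regionalised variable
    x_test = np.linspace(0, 1, 5)
    mean, var = gp_predict(x_train, y_train, x_test)
    print(mean, var)
```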
Nesme, Vincent. „Complexité en requêtes et symétries“. Phd thesis, Ecole normale supérieure de lyon - ENS LYON, 2007. http://tel.archives-ouvertes.fr/tel-00156762.
This thesis studies the query complexity of symmetric problems, in the settings of classical probabilistic computation and of quantum computation.
In the quantum case, an application of the so-called "polynomial" lower-bound method to the query complexity of abelian hidden subgroup problems is given, via a "symmetrization" technique.
In the probabilistic case, under a "transitive symmetry" assumption on the problems, a combinatorial formula is given for computing the exact query complexity of the best non-adaptive algorithm. Moreover, it is shown that under certain symmetry assumptions this best non-adaptive algorithm is optimal even among more general probabilistic algorithms, which yields an exact expression of the query complexity for the corresponding class of problems.
Vazquez, Emmanuel. „Modélisation comportementale de systèmes non-linéaires multivariables par méthodes à noyaux et applications“. Phd thesis, Université Paris Sud - Paris XI, 2005. http://tel.archives-ouvertes.fr/tel-00010199.
Dumas, Jean-Guillaume. „Algorithmes parallèles efficaces pour le calcul formel : algèbre linéaire creuse et extensions algébriques“. Phd thesis, Grenoble INPG, 2000. http://tel.archives-ouvertes.fr/tel-00002742.
Cheaito, Hassan. „Modélisation CEM des équipements aéronautiques : aide à la qualification de l'essai BCI“. Thesis, Lyon, 2017. http://www.theses.fr/2017LYSEC039/document.
Electronic equipment intended to be integrated in aircraft is subject to normative requirements. EMC (Electromagnetic Compatibility) qualification tests have become one of the mandatory requirements. This PhD thesis, carried out within the framework of the SIMUCEDO project (SIMulation CEM based on the DO-160 standard), contributes to the modeling of the Bulk Current Injection (BCI) qualification test. The concept, detailed in section 20 of the DO-160 standard, is to generate a noise current in the cables using probe injection, then to check that the EUT behaves satisfactorily during the test. Among the qualification tests, the BCI test is one of the most constraining and time consuming. Thus, its modeling ensures a saving of time and a better control of the parameters which influence the success of the equipment under test. The modeling of the test was split into two parts: the equipment under test (EUT) on one hand, and the injection probe with the cables on the other hand. This thesis focuses on the EUT modeling. A "gray box" modeling was proposed by associating the "black box" model with the "extensive" model. The gray box is based on the measurement of standard impedances. Its identification is done with a "pi" model. The model, which has the advantage of taking into account several configurations of the EUT, has been validated on an analog-to-digital converter (ADC). Another approach, called modal, expressed in terms of common mode and differential mode, has been proposed. It takes into account the mode conversion when the EUT is asymmetrical. Specific PCBs were designed to validate the developed equations. An investigation was carried out to rigorously define the modal impedances, in particular the common mode (CM) impedance. We have shown that there is a discrepancy between two definitions of CM impedance in the literature. Furthermore, the mode conversion ratio (or Longitudinal Conversion Loss, LCL) was quantified using analytical equations based on the modal approach. An N-input model has been extended to include industrial complexity. The EUT model is combined with the clamp and cable model (made by the G2ELAB laboratory). Experimental measurements have been made to validate the combined model. According to these measurements, the CM current is influenced by the setup of the cables as well as by the EUT. It has been shown that the connection of the shield to the ground plane is the most influential parameter on the CM current distribution.
Pernet, Clément. „Algèbre linéaire exacte efficace : le calcul du polynôme caractéristique“. Phd thesis, Université Joseph Fourier (Grenoble), 2006. http://tel.archives-ouvertes.fr/tel-00111346.
Computing the characteristic polynomial is one of the classical problems of linear algebra. Its exact computation makes it possible, for instance, to decide the similarity of two matrices, via the computation of the Frobenius normal form, or the cospectrality of two graphs. While improving its theoretical complexity remains an open problem, both for dense and black-box methods, we address the question from the point of view of practicality: adaptive algorithms for dense or black-box matrices are derived from the best existing algorithms to ensure efficiency in practice. This makes it possible to handle exactly problems of dimensions that were previously out of reach.
Pamart, Pierre-Yves. „Contrôle des décollements en boucle fermée“. Phd thesis, Université Pierre et Marie Curie - Paris VI, 2011. http://tel.archives-ouvertes.fr/tel-00659979.
Amara, Meriem. „Maîtrise des émissions conduites des électroniques de puissance“. Thesis, Lyon, 2019. http://www.theses.fr/2019LYSEC047.
This thesis focuses on the black-box EMC modeling of a three-phase inverter for aerospace applications. The work performed in this thesis is carried out in the framework of the DGAC project (Directorate General of Civil Aviation) titled MECEP (Control of conducted emissions of power electronics). To protect the on-board network from conducted electromagnetic interference and to meet EMC standards, especially the aviation standard DO-160F, an EMC filter is absolutely necessary for each power converter. The disturbance levels generated by this type of system require careful design to ensure the filtering of parasitic currents that propagate in common mode (CM) and differential mode (DM). Therefore, the work of this thesis is devoted to the study of a generic and fast EMC modeling approach, able to represent correctly the electromagnetic behavior of the converter from the side of the DC network and from the side of the AC load. This modeling is based on a "black box" representation. The identified EMC model contains disturbance sources and equivalent CM and DM impedances. This type of model is validated for two standard 4 kW drive chains. It is able to predict the impact of different parameters such as the operating point, the network impedance and the load impedance in the frequency domain. A good agreement is obtained in all cases up to a frequency of 50 MHz. Finally, the proposed EMC modeling, which represents the electromagnetic behavior of the DC input side of the converter, is extended to represent the behavior of the AC output side. The main advantages of the proposed "black box" EMC modeling are its rapidity, its simplicity and its construction without knowledge of the internal structure of the converter, which may be protected by industrial secrecy.
Dahan, Jean-Jacques. „La démarche de découverte expérimentalement médiée par Cabri-Géomètre en mathématiques: un essai de formalisation à partir de l'analyse de démarches de résolutions de problèmes de boîtes noires“. Phd thesis, 2005. http://tel.archives-ouvertes.fr/tel-00356107.
The analysis of the solving of a particular black box makes it possible to refine our a priori model of the discovery process by specifying the role of the figure (Duval), the levels of geometry (Parzysz's G1 and G2 praxeologies) and the extensions of them that we develop (computer-based G1 and G2), the frameworks of investigation (Millar), and the place of experimental proof (Johsua).
The analyses of the experiments carried out provide an improved model that should allow teachers to have a minimal knowledge of the heuristic stages of their students' work, to design study and research activities with precise objectives linked to the formalised stages of our model, and to consider their possible assessment.
Analyses of existing activities with our grid show the validity of the model under study. Proposed activities were constructed to encourage the emergence of particular phases of the investigation; they show the viability of this model for designing didactical engineering that generates a process consistent with the postulated one.