Dissertations / Theses on the topic 'Modèle de boîte noire'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Modèle de boîte noire.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Akerma, Mahdjouba. "Impact énergétique de l’effacement dans un entrepôt frigorifique : analyse des approches systémiques : boîte noire / boîte blanche." Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS187.
Refrigerated warehouses and cold rooms, mainly used for food preservation, constitute available storage cells; they can be considered as a network of "thermal batteries" ready to be used, and one of the best existing solutions for storing and shifting electricity consumption. However, the risk of product temperature fluctuations during demand-response (DR) periods and the risk of energy overconsumption limit the use of this strategy in industrial food refrigeration. This PhD thesis aims to characterize the electrical DR of warehouses and cold rooms by examining the thermal behaviour of these systems, in terms of temperature fluctuation and electrical consumption. An experimental set-up was developed to study several DR scenarios (duration, frequency and operating conditions) and to propose new indicators characterizing the impact of DR periods on the thermal and energy behaviour of refrigeration systems. This study highlighted the importance of the presence of a load in limiting the temperature rise, and thus in reducing the impact on stored products. The potential for DR application in the case of a cold store and a cold room was assessed through two modelling approaches: "black box" (machine learning by artificial neural networks, using deep learning models) and "white box" (physics-based). An interaction between these two approaches was proposed, using the black-box models for prediction and the white-box model to generate input and output data.
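The black-box/white-box interplay described in this abstract can be sketched in a few lines. The first-order thermal model, its parameter values, and the surrogate's features below are illustrative assumptions, not taken from the thesis:

```python
import numpy as np

# White-box (physics) side: a first-order thermal model of a cold room
# warming up while refrigeration is off during a demand-response period:
# T(t) = T_ext + (T0 - T_ext) * exp(-t / tau). All values are illustrative.
def white_box(t, T0=-20.0, T_ext=25.0, tau=8.0):
    return T_ext + (T0 - T_ext) * np.exp(-t / tau)

t = np.linspace(0.0, 2.0, 50)      # a 2-hour DR event
T = white_box(t)                   # data generated by the white-box model

# Black-box side: a surrogate fitted by least squares on simple features,
# predicting the temperature rise without any knowledge of the physics.
X = np.column_stack([np.ones_like(t), t, t ** 2])
coef, *_ = np.linalg.lstsq(X, T, rcond=None)
T_hat = X @ coef

residual = float(np.max(np.abs(T_hat - T)))
print(residual)                    # small on this short horizon
```

The same pattern scales up: the physics model supplies training data where measurements are scarce, and the fitted black box is then cheap enough to use for prediction.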
Muzammil, Shahbaz Muhammad. "Rétro-conception de modèles d'automates étendus de composant logiciels boîte-noire pour le test d'intégration." Grenoble INPG, 2008. http://www.theses.fr/2008INPG0166.
A challenging issue in component-based software engineering is delivering quality of service. When components come from third-party sources (black boxes), specifications are often absent or insufficient for formal analysis. This thesis addresses the problem of uncovering the behaviours of black-box software components to support testing and analysis of the integrated system composed of such components. We propose to learn finite-state-machine models (where transitions are labelled with parameterized inputs/outputs) and provide a framework for testing and analyzing the integrated system using the inferred models. The approach has been validated on various case studies provided by France Telecom and has produced encouraging results.
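The idea of recovering a state machine purely from input/output queries can be illustrated with a toy sketch. This is not the thesis's parameterized-FSM inference algorithm: the component, the one-step distinguishing "signature", and the bounded breadth-first exploration are all simplifying assumptions:

```python
from collections import deque

# A hypothetical black-box component with hidden state: a toggle switch.
# Only reset() and step() are observable, as in black-box integration testing.
class BlackBox:
    def __init__(self):
        self.state = 0
    def reset(self):
        self.state = 0
    def step(self, inp):
        if inp == "toggle":
            self.state ^= 1
        return "on" if self.state else "off"

def run(box, seq):
    """Replay an input sequence from reset; return the last output."""
    box.reset()
    out = None
    for i in seq:
        out = box.step(i)
    return out

def signature(box, prefix, inputs):
    """Identify the state reached after `prefix` by its response to each
    single next input (a crude one-step distinguishing test)."""
    return tuple(run(box, prefix + [i]) for i in inputs)

def infer_mealy(box, inputs, max_depth=4):
    """Breadth-first query exploration: states are keyed by signature,
    transitions record (state, input) -> (next state, output)."""
    states, trans = {}, {}
    queue = deque([[]])
    while queue:
        prefix = queue.popleft()
        s = signature(box, prefix, inputs)
        if s in states or len(prefix) > max_depth:
            continue
        states[s] = prefix
        for i in inputs:
            trans[(s, i)] = (signature(box, prefix + [i], inputs),
                             run(box, prefix + [i]))
            queue.append(prefix + [i])
    return states, trans

states, trans = infer_mealy(BlackBox(), ["toggle", "read"])
print(len(states))  # 2 hidden states recovered
```

Real inference algorithms (Angluin-style L* and its Mealy variants) replace the fixed-depth signature with systematically grown distinguishing sequences and equivalence queries.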
Benni, Benjamin. "Un modèle de raisonnement pour les opérateurs de composition logicielle décrits en boite noire." Thesis, Université Côte d'Azur (ComUE), 2019. http://www.theses.fr/2019AZUR4096.
The complexity of software systems made it necessary to split them up and reunite them afterwards. Separating concerns is a well-studied challenge, and teams divide the work to be done beforehand. Still, separating without considering the recomposition leads to rushed, unsafe, and time-consuming recomposition. The composition should create the right system with minimal human effort. Composition operators are often ad hoc solutions developed by non-specialist development teams. They are not developed using high-level formalisms and end up being too complicated or too poorly formalized to support proper reasoning. We call them "black boxes", as existing techniques requiring knowledge of their internals cannot be applied or reused. However, black-box operators, like others, must ensure guarantees: one must assess their idempotency to use them in a distributed context; provide an average execution time to assess usage in a reactive system; and check conflicts to validate that the composed artifact conforms to business properties. Despite the black-box aspect, none of these properties is domain-specific. In this thesis, we present a domain-independent approach that enables (i) reasoning about composition equations and (ii) composing them safely, (iii) by assessing properties similar to those from the state of the art. We validated the approach in heterogeneous application domains: 19 versions of the Linux kernel with 54 rewriting rules, fixing 13 antipatterns in 22 Android apps, and validating the efficiency of the approach on the composition of 20,000 Docker images.
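A property such as idempotency can be checked on a black-box operator only by calling it. A minimal property-based sketch, with a hypothetical merge operator standing in for a real composition operator:

```python
import random

# Hypothetical black-box composition operator: a left-biased merge of two
# configuration dictionaries. We may only call it, never inspect it.
def compose(left, right):
    merged = dict(right)
    merged.update(left)
    return merged

def probably_idempotent(op, samples):
    """Property-based evidence (not proof) that op(x, x) == x holds."""
    return all(op(x, x) == x for x in samples)

rng = random.Random(0)
samples = [{f"k{i}": rng.randint(0, 9) for i in range(n)} for n in range(6)]
print(probably_idempotent(compose, samples))  # True for this operator
```

A passing check only gathers evidence over the sampled artifacts; establishing the property for all inputs requires the kind of reasoning framework the thesis develops.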
Irfan, Muhammad Naeem. "Analyse et optimisation d'algorithmes pour l'inférence de modèles de composants logiciels." Phd thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00767894.
Romero, Ugalde Héctor Manuel. "Identification de systèmes utilisant les réseaux de neurones : un compromis entre précision, complexité et charge de calculs." Thesis, Paris, ENSAM, 2013. http://www.theses.fr/2013ENAM0001/document.
This report concerns black-box nonlinear system identification. Among the numerous techniques developed in this field over the last decades, the neural-network approach to complex system model estimation remains worth investigating. Even if accurate models have been derived, the main drawbacks of these techniques remain the large number of parameters required and, as a consequence, the significant computational cost needed to reach the desired level of model accuracy. To address these drawbacks, we developed a complete and efficient system-identification methodology yielding models that balance accuracy, complexity and computational cost, by proposing, firstly, new neural-network structures particularly suited to a very wide range of practical nonlinear system modelling tasks; secondly, a simple and efficient model-reduction technique; and, thirdly, a procedure for reducing computational cost. It is important to notice that these two reduction techniques can be applied to a very large range of neural-network architectures under two simple, non-restrictive assumptions. Finally, the last important contribution of this work is to have shown that the estimation phase can be carried out in a robust framework if the quality of the identification data requires it. Application examples, in simulation and on a real process, satisfactorily validated all the contributions of this thesis.
Romero, Ugalde Héctor Manuel. "Identification de systèmes utilisant les réseaux de neurones : un compromis entre précision, complexité et charge de calculs." Phd thesis, Ecole nationale supérieure d'arts et métiers - ENSAM, 2013. http://pastel.archives-ouvertes.fr/pastel-00869428.
Ouenzar, Mohammed. "Validation de spécifications de systèmes d'information avec Alloy." Mémoire, Université de Sherbrooke, 2013. http://hdl.handle.net/11143/6594.
Dominique, Cyril. "Modélisation dynamique des modules actifs à balayage électronique par séries de Volterra et intégration de ces modèles pour une simulation de type système." Paris 6, 2002. http://www.theses.fr/2002PA066106.
Ros, Raymond. "Optimisation Continue Boîte Noire : Comparaison et Conception d'Algorithmes." Phd thesis, Université Paris Sud - Paris XI, 2009. http://tel.archives-ouvertes.fr/tel-00595922.
Longuet, Delphine. "Test à partir de spécifications axiomatiques." Phd thesis, Université d'Evry-Val d'Essonne, 2007. http://tel.archives-ouvertes.fr/tel-00258792.
Full textLa sélection des données à soumettre au logiciel peut être effectuée selon différentes approches. Lorsque la phase de sélection d'un jeu de tests est opérée à partir d'un objet de référence décrivant plus ou moins formellement le comportement du logiciel, sans connaissance de l'implantation elle-même, on parle de test « boîte noire ». Une des approches de test boîte noire pour laquelle un cadre formel a été proposé est celle qui utilise comme objet de référence une spécification logique du système sous test.
Le cadre général de test à partir de spécifications logiques (ou axiomatiques) pose les conditions et les hypothèses sous lesquelles il est possible de tester un système. La première hypothèse consiste à considérer le système sous test comme un modèle formel implantant les opérations dont le comportement est décrit par la spécification. La seconde hypothèse a trait à l'observabilité du système sous test. Il faut fixer la forme des formules qui peuvent être interprétées par le système, c'est-à-dire qui peuvent être des tests. On se restreint généralement au moins aux formules qui ne contiennent pas de variables. Une fois ces hypothèses de test posées, on dispose d'un jeu de tests initial, celui de toutes les formules observables qui sont des conséquences logiques de la spécification.
Le premier résultat à établir est l'exhaustivité de cet ensemble, c'est-à-dire sa capacité à prouver la correction du système s'il pouvait être soumis dans son intégralité. Le jeu de tests exhaustif étant le plus souvent infini, une phase de sélection intervient afin de choisir un jeu de tests de taille finie et raisonnable à soumettre au système. Plusieurs approches sont possibles. L'approche suivie dans ma thèse, dite par partition, consiste a diviser le jeu de tests exhaustif initial en sous-jeux de tests, selon un certain critère de sélection relatif à une fonctionnalité ou à une caractéristique du système que l'on veut tester. Une fois cette partition suffisamment fine, il suffit de choisir un cas de test dans chaque sous-jeu de test obtenu en appliquant l'hypothèse d'uniformité (tous les cas de test d'un jeu de test sont équivalents pour faire échouer le système). Le deuxième résultat à établir est que la division du jeu de tests initial n'ajoute pas (correction de la procédure) et ne fait pas perdre (complétude) de cas de test.
Dans le cadre des spécifications algébriques, une des méthodes de partition du jeu de tests exhaustif qui a été très étudiée, appelée dépliage des axiomes, consiste à procéder à une analyse par cas de la spécification. Jusqu'à présent, cette méthode s'appuyait sur des spécifications équationnelles dont les axiomes avaient la caractéristique d'être conditionnels positifs (une conjonction d'équations implique une équation).
Le travail de ma thèse a eu pour but d'étendre et d'adapter ce cadre de sélection de tests à des systèmes dynamiques spécifiés dans un formalisme axiomatique, la logique modale du premier ordre. La première étape a consisté à généraliser la méthode de sélection définie pour des spécifications équationnelles conditionnelles positives aux spécifications du premier ordre. Ce cadre de test a ensuite été d'adapté à des spécifications modales du premier ordre. Le premier formalisme de spécification considéré est une extension modale de la logique conditionnelle positive pour laquelle le cadre de test a été initialement défini. Une fois le cadre de test adapté aux spécifications modales conditionnelles positives, la généralisation aux spécifications modales du premier ordre a pu être effectuée.
Dans chacun de ces formalismes nous avons effectué deux tâches. Nous avons d'une part étudié les conditions nécessaires à imposer à la spécification et au système sous test pour obtenir l'exhaustivité du jeu de tests initial. Nous avons d'autre part adapté et étendu la procédure de sélection par dépliage des axiomes à ces formalismes et montré sa correction et sa complétude. Dans les deux cadres généraux des spécifications du premier ordre et des spécifications modales du premier ordre, nous avons montré que les conditions nécessaires à l'exhausitivité du jeu de test visé étaient mineures car faciles à assurer dans la pratique, ce qui assure une généralisation satisfaisante de la sélection dans ce cadre.
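The partition-based selection with the uniformity hypothesis described in this abstract can be sketched in a few lines. The system under test (absolute value) and the selection criterion (the sign of the input, the case analysis an axiom-unfolding step would produce) are toy assumptions:

```python
# Partition-based test selection under the uniformity hypothesis (sketch):
# split an exhaustive test set (here finite, for illustration) into classes
# by a selection criterion, then keep one representative per class.
def select(tests, criterion):
    classes = {}
    for t in tests:
        classes.setdefault(criterion(t), t)   # first test of each class
    return sorted(classes.values())

# Toy system under test: absolute value. Criterion: the sign of the input.
exhaustive = range(-3, 4)
selected = select(exhaustive, lambda x: (x > 0) - (x < 0))
print(selected)  # [-3, 0, 1]
```

Under the uniformity hypothesis, any element of a class may represent the whole class, so three tests replace the seven of the exhaustive set without losing the case coverage.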
Loshchilov, Ilya. "Surrogate-Assisted Evolutionary Algorithms." Phd thesis, Université Paris Sud - Paris XI, 2013. http://tel.archives-ouvertes.fr/tel-00823882.
Bout, Erwan David Mickaël. "Poïétique et procédure : pour une esthétique de la boîte noire." Paris 1, 2009. http://www.theses.fr/2009PA010631.
Barbillon, Pierre. "Méthodes d'interpolation à noyaux pour l'approximation de fonctions type boîte noire coûteuses." Phd thesis, Université Paris Sud - Paris XI, 2010. http://tel.archives-ouvertes.fr/tel-00559502.
Saives, Jérémie. "Identification Comportementale "Boîte-noire" des Systèmes à Evénements Discrets par Réseaux de Petri Interprétés." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLN018/document.
This thesis proposes a method to identify compact and expressive models of closed-loop reactive discrete event systems (DES), for reverse engineering or certification. The identification is passive and black-box: accessible knowledge is limited to input/output signals. Interpreted Petri nets (IPN) represent both the observable behaviour (direct input/output causalities) and the unobservable behaviour (internal state evolutions) of the system. This thesis aims at identifying IPN models from an observed sequence of I/O vectors. The proposed contributions extend previous results towards scalability, to deal with realistic systems which exhibit concurrency. Firstly, the construction of the observable part of the IPN is improved by the addition of a filter limiting the effect of concurrency; it detects and removes spurious synchronizations caused by the controller. Then, a new approach is proposed to improve the discovery of the unobservable part, based on the use of projections, which guarantees the reproduction of the observed behaviour despite concurrency. An efficient heuristic is proposed to compute a model adapted to reverse engineering while limiting the computational cost. Finally, a distributed approach is proposed to further reduce the computational cost, by automatically partitioning the system into subsystems. The cumulative effect of these contributions is validated on a system of realistic size.
Pamart, Pierre-Yves. "Contrôle des décollements en boucle fermée." Phd thesis, Université Pierre et Marie Curie - Paris VI, 2011. http://tel.archives-ouvertes.fr/tel-00659979.
Yakhou, Karim. "Validation expérimentale d'un modèle dynamique global de boîte de vitesses automobile." Lyon, INSA, 1999. http://theses.insa-lyon.fr/publication/1999ISAL0092/these.pdf.
Global numerical models for simulating the dynamic behaviour of automobile gearboxes have been developed. They are based on the finite element method, and their principal originality is to integrate all of the couplings between the various technological components (shafts, gears, bearings, casings). Before being exploited in engineering and design departments, these numerical models must be validated against experiments; that is the purpose of this thesis. The complexity of such mechanical structures makes it difficult to implement "traditional" updating procedures based on the mathematical methods described in the literature. Indeed, the gearbox cannot be isolated from the rest of the power-transmission chain, and the tests must be carried out on a mechanism under load and possibly under operating conditions. A test campaign organized in two main stages was therefore implemented. The first is devoted to the study of the system under load but non-rotating, and follows a step-by-step procedure: starting with the shafts, the other components are gradually integrated into the studied system, first the bearings, then the gears and finally the casings. The modelling of these various parts has thus been validated and the existence of significant couplings confirmed. In the second stage, the gearbox is studied in operating conditions. Privileged observation parameters were selected (transmission error, bearing loads), and an original load sensor, directly integrated into a ball bearing in order to preserve the architecture and the operating conditions of the gearbox, was manufactured using piezoelectric-film technology.
A good agreement between experiments and computations was established (especially regarding vibration amplitudes), confirming the validity of the main modelling assumptions (linearization of the dynamic behaviour around a static working point, use of the quasi-static transmission error under load to model the excitation generated by the gears).
Lavalle, Julien. "Modèle effectif de matière noire fermionique - Recherche de matière noire supersymétrique avec le télescope gamma CELESTE." Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2004. http://tel.archives-ouvertes.fr/tel-00142472.
Graux, François. "Méthodologie de modélisation boîte noire de circuits hyperfréquences non linéaires par réseaux de neurones : applications au radar." Lille 1, 2001. https://pepite-depot.univ-lille.fr/RESTREINT/Th_Num/2001/50376-2001-47.pdf.
Cassabois, Guillaume. "Origines et limites du modèle de l'atome artificiel pour une boîte quantique de semiconducteurs." Habilitation à diriger des recherches, Université Pierre et Marie Curie - Paris VI, 2006. http://tel.archives-ouvertes.fr/tel-00011932.
Full textpour étudier les propriétés électroniques et optiques des boîtes quantiques de semiconducteurs. Elle a conduit à des expériences élégantes qui utilisent les concepts de base de la physique quantique de systèmes élémentaires et qui montrent l'intérêt des boîtes quantiques pour l'information quantique.
Ces expériences ont cependant toutes en commun d'utiliser des boîtes quantiques à basse température et les mesures de spectroscopie optique sont faites sur l'état excitonique fondamental de la boîte quantique. Cette constatation lève d'emblée le problème des limites de validité du modèle de l'atome artificiel dont l'utilisation, certes fertile, semble pourtant se resteindre à des conditions expérimentales très précises.
Dans ce document, nous allons aborder plus généralement l'étude des propriétés électroniques et optiques de boîtes quantiques dans le système modèle de nanostructures auto-organisées InAs/GaAs afin de cerner les limites de validité du modèle de l'atome artificiel.
Cheaito, Hassan. "Modélisation CEM des équipements aéronautiques : aide à la qualification de l’essai BCI." Thesis, Lyon, 2017. http://www.theses.fr/2017LYSEC039/document.
Electronic equipment intended to be integrated in aircraft is subject to normative requirements, and EMC (electromagnetic compatibility) qualification tests have become mandatory. This PhD thesis, carried out within the framework of the SIMUCEDO project (EMC SIMUlation based on the DO-160 standard), contributes to the modelling of the Bulk Current Injection (BCI) qualification test. The concept, detailed in Section 20 of the DO-160 standard, is to inject a noise current into the cables via a probe, then monitor the EUT during the test. Among the qualification tests, the BCI test is one of the most constraining and time-consuming; modelling it therefore saves time and gives better control of the parameters that influence the success of the equipment under test. The modelling of the test was split into two parts: the equipment under test (EUT) on one hand, and the injection probe with the cables on the other. This thesis focuses on the EUT modelling. A "gray box" model was proposed, combining a "black box" model with an "extensive" model. The gray box is based on the measurement of standard impedances, and its identification uses a "pi" model. The model, which has the advantage of taking into account several configurations of the EUT, was validated on an analog-to-digital converter (ADC). Another, so-called modal approach, expressed in terms of common mode and differential mode, was also proposed; it takes mode conversion into account when the EUT is asymmetrical. Specific PCBs were designed to validate the developed equations. An investigation was carried out to rigorously define the modal impedances, in particular the common-mode (CM) impedance; we showed that two definitions of the CM impedance found in the literature disagree. Furthermore, the mode-conversion ratio (the Longitudinal Conversion Loss, LCL) was quantified using analytical equations based on the modal approach.
An N-input model was then extended to handle industrial complexity. The EUT model was combined with the clamp and cable models (made by the G2ELAB laboratory), and experimental measurements were carried out to validate the combined model. According to these measurements, the CM current is influenced by the setup of the cables as well as by the EUT, and the connection of the shield to the ground plane was shown to be the most influential parameter on the CM current distribution.
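The common-mode/differential-mode decomposition mentioned above can be illustrated with one common textbook convention (not the thesis's own derivation, which precisely questions these definitions): with per-line impedances Z1 and Z2 to ground, the common mode sees the lines in parallel and the differential mode sees them in series, and any asymmetry (Z1 ≠ Z2) produces mode conversion.

```python
# Modal impedances of a two-conductor port (generic illustration).
def modal_impedances(Z1, Z2):
    Zcm = (Z1 * Z2) / (Z1 + Z2)   # both lines driven together vs ground
    Zdm = Z1 + Z2                 # lines driven in opposition
    return Zcm, Zdm

Zcm, Zdm = modal_impedances(50.0, 50.0)
print(Zcm, Zdm)  # 25.0 100.0 for a symmetric EUT
```

For a symmetric 50 Ω-per-line port this gives the familiar 25 Ω common-mode and 100 Ω differential-mode values; the literature's diverging CM definitions differ precisely in how this parallel combination is referenced.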
Dubois, Amaury. "Optimisation et apprentissage de modèles biologiques : application à lirrigation [sic l'irrigation] de pomme de terre." Thesis, Littoral, 2020. http://www.theses.fr/2020DUNK0560.
The subject of this PhD concerns one of the LISIC themes: modelling and simulation of complex systems, as well as optimization and machine learning for agronomy. The objectives of the thesis are to answer questions of irrigation management for the potato crop and to develop decision-support tools for farmers. The choice of this crop is motivated by its important share in the Hauts-de-France region. The manuscript is divided into three parts. The first part deals with continuous multimodal optimization in a black-box context, followed by a presentation of a methodology for the automatic calibration of biological-model parameters through reformulation as a black-box multimodal optimization problem. The relevance of inverse analysis as a methodology for the automatic parameterization of large models is then demonstrated. The second part presents two new algorithms, UCB Random with Decreasing Step-size and UCT Random with Decreasing Step-size. These algorithms are designed for continuous multimodal black-box optimization, where the choice of the position of the initial local search is assisted by a reinforcement-learning algorithm. The results show that they outperform (Quasi-)Random with Decreasing Step-size algorithms. Finally, the last part focuses on machine-learning principles and methods. Reformulating the prediction of soil water content at one-week intervals as a supervised learning problem enabled the development of a new decision-support tool for crop management.
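The baseline family these algorithms build on, random search with a decreasing step size, is easy to sketch. The UCB/UCT layer that selects where to start the local search is omitted here, and the test problem and settings are illustrative:

```python
import random

# (Quasi-)Random search with decreasing step size: perturb the incumbent,
# keep only improving moves, and shrink the search neighbourhood.
def random_decreasing_step(f, x0, iters=300, step0=1.0, decay=0.98, seed=1):
    rng = random.Random(seed)
    x, fx, step = list(x0), f(x0), step0
    for _ in range(iters):
        cand = [xi + rng.uniform(-step, step) for xi in x]
        fc = f(cand)
        if fc < fx:               # greedy acceptance
            x, fx = cand, fc
        step *= decay             # decreasing step size
    return x, fx

sphere = lambda v: sum(vi * vi for vi in v)
best_x, best_f = random_decreasing_step(sphere, [3.0, -2.0])
print(best_f <= sphere([3.0, -2.0]))  # True: the best value never worsens
```

On a multimodal landscape the outcome depends heavily on where the search starts, which is exactly the decision the thesis delegates to a bandit-style (UCB/UCT) learner.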
Gherson, David. "Gravitino dans l'Univers primordial : un modèle d'extra-dimension et de matière noire." Phd thesis, Université Claude Bernard - Lyon I, 2007. http://tel.archives-ouvertes.fr/tel-00283293.
Gherson, David. "Gravitino dans l’Univers primordial : un modèle d’extra-dimension et de matière noire." Lyon 1, 2007. http://tel.archives-ouvertes.fr/docs/00/28/32/93/PDF/thesegherson.pdf.
This work can be related to Horava-Witten M-theory, in which the Universe could appear five-dimensional at a stage of its evolution, and also to theories of baryogenesis through leptogenesis, which imply high reheating temperatures after inflation. The cosmological model studied is set within a five-dimensional supergravity with the extra dimension compactified on an orbifolded circle, where the matter and gauge fields are located on one of the two branes at the orbifold fixed points and the supergravity fields can propagate in all spatial dimensions. In the model, dark matter is made of neutralinos, assumed to be the lightest supersymmetric particle. We have shown that there are curves of constraints between the size of the extra dimension and the reheating temperature of the Universe after inflation. The constraints come from measurements of the amount of dark matter in the Universe and from the model of Big Bang nucleosynthesis of light elements.
Lasluisa, Daniel. "Contributions to optimization in energy : from bilevel optimization to optimal design of renewable energy plant." Electronic Thesis or Diss., Perpignan, 2024. http://www.theses.fr/2024PERP0009.
In this thesis work, we develop and apply optimization techniques to energy design and management. First we focus on bilevel optimization and develop a new theoretical analysis for single-leader-multi-follower games with cardinality constraints, which is then applied to the optimal location of charging stations for electric vehicles. The second part is dedicated to the economic optimization of solar power plants from both a long-term and a short-term perspective. An innovative global optimization approach mixing optimal design of storage and optimal operation in a market context is developed. Then, at the short-term scale, the optimal control of the energy production of a solar power plant is analysed.
Peirani, Sébastien. "Aspects dynamiques et physiques de la matière noire." Nice, 2005. http://www.theses.fr/2005NICE4101.
This work aims to study the dynamics of dark-matter halos as well as the possibility of detecting gamma rays resulting from the annihilation of neutralinos, supposed to be the constituent of dark matter (DM). In a first step, numerical simulations were performed in the context of Lambda-CDM cosmology, and we studied the effects of merger/accretion on the angular-momentum evolution of halos and their dynamical relaxation. Our results indicate that halos acquire angular momentum essentially by the transfer of orbital angular momentum to spin during merger/accretion events rather than by tidal torques. In a second step, we studied the effects of including a cosmological-constant term in the spherical Tolman-Lemaître collapse model and re-derived masses for some nearby groups of galaxies, in particular the Local Group and Virgo. Our procedure yields a new evaluation of the Hubble constant in good agreement with recent determinations by other methods. Finally, we predicted gamma-ray fluxes from different sources such as M31, M87, Draco and Sagittarius, and their detectability by the forthcoming GLAST satellite. The analysis of detection or non-detection at different energy thresholds makes it possible to constrain the neutralino mass and the spatial distribution of DM in those objects.
Laugel, Thibault. "Interprétabilité locale post-hoc des modèles de classification "boites noires"." Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS215.
This thesis focuses on the field of XAI (eXplainable AI), and more particularly on the local post-hoc interpretability paradigm, that is to say, the generation of explanations for a single prediction of a trained classifier. In particular, we study a fully agnostic context, meaning that the explanation is generated without using any knowledge about the classifier (treated as a black box) or about the data used to train it. In this thesis, we identify several issues that can arise in this context and that may be harmful to interpretability. We study each of these issues and propose novel criteria and approaches to detect and characterize them. The three issues we focus on are: the risk of generating explanations that are out of distribution; the risk of generating explanations that cannot be associated with any ground-truth instance; and the risk of generating explanations that are not local enough. These risks are studied through two specific categories of interpretability approaches: counterfactual explanations and local surrogate models.
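A counterfactual explanation of the kind studied here can be sketched with a toy growing-sphere-style search (a simplified cousin of such methods, not the thesis's algorithm; the classifier below is a stand-in):

```python
import numpy as np

# Black-box classifier: only predictions are observable.
def classifier(x):
    return int(x[0] + x[1] > 1.0)

def counterfactual(x, predict, step=0.05, max_r=3.0):
    """Grow a ring of candidates around x and return the first one whose
    prediction differs: an approximation of the closest counterfactual."""
    y0 = predict(x)
    angles = np.linspace(0, 2 * np.pi, 64, endpoint=False)
    r = step
    while r <= max_r:
        for a in angles:
            c = (x[0] + r * np.cos(a), x[1] + r * np.sin(a))
            if predict(c) != y0:
                return c
        r += step
    return None

cf = counterfactual((0.2, 0.2), classifier)
print(classifier(cf))  # 1: the explanation crosses the decision boundary
```

The thesis's concerns map directly onto this sketch: nothing guarantees that `cf` lies in the data distribution, corresponds to a real instance, or stays meaningfully local.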
Berthou, Thomas. "Développement de modèles de bâtiment pour la prévision de charge de climatisation et l'élaboration de stratégies d'optimisation énergétique et d'effacement." Phd thesis, Ecole Nationale Supérieure des Mines de Paris, 2013. http://pastel.archives-ouvertes.fr/pastel-00935434.
Lefort, Romain. "Contribution des technologies CPL et sans fil à la supervision des réseaux de distribution d'électricité." Thesis, Poitiers, 2015. http://www.theses.fr/2015POIT2253/document.
Establishing a supervisory infrastructure allows smarter management of the distribution network than costly reinforcement, in response to the new constraints of energy control (consumption, renewables, electric vehicles, ...). To transmit data, power-line communication (PLC) technologies have an advantage in this context: they superimpose high-frequency (HF) signals on the 50/60 Hz electrical signal. However, electric networks were not designed for this application and present difficult propagation conditions. This research work contributes to the development of a simulation platform with the objective of transmitting data at up to 1 MHz. Each network element is first studied individually, then together with the others, to estimate "outdoor PLC" transmission performance. The first element studied is the variation of the network as a function of frequency and time; several 24-hour disturbance measurements on LV customers are presented. The second element is the transformers connecting medium voltage (MV) and low voltage (LV); the proposed modelling method is based on a "lumped model" and a "black-box model", applied to a 100 kVA H61 transformer, the type most commonly used by the French distribution system operator in rural and suburban networks. The third element is the power line used in MV and LV networks; the proposed modelling method is based on a "cascaded model" derived from transmission-line theory, applied to a power line used in the LV underground network. Each model is obtained from various impedance measurements. Finally, an introductory study of mobile radio communication for remote monitoring of the distribution network is presented.
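The "cascaded model" from transmission-line theory amounts to multiplying per-segment ABCD (chain) matrices. A minimal sketch with illustrative, not measured, line parameters:

```python
import cmath

# ABCD (chain) matrix of one uniform transmission-line segment, from the
# telegrapher's equations: Zc is the characteristic impedance and gamma
# the propagation constant. Values below are illustrative only.
def segment_abcd(Zc, gamma, length):
    gl = gamma * length
    return [[cmath.cosh(gl), Zc * cmath.sinh(gl)],
            [cmath.sinh(gl) / Zc, cmath.cosh(gl)]]

def cascade(matrices):
    """Multiply ABCD matrices in order to model segments in cascade."""
    A = [[1, 0], [0, 1]]
    for m in matrices:
        A = [[A[0][0]*m[0][0] + A[0][1]*m[1][0], A[0][0]*m[0][1] + A[0][1]*m[1][1]],
             [A[1][0]*m[0][0] + A[1][1]*m[1][0], A[1][0]*m[0][1] + A[1][1]*m[1][1]]]
    return A

# Sanity check: two identical 100 m segments equal one 200 m segment.
seg = segment_abcd(Zc=50 + 0j, gamma=0.001 + 0.01j, length=100.0)
one = segment_abcd(Zc=50 + 0j, gamma=0.001 + 0.01j, length=200.0)
two = cascade([seg, seg])
print(abs(two[0][0] - one[0][0]) < 1e-9)  # True
```

The same multiplication chains heterogeneous segments (different cable sections, joints, derivations), which is what makes the cascaded representation convenient for a full LV feeder.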
Benchekroun, Fouad. "Etude des inclusions fluides et modèle de dépôt de l'or dans le gisement de Salsigne (Montagne Noire)." Toulouse 3, 1995. http://www.theses.fr/1995TOU30120.
Perrault-Hébert, Maude. "Modélisation de la régénération de l’épinette noire suite au passage d’un feu en forêt boréale fermée." Mémoire, Université de Sherbrooke, 2016. http://hdl.handle.net/11143/9460.
Buzzi, Adeline. "Nouveaux tests du modèle cosmologique : élaboration et applications." Aix-Marseille 1, 2010. http://www.theses.fr/2010AIX11057.
The Cosmological Model today describes the evolution of the universe on large scales. In this thesis, we developed new cosmological tests and applied them to state-of-the-art data. We first propose an observational criterion to reject a non-metric scenario of cosmic acceleration. We then propose a strategy that uses the galaxy distribution to validate the consistency of one of the pillars of the Cosmological Model: the Copernican Principle. We quantitatively estimate the scale of isotropy of this distribution around different distant observers (finding a scale of 150 Mpc/h) and compare it with results obtained from N-body simulations. Finally, we elaborate and apply for the first time a geometrical test based on the Alcock-Paczynski effect. We analyze the symmetry properties of binary systems of galaxies and compute the necessary corrections for their proper velocities. We obtain values for the parameters of the model and independently confirm the current cosmological paradigm. All the proposed tests can be implemented using available data and will be optimally exploited with future surveys (SNeIa, galaxy redshift surveys such as EUCLID or BigBOSS).
Chbib, Dyaa. "Détermination des paramètres cosmologiques dans le cadre du modèle de Friedmann-Lemaîtres." Thesis, Aix-Marseille, 2017. http://www.theses.fr/2017AIXM0294/document.
A century after the Friedmann-Lemaître model of the Universe, observations support it with a cosmological constant $\Lambda$ and a cold, pressureless (dust) dark matter component dominating the baryonic one, denoted the $\Lambda$CDM model. The acceleration of the expansion of the Universe, confirmed by the Hubble diagram of supernovae in 1998, imposes a strictly positive value on the cosmological constant. My thesis work focuses on estimating the cosmological parameters of the standard model using the null correlation technique, an approach that is more robust than the usual techniques. This work models samples of quasar and supernova events, which enables us to generate samples in order to validate the statistical approaches. We used data from the Sloan Digital Sky Survey (SDSS) for quasars, and from the SuperNova Legacy Survey (SNLS) and SDSS-II for supernovae. The statistical inferences suggest a spatially closed Universe and a weaker presence of dark matter than in the standard model. Such a statistical analysis can be used to constrain dark energy models. The technique might also be useful for analyzing clusters of galaxies observed through the Sunyaev-Zel'dovich effect, with a view to deriving the cosmological model and answering the question of the contribution of massive neutrinos to cluster formation in the primordial era of the Universe.
Sanou, Loé. "Définition et réalisation d'une boîte à outils générique dédiée à la Programmation sur Exemple." Phd thesis, Université de Poitiers, 2008. http://tel.archives-ouvertes.fr/tel-00369484.
Berthou, Thomas. "Développement de modèles de bâtiment pour la prévision de charge de climatisation et l’élaboration de stratégies d’optimisation énergétique et d’effacement." Thesis, Paris, ENMP, 2013. http://www.theses.fr/2013ENMP0030/document.
To meet the objectives of reducing energy consumption and increasing the flexibility of buildings' energy demand, load forecast models are needed that are easy to adapt on site and efficient for implementing energy optimization and load-shedding strategies. This thesis compares several inverse model architectures ("black box", "grey box"). A 2nd-order semi-physical model (R6C2) was selected to forecast load curves and average indoor temperature for heating and cooling; it is also able to simulate situations absent from the learning phase, such as load shedding. Three energy optimization and load-shedding strategies adapted to operational constraints are studied. The first optimizes the night set-back to reduce consumption while reaching the comfort temperature in the morning. The second optimizes the set-point temperatures over a day in the context of variable energy prices, thus reducing the energy bill. The third allows load curtailment in buildings by limiting power while meeting specified comfort criteria. The R6C2 model and the strategies were confronted with a real building (an elementary school). The study shows that it is possible to forecast the electrical power and average temperature of a complex building with a single-zone model; the developed strategies are assessed and the limitations of the model are identified.
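Grey-box building models of the kind this abstract describes represent the building as a small resistance-capacitance network. The sketch below simulates a generic two-capacitance network in that spirit; the topology, parameter values and inputs are illustrative assumptions, not the actual R6C2 network identified in the thesis.

```python
import numpy as np

def simulate_r2c2(t_out, heat, dt=60.0,
                  r_ie=0.01, r_ea=0.05, c_i=1e7, c_e=5e7):
    """Two-capacitance grey-box thermal model (illustrative topology).
    States: indoor air temperature t_i and envelope temperature t_e.
    t_out: outdoor temperature series [degC]; heat: heating power [W];
    r_* in K/W, c_* in J/K, dt in seconds (explicit Euler integration)."""
    n = len(t_out)
    t_i = np.empty(n)
    t_e = np.empty(n)
    t_i[0], t_e[0] = 20.0, 15.0
    for k in range(n - 1):
        # heat balance on the indoor air node
        dti = ((t_e[k] - t_i[k]) / r_ie + heat[k]) / c_i
        # heat balance on the envelope node
        dte = ((t_i[k] - t_e[k]) / r_ie + (t_out[k] - t_e[k]) / r_ea) / c_e
        t_i[k + 1] = t_i[k] + dt * dti
        t_e[k + 1] = t_e[k] + dt * dte
    return t_i

# 24 h with a constant 0 degC outside and 2 kW of heating, 1-min steps
steps = 24 * 60
ti = simulate_r2c2(np.zeros(steps), np.full(steps, 2000.0))
```

In an identification setting, the resistances and capacitances would be fitted to measured load and temperature data rather than fixed by hand as here.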
Begin, Thomas. "Modélisation et calibrage automatiques de systèmes." Paris 6, 2008. http://www.theses.fr/2008PA066540.
Batina, Jean de Dieu. "Une nouvelle approche du développement économique des pays d'Afrique noire au regard du modèle des pays du sud-est asiatique." Paris 2, 2006. http://www.theses.fr/2006PA020035.
Assous, Maxime. "Modèle progressif de la maladie de parkinson après dysfonctionnement aigu des transporteurs du glutamate dans la substance noire chez le rat." Thesis, Aix-Marseille, 2013. http://www.theses.fr/2013AIXM4034/document.
Parkinson's disease (PD) is characterized by the progressive degeneration of substantia nigra (SN) dopaminergic neurons. Central players in PD pathogenesis, including mitochondrial dysfunction and oxidative stress, might affect the function of excitatory amino acid transporters (EAATs). Here, we investigated whether acute EAAT dysfunction might in turn contribute to the vicious cycles sustaining the progression of dopamine neuron degeneration. PDC application on nigral slices triggered sustained glutamate-mediated excitation selectively in dopamine neurons. An in vivo time-course study (4-120 days) revealed that a single intranigral PDC injection triggers progressive degeneration of dopamine neurons exclusively, with unilateral-to-bilateral and caudorostral evolution. This degenerative process combines GSH depletion and a specific increase in γ-glutamyltranspeptidase activity, oxidative stress, excitotoxicity, autophagy and glial reaction. The antioxidant N-acetylcysteine and the NMDA receptor antagonists ifenprodil and memantine provided significant neuroprotection. Transient compensatory changes in dopamine function markers in the SN and striatum accompanied cell loss and axonal dystrophy. Motor abnormalities (hypolocomotion and forelimb akinesia) showed late onset, when ipsilateral neuronal loss exceeded 50%. These findings outline a functional link between EAAT dysfunction and several PD pathogenic mechanisms and pathological hallmarks, and provide the first acutely triggered rodent model of progressive parkinsonism.
Cabana, Antoine. "Contribution à l'évaluation opérationnelle des systèmes biométriques multimodaux." Thesis, Normandie, 2018. http://www.theses.fr/2018NORMC249/document.
The development and spread of connected devices, in particular smartphones, requires the implementation of authentication methods. For ergonomic reasons, manufacturers integrate biometric systems in order to handle logical access control. These biometric systems grant access to critical data and applications (payment, e-banking, privacy-sensitive data such as emails). Evaluation processes make it possible to estimate the systems' suitability for these uses. To improve recognition performance, manufacturers may perform multimodal fusion. In this thesis, the evaluation of operational biometric systems is studied and an implementation is presented. A second contribution studies the quality estimation of speech samples in order to predict recognition performance.
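A common form of the multimodal fusion this abstract mentions is score-level fusion: each matcher's raw scores are normalized to a common range, then combined by a weighted sum. The sketch below shows that generic scheme only; the modalities, scores and weight are invented for illustration and do not come from the thesis.

```python
def minmax_normalize(scores):
    """Map raw comparison scores to [0, 1] (min-max rule)."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

def fuse(face_scores, voice_scores, w_face=0.6):
    """Weighted-sum score-level fusion of two modalities."""
    f = minmax_normalize(face_scores)
    v = minmax_normalize(voice_scores)
    return [w_face * a + (1 - w_face) * b for a, b in zip(f, v)]

# Toy example: 4 comparison attempts, attempt 0 is the genuine user
face = [0.92, 0.40, 0.35, 0.50]   # hypothetical face matcher outputs
voice = [8.1, 3.2, 7.9, 2.0]      # different scale: normalization needed
fused = fuse(face, voice)
best = max(range(len(fused)), key=fused.__getitem__)
```

Normalization matters because the two matchers emit scores on incompatible scales; without it, the voice matcher would dominate the sum.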
Varelas, Konstantinos. "Randomized Derivative Free Optimization via CMA-ES and Sparse Techniques : Applications to Radars." Thesis, Institut polytechnique de Paris, 2021. http://www.theses.fr/2021IPPAX012.
In this thesis, we investigate aspects of adaptive randomized methods for black-box continuous optimization. The algorithms that we study are based on the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) and focus on large-scale optimization problems. We start with a description of CMA-ES and its relation to the Information Geometric Optimization (IGO) framework, followed by a comparative study of large-scale variants of CMA-ES. We furthermore propose novel methods that integrate tools of high-dimensional analysis within CMA-ES to obtain more efficient algorithms for large-scale partially separable problems. Additionally, we describe the methodology for algorithm performance evaluation adopted by the Comparing Continuous Optimizers (COCO) platform, and finalize the bbob-largescale test suite, a novel benchmarking suite with problems of increased dimension and low computational cost. Finally, we present the formulation, methodology and results obtained for two applications related to radar problems: the phase code optimization problem and the phased-array pattern design problem.
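To give a flavor of the randomized black-box methods discussed here, the sketch below implements a heavily simplified relative of CMA-ES: a (mu/mu, lambda) evolution strategy with isotropic Gaussian sampling and a crude geometric step-size decay, with no covariance matrix or path-based adaptation. All parameter choices are illustrative assumptions.

```python
import numpy as np

def simple_es(f, x0, sigma=0.3, lam=20, iters=200, seed=1):
    """(mu/mu, lambda) evolution strategy with isotropic sampling.
    A stripped-down relative of CMA-ES: no covariance adaptation and
    no cumulative step-size control, only a fixed geometric decay."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    mu = lam // 4                          # number of selected offspring
    for _ in range(iters):
        z = rng.standard_normal((lam, x.size))
        cand = x + sigma * z               # sample lambda offspring
        order = np.argsort([f(c) for c in cand])
        sel = z[order[:mu]]                # keep the mu best steps
        x = x + sigma * sel.mean(axis=0)   # recombination of best steps
        sigma *= 0.98                      # crude step-size shrinkage
    return x

sphere = lambda v: float(np.dot(v, v))     # classic benchmark function
xbest = simple_es(sphere, x0=[3.0, -2.0, 1.0])
```

The full CMA-ES additionally adapts a covariance matrix and the step size from the search path, which is what the large-scale variants studied in the thesis make cheaper in high dimension.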
Pouchot-Lermans, Catherine. "Un modèle de chorée de Huntington : étude électrophysiologique de la Pars reticulata de la substance noire et du noyau entopédonculaire chez le rat." Bordeaux 2, 1986. http://www.theses.fr/1986BOR22031.
Espel, Diane. "Développement d'une boîte à outils pour comprendre et prédire la dynamique spatiale et temporelle des macrophytes submergés : application aux écosystèmes fluviaux." Thesis, Toulouse, INPT, 2020. http://www.theses.fr/2020INPT0083.
Because of their multiple ecological roles, macrophytes are an important component of hydrosystems and are thus essential to conserve. However, during the summer season, high densities of submerged species cause recurrent problems in certain rivers, especially in urban areas, for users and managers, and can have negative consequences for ecosystem health. In a context of global change, the overgrowth of submerged macrophytes calls for new tools to better understand the dynamics of macrophyte meadows and to predict them under different environmental scenarios. In this context, this thesis aims to develop a toolbox accompanying a multispecific mechanistic model of submerged aquatic plant production, the DEMETHER model. This model simulates the spatial and temporal biomass dynamics of two common species of the middle Garonne (Myriophyllum spicatum and Ranunculus fluitans) over a river section of about one kilometer, taking into account the variability of local hydromorphological and meteorological conditions. To do so, the model requires certain ecophysiological parameters and spatialized biomass data for its calibration, as well as bathymetric and substrate data. The first phase of this work therefore consisted of field surveys to characterize the study site and of developing numerical and experimental tools for acquiring these data. The first tool developed aims to monitor submerged macrophytes by remote sensing. The method explored here confirmed the potential of high-spatial-resolution (50 cm) multispectral imagery from the Pléiades satellites, processed by machine learning algorithms, to map the distribution of macrophyte beds and quantify their biomass in situ. This approach also led us to propose an optimized sampling strategy for macrophytes in large rivers for future investigations.
This work opens up interesting perspectives for applying the method to drone imagery and continuing its development toward automated monthly monitoring. In parallel, a tool was developed for measuring physiological parameters via oximetry and applied to the two species of interest. The data obtained provide information on the photosynthetic and respiratory capacities of each species in response to limiting factors (light, temperature). The second phase of this work consisted of applying the DEMETHER model to explore different climate change scenarios. Simulations of macrophyte biomass dynamics were carried out for current thermal conditions and for a foreseeable rise in temperatures by 2041-2070. The results showed the importance of the temperature sensitivity of certain physiological processes in explaining the distribution patterns of the two species studied, highlighting the interest of mechanistic modelling for understanding the structuring of macrophyte communities. The first results obtained with this toolbox confirmed its functionality. However, to extend its application range, each of the tools developed during the thesis will need further improvement, in particular to refine the calibration of the DEMETHER model. Specific suggestions have been made to this end.
Virgone-Carlotta, Angélique. "Intéractions microglie/neurones dans un modèle murin de neurodégénérescence induite par la 6-OHDA." Thesis, Lyon 1, 2011. http://www.theses.fr/2011LYO10308.
This thesis work aims to study microglial reaction and microglia/neuron interactions in a murine model of dopaminergic neurodegeneration induced by the injection of 6-hydroxydopamine (6-OHDA). In this model, we first describe the kinetics of microglial activation, neuronal cell loss and behavioral alterations in relation to the dopaminergic defect. In the injured substantia nigra, we observed a progressive loss of TH+ (tyrosine hydroxylase-positive) dopaminergic neurons and an early but transient microglial activation. The deleterious role of microglial activation is strongly suggested by the partial neuroprotection against 6-OHDA-induced toxicity observed in genetically modified DAP12 knock-in mice, in which microglial cells are defective in both number and function. In addition, we identified various types of cell-to-cell contacts between neurons and microglia in the injured substantia nigra. Such physical interactions were established between microglia and neuronal cell bodies several days before the peak of neuronal death, and in the majority of cases with neurons showing morphological signs of apoptosis. Finally, we also identified a new type of physical interaction consisting of microglial ramifications penetrating the soma of TH+ neurons. These interactions present similarities with the "tunnelling nanotubes" previously described in the literature and represent a particular type of penetrating microglial ramification that we named "tunnelling ramifications". Interestingly, in the injured substantia nigra, the presence of TH+ vacuoles in the cytoplasm of numerous microglial cells strongly suggests that microglial ramifications support microphagocytosis targeted toward the cytoplasm of dopaminergic neurons. The precise function and molecular mechanisms of these unique interactions remain to be assessed.
Nevertheless, our work provides a set of original data that deepens our knowledge of the dialogue between microglia and neurons in a mouse model of Parkinson's disease.
Chanraud-Carraze, Véronique. "Effet de l'inactivation du noyau sous-thalamique sur l'activité des neurones de la pars reticulata de la substance noire : un modèle d'hémiballisme chez le rat." Bordeaux 2, 1994. http://www.theses.fr/1994BOR2M106.
Cendrès-Bozzi, Christophe. "Troubles du cycle veille/sommeil liés à la maladie de Parkinson : modèle animal, mécanismes et approches thérapeutiques." Thesis, Lyon 1, 2011. http://www.theses.fr/2011LYO10081.
Motor disorders are not the only symptoms of Parkinson's disease (PD); sleep disorders such as somnolence and narcolepsy are frequently reported in PD patients. Despite much investigation worldwide, it remains unknown whether these disorders are caused by dopaminergic (DAergic) or non-DAergic neural lesions, nocturnal motor disability, or deleterious effects of anti-PD drugs. Using multiple experimental approaches (EEG and sleep-wake recordings, pharmacological dosing, immunohistochemistry) in cats treated with MPTP, which causes DAergic neuronal loss, we studied the possible correlation between the induced effects on the sleep-wake (S/W) cycle and those on DAergic neurons. MPTP (5 mg/kg/day x 5, i.p.) caused, during the acute period, slow-wave-sleep hypersomnia (SWS, up to 80% of recorded time) and a suppression of paradoxical sleep (PS), accompanied by pronounced behavioral somnolence, markedly decreased locomotion and difficulty initiating movements. The DAergic agonists L-dopa and ropinirole transiently prevented SWS hypersomnia. During the chronic period, whereas the amounts of W and SWS returned to control levels, PS transiently increased, associated with narcolepsy-like episodes. Ex vivo analyses revealed a marked decrease in TH labelling (cell bodies in the substantia nigra and terminal-like dots in the striatum), whereas cholinergic neurons in the basal forebrain and mesopontine tegmentum seemed unchanged. Thus, MPTP-treated cats showed major signs of motor and S/W disorders similar to those seen in PD patients and could serve as a useful animal model. Our results also suggest a possible correlation/causality between MPTP-induced S/W disorders and DAergic cell loss.
Coupon, Jean. "Galaxies à grand décalage spectral : mesures et contraintes cosmologiques." Paris 6, 2009. http://www.theses.fr/2009PA066581.
Tarhini, Ahmad. "Nouvelle physique, Matière noire et cosmologie à l'aurore du Large Hadron Collider." Phd thesis, Université Claude Bernard - Lyon I, 2013. http://tel.archives-ouvertes.fr/tel-00847781.
Nganga Massengo, Arnaud. "Les revendications afro-antillaises à la télévision publique française (1998-2008) : des contentieux postcoloniaux à la re-légitimation d’un modèle d’intégration." Thesis, Bordeaux 3, 2013. http://www.theses.fr/2013BOR30060.
Based on a corpus from French public channels, this study analyzes the television representation of postcolonial contentious issues at the heart of French Black mobilisations, which are structured around three main claims (visibility, discrimination and memory recognition). Expressing the will of French Blacks to exist in the public sphere, these claims brought the historic debate on the "Question noire" back to the surface from the 2000s onwards. The research, which questions the way Afro-Caribbean mobilisations were told and represented on French public television, identifies the following major trends. Firstly, the analysis of television debates underlines an "eristic problematisation" of issues related to the "Question noire", with essentially polemical media coverage. The result of this type of access to the media agenda is a constant exhumation of an ethnoracial split in media and public discourse. Secondly, the analysis of TV coverage reveals the symbolic production of an opposition between two dominant media figures: on one side, the "Ultra-républicains", playing the role of self-proclaimed defenders of the French republic, and on the other, a coalition of defenders of minority claims. Finally, the study reveals both discourses disqualifying the minorities and discourses re-legitimating the French model of integration. This thesis consists of two parts. The first deals with French Black history, presenting the historical reasons for their presence, from slavery up to decolonization. The second explores the representation of postcolonial contentious issues on French public television; structured in five chapters, it proposes a content analysis of our corpus, based on 38 broadcasts aired between 1998 and 2008.
Uhlrich, Josselin. "Contrôle de l'activation microgliale par les lymphocytes T dans un modèle murin de neurodégénérescence induite par la 6-OHDA." Thesis, Lyon 1, 2014. http://www.theses.fr/2014LYO10139/document.
This thesis work describes and analyzes the neuroinflammatory reaction that accompanies neuronal cell death in a murine model of Parkinson's disease. In this model, induced by the intrastriatal injection of 6-hydroxydopamine (6-OHDA), a toxic dopamine analog, we report the main features and kinetics of microglial activation, T-cell infiltration, loss of TH+ (tyrosine hydroxylase-positive) dopaminergic neurons and motor behavior alterations. We also assessed the presence of T-cells in the substantia nigra of Parkinson's disease patients and found that, as observed in the 6-OHDA murine model, the death of dopaminergic neurons triggers a low-grade T-cell infiltration that accompanies microglial activation. We then studied the impact of genetically determined T-cell immunodeficiency on histological and functional outcomes in the 6-OHDA model. Our results show that, compared to immunocompetent control mice, immunodeficient strains (Foxn1 KO, CD3 KO, NOD SCID and RAG KO mice) consistently presented, to varying degrees, a higher susceptibility to 6-OHDA-induced dopaminergic neurodegeneration. The observed accentuation of neuronal cell loss was accompanied by a marked increase in microglial activation and motor behavior alterations. Our work demonstrates the pathophysiological role of neuroinflammation and adaptive immunity in the 6-OHDA model. It also suggests that T-cells infiltrating the substantia nigra of Parkinson's disease patients dampen microglial activation and could, via this inhibitory effect, slow the progression of dopaminergic cell loss. Overall, this thesis work provides original data on the interactions between T-cells, microglia and dopaminergic neurons in the context of Parkinson's disease and the murine 6-OHDA model.
Dahito, Marie-Ange. "Constrained mixed-variable blackbox optimization with applications in the automotive industry." Electronic Thesis or Diss., Institut polytechnique de Paris, 2022. http://www.theses.fr/2022IPPAS017.
Numerous industrial optimization problems concern complex systems and have no explicit analytical formulation; that is, they are blackbox optimization problems. They may be mixed, namely involve different types of variables (continuous and discrete), and comprise many constraints that must be satisfied. In addition, the objective and constraint blackbox functions may be computationally expensive to evaluate. In this thesis, we investigate solution methods for such challenging problems, i.e., constrained mixed-variable blackbox optimization problems involving computationally expensive functions. As the use of derivatives is impractical, problems of this form are commonly tackled using derivative-free approaches such as evolutionary algorithms, direct search and surrogate-based methods. We investigate the performance of such deterministic and stochastic methods in the context of blackbox optimization, including a finite element test case designed for our research purposes. In particular, the performance of the ORTHOMADS instantiation of the direct search MADS algorithm is analyzed on continuous and mixed-integer optimization problems from the literature. We also propose a new blackbox optimization algorithm, called BOA, based on surrogate approximations. It proceeds in two phases: the first focuses on finding a feasible solution, while the second iteratively improves the objective value of the best feasible solution found. Experiments on instances from the literature and on applications from the automotive industry are reported; they notably include results of our algorithm with different types of surrogates and comparisons with ORTHOMADS.
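The two-phase idea described for BOA (first reach feasibility, then improve the objective through a surrogate) can be illustrated with a deliberately small sketch. Everything below is an assumption for illustration: the random feasibility search, the separable quadratic surrogate and the toy problem are stand-ins, not the actual BOA algorithm.

```python
import numpy as np

def violation(g_vals):
    """Total constraint violation; zero iff the point is feasible (g <= 0)."""
    return float(sum(max(0.0, g) for g in g_vals))

def fit_quadratic(X, y):
    """Least-squares separable quadratic surrogate y ~ c0 + a.x + b.x^2."""
    F = np.hstack([np.ones((len(X), 1)), X, X ** 2])
    coef, *_ = np.linalg.lstsq(F, y, rcond=None)
    d = X.shape[1]
    return lambda x: float(coef[0] + coef[1:1 + d] @ x + coef[1 + d:] @ (x * x))

def two_phase_opt(f, g, dim, budget=120, seed=0):
    rng = np.random.default_rng(seed)
    X, fv, gv = [], [], []
    def evaluate(x):
        X.append(x); fv.append(f(x)); gv.append(violation(g(x)))
    # Phase 1: explore until one feasible point is found
    while not any(v == 0.0 for v in gv) and len(X) < budget:
        evaluate(rng.uniform(-5, 5, dim))
    # Phase 2: surrogate-guided improvement around the best feasible point
    while len(X) < budget:
        feas = [i for i, v in enumerate(gv) if v == 0.0]
        best = min(feas, key=lambda i: fv[i]) if feas else int(np.argmin(gv))
        model = fit_quadratic(np.array(X), np.array(fv))
        cand = X[best] + rng.normal(0.0, 0.5, (50, dim))
        evaluate(cand[int(np.argmin([model(c) for c in cand]))])
    feas = [i for i, v in enumerate(gv) if v == 0.0]
    return X[min(feas, key=lambda i: fv[i])] if feas else None

# Toy problem: minimize ||x||^2 subject to x0 >= 1 (i.e. 1 - x0 <= 0)
xbest = two_phase_opt(lambda x: float(x @ x), lambda x: [1.0 - x[0]],
                      dim=2, budget=120)
```

Only the true blackbox evaluations count against the budget; the surrogate is queried freely to decide where to spend the next expensive evaluation, which is the point of surrogate-based methods.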
Attoue, Nivine. "Use of Smart Technology for heating energy optimization in buildings : experimental and numerical developments for indoor temperature forecasting." Thesis, Lille 1, 2019. http://www.theses.fr/2019LIL1I021/document.
With growing concerns about the future of energy resources, optimizing energy consumption has become a must in all sectors. Much research has been dedicated to buildings, as they constitute the largest energy-consuming sector, mainly because of their heating needs. Technologies have improved and several methods have been proposed for energy consumption optimization. Energy-saving procedures can be applied through innovative control and management strategies. The objective of this thesis is to introduce the smart concept into the building system to reduce energy consumption and to improve comfort conditions and user satisfaction. The study aims to develop a model that makes it possible to predict the thermal behavior of buildings. The thesis proposes a methodology based on the selection of pertinent input parameters, after a relevance analysis of a large set of candidate inputs, for the development of a simplified artificial neural network (ANN) model used for indoor temperature forecasting. This model can easily be used in the optimal regulation of buildings' energy devices. The smart domain needs an automated process to understand building dynamics and describe their characteristics; such strategies are well described using reduced thermal models. The thesis therefore presents a preliminary study for generating an automated process for short-term indoor temperature prediction and building characterization based on grey-box modeling. This study relies on a methodology capable of finding the most reliable set of data that best describes the building's dynamics. The study shows that the best-performing order for reduced models is governed by the dynamics of the collected data used.
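As a rough illustration of this kind of ANN temperature forecaster, the sketch below trains a one-hidden-layer network to predict the next indoor temperature from a few lagged inputs. Everything here is an assumption for illustration: the data are synthetic (generated by a toy first-order model), the three inputs are hand-chosen rather than selected by a relevance analysis, and the network size and training schedule are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data from a toy first-order building model: the next indoor
# temperature depends on the current indoor temp, outdoor temp and heating.
n = 2000
t_out = 5.0 + 5.0 * np.sin(np.arange(n) * 2 * np.pi / 144)  # outdoor temp
heat = rng.uniform(0.0, 1.0, n)                             # heating level
t_in = np.empty(n)
t_in[0] = 20.0
for k in range(n - 1):
    t_in[k + 1] = t_in[k] + 0.05 * (t_out[k] - t_in[k]) + 0.5 * heat[k]

# Inputs: current indoor temp, outdoor temp, heating; target: next indoor temp
X = np.column_stack([t_in[:-1], t_out[:-1], heat[:-1]])
y = t_in[1:]
Xs = (X - X.mean(0)) / X.std(0)     # standardize inputs
ys = (y - y.mean()) / y.std()       # standardize target

# One-hidden-layer network trained by full-batch gradient descent
h, lr = 8, 0.05
W1 = rng.normal(0.0, 0.5, (3, h)); b1 = np.zeros(h)
W2 = rng.normal(0.0, 0.5, h);      b2 = 0.0
for _ in range(3000):
    A = np.tanh(Xs @ W1 + b1)                   # hidden activations
    err = A @ W2 + b2 - ys                      # prediction error
    dA = np.outer(err, W2) * (1.0 - A ** 2)     # backprop through tanh
    W2 -= lr * (A.T @ err) / len(ys); b2 -= lr * err.mean()
    W1 -= lr * (Xs.T @ dA) / len(ys); b1 -= lr * dA.mean(0)

pred = np.tanh(Xs @ W1 + b1) @ W2 + b2
rmse = float(np.sqrt(np.mean((pred - ys) ** 2)))  # in standardized units
```

In practice the inputs would be chosen by the relevance analysis the thesis describes, and training/validation splits would be used to check generalization rather than the in-sample error computed here.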