Dissertations / Theses on the topic 'Evaluation probabiliste des fautes'
Consult the top 31 dissertations / theses for your research on the topic 'Evaluation probabiliste des fautes.' Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Alexandrescu, Marian-Dan. "Outils pour la simulation des fautes transitoires." Grenoble INPG, 2007. http://www.theses.fr/2007INPG0084.
Single Events (SE) are produced by the interaction of charged particles with the transistors of a microelectronic circuit. These perturbations may alter the functioning of the circuit and cause logic faults and errors. As the sensitivity of circuits increases with each technology generation, specific tools are needed for the design of hardened circuits. This thesis aims at furthering the understanding of these phenomena and proposes EDA tools to help analyze these problems in today's ICs. We have developed methodologies for the characterization of the cells of the standard library, as well as tools for accelerated fault simulation and probabilistic analysis of single events. The results provided by these tools allow the designer to correctly evaluate the sensitivity of a design and select the most adequate methods to improve the reliability of ICs.
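Accelerated fault simulation of this kind reduces, at its core, to randomized bit-flip injection compared against a golden (fault-free) run. A minimal sketch of such a campaign, assuming a toy state-step model of the circuit (all names, probabilities and the example design are illustrative, not the thesis tooling):

```python
import random

def golden_run(step, state, n_cycles):
    """Reference simulation without faults; returns the final state."""
    s = dict(state)
    for _ in range(n_cycles):
        s = step(s)
    return s

def seu_campaign(step, state, n_cycles, p_hit, n_runs, seed=0):
    """Monte Carlo SEU injection: each cycle, each flip-flop is hit
    (bit-flipped) with probability p_hit; count runs whose final
    state diverges from the fault-free reference."""
    rng = random.Random(seed)
    ref = golden_run(step, state, n_cycles)
    errors = 0
    for _ in range(n_runs):
        s = dict(state)
        for _ in range(n_cycles):
            s = step(s)
            for ff in s:
                if rng.random() < p_hit:
                    s[ff] ^= 1          # single-event upset
        errors += (s != ref)
    return errors / n_runs              # observed error probability

# Toy 3-bit counter as the design under test (hypothetical example).
def step(s):
    v = (s["b0"] | s["b1"] << 1 | s["b2"] << 2) + 1
    return {"b0": v & 1, "b1": (v >> 1) & 1, "b2": (v >> 2) & 1}

print(seu_campaign(step, {"b0": 0, "b1": 0, "b2": 0},
                   n_cycles=100, p_hit=1e-3, n_runs=1000))
```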
Kaâniche, Mohamed. "Evaluation de la sûreté de fonctionnement informatique. Fautes physiques, fautes de conception, malveillances." Habilitation à diriger des recherches, Institut National Polytechnique de Toulouse - INPT, 1999. http://tel.archives-ouvertes.fr/tel-00142168.
Carvajal Moncada, Claudio Andrés. "Evaluation probabiliste de la sécurité structurale des barrages-poids." Clermont-Ferrand 2, 2009. https://hal.archives-ouvertes.fr/tel-01298893.
Oboni, Franco. "Evaluation probabiliste des performances des pieux forés chargés axialement en tête." [S.l.]: [s.n.], 1988. http://library.epfl.ch/theses/?nr=716.
Koita, Abdourahmane. "Evaluation probabiliste de la dangerosité des trajectoires de véhicules en virages." Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2011. http://tel.archives-ouvertes.fr/tel-00626964.
Joubert, Philippe. "Conception et evaluation d'une architecture multiprocesseur a memoire partagee tolerante aux fautes." Rennes 1, 1993. http://www.theses.fr/1993REN10004.
Full textKameni, Ngassa Christiane. "Décodeurs LDPC opérant sur des circuits à comportement probabiliste : limites théoriques et évaluation pratique de la capacité de correction." Thesis, Cergy-Pontoise, 2014. http://www.theses.fr/2014CERG0735/document.
Full textOver the past few years, there has been an increasing interest in error correction decoders built out of unreliable components. Indeed, it is widely accepted that future generation of electronic circuit will be inherently unreliable, due to the increase in density integration and aggressive voltage scaling. Furthermore, error correction decoders play a crucial role both in reliable transmission of information and in the design of reliable storage systems. It is then important to investigate the robustness of error correction decoders in presence of hardware noise.In this thesis we focus on LDPC decoders built out of unreliable computing units. We consider three types of LDPC decoders: the finite-precision Min-Sum (MS) decoder, the Self-Corrected Min-Sum (SCMS) decoder and the Stochastic decoder.We begin our study by the statistical analysis of the finite-precision Min-Sum decoder with probabilistic components. To this end, we first introduce probabilistic models for the arithmetic and logic units of the decoder and discuss their symmetry properties. We conduct a thorough asymptotic analysis and derive density evolution equations for the noisy Min-Sum decoder. We highlight that in some particular cases, the noise introduced by the device can increase the correction capacity of the noisy Min-Sum with respect to the noiseless decoder. We also reveal the existence of a specific threshold phenomenon, referred to as functional threshold, which can be viewed as the generalization of the threshold definition for noisy decoders. We then corroborate the asymptotic results through Monte-Carlo simulations.Since density evolution cannot be defined for decoders with memory, the analysis of noisy Self-corrected Min-Sum decoders and noisy Stochastic decoders was restricted to Monte-Carlo simulations.We emulate the noisy SCMS decoders with various noise parameters and show that noisy SCMS decoders perform close to the noiseless SCMS decoder for a wide range of noise parameters. Therefore, one can think of the self-correction circuit as a noisy patch applied to the noisy MS decoder, in order to improve its robustness to hardware defect. We also evaluate the impact of the decoder scheduling on the robustness of the noisy MS and SCMS decoders and show that when the serial scheduling is used neither the noisy MS decoder nor the noisy SCMS decoder can provide acceptable error correction.Finally, we investigate the performance of stochastic decoders with edge-memories in presence of hardware noise. We propose two error models for the noisy components. We show that in some cases, the hardware noise can be used to lower the error floor of the decoder meaning that stochastic decoders have an inherent fault tolerant capability
Quennesson, Cyril. "Evaluation de circuits vlsi autocontroles prenant en compte un modele de fautes multiples." Paris 6, 1997. http://www.theses.fr/1997PA066163.
Gefflaut, Alain. "Proposition et evaluation d'une architecture multiprocesseur extensible a memoire partagee tolerante aux fautes." Rennes 1, 1995. http://www.theses.fr/1995REN10034.
Sopena, Julien. "Algorithmes d'exclusion mutuelle : tolérance aux fautes et adaptation aux grilles." Paris 6, 2008. http://www.theses.fr/2008PA066665.
Borrel, Nicolas. "Evaluation d'injection de fautes Laser et conception de contre-mesures sur une architecture à faible consommation." Thesis, Aix-Marseille, 2015. http://www.theses.fr/2015AIXM4358.
Full textIn many applications such as credit cards, confidential data is used. In this regard, the systems-on-chip used in these applications are often deliberately attacked. This puts the security of our data at a high risk. Furthermore, many SoC devices have become battery-powered and require very low power consumption. In this context, semiconductor manufacturers should propose secured and low-power solutions.This thesis presents a security evaluation and a countermeasures design for a low-power, triple-well architecture dedicated to low-power applications. The security context of this research focuses on a Laser sensitivity evaluation of this architecture.This paper first presents the state of the art of Laser fault injection techniques, focusing on the physical effects induced by a Laser beam. Afterward, we discuss the different dual-and triple-well architectures studied in order to compare their security robustness. Then, a physical study of these architectures as substrate resistor and capacitor modeling highlights their impact on security. This evaluation lets us anticipate the phenomena potentially induced by the Laser inside the biasing well (P-well, N-well) and the MOS transistors.Following the analysis of the physical phenomena resulting from the interaction between the laser and the silicon, electrical modeling of the CMOS gates was developed for dual and triple-well architectures. This enabled us to obtain a good correlation between measurements and electrical simulations.In conclusion, this work enabled us to determine possible design rules for increasing the security robustness of CMOS gates as well as the development of Laser sensors able to detect an attack
Belhadaoui, Hicham. "Conception sûre des systèmes mécatroniques intelligents pour des applications critiques." Phd thesis, Institut National Polytechnique de Lorraine - INPL, 2011. http://tel.archives-ouvertes.fr/tel-00601086.
Schwob, Cyrille. "Approche non locale probabiliste pour l'analyse en fatigue des zones à gradient de contraintes." Toulouse 3, 2007. http://www.theses.fr/2007TOU30246.
Full textA fatigue criterion taking into account the stress gradient effect has been developed and integrated in a global probabilistic framework. The proposed criterion is a non local criterion averaging a classical criterion over a particular area. This area is defined by a mesoscopic criterion derived from Papadopoulos analysis. The predictions of the new criterion are compared to experimental results coming from a dedicated test campaign on an aluminium alloy. Results are found to be in good agreement with experiments for a wide variety of geometry and load, thus demonstrating the relative robustness of the fatigue model. The fatigue model is then integrated in a probabilistic framework, the results being again satisfactorily confronted to experimental results on the same alloy. In particular the statistical quality of the probabilized SN curves obtained from the whole model is similar to the experimental one
Berthelot, Christophe. "Evaluation d'une architecture vectorielle pour des méthodes de Monte Carlo : analyse probabiliste de conditions au bord artificielles pour des inéquations variationnelles." Paris 6, 2003. http://www.theses.fr/2003PA066505.
Full textSmidts, Olivier. "Analyse probabiliste du risque de stockage de déchets radioactifs par la méthode des arbres d'événements continus." Doctoral thesis, Universite Libre de Bruxelles, 1997. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/212182.
In this thesis, the uncertainty analysis related to the composition of the geological medium is split between flow and transport as follows: a) a mean flow solution is first determined using a code based on the finite-difference method; this solution is then subjected to a sensitivity analysis, which leads to solving an inverse problem in order to improve the initial estimate of the mean flow parameters; b) the effect of the random variation of the flow velocity is taken into account during radionuclide transport, which is solved using a non-analog Monte Carlo method.
The sensitivity analysis of the flow problem is carried out with a variational method. The proposed method has the advantage of being able to quantify structural uncertainty, that is, the uncertainty related to the geometry of the geological medium.
A non-analog Monte Carlo methodology is used for the transport of radionuclide chains in a stochastic medium. Its contributions to the risk computation rest on three points (the third is sketched in code below):
1) The use of a simple transport solution (in the form of an adjoint solution) within the mechanisms of the Monte Carlo simulation. This transport solution summarizes, between two successive positions of the random walker, the physico-chemical processes (advection, diffusion-dispersion, adsorption, desorption) occurring at the microscopic scale. It makes efficient transport simulations possible by accelerating the transition mechanisms of the random walkers in the geological domain and in time.
2) The application of the continuous event tree method to the transport of radionuclide chains. This method handles the radioactive transitions between the elements of a chain within the same formalism as that used for single-radionuclide transport simulations. It therefore makes it possible to go from the transport of one radionuclide to the transport of a chain of radionuclides at no extra computing-time cost and with only a limited extra memory cost.
3) The application of so-called "double randomization" techniques to the problem of radionuclide transport in a stochastic geological medium. These techniques efficiently combine a Monte Carlo simulation over parameters with a Monte Carlo simulation of transport, and thus explicitly include the uncertainty associated with the composition of the geological medium in the risk computation.
This work opens promising perspectives for further developments of the non-analog Monte Carlo methodology for risk computation.
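Double randomization is straightforward to sketch: an outer Monte Carlo loop samples a realization of the uncertain medium, an inner loop runs transport walkers in that realization, and the risk estimate averages over both. The toy 1-D model and all distributions below are invented for illustration:

```python
import random

def double_randomization(n_outer, n_inner, seed=0):
    """Outer loop: sample uncertain medium parameters.
    Inner loop: Monte Carlo transport in that realization.
    The risk estimate averages over both sources of randomness."""
    rng = random.Random(seed)
    risk = 0.0
    for _ in range(n_outer):
        velocity = rng.lognormvariate(0.0, 0.5)   # uncertain medium parameter
        hits = 0
        for _ in range(n_inner):
            # Toy 1-D walker: advection + dispersion over 100 steps;
            # "risk" event = walker reaches the boundary x >= 50.
            x = 0.0
            for _ in range(100):
                x += velocity + rng.gauss(0.0, 1.0)
            hits += (x >= 50.0)
        risk += hits / n_inner
    return risk / n_outer

print(double_randomization(n_outer=200, n_inner=100))
```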
Mahdavi, Chahrokh. "Analyse probabiliste du comportement des sols et des ouvrages. Evaluation des risques dans les études géotechniques de traces de remblais sur sols mous." Phd thesis, Ecole Nationale des Ponts et Chaussées, 1985. http://tel.archives-ouvertes.fr/tel-00523281.
Echard, Benjamin. "Evaluation par krigeage de la fiabilité des structures sollicitées en fatigue." Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2012. http://tel.archives-ouvertes.fr/tel-00800208.
Full textWall, Martinez Hiram Alejandro. "Evaluation probabiliste du risque lié à l'exposition à des aflatoxines et des fumonisines dû à la consommation de tortillas de maïs à la ville de Veracruz." Thesis, Brest, 2016. http://www.theses.fr/2016BRES0068/document.
Full textOne of the chemical hazards that WHO has reported more frequently is cereals contamination with mycotoxins, mainly aflatoxins and fumonisins. NOM-188-SSA1-2002 establishes that aflatoxin concentration in grain should not exceed 20 mg kg-1 ; however, there are reported concentrations > 200 mg kg-1 in maize. Although it has been documented that nixtamalizacion removes more than 90% of fumonisins and between 80 and 95% of aflatoxins, the residual amount could be important, finding reports concentrations higher than 100 mg kg-1 of aflatoxin in tortilla, representing a risk due to the high consumption of tortillas in Mexico (325 g d-1). The JECFA (2001) establishes a maximum intake of 2 mg kg-1 pc d-1 for fumonisin and aflatoxin recommends reducing “as low as reasonably achievable” levels. 3 random and representative sampling in Veracruz city, each in 40 tortillerias, were made. Corn intake and weight of the population were estimated using a consumption questionnaire. Mycotoxins analysis were performed by HPLC-FD using immunoaffinity columns according to European standard UNE-EN ISO 14123 : 2008 for aflatoxins and UNE-EN 13585 : 2002 for fumonisin in the CIRAD (Montpellier, France). Statistical analysis were performed under a probabilistic approach in collaboration with the University of Bretagne Occidentale (Brest, France), building probability density function (PDF) and using the Monte Carlo method. PDF parameters of the weight of the population was 74.15kg for men (which coincides with reported by CANAIVE) and 65.83kg for women ; the pollution aflatoxin tortilla was 0.54 – 1.96mg kg-1 and fumonisin from 65.46 – 136.00mg kg-1 ; the tortilla consumption was 148.3g of corn per person per day ; the daily intake of aflatoxins was 0.94 – 3.14ng kg-1 bw d-1 and fumonisin of 146.24 – 314.99ng kg-1 bw d-1. Samples with higher aflatoxin contamination came from tortillerias that make the nixtamalization in situ. In assessing exposure it was found that up to 60% of the population could be consuming more than the recommended by JECFA (2001) for aflatoxin dose (1ng kg-1 bw d-1). Exposure to fumonisins intake was < 5% due to low contamination by these mycotoxins. The results suggest that the population of the city of Veracruz could be in food risk by eating contaminated corn tortillas AFT. It is advisable to extend this study in rural communities, where risk factors could increase
Excoffon, William. "Résilience des systèmes informatiques adaptatifs : modélisation, analyse et quantification." Phd thesis, Toulouse, INPT, 2018. http://oatao.univ-toulouse.fr/20791/1/Excoffon_20791.pdf.
Chibani, Kais. "Analyse de robustesse de systèmes intégrés numériques." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAT080/document.
Full textIntegrated circuits are not immune to natural or malicious interferences that may cause transient faults which lead to errors (soft errors) and potentially to wrong behavior. This must be mastered particularly in the case of critical systems which impose safety and/or security constraints. To optimize protection strategies of such systems, it is essential to identify the most critical elements. The assessment of the criticality of each block allows limiting the protection to the most sensitive blocks. This thesis aims at proposing approaches in order to analyze, early in the design flow, the robustness of a digital system. The key criterion used is the lifetime of data stored in the registers for a given application. In the case of microprocessor-based systems, an analytical approach has been developed and validated on a SparcV8 microprocessor (LEON3). This approach is based on a new methodology to refine assessments of registers criticality. Then a more generic and complementary approach was implemented to compute the criticality of all flip-flops from a synthesizable description. The tool implementing this approach was tested on significant systems such as hardware crypto accelerators and a hardware/software system based on the LEON3 processor. Fault injection campaigns have validated the two approaches proposed in this thesis. In addition, these approaches are characterized by their generality, their efficiency in terms of accuracy and speed and a low-cost implementation. Another benefit is also their ability to re-use the functional verification environments
Lemaire, Jean. "Un modèle d'évaluation de la vulnérabilité sismique du bâti existant selon l'Eurocode : essai méthodologique et application sur un territoire." Thesis, Paris 10, 2018. http://www.theses.fr/2018PA100010/document.
Full textThe seismic risk is a subject of multidisciplinary study which is the object of numerous research works. For a long time, it was studied in terms of hazard and it is only in the middle of the 20th century that we became interested in the vulnerability of the exposed elements. In spite of the multiplicity of the studies on the seismic risk, none of them adopts a global approach by using the earthquake-resistant regulations. Within the framework of thesis, we support the hypothesis that it possible to estimate the vulnerability of dwellings on the scale of several buildings by using the European standard, Eurocode 8. Using these regulations has the advantage reducing the time to study physical vulnerability by assessing the seismic resistance of a single building, where the latter represents a population of several buildings used as collective dwellings. The proposed methodology, illustrated on the example of the Mulhouse-Basel conurbation, consists of two phases. The first one consists in studying the seismic hazard of the urban area of Mulhouse and Basel through the bibliographical studies of some authors. This phase also consists in examining the compatibility of the European and Helvetian seismic regulations. Finally, a diagnosis of the existing structures and of the population is made to assess the vulnerability of these two urban territories, after a division of both cities into historic-geographical sectors. A second phase consists in proposing a simplified model of deterministic and probabilistic assessment of the vulnerability of the built, based on the new European regulation and the mechanics of the structures, to evaluate the seismic resistance of buildings. The probability aspect allowed to refine the proposed model to integrate certain uncertainties. A case study feigning an important earthquake of magnitude Mw equal to 6 on the Richter scale, integrating the phenomena of site effects as recommended by Eurocode 8, validated the application of the envisaged model. The proposed evaluation model is intended to provide a tool for assessing the vulnerability of the built without performing mechanical calculations. Thus, it aims to be accessible to all (geographers, engineers, seismologists, etc…). More generally, this model aims to provide a decision-making tool in the approach of prevention which the public authorities owe to the population, because they allow to determine the more or less big vulnerability of the studied areas
Ramos, Vargas Pablo Francisco. "Evaluation de la sensibilité face aux SEE et méthodologie pour la prédiction de taux d’erreurs d’applications implémentées dans des processeurs Multi-cœur et Many-cœur." Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAT022/document.
Full textThe present thesis aims at evaluating the SEE static and dynamic sensitivity of three different COTS multi-core and many-core processors. The first one is the Freescale P2041 multi-core processor manufactured in 45nm SOI technology which implements ECC and parity in their cache memories. The second one is the Kalray MPPA-256 many-core processor manufactured in 28nm TSMC CMOS technology which integrates 16 compute clusters each one with 16 processor cores, and implements ECC in its static memories and parity in its cache memories. The third one is the Adapteva Epiphany E16G301 microprocessor manufactured in 65nm CMOS process which integrates 16 processor cores and do not implement protection mechanisms. The evaluation was accomplished through radiation experiments with 14 Mev neutrons in particle accelerators to emulate a harsh radiation environment, and by fault injection in cache memories, shared memories or processor registers, to simulate the consequences of SEUs in the execution of the program. A deep analysis of the observed errors was carried out to identify vulnerabilities in the protection mechanisms. Critical zones such as address tag and general purpose registers were affected during the radiation experiments. In addition, The Code Emulating Upset (CEU) approach, developed at TIMA Laboratory was extended to multi-core and many core processors for predicting the application error rate by combining the results issued from fault injection campaigns with those coming from radiation experiments
Duroeulx, Margaux. "Évaluation de la fiabilité des systèmes modélisés par arbres de défaillances grâce aux techniques de satisfiabilité." Electronic Thesis or Diss., Université de Lorraine, 2020. http://www.theses.fr/2020LORR0026.
Full textThis thesis focuses on designing critical systems, those functioning is impacted by failures that could be dangerous for goods and people. During its design, it is crucial to convey a dependability analysis in order to determine the potential failures, their criticity and their probability of occurrence. The aim of this thesis is to involve satisfiability techniques in the computation of the reliability of the system: its probability to ensure its mission for a given time. In the first part, we consider static systems, those for which the functioning only depends on the functioning of their components functioning. We model the system by a fault tree, which is a widespread modeling tool in the reliability community, from which we can obtain the structure function. The structure function is a formula describing the combinations of components failure which are tolerated or not by the system. We also model the system by a Hasse diagram, which represents the states of the system depending on the states of the components. The probabilistic assessment of the trust placed to the system is based on the reliability function determined from the Hasse diagram. In the second part, we consider dynamic systems, for which the order between failures has an impact on the system. For example, it is the case for electric generators, which deprive the other components of electricity when they fail. In order to adapt to dynamic systems, the approach developed of the first part, we define minimal tie set sequences as the extension of minimal tie sets for dynamic systems, and we compute them by using satisfiability techniques. We also propose an adaptation of Hasse diagrams for dynamic systems to determine the reliability function
Bentria, Dounia. "Combining checkpointing and other resilience mechanisms for exascale systems." Thesis, Lyon, École normale supérieure, 2014. http://www.theses.fr/2014ENSL0971/document.
Full textIn this thesis, we are interested in scheduling and optimization problems in probabilistic contexts. The contributions of this thesis come in two parts. The first part is dedicated to the optimization of different fault-Tolerance mechanisms for very large scale machines that are subject to a probability of failure and the second part is devoted to the optimization of the expected sensor data acquisition cost when evaluating a query expressed as a tree of disjunctive Boolean operators applied to Boolean predicates. In the first chapter, we present the related work of the first part and then we introduce some new general results that are useful for resilience on exascale systems.In the second chapter, we study a unified model for several well-Known checkpoint/restart protocols. The proposed model is generic enough to encompass both extremes of the checkpoint/restart space, from coordinated approaches to a variety of uncoordinated checkpoint strategies. We propose a detailed analysis of several scenarios, including some of the most powerful currently available HPC platforms, as well as anticipated exascale designs.In the third, fourth, and fifth chapters, we study the combination of different fault tolerant mechanisms (replication, fault prediction and detection of silent errors) with the traditional checkpoint/restart mechanism. We evaluated several models using simulations. Our results show that these models are useful for a set of models of applications in the context of future exascale systems.In the second part of the thesis, we study the problem of minimizing the expected sensor data acquisition cost when evaluating a query expressed as a tree of disjunctive Boolean operators applied to Boolean predicates. The problem is to determine the order in which predicates should be evaluated so as to shortcut part of the query evaluation and minimize the expected cost.In the sixth chapter, we present the related work of the second part and in the seventh chapter, we study the problem for queries expressed as a disjunctive normal form. We consider the more general case where each data stream can appear in multiple predicates and we consider two models, the model where each predicate can access a single stream and the model where each predicate can access multiple streams
Kamali, Nejad Mojtaba. "Propositions de résolution numérique des problèmes d'analyse de tolérance en fabrication : approche 3D." Phd thesis, Grenoble 1, 2009. http://tel.archives-ouvertes.fr/tel-00445639.
Safadi, El Abed El. "Contribution à l'évaluation des risques liés au TMD (transport de matières dangereuses) en prenant en compte les incertitudes." Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAT059/document.
Full textWhen an accidental event is occurring, the process of technological risk assessment, in particular the one related to Dangerous Goods Transportation (DGT), allows assessing the level of potential risk of impacted areas in order to provide and quickly take prevention and protection actions (containment, evacuation ...). The objective is to reduce and control its effects on people and environment. The first issue of this work is to evaluate the risk level for areas subjected to dangerous goods transportation. The quantification of the intensity of the occurring events needed to do this evaluation is based on effect models (analytical or computer code). Regarding the problem of dispersion of toxic products, these models mainly contain inputs linked to different databases, like the exposure data and meteorological data. The second problematic is related to the uncertainties affecting some model inputs. To determine the geographical danger zone where the estimated risk level is not acceptable, it is necessary to identify and take in consideration the uncertainties on the inputs in aim to propagate them in the effect model and thus to have a reliable evaluation of the risk level. The first phase of this work is to evaluate and propagate the uncertainty on the gas concentration induced by uncertain model inputs during its evaluation by dispersion models. Two approaches are used to model and propagate the uncertainties. The first one is the set-membership approach based on interval calculus for analytical models. The second one is the probabilistic approach (Monte Carlo), which is more classical and used more frequently when the dispersion model is described by an analytic expression or is is defined by a computer code. The objective is to compare the two approaches to define their advantages and disadvantages in terms of precision and computation time to solve the proposed problem. To determine the danger zones, two dispersion models (Gaussian and SLAB) are used to evaluate the risk intensity in the contaminated area. The risk mapping is achieved by using two methods: a probabilistic method (Monte Carlo) which consists in solving an inverse problem on the effect model and a set-membership generic method that defines the problem as a constraint satisfaction problem (CSP) and to resolve it with an set-membership inversion method. The second phase consists in establishing a general methodology to realize the risk mapping and to improve performance in terms of computation time and precision. This methodology is based on three steps: - Firstly the analysis of the used effect model. - Secondly the proposal of a new method for the uncertainty propagationbased on a mix between the probabilistic and set-membership approaches that takes advantage of both approaches and that is suited to any type of spatial and static effect model. -Finally the realization of risk mapping by inversing the effect models. The sensitivity analysis present in the first step is typically addressed to probabilistic models. The validity of using Sobol indices for interval models is discussed and a new interval sensitivity indiceis proposed
Bounceur, Ahcène. "Plateforme CAO pour le test de circuits mixtes." Grenoble INPG, 2007. http://www.theses.fr/2007INPG0034.
Full textThe growing complexity of modern chips poses challenging test problems due to the requirement for specialized test equipment and the involved lengthy test times. This is particularly true for heterogeneous chips that comprise digital, analogue, and RF blocks onto the same substrate. Many research efforts are currently under way in the mixed-signal test domain. Theses efforts concern optimization of tests at the production stage (e. G. Off-line) or during the lifetime of the chip (on-line test). A promising research direction is the integration of additional circuitry on-chip, aiming to facilitate the test application (Design For Test) and/or to perform Built-In-Self-Test. The efficiency of such test techniques, both in terms of test accuracy and test cost, must be assessed during the design stage. However, there is an alarming lack of CAT tools, which are necessary, in order to facilitate the study of these techniques and, thereby, expedite their transfer into a production setting. In this thesis, we develop a CAT platform that can be used for the validation of analogue test techniques. The platform includes tools for fault modeling, injection and simulation, as well as tools for analogue test vector generation and optimization. A new statistical method is proposed and integrated into the platform, in order to assess the quality of test techniques during the design stage. This method aims to set the limits of the considered test criteria. Then, the different test metrics (as Fault coverage, Defect level or Yield loss) are evaluated under the presence of parametric and catastrophic faults. Some specific tests can be added to improve the structural fault coverage. The CAT platform is integrated in the Cadence design framework environment
Schweitzer, Alexis. "Amélioration du niveau de sécurité des systèmes électroniques programmables par application du concept d'analyse de signature." Nancy 1, 1987. http://www.theses.fr/1987NAN10034.
Full textElghazel, Wiem. "Wireless sensor networks for Industrial health assessment based on a random forest approach." Thesis, Besançon, 2015. http://www.theses.fr/2015BESA2055/document.
Full textAn efficient predictive maintenance is based on the reliability of the monitoring data. In some cases, themonitoring activity cannot be ensured with individual or wired sensors. Wireless sensor networks (WSN) arethen an alternative. Considering the wireless communication, data loss becomes highly probable. Therefore,we study certain aspects of WSN reliability. We propose a distributed algorithm for network resiliency and datasurvival while optimizing energy consumption. This fault tolerant algorithm reduces the risks of data loss andensures the continuity of data transfer. We also simulated different network topologies in order to evaluate theirimpact on data completeness at the sink level. Thereafter, we propose an approach to evaluate the system’sstate of health using the random forests algorithm. In an offline phase, the random forest algorithm selects theparameters holding more information about the system’s health state. These parameters are used to constructthe decision trees that make the forest. By injecting the random aspect in the training set, the algorithm (thetrees) will have different starting points. In an online phase, the algorithm evaluates the current health stateusing the sensor data. Each tree will provide a decision, and the final class is the result of the majority voteof all trees. When sensors start to break down, the data describing a health indicator becomes incompleteor unavailable. Considering that the trees have different starting points, the absence of some data will notnecessarily result in the interruption of the prediction process
Diouri, Mohammed El Mehdi. "Efficacité énergétique dans le calcul très haute performance : application à la tolérance aux pannes et à la diffusion de données." Phd thesis, Ecole normale supérieure de lyon - ENS LYON, 2013. http://tel.archives-ouvertes.fr/tel-00881094.
Full textHADJIAT, K. "EVALUATION PREDICTIVE DE LA SURETE DE FONCTIONNEMENT D'UN CIRCUIT INTEGRE NUMERIQUE." Phd thesis, 2005. http://tel.archives-ouvertes.fr/tel-00009485.
We present a new approach for mutant generation, allowing a circuit to be instrumented for heterogeneous fault models. When defining a fault injection campaign, the proposed analysis flow lets the designer introduce, in the same circuit, single (SEU) or multiple (MBF) bit-flips, or erroneous transitions. Furthermore, we aimed at the most efficient mutant generation under several constraints, including (1) simple and automatic modification of the initial circuit description, (2) optimization of the additional inputs used for injection control, and (3) reduction of the post-synthesis hardware overhead, for good compatibility with emulation-based fault injection campaigns.
In the analysis flow, a behavioral model is generated, allowing the designer to identify error propagation paths in the circuit. Such an analysis aims at identifying, very early in the design flow, the unacceptable failure modes of a circuit so that its description can be immediately modified and its robustness thereby improved.
We present results obtained from multi-level injections in VHDL descriptions of digital circuits. These results demonstrate that an injection campaign carried out very early in the design process, on a description still very far from the final implementation, can provide very useful information on the dependability characteristics of a circuit.
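Mutant generation of this kind can be pictured as automatic rewriting of the HDL source so that a control input can corrupt a signal. A deliberately naive sketch using string rewriting on a one-line VHDL assignment (the signal and control names are hypothetical, and the thesis flow is far more complete):

```python
import re

def instrument_seu(vhdl, signal, ctrl="inj_en"):
    """Toy mutant generation: rewrite assignments to `signal` so that
    an injection-control input can flip the assigned value (an SEU
    saboteur). Purely illustrative string rewriting, not the thesis flow."""
    pattern = rf"(\b{signal}\s*<=\s*)(.+?);"
    return re.sub(pattern, rf"\1(\2) xor {ctrl};", vhdl)

original = "q <= d and enable;"
print(instrument_seu(original, "q"))
# q <= (d and enable) xor inj_en;
```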