Dissertations / Theses on the topic 'Réparation des données'
Consult the top 19 dissertations / theses for your research on the topic 'Réparation des données.'
Tzompanaki, Aikaterini. "Réponses manquantes : Débogage et Réparation de requêtes." Thesis, Université Paris-Saclay (ComUE), 2015. http://www.theses.fr/2015SACLS223/document.
Full text
With the increasing amount of available data and of data transformations, typically specified by queries, the need to understand them also increases. "Why are there medicine books in my sales report?" or "Why are there no database books?" For the first question, we need to find the origins, or provenance, of the result tuples in the source data. Reasoning about missing query results, expressed as Why-Not questions like the second one, had until recently not received the attention it deserves. Why-Not questions can be answered by providing explanations for the missing tuples. These explanations identify why and how the data pertinent to the missing tuples were not properly combined by the query. Essentially, the causes lie either in the input data (e.g., erroneous or incomplete data) or at the query level (e.g., a query operator such as a join). Assuming that the source data contain all the necessary relevant information, we can identify the responsible query operators, forming query-based explanations. This information can then be used to propose query refinements that modify the responsible operators of the initial query so that the refined query result contains the expected data. This thesis proposes a framework for SQL query debugging and fixing that recovers missing query results based on query-based explanations and query refinements. Our contribution to query debugging consists of two approaches. The first is tree-based. We first provide the formal framework around Why-Not questions, missing from the state of the art. We then review the state of the art in detail, showing how it may lead to inaccurate explanations or fail to provide an explanation at all. We further propose the NedExplain algorithm, which computes correct explanations for SPJA queries and unions thereof, thus covering more operators (aggregation) than the state of the art.
Finally, we experimentally show that NedExplain outperforms the state of the art in both time performance and explanation quality. However, this approach yields explanations that differ for equivalent query trees, thus providing incomplete information about what is wrong with the query. We address this issue by introducing a more general notion of explanations based on polynomials. The polynomial captures all the combinations in which the query conditions should be fixed in order for the missing tuples to appear in the result. This method targets conjunctive queries with inequalities. We further propose two algorithms: Ted, which naively applies the definition of polynomial explanations, and the optimized Ted++. We show that Ted does not scale well with the size of the database. Ted++, on the other hand, computes the polynomial efficiently, relying on schema and data partitioning and on the advantageous replacement of expensive database evaluations by mathematical calculations. Finally, we experimentally evaluate the quality of the polynomial explanations and the efficiency of Ted++, including a comparative evaluation. For query fixing, we propose a new approach that refines a query by leveraging polynomial explanations. Based on the input data, we propose how to change the query conditions pinpointed by the explanations by adjusting the constant values of the selection conditions. In the case of joins, we introduce a novel type of query refinement using outer joins. We further devise the techniques to compute query refinements in the FixTed algorithm, and discuss how our method has the potential to be more efficient and effective than related work. Finally, we have implemented both Ted++ and FixTed in a system prototype. This query debugging and fixing platform, EFQ for short, allows users to interactively debug and fix their queries when facing Why-Not questions.
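The polynomial-explanation idea summarized in this abstract can be pictured with a toy sketch. This is not the Ted or Ted++ algorithm, just an illustration of the underlying notion: for a missing answer, each combination of source tuples records the set of query conditions it violates, and those sets are the monomials of the explanation. All tables, conditions and data below are invented for the example.

```python
from itertools import product

# Hypothetical query (illustrative only):
#   SELECT * FROM books b, sales s
#   WHERE b.id = s.book_id AND b.topic = 'databases' AND s.year >= 2015
books = [{"id": 1, "topic": "databases"}, {"id": 2, "topic": "medicine"}]
sales = [{"book_id": 1, "year": 2012}, {"book_id": 2, "year": 2016}]

conditions = {
    "c_join": lambda b, s: b["id"] == s["book_id"],
    "c_topic": lambda b, s: b["topic"] == "databases",
    "c_year": lambda b, s: s["year"] >= 2015,
}

def polynomial_explanation(rows_a, rows_b, conditions):
    """For each source-tuple combination, collect the set of violated
    conditions; the 'polynomial' is the list of these monomials."""
    monomials = []
    for b, s in product(rows_a, rows_b):
        violated = frozenset(name for name, cond in conditions.items()
                             if not cond(b, s))
        if violated:  # an empty set would mean the tuple is not missing
            monomials.append(violated)
    return monomials

poly = polynomial_explanation(books, sales, conditions)
```

A singleton monomial such as `{c_year}` indicates that relaxing a single condition would already make some source combination contribute the missing answer.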
Martinez, Matias. "Extraction and analysis of knowledge for automatic software repair." Thesis, Lille 1, 2014. http://www.theses.fr/2014LIL10101/document.
Full text
Bug fixing is a frequent activity in the software life cycle. The activity aims at removing the gap between the expected behavior of a program and what it actually does. In recent years, several automatic software repair approaches have emerged to synthesize bug fixes automatically. Unfortunately, bug fixing can be hard and expensive even for automatic program repair approaches: to repair a given bug, a repair technique could spend an unbounded amount of time searching for a fix among a large number of candidates. In this thesis, we aim at improving the repairability of bugs, that is, at increasing the number of bugs that repair approaches can fix. First, we concentrate on the study of repair search spaces, i.e., all possible solutions for a fix. We aim at equipping repair approaches with strategies to optimize the search for solutions in the repair search space, and we present a strategy that reduces the time to find a fix by consuming information extracted from repairs done by developers. Then, we focus on the evaluation of automatic repair approaches. We aim at introducing methodologies and evaluation procedures that make repair approach evaluations meaningful. We first define a methodology for building defect datasets that minimize the possibility of biased results, since the way a dataset is built impacts the result of an approach's evaluation. We present a dataset that includes a particular kind of defect, if-condition defects, and then measure the repairability of this kind of defect by evaluating three state-of-the-art automatic software repair approaches.
Obry, Tom. "Apprentissage numérique et symbolique pour le diagnostic et la réparation automobile." Thesis, Toulouse, INSA, 2020. http://www.theses.fr/2020ISAT0014.
Full text
Clustering is an unsupervised learning method that aims to partition a data set into homogeneous groups according to a similarity criterion; the data in each group then share common characteristics. DyClee is a classifier that builds a classification from numerical data arriving in a continuous flow and proposes an adaptation mechanism to update this classification, thus performing dynamic clustering that follows the evolution of the monitored system or process. Nevertheless, handling only numerical attributes does not cover all fields of application. Toward this generalization, this thesis proposes an extension to nominal categorical data on the one hand, and an extension to mixed data on the other. Hierarchical clustering approaches are also proposed in order to assist experts in interpreting the obtained clusters and validating the generated partitions. The presented algorithm, called Mixed DyClee, can be applied in various application domains; in this thesis it is used in the field of automotive diagnosis.
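Clustering mixed numeric/nominal data, as this abstract describes, typically rests on a dissimilarity that handles both field types. The sketch below is a much-simplified, hedged stand-in for Mixed DyClee (which is dynamic and far more elaborate): a Gower-style distance combined with a naive single-linkage agglomeration. All records and field names are invented.

```python
def gower(x, y, num_ranges):
    """Gower dissimilarity for mixed records: numeric fields are compared by
    range-normalised absolute difference, nominal fields by simple mismatch."""
    total = 0.0
    for key, rng in num_ranges.items():
        total += abs(x[key] - y[key]) / rng
    nominal = [k for k in x if k not in num_ranges]
    total += sum(x[k] != y[k] for k in nominal)
    return total / (len(num_ranges) + len(nominal))

def single_linkage(items, dist, k):
    """Naive agglomerative clustering down to k clusters (single linkage)."""
    clusters = [[i] for i in range(len(items))]
    while len(clusters) > k:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = min(dist(items[i], items[j])
                        for i in clusters[a] for j in clusters[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a] += clusters.pop(b)  # merge the two closest clusters
    return clusters

# Hypothetical repair records: mileage (numeric) and failed subsystem (nominal)
records = [
    {"km": 10_000, "part": "brakes"}, {"km": 12_000, "part": "brakes"},
    {"km": 90_000, "part": "engine"}, {"km": 95_000, "part": "engine"},
]
ranges = {"km": 85_000}  # max - min over the dataset
clusters = single_linkage(records, lambda x, y: gower(x, y, ranges), k=2)
```

On this toy data the two low-mileage brake repairs and the two high-mileage engine repairs form the expected two clusters.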
Tahrat, Sabiha. "Data inconsistency detection and repair over temporal knowledge bases." Electronic Thesis or Diss., Université Paris Cité, 2021. http://www.theses.fr/2021UNIP5209.
Full text
We investigate the feasibility of automated reasoning over temporal DL-Lite (TDL-Lite) knowledge bases (KBs). We translate TDL-Lite KBs into a fragment of first-order logic and into LTL, and apply off-the-shelf LTL and FO-based reasoners to check satisfiability. We conduct various experiments to analyse the runtime performance of different reasoners on toy scenarios and on randomly generated TDL-Lite KBs, as well as the size of the LTL translation. To improve reasoning performance when dealing with large ABoxes, we also propose an approach for abstracting temporal assertions in KBs. We run several experiments to assess the effectiveness of this technique, measuring the gain in terms of the size of the translation and the number of ABox assertions and individuals, and we measure the new runtime of several solvers on the abstracted KBs. Lastly, in an effort to make the usage of TDL-Lite KBs a reality, we present a fully-fledged tool with a graphical interface for designing them. Our interface is based on conceptual modeling principles and is integrated with our translation tool and a temporal reasoner. In this thesis, we also address the problem of handling inconsistent data in Temporal Description Logic (TDL) knowledge bases. Considering the data part of the knowledge base as the source of inconsistency over time, we propose an ABox repair approach; this is the first work handling repair in TDL knowledge bases. Our goal is twofold: 1) detect temporal inconsistencies and 2) propose a temporal repair of the data. For inconsistency detection, we propose a reduction from TDL to DL which provides a tight NP-complete upper bound for TDL concept satisfiability and allows the use of highly optimized DL reasoners that can produce a precise explanation (the set of inconsistent data assertions).
Thereafter, from the obtained explanation, we propose a method for automatically computing the best repair in the temporal setting, based on the allowed rigid predicates and the time order of the assertions.
Yu, Mulin. "Reconstruction et correction de modèles urbains à l'aide de structures de données cinétiques." Thesis, Université Côte d'Azur, 2022. http://www.theses.fr/2022COAZ4077.
Full text
Compact and accurate digital 3D models of buildings are commonly used by practitioners for the visualization of existing or imaginary environments, for physical simulations, or for the fabrication of urban objects. Generating such ready-to-use models is, however, a difficult problem. When created by designers, 3D models usually contain geometric errors whose automatic correction is a scientific challenge. When created from data measurements, typically laser scans or multi-view images, the accuracy and complexity of the models produced by existing reconstruction algorithms often do not meet practitioners' requirements. In this thesis, I address this problem by proposing two algorithms: one for repairing the geometric errors contained in urban-specific formats of 3D models, and one for reconstructing compact and accurate models from input point clouds generated by laser scanning or multi-view stereo imagery. The key component of both algorithms is a space-partitioning data structure able to decompose space into polyhedral cells in a natural and efficient manner. This data structure is used both to correct geometric errors by reassembling the facets of defect-laden 3D models, and to reconstruct concise 3D models from point clouds with a quality approaching that of models produced with Computer-Aided-Design interactive tools. My first contribution is an algorithm to repair different types of urban models. Prior work, which traditionally relies on local analysis and heuristic-based geometric operations on mesh data structures, is typically tailor-made for specific 3D formats and urban objects. We propose a more general method that processes different types of urban models without tedious parameter tuning. The key idea lies in the construction of a kinetic data structure that decomposes 3D space into polyhedra by extending the facets of the imperfect input model.
Such a data structure allows us to rebuild all the relations between the facets in an efficient and robust manner. Once built, the cells of the polyhedral partition are regrouped by semantic class to reconstruct the corrected output model. I demonstrate the robustness and efficiency of the algorithm on a variety of real-world defect-laden models and show its competitiveness with respect to traditional mesh repair techniques on both Building Information Modeling (BIM) and Geographic Information Systems (GIS) data. My second contribution is a reconstruction algorithm inspired by the Kinetic Shape Reconstruction method, which improves upon the latter in several ways. In particular, I propose a data-fitting technique for detecting planar primitives from unorganized 3D point clouds. Departing from an initial configuration, the technique refines both the continuous plane parameters and the discrete assignment of input points to them by seeking high fidelity, high simplicity and high completeness. The solution is found by an exploration mechanism guided by a multi-objective energy function; transitions within the large solution space are handled by five geometric operators that create, remove and modify primitives. I demonstrate its potential not only on buildings but on a variety of scenes, from organic shapes to man-made objects.
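Planar-primitive detection from unorganized point clouds, mentioned in this abstract, is commonly bootstrapped with a RANSAC-style search. The sketch below illustrates that generic family of techniques, not the energy-guided refinement the thesis actually proposes; the point cloud, threshold and iteration count are invented.

```python
import random

def plane_from_points(p, q, r):
    """Plane through three points: unit normal n and offset d with n.x + d = 0."""
    u = [q[i] - p[i] for i in range(3)]
    v = [r[i] - p[i] for i in range(3)]
    n = [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]
    norm = sum(c * c for c in n) ** 0.5
    if norm == 0:          # degenerate (collinear) sample
        return None
    n = [c / norm for c in n]
    return n, -sum(n[i] * p[i] for i in range(3))

def ransac_plane(points, threshold=0.05, iterations=200, seed=0):
    """Return the plane (unit normal, offset) supported by the most inliers."""
    rng = random.Random(seed)
    best, best_inliers = None, []
    for _ in range(iterations):
        plane = plane_from_points(*rng.sample(points, 3))
        if plane is None:
            continue
        n, d = plane
        inliers = [p for p in points
                   if abs(sum(n[i] * p[i] for i in range(3)) + d) < threshold]
        if len(inliers) > len(best_inliers):
            best, best_inliers = plane, inliers
    return best, best_inliers

# Hypothetical cloud: noisy samples of the plane z = 0, plus two outliers
pts = [(x / 10, y / 10, 0.01 * ((x + y) % 3 - 1))
       for x in range(10) for y in range(10)]
pts += [(0.5, 0.5, 2.0), (0.2, 0.8, -1.5)]
plane, inliers = ransac_plane(pts)
```

In a full pipeline such a detected primitive would only be a starting configuration, later refined jointly with the point-to-primitive assignment.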
Comignani, Ugo. "Interactive mapping specification and repairing in the presence of policy views." Thesis, Lyon, 2019. http://www.theses.fr/2019LYSE1127/document.
Full text
Data exchange between sources over heterogeneous schemas is an ever-growing field of study, given the increased availability of data, oftentimes in open access, and the pooling of such data for data mining or learning purposes. However, describing the data exchange process from a source instance to a target instance defined over a different schema is a cumbersome task, even for users acquainted with data exchange. In this thesis, we address the problem of allowing a non-expert user to specify a source-to-target mapping, and the problem of ensuring that the specified mapping does not leak information forbidden by the security policies defined over the source. To do so, we first provide an interactive process in which users give small examples of their data and answer simple boolean questions in order to specify their intended mapping. We then provide another process to rewrite this mapping so as to ensure its safety with respect to the source policy views. The first main contribution of this thesis is thus a formal definition of the interactive mapping specification problem, together with a formal resolution process for which desirable properties are proved. Based on this formal resolution process, practical algorithms are then provided. The approach behind these algorithms aims at reducing the number of boolean questions users have to answer by using quasi-lattice structures to order the set of possible mappings to explore, allowing efficient pruning of the space of explored mappings. To improve this pruning, an extension of the approach to the use of integrity constraints is also provided. The second main contribution is a repair process ensuring that a mapping is "safe" with respect to a set of policy views defined on its source schema, i.e., that it does not leak sensitive information.
A privacy-preservation protocol is provided to visualize the information leaks of a mapping, together with a process to rewrite an input mapping into one that is safe with respect to a set of policy views. As in the first contribution, this process comes with proofs of desirable properties. To reduce the number of interactions needed with the user, the interactive part of the repair process is also enriched with the possibility of learning which rewritings users prefer, leading to a completely automatic process. Last but not least, we present extensive experiments over the open-source prototypes built from the two contributions of this thesis.
Rantsoudis, Christos. "Bases de connaissance et actions de mise à jour préférées : à la recherche de consistance au travers des programmes de la logique dynamique." Thesis, Toulouse 3, 2018. http://www.theses.fr/2018TOU30286.
Full text
In the database literature, it has been proposed to resort to active integrity constraints in order to restore database integrity. Such active integrity constraints consist of a classical constraint together with a set of preferred update actions that can be triggered when the constraint is violated. In the first part of this thesis, we review the main repair routes that have been proposed in the literature and capture them by means of Dynamic Logic programs. The main tool we employ in our investigations is the recently introduced logic DL-PA, a variant of PDL. We then go on to explore a new, dynamic kind of database repair whose computational complexity and general properties are compared to the previously established approaches. In the second part of the thesis, we leave the propositional setting and adapt the aforementioned ideas to higher-level languages. More specifically, we venture into Description Logics and investigate extensions of TBox axioms by update actions that denote the preferred ways an ABox should be repaired in case of inconsistency with the axioms of the TBox. The extension of TBox axioms with these update actions constitutes new, active TBoxes. We tackle the problem of repairing an ABox with respect to such an active TBox from both a syntactic and a semantic perspective. Given an initial ABox, the syntactic approach allows us to construct a set of new ABoxes, out of which we then identify the most suitable repairs. For the semantic approach, we once again resort to a dynamic logic framework and view update actions, active inclusion axioms and repairs as programs.
Given an active TBox aT, the framework allows checking (1) whether a set of update actions is able to repair an ABox according to the active axioms of aT, by interpreting the update actions locally, and (2) whether an ABox A' is the repair of a given ABox A under the active axioms of aT using a bounded number of computations, by interpreting the update actions globally. After discussing the strong points of each direction, we conclude by combining the syntactic and semantic investigations into a cohesive approach.
Laval, Quentin. "Un environnement de modélisation et d'aide à la décision pour un problème de livraison - collecte sous incertitudes : application à la PFL de l'AP-HM." Thesis, Aix-Marseille, 2017. http://www.theses.fr/2017AIXM0193.
Full text
This thesis is part of the logistics project of the Assistance Publique-Hôpitaux de Marseille (AP-HM). The AP-HM opened a logistics platform in April 2013 in order to centralize meal production, sterilization, product storage and linen laundering. These products are transported in containers, by a transport team, to the four hospitals of Marseille. After the products have been consumed by the healthcare units, the used containers must be returned to the logistics platform, where they are disinfected and reinstated into the production loop. The purpose of this research is to propose a method and a tool to help the transport regulation team manage transport resources. The study takes into account the variability of transport times and the hazards that can arise during the life cycle of a transport round. To this end, we build a knowledge model of the logistics system using the ASCI methodology, and validate this model with a simulation model. We then propose a method and a tool for generating the daily round schedule. This method is an ad-hoc solution that integrates the solving of a loading problem and of vehicle and crew planning, as well as the representation and statistical modelling of the variability of transport times in urban areas; indeed, daily congestion can double a transport time. Finally, for the management of disruptions, we propose a schedule repair method that we model with multi-agent systems. Given a failure scenario, this last contribution makes it possible to propose the best solution to the transport staff.
Gigot, Sébastien. "Contribution à la conception d’une base de données de maintenabilité opérationnelle dynamique : Proposition de prise en compte des facteurs pénalisants dans l’estimation du MTTR [Mean Time To Repair]." Ecole nationale d'ingénieurs (Saint-Etienne), 2013. http://www.theses.fr/2013ENISE021.
Full text
This thesis describes a methodology for assessing the risk of not satisfying the maintainability requirements of complex industrial equipment in operation. It provides a better understanding of repair-time overruns linked to operational maintainability activities. Few studies are currently devoted to this topic, which is made complex by the multitude of different activities and by ever-growing levels of requirements; maintainability and availability requirements appear more and more frequently. Our proposal focuses on the evaluation of the critical maintainability criteria involved in a maintainability process, in order to optimize the sequence of actions from the fault to the return to service. The analysis of these penalizing factors led us to develop a model for estimating the MTTR that minimizes the deviations linked to repair-time overruns. The results, illustrated by specific examples, make it possible to assess the maintainability of the system against the objectives set and to propose, if necessary, risk-reduction actions that minimize system unavailability. This work is concerned with the development of an approach to the modeling of complex systems for the assessment of maintenance strategies. It culminates in a decision-support tool for building and fulfilling maintenance programs by making the best-suited choices. Our work focused on operational maintainability and on the importance of estimating repair times while taking into account the context in which the system evolves, in order to identify the penalizing events. It stressed the importance of the methodology used to treat a failure, proposing to reconsider the concept of operational maintainability in order to better control the uncertainties related to repair-time overruns.
Khraibani, Hussein. "Modélisation statistique de données longitudinales sur un réseau routier entretenu." Ecole centrale de Nantes, 2010. http://www.theses.fr/2010ECDN0040.
Full text
Road transportation has a direct impact on a country's economy. Infrastructures, particularly pavements, deteriorate under the effect of traffic and climate. As a result, they must constantly undergo maintenance, which often requires expensive works. The optimization of maintenance strategies and the scheduling of works necessarily rely on a study that makes use of deterioration evolution laws and accounts for the effect of maintenance on these laws. In this respect, numerous theoretical and experimental works, ranging from linear and nonlinear regressions to more sophisticated methods such as Markov chains, have been conducted. The thesis presents a survey of these models and methods and focuses on survival data analysis (MADS), which was the subject of important work at LCPC. To account for the fact that current databases contain repeated measurements on each pavement section, the thesis proposes a different approach based on nonlinear mixed-effects models (NLME). It first carries out a comparison between the NLME and MADS models on different databases in terms of goodness of fit and predictive capability, which then allows conclusions to be drawn about the applicability of the two models.
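Full NLME estimation requires specialised solvers. As a hedged, much-simplified illustration of the modelling idea this abstract describes (repeated measurements per section, a shared nonlinear law, section-level variability), the two-stage sketch below fits a power-law deterioration curve per section and summarises the spread of the per-section parameters. All sections and measurements are invented.

```python
import math

def fit_power_law(times, values):
    """Least-squares fit of D(t) = a * t**b on the log scale."""
    xs = [math.log(t) for t in times]
    ys = [math.log(v) for v in values]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - b * mx)
    return a, b

# Hypothetical repeated measurements of a deterioration index on 3 sections
sections = {
    "S1": ([1, 2, 4, 8], [1.1, 2.0, 4.2, 7.9]),
    "S2": ([1, 2, 4, 8], [0.9, 1.9, 3.8, 8.3]),
    "S3": ([1, 3, 6, 9], [1.0, 3.1, 5.8, 9.2]),
}
fits = {s: fit_power_law(t, v) for s, (t, v) in sections.items()}
# Stage two: summarise the section-level parameters (the 'random effect' idea)
mean_b = sum(b for _, b in fits.values()) / len(fits)
```

A proper NLME fit would estimate the population parameters and the between-section variance jointly rather than in two stages, which is precisely what motivates the dedicated models in the thesis.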
Efaga, Eugène-Désiré. "Analyse des données du retour d'expérience pour l'organisation de la maintenance des équipements de production des PME/PMI dans le cadre de la MBF (Maintenance Basée sur la Fiabilité)." Université Louis Pasteur (Strasbourg) (1971-2008), 2004. https://publication-theses.unistra.fr/public/theses_doctorat/2004/EFAGA_Eugene-Desire_2004.pdf.
Full textMartin, Florent. "Pronostic de défaillances de pompes à vide - Exploitation automatique de règles extraites par fouille de données." Thesis, Grenoble, 2011. http://www.theses.fr/2011GRENA011.
Full text
This thesis presents a symbolic, rule-based method for system prognosis and details a successful application to complex vacuum pumping systems. More precisely, using historical vibratory data, we first model the behavior of the pumps by extracting a particular type of episode rule, namely First Local Maximum episode rules (FLM-rules). The algorithm that extracts FLM-rules also automatically determines their respective optimal temporal window, i.e. the temporal window in which the probability of observing both the premise and the conclusion of a rule is maximal. A subset of the extracted FLM-rules is then selected in order to predict pumping system failures in a vibratory data stream context. Our contribution consists in selecting the most reliable FLM-rules, continuously matching them in a stream of vibratory data, and building a forecast time interval using the optimal temporal windows of the FLM-rules that have been matched.
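The matching step can be pictured with a toy sketch: a rule fires when all its premise events are observed within a short span, and the forecast interval is offset by the rule's temporal window. This is a schematic illustration of stream matching, not the actual FLM-rule algorithm; the event labels, spans and windows below are invented.

```python
from collections import deque

class EpisodeRule:
    """Toy stand-in for an FLM-rule: if all premise events are seen within
    `premise_span` time units, forecast the conclusion event inside
    [t + w_min, t + w_max] (the rule's optimal temporal window)."""
    def __init__(self, premise, conclusion, premise_span, w_min, w_max):
        self.premise = set(premise)
        self.conclusion = conclusion
        self.premise_span = premise_span
        self.w_min, self.w_max = w_min, w_max

def match_stream(events, rule):
    """events: iterable of (timestamp, label); returns forecast intervals."""
    recent = deque()          # events no older than premise_span
    forecasts = []
    for t, label in events:
        recent.append((t, label))
        while recent and recent[0][0] < t - rule.premise_span:
            recent.popleft()
        if rule.premise <= {lab for _, lab in recent}:
            forecasts.append((t + rule.w_min, t + rule.w_max))
            recent.clear()    # restart matching after a firing
    return forecasts

rule = EpisodeRule(premise={"vib_peak", "temp_rise"},
                   conclusion="bearing_failure",
                   premise_span=5, w_min=10, w_max=30)
stream = [(0, "vib_peak"), (2, "temp_rise"), (20, "noise"),
          (40, "vib_peak"), (48, "temp_rise"), (50, "temp_rise")]
forecasts = match_stream(stream, rule)
```

In the trace above, only the first `vib_peak`/`temp_rise` pair falls inside the premise span, so a single forecast interval is produced.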
Alileche, Lyes. "Use of BIM for the optimal management of existing buildings." Thesis, Lille 1, 2018. http://www.theses.fr/2018LIL1I058/document.
Full text
This research concerns the use of Building Information Modeling (BIM) for the optimal management of existing buildings, in particular social housing. These buildings are characterized by aging, poor energy performance and tenants' low income, and their managers suffer from a lack of data concerning the building assets, which can lead to poor operating decisions. The thesis discusses how BIM can help meet the challenges of existing buildings through the creation of a user-friendly, comprehensive system including information about the building, its equipment and its maintenance. The benefits of the BIM model are illustrated through two case studies, concerning a social housing residence and a research building respectively. The thesis is composed of four parts. The first is a literature review of current facility management methods and of the role of BIM in improving this management. The second describes the steps carried out to build the BIM model of an existing social housing residence of 50 dwellings. The third describes the use of BIM to optimize facility management and building maintenance. The last part describes the development of a dynamic BIM model that uses the as-built BIM and real-time data collected by sensors to inform users and managers about energy consumption and abnormal events.
Ben, Zakour Asma. "Extraction des utilisations typiques à partir de données hétérogènes en vue d'optimiser la maintenance d'une flotte de véhicules." Thesis, Bordeaux 1, 2012. http://www.theses.fr/2012BOR14539/document.
Full text
The present work is part of an industrial project driven by the 2MoRO Solutions company. It aims to develop a high-value service enabling aircraft operators to optimize their maintenance actions. Given the large amount of data available around aircraft exploitation, we analyse the historical events recorded for each aircraft in order to extract maintenance forecasts. The results are used to integrate and consolidate maintenance tasks so as to minimize aircraft downtime and risk of failure. The proposed method involves three steps: (i) streamlining information in order to combine it, (ii) organizing this data for easy analysis, and (iii) extracting useful knowledge in the form of interesting sequences. [...]
Ahmadi, Mehdi. "Gestion patrimoniale des réseaux d’assainissement : Impact de la qualité des données et du paramétrage du modèle utilisé sur l’évaluation de l’état des tronçons et des patrimoines." Thesis, Lyon, INSA, 2014. http://www.theses.fr/2014ISAL0034/document.
Full text
Asset management ensures that the best decisions are made for the elements of an asset in order to reduce risks, optimize performance and minimize costs. Proactive asset management includes the development of prioritization schemes for selecting inspection and rehabilitation needs. In this regard, we have identified the following bottlenecks, which are addressed in this manuscript. First, inspection programs need to be elaborated on the basis of deterioration models in order to be more cost-effective, yet the influence of the availability and quality of data (imprecision, uncertainty, incompleteness) on the models' predictive power has not been studied in depth. We propose two methods to establish the list of the most informative factors from a representative sample of an asset stock. Among other results, this study suggests that having imprecision in the database is preferable to having a database with no information on a specific factor. Second, once segments are inspected, they must be evaluated with a condition grading protocol. Although various condition grading protocols have been developed, none of them captures the sensitivity of managers and stakeholders to the over- or under-estimation of assets' condition grades, even though many sources of uncertainty can be found within the condition assessment process. We propose a procedure to deal with this uncertainty and carry out sensitivity analyses of the parameters it employs. The results of these sensitivity tests are then applied to a part of the Greater Lyon asset stock, and show that the assignment of segments to a condition grade depends heavily on the hypotheses that a manager makes about the under- or over-estimation of the assets' condition. Third, at the moment only a small number of utilities have completely inspected and evaluated their asset stocks.
Therefore, the use of a representative sample of an asset stock to calibrate decision-support models such as deterioration models seems mandatory. In this regard, we must tackle the following issues: 1) How can we draw a sample that reflects the characteristics of the asset stock? and 2) What is the impact of the sample used on the calibration outcomes of these multivariate models? By drawing several samples of different sizes according to different sampling methods and applying the Monte Carlo method, we propose a procedure to study the influence of the available sample on the outcomes of a multivariate model. Through statistical analyses, we show that the calibration process depends strongly on the available sample, which, if not drawn properly, can lead to erroneous conclusions.
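The sampling question can be pictured with a hedged Monte Carlo sketch: repeatedly draw samples from a synthetic asset stock, recalibrate a deliberately simple one-factor model each time, and observe how the spread of the calibrated coefficient grows as the sample shrinks. The stock, the model and all numbers below are invented.

```python
import random
import statistics

def ols_slope(data):
    """Ordinary least-squares slope of y on x."""
    xs = [x for x, _ in data]
    ys = [y for _, y in data]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

rng = random.Random(42)
# Hypothetical asset stock: condition index worsens ~0.05 per year of age
stock = [(age, 0.05 * age + rng.gauss(0, 0.3)) for age in range(1, 101)]

def calibration_spread(sample_size, draws=500):
    """Monte Carlo: recalibrate the model on many random samples and report
    the standard deviation of the calibrated coefficient."""
    slopes = [ols_slope(rng.sample(stock, sample_size)) for _ in range(draws)]
    return statistics.stdev(slopes)

small, large = calibration_spread(10), calibration_spread(60)
```

The spread obtained with 10-segment samples is markedly larger than with 60-segment samples, which is the qualitative point the abstract makes about sample influence on calibration.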
Lebranchu, Alexis. "Analyse de données de surveillance et synthèse d'indicateurs de défauts et de dégradation pour l'aide à la maintenance prédictive de parcs de turbines éoliennes." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAT082/document.
Full text
The wind energy sector has grown rapidly in the last 10 years. The number and size of wind turbines have multiplied, which increases the difficulty and criticality of maintenance and forces the industry to move from corrective and systematic maintenance to condition-based and predictive maintenance. The objective of this research is to develop failure indicators from numerical SCADA data, available at low cost but with a very low sampling rate (10 min), in order to perform online monitoring. A thorough literature review on the monitoring of wind farms using SCADA data shows that two types of approaches are usually proposed. In the first, called mono-turbine, a model of the good behaviour of a turbine is learnt over fault-free periods. With this approach, it is possible to create residuals measuring the difference between the value predicted by the model and the online measurement, and these residuals serve as failure indicators. Mono-turbine models have the peculiarity of using only variables coming from the turbine being monitored. The second approach, called multi-turbine, covers methods that exploit the similarity between machines. Whereas the most recent work mostly suggests building performance curves for every machine on the farm over a period of time and comparing these curves with each other, we make the original proposal to combine both approaches and compare mono-turbine residuals with a farm reference representing the behaviour of the turbines of the farm. We validate these failure indicators extensively by analysing their performance on a database of SCADA recordings spanning 4 years on a farm of 6 machines. We also propose relevant performance criteria allowing a realistic estimation of the gains, and of the possible additional costs, that these indicators would generate if they were integrated into a maintenance tool.
We thus show that the rate of useless interventions associated with false alarms produced by the failure indicators, which cause a heavy additional cost for the operator, can be strongly decreased by the merging of mono-turbine indicators that we propose, while preserving a detection time sufficient for the maintenance teams to plan interventions.
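The combination described above — per-turbine residuals from a healthy-behaviour model, compared against a farm-wide reference — can be sketched as follows. This is a minimal illustration under stated assumptions, not the thesis's actual model: the SCADA variables (active power, ambient temperature, bearing temperature), the linear healthy-behaviour model, and all numbers are simulated for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated 10-min SCADA records for a farm of 6 turbines: bearing
# temperature is modelled from active power and ambient temperature.
n, n_turbines = 1000, 6
power = rng.uniform(0.0, 2000.0, (n, n_turbines))
ambient = rng.uniform(-5.0, 30.0, (n, n_turbines))
temp = 0.01 * power + 0.8 * ambient + rng.normal(0.0, 0.5, (n, n_turbines))

residuals = np.empty_like(temp)
for t in range(n_turbines):
    # Healthy-behaviour model fitted on the first 500 records only,
    # using least squares on the turbine's own variables (mono-turbine).
    X_train = np.column_stack([power[:500, t], ambient[:500, t], np.ones(500)])
    coef, *_ = np.linalg.lstsq(X_train, temp[:500, t], rcond=None)
    X_all = np.column_stack([power[:, t], ambient[:, t], np.ones(n)])
    residuals[:, t] = temp[:, t] - X_all @ coef

# Farm reference: median residual across turbines at each time step;
# the fault indicator is each turbine's deviation from that reference,
# so a farm-wide effect (e.g. weather) does not raise an alarm.
farm_ref = np.median(residuals, axis=1)
indicator = residuals - farm_ref[:, None]
```

Comparing each residual to the farm median rather than to zero is what suppresses common-mode variations, which is one plausible reading of why the merged indicators reduce false alarms.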
Redondin, Maxime. "Approches de classifications à partir de données fortement censurées pour l'analyse de fiabilité et la définition de stratégies de maintenance : application aux marquages routiers dans un contexte de véhicules autonomes." Thesis, Paris Est, 2018. http://www.theses.fr/2018PESC1118/document.
Full textThe quality and reliability of road infrastructure and its equipment play a major role in road safety, especially where autonomous car traffic is concerned. Recent papers from the VEDECOM Institute prove that clear and reliable road marking is important to its decision making. Marking lanes are detected by camera, and these markings need an accurate maintenance strategy to guarantee that they remain perceptible. This thesis proposes different solutions based on reliability and maintenance theory. Today, marking reliability is assessed through retroreflective luminance: a retroreflective marking reflects light from a vehicle headlight back in the direction of the driver, and marking retroreflectivity can be inspected dynamically using a retroreflectometer. The literature of the last thirty years proposes degradation models for retroreflective markings based on regression models; they share a common weakness in that they are difficult to apply directly to a given road network. This thesis presents maintenance models that match current maintenance actions. A marking lane is interpreted as a multi-unit system in which all units are laid in parallel. The global maintenance strategy is based on four points. First, all inspection data are formalised into a single monitoring base; if inspection data are missing or the maintenance history is unavailable, an estimation process based on Agglomerative Hierarchical Clustering (AHC) is proposed. Second, since replacing a whole marking lane is logistically difficult, an AHC of the monitoring base yields several clusters, each with its own degradation model. Clusters are geographically tracked and correlated to specific situations (interchange, urban area, bypass...), which is why a cluster is interpreted as a strategic maintenance area. Third, a Weibull analysis of each cluster is performed. Current retroreflectometers cannot detect the exact failure moment.
This information is statistically censored, and three cases are identified: left, right and interval censoring. To fit the parameters of a Weibull model, an EM algorithm is proposed as an alternative to the maximum likelihood estimator; this algorithm also provides an estimator of the censored marking lifetimes. Lastly, two classic preventive maintenance strategies are proposed: systematic replacement according to age, and replacement conditioned on the current degradation. Each is credible with respect to current maintenance practice: the first proposes a passive management of marking maintenance, while the second ensures an advanced knowledge of the road network over time. On a non-repairable, strongly censored multi-unit system, units that admit the same degradation model are identified by a clustering approach, each cluster receives its own Weibull analysis, and finally an adapted maintenance strategy is derived.
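The interval-censoring situation described above — inspections that only bracket the failure time — can be sketched with a Weibull fit on interval-censored data. The thesis proposes an EM algorithm; as a simpler stand-in, this sketch maximises the interval-censored likelihood directly, where each observation contributes F(right) − F(left) to the likelihood. The inspection period (2 time units) and the true parameters are assumptions for the example.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

true_shape, true_scale = 2.5, 10.0
t = weibull_min.rvs(true_shape, scale=true_scale, size=300, random_state=1)

# Inspections every 2 time units: each lifetime is only known to fall
# between two consecutive inspections (interval censoring).
left = np.floor(t / 2.0) * 2.0
right = left + 2.0

def neg_log_lik(params):
    shape, scale = np.exp(params)          # optimise on log scale for positivity
    # Interval-censored likelihood: P(left < T <= right) = F(right) - F(left)
    p = (weibull_min.cdf(right, shape, scale=scale)
         - weibull_min.cdf(left, shape, scale=scale))
    return -np.sum(np.log(np.clip(p, 1e-12, None)))

res = minimize(neg_log_lik, x0=np.log([1.0, 5.0]), method="Nelder-Mead")
shape_hat, scale_hat = np.exp(res.x)
```

Left- and right-censored records would be handled the same way, contributing F(right) and 1 − F(left) respectively to the likelihood.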
Grandval, Philippe. "Caractérisation des variations génétiques constitutionnelles de signification inconnue dans le syndrome de Lynch." Thesis, Aix-Marseille, 2014. http://www.theses.fr/2014AIXM5004/document.
Full textLynch syndrome is a frequent cancer predisposition with an autosomal dominant mode of inheritance, caused by heterozygous germline mutations in one of the major DNA mismatch repair (MMR) genes (MLH1, MSH2 and MSH6). Over 20 years, the French laboratory network involved in Lynch syndrome identified a total of 6687 variations. Among them, 707, mainly missense variations, remained variants of uncertain significance (VUS) and thus could not be used for reliable genetic counselling. The aim of our study was to develop an algorithm able to classify VUS according to the international consensus (IARC). This algorithm was constructed from criteria usually required for genetic characterisation, such as in silico analysis, phenotypical data (segregation, Amsterdam criteria), MMR status in tumour cells, functional assays, splicing analyses and published data. Data were registered in the French database. As a result of this work, we were able to classify 370 of the 707 variants (52.3%). As part of this work, we also analysed phenotypical data of patients with Lynch syndrome and showed that breast cancer can definitively be excluded from the spectrum of Lynch-related cancers, and that EPCAM mutations, which may lead to Lynch syndrome, are associated with a very low incidence of endometrial cancer and should probably be considered as an allelic disease with specific clinical recommendations.
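An algorithm of the kind described — combining lines of evidence into one of the five IARC classes (1 benign, 2 likely benign, 3 uncertain, 4 likely pathogenic, 5 pathogenic) — can be caricatured as a scoring rule. The evidence names, weights and thresholds below are entirely hypothetical illustrations, not the thesis's actual multifactorial criteria.

```python
# Hypothetical evidence weights (positive = towards pathogenic,
# negative = towards benign). For illustration only.
EVIDENCE_WEIGHTS = {
    "in_silico_deleterious": 1,
    "cosegregation_with_disease": 2,
    "tumor_mmr_deficient": 2,
    "functional_assay_abnormal": 2,
    "splicing_defect": 2,
    "in_silico_neutral": -1,
    "tumor_mmr_proficient": -2,
    "functional_assay_normal": -2,
}

def classify_vus(evidence):
    """Map a list of observed evidence criteria to an IARC-style class:
    1 benign, 2 likely benign, 3 uncertain, 4 likely pathogenic, 5 pathogenic."""
    score = sum(EVIDENCE_WEIGHTS[e] for e in evidence)
    if score >= 6:
        return 5
    if score >= 3:
        return 4
    if score <= -4:
        return 1
    if score <= -2:
        return 2
    return 3  # remains a VUS: evidence insufficient either way
```

With such a rule, a variant only leaves class 3 when several independent lines of evidence agree, which mirrors why roughly half of the 707 VUS could be reclassified and the rest could not.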
Liu, Yinling. "Conception et vérification du système d'Information pour la maintenance aéronautique." Thesis, Lyon, 2019. http://www.theses.fr/2019LYSEI133.
Full textOperational support is one of the most important aspects of aeronautical maintenance. It aims to provide a portfolio of services to implement maintenance with a high level of efficiency, reliability and accessibility. One of the major difficulties in operational support is that there is no platform integrating all aircraft maintenance processes so as to reduce costs and improve the level of service. It is therefore necessary to build an autonomous aircraft maintenance system in which all maintenance information can be collected, organised, analysed and managed in a way that facilitates decision-making. To do so, an innovative methodology has been proposed, covering the modelling, simulation, formal verification and performance analysis of this autonomous system. Three axes were addressed in this thesis. The first concerns the design and simulation of an autonomous system for aeronautical maintenance: we offer an innovative design that supports automatic decision-making for maintenance planning. The second axis is the verification of the simulation models: we propose a comprehensive approach for verifying both the global and the operational behaviours of the systems. The third axis focuses on the performance analysis of the simulation systems: we propose to combine an agent-based simulation system with the Fuzzy Rough Nearest Neighbour approach in order to achieve efficient classification and prediction of aircraft maintenance failures with missing data. Finally, simulation models and systems have been proposed, and simulation experiments illustrate the feasibility of the proposed approach.
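The third axis — neighbour-based classification of failure records with missing data — can be illustrated with a much simpler stand-in than the Fuzzy Rough Nearest Neighbour method: a k-nearest-neighbour classifier whose distance is computed only over the features both records have observed (a partial-distance strategy). The toy data, the number of features and the 15% missingness rate are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy failure records: 2 classes, 4 features, some values missing (NaN).
X = rng.normal(0.0, 1.0, (200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
X_miss = X.copy()
X_miss[rng.random(X.shape) < 0.15] = np.nan

def partial_distance(a, b):
    """Euclidean distance over jointly observed features, rescaled to
    compensate for the number of missing coordinates."""
    ok = ~np.isnan(a) & ~np.isnan(b)
    if not ok.any():
        return np.inf
    d = a[ok] - b[ok]
    return np.sqrt(np.sum(d * d) * len(a) / ok.sum())

def knn_predict(query, X_train, y_train, k=5):
    dists = np.array([partial_distance(query, x) for x in X_train])
    nearest = np.argsort(dists)[:k]
    return np.bincount(y_train[nearest]).argmax()   # majority vote

# Leave-one-out evaluation on the first 50 records.
preds = np.array([knn_predict(X_miss[i], np.delete(X_miss, i, 0),
                              np.delete(y, i), k=5) for i in range(50)])
accuracy = (preds == y[:50]).mean()
```

The fuzzy-rough variant additionally weights neighbours by membership degrees rather than a hard majority vote, but the handling of missingness through partial comparisons is the shared idea.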