Academic literature on the topic "Réparation des données"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference papers, and other scholarly sources on the topic "Réparation des données".
Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Réparation des données"
Cachera, Faustine, and Astrid Wagner. "Le droit à réparation des victimes d’un traitement illicite de données : les précisions apportées par la CJUE". Pin Code N° 16, no. 4 (December 18, 2023): 18–22. http://dx.doi.org/10.3917/pinc.016.0018.
Marchand, Anne. "Réparation et prévention, des liens ambivalents". Regards N° 61, no. 1 (July 11, 2023): 151–62. http://dx.doi.org/10.3917/regar.061.0151.
Agbodo, Jean Paul. "Les Actions en Responsabilité Médicale Pour Faute de Diagnostic: Aspects de Droit Comparé Français et Ivoirien". European Scientific Journal, ESJ 19, no. 26 (September 30, 2023): 30. http://dx.doi.org/10.19044/esj.2023.v19n26p30.
Thébaud-Mony, Annie. "Les fibres courtes d’amiante sont-elles toxiques ? Production de connaissances scientifiques et maladies professionnelles". Sciences sociales et santé 28, no. 2 (2010): 95–114. http://dx.doi.org/10.3406/sosan.2010.1964.
Chevalier, A., J. Brière, M. Feurprier, F. Paboeuf, and E. Imbernon. "Mise en évidence des secteurs d’activité économique à haut risque d’accident du travail : utilisation d’un outil de surveillance construit à partir des données de réparation des régimes de sécurité sociale". Archives des Maladies Professionnelles et de l'Environnement 73, no. 5 (November 2012): 720. http://dx.doi.org/10.1016/j.admp.2012.09.039.
Nisse, Catherine, Natalie Vongmany, Nadège Lepage, Serge Faye, Lynda Larabi, and Juliette Bloch. "Pathologies en relation avec le travail (PRT) dans le secteur du Commerce - réparation d’automobiles et de motocycles : données du Réseau national de vigilance et de prévention des pathologies professionnelles (RNV3P) 2001–2018". Archives des Maladies Professionnelles et de l'Environnement 81, no. 5 (October 2020): 737–38. http://dx.doi.org/10.1016/j.admp.2020.03.803.
Méplan, Catherine, Gerald Verhaegh, Marie-Jeanne Richard, and Pierre Hainaut. "Metal ions as regulators of the conformation and function of the tumour suppressor protein p53: implications for carcinogenesis". Proceedings of the Nutrition Society 58, no. 3 (August 1999): 565–71. http://dx.doi.org/10.1017/s0029665199000749.
Zegveld, Liesbeth. "Remedies for victims of violations of international humanitarian law". International Review of the Red Cross 85, no. 851 (September 2003): 497–527. http://dx.doi.org/10.1017/s0035336100183790.
Brun, Philippe. "Personnes et préjudice". Colloque 33, no. 2 (November 24, 2014): 187–209. http://dx.doi.org/10.7202/1027451ar.
Montcho, Bernard, and Séraphin Atidegla Capo. "Perceptions Paysannes et Caractérisation de la Motorisation des Exploitations Agricoles dans la Commune de Kétou". European Scientific Journal, ESJ 19, no. 20 (July 31, 2023): 124. http://dx.doi.org/10.19044/esj.2023.v19n20p124.
Theses on the topic "Réparation des données"
Tzompanaki, Aikaterini. "Réponses manquantes : Débogage et Réparation de requêtes". Thesis, Université Paris-Saclay (ComUE), 2015. http://www.theses.fr/2015SACLS223/document.
Texto completoWith the increasing amount of available data and data transformations, typically specified by queries, the need to understand them also increases. “Why are there medicine books in my sales report?” or “Why are there not any database books?” For the first question we need to find the origins or provenance of the result tuples in the source data. However, reasoning about missing query results, specified by Why-Not questions as the latter previously mentioned, has not till recently receivedthe attention it is worth of. Why-Not questions can be answered by providing explanations for the missing tuples. These explanations identify why and how data pertinent to the missing tuples were not properly combined by the query. Essentially, the causes lie either in the input data (e.g., erroneous or incomplete data) or at the query level (e.g., a query operator like join). Assuming that the source data contain all the necessary relevant information, we can identify the responsible query operators formingquery-based explanations. This information can then be used to propose query refinements modifying the responsible operators of the initial query such that the refined query result contains the expected data. This thesis proposes a framework targeted towards SQL query debugging and fixing to recover missing query results based on query-based explanations and query refinements.Our contribution to query debugging consist in two different approaches. The first one is a tree-based approach. First, we provide the formal framework around Why-Not questions, missing from the state-of-the-art. Then, we review in detail the state-of-the-art, showing how it probably leads to inaccurate explanations or fails to provide an explanation. We further propose the NedExplain algorithm that computes correct explanations for SPJA queries and unions there of, thus considering more operators (aggregation) than the state of the art. 
Finally, we experimentally show that NedExplain is better than the both in terms of time performance and explanation quality. However, we show that the previous approach leads to explanations that differ for equivalent query trees, thus providing incomplete information about what is wrong with the query. We address this issue by introducing a more general notion of explanations, using polynomials. The polynomial captures all the combinations in which the query conditions should be fixed in order for the missing tuples to appear in the result. This method is targeted towards conjunctive queries with inequalities. We further propose two algorithms, Ted that naively interprets the definitions for polynomial explanations and the optimized Ted++. We show that Ted does not scale well w.r.t. the size of the database. On the other hand, Ted++ is capable ii of efficiently computing the polynomial, relying on schema and data partitioning and advantageous replacement of expensive database evaluations by mathematical calculations. Finally, we experimentally evaluate the quality of the polynomial explanations and the efficiency of Ted++, including a comparative evaluation.For query fixing we propose is a new approach for refining a query by leveraging polynomial explanations. Based on the input data we propose how to change the query conditions pinpointed by the explanations by adjusting the constant values of the selection conditions. In case of joins, we introduce a novel type of query refinements using outer joins. We further devise the techniques to compute query refinements in the FixTed algorithm, and discuss how our method has the potential to be more efficient and effective than the related work.Finally, we have implemented both Ted++ and FixTed in an system prototype. The query debugging and fixing platform, short EFQ allows users to nteractively debug and fix their queries when having Why- Not questions
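To make the idea of query-based explanations concrete, here is a minimal, hypothetical sketch in Python (not the thesis's Ted/Ted++ algorithms): for each source row that could have produced a missing answer, we record which selection conditions rejected it; any such set of conditions is one combination that could be fixed, playing the role of a single term of a polynomial explanation. The table, query, and condition names are invented for illustration.

```python
# Hypothetical source table for the "missing database books" example.
books = [
    {"title": "DB Systems", "category": "database", "price": 80},
    {"title": "Query Repair", "category": "database", "price": 45},
]

# Query: SELECT title FROM books WHERE category = 'database' AND price < 50
conditions = {
    "c_category": lambda r: r["category"] == "database",
    "c_price": lambda r: r["price"] < 50,
}

def why_not(rows, conds):
    """For each filtered-out row, return the names of the conditions it violates."""
    explanations = []
    for row in rows:
        failed = frozenset(name for name, c in conds.items() if not c(row))
        if failed:  # this row was excluded by exactly these conditions
            explanations.append((row["title"], failed))
    return explanations

for title, failed in why_not(books, conditions):
    print(title, "->", sorted(failed))
# "DB Systems" is missing only because c_price rejects it; relaxing the
# price threshold (a query refinement) would bring it back.
```

A refinement step, in this simplified view, would pick one failed-condition set and adjust the constants in those conditions so the row passes.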
Martinez, Matias. "Extraction and analysis of knowledge for automatic software repair". Thesis, Lille 1, 2014. http://www.theses.fr/2014LIL10101/document.
Bug fixing is a frequent activity in the software life cycle. It aims at removing the gap between the expected behavior of a program and what it actually does. In recent years, several automatic software repair approaches have emerged to synthesize bug fixes automatically. Unfortunately, bug fixing can be hard and expensive even for automatic program repair approaches. For example, to repair a given bug, a repair technique could spend unbounded time searching for a fix among a large number of candidate fixes. In this thesis, we aim at improving the repairability of bugs, that is, at increasing the number of bugs repaired by repair approaches. First, we concentrate on the study of repair search spaces, i.e., all possible solutions for a fix. We aim at adding strategies to repair approaches to optimize the search for solutions in the repair search space, and we present a strategy that reduces the time to find a fix by consuming information extracted from repairs done by developers. Then, we focus on the evaluation of automatic repair approaches, introducing methodologies and evaluation procedures that yield meaningful evaluations. We first define a methodology for building defect datasets that minimizes the possibility of biased results, since the way a dataset is built impacts the outcome of an approach's evaluation. We present a dataset that includes a particular kind of defect, if-condition defects, and then measure the repairability of this kind of defect by evaluating three state-of-the-art automatic software repair approaches.
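The first strategy — consuming information extracted from developer repairs to guide the search — can be illustrated with a toy sketch. The fix kinds and counts below are invented for illustration and are not the thesis's data: the idea is simply that candidate repair kinds are tried in order of how often humans historically applied them.

```python
from collections import Counter

# Hypothetical corpus of past human bug fixes, each tagged with a fix kind.
past_fixes = ["change_condition", "add_null_check", "change_condition",
              "change_method_call", "add_null_check", "change_condition"]
frequency = Counter(past_fixes)

# Candidate repair kinds for a new bug: explore the historically most
# frequent kinds first, shrinking the expected time to reach a valid fix.
candidates = ["change_method_call", "add_null_check", "change_condition"]
ordered = sorted(candidates, key=lambda k: -frequency[k])
print(ordered)
# → ['change_condition', 'add_null_check', 'change_method_call']
```

A real repair tool would attach such priors to its search loop; here the ordering alone conveys the principle.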
Obry, Tom. "Apprentissage numérique et symbolique pour le diagnostic et la réparation automobile". Thesis, Toulouse, INSA, 2020. http://www.theses.fr/2020ISAT0014.
Clustering is one of the methods of unsupervised learning that aims to partition a data set into homogeneous groups according to a similarity criterion; the data in each group then share common characteristics. DyClee is a classifier that builds a classification from numerical data arriving in a continuous stream and provides an adaptation mechanism to update this classification, thus performing dynamic clustering that follows the evolution of the monitored system or process. Nevertheless, considering numerical attributes alone does not cover all fields of application. With this generalization objective, this thesis proposes on the one hand an extension to nominal categorical data, and on the other hand an extension to mixed data. Hierarchical clustering approaches are also proposed to assist experts in interpreting the obtained clusters and validating the generated partitions. The presented algorithm, called Mixed DyClee, can be applied in various domains; in this thesis, it is used in the field of automotive diagnosis.
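As a flavor of what an extension to mixed data requires, here is a minimal Gower-style dissimilarity for records mixing numeric and nominal attributes. This is illustrative only and not the actual Mixed DyClee mechanism; the vehicle-diagnosis fields are hypothetical.

```python
def mixed_distance(a, b, ranges):
    """Average per-attribute dissimilarity over numeric and nominal fields.

    Numeric fields (those listed in `ranges`) contribute a range-normalized
    absolute difference; nominal fields contribute 0 on a match, 1 otherwise.
    """
    total = 0.0
    for key, value in a.items():
        if key in ranges:                       # numeric attribute
            lo, hi = ranges[key]
            total += abs(value - b[key]) / (hi - lo)
        else:                                   # nominal attribute
            total += 0.0 if value == b[key] else 1.0
    return total / len(a)

# Hypothetical diagnosis records: one numeric field, two nominal fields.
r1 = {"mileage_km": 60000, "engine": "diesel", "fault": "injector"}
r2 = {"mileage_km": 90000, "engine": "diesel", "fault": "turbo"}
print(mixed_distance(r1, r2, ranges={"mileage_km": (0, 300000)}))
# smaller values mean the records are more likely to share a cluster
```

Any clustering scheme over mixed data needs some such combined measure; the normalization keeps numeric and nominal contributions on comparable scales.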
Tahrat, Sabiha. "Data inconsistency detection and repair over temporal knowledge bases". Electronic Thesis or Diss., Université Paris Cité, 2021. http://www.theses.fr/2021UNIP5209.
We investigate the feasibility of automated reasoning over temporal DL-Lite (TDL-Lite) knowledge bases (KBs). We translate TDL-Lite KBs into a fragment of first-order logic and into LTL, and apply off-the-shelf LTL and FO-based reasoners to check satisfiability. We conduct various experiments to analyse the runtime performance of different reasoners on toy scenarios and on randomly generated TDL-Lite KBs, as well as the size of the LTL translation. To improve reasoning performance on large ABoxes, we also propose an approach for abstracting temporal assertions in KBs. We run several experiments to assess the effectiveness of this technique, measuring the gain in terms of the size of the translation and the number of ABox assertions and individuals, as well as the new runtime of some solvers on the abstracted KBs. Lastly, in an effort to make the usage of TDL-Lite KBs a reality, we present a fully fledged tool with a graphical interface for designing them. The interface is based on conceptual modeling principles and is integrated with our translation tool and a temporal reasoner. This thesis also addresses the problem of handling inconsistent data in Temporal Description Logic (TDL) knowledge bases. Considering the data part of the knowledge base as the source of inconsistency over time, we propose an ABox repair approach, the first work to handle repair in TDL knowledge bases. Our goal is twofold: 1) detect temporal inconsistencies and 2) propose a temporal repair of the data. For inconsistency detection, we propose a reduction from TDL to DL which provides a tight NP-complete upper bound for TDL concept satisfiability and allows the use of highly optimized DL reasoners that can produce precise explanations (the set of inconsistent data assertions).
Thereafter, from the obtained explanation, we propose a method for automatically computing the best repair in the temporal setting, based on the allowed rigid predicates and the temporal order of the assertions.
Yu, Mulin. "Reconstruction et correction de modèles urbains à l'aide de structures de données cinétiques". Thesis, Université Côte d'Azur, 2022. http://www.theses.fr/2022COAZ4077.
Compact and accurate digital 3D models of buildings are commonly used by practitioners for visualizing existing or imaginary environments, for physical simulations, or for the fabrication of urban objects. Generating such ready-to-use models is, however, a difficult problem. When created by designers, 3D models usually contain geometric errors whose automatic correction is a scientific challenge. When created from data measurements, typically laser scans or multiview images, the accuracy and complexity of the models produced by existing reconstruction algorithms often do not meet practitioners' requirements. In this thesis, I address this problem with two algorithms: one for repairing the geometric errors contained in urban-specific formats of 3D models, and one for reconstructing compact and accurate models from input point clouds generated by laser scanning or multiview stereo imagery. The key component of both algorithms is a space-partitioning data structure able to decompose space into polyhedral cells in a natural and efficient manner. This data structure is used both to correct geometric errors by reassembling the facets of defect-laden 3D models, and to reconstruct concise 3D models from point clouds with a quality approaching that of models produced with Computer-Aided Design interactive tools. My first contribution is an algorithm to repair different types of urban models. Prior work, which traditionally relies on local analysis and heuristic geometric operations on mesh data structures, is typically tailor-made for specific 3D formats and urban objects. We propose a more general method that processes different types of urban models without tedious parameter tuning. The key idea lies in the construction of a kinetic data structure that decomposes 3D space into polyhedra by extending the facets of the imperfect input model.
Such a data structure allows us to rebuild all the relations between the facets in an efficient and robust manner. Once built, the cells of the polyhedral partition are regrouped by semantic class to reconstruct the corrected output model. I demonstrate the robustness and efficiency of the algorithm on a variety of real-world defect-laden models and show its competitiveness with respect to traditional mesh repair techniques on both Building Information Modeling (BIM) and Geographic Information Systems (GIS) data. My second contribution is a reconstruction algorithm inspired by the Kinetic Shape Reconstruction method, which improves the latter in several ways. In particular, I propose a data-fitting technique for detecting planar primitives from unorganized 3D point clouds. Departing from an initial configuration, the technique refines both the continuous plane parameters and the discrete assignment of input points to them by seeking high fidelity, high simplicity, and high completeness. The solution is found by an exploration mechanism guided by a multi-objective energy function, with transitions within the large solution space handled by five geometric operators that create, remove, and modify primitives. I demonstrate its potential not only on buildings but on a variety of scenes, from organic shapes to man-made objects.
Comignani, Ugo. "Interactive mapping specification and repairing in the presence of policy views". Thesis, Lyon, 2019. http://www.theses.fr/2019LYSE1127/document.
Data exchange between sources over heterogeneous schemas is an ever-growing field of study, given the increased availability of data, oftentimes in open access, and the pooling of such data for data mining or learning purposes. However, describing the data exchange process from a source to a target instance defined over a different schema is a cumbersome task, even for users acquainted with data exchange. In this thesis, we address the problem of allowing a non-expert user to specify a source-to-target mapping, and the problem of ensuring that the specified mapping does not leak information forbidden by the security policies defined over the source. To do so, we first provide an interactive process in which users provide small examples of their data and answer simple boolean questions in order to specify their intended mapping. We then provide a process to rewrite this mapping so as to ensure its safety with respect to the source policy views. The first main contribution of this thesis is thus a formal definition of the problem of interactive mapping specification, together with a formal resolution process whose desirable properties are proved. Based on this formal resolution process, practical algorithms are provided. The approach behind these algorithms aims at reducing the number of boolean questions users have to answer by using quasi-lattice structures to order the set of possible mappings to explore, allowing efficient pruning of the space of explored mappings. To improve this pruning, an extension of the approach to the use of integrity constraints is also provided. The second main contribution is a repairing process that ensures that a mapping is "safe" with respect to a set of policy views defined on its source schema, i.e., that it does not leak sensitive information.
A privacy-preservation protocol is provided to visualize the information leaks of a mapping, together with a process to rewrite an input mapping into a safe one with respect to a set of policy views. As in the first contribution, this process comes with proofs of desirable properties. To reduce the number of interactions needed with the user, the interactive part of the repairing process is also enriched with the ability to learn which rewriting users prefer, yielding a completely automatic process. Last but not least, we present extensive experiments on the open-source prototypes built from the two contributions of this thesis.
Rantsoudis, Christos. "Bases de connaissance et actions de mise à jour préférées : à la recherche de consistance au travers des programmes de la logique dynamique". Thesis, Toulouse 3, 2018. http://www.theses.fr/2018TOU30286.
In the database literature it has been proposed to resort to active integrity constraints in order to restore database integrity. Such active integrity constraints consist of a classical constraint together with a set of preferred update actions that can be triggered when the constraint is violated. In the first part of this thesis, we review the main repairing routes that have been proposed in the literature and capture them by means of Dynamic Logic programs. The main tool we employ in our investigations is the recently introduced logic DL-PA, a variant of PDL. We then explore a new, dynamic kind of database repairing whose computational complexity and general properties are compared to the previously established approaches. In the second part of the thesis we leave the propositional setting and adapt the aforementioned ideas to higher-level languages. More specifically, we venture into Description Logics and investigate extensions of TBox axioms by update actions that denote the preferred ways an ABox should be repaired in case of inconsistency with the axioms of the TBox. The extension of the TBox axioms with these update actions constitutes new, active TBoxes. We tackle the problem of repairing an ABox with respect to such an active TBox from both a syntactic and a semantic perspective. Given an initial ABox, the syntactic approach allows us to construct a set of new ABoxes, out of which we then identify the most suitable repairs. For the semantic approach, we once again resort to a dynamic logic framework and view update actions, active inclusion axioms, and repairs as programs.
Given an active TBox aT, the framework allows us to check (1) whether a set of update actions is able to repair an ABox according to the active axioms of aT, by interpreting the update actions locally, and (2) whether an ABox A' is the repair of a given ABox A under the active axioms of aT using a bounded number of computations, by interpreting the update actions globally. After discussing the strong points of each direction, we conclude by combining the syntactic and semantic investigations into a cohesive approach.
Laval, Quentin. "Un environnement de modélisation et d'aide à la décision pour un problème de livraison - collecte sous incertitudes : application à la PFL de l'AP-HM". Thesis, Aix-Marseille, 2017. http://www.theses.fr/2017AIXM0193.
This thesis is part of the logistics project of the Assistance Publique-Hôpitaux de Marseille (AP-HM). The AP-HM opened a logistics platform in April 2013 in order to centralize the production of meals, sterilization, product storage, and linen laundering. The products are then transported in containers, by a transport team, to the four hospitals of Marseille. After the products are consumed by the healthcare units, the used containers must be returned to the logistics platform, where they are disinfected and reintroduced into the production loop. The purpose of this research is to propose a method and a tool to help the transport regulation team manage transport resources. The study takes into account the variability of transport times and the hazards that can occur in the life cycle of a transport tour. To this end, we build a knowledge model of the logistics system using the ASCI methodology and validate it with a simulation model. We then propose a method and a tool for generating the daily tour schedule. This method is an ad hoc solution that integrates the solving of a loading problem, vehicle and crew planning, and the statistical representation and modelling of the variability of transport times in urban areas; indeed, the daily congestion rate can make a transport time vary by a factor of two. Finally, to manage disruptions, we propose a schedule repair method that we model with multi-agent systems. Based on failure scenarios, this last contribution makes it possible to propose the best solution to the transport staff.
Gigot, Sébastien. "Contribution à la conception d’une base de données de maintenabilité opérationnelle dynamique : Proposition de prise en compte des facteurs pénalisants dans l’estimation du MTTR [Mean Time To Repair]". Ecole nationale d'ingénieurs (Saint-Etienne), 2013. http://www.theses.fr/2013ENISE021.
This thesis describes a methodology for assessing the risk of not satisfying maintainability requirements for complex industrial equipment in operation. It provides a better understanding of repair-time overruns linked to operational maintainability activities. Few studies are currently devoted to this topic, which is made complex by the multitude of different activities and by ever-growing requirement levels; maintainability and availability requirements are increasingly common. Our proposal focuses on the evaluation of the critical maintainability criteria involved in a maintainability process, in order to optimize the sequence of actions from the fault to the return to service. The analysis of these inhibiting factors led us to develop a model for estimating the MTTR so as to minimize the drift linked to repair-time overruns. The results, illustrated with specific examples, make it possible to assess the maintainability of the system against the objectives set and to propose, if necessary, risk-reduction actions that minimize system unavailability. This work contributes to the development of an approach to the modeling of complex systems for the assessment of maintenance strategies. It culminates in a decision-support tool for building and meeting maintenance programs by making the most suitable choices. Our work focuses on operational maintainability and on the importance of estimating repair times while taking into account the context in which the system evolves, in order to identify penalizing events. It stresses the importance of the methodology for handling a failure, proposing to reconsider the concept of operational maintainability in order to better control the uncertainties related to the exceedance of repair times.
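As a hedged illustration of the kind of estimation discussed above (the thesis's actual model is not reproduced here), the sketch below computes an MTTR from base repair durations inflated by contextual penalizing factors; the repair log, factor values, and their interpretations are all invented.

```python
def estimate_mttr(repairs):
    """Mean time to repair over a list of (base_hours, factors) pairs.

    Each repair has a nominal base duration; every contextual penalizing
    factor (e.g. access constraints, spare-part delays) multiplies it.
    """
    total = 0.0
    for base_hours, factors in repairs:
        duration = base_hours
        for f in factors:          # each factor inflates the repair time
            duration *= f
        total += duration
    return total / len(repairs)

# Hypothetical repair log: base durations in hours with penalizing factors.
log = [
    (2.0, [1.5]),        # cramped access: +50%
    (4.0, []),           # nominal repair, no penalty
    (3.0, [2.0, 1.2]),   # spare-part delay and night shift
]
print(estimate_mttr(log))   # mean effective repair time in hours
```

Comparing such an effective MTTR against the nominal one (all factor lists empty) quantifies the overrun that penalizing events introduce.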
Khraibani, Hussein. "Modélisation statistique de données longitudinales sur un réseau routier entretenu". Ecole centrale de Nantes, 2010. http://www.theses.fr/2010ECDN0040.
Road transportation has a direct impact on a country's economy. Infrastructures, particularly pavements, deteriorate under the effect of traffic and climate. As a result, they must constantly undergo maintenance, which often requires expensive works. The optimization of maintenance strategies and the scheduling of works necessarily rely on a study that makes use of deterioration evolution laws and accounts for the effect of maintenance on these laws. In this respect, numerous theoretical and experimental works, ranging from linear and nonlinear regressions to more sophisticated methods such as Markov chains, have been conducted. The thesis presents a survey of these models and methods and focuses on the analysis of survival data (MADS), an analysis which has been the objective of important works at LCPC. To account for the fact that current databases contain repeated measurements on each pavement section, the thesis proposes a different approach based on nonlinear mixed-effects models (NLME). It first carries out a comparison between the NLME and MADS models on different databases in terms of goodness of fit and predictive capability, which then allows us to draw conclusions about the applicability of the two models.
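To give a flavor of fitting a deterioration evolution law to observations, here is a deliberately simplified sketch: a coarse grid search fitting d(t) = a·t^b to hypothetical (age, damage) measurements. The thesis's actual MADS and NLME models require proper statistical machinery; everything below, including the data, is illustrative only.

```python
def fit_power_law(data, a_grid, b_grid):
    """Pick (a, b) from the grids minimizing squared error of d = a * t**b."""
    best = None
    for a in a_grid:
        for b in b_grid:
            sse = sum((d - a * t ** b) ** 2 for t, d in data)
            if best is None or sse < best[0]:
                best = (sse, a, b)
    return best[1], best[2]

# Hypothetical pavement-section observations: (age in years, damage index).
obs = [(1, 0.5), (4, 1.0), (9, 1.5)]
a, b = fit_power_law(obs, a_grid=[0.25, 0.5, 1.0], b_grid=[0.25, 0.5, 1.0])
print(a, b)   # the grid point minimizing the squared error
```

A mixed-effects treatment would additionally let `a` and `b` vary per pavement section around population-level values, which is precisely what the repeated measurements call for.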
Books on the topic "Réparation des données"
Monitoring technologies for bridge management. [Brentwood]: Multi Science Pub., 2011.
Ingram, Geoff. High-Performance Oracle: Proven Methods for Achieving Optimum Performance and Availability. John Wiley & Sons, Incorporated, 2009.
High-Performance Oracle: Proven Methods for Achieving Optimum Performance and Availability. Wiley, 2002.
Book chapters on the topic "Réparation des données"
NAZAROV, Anatoly, János SZTRIK, and Anna KVACH. "Résultats récents en files d’attente avec rappels à sources finies avec collisions". In Théorie des files d’attente 1, 247–97. ISTE Group, 2021. http://dx.doi.org/10.51926/iste.9001.ch8.