Dissertations / Theses on the topic 'Systèmes informatiques – Qualité'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Systèmes informatiques – Qualité.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Chabini, Noureddine. "Méthodes pour améliorer la qualité des implantations matérielles de systèmes informatiques." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp05/NQ65350.pdf.
Le, Parc Philippe. "Méthodes et outils informatiques pour la commande locale ou distante de systèmes réactifs." Habilitation à diriger des recherches, Université de Bretagne occidentale - Brest, 2004. http://tel.archives-ouvertes.fr/tel-00496850.
Full textPucel, Xavier. "A unified point of view on diagnosability." Toulouse, INSA, 2008. http://eprint.insa-toulouse.fr/archive/00000237/.
The problem of model-based fault diagnosis in complex systems has received increasing interest over the past decades. Experience has proved that it needs to be taken into account during the system design stage, by means of diagnosability analysis. Diagnosability is the ability of a system to exhibit different symptoms for a set of anticipated fault situations. Several approaches to diagnosability have been developed using different modelling formalisms, yet the reasoning for diagnosability analysis is very similar in all of them. This thesis provides a comparison of these approaches and a unified definition of diagnosability. An original approach to diagnosability analysis, based on partial fault modes, is described and implemented in the context of service-oriented architecture, more precisely on web services. An original generalization of the definition of diagnosability to any set of system states is presented, which accounts for many kinds of properties, such as repair preconditions or quality of service. This work opens perspectives for model-independent diagnosability reasoning, diagnosability based on other types of models, and the integration of diagnosis into a general-purpose supervision tool. Model-based diagnosis and diagnosability of software systems is still a young application domain, and opens many connections with the software safety engineering domain.
Cambolive, Guillaume. "Scrables : un système intelligent d'audit." Toulouse 3, 1993. http://www.theses.fr/1993TOU30237.
Khemaja, Maha. "Scrables : un système de conception et d'exploitation de documents structurés." Toulouse 3, 1993. http://www.theses.fr/1993TOU30275.
Full textChahed, Tijani. "La qualité de service de bout en bout dans l'Internet : mapping de la QoS entre IP et ATM, services intégrés et services différenciés." Versailles-St Quentin en Yvelines, 2000. http://www.theses.fr/2000VERS003V.
Full textGoichon, François. "Equité d'accès aux ressources dans les systèmes partagés best-effort." Phd thesis, INSA de Lyon, 2013. http://tel.archives-ouvertes.fr/tel-00921313.
Full textLayouni, Mouna farah. "Architecture pour la fédération de cercles de confiance dans une approche Système de systèmes." Electronic Thesis or Diss., Paris, CNAM, 2013. http://www.theses.fr/2013CNAM0898.
Mastering the increasing complexity of our socio-economic organizations and technology infrastructure requires ever more integrated information systems. The organization of our advanced societies could no longer grow on reasonable terms without the support of information technology. The different information systems increasingly belong to an informational mesh, forming a complex system of systems that grows more complex day by day and is therefore becoming increasingly vital. This system of systems must meet two objectives: i) implementation of a system by integrating systems offered by different providers; ii) interoperation of systems already in use, in order to obtain new properties from their cooperation, usually with an efficiency increase. These objectives and this complexity cannot be grasped by a purely analytical approach. That is why we advocate throughout this thesis a system-of-systems solution based on an interoperability graph built on a hierarchical ontological comparison (a foundation of trust). The platform of mobile agents associated with this system implements a dynamic search for services that tries to satisfy the quality criteria required by the user.
Gawedzki, Ignacy. "Algorithmes distribués pour la sécurité et la qualité de service dans les réseaux ad hoc mobiles." Paris 11, 2008. http://www.theses.fr/2008PA112240.
Currently available routing protocols for ad hoc networks assume the total absence of malicious participants, although this assumption is seldom true in practical applications. In this work, we look for a way to augment proactive protocols so as to let nodes watch over the network in a distributed fashion and measure the threat represented by every node. This measurement is used to derive a quality-of-service metric, used in turn by the routing protocol to avoid the most suspected nodes, according to all the implemented detection methods. We propose to detect data packet loss, whether intentional or not. Detection is performed by verifying the principle of flow conservation, based on the exchange of packet counters between neighbors; a scalable method for diffusing these values is also proposed. The checking is used to maintain a local degree of distrust, which is diffused across the whole network and recombined by the nodes into a global metric of distrust in each node. The application to the OLSR protocol is described and its performance evaluated by simulation. We show that the solution is efficient and that the impact of the control overhead on the medium capacity remains low. Finally, an experimental platform of OLSR with quality of service and security is presented, aimed at running our solutions in a real setup in order to reveal any problems that may appear with the use of commercial off-the-shelf hardware.
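As a rough illustration of the flow-conservation principle this abstract describes, the sketch below checks that packets entering a node equal packets leaving it; the counter names and values are invented for the example and are not the thesis's actual algorithm.

```python
def conservation_deficit(received_from_neighbors, forwarded_to_neighbors, delivered_locally):
    """Packets entering a node should equal packets leaving it
    (forwarded plus delivered locally). A positive deficit
    suggests the node is dropping traffic."""
    total_in = sum(received_from_neighbors.values())
    total_out = sum(forwarded_to_neighbors.values()) + delivered_locally
    return total_in - total_out

# Counters exchanged between neighbors (illustrative values).
deficit = conservation_deficit(
    received_from_neighbors={"A": 120, "B": 80},
    forwarded_to_neighbors={"C": 150},
    delivered_locally=30,
)
print(deficit)  # 20 packets unaccounted for -> raises the local degree of distrust
```

In the thesis, such a local check feeds a distrust value that is then diffused network-wide; the sketch only shows the local arithmetic.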
Nassrallah, Rabih. "Modèle domanial pour la gestion et le contrôle de la qualité de service de bout en bout dans les réseaux IP hétérogènes." Troyes, 2007. http://www.theses.fr/2007TROY0014.
The uses of the Internet have expanded in recent years with the integration of new IP services. Unlike so-called elastic applications, these advanced services impose constraints on their operation. However, the traditional IP network (Best Effort) was designed to transport information with no guarantee of either delay or bandwidth availability. Several evolutions of the network have nevertheless made it possible to take into account the quality-of-service requirements of applications within a single domain managed by one operator. In this document, we are interested in the problem of end-to-end quality of service over multi-domain networks. We focus on the heterogeneity of quality-of-service control across these domains, which can lead to inconsistent treatment of the IP packets of the same application through the various networks, resulting in a degradation of the service. This thesis proposes a model for the control of end-to-end quality of service and the management of available bandwidth over various networks. This model must help operators offer a homogeneous end-to-end service to flows in a real environment such as the Internet. Our model is based on an evaluation of the end-to-end quality of service prior to the admission of the flow into the network, combined with degradation monitoring over the whole duration of the service.
Lambert, Anthony. "Méthodes pour l'amélioration de la qualité dans l'Internet inter-domaine." Compiègne, 2009. http://www.theses.fr/2009COMP1826.
The Internet is made of thousands of networks gathered into Autonomous Systems (AS) and controlled by various kinds of administrative entities. In order to provide full connectivity and reachability across the Internet, the ASes connect to one another and exchange routing information about their networks through the BGP protocol. This thesis develops new methods to improve the quality of service in the Internet at the AS-level granularity. We first focused on improving our knowledge of inter-domain dynamics through tomography and root cause analysis studies. Then, based on the results of these studies, we developed new timers which, when applied to the BGP protocol, reduce the path exploration phenomenon.
Djouama, Amir. "Contrôle de topologie ambiant dans les réseaux sans fil." Versailles-St Quentin en Yvelines, 2010. http://www.theses.fr/2010VERS0019.
With the evolution of wireless communication systems and the increasing mastery of hardware complexity, it becomes possible to conceive network architectures that are dynamically controllable and equipped with a capacity for ambient decision-making. Within the framework of this thesis, we propose to study and optimize the control of a network made up of mobile nodes which communicate without infrastructure. Two levels of control are considered, one at the level of the lower layers, while the other addresses aspects relating to the higher layers. Dynamic control at the level of the lower layers concerns two aspects: 1) the adaptation of the network topology and the routing to the requests coming from the higher layers; 2) cross-layer optimization, in order to make the best use of the resources of the network, in particular the radio. Dynamic control at the level of the higher layers concerns the discovery of, and the adaptation of the application to, the communication services offered by the lower layers. In a second step, we study admission control. We examine the local parameters of each node which matter for the continuity of sessions, and give an admission-control approach that takes into account the lifetime of nodes and their points of attachment.
Vasilas, Dimitrios. "A flexible and decentralised approach to query processing for geo-distributed data systems." Electronic Thesis or Diss., Sorbonne université, 2021. http://www.theses.fr/2021SORUS132.
This thesis studies the design of query processing systems, across a diversity of geo-distributed settings. Optimising performance metrics such as response time, freshness, or operational cost involves design decisions, such as what derived state (e.g., indexes, materialised views, or caches) to maintain, and how to distribute and where to place the corresponding computation and state. These metrics are often in tension, and the trade-offs depend on the specific application and/or environment. This requires the ability to adapt the query engine's topology and architecture, and the placement of its components. This thesis makes the following contributions: a flexible architecture for geo-distributed query engines, based on components connected in a bidirectional acyclic graph; a common microservice abstraction and API for these components, the Query Processing Unit (QPU), which encapsulates some primitive query processing task, where multiple QPU types exist and can be instantiated and composed into complex graphs; a model for constructing modular query engine architectures as a distributed topology of QPUs, enabling flexible design and trade-offs between performance metrics; Proteus, a QPU-based framework for constructing and deploying query engines; and representative deployments of Proteus with their experimental evaluation.
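A speculative sketch of composing query-processing units into a graph, in the spirit of the architecture described above; the QPU types and behaviour here are invented for illustration, and the actual Proteus API is not shown.

```python
class QPU:
    """A query-processing unit: wraps one primitive task and can be
    composed with other QPUs into a processing graph."""
    def __init__(self, name, fn, inputs=()):
        self.name, self.fn, self.inputs = name, fn, list(inputs)

    def query(self, predicate):
        # Evaluate upstream QPUs first, then apply this unit's task.
        upstream = [q.query(predicate) for q in self.inputs]
        return self.fn(predicate, upstream)

data = [{"k": 1, "v": "a"}, {"k": 2, "v": "b"}, {"k": 3, "v": "c"}]
# Leaf QPU serving raw records.
source = QPU("source", lambda p, _: list(data))
# Filter QPU applying the predicate over its upstream QPU's results.
flt = QPU("filter", lambda p, up: [r for r in up[0] if p(r)], inputs=[source])
# Projection QPU extracting a field from the filtered stream.
proj = QPU("project", lambda p, up: [r["v"] for r in up[0]], inputs=[flt])

print(proj.query(lambda r: r["k"] >= 2))  # ['b', 'c']
```

The point of the design is that each unit stays primitive, and performance trade-offs come from how the graph is shaped and where its nodes are placed.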
Ribamar, Martins Bringel Filho José de. "CxtBAC : une famille de modèles de contrôle d’accès sensible au cotexte pour les environnements pervasifs." Grenoble, 2010. http://www.theses.fr/2010GRENM059.
In pervasive environments, which offer users distributed access to applications and services from anywhere and at any time, new issues arise with regard to access control mechanisms. Generally, existing access control solutions make static user-permission associations and are unaware of the situation (context) when defining and enforcing access control policies. In order to address these issues, we propose a family of context-based access control models, named CxtBAC (Context-Based Access Control), composed of eight conceptual models that can be used as a basis to construct context-based access control solutions. CxtBAC models exploit contextual information as the central concept for assigning permissions to users. In fact, context information can describe the situation of resource owners, resource requestors, resources, and the environment around them. Unlike existing access control proposals such as RBAC-based solutions (Role-Based Access Control), CxtBAC makes access decisions taking into account the contextual information that characterizes the situation of the involved entities (e.g., resource owner and resource requestor). In a CxtBAC access rule, a set of permissions is associated with an access context, and users are dynamically associated with that access context. CxtBAC is independent of the security policy language used to describe access control policies. As part of this work, we have proposed an implementation of CxtBAC policies based on ontologies and inference rules.
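To illustrate the idea of an access rule that binds permissions to a context rather than to a user, here is a minimal sketch in the spirit of CxtBAC; the attribute names, contexts and permissions are invented for the example and do not come from the thesis's eight models.

```python
def context_matches(rule_context, current_context):
    """A rule's context matches when every constraint it states holds
    in the current situation."""
    return all(current_context.get(k) == v for k, v in rule_context.items())

def permissions_for(rules, current_context):
    """Users are associated dynamically: whoever is in a matching
    access context receives that context's permissions."""
    granted = set()
    for rule in rules:
        if context_matches(rule["context"], current_context):
            granted |= rule["permissions"]
    return granted

rules = [
    {"context": {"location": "hospital", "role": "physician"},
     "permissions": {"read_record", "write_record"}},
    {"context": {"location": "home"},
     "permissions": {"read_record"}},
]
print(permissions_for(rules, {"location": "hospital", "role": "physician"}))
```

Note how no rule names a user: changing the situation (say, the requestor's location) changes the granted permissions without editing any user-permission table.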
Elloumi, Imène. "Gestion de la mobilité inter réseaux d'accès et de la qualité de service dans une approche NGN/IMS." Toulouse 3, 2012. http://thesesups.ups-tlse.fr/1852/.
In the new landscape of multi-service convergence of the NGN/IMS approach, and considering the needs of an NGN user who is mobile in a very heterogeneous environment, we propose informational, architectural and organizational adaptations in order to monitor inter-access-network mobility and ensure complete continuity of QoS in such an environment. Indeed, we noticed that the knowledge bases of user profiles lack decision-making information for later analysis; consequently, there is a need to raise the level of abstraction of the available information. Our first contribution is therefore informational: decisional information is added to the HSS base to enrich the knowledge base, expressed in the form of a "QoS profile", where the new information directly informs the decisions to be taken according to the user's profile (QoS and pricing preferences, bandwidth, location, etc.). This new knowledge base allows dynamic adaptations and therefore makes the IMS session more effective. We have modeled the QoS information relating to user profiles and IMS offered services using CIM classes, which we call the "QoS pattern of IMS information". From the architectural point of view, we have simulated new components to intercept the useful missing information needed to manage an IMS session in real time. Our second contribution thus consists in adding the new management components of the IQMS ("Interworking QoS Management Subsystem"), which collect the information extracted from the protocols at all levels in a mobile IMS session and can manage user mobility (QoS handover and QoS interworking).
Our third contribution is an organizational subsystem for QoS management in a multi-provider context, where a user can subscribe to one or several providers according to QoS criteria: the "Interworking QoS Management Subsystem for multiple providers" (IQMSmp).
Houmani, Nesma. "Analyse de la qualité des signatures manuscrites en-ligne par la mesure d'entropie." Phd thesis, Institut National des Télécommunications, 2011. http://tel.archives-ouvertes.fr/tel-00765378.
Full textPham, Thanh Son. "Autonomous management of quality of service in virtual networks." Thesis, Compiègne, 2014. http://www.theses.fr/2014COMP2147/document.
This thesis presents a fully distributed resilient routing scheme for switch-based networks. A failure is treated locally, so other nodes in the network do not need to undertake special actions. In contrast to conventional IP routing schemes, each node routes the traffic on the basis of the entering arc and of the destination. The resulting constraint is that two flows to the same destination entering a node by a common arc have to merge after this arc. We show that this is sufficient for dealing with all single-link failure situations, assuming that the network is symmetric and two-link connected. We model the dimensioning problem with an Integer Linear Program, which can be solved exactly for small networks; we also propose several heuristics for larger networks. Our method generalizes the methods of Xi and Chao, of Li, and of Nelakuditi et al., who proposed similar schemes in the context of IP, and our methods are more efficient than these previous ones. We have also studied the existence of a resilient routing scheme for single-node failure situations in switch-based networks. We further study multi-link failure situations and show that requiring the network to be connected after any failure does not guarantee the existence of a resilient routing scheme as described above.
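A toy sketch of routing keyed on (entering arc, destination), as the abstract describes, where a link failure is repaired locally by falling back to a backup next hop; the topology, table entries and node names are invented for illustration and are not the thesis's ILP-dimensioned tables.

```python
routing = {
    # (entering_arc, destination) -> primary next hop, plus a local backup.
    (("A", "B"), "D"): {"primary": "C", "backup": "E"},
    (("E", "B"), "D"): {"primary": "C", "backup": None},
}

def next_hop(entering_arc, destination, failed_links):
    """Route on (entering arc, destination) only; on failure of the
    outgoing link, switch to the backup without involving other nodes."""
    entry = routing[(entering_arc, destination)]
    here = entering_arc[1]  # the node the packet has just entered
    if (here, entry["primary"]) in failed_links:
        return entry["backup"]  # local repair, no global re-convergence
    return entry["primary"]

print(next_hop(("A", "B"), "D", failed_links=set()))         # C
print(next_hop(("A", "B"), "D", failed_links={("B", "C")}))  # E
```

The merging constraint from the abstract shows up here implicitly: because the table is keyed only on the entering arc and destination, all flows sharing that key necessarily follow the same path onward.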
Boudaoud, Nassim. "Conception d'un système de diagnostic adaptatif en ligne pour la surveillance des systèmes évolutifs." Compiègne, 1997. http://www.theses.fr/1997COMP1060.
Meskens, Nadine. "Contribution à la conception d'un système d'analyse de la qualité de programmes informatiques." Valenciennes, 1991. https://ged.uphf.fr/nuxeo/site/esupversions/6e227419-3072-403c-a486-15046e429a36.
This cost minimization must be preceded by a diagnosis that measures the effort required to achieve good quality and recommends actions to undertake. Many tools have been developed to obtain a quantitative view of software quality, but much work remains to be done, and our research belongs to that stream. We show the limits of metrics and propose an expert-system approach that covers both syntactic and semantic aspects of software. Besides evaluating software quality, the system helps to improve it by highlighting the quality criteria that most penalize the program under review. Given this information, programs can be graded and their maintenance cost reduced, so that human resources can be devoted to developing new applications. In addition, an important effort of metric collection and quality-checklist elaboration has been carried out.
Deslandres, Véronique. "Contribution à l'amélioration de la qualité dans les systèmes de production par un système basé sur la connaissance." Lyon 1, 1993. http://www.theses.fr/1993LYO10177.
Full textKondratyeva, Olga. "Timed FSM strategy for optimizing web service compositions w.r.t. the quality and safety issues." Thesis, Université Paris-Saclay (ComUE), 2015. http://www.theses.fr/2015SACLL004/document.
Service-oriented architecture (SOA), together with the family of Everything-as-a-Service (XaaS) concepts, is nowadays used almost everywhere, and the proper organization of collaborative activities becomes an important challenge. With the goal of bringing the end-user a safe and reliable service with a guaranteed level of quality, the verification and validation of service compositions are of high practical and theoretical interest. In related works, numerous models and techniques have been proposed, but they mostly address functional and non-functional issues in isolation, while the integration of these parameters within a unified formal framework remains an open problem; this therefore became one of the core objectives of this thesis. In our work, we address the problems of web service composition verification and optimization with respect to the functional, quality and safety properties of the composition. Finite state models have proven useful for testing and verification as well as for service quality evaluation at each step of service development. We therefore propose to use the model of a Finite State Machine with Timeouts (TFSM) to integrate the functional service description with time-related quality and safety parameters, and we derive an extension of the model in order to adequately capture the significant nondeterminism caused by the lack of observability and control over third-party component services. For the purpose of component optimization in the composition, we propose a method for deriving the largest solution containing all allowed component service implementations, based on solving a TFSM parallel equation. Further, techniques for extracting restricted solutions with required properties (minimized/maximized time parameters, deadlock- and livelock-safety, similarity to the initially given component, etc.) are proposed.
In cases where the specification of a composite service is provided as a set of functional requirements, possibly augmented with quality requirements, we propose a technique to minimize this set with respect to the component under optimization. Applying the obtained results to more efficient discovery and binding of candidate component services, along with extending the framework to more complex distributed modes of communication, are among the topics for future work.
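A minimal sketch of the timeout idea behind a TFSM, as described in the abstract: besides input-labelled transitions, each state carries a timeout after which the machine moves spontaneously. The states, inputs and timeout values below are invented for illustration and do not reproduce the thesis's formal model.

```python
class TFSM:
    """Toy Finite State Machine with Timeouts: (state, input) transitions
    plus a per-state (delay, target) timeout transition."""
    def __init__(self, transitions, timeouts, initial):
        self.transitions = transitions  # (state, input) -> next state
        self.timeouts = timeouts        # state -> (delay, timeout state)
        self.state = initial

    def step(self, inp, elapsed):
        delay, timeout_state = self.timeouts[self.state]
        if elapsed >= delay:
            # The timeout fires before the input arrives.
            self.state = timeout_state
        self.state = self.transitions[(self.state, inp)]
        return self.state

m = TFSM(
    transitions={("idle", "request"): "busy", ("busy", "done"): "idle",
                 ("error", "request"): "busy"},
    timeouts={"idle": (10, "idle"), "busy": (5, "error"), "error": (10, "error")},
    initial="idle",
)
print(m.step("request", elapsed=1))  # busy
print(m.step("done", elapsed=2))     # idle
```

Time-related quality parameters (such as maximal acceptable response delays) map naturally onto the timeout component, which is what makes the model suitable for combining functional and quality descriptions.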
Grim-Yefsah, Malika. "Gestion des connaissances et externalisation informatique. Apports managériaux et techniques pour l'amélioration du processus de transition : Cas de l’externalisation informatique dans un EPST." Thesis, Paris 9, 2012. http://www.theses.fr/2012PA090047/document.
The research in this thesis deals with the issue of knowledge transfer during the transition process of an outsourced IT project in an EPST: in particular, how to transfer the knowledge, experience and routines related to outsourced activities from the outgoing team to the new incoming team? We focus on the transition because of its significance for outsourcing success, its complexity and theoretical richness, and its limited current understanding, and we chose to approach this problem through knowledge management. In the first part of this thesis, based on the Goal-Question-Metric paradigm, we propose an approach for defining quality metrics covering the given operational requirements. The metrics we define take tacit knowledge into account, using information from the structural analysis of an informal network. In the second phase of this research, we developed a method, relying on knowledge capitalization and theoretical mechanisms of knowledge transfer, and a tool to implement this knowledge transfer process.
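A hedged sketch of a Goal-Question-Metric breakdown of the kind the abstract mentions: the goal, questions and metrics below are invented examples of the paradigm's structure, not the thesis's actual definitions.

```python
# GQM organizes measurement top-down: a goal is refined into questions,
# and each question is answered by concrete metrics.
gqm = {
    "goal": "Assess knowledge transfer during an outsourcing transition",
    "questions": [
        {"q": "Is tacit knowledge reachable through the informal network?",
         "metrics": ["betweenness of outgoing-team members",
                     "number of knowledge brokers"]},
        {"q": "Are the outsourced routines documented?",
         "metrics": ["ratio of documented routines"]},
    ],
}

# Every metric stays traceable to the goal via its question.
all_metrics = [m for q in gqm["questions"] for m in q["metrics"]]
print(len(all_metrics))  # 3
```

The value of the structure is traceability: a metric with no question, or a question with no metric, is immediately visible.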
Gemayel, Charbel El. "Approche comportementale pour la validation et le test système des systèmes embarqués : Application aux dispositifs médicaux embarqués." Thesis, Lyon, INSA, 2014. http://www.theses.fr/2014ISAL0135/document.
Biomedical research seeks sound reasoning for solving medical problems, based on intensive work and great debate. It often deals with beliefs or theories that can be proven, disproven or refined after observations or experiments. The problem is how to run tests without risks for patients, given the variability and uncertainty of a number of parameters (patients, evolution of disease, treatments, etc.). Nowadays, medical treatment relies more and more on embedded devices such as sensors, actuators, and controllers, and depends on the availability and correct functioning of complex electronic systems comprising thousands of lines of code. A mathematical representation of the patient or device is given by a set of variables representing the inputs and outputs, together with a set of equations describing the interaction of these variables. The objective of this research is to develop tools and methodologies for building embedded systems for medical fields. The goal is to model and jointly simulate the medical device and the human body, at least the part of the body involved with the device, in order to analyze the performance and quality of service (QoS) of the interaction between the device and the body. To achieve this goal, our study focused on the following points. After defining a prototype of a new global and flexible architecture for a mathematical model of the human body, able to contain the required data, we propose a new global methodology for modeling and simulating the human body and medical systems, in order to better understand how to model and simulate these systems and to assess the performance and quality of service of all system components. We use two techniques to evaluate the computed QoS value: the first calculates a severity index indicating the severity of the case studied; the second uses a normalization function that represents the simulation as a point, in order to construct a new error grid and use it to evaluate the accuracy of the values measured for patients. Using Keil development tools designed for ARM processors, we defined a new framework with the objective of creating a tester model for the glucose-insulin system, and of defining the basic rules for a tester able to satisfy well-established medical decision criteria. The framework begins by simulating a mathematical model of the human body, developed to operate in the glucose-insulin closed loop; a model of the artificial pancreas is then implemented to control the mathematical model of the human body; finally, a tester model is created in order to analyze the performance of all the components of the glucose-insulin system. We also exploit the suitability of partially observable Markov decision processes for formalizing the planning of clinical management.
Abdallah, Maïssa. "Ordonnancement temps réel pour l'optimisation de la qualité de service dans les systèmes autonomes en énergie." Nantes, 2014. http://archive.bu.univ-nantes.fr/pollux/show.action?id=cf544609-5d3c-45cb-9a2e-5ef19186f58d.
In this thesis, we focus on firm real-time applications that allow some timing constraints not to be met (the ratio of satisfied constraints represents the level of quality of service provided by the system). These constraints are expressed by deadlines, i.e., the dates by which the jobs of the application must have completed their execution. The targeted real-time applications are very diverse, such as multimedia applications or sensor networks, which can occasionally tolerate some data loss. The aim of this thesis is to propose, and validate through simulation, new scheduling strategies to optimize the quality of service. This work builds on previous work in the laboratory that addressed, on one hand, real-time systems without energy considerations but subject to processing overload, and on the other, energy-autonomous systems without overload situations. Our contribution concerns fully autonomous systems powered by ambient energy and subject to both timing and energy constraints. Firstly, we consider a single-frequency uniprocessor system that schedules only periodic tasks (e.g., monitoring/control), powered by an energy reservoir charged from an ambient energy source. The proposed methods, based on the Skip-Over model, provide a solution to the management of both processing overload situations and energy starvation. Secondly, we extend our model to handle aperiodic non-critical tasks, and provide a solution to the problem of minimizing their response time.
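A sketch of the Skip-Over idea the abstract refers to: a periodic task with skip parameter s may skip at most one job in any s consecutive jobs, trading occasional deadline misses for survival when energy is scarce. The admission logic and the energy condition below are illustrative, not the thesis's actual algorithms.

```python
def can_skip(job_index, s, skipped):
    """A job may be skipped only if none of the previous s-1 jobs
    of the same task were skipped (skip parameter s >= 2)."""
    window = range(max(0, job_index - (s - 1)), job_index)
    return all(j not in skipped for j in window)

s = 3
skipped = set()
for j in range(6):
    low_energy = j in (1, 2)  # pretend the reservoir is low for jobs 1 and 2
    if low_energy and can_skip(j, s, skipped):
        skipped.add(j)

print(sorted(skipped))  # [1]: job 2 cannot also be skipped within the window
```

The skip parameter thus bounds the QoS degradation directly: at least s-1 out of every s consecutive jobs are guaranteed to run, however scarce the energy.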
Mehmood, Kashif. "Conception des Systèmes d'Information : une approche centrée sur les Patrons de Gestion de la Qualité." Electronic Thesis or Diss., Paris, CNAM, 2010. http://www.theses.fr/2010CNAM0721.
Conceptual models (CM) serve as the blueprints of information systems, and their quality plays a decisive role in the success of the end system. It has been observed that the majority of IS change requests result from deficient functionalities in the information system. A good analysis and design method should therefore ensure that CMs are correct and complete, as they are the communication medium between the users and the development team. Our approach targets the problems related to conceptual modeling quality by proposing a comprehensive solution. We designed multiple artifacts for different aspects of CM quality: i. Formulation of comprehensive quality criteria (quality attributes, metrics, etc.) by federating the existing quality frameworks and identifying quality criteria for gray areas. Most of the existing literature on CM quality evaluation consists of disparate, autonomous quality frameworks proposing non-converging solutions; we therefore synthesized the existing concepts proposed by researchers and added new ones to formulate a comprehensive quality approach for conceptual models, which also federates the existing frameworks. ii. Formulation of quality patterns to encapsulate past experience and good practices, since selecting the relevant quality criteria (including quality attributes and metrics) for a particular requirement or goal remains tricky for a non-expert user. These quality patterns encapsulate valuable knowledge in the form of established solutions to quality problems in CMs. iii. Design of a guided quality-driven process encompassing methods and techniques to evaluate and improve conceptual models with respect to a specific user requirement or goal.
Our process guides the user in formulating the desired quality goal, helps him or her identify the relevant quality patterns or quality attributes with respect to that goal, and finally evaluates the quality of the model and proposes relevant recommendations for improvement. iv. Development of a software prototype, CM-Quality. Our prototype implements all the above-mentioned artifacts and proposes a workflow enabling its users to evaluate and improve CMs efficiently and effectively. We conducted a survey to validate the selection of quality attributes through the federating activity mentioned above, and a detailed three-step experiment to evaluate the efficacy and efficiency of our overall approach and the proposed artifacts.
Djedaini, Mahfoud. "Automatic assessment of OLAP exploration quality." Thesis, Tours, 2017. http://www.theses.fr/2017TOUR4038/document.
Full textIn a Big Data context, traditional data analysis is becoming more and more tedious. Many approaches have been designed and developed to support analysts in their exploration tasks. However, there is no automatic, unified method for evaluating the quality of support offered by these different approaches. Current benchmarks focus mainly on the evaluation of systems in terms of temporal, energy or financial performance. In this thesis, we propose a model, based on supervised machine learning methods, to evaluate the quality of an OLAP exploration. We use this model to build an evaluation benchmark for exploration support systems, the general principle of which is to allow these systems to generate explorations and then to evaluate them through the explorations they produce.
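The supervised scoring idea in this abstract can be sketched in a few lines. This is an editor's illustration, not the thesis' model: the features, weights and query strings are invented for the example, and a real benchmark would learn the weights from labelled explorations.

```python
# Hedged sketch: scoring an OLAP exploration with a simple supervised model.
# Feature definitions and weights are illustrative assumptions.

def exploration_features(queries):
    """Hand-crafted features over a sequence of OLAP queries (illustrative)."""
    n = len(queries)
    distinct = len(set(queries))
    depth = max(q.count("drill") for q in queries) if queries else 0
    return [n, distinct / max(n, 1), depth]

def score(features, weights, bias=0.0):
    """Linear scoring model; weights would be learned from labelled explorations."""
    return sum(w * f for w, f in zip(weights, features)) + bias

session = ["rollup year", "drilldown month", "drilldown day", "slice region"]
feats = exploration_features(session)
print(score(feats, weights=[0.1, 0.5, 0.3]))
```

In the benchmark described above, such a learned scorer would replace the hand-set weights and rank whole explorations produced by the systems under test.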
Mehmood, Kashif. "Conception des Systèmes d'Information : une approche centrée sur les Patrons de Gestion de la Qualité." Phd thesis, Conservatoire national des arts et metiers - CNAM, 2010. http://tel.archives-ouvertes.fr/tel-00922995.
Full textLéon, Chávez Miguel Angel. "Qualité de service et ordonnancement dans les systèmes de communication temps réel." Vandoeuvre-les-Nancy, INPL, 2000. http://docnum.univ-lorraine.fr/public/INPL_T_2000_LEON_CHAVEZ_M.pdf.
Full textIn this thesis, we are interested in special real-time communication systems called fieldbus networks, i.e., networks without network or transport layers. Most of these systems are designed to satisfy certain user time-related requirements, basically periodic exchanges and sometimes sporadic exchanges. Generally, the requirements are met and guaranteed by a static or off-line configuration of the message scheduling. We propose a dynamic solution in place of the static ones. This solution takes as starting points the Quality of Service architectures defined for wide-area networks and the Integrated Services model of the Internet community, which are generally used at the network layer. We then propose a dynamic resource reservation protocol which can be used by any centralized MAC protocol and for any topology. In addition, this protocol can accommodate various scheduling algorithms. It was specified in terms of state-transition systems and was validated using the ObjectGEODE tool.
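The admission decision behind such a dynamic reservation protocol can be illustrated with the classical utilization bound for periodic streams. This sketch is a stand-in, not the thesis' protocol: the bound sum(C_i/T_i) <= 1 is a textbook EDF-style test, and the stream parameters are invented.

```python
# Hedged sketch: an on-line admission test for periodic message reservations,
# as a centralized MAC scheduler might run it. The utilization bound is a
# standard schedulability result, assumed here for illustration.

def admit(reservations, new_cost, new_period):
    """Accept a new periodic stream (cost, period) if total utilization stays <= 1."""
    util = sum(c / t for c, t in reservations) + new_cost / new_period
    return util <= 1.0

streams = [(2, 10), (1, 5)]           # (transmission time, period)
print(admit(streams, 3, 10))          # utilization 0.2 + 0.2 + 0.3 = 0.7 -> True
print(admit(streams, 7, 10))          # utilization 1.1 -> False
```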
Ould, Sass Mohamed. "Le modèle BGW pour les systèmes temps réel surchargés : Ordonnancement monoprocesseur." Nantes, 2015. https://archive.bu.univ-nantes.fr/pollux/show/show?id=f97b4a19-e66c-4a8b-a74f-b09cf86d6e8c.
Full textReal-time embedded systems are found in various application domains. They have to offer an increasing number of functionalities and to provide the highest Quality of Service despite possible failures due to faults or processing overloads. In such systems, programs are characterized by upper bounds on finishing times, and the QoS is assessed by the ratio of met deadlines. In this thesis, we deal with this issue. We focus on a uniprocessor architecture in the framework of a firm real-time application that accepts deadline misses within some specified limits. Tasks are assumed to be periodic. Our first contribution lies in the proposition of a novel task model called the BGW model. It is drawn from two approaches respectively known as the skip-over model and the Deadline Mechanism. The first copes with transient processing overloads by discarding instances of the periodic tasks in a controlled and pre-specified way; the second provides timing fault tolerance through passive dynamic software redundancy with two versions. We give a feasibility test for this model. In a second part, we describe the behavior of dynamic-priority schedulers based on EDF (Earliest Deadline First) for BGW task sets. A performance analysis is reported, mainly related to QoS evaluation and the measurement of overheads (complexity of the scheduler). The final contribution concerns more sophisticated schedulers that enhance the QoS as well as service balancing.
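The skip-over side of the BGW model lends itself to a small sketch. This is illustrative only: the skip-factor semantics (at most one discarded instance out of every s consecutive ones) follows the published skip-over literature, and the function names are invented here, not taken from the thesis.

```python
# Hedged sketch of the skip-over idea underlying the BGW model: under overload,
# at most one instance out of every s consecutive instances of a periodic task
# may be skipped. Fields and names are illustrative.

def allowed_skips(n_instances, s):
    """Maximum instances of a task that may be discarded among n, skip factor s."""
    return n_instances // s

def qos(n_instances, skipped, s):
    """Ratio of met deadlines, valid only if the skip limit is respected."""
    assert skipped <= allowed_skips(n_instances, s)
    return (n_instances - skipped) / n_instances

print(allowed_skips(20, 4))   # 5 instances may be skipped
print(qos(20, 5, 4))          # 0.75
```

A feasibility test in this style would combine such per-task skip limits with a processor-demand or utilization analysis under EDF.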
Hachem, Nabil. "MPLS-based mitigation technique to handle cyber attacks." Electronic Thesis or Diss., Evry, Institut national des télécommunications, 2014. http://www.theses.fr/2014TELE0013.
Full textCyber attacks cause considerable losses not only for end-users but also for service providers. They are fostered by a myriad of infected resources and mostly rely on network resources for propagation, control or damage. There is an essential need to address these numerous attacks with efficient defence strategies. Researchers have dedicated large resources to this problem without reaching a comprehensive method to protect against network attacks. Defence strategies involve first a detection process, completed by mitigation actions. Research on detection is more active than on mitigation; yet it is crucial to close the security loop with efficient techniques to counter attacks and mitigate their effects. In this thesis, we propose a novel technique to react to attacks that misuse network resources, e.g., DDoS, botnets, worm spreading, etc. Our technique is built upon network traffic management techniques. We use Multiprotocol Label Switching (MPLS) technology to manage the traffic diagnosed by detection processes as being part of a network misuse. The goals of our technique can be summarized as follows: first, to provide the means, via QoS and routing schemes, to segregate suspicious flows from legitimate traffic; and second, to take control over suspicious flows. We profit from enhancements to inter-domain MPLS to permit cooperation among providers, building a large-scale defence mechanism. We develop a system to complete the management aspects of the proposed technique. This system performs tasks such as alert data extraction, strategy adaptation and equipment configuration. We model the system using a clustering method and a policy language in order to consistently and automatically manage the mitigation context and the environment in which the proposed technique runs. Finally, we show the applicability of the technique and the system through simulation. We evaluate and analyse the QoS and financial impacts inside MPLS networks.
The application of the technique demonstrates its effectiveness and reliability, not only in alleviating attacks but also in providing financial benefits for the different players in the mitigation chain, i.e., the service providers.
Costa, João Lino. "Méthodologie pour une amélioration de la qualité de production au Portugal." Chambéry, 2000. http://www.theses.fr/2000CHAMS004.
Full textBen, Hedia Belgacem. "Analyse temporelle des systèmes d'acquisition de données : une approche à base d'automates temporisés communicants et d'observateurs." Lyon, INSA, 2008. http://theses.insa-lyon.fr/publication/2008ISAL0111/these.pdf.
Full textAl-Moussa, Yasser. "Un langage de spécification de la connaissance statistique dans un cadre de système expert de contrôle de qualité." Aix-Marseille 3, 1991. http://www.theses.fr/1991AIX32036.
Full textThis work aims to define a model of an expert system for the representation and resolution of statistical quality control problems (the case of acceptance control by attributes). The structure of this model is based on the definition of a specification language for statistical knowledge, used to represent and solve the problem through the statistical decision model (design methods for decision rules according to the different statistical principles). Statistical knowledge is sometimes heuristic, so it is represented by production rules; algorithmic knowledge is represented by processes ordered by the rules. Knowledge of the same type is grouped in the same module. Each module carries out one or more tasks, according to a general reasoning strategy. The expertise in the domain (the design method of a sampling plan) is represented by strategies, in a "control strategies" module. The knowledge in these modules can be given a formal representation in the specification language, through the intermediary of the module for the statistical representation of the control environment and objectives, with the design of the statistical decision model.
Denis, Éloïse. "Objets-tests numériques pour le contrôle de qualité des systèmes de planification géométrique des traitements en radiothérapie." Nantes, 2008. http://www.theses.fr/2008NANT2135.
Full textThis work presents the conception and implementation of new automatic and quantitative quality assessment methods for geometric treatment planning in external radiotherapy. Treatment Planning System (TPS) quality control is mandatory in France and elsewhere because of the risks involved, but the physical tools recommended for this quality control are not well adapted to the situation. We present a new quality control methodology based on the definition of Digital Test Objects (DTOs), introduced directly into the TPS without any acquisition device. These DTOs are consistently defined in both continuous and discrete modes. The TPS responses to input DTOs are compared to theoretical results using figures of merit specifically designed for each elementary control. The tests carried out during this study validate our solutions for the quality assessment of the auto-contouring, auto-margining, isocenter computation, collimator conformation and digitally reconstructed radiograph generation tools, as well as our solutions for marker positioning, collimator and displayed beam rotation, incidence, divergence and dimensions. The quality assessment solutions we propose are thus fast and effective (no acquisition device, reduced manipulation) and more precise, thanks to the continuous-discrete equivalence established at the beginning of the modelling.
Ould, Cheikh Sidi. "Routage avec qualité de service des réseaux Mesh IEEE 802.11s." Versailles-St Quentin en Yvelines, 2013. http://www.theses.fr/2013VERS0039.
Full textQuality of service (QoS) remains a major challenge for improving the performance of mesh networks based on IEEE 802.11s. It is in this context that the contributions of our thesis are set, improving routing and quality of service (QoS) in WMN networks. In order to address this challenge and improve the quality of service of real-time traffic, we propose a new method based on bandwidth reservation, combined with the HWMP (Hybrid Wireless Mesh Protocol). This new method, called MBRWMN (Multi-hop Bandwidth Reservation in WMN), defines a bandwidth reservation technique and a new metric called WAM (Weighted Airtime Metric) for the HWMP protocol. MBRWMN aims to provide the required bandwidth hop-by-hop for real-time traffic and uses admission control to achieve this. Furthermore, to reduce the end-to-end delay and increase throughput, we propose a new metric based on channel diversity combined with the packet transmission delay. This new routing metric, named NMH (New Metric for HWMP protocol), is used by the HWMP protocol. The solution we propose aims to provide a better route by computing the value of the NMH metric implemented by the HWMP protocol. With the same aim of improving mesh network performance, we propose the ODCAM (On Demand Channel Assignment Method) method, which introduces a new channel diversity mechanism based on a hybrid channel allocation method. The MWCETT (Modified Weighted Cumulative Expected Transmission Time) metric is implemented by the HWMP protocol. In order to decrease the end-to-end delay and increase throughput, our method computes the MWCETT metric value along the route between the source and the destination.
Hamze, Mohamad. "Autonomie, sécurité et QoS de bout en bout dans un environnement de Cloud Computing." Thesis, Dijon, 2015. http://www.theses.fr/2015DIJOS033/document.
Full textToday, Cloud Networking is one of the recent research areas within the Cloud Computing research community. The main challenges of Cloud Networking concern Quality of Service (QoS) and security guarantees, as well as their management in conformance with a corresponding Service Level Agreement (SLA). In this thesis, we propose a framework for resource allocation according to an end-to-end SLA established between a Cloud Service User (CSU) and several Cloud Service Providers (CSPs) within a Cloud Networking environment (Inter-Cloud Broker and Federation architectures). We focus on NaaS and IaaS Cloud services. We then propose the self-establishment of several kinds of SLAs and the self-management of the corresponding Cloud resources in conformance with these SLAs, using specific autonomic cloud managers. In addition, we extend the proposed architectures and the corresponding SLAs in order to deliver a service level that takes security guarantees into account. Moreover, we allow autonomic cloud managers to expand the self-management objectives to security functions (self-protection) while studying the impact of the proposed security on the QoS guarantee. Finally, our proposed architecture is validated through different simulation scenarios. In these simulations, we consider videoconferencing and intensive computing applications in order to provide them with QoS and security guarantees in a Cloud self-management environment. The obtained results show that our contributions achieve good performance for these applications. In particular, we observe that the Broker architecture is the most economical while still ensuring QoS and security requirements. In addition, we observe that Cloud self-management reduces violations and penalties, as well as limiting the impact of security on the QoS guarantee.
Bouali, Tarek. "Platform for efficient and secure data collection and exploitation in intelligent vehicular networks." Thesis, Dijon, 2016. http://www.theses.fr/2016DIJOS003/document.
Full textNowadays, the automotive field is witnessing a tremendous evolution due to the increasing growth in communication technologies, environmental sensing and perception aptitudes, and the storage and processing capacities found in recent vehicles. Indeed, a car is becoming a kind of intelligent mobile agent, able to perceive its environment, sense and process data using on-board systems, and interact with other vehicles or the existing infrastructure. These advancements stimulate the development of several kinds of applications to enhance driving safety and efficiency and make travel more comfortable. However, developing such advanced applications relies heavily on the quality of the data and can therefore be realized only with the help of secure data collection and efficient data treatment and analysis. Data collection in a vehicular network has always been a real challenge due to the specific characteristics of these highly dynamic networks (frequently changing topology, vehicle speed and frequent fragmentation), which lead to opportunistic and short-lived communications. Security remains another weak aspect of these wireless networks, since they are by nature vulnerable to various kinds of attacks aiming to falsify collected data and affect their integrity. Furthermore, collected data are not meaningful by themselves and cannot be interpreted and understood if shown directly to a driver or sent to other nodes in the network. They should be treated and analyzed to extract the meaningful features and information needed to develop reliable applications. In addition, the applications developed always have different requirements regarding quality of service (QoS). Several research investigations and projects have been conducted to overcome the aforementioned challenges. However, they have not yet reached perfection and still suffer from some weaknesses.
For this reason, we focused our efforts during this thesis on developing a platform for secure and efficient data collection and exploitation, providing vehicular network users with applications that ease their travel with protected and available connectivity. We first propose a solution to deploy an optimized number of data harvesters to collect data from an urban area. Then, we propose a new secure intersection-based routing protocol to relay data to a destination in a secure manner, based on a monitoring architecture able to detect and evict malicious vehicles. This protocol is then enhanced with a new intrusion detection and prevention mechanism, based on a Kalman filter, that decreases the vulnerability window and detects attackers before they can carry out their attacks. In the second part of this thesis, we concentrate on the exploitation of the collected data by developing an application able to compute the most economical itinerary, in a refined manner, for drivers and fleet management companies. This solution is based on several factors that may affect fuel consumption, provided by vehicles and by other sources on the Internet accessible via specific APIs, and aims to save money and time. Finally, a spatio-temporal mechanism for choosing the best available communication medium is developed. The latter is based on fuzzy logic to ensure a smooth and seamless handover, and considers information collected from the network, users and applications to preserve a high quality of service.
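The Kalman-filter-based detection mentioned above can be illustrated with a scalar filter whose innovation (the gap between a report and the filter's prediction) flags anomalous reports. All numbers, the threshold and the constant-state model are assumptions for the example, not the thesis' detector.

```python
# Hedged sketch: a scalar Kalman filter tracking a vehicle's reported metric;
# a large innovation flags a suspicious report. Illustrative only.

def kalman_step(x, p, z, q=1e-3, r=0.1):
    """One predict/update step; returns new estimate, covariance, innovation."""
    p = p + q                      # predict (constant-state model, noise q)
    innovation = z - x             # gap between measurement and prediction
    k = p / (p + r)                # Kalman gain (measurement noise r)
    x = x + k * innovation         # update estimate
    p = (1 - k) * p                # update covariance
    return x, p, innovation

x, p = 0.0, 1.0
for z in [0.1, 0.12, 0.09, 5.0]:   # last report is anomalous
    x, p, innov = kalman_step(x, p, z)
    if abs(innov) > 1.0:
        print("suspicious report:", z)
```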
Caneva, Sandra. "Contrôle de qualité des systèmes de calcul de distribution de dose en radiothérapie." Toulouse 3, 2001. http://www.theses.fr/2001TOU30189.
Full textAlaya, Bechir. "Gestion de qualité de la service dans les systèmes multimédia distribués." Le Havre, 2012. http://www.theses.fr/2012LEHA0015.
Full textMultimedia applications manage great quantities of data whose exploitation must respect temporal constraints in order to play video packets without interruption. Considering the similarities between multimedia applications and Real-Time Database Systems (RTDBSs), we propose an approach which consists of adapting research done on quality of service (QoS) management in RTDBSs to distributed multimedia systems. We present a method to control the QoS of distributed multimedia applications, which allows the control of the application components (master server, video servers and network). We then propose a new method for QoS management in the master server and in the video servers. From this representation, we propose a feedback control architecture applied to the master server and a replication strategy for video content in case of video server overload. Then, we develop an approach for QoS management in case of network overload. Experiments have made it possible to evaluate the performance of the implementation of our approaches.
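A feedback control architecture of the kind applied to the master server can be illustrated by a minimal proportional controller that regulates the admitted request rate against a target deadline-miss ratio. The gain, target and measured values are illustrative assumptions, not the thesis' controller.

```python
# Hedged sketch: proportional feedback control of admission, in the spirit of
# QoS management loops for real-time servers. Gains are illustrative.

def control_step(admitted_rate, miss_ratio, target=0.05, kp=50.0):
    """Decrease admission when misses exceed the target, increase otherwise."""
    error = target - miss_ratio
    return max(0.0, admitted_rate + kp * error)

rate = 100.0
for observed_miss in [0.20, 0.10, 0.05, 0.02]:
    rate = control_step(rate, observed_miss)
    print(round(rate, 1))
```

A real loop would add integral action and saturation handling; the point here is only the sign of the correction.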
Djelouah, Redouane. "Vérification et réparation interactive de bases de connaissances : le système ICC de traitement des incohérences et des incomplétudes." Angers, 2004. http://www.theses.fr/2004ANGE0031.
Full textTo use a knowledge base, we must verify its quality by ensuring it does not contain what we call an anomaly. In this thesis, the knowledge bases we consider are rule bases. Two types of anomalies revealing serious errors are studied: incoherencies and incompletenesses. An incompleteness shows the necessity to complete the knowledge in the base so as to cover the whole area studied. An incoherence shows the need to reduce the knowledge in the base so as to eliminate contradictory deductions. In the first phase of our work, called verification of the base, we propose, on the one hand, a formal characterization of the incoherency and incompleteness of a rule base and, on the other hand, algorithms to detect these anomalies. Here we propose a new formal characterization of a rule base, called C_Cohérence, which improves on characterizations found in other studies. The second phase of our work, called repair of the base, offers a new method of repairing the contents of a rule base which eliminates the incoherencies and incompletenesses detected in the first phase. This repair takes place in interaction with an expert: we suggest modifications of the base contents to the expert, who then decides whether to apply them one by one. The two phases of verification and repair were implemented in a system called ICC.
Goubertier, Pascal. "Contribution à une meilleure prise en compte du choix des liaisons dans le processus de conception de produits." Châtenay-Malabry, Ecole centrale de Paris, 1993. http://www.theses.fr/1993ECAP0315.
Full textLe, Duc Bao. "A QoI-aware framework for adaptative monitoring." Paris 6, 2010. http://www.theses.fr/2010PA066636.
Full textDistributed and ubiquitous systems are now massively deployed, with 24/7 availability constraints. In this context, monitoring becomes a fundamental, transversal activity in enterprise computing systems. Beyond traditional system administration and load control, new activities increasingly require the automated management of these systems, leading to new monitoring requirements. Specific tasks such as planning, resource allocation and diagnosis base their decisions on dynamic, continuous information coming from the monitoring of services, systems and infrastructures. Moreover, this decision making and, more generally, autonomic system management are now organized around Service Level Agreements (SLAs) referring to Quality of Service (QoS) criteria. As significant QoS fluctuations are commonly experienced by clients during remote service calls, a wide variation in monitoring requirements is also observed, whether in lifetime, accuracy or granularity. This is generally referred to as Quality of Information (QoI), i.e., an expression of the properties required of the monitored data [Buchholz03]. Furthermore, deployment contexts have also grown in size and complexity, from centralized systems coupled to a low-latency network, through large-scale, high-latency inter-enterprise infrastructures, to ubiquitous systems characterized by highly dynamic contexts. The contribution of this thesis is ADAMO (ADAptive MOnitoring), an adaptive monitoring framework for QoI management.
Starting from data sources producing dynamic streams, this framework takes into account monitoring queries that specify the required sources and the QoI required of each, and transforms them into configuration parameters of the underlying sources, while respecting resource constraints such as the total network bandwidth allocated to the transmission of this data. We thus start from the principle that, in a monitoring system, trade-offs are often necessary between the QoI required by decision-making systems and the system resources needed to perform the necessary monitoring. When the requested QoI is too high, the monitoring activity severely degrades the performance of the underlying system. Observing that clients often tend to be interested in the same data, but with different QoI needs, ADAMO relies on a constraint-solving approach to provide static and dynamic mechanisms that manage shared, optimized and flexible access to data through the generation and configuration of QoI and QoS management components. Built from abstract, general entities, ADAMO aims to encompass the capabilities of a large number of monitoring systems. In order to be reusable and extensible, it is also based on a component architecture and provides extension points, thus facilitating the introduction of new functionality. Various parts of the architecture are also configurable, and can be partially generated from high-level descriptions corresponding to the monitoring requirements. Finally, the framework self-adapts dynamically to variations in resource levels. This mechanism is itself built using all the elements of the framework, thus illustrating its abstraction and extension capabilities.
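The QoI/resource trade-off that ADAMO resolves can be illustrated by a toy allocator that shares a bandwidth budget among monitoring queries. The waterfilling-style loop below is an editor's stand-in for ADAMO's constraint-solving approach; query names and units are invented.

```python
# Hedged sketch: sharing a total bandwidth budget among monitoring queries
# that each request a reporting rate. Illustrative, not ADAMO's solver.

def allocate(requested, budget):
    """Give each query an equal share, capped at its request; redistribute leftovers."""
    alloc = dict.fromkeys(requested, 0.0)
    pending = dict(requested)
    while pending and budget > 1e-9:
        share = budget / len(pending)
        for q, want in list(pending.items()):
            grant = min(share, want)
            alloc[q] += grant
            budget -= grant
            if grant >= want - 1e-9:
                del pending[q]          # request fully satisfied
            else:
                pending[q] = want - grant
    return alloc

print(allocate({"cpu": 2.0, "latency": 10.0, "errors": 1.0}, budget=6.0))
```

Queries asking for little get their full rate; greedy queries split the remainder, which mirrors the "shared, optimized access" idea above.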
Boite, Julien. "Acheminement différencié et auto-adaptatif des flux réseau pour la qualité de livraison des services." Troyes, 2012. http://www.theses.fr/2012TROY0032.
Full textCommunication networks provide access to a wide and diversified range of services that have taken a prominent place in our activities. The solicitation of a service generates a set of flows, and the network is responsible for routing and forwarding them. Some flows are subject to particular criticality or quality constraints that must be met to ensure that services are delivered in accordance with users' growing expectations. The network infrastructure combines various means to reach the elements involved in the delivery of services. The unification of these means is an asset from which we should profit better than we do now. In this thesis, we investigate how to make the network able to exploit all these resources in a timely manner by automatically forwarding flows according to their constraints and the network status. This requires monitoring the performance offered by different paths in order to assess their ability to satisfy the respective needs of flows. The network must then be configured dynamically to set up this differentiated, flexible forwarding of flows. We propose measurement and dynamic forwarding mechanisms to integrate adaptive flow processing functionalities within the network. We instantiate these mechanisms for adaptive gateway selection in heterogeneous wireless mesh networks. We also investigate the instability problem that can occur with adaptive routing in a large overlay network; we assess the extent of this problem and propose a mechanism that reduces this instability.
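Adaptive gateway selection with damped instability can be sketched with a hysteresis rule: switch gateways only when the candidate is clearly better than the current one. The relative margin and the delay metric are illustrative assumptions, not the thesis' mechanism.

```python
# Hedged sketch: gateway selection from monitored path performance, with a
# hysteresis margin to damp route flapping. Illustrative only.

def select_gateway(current, measured_delay, margin=0.2):
    """Keep `current` unless another gateway's delay beats it by `margin` (relative)."""
    best = min(measured_delay, key=measured_delay.get)
    if current is None:
        return best
    if measured_delay[best] < (1 - margin) * measured_delay[current]:
        return best
    return current

delays = {"gw1": 30.0, "gw2": 26.0}
print(select_gateway("gw1", delays))   # gw2 is not 20% better -> stay on gw1
delays["gw2"] = 20.0
print(select_gateway("gw1", delays))   # now switch to gw2
```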
Jumel, Fabrice. "Définition et gestion d'une qualité de service pour les applications temps réel." Vandoeuvre-les-Nancy, INPL, 2003. http://www.theses.fr/2003INPL094N.
Full textThe aim of this thesis is to provide support for the design of real-time control systems. The main contribution is to link the quality of the controlled system with the different parts of the control application's implementation. Both analytical analysis and simulation have been studied, using different models of the control application's architecture. These models allow a direct study of the nominal mode and have been adapted in order to propose evaluation techniques for the safety of the control system in the presence of transient faults. Finally, different scheduling techniques, notably based on real-time resource reservation algorithms, are presented in order to increase the quality of such applications.
Shahram, Nourizadeh. "Un système de télésanté contextuel avec support de qualité de service pour le maintien à domicile." Phd thesis, Institut National Polytechnique de Lorraine - INPL, 2011. http://tel.archives-ouvertes.fr/tel-00645544.
Full textLabéjof, Jonathan. "R-* : Réflexivité au service de l'évolution des systèmes de systèmes." Thesis, Lille 1, 2012. http://www.theses.fr/2012LIL10078/document.
Full textIn a connected world, the interconnection of heterogeneous systems becomes a need. Systems of systems (SoS) answer this need by providing global supervision and control over such systems, named sub-systems in the context of an SoS. Some sub-systems face a dynamic environment; therefore, they have to evolve in order to meet new requirements, and they have to perform adaptations whenever availability is a requirement. The main difficulty with evolution is that it can concern a set of sub-systems, or a global vision such as the one provided by the system of systems. Therefore, the problems of evolution and adaptation are important. In the domain of software engineering, this thesis provides the R-* approach, which defends the hypothesis that the more Reflective a system is, the more it is able to adapt, and so, to evolve. Three major contributions and one use case justify R-*. R-DDS and R-MOM add reflective capabilities to asynchronous communication sub-systems, and R-EMS adds reflectivity over a global vision of an SoS, its sub-systems and its environment. R-DDS adds reflectivity to the Data Distribution Service dedicated to real-time and embedded domains. R-MOM goes up in abstraction compared to R-DDS, adding reflective capabilities at the level of asynchronous middleware functionalities. R-EMS is a Reflective Environment Management System that assists in using an SoS. Finally, the use case and evaluation are carried out on a sub-model implementation of THALES' SoS TACTICOS.
Gourmelin, Yves. "Optimisation de l'utilisation des systèmes de traitement des analyses biologiques." Paris 12, 1995. http://www.theses.fr/1995PA120012.
Full textJelassi, Sofiene. "Contrôle adaptatif de la qualité lors du transfert interactif de la voix sur un réseau mobile ad-hoc." Paris 6, 2010. http://www.theses.fr/2010PA066190.
Full textMordal, Karine. "Analyse et conception d'un modèle de qualité logiciel." Paris 8, 2012. http://www.theses.fr/2012PA084030.
Full textWe present a model and prototype for analysis and assess quality of large size industrial software, Squash model based on the empirical model developed by Qualixo and Air France-KLM enterprises. We determine an evaluation methodology for both technical and functional quality principles. Then we formalized a quality model taking into account the industrial applications requirements from this methodology. The implementation of this model is being validated by Generali. From the domain of software development and functional validation, we have formalized a model based on hierarchical quality models from standards ISO 9126 and Square. This model is divided into two levels: the conceptual quality level that defines the main quality principles and the technical quality that defines basic technical rules and measures used to assess the top level. The main issue of quality model conception is how to bring the gap between metrics and quality principles : the metrics meannig often defined for individual components cannot be easily transposed to higher abstraction levels. Furthermore, we need to highlight problems and do not hide progress. To achieve this, we use combinaison and aggregation formulas to asses quality principles from metrics and fuzzy logic to assess quality principles from non-metric measurements