Dissertations / Theses on the topic 'Dependability analysis'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Dependability analysis.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Looker, Nik. "Dependability analysis of Web services." Thesis, Durham University, 2006. http://etheses.dur.ac.uk/2888/.
Full text
Yang, Joseph Sang-chin. "System dependability analysis and evaluation." Master's thesis, This resource online, 1994. http://scholar.lib.vt.edu/theses/available/etd-03172010-020227/.
Full text
Xu, Changyi. "Operational dependability model generation." Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEI129.
Full text
Engineers and researchers have long aimed to assess whether complex industrial systems deliver dependable service. Recent advances in model-based safety assessment, especially structural analysis and component modeling, provide practicable methodologies for assessing dependability, yet the lack of a framework able to assess both the structure and the various behaviors of the components in one unified model prevents a thorough assessment. Moreover, since the system's operations are not considered in these models, the service cannot be assessed for operational dependability either qualitatively or quantitatively. Although several existing assessment tools have shown their potential to model various behaviors in the form of n-state models, or to treat operations such as repair priorities as event sequences in the model, fusing 'structure', 'various behaviors' and 'operations' remains a challenge, highlighting the need for a viable framework that bridges the gap among them both qualitatively and quantitatively. This research studies a formal model generation approach to bridge this gap, making it possible to assess a system's operational dependability by considering its structure, various behaviors, and operations. The composition of component models is introduced in order to generate a global model of the system; the total breakdown states are identified from the resulting failure expression so as to fully account for the system's structure; and operational dependability is then assessed qualitatively by applying trajectory specifications, and quantitatively by developing a cost-evaluation technique termed the Capacity Calculation Fault Tree. Finally, a demonstration on a miniplant system illustrates the broad potential of this research for guaranteeing the dependable service of complex industrial systems.
Zakucia, Jozef. "Metódy posudzovania spoľahlivosti zložitých elektronických systémov pre kozmické aplikácie." Doctoral thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2015. http://www.nusl.cz/ntk/nusl-234213.
Full text
Kabir, Sohag. "Compositional dependability analysis of dynamic systems with uncertainty." Thesis, University of Hull, 2016. http://hydra.hull.ac.uk/resources/hull:13595.
Full text
Rajagopalan, Mohan. "Optimizing System Performance and Dependability Using Compiler Techniques." Diss., Tucson, Arizona : University of Arizona, 2006. http://etd.library.arizona.edu/etd/GetFileServlet?file=file:///data1/pdf/etd/azu%5Fetd%5F1439%5F1%5Fm.pdf&type=application/pdf.
Full text
Das, Olivia. "Performance and dependability analysis of fault-tolerant layered distributed systems." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape17/PQDD_0005/MQ32429.pdf.
Full textDas, Olivia Carleton University Dissertation Engineering Systems and Computer. "Performance and dependability analysis of fault-tolerant layered distributed systems." Ottawa, 1998.
Find full text
Mandak, Wayne S., and Charles A. Stowell. "Dynamic Assembly for System Adaptability, Dependability and Assurance (DASADA) project analysis." Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2001. http://handle.dtic.mil/100.2/ADA393486.
Full text
Thesis advisors: Luqi, Man-Tak Shing, John S. Osmundson, Richard Riehle. Includes bibliographical references (p. 79-81). Also available online.
Kang, Eunsuk. "A Framework for Dependability analysis of software systems with trusted bases." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/58386.
Full text
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 73-76).
A new approach is suggested for arguing that a software system is dependable. The key idea is to structure the system so that highly critical requirements are localized in small subsets of the system called trusted bases. In most systems, the satisfaction of a requirement relies on assumptions about the environment, in addition to the behavior of software. Therefore, establishing a trusted base for a critical property must be carried out as early as the requirements phase. This thesis proposes a new framework to support this activity. A notation is used to construct a dependability argument that explains how the system satisfies critical requirements. The framework provides a set of analysis techniques for checking the soundness of an argument, identifying the members of a trusted base, and illustrating the impact of failures of trusted components. The analysis offers suggestions for redesigning the system so that it becomes more reliable. The thesis demonstrates the effectiveness of this approach with a case study on electronic voting systems.
by Eunsuk Kang.
S.M.
Block, Jan Martin. "Dependability analysis of military aircraft fleet performance in a lifecycle perspective /." Luleå : Luleå University of Technology, 2009. http://pure.ltu.se/ws/fbspretrieve/3074039.
Full text
Rugina, Ana-Elena. "Dependability modeling and evaluation – From AADL to stochastic Petri nets." Phd thesis, Toulouse, INPT, 2007. http://oatao.univ-toulouse.fr/7649/1/rugina.pdf.
Full text
Meadows, Thomas A. "Analysis of F/A-18 engine maintenance costs using the Boeing Dependability Cost Model." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1994. http://handle.dtic.mil/100.2/ADA289983.
Full text
Mehmood, Qaiser. "A Maintainability Analysis of Dependability Evaluation of an Avionic System using AADL to PNML Transformation." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-12807.
Full text
Matos Júnior, Rubens de Souza. "An automated approach for systems performance and dependability improvement through sensitivity analysis of Markov chains." Universidade Federal de Pernambuco, 2011. https://repositorio.ufpe.br/handle/123456789/2451.
Full text
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
Computer systems are constantly evolving to satisfy growth in demand or new user requirements. Administering these systems requires decisions capable of providing the highest levels of performance and dependability metrics with minimal changes to the existing configuration. Performance, reliability, availability, and performability analyses of systems are commonly carried out through analytical models, and Markov chains are among the most widely used mathematical formalisms, allowing metrics of interest to be estimated from a set of input parameters. Sensitivity analysis, when performed at all, is usually done simply by varying the set of parameters over their ranges of values and repeatedly solving the chosen model. Differential sensitivity analysis allows the modeler to find bottlenecks in a more systematic and efficient way. This work presents an automated approach for sensitivity analysis, aiming to guide the improvement of computer systems. The proposed approach can accelerate decision making with respect to optimizing hardware and software tuning, as well as the acquisition and replacement of components. The methodology uses Markov chains as its formal modeling technique, together with the sensitivity analysis of these models, filling some gaps found in the literature on sensitivity analysis. Finally, the sensitivity analysis of selected distributed systems conducted in this work highlights bottlenecks in these systems and provides examples of the accuracy of the proposed methodology, as well as illustrating its applicability.
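The differential sensitivity analysis of Markov chains described in this abstract can be illustrated with a minimal sketch. The code below is a generic textbook construction, not code from the thesis: a two-state availability CTMC with hypothetical failure and repair rates, whose scaled sensitivity to the failure rate is estimated numerically.

```python
import numpy as np

def steady_state(lmbda, mu):
    """Steady-state probabilities of a two-state availability CTMC.

    State 0 = up, state 1 = down; lmbda = failure rate, mu = repair rate.
    """
    Q = np.array([[-lmbda, lmbda],
                  [mu, -mu]])
    # Solve pi @ Q = 0 with sum(pi) = 1 by replacing one balance
    # equation with the normalisation constraint.
    A = np.vstack([Q.T[:-1], np.ones(2)])
    b = np.array([0.0, 1.0])
    return np.linalg.solve(A, b)

def scaled_sensitivity(lmbda, mu, h=1e-6):
    """Scaled sensitivity of availability w.r.t. the failure rate,
    S = (lambda / A) * dA/dlambda, via central finite differences."""
    avail = lambda l: steady_state(l, mu)[0]
    dA = (avail(lmbda + h) - avail(lmbda - h)) / (2 * h)
    return lmbda / avail(lmbda) * dA

lmbda, mu = 0.001, 0.1          # hypothetical per-hour rates
A = steady_state(lmbda, mu)[0]  # availability = mu / (lambda + mu)
S = scaled_sensitivity(lmbda, mu)
print(f"availability = {A:.6f}, scaled sensitivity = {S:.6f}")
# Analytically, S = -lambda / (lambda + mu) for this model.
```

For larger chains the same ranking idea applies: compute such a sensitivity for every input parameter and sort by magnitude to find the bottleneck, which is what an automated approach like the one in this thesis does systematically.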
Martínez Raga, Miquel. "Improving the process of analysis and comparison of results in dependability benchmarks for computer systems." Doctoral thesis, Universitat Politècnica de València, 2018. http://hdl.handle.net/10251/111945.
Full text
Dependability benchmarks are designed to assess, through quantitative performance and dependability attributes, the behavior of systems in the presence of faults. In this type of benchmark, where systems are assessed in the presence of perturbations, not being able to select the most suitable system may have serious implications (economic, reputational, or even loss of lives). For that reason, dependability benchmarks are expected to meet certain properties, such as non-intrusiveness, representativeness, repeatability, and reproducibility, that guarantee the robustness and accuracy of their process. However, despite the importance of comparing systems or components, the field of dependability benchmarking has a persistent problem regarding the analysis and comparison of results. While the main research focus has been on developing and improving experimental procedures to obtain the required measures in the presence of faults, the processes involved in analyzing and comparing results have gone mostly unattended. This has caused many works in this field to analyze and compare the results of different systems in an ambiguous way, as the process followed in the analysis is based on argumentation, or is not even reported. Under these circumstances, benchmark users find it difficult to use these benchmarks and compare their results with those of others, so extending the application of these dependability benchmarks and cross-exploiting results among works is unlikely. This thesis has focused on developing a methodology to assist dependability benchmark performers in tackling the problems present in the analysis and comparison of results of dependability benchmarks. Designed to guarantee the fulfillment of a dependability benchmark's properties, this methodology seamlessly integrates the process of analysis of results into the procedural flow of a dependability benchmark.
Inspired by procedures taken from the field of operational research, this methodology provides evaluators with the means not only to make their process of analysis explicit to anyone, but also more representative of the context at hand. The results obtained from applying this methodology to several case studies in different domains show the actual contributions of this work to improving the process of analysis and comparison of results in dependability benchmarking for computer systems.
Martínez Raga, M. (2018). Improving the process of analysis and comparison of results in dependability benchmarks for computer systems [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/111945
Deming, Philip E. "A generalizability analysis of the dependability of scores for the College Basic Academic Subjects Examination /." free to MU campus, to others for purchase, 2000. http://wwwlib.umi.com/cr/mo/fullcit?p9974622.
Full text
Kumar, Vikas. "An empirical investigation of the linkage between dependability, quality and customer satisfaction in information intensive service firms." Thesis, University of Exeter, 2010. http://hdl.handle.net/10036/3011.
Full textJaved, Muhammad Atif, and UL Muram Faiz UL Muram Faiz. "A framework for the analysis of failure behaviors in component-based model-driven development of dependable systems." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-13886.
Full text
CHESS Project - http://chess-project.ning.com/
Nilsson, Markus. "A tool for automatic formal analysis of fault tolerance." Thesis, Linköping University, Department of Computer and Information Science, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-4435.
Full text
The use of computer-based systems is rapidly increasing, and such systems can now be found in a wide range of applications, including safety-critical applications such as cars and aircraft. To make the development of such systems more efficient, there is a need for tools for automatic safety analysis, such as analysis of fault tolerance.
In this thesis, a tool for automatic formal analysis of fault tolerance was developed. The tool is built on top of the existing development environment for the synchronous language Esterel, and provides an output that can be visualised in the Item toolkit for fault tree analysis (FTA). The development of the tool demonstrates how fault tolerance analysis based on formal verification can be automated. The generated output from the fault tolerance analysis can be represented as a fault tree that is familiar to engineers from traditional FTA. The work also demonstrates that interesting attributes of the relationship between a critical fault combination and the input signals can be generated automatically.
Two case studies were used to test and demonstrate the functionality of the developed tool. A fault tolerance analysis was performed on a hydraulic leakage detection system, which is a real industrial system, but also on a synthetic system, which was modeled for this purpose.
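The fault tree side of the analysis described above can be illustrated with a generic sketch. The code below is not from the thesis or the Item toolkit; it is a textbook computation of a fault tree's top-event probability from its minimal cut sets via inclusion-exclusion, assuming independent basic events with hypothetical probabilities.

```python
from itertools import combinations

def top_event_probability(cut_sets, p):
    """Exact top-event probability of a fault tree from its minimal
    cut sets, using inclusion-exclusion over independent basic events.

    cut_sets: list of frozensets of basic-event names
    p: dict mapping basic-event name -> failure probability
    """
    total = 0.0
    for k in range(1, len(cut_sets) + 1):
        for combo in combinations(cut_sets, k):
            # A union of cut sets fails when all events in the union fail.
            events = frozenset().union(*combo)
            prob = 1.0
            for e in events:
                prob *= p[e]
            total += (-1) ** (k + 1) * prob
    return total

# Hypothetical tree: the top event occurs if (A and B) fail, or C fails.
cut_sets = [frozenset({"A", "B"}), frozenset({"C"})]
p = {"A": 0.1, "B": 0.2, "C": 0.05}
print(top_event_probability(cut_sets, p))  # 0.02 + 0.05 - 0.001 = 0.069
```

Inclusion-exclusion is exact but exponential in the number of cut sets; real FTA tools use approximations or binary decision diagrams for large trees.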
Kabir, Sohag, I. Sorokos, K. Aslansefat, Y. Papadopoulos, Y. Gheraibia, J. Reich, M. Saimler, and R. Wei. "A Runtime Safety Analysis Concept for Open Adaptive Systems." Springer, 2019. http://hdl.handle.net/10454/17416.
Full text
In the automotive industry, modern cyber-physical systems feature cooperation and autonomy. Such systems share information to enable collaborative functions, allowing dynamic component integration and architecture reconfiguration. Given the safety-critical nature of the applications involved, an approach for addressing safety in the context of reconfiguration impacting functional and non-functional properties at runtime is needed. In this paper, we introduce a concept for runtime safety analysis and decision input for open adaptive systems. We combine static safety analysis and evidence collected during operation to analyse, reason and provide online recommendations to minimize deviation from a system's safe states. We illustrate our concept via an abstract vehicle platooning system use case.
This conference paper is available to view at http://hdl.handle.net/10454/17415.
Khatri, Abdul Rafay [Verfasser]. "Development, verification and analysis of a fault injection tool for improving dependability of FPGA systems / Abdul Rafay Khatri." Kassel : Universitätsbibliothek Kassel, 2021. http://d-nb.info/123338449X/34.
Full text
Aysan, Hüseyin. "Fault-Tolerance Strategies and Probabilistic Guarantees for Real-Time Systems." Doctoral thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-14653.
Full text
Jakl, Jan. "Funkční analýza rizik (FHA) 4-místného letounu pro osobní dopravu." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2010. http://www.nusl.cz/ntk/nusl-229297.
Full text
Ding, Kai [Verfasser], Klaus [Gutachter] Janschek, and Antoine [Gutachter] Rauzy. "Zuverlässigkeitsorientierter Entwurf und Analyse von Steuerungssystemen auf Modellebene unter zufälligen Hardwarefehlern : Dependability-oriented Design and Analysis of Control Systems at the Model Level under Random Hardware Faults / Kai Ding ; Gutachter: Klaus Janschek, Antoine Rauzy." Dresden : Technische Universitaet Dresden, 2021. http://d-nb.info/1236990455/34.
Full text
Hoang, Victoria, and Kevin Ly. "Signalfel – Hur kan dessa reduceras? : Analys av driftstörningar i signalsystem på Ostkustbanan." Thesis, KTH, Byggvetenskap, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-174142.
Full text
During the past decades, railway train delays have increased greatly, accompanied by decreasing reliability. The low reliability can be linked to the increased amount of traffic and to lagging maintenance, that is, worn track that remains in service too long. This increases the sensitivity to the kind of fault that stops traffic, usually referred to as a 'signal failure'. A signal failure is an error that can occur in a variety of components within the signalling system. These have been divided into six parts: signalling control, track circuits, beacons, train control systems, level crossings, and the signals themselves. An error in any of these components causes the signals to go to a safe state, which means a halt in traffic. The causes and components that contribute to the low reliability of the railway are highlighted in this work to raise awareness of the problems found in the railway. The focus has been on developing action proposals for the component deemed most sensitive in the signalling system. The results showed that errors occurring in the signalling system are mostly generated by the track circuits. The most common track circuit error is conduction across the insulated joints, which are judged to be the most sensitive component in the signalling system. This is especially true in the Stockholm area, where train traffic is densest and where disturbances affect a large number of travellers. Actions performed on track circuit faults are mainly short-term solutions such as cleaning, checks, or no action at all. Solutions are usually applied only after the error has occurred, which means that the signal failure and its consequences have already affected the traffic. Increasing reliability requires more active and effective maintenance work.
Investments in innovative solutions and action proposals should be made in order to reduce the frequency of disturbances.
Simache, Cristina. "Evaluation de la sûreté de fonctionnement de systèmes Unix et Windows à partir de données opérationnelles : méthode et application." Toulouse 3, 2004. http://www.theses.fr/2004TOU30280.
Full text
Academic and industrial computing environments are mainly based on interconnected heterogeneous systems including a large number of Unix, Windows NT, and Windows 2000 workstations and servers. These environments are designed to facilitate resource sharing and cooperative work between users. However, these benefits may be compromised by failures affecting the communication network, the applications, or the end systems. There is no better way to understand the behavior of computing environments in the presence of faults than by direct measurement, analysis, and assessment based on data obtained from observing their behavior in an operational environment. Our work focuses on the development and implementation of methods for data collection and dependability analysis of log files automatically recorded by the operating systems. The target systems in our study are Unix, Windows NT, and Windows 2000 systems interconnected in a local area network. Besides the definition and implementation of the data collection strategy, the data processing aims to extract the relevant information and to obtain quantitative measures that characterize the target systems from a dependability point of view. We also show how measures assessed from operational data can be integrated into an analytical model allowing the estimation of user-perceived availability. The comparative analysis of measures characterizing the systems and those reflecting users' perceptions represents another original result of our work.
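The kind of measure extraction described in this abstract can be sketched generically. The snippet below is illustrative, not code from the thesis: it assumes outage intervals have already been parsed out of the logs, and the function name, timestamps, and observation window are all hypothetical.

```python
from datetime import datetime

def dependability_measures(window_start, window_end, outages):
    """Estimate availability, MTTR and MTBF over an observation window
    from a list of (down_at, up_at) outage intervals extracted from a
    system log, assumed chronological, non-overlapping, and inside
    the window. Times are returned in seconds."""
    total = (window_end - window_start).total_seconds()
    downtime = sum((up - down).total_seconds() for down, up in outages)
    n = len(outages)
    availability = 1.0 - downtime / total
    mttr = downtime / n if n else 0.0                  # mean time to repair
    mtbf = (total - downtime) / n if n else float("inf")  # mean time between failures
    return availability, mttr, mtbf

fmt = "%Y-%m-%d %H:%M"
outages = [  # hypothetical outages parsed from a log
    (datetime.strptime("2024-01-03 10:00", fmt),
     datetime.strptime("2024-01-03 11:00", fmt)),
    (datetime.strptime("2024-01-20 08:00", fmt),
     datetime.strptime("2024-01-20 08:30", fmt)),
]
a, mttr, mtbf = dependability_measures(
    datetime.strptime("2024-01-01 00:00", fmt),
    datetime.strptime("2024-01-31 00:00", fmt),
    outages)
print(f"A = {a:.5f}, MTTR = {mttr/3600:.2f} h, MTBF = {mtbf/3600:.1f} h")
```

In practice the hard part, as the thesis emphasizes, is the log parsing itself: identifying which recorded events actually constitute failures and restorations.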
Sklenář, Filip. "Analýza provozních rizik nově zaváděných typů letadel." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2014. http://www.nusl.cz/ntk/nusl-231641.
Full textIonescu, Dorina-Romina. "Évaluation quantitative de séquences d’événements en sûreté de fonctionnement à l’aide de la théorie des langages probabilistes." Thesis, Université de Lorraine, 2016. http://www.theses.fr/2016LORR0309/document.
Full text
Dependability studies are often based on the assumption that events (failures and repairs) are independent, and on the analysis of cut sets, which describe the subsets of components causing a system failure. In the case of dynamic systems, where the order of event occurrence has a direct impact on the dysfunctional behaviour, it is important to use event sequences instead of cut sets for dependability assessment. In the first part, a formal framework is proposed. It helps to determine the sequences of events that describe the evolution of the system, and to assess them using the theory of probabilistic languages and the theory of Markov/semi-Markov processes. The assessment includes the calculation of the occurrence probability of the event sequences and of their criticality (cost and length). For the assessment of complex systems with multiple operating/failure modes, a modular approach based on composition operators (choice and concatenation) is proposed. The probability of a global event sequence is evaluated from local Markov/semi-Markov models for each mode of the system. The different contributions are applied to two case studies of growing complexity.
Matos Júnior, Rubens de Souza. "Identification of Availability and Performance Bottlenecks in Cloud Computing Systems: an approach based on hierarchical models and sensitivity analysis." Universidade Federal de Pernambuco, 2016. https://repositorio.ufpe.br/handle/123456789/18702.
Full text
CAPES
The cloud computing paradigm is able to reduce the costs of acquiring and maintaining computer systems, and enables the balanced management of resources according to demand. Hierarchical and composite analytical models are suitable for describing the performance and dependability of cloud computing systems in a concise manner, dealing with the huge number of components which constitute such systems. That approach uses distinct sub-models for each system level, and the measures obtained in each sub-model are integrated to compute the measures for the whole system. Identifying bottlenecks in hierarchical models can nevertheless be difficult, due to the large number of parameters and their distribution among distinct modeling levels and formalisms. This thesis proposes methods for the evaluation and detection of bottlenecks in cloud computing systems. The methodology is based on hierarchical modeling and parametric sensitivity analysis techniques tailored for such a scenario. This research introduces methods to build unified sensitivity rankings when distinct modeling formalisms are combined. These methods are embedded in the Mercury software tool, providing an automated sensitivity analysis framework to support the process. Distinct case studies helped in testing the methodology, encompassing hardware and software aspects of cloud systems, from the basic infrastructure level to applications hosted in private clouds. The case studies showed that the proposed approach is helpful for guiding cloud system designers and administrators in the decision-making process, especially for tune-ups and architectural improvements. The methodology can also be employed through an optimization algorithm proposed here, called Sensitive GRASP. This algorithm aims at optimizing the performance and dependability of computing systems that cannot afford the exploration of all architectural and configuration possibilities to find the best quality of service.
This is especially useful for cloud-hosted services and their complex underlying infrastructures.
Novák, Josef. "Metody analýzy spolehlivostních dat z provozu a zkoušek letadel." Doctoral thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2011. http://www.nusl.cz/ntk/nusl-233972.
Full textBrini, Manel. "Safety-Bag pour les systèmes complexes." Thesis, Compiègne, 2018. http://www.theses.fr/2018COMP2444/document.
Full textAutonomous automotive vehicles are critical systems. Indeed, following their failures, they can cause catastrophic damage to the human and the environment in which they operate. The control of autonomous vehicles is a complex function, with many potential failure modes. In the case of experimental platforms that have not followed either the development methods or the certification cycle required for industrial systems, the probabilities of failure are much greater. Indeed, these experimental vehicles face two problems that impede their dependability, which is the justified confidence that can be had in their correct behavior. First, they are used in open environment, with a very wide execution context. This makes their validation very complex, since many hours of testing would be necessary, with no guarantee that all faults in the system are detected and corrected. In addition, their behavior is often very difficult to predict or model. This may be due to the use of artificial intelligence software to solve complex problems such as navigation or perception, but also to the multiplicity of systems or components interacting and complicating the behavior of the final system, for example by generating behaviors emerging. A technique to increase the safety of these autonomous systems is the establishment of an Independent Safety Component, called "Safety-Bag". This system is integrated between the control application and the actuators of the vehicle, which allows it to check online a set of safety necessities, which are necessary properties to ensure the safety of the system. Each safety necessity is composed of a safety trigger condition and a safety intervention applied when the safety trigger condition is violated. This intervention consists of either a safety inhibition that prevents the system from moving to a risk state, or a safety action to return the autonomous vehicle to a safe state. 
The definition of safety necessities must follow a rigorous method to be systematic. To this end, we carried out a dependability study based on two fault prevention methods, FMEA and HazOp-UML, which focus respectively on the internal hardware and software components of the system and on the road environment and driving process. The result of these risk analyses is a set of safety requirements. Some of these safety requirements can be translated into safety necessities, implementable and verifiable by the Safety-Bag. Others cannot be implemented in the Safety-Bag, which must remain simple so that it is easy to validate. We then carried out fault injection experiments in order to validate some safety necessities and to evaluate the Safety-Bag's behavior. These experiments were done on our Fluence-based robotic vehicle in two different settings: first on the real SEVILLE track, and then on a virtual track simulated by the Scanner Studio software on the VILAD testbed. The Safety-Bag remains a promising but partial solution for autonomous industrial vehicles; it does, however, meet the essential safety needs of experimental autonomous vehicles.
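As a rough illustration of the Safety-Bag idea described in this abstract (a checker interposed between the control application and the actuators, enforcing safety necessities built from a trigger condition and an intervention), here is a minimal Python sketch. The state fields, the speed threshold and the example necessity are invented for illustration, not taken from the thesis:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class SafetyNecessity:
    """A safety trigger condition paired with a safety intervention."""
    name: str
    trigger: Callable[[Dict[str, float]], bool]        # True when violated
    intervention: Callable[[Dict[str, float]], Dict[str, float]]  # inhibition/action

def safety_bag(state: Dict[str, float],
               commands: Dict[str, float],
               necessities: List[SafetyNecessity]) -> Dict[str, float]:
    """Filter control commands before they reach the actuators."""
    merged = {**state, **commands}
    for n in necessities:
        if n.trigger(merged):
            merged = n.intervention(merged)  # inhibit or recover
    return merged

# Hypothetical necessity: inhibit throttle when the vehicle is too fast.
speed_limit = SafetyNecessity(
    name="speed-limit",
    trigger=lambda s: s["speed"] > 30.0 and s["throttle"] > 0.0,
    intervention=lambda s: {**s, "throttle": 0.0},
)

out = safety_bag({"speed": 35.0}, {"throttle": 0.8}, [speed_limit])
```

Here the violated necessity zeroes the throttle command (a safety inhibition), while a non-violating command would pass through unchanged.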
Vicenzutti, Andrea. "Innovative Integrated Power Systems for All Electric Ships." Doctoral thesis, Università degli studi di Padova, 2016. http://hdl.handle.net/11577/3424463.
Full text
Today, electric propulsion is a valid alternative to mechanical propulsion for large ships. Mechanical propulsion is currently limited to ships with special requirements, such as the need for a high cruising speed or the use of specific fuels. The use of electric propulsion, together with the progressive electrification of onboard loads, has led to the concept of the All Electric Ship (AES). An AES is a ship in which all onboard loads (propulsion included) are supplied by a single electrical system, called the Integrated Power System (IPS). The IPS is a key system in an AES and therefore requires careful design and management. Indeed, in an AES this system supplies almost everything, which highlights the problem of guaranteeing both proper power quality and continuity of service. The design of such a complex system is conventionally carried out by considering the individual components separately, in order to simplify the process. However, this practice can lead to reduced performance, integration problems and oversizing. Worse still, the separate design procedure heavily affects system dependability, owing to the difficulty of assessing the ship-wide effect of a fault in a single subsystem. For these reasons a new design process is needed, one able to consider the effect of all the components and subsystems of the system, thus improving the most important drivers of ship design: efficiency, effectiveness, dependability and cost reduction. Given these premises, the objective of the research was to obtain a new design methodology applicable to the integrated power system of AESs, able to consider the system as a whole, including all its internal interdependencies.
The result of this research is described in this thesis and consists of a sub-process to be integrated into the conventional design process of the integrated power system. The thesis includes a broad review of the state of the art, to explain the context, why such an innovative process is necessary, and which innovative techniques can be used as design aids. Each point is discussed with the scope of this thesis in mind, presenting topics, bibliography and personal evaluations aimed at helping the reader understand the impact of the proposed design process. In particular, after a first chapter devoted to introducing AESs, which describes how such ships have evolved and which applications have the greatest impact, the second chapter offers a reasoned discussion of the conventional ship design process. In addition, an in-depth analysis of IPS design processes is carried out, to explain the context into which the innovative design process must be integrated. Some examples of problems arising from the traditional design process are given, to motivate the proposal of a new one. Beyond design-related problems, other motivations lead to the need for a renewed design process, such as the imminent introduction of innovative distribution systems on board ships and the recent appearance of new requirements whose impact on the IPS is significant. An excursus on these two topics is therefore given in the third chapter, with reference to the most recent literature and research. The fourth chapter is devoted to describing the tools used to build the innovative design process.
The first part of the chapter is devoted to dependability theory, which provides a systematic and coherent approach to determining the effects of faults on complex systems. Through dependability theory and its techniques it is possible to: determine the system-level effect of faults in individual components; evaluate all the possible causes of a given failure event; evaluate mathematical indices related to the system, in order to compare different design solutions; and define where and how the designer must intervene to improve the system. The second part of the fourth chapter is devoted to software for simulating IPS behavior and to hardware-in-the-loop testing. In particular, the use of such systems as an aid in power system design is discussed, to clarify why these tools have been integrated into the developed design process. The fifth chapter is devoted to the design process developed during the research. It discusses how the process works, how it should be integrated into the conventional design process, and what impact it has on the design. In particular, the developed procedure involves both the application of dependability-theory techniques (in particular Fault Tree Analysis) and the simulation of the dynamic behavior of the IPS through a mathematical model of the system tuned on electromechanical transients. Finally, to demonstrate the applicability of the proposed procedure, the sixth chapter analyzes a case study: the IPS of an offshore oil & gas drillship equipped with dynamic positioning. This case study was chosen because of the very stringent requirements of this class of ships, whose impact on IPS design is significant.
The analysis of the IPS by means of Fault Tree Analysis is presented (although at a simplified level of detail), followed by the calculation of several reliability indices. These results, together with the applicable rules and regulations, were used to define the input data for the simulations, which were performed using a purpose-built mathematical model of the IPS. The simulation results made it possible to evaluate how the system dynamically evolves toward failure starting from the relevant faults, and therefore to propose improvement solutions.
Bader, Kaci. "Tolérance aux fautes pour la perception multi-capteurs : application à la localisation d'un véhicule intelligent." Thesis, Compiègne, 2014. http://www.theses.fr/2014COMP2161/document.
Full text
Perception is a fundamental input for robotic systems, particularly for positioning, navigation and interaction with the environment. But the data perceived by these systems are often complex and subject to significant imprecision. To overcome these problems, the multi-sensor approach uses either multiple sensors of the same type, to exploit their redundancy, or sensors of different types, to exploit their complementarity, in order to reduce sensor inaccuracies and uncertainties. The validation of the data fusion approach raises two major problems. First, the behavior of fusion algorithms is difficult to predict, which makes them hard to verify by formal approaches. In addition, the open environment of robotic systems generates a very large execution context, which makes testing difficult and costly. The purpose of this work is to propose an alternative to validation by developing fault tolerance mechanisms: since it is difficult to eliminate all the errors of the perception system, we try instead to limit their impact on its operation. We studied the inherent fault tolerance provided by data fusion by formally analyzing data fusion algorithms, and we proposed detection and recovery mechanisms suitable for multi-sensor perception. We implemented the proposed mechanisms on a vehicle localization application using Kalman filtering for data fusion, and evaluated them using real-data replay and fault injection techniques.
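The detection side of such mechanisms can be illustrated with a common generic scheme: innovation gating on a scalar Kalman filter, where a measurement whose normalized innovation squared exceeds a threshold is rejected instead of fused. This is only a sketch of the idea, not the thesis's actual mechanism; the sensor names, noise values and gate are invented:

```python
def kalman_update(x, P, z, R):
    """Scalar Kalman measurement update; returns state, covariance,
    innovation and innovation covariance (identity dynamics assumed)."""
    y = z - x        # innovation (measurement residual)
    S = P + R        # innovation covariance
    K = P / S        # Kalman gain
    return x + K * y, (1.0 - K) * P, y, S

def fused_update(x, P, measurements, gate=9.0):
    """Fuse redundant readings, rejecting any whose normalized innovation
    squared exceeds the gate (simple fault detection and recovery)."""
    rejected = []
    for name, z, R in measurements:
        _, _, y, S = kalman_update(x, P, z, R)
        if y * y / S > gate:
            rejected.append(name)       # flagged as faulty: not fused
            continue
        x, P, _, _ = kalman_update(x, P, z, R)
    return x, P, rejected

# Hypothetical example: a consistent GPS reading and a wildly wrong odometry one.
x, P, rejected = fused_update(0.0, 1.0, [("gps", 0.2, 1.0), ("odom", 50.0, 1.0)])
```

The erroneous "odom" measurement is detected by the gate and excluded, so the fused estimate stays close to the consistent reading.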
Bartl, Michal. "Metodika vkládání kontrolních prvků do číslicového systému." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2009. http://www.nusl.cz/ntk/nusl-236627.
Full textSaied, Majd. "Fault-tolerant control of an octorotor unmanned aerial vehicle under actuators failures." Thesis, Compiègne, 2016. http://www.theses.fr/2016COMP2287.
Full text
With growing demands for safety and reliability, and an increasing awareness of the risks associated with system malfunction, dependability has become an essential concern in modern technological systems, particularly safety-critical systems such as aircraft or railway systems. This has led to the design and development of fault-tolerant control (FTC) systems. The main objective of an FTC architecture is to maintain the desirable performance of the system in the event of faults and to prevent local faults from causing failures. Recent years have witnessed many developments in fault detection and diagnosis and fault-tolerant control for rotary-wing Unmanned Aerial Vehicles. In particular, there has been extensive work on stability improvements for quadrotors in case of partial failures, and recently some works have addressed the problem of a complete propeller failure on a quadrotor. However, these studies demonstrated that a complete loss of one quadrotor motor results in a vehicle that is not fully controllable. An alternative is then to consider multirotors with redundant actuators (octorotors or hexarotors). The inherent redundancy available in these vehicles can be exploited, in the event of an actuator failure, to redistribute the control effort among the remaining working actuators such that stability and complete controllability are retained. In this thesis, fault-tolerant control approaches for rotary-wing UAVs are investigated. The work focuses on developing algorithms for a coaxial octorotor UAV; however, these algorithms are designed to be applicable to any redundant multirotor with minor modifications. A nonlinear model-based fault detection and isolation system for motor failures is constructed based on a nonlinear observer and on the outputs of the inertial measurement unit.
Motor speeds and currents given by the electronic speed controllers are also used in another fault detection and isolation module to detect actuator failures and to distinguish between motor failures and propeller damage. An offline rule-based reconfigurable control mixing is designed in order to redistribute the control effort over the healthy actuators in case of one or more motor failures. A complete architecture including fault detection and isolation followed by system recovery is tested experimentally on a coaxial octorotor and compared to other architectures based on pseudo-inverse control allocation and on a robust second-order sliding mode controller.
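The rule-based remixing idea for a coaxial octorotor can be sketched as follows. The motor indexing, the pairing table and the simple "coaxial partner absorbs the lost thrust share" rule are illustrative assumptions, not the thesis's actual mixing law:

```python
# Hypothetical coaxial pairing: motor i on top of motor i+4 on the same arm.
COAXIAL_PARTNER = {0: 4, 1: 5, 2: 6, 3: 7, 4: 0, 5: 1, 6: 2, 7: 3}

def remix(commands, failed):
    """Redistribute the commands of failed motors to their coaxial partners."""
    out = list(commands)
    for m in failed:
        partner = COAXIAL_PARTNER[m]
        if partner not in failed:
            out[partner] += out[m]  # healthy partner absorbs the lost thrust
        out[m] = 0.0                # failed motor is switched off
    return out

# Motor 2 fails while hovering with equal commands: its coaxial partner
# (motor 6 in this invented indexing) takes over the pair's thrust share.
cmds = remix([0.5] * 8, failed={2})
```

A real control mixer would also rebalance roll/pitch/yaw moments; this sketch only shows why total thrust can be preserved when one motor of a coaxial pair is lost.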
Delmas, Adrien. "Contribution à l'estimation de la durée de vie résiduelle des systèmes en présence d'incertitudes." Thesis, Compiègne, 2019. http://www.theses.fr/2019COMP2476/document.
Full text
Predictive maintenance strategies can help reduce ever-growing maintenance costs, but their implementation represents a major challenge. Indeed, it requires evaluating the health state of the system's components and prognosticating the occurrence of a future failure. This second step consists in estimating the remaining useful life (RUL) of the components, in other words, the time during which they will continue functioning properly. This RUL estimation is high-stakes, because the precision and accuracy of the results will influence the relevance and effectiveness of the maintenance operations. Many methods have been developed to prognosticate the remaining useful life of a component, each with its own particularities, advantages and drawbacks. The present work proposes a general methodology for component RUL estimation. The objective is to develop a method that can be applied to many different cases and situations without requiring major modifications. Moreover, several types of uncertainty are dealt with in order to improve the accuracy of the prognostic. The proposed methodology can help in the maintenance decision-making process: thanks to the estimated RUL, it is possible to select the optimal moment for a required intervention. Furthermore, dealing with the uncertainties provides additional confidence in the prognostic results.
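A common way to carry uncertainty into an RUL estimate is Monte Carlo sampling over a degradation model: instead of a single point prediction, one obtains a distribution of RUL values. The linear degradation law and all parameter values below are illustrative assumptions, not the methodology of the thesis:

```python
import random

def rul_samples(level, threshold, rate_mean, rate_sd, n=10000, seed=1):
    """Monte Carlo RUL sketch: sample uncertain degradation rates and
    propagate each one linearly to the failure threshold. The spread of
    the returned samples reflects the parameter uncertainty."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        rate = max(1e-9, rng.gauss(rate_mean, rate_sd))  # guard against <= 0
        samples.append((threshold - level) / rate)
    return samples

# Hypothetical component at 40% degradation, failing at 100%, degrading
# by about 1% per hour with some uncertainty on the rate.
samples = rul_samples(level=0.4, threshold=1.0, rate_mean=0.01, rate_sd=0.002)
samples.sort()
median_rul = samples[len(samples) // 2]   # hours; percentiles give confidence bounds
```

Reporting, say, the 10th percentile instead of the median is one way such a distribution supports conservative maintenance scheduling.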
Semotam, Petr. "Prediktivní systém údržby obráběcích strojů s využitím vibrodiagnostiky." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2018. http://www.nusl.cz/ntk/nusl-382193.
Full text
Gorayeb, Diana Maria da Câmara. "Gestão de continuidade de negócios aplicada no ensino presencial mediado por recursos tecnológicos." Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-08052012-115710/.
Full text
This work proposes guidelines for Business Continuity Management (BCM) applied to classroom education mediated by technological resources (SPMRT), which requires, for the achievement of its academic activities, a complex system for the transmission of lessons, and demands a great effort to control its operations and to coordinate fast responses in case of errors, faults, attacks and defects, or any incidents that result in the disruption of its activities. Maintaining this technological environment depends on the implementation of efficient risk management processes and a continuous improvement cycle in the IT environment with the adoption of ITIL®, and on the construction of a Business Continuity Plan (BCP), documented with UML elements and based on Business Impact Analysis (BIA), Risk Assessment (RA) and the attributes of dependability: availability, reliability, safety, confidentiality, integrity and maintainability.
Sadou, Nabil. "Aide à la conception des systèmes embarqués sûrs de fonctionnement." Phd thesis, INSA de Toulouse, 2007. http://tel.archives-ouvertes.fr/tel-00192045.
Full text
NOSTRO, NICOLA. "MODEL-BASED APPROACHES TO DEPENDABILITY AND SECURITY ASSESSMENT IN CRITICAL AND DYNAMIC SYSTEMS." Doctoral thesis, 2015. https://hdl.handle.net/2158/947559.
Full text
MONTECCHI, LEONARDO. "A Methodology and Framework for Model-Driven Dependability Analysis of Critical Embedded Systems and Directions Towards Systems of Systems." Doctoral thesis, 2013. http://hdl.handle.net/2158/851697.
Full text
Kabir, Sohag. "An overview of fault tree analysis and its application in model based dependability analysis." 2017. http://hdl.handle.net/10454/17428.
Full text
Fault Tree Analysis (FTA) is a well-established and well-understood technique, widely used for dependability evaluation of a wide range of systems. Although many extensions of fault trees have been proposed, they suffer from a variety of shortcomings. In particular, even where software tool support exists, these analyses require a lot of manual effort. Over the past two decades, research has focused on simplifying dependability analysis by looking at how dependability information can be synthesised from system models automatically. This has led to the field of model-based dependability analysis (MBDA), within which different tools and techniques have been developed to automate the generation of dependability analysis artefacts such as fault trees. Firstly, this paper reviews the standard fault tree and its limitations. Secondly, different extensions of standard fault trees are reviewed. Thirdly, the paper reviews a number of prominent MBDA techniques in which fault trees are used as a means for system dependability analysis, and provides insight into their working mechanisms, applicability, strengths and challenges. Finally, the future outlook for MBDA is outlined, which includes the prospect of developing expert and intelligent systems for dependability analysis of complex open systems under conditions of uncertainty.
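The standard fault tree evaluation that this survey takes as its starting point can be sketched in a few lines, under the usual assumption of independent basic events: AND gates multiply probabilities, OR gates combine them as 1 - prod(1 - p). The event names and probabilities below are invented for illustration:

```python
def prob(node, basic):
    """Evaluate the top-event probability of a simple fault tree.
    Nodes are ("event", name), ("and", children) or ("or", children);
    basic maps event names to failure probabilities (assumed independent)."""
    kind = node[0]
    if kind == "event":
        return basic[node[1]]
    ps = [prob(child, basic) for child in node[1]]
    if kind == "and":
        out = 1.0
        for p in ps:
            out *= p            # all children must fail
        return out
    if kind == "or":
        out = 1.0
        for p in ps:
            out *= (1.0 - p)    # fails unless every child survives
        return 1.0 - out
    raise ValueError(f"unknown gate: {kind}")

# Illustrative top event: loss of cooling if power fails OR both
# redundant pumps fail.
tree = ("or", [("event", "power"),
               ("and", [("event", "pump_a"), ("event", "pump_b")])])
p_top = prob(tree, {"power": 0.01, "pump_a": 0.1, "pump_b": 0.1})
```

MBDA tools automate the construction of such trees from system models; evaluating them, as here, is the comparatively easy part.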
Sharvia, S., Sohag Kabir, M. Walker, and Y. Papadopoulos. "Model-based dependability analysis: State-of-the-art, challenges, and future outlook." 2015. http://hdl.handle.net/10454/17434.
Full textMandak, Wayne S., and Charles A. Stowell. "Dynamic Assembly for System Adaptability, Dependability and Assurance (DASADA) project analysis." Thesis, 2001. http://hdl.handle.net/10945/10926.
Full textWang, Ding-Chau, and 王鼎超. "Dependability and performance analysis of distributed algorithms for managing replicated data." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/21474899552073101005.
Full text國立成功大學
資訊工程學系碩博士班
91
Data replication is a proven technique for improving data availability in distributed systems. Historically, research focused mainly on the development of replicated data management algorithms that can be proven correct and result in improved data availability, while the performance issues associated with data maintenance were largely ignored. In this thesis, we analyze both the dependability and the performance characteristics of distributed algorithms for managing replicated data by developing generic modeling techniques based on Petri nets, with the goal of identifying environmental conditions under which these replicated data management algorithms can satisfy system dependability and performance requirements. First, we investigate an effective technique for calculating the access time distribution for requests that access replicated data maintained by the distributed system, using majority voting as a case study. The technique can be used to estimate the reliability of real-time applications which must access replicated data under a deadline requirement. We then enhance this technique to analyze user-perceived dependability and performance properties of quorum-based algorithms. User-perceived dependability and performance metrics are very different from conventional ones in that the dependability and performance properties must be assessed from the perspective of the users accessing the system. A feature of the enhanced technique is that no assumption is made regarding the interconnection topology, the number of replicas, or the quorum definition used by the replicated system, making it applicable to a wide class of quorum-based algorithms. Our analysis shows that when user-perceiveness is taken into consideration, the effect of increasing the network connectivity and the number of replicas on the availability and dependability properties perceived by users is very different from that under conventional metrics.
Thus, unlike conventional metrics, user-perceived metrics allow a tradeoff to be exploited between the hardware invested, i.e., higher network connectivity and number of replicas, and the performance and dependability properties perceived by users. Next, we analyze reconfigurable algorithms to determine how often the system should detect and react to failure conditions, so that reorganization operations can be performed at the appropriate time to improve the availability of replicated data without adversely compromising the performance of the system. We use dynamic voting as a case study to reveal design trade-offs in such reconfigurable algorithms and to illustrate how often failure detection and reconfiguration activities should be performed, by means of dummy updates, so as to maximize data availability. Dummy updates are system-initiated maintenance updates that only update the state of the system regarding the availability of replicated data, without actually changing the value of the replicated data. However, because they use locks, dummy updates can hinder normal user-initiated updates during the execution of the conventional two-phase commit (2PC) protocol. We develop a modified 2PC protocol to be used by dummy updates and show that it greatly improves the availability of replicated data compared to the conventional 2PC protocol. Lastly, we examine the availability and performance characteristics of replicated data in wireless cellular environments, in which users access replicated data through the base stations of the network as they roam in and out of cells. We address the issues of when, where and how to place replicas on the base stations by developing a performance model to analyze periodic maintenance strategies for managing replicated objects in mobile wireless client-server environments.
Under a periodic maintenance strategy, the system periodically checks local cells to determine whether a replicated object should be allocated or deallocated in a cell to reduce the access cost. Our performance model considers the missing-read cost, the write-propagation cost and the periodic maintenance cost, with the objective of identifying optimal periodic maintenance intervals that minimize the overall cost. Our analysis results show that the overall cost is high when the user arrival-departure ratio and the read-write ratio work against each other, and low otherwise. Under the fixed periodic maintenance strategy, i.e., when the maintenance interval is a constant, there exists an optimal periodic maintenance interval that yields the minimum cost. Further, the optimal periodic maintenance interval increases as the arrival-departure ratio and the read-write ratio work in harmony. We also discover that adjusting the periodic intervals dynamically, in response to state changes of the system at run time, can further reduce the overall cost below that obtained by the fixed periodic maintenance strategy under its optimal conditions.
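The trade-off behind the optimal fixed maintenance interval can be illustrated with a toy cost model: the per-unit-time maintenance cost falls as the interval T grows, while the missing-read cost grows with T because replica placement stays stale for longer. The cost shape and all parameters below are invented for illustration and are not the thesis's actual model:

```python
def overall_cost(T, read_rate, c_miss, write_rate, c_prop, c_maint):
    """Toy per-unit-time cost of a fixed maintenance interval T:
    maintenance amortized over T, missing reads growing with staleness
    (T/2 on average), plus a T-independent write-propagation term."""
    return c_maint / T + c_miss * read_rate * T / 2.0 + c_prop * write_rate

# Scan candidate intervals for the minimum of this convex cost curve.
candidates = [t / 10.0 for t in range(1, 201)]
best_T = min(candidates,
             key=lambda T: overall_cost(T, read_rate=2.0, c_miss=1.0,
                                        write_rate=0.5, c_prop=1.0,
                                        c_maint=4.0))
```

With these invented parameters the curve is 4/T + T + 0.5, whose minimum lies at T = 2: maintaining more often wastes maintenance effort, less often inflates the missing-read cost, mirroring the optimum the abstract describes.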
Clark, Jeffrey Alan. "Dependability analysis of fault-tolerant multiprocessor architectures through simulated fault injection." 1993. https://scholarworks.umass.edu/dissertations/AAI9408266.
Full text
de Vries, Ingrid. "AN ANALYSIS OF TEST CONSTRUCTION PROCEDURES AND SCORE DEPENDABILITY OF A PARAMEDIC RECERTIFICATION EXAM." Thesis, 2012. http://hdl.handle.net/1974/7434.
Full text
Thesis (Master, Education) -- Queen's University, 2012-09-06 22:41:41.552
CECCARELLI, ANDREA. "Analysis of critical systems through rigorous, reproducible and comparable experimental assessment." Doctoral thesis, 2012. http://hdl.handle.net/2158/596157.
Full textSilva, Nuno Pedro de Jesus. "An Empirical Approach to Improve the Quality and Dependability of Critical Systems Engineering." Doctoral thesis, 2018. http://hdl.handle.net/10316/79833.
Full text
Critical systems, such as space, railway and avionics systems, are developed under strict requirements envisaging high integrity in accordance with specific standards. For such software systems, an independent assessment is generally put into effect (as a safety assessment or in the form of Independent Software Verification and Validation, ISVV) after the regular development lifecycle and V&V activities, aiming at identifying and correcting residual faults and raising confidence in the software. These systems are very sensitive to failures (which might cause severe impacts), and even though they reach very low failure rates today, there is always a need to guarantee higher quality and dependability levels. However, it has been observed that a significant number of defects still remain at the latest lifecycle phases, questioning the effectiveness of the preceding engineering processes and V&V techniques. This thesis proposes an empirical approach to identify the nature of defects (quality, dependability and safety gaps) and, based on that knowledge, to provide support to improve critical systems engineering. The work is based on knowledge about safety-critical systems and how they are specified, developed and validated (standards, processes and techniques, resources, lifecycles, technologies, etc.). Improvements are obtained from an orthogonal classification and further analysis of issues collected from real systems at all lifecycle phases. These historical data (issues) have been studied, classified and clustered according to different properties, taking into account the issue introduction phase, the involved techniques, the applicable standards and, particularly, the root causes. The identified improvements shall be reflected in the development and V&V techniques and in resource training or preparation, and shall drive modifications or adoption of standards.
The first and more encompassing contribution of this work is the definition of a defects assessment process that can be used and applied in industry in a simple way and independently from the industrial domain. The process makes use of a dataset collected from existing issues reflecting process deficiencies, and supports the analysis of these data towards identifying the root causes for those problems and defining appropriate measures to avoid them in future systems. As part of the defect assessment process activities, we propose an adaptation of the Orthogonal Defect Classification (ODC) for critical issues. In practice, ODC was used as an initial classification and then it was tuned according to the gaps and difficulties found during the initial stages of our defects classification activities. The refinement was applied on the defect types, triggers and impacts. Improved taxonomies for these three parameters are proposed. A subsequent contribution of our work is the application and integration of a root cause analysis process to show the connection of the defects (or issue groups) with the engineering properties and environment. The engineering properties (e.g. human and technical resources properties, events, processes, methods, tools and standards) are, in fact, the principal input for the classes of root causes. A fishbone root cause analysis was proposed, integrated in the process and applied to the available dataset. A practical contribution of the work comprises the identification of a specific set of root causes and applicable measures to improve the quality of the engineered systems (removal of those causes). These root causes and proposed measures allow the provision of quick and specific feedback to the industrial engineering teams as soon as the defects are analyzed. The list/database has been compiled from the dataset and includes the feedback and contributions from the experts that responded to a process/framework validation survey. 
The root causes and the associated measures represent a valuable body of knowledge to support future defect assessments. The last key contribution of our work is the promotion of a cultural change toward making appropriate use of real defect data (the main input of the process), which must be properly documented and easily collected, cleaned and updated. The regular use of defect data through the application of the proposed defect assessment process will help measure quality evolution and the progress of implementation of the corrective actions or improvement measures that are the essential output of the process.
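The ODC-style grouping step at the heart of such a defect assessment process can be illustrated as follows: each defect carries orthogonal labels (type, trigger, impact) plus an assigned root cause, and clusters are tallied to see which causes dominate. The defect records, field names and labels below are invented examples, not data from the thesis:

```python
from collections import Counter

# Hypothetical classified defect records (ODC-style orthogonal labels
# plus a root cause assigned during fishbone analysis).
defects = [
    {"type": "interface", "trigger": "design review",
     "cause": "ambiguous requirement"},
    {"type": "interface", "trigger": "design review",
     "cause": "ambiguous requirement"},
    {"type": "algorithm", "trigger": "unit test",
     "cause": "missing coding guideline"},
]

# Cluster defects by (type, trigger) to spot process-level patterns.
clusters = Counter((d["type"], d["trigger"]) for d in defects)

# Rank root causes: the dominant ones become targets for improvement measures.
top_causes = Counter(d["cause"] for d in defects).most_common(1)
```

In a real assessment, the dominant clusters and causes would drive the corrective measures (e.g. requirement-writing training for the invented "ambiguous requirement" cause).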