Theses on the topic « Dependability analysis »

To see other types of publications on this topic, follow this link: Dependability analysis.

Create a correct reference in APA, MLA, Chicago, Harvard, and several other styles

Choose a source:

Consult the top 50 theses for your research on the topic « Dependability analysis ».

Next to each source in the list of references, there is an « Add to bibliography » button. Press it, and we will automatically generate the bibliographic reference for the chosen work in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication as a PDF and read its abstract online whenever the metadata provides these details.

Browse theses on a wide variety of disciplines and organise your bibliography correctly.

1

Looker, Nik. « Dependability analysis of Web services ». Thesis, Durham University, 2006. http://etheses.dur.ac.uk/2888/.

Full text
Abstract:
Web Services form the basis of web-based eCommerce and eScience applications, so it is vital that robust services are developed. Traditional validation and verification techniques centre on removing all faults to guarantee correct operation, whereas dependability assesses how dependably a system can deliver the required functionality, by evaluating attributes and by attempting to improve dependability through the elimination of threats by various means. Fault injection is a well-proven dependability assessment method. Although much work has been done on fault injection and distributed systems in general, little research appears to have been carried out on applying it to middleware systems, and to Web Services in particular. There are additional problems in applying existing fault injection technologies to Web Services running in a virtual machine environment, since most are either invasive or work at a machine level. The Fault Injection Technology (FIT) method has been devised to address these problems for middleware systems. The Web Service-Fault Injection Technology (WS-FIT) implementation applies the FIT method, based on network-level fault injection, to Web Services to create a non-invasive dependability assessment method. It allows targeted perturbation of Web Service RPC parameters as well as more traditional network-level fault injection operations. The WS-FIT tool includes taxonomies that define a system under test, the fault models to apply and the failure modes to be detected, and uses these taxonomies to generate fault injection campaigns. WS-FIT has been applied to a number of case studies and has successfully demonstrated its effectiveness. It has also been successfully applied to a third-party system to evaluate dependability means, performing the dependability assessment while also allowing the means to be debugged, which uncovered previously unknown faults.
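The network-level parameter perturbation WS-FIT performs can be pictured as intercepting a message and rewriting one RPC parameter before delivery. A minimal Python sketch, with an invented message format and element name (not the actual WS-FIT API, which instruments the network stack itself):

```python
import xml.etree.ElementTree as ET

def perturb_parameter(soap_xml: str, param: str, new_value: str) -> str:
    """Replace the text of one RPC parameter inside an intercepted message.

    Hypothetical sketch: 'param' stands in for a service-specific tag.
    """
    root = ET.fromstring(soap_xml)
    for elem in root.iter(param):
        elem.text = new_value  # overwrite the parameter with the faulty value
    return ET.tostring(root, encoding="unicode")

msg = "<call><amount>10</amount></call>"
print(perturb_parameter(msg, "amount", "-1"))  # <call><amount>-1</amount></call>
```

A fault injection campaign would apply such perturbations systematically, driven by the fault-model taxonomy, while observing the service for the failure modes defined in the failure-mode taxonomy.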
APA, Harvard, Vancouver, ISO, and other styles
2

Yang, Joseph Sang-chin. « System dependability analysis and evaluation ». Master's thesis, Virginia Polytechnic Institute and State University, 1994. http://scholar.lib.vt.edu/theses/available/etd-03172010-020227/.

Full text
3

Xu, Changyi. « Operational dependability model generation ». Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEI129.

Full text
Abstract:
Assessing whether complex industrial systems deliver dependable service is what engineers and researchers have long been aiming for. Recent advances in model-based safety assessment, especially structure analysis and component modeling, provide practicable methodologies for assessing dependability, yet the lack of a framework able to assess both the structure and the various behaviors of the components in one unified model prevents an excellent assessment from being achieved. Moreover, as the system's operations are not considered in the models, the service cannot be assessed, either qualitatively or quantitatively, in terms of operational dependability. Although several existing assessment tools have already shown their potential to model various behaviors in the form of n-state models, or to consider operations such as repair priorities as event sequences in the model, fusing 'structure', 'various behaviors' and 'operations' is still a challenge, highlighting the need for a viable framework that bridges the gap among them both qualitatively and quantitatively. In this research, a formal model generation approach is studied to bridge this gap, making it possible to assess a system's operational dependability by considering its structure, various behaviors, and operations. The composition of the component models is introduced in order to generate a global model of the system; the total breakdown states are identified from the resulting failure expression so as to fully take the system's structure into account; and operational dependability is further addressed qualitatively by applying trajectory specifications, and quantitatively by developing a cost evaluation technique termed the Capacity Calculation Fault Tree. In the end, a demonstration on a miniplant system illustrates the wide potential of this research for guaranteeing the dependable service of complex industrial systems.
4

Zakucia, Jozef. « Metódy posudzovania spoľahlivosti zložitých elektronických systémov pre kozmické aplikácie ». Doctoral thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2015. http://www.nusl.cz/ntk/nusl-234213.

Full text
Abstract:
This thesis deals with common cause failure (CCF) analysis for space devices. This analysis belongs among the dependability analyses, which have not been sufficiently developed for the space industry in the corresponding technical and normative documents. We therefore focused on devising a new procedure for qualitative and quantitative common cause failure analysis for space applications. This new procedure was applied to the redundant systems of a special space device, the microaccelerometer (ACC), developed at VZLÚ. Performing the qualitative CCF analysis can lead to recommendations to change the design of the system, making it less susceptible to common cause failures. Performing the quantitative CCF analysis and including it in the computation of system reliability can lead to a more accurate estimation of the reliability (in most cases it decreases the computed system reliability). During the development of the ACC, no requirements to perform a CCF analysis were defined within the general dependability requirements (set by the customer and by the ECSS standards). Hence, we compared computations of the ACC reliability with and without considering CCFs. When CCFs were considered, the computed reliability of the ACC decreased, in line with our assumption. Using the ACC as an example, we showed the advantages of performing CCF analysis within the dependability analyses carried out during the development of space devices.
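The effect described above, that including CCFs lowers the computed reliability of a redundant system, can be illustrated with the textbook beta-factor model (shown purely as an illustration; the thesis defines its own qualitative and quantitative procedure), in which a fraction beta of each component's failure rate is assumed to strike both channels simultaneously:

```python
import math

def duplex_reliability(lam: float, beta: float, t: float) -> float:
    """Reliability of a 1-out-of-2 redundant pair under the beta-factor
    common cause failure model, with component failure rate lam."""
    lam_ind = (1.0 - beta) * lam          # independent failure rate
    lam_ccf = beta * lam                  # common cause failure rate
    r_ind = 1.0 - (1.0 - math.exp(-lam_ind * t)) ** 2  # either channel survives
    return r_ind * math.exp(-lam_ccf * t)              # and no CCF occurred

# True: ignoring CCFs (beta = 0) over-estimates the pair's reliability
print(duplex_reliability(1e-4, 0.0, 1000.0) > duplex_reliability(1e-4, 0.1, 1000.0))
```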
5

Kabir, Sohag. « Compositional dependability analysis of dynamic systems with uncertainty ». Thesis, University of Hull, 2016. http://hydra.hull.ac.uk/resources/hull:13595.

Full text
Abstract:
Over the past two decades, research has focused on simplifying dependability analysis by looking at how we can synthesise dependability information from system models automatically. This has led to the field of model-based safety assessment (MBSA), which has attracted a significant amount of interest from industry, academia, and government agencies. Different model-based safety analysis methods, such as Hierarchically Performed Hazard Origin & Propagation Studies (HiP-HOPS), are increasingly applied by industry for dependability analysis of safety-critical systems. Such systems may feature multiple modes of operation where the behaviour of the systems and the interactions between system components can change according to what modes of operation the systems are in. MBSA techniques usually combine different classical safety analysis approaches to allow the analysts to perform safety analyses automatically or semi-automatically. For example, HiP-HOPS is a state-of-the-art MBSA approach which enhances an architectural model of a system with logical failure annotations to allow safety studies such as Fault Tree Analysis (FTA) and Failure Modes and Effects Analysis (FMEA). In this way it shows how the failure of a single component or combinations of failures of different components can lead to system failure. As systems are getting more complex and their behaviour becomes more dynamic, capturing this dynamic behaviour and the many possible interactions between the components is necessary to develop an accurate failure model. One of the ways of modelling this dynamic behaviour is with a state-transition diagram. Introducing a dynamic model compatible with the existing architectural information of systems can provide significant benefits in terms of accurate representation and expressiveness when analysing the dynamic behaviour of modern large-scale and complex safety-critical systems. 
Thus the first key contribution of this thesis is a methodology that enables MBSA techniques to model the dynamic behaviour of systems. The thesis demonstrates this methodology using the HiP-HOPS tool as an example, extending HiP-HOPS with state-transition annotations. This extension allows HiP-HOPS to model more complex dynamic scenarios and to perform compositional dynamic dependability analysis of complex systems by generating Pandora temporal fault trees (TFTs). As TFTs capture state, the techniques used for solving classical FTs are not suitable for them; they require a state-space solution for the quantification of probability. This thesis therefore proposes two methodologies, based on Petri nets and Bayesian networks, to provide state-space solutions to Pandora TFTs. Uncertainty is another important yet incompletely addressed area of MBSA: typical MBSA approaches cannot perform quantitative analysis under uncertainty. Therefore, in addition to the above contributions, this thesis proposes a fuzzy set theory based methodology to quantify Pandora temporal fault trees when the failure data of components is uncertain. The proposed methodologies are applied to a case study to demonstrate how they can be used in practice. Finally, the overall contributions of the thesis are evaluated by discussing the results produced, from which conclusions about the potential benefits of the new techniques are drawn.
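For contrast with Pandora's temporal gates, classical (static) fault tree quantification combines independent basic-event probabilities through AND/OR gates; it is exactly this combinatorial scheme that breaks down once gate outputs depend on the order of events, which is why state-space solutions are needed for TFTs. A small sketch of the classical case:

```python
def or_gate(probs):
    """At least one input event occurs (independent basic events)."""
    p = 1.0
    for q in probs:
        p *= (1.0 - q)
    return 1.0 - p

def and_gate(probs):
    """All input events occur (independent basic events)."""
    p = 1.0
    for q in probs:
        p *= q
    return p

# Top event: (A AND B) OR C, with illustrative basic-event probabilities
p_top = or_gate([and_gate([0.1, 0.2]), 0.05])
print(round(p_top, 4))  # 0.069
```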
6

Rajagopalan, Mohan. « Optimizing System Performance and Dependability Using Compiler Techniques ». Diss., Tucson, Arizona : University of Arizona, 2006. http://etd.library.arizona.edu/etd/GetFileServlet?file=file:///data1/pdf/etd/azu%5Fetd%5F1439%5F1%5Fm.pdf&type=application/pdf.

Full text
7

Das, Olivia. « Performance and dependability analysis of fault-tolerant layered distributed systems ». Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape17/PQDD_0005/MQ32429.pdf.

Full text
8

Das, Olivia. « Performance and dependability analysis of fault-tolerant layered distributed systems ». Dissertation, Carleton University, Department of Systems and Computer Engineering, Ottawa, 1998.

Find full text
9

Mandak, Wayne S., and Charles A. Stowell. « Dynamic Assembly for System Adaptability, Dependability and Assurance (DASADA) project analysis ». Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2001. http://handle.dtic.mil/100.2/ADA393486.

Full text
Abstract:
Thesis (M.S. in Computer Science), Naval Postgraduate School, June 2001, Wayne S. Mandak. Thesis (M.S. in Information Technology Management), Naval Postgraduate School, June 2001, Charles A. Stowell.
Thesis advisors: Luqi, Man-Tak Shing, John S. Osmundson, Richard Riehle. Includes bibliographical references (p. 79-81). Also available online.
10

Kang, Eunsuk. « A Framework for Dependability analysis of software systems with trusted bases ». Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/58386.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 73-76).
A new approach is suggested for arguing that a software system is dependable. The key idea is to structure the system so that highly critical requirements are localized in small subsets of the system called trusted bases. In most systems, the satisfaction of a requirement relies on assumptions about the environment, in addition to the behavior of software. Therefore, establishing a trusted base for a critical property must be carried out as early as the requirements phase. This thesis proposes a new framework to support this activity. A notation is used to construct a dependability argument that explains how the system satisfies critical requirements. The framework provides a set of analysis techniques for checking the soundness of an argument, identifying the members of a trusted base, and illustrating the impact of failures of trusted components. The analysis offers suggestions for redesigning the system so that it becomes more reliable. The thesis demonstrates the effectiveness of this approach with a case study on electronic voting systems.
by Eunsuk Kang.
S.M.
11

Block, Jan Martin. « Dependability analysis of military aircraft fleet performance in a lifecycle perspective / ». Luleå : Luleå University of Technology, 2009. http://pure.ltu.se/ws/fbspretrieve/3074039.

Full text
12

Rugina, Ana-Elena. « Dependability modeling and evaluation – From AADL to stochastic Petri nets ». PhD thesis, Toulouse, INPT, 2007. http://oatao.univ-toulouse.fr/7649/1/rugina.pdf.

Full text
Abstract:
Performing dependability evaluation along with other analyses at architectural level allows both predicting the effects of architectural decisions on the dependability of a system and making tradeoffs. Thus, both industry and academia focus on defining model driven engineering (MDE) approaches and on integrating several analyses in the development process. AADL (Architecture Analysis and Design Language) has proved to be efficient for architectural modeling and is considered by industry in the context presented above. Our contribution is a modeling framework allowing the generation of dependability-oriented analytical models from AADL models, to facilitate the evaluation of dependability measures, such as reliability or availability. We propose an iterative approach for system dependability modeling using AADL. In this context, we also provide a set of reusable modeling patterns for fault tolerant architectures. The AADL dependability model is transformed into a GSPN (Generalized Stochastic Petri Net) by applying model transformation rules. We have implemented an automatic model transformation tool. The resulting GSPN can be processed by existing tools to obtain dependability measures. The modeling approach is illustrated on a subsystem of the French Air Traffic Control System.
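The kind of dependability measure obtained from such a generated GSPN can be illustrated, in the simplest case, by a single repairable component whose token alternates between an "up" and a "down" place; its steady-state availability has the textbook closed form below (an illustration of the measure, not output of the authors' tool):

```python
def steady_state_availability(failure_rate: float, repair_rate: float) -> float:
    """Steady-state availability of one repairable component: the fraction
    of time the token sits in the 'up' place of a two-place GSPN."""
    return repair_rate / (failure_rate + repair_rate)

# With one failure per 1000 h and a 10 h mean time to repair:
print(round(steady_state_availability(1e-3, 1e-1), 6))  # 0.990099
```

Real GSPNs generated from AADL models have many more places and transitions, and existing solvers compute such measures numerically from the underlying Markov chain.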
13

Meadows, Thomas A. « Analysis of F/A-18 engine maintenance costs using the Boeing Dependability Cost Model ». Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1994. http://handle.dtic.mil/100.2/ADA289983.

Full text
14

Mehmood, Qaiser. « A Maintainability Analysis of Dependability Evaluation of an Avionic System using AADL to PNML Transformation ». Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-12807.

Full text
Abstract:
Context. In the context of software architecture, AADL (Architecture Analysis and Design Language) is one of the latest standards (SAE Standard AS5506) used for analyzing and designing architectures of software systems. Dependability evaluation of an avionic system, modeled in AADL, is conducted using the Petri net standard PNML (ISO standard ISO/IEC 15909-2). A maintainability analysis of the PNML dependability model is also conducted. Objectives. In this study we investigate the maintainability analysis of a PNML dependability model of an avionic system designed in AADL. Structural, functional, fault-tolerance and recovery dependencies are modeled, implemented, simulated and validated in PNML. A maintainability analysis with respect to the 'changeability' factor is also conducted. Methods. This study is a semi-combination of the 'case study' and 'implementation' research methodologies. The implementation of the case-study system is conducted by modeling the system in AADL using the OSATE2 tool and simulating the dependability models in PNML using the Wolfgang tool. The PNML dependability models are validated by comparison with the GSPN dependability models of previously published research. Results. As a result of this research, a PNML dependability model was obtained. The difficulties encountered with the AADL Error Model Annex and the OSATE2 tool are also analyzed and documented. PNML and GSPN are compared for complexity, and a maintainability analysis of the PNML dependability model with respect to the 'changeability' factor is also an outcome of this research. This research is recommended for software testing at the architecture level as a standardized way of testing software components for faults and errors and their impact on dependable components. Conclusions. We conclude that PNML is an ISO standard and is the alternative to GSPN for dependability. Also, the AADL Error Model Annex is still evolving, and proper literature needs to be publicly available for better understanding.
Furthermore, the PNML dependability model possesses the 'changeability' factor of maintainability analysis and is therefore able to accommodate changes in the architecture. Dependability factors of software can thus be tested at the architecture level using the standards AADL and PNML.
15

de Souza Matos Júnior, Rubens. « An automated approach for systems performance and dependability improvement through sensitivity analysis of Markov chains ». Universidade Federal de Pernambuco, 2011. https://repositorio.ufpe.br/handle/123456789/2451.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
Computing systems constantly evolve to meet growing demand or new user requirements. Administering these systems requires decisions capable of delivering the highest levels of performance and dependability with minimal changes to the existing configuration. Performance, reliability, availability and performability analyses of systems are commonly carried out through analytical models, and Markov chains are among the most widely used mathematical formalisms, allowing metrics of interest to be estimated from a set of input parameters. Sensitivity analysis, however, when performed at all, is usually done simply by varying the parameter set over its range of values and repeatedly solving the chosen model. Differential sensitivity analysis lets the modeler find bottlenecks in a more systematic and efficient way. This work presents an automated approach to sensitivity analysis that aims to guide the improvement of computing systems. The proposed approach can speed up decision making regarding the optimization of hardware and software tuning, as well as the acquisition and replacement of components. The methodology uses Markov chains as its formal modeling technique, together with the sensitivity analysis of these models, filling some gaps found in the sensitivity analysis literature. Finally, the sensitivity analysis of selected distributed systems conducted in this work highlights their bottlenecks and provides examples of the accuracy of the proposed methodology, as well as illustrating its applicability.
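Differential sensitivity analysis can be illustrated on the smallest Markov availability model: a two-state chain with failure rate λ and repair rate μ, where A = μ/(λ+μ). The scaled sensitivities below rank parameters by relative impact; this is a sketch of the general idea, not the thesis's tool:

```python
def availability(lam: float, mu: float) -> float:
    """Steady-state availability of a two-state Markov chain."""
    return mu / (lam + mu)

def scaled_sensitivities(lam: float, mu: float):
    """Relative sensitivities S_x = (x / A) * dA/dx, used to rank which
    parameter change improves availability the most."""
    A = availability(lam, mu)
    dA_dlam = -mu / (lam + mu) ** 2   # availability falls as failures rise
    dA_dmu = lam / (lam + mu) ** 2    # and rises with faster repair
    return (lam / A) * dA_dlam, (mu / A) * dA_dmu

s_lam, s_mu = scaled_sensitivities(1e-3, 1e-1)
# Both magnitudes reduce analytically to lam / (lam + mu) for this model
print(round(abs(s_lam), 6), round(abs(s_mu), 6))  # 0.009901 0.009901
```

For larger chains the derivatives of the steady-state equations are computed numerically rather than in closed form, which is what makes automating the process worthwhile.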
16

Martínez Raga, Miquel. « Improving the process of analysis and comparison of results in dependability benchmarks for computer systems ». Doctoral thesis, Universitat Politècnica de València, 2018. http://hdl.handle.net/10251/111945.

Full text
Abstract:
Dependability benchmarks are designed to assess, by quantifying performance and dependability attributes, the behavior of systems in the presence of faults. In this type of benchmark, where systems are assessed in the presence of perturbations, not being able to select the most suitable system may have serious implications (economic, reputational, or even loss of life). For that reason, dependability benchmarks are expected to meet certain properties, such as non-intrusiveness, representativeness, repeatability and reproducibility, that guarantee the robustness and accuracy of their processes. However, despite the importance of comparing systems or components, the field of dependability benchmarking has a problem with the analysis and comparison of results. While the main focus of research has been on developing and improving experimental procedures to obtain the required measures in the presence of faults, the processes involved in analyzing and comparing results have been mostly unattended. This has caused many works in this field to analyze and compare the results of different systems in an ambiguous way, as the process followed in the analysis is based on argumentation, or is not even reported. Under these circumstances, benchmark users find it difficult to use these benchmarks and compare their results with those of others, so extending the application of these dependability benchmarks and cross-exploiting results across works is unlikely. This thesis has focused on developing a methodology to help dependability benchmark performers tackle the problems present in the analysis and comparison of benchmark results. Designed to guarantee the fulfillment of a dependability benchmark's properties, the methodology seamlessly integrates the process of analyzing results into the procedural flow of a dependability benchmark. Inspired by procedures from the field of operational research, it provides evaluators with the means not only to make their analysis process explicit, but also more representative of the context at hand. The results obtained from applying this methodology to several case studies in different domains show the contributions of this work to improving the process of analysis and comparison of results in dependability benchmarking for computer systems.
Martínez Raga, M. (2018). Improving the process of analysis and comparison of results in dependability benchmarks for computer systems [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/111945
17

Deming, Philip E. « A generalizability analysis of the dependability of scores for the College Basic Academic Subjects Examination / ». free to MU campus, to others for purchase, 2000. http://wwwlib.umi.com/cr/mo/fullcit?p9974622.

Full text
18

Kumar, Vikas. « An empirical investigation of the linkage between dependability, quality and customer satisfaction in information intensive service firms ». Thesis, University of Exeter, 2010. http://hdl.handle.net/10036/3011.

Full text
Abstract:
The information service sector (e.g. utilities, telecommunications and banking) has grown rapidly in recent years and is a significant contributor to the Gross Domestic Product (GDP) of the world’s leading economies. Although the sector has grown significantly, there have been relatively few attempts by researchers to explore it. This lack of research motivated my PhD research, which aims to explore the pre-established relationships between dependability, quality and customer satisfaction (RQ1) within the context of the information service sector. Literature examining the interrelationship between dependability and quality (RQ2a), and their further impact on customer satisfaction (RQ2b), is also limited. Understanding that Business to Business (B2B) and Business to Customer (B2C) businesses differ, exploring these relationships in these two types of information firms further adds to the existing literature. This thesis also investigates the relative significance of dependability and quality in both B2B and B2C information service firms (RQ3a and RQ3b). To address these issues, this PhD research follows a theory-testing approach and uses multiple case studies to address the research questions. In total, five cases from different B2B and B2C information service firms are investigated. To explore causality, time-series data sets spanning 24 to 60 months and the ‘Path Analysis’ method have been used. For the generalisation of the findings, the Cumulative Meta-Analysis method has been applied. The findings of this thesis indicate that dependability significantly affects customer satisfaction, and that an interrelationship exists between dependability and quality that further impacts customer satisfaction. The findings from the B2C cases challenge the traditional priority afforded to the relational aspect of quality by showing that dependability is the key driver of customer satisfaction.
However, the findings from the B2B cases show that both dependability and quality are key drivers of customer satisfaction. The findings of this thesis therefore add considerably to the literature in the B2B and B2C information services context.
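The ‘Path Analysis’ method mentioned above can be illustrated with a minimal sketch. Everything below is an assumption for illustration only (synthetic monthly data, made-up effect sizes, variable names `dep`/`qual`/`sat`); it is not the thesis’s data or code. It estimates the direct path from dependability to satisfaction and the indirect path mediated by quality, using ordinary least squares:

```python
import random

def ols(X, y):
    """Ordinary least squares via normal equations (X includes an intercept column)."""
    k = len(X[0])
    xtx = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    # Gauss-Jordan elimination (fine here: the synthetic data is well-conditioned)
    for i in range(k):
        p = xtx[i][i]
        xtx[i] = [v / p for v in xtx[i]]
        xty[i] /= p
        for j in range(k):
            if j != i:
                f = xtx[j][i]
                xtx[j] = [a - f * b for a, b in zip(xtx[j], xtx[i])]
                xty[j] -= f * xty[i]
    return xty  # regression coefficients

random.seed(1)
n = 60  # e.g. 60 monthly observations
dep = [random.gauss(0, 1) for _ in range(n)]
qual = [0.5 * d + random.gauss(0, 0.5) for d in dep]                         # dependability -> quality
sat = [0.5 * d + 0.4 * q + random.gauss(0, 0.3) for d, q in zip(dep, qual)]  # both -> satisfaction

a = ols([[1, d] for d in dep], qual)[1]                          # path: dependability -> quality
_, direct, b = ols([[1, d, q] for d, q in zip(dep, qual)], sat)  # direct paths to satisfaction

print(f"direct effect of dependability: {direct:.2f}")
print(f"indirect effect via quality (a*b): {a * b:.2f}")
print(f"total effect: {direct + a * b:.2f}")
```

The decomposition into a direct effect and an indirect effect (`a * b`) is the standard path-analysis reading of a mediated relationship.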
Styles APA, Harvard, Vancouver, ISO, etc.
19

Javed, Muhammad Atif, et Faiz Ul Muram. « A framework for the analysis of failure behaviors in component-based model-driven development of dependable systems ». Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-13886.

Texte intégral
Résumé :
Currently, the development of high-integrity embedded component-based software systems is not supported by well-integrated means allowing for quality evaluation and design support within a development process. Quality, especially dependability, is very important for such systems. The CHESS (Composition with Guarantees for High-integrity Embedded Software Components Assembly) project aims at providing a new systems development methodology to capture extra-functional concerns and to extend Model-Driven Engineering industrial practices and technology approaches to specifically address the architectural structure, the interactions and the behavior of system components, while guaranteeing their correctness and the level of service at run time. The CHESS methodology is expected to be supported by a tool-set consisting of a set of plug-ins integrated within the Eclipse IDE. In the framework of the CHESS project, this thesis addresses the lack of well-integrated means for quality evaluation and proposes an integrated framework to evaluate the dependability of high-integrity embedded systems. After a survey of various failure behavior analysis techniques, a specific technique, called Failure Propagation and Transformation Calculus (FPTC), is selected, and a plug-in, called CHESS-FPTC, is developed within the CHESS tool-set. The FPTC technique allows users to calculate the failure behavior of a system from the failure behavior of its building components. Therefore, to fully support FPTC, the CHESS-FPTC plug-in allows users to model the failure behavior of the building components, perform the analysis automatically, and get the analysis results back into their initial models. A case study on the AAL2 Signaling Protocol is presented to illustrate and evaluate the CHESS-FPTC framework.
CHESS Project - http://chess-project.ning.com/
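As a rough illustration of the FPTC idea (each component maps incoming failure classes to the failure classes it propagates or transforms them into, and the system-level failure behavior is a fixpoint over the connections), here is a minimal sketch. The rule syntax, token names and the three-component pipeline are simplifications invented for this example; they do not reflect the concrete syntax of FPTC or of the CHESS-FPTC plug-in:

```python
def fptc(topology, rules, sources):
    """Iterate to a fixpoint: sets of failure tokens flowing on every connection.
    topology: list of (component, in_connection, out_connection)."""
    conns = {c: set(tokens) for c, tokens in sources.items()}
    changed = True
    while changed:
        changed = False
        for comp, cin, cout in topology:
            incoming = conns.get(cin, set()) | {"*"}  # '*' = normal behaviour
            out = set()
            for token in incoming:
                # Default rule: a token not mentioned propagates unchanged.
                out |= rules[comp].get(token, {token})
            if not out <= conns.get(cout, set()):
                conns.setdefault(cout, set()).update(out)
                changed = True
    return conns

RULES = {
    "sensor":   {"*": {"*", "value"}},       # may introduce value faults itself
    "filter":   {"value": {"*"},             # masks value faults...
                 "omission": {"omission"}},  # ...but passes omissions on
    "actuator": {"omission": {"late"}},      # an omission shows up as lateness
}
TOPOLOGY = [("sensor", "c0", "c1"), ("filter", "c1", "c2"), ("actuator", "c2", "c3")]

# The environment can omit the input on connection c0:
behaviour = fptc(TOPOLOGY, RULES, {"c0": {"omission"}})
print(behaviour["c3"])
```

The fixpoint iteration matters when the topology contains feedback loops; on this acyclic pipeline a single pass already stabilizes.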
Styles APA, Harvard, Vancouver, ISO, etc.
20

Nilsson, Markus. « A tool for automatic formal analysis of fault tolerance ». Thesis, Linköping University, Department of Computer and Information Science, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-4435.

Texte intégral
Résumé :

The use of computer-based systems is rapidly increasing and such systems can now be found in a wide range of applications, including safety-critical applications such as cars and aircraft. To make the development of such systems more efficient, there is a need for tools for automatic safety analysis, such as analysis of fault tolerance.

In this thesis, a tool for automatic formal analysis of fault tolerance was developed. The tool is built on top of the existing development environment for the synchronous language Esterel, and provides an output that can be visualised in the Item toolkit for fault tree analysis (FTA). The development of the tool demonstrates how fault tolerance analysis based on formal verification can be automated. The generated output from the fault tolerance analysis can be represented as a fault tree that is familiar to engineers from traditional FTA. The work also demonstrates that interesting attributes of the relationship between a critical fault combination and the input signals can be generated automatically.

Two case studies were used to test and demonstrate the functionality of the developed tool. A fault tolerance analysis was performed on a hydraulic leakage detection system, which is a real industrial system, and also on a synthetic system modeled for this purpose.
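As background to the fault-tree output mentioned above, a minimal sketch of two classic FTA computations (minimal cut sets and top-event probability, assuming independent, non-repeated basic events) might look as follows. The tree and the probabilities are invented for illustration and are unrelated to the thesis’s case studies:

```python
def cut_sets(node):
    """Minimal cut sets of a fault tree: a node is a basic-event name (str)
    or a gate ('AND'|'OR', [children])."""
    if isinstance(node, str):
        return [{node}]
    gate, children = node
    child_sets = [cut_sets(c) for c in children]
    if gate == "OR":
        sets = [s for cs in child_sets for s in cs]
    else:  # AND: every combination of one cut set per child
        sets = [set()]
        for cs in child_sets:
            sets = [a | b for a in sets for b in cs]
    return [s for s in sets if not any(t < s for t in sets)]  # drop non-minimal sets

def top_probability(node, p):
    """Top-event probability, valid when basic events are independent and not repeated."""
    if isinstance(node, str):
        return p[node]
    gate, children = node
    probs = [top_probability(c, p) for c in children]
    result = 1.0
    if gate == "AND":
        for q in probs:
            result *= q
        return result
    for q in probs:
        result *= 1.0 - q
    return 1.0 - result

TREE = ("OR", [("AND", ["pump_A_fails", "pump_B_fails"]), "valve_stuck"])
PROBS = {"pump_A_fails": 1e-3, "pump_B_fails": 1e-3, "valve_stuck": 1e-5}
cs = cut_sets(TREE)
top = top_probability(TREE, PROBS)
print(cs, top)
```

Here the single-event cut set `{valve_stuck}` dominates the top-event probability, which is exactly the kind of insight a generated fault tree gives an engineer.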

Styles APA, Harvard, Vancouver, ISO, etc.
21

Kabir, Sohag, I. Sorokos, K. Aslansefat, Y. Papadopoulos, Y. Gheraibia, J. Reich, M. Saimler et R. Wei. « A Runtime Safety Analysis Concept for Open Adaptive Systems ». Springer, 2019. http://hdl.handle.net/10454/17416.

Texte intégral
Résumé :
In the automotive industry, modern cyber-physical systems feature cooperation and autonomy. Such systems share information to enable collaborative functions, allowing dynamic component integration and architecture reconfiguration. Given the safety-critical nature of the applications involved, an approach for addressing safety in the context of reconfiguration impacting functional and non-functional properties at runtime is needed. In this paper, we introduce a concept for runtime safety analysis and decision input for open adaptive systems. We combine static safety analysis and evidence collected during operation to analyse, reason and provide online recommendations to minimize deviation from a system’s safe states. We illustrate our concept via an abstract vehicle platooning system use case.
This conference paper is available to view at http://hdl.handle.net/10454/17415.
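A loose sketch of the general concept (blending design-time failure rates with evidence collected during operation, then deriving an online recommendation) is given below. The blending rule, the component names, and the threshold are assumptions made for illustration; the paper’s actual runtime analysis and reasoning are richer than this:

```python
def updated_rate(static_rate, failures_observed, hours_observed, weight=0.5):
    """Blend a design-time failure rate with a runtime estimate (illustrative rule)."""
    runtime_rate = failures_observed / hours_observed
    return (1 - weight) * static_rate + weight * runtime_rate

def platoon_advice(component_rates, threshold=1e-4):
    """Rare-event approximation: hazard rate of a series system ~ sum of rates."""
    hazard = sum(component_rates.values())
    return "increase inter-vehicle gap" if hazard > threshold else "maintain gap"

rates = {
    # A degrading sensor pushes the blended rate well above its design-time value:
    "lidar": updated_rate(1e-5, failures_observed=2, hours_observed=1000),
    "v2v_link": updated_rate(5e-6, failures_observed=0, hours_observed=1000),
}
advice = platoon_advice(rates)
print(advice)
```

The point of the sketch is only the shape of the loop: static analysis supplies priors, operation supplies evidence, and the combination drives an online recommendation.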
Styles APA, Harvard, Vancouver, ISO, etc.
22

Khatri, Abdul Rafay [Verfasser]. « Development, verification and analysis of a fault injection tool for improving dependability of FPGA systems / Abdul Rafay Khatri ». Kassel : Universitätsbibliothek Kassel, 2021. http://d-nb.info/123338449X/34.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
23

Aysan, Hüseyin. « Fault-Tolerance Strategies and Probabilistic Guarantees for Real-Time Systems ». Doctoral thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-14653.

Texte intégral
Résumé :
Ubiquitous deployment of embedded systems is having a substantial impact on our society, since they interact with our lives in many critical real-time applications. Typically, embedded systems used in safety or mission critical applications (e.g., aerospace, avionics, automotive or nuclear domains) work in harsh environments where they are exposed to frequent transient faults such as power supply jitter, network noise and radiation. They are also susceptible to errors originating from design and production faults. Hence, they have the design objective to maintain the properties of timeliness and functional correctness even under error occurrences. Fault-tolerance plays a crucial role towards achieving dependability, and the fundamental requirement for the design of effective and efficient fault-tolerance mechanisms is a realistic and applicable model of potential faults and their manifestations. An important factor to be considered in this context is the random nature of faults and errors, which, if addressed in the timing analysis by assuming a rigid worst-case occurrence scenario, may lead to inaccurate results. It is also important that the power, weight, space and cost constraints of embedded systems are addressed by efficiently using the available resources for fault-tolerance. This thesis presents a framework for designing predictably dependable embedded real-time systems by jointly addressing the timeliness and the reliability properties. It proposes a spectrum of fault-tolerance strategies particularly targeting embedded real-time systems. Efficient resource usage is attained by considering the diverse criticality levels of the systems' building blocks. The fault-tolerance strategies are complemented with the proposed probabilistic schedulability analysis techniques, which are based on a comprehensive stochastic fault and error model.
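As an illustration of probabilistic schedulability analysis under transient faults, the sketch below computes the deadline-miss probability of a task that recovers by re-execution, assuming Poisson fault arrivals. The task parameters and the fault rate are invented; the thesis’s stochastic fault and error model is considerably more comprehensive:

```python
import math

def deadline_miss_prob(wcet, recovery, deadline, fault_rate):
    """P(deadline miss) for a task that re-executes after each transient fault,
    with faults arriving as a Poisson process (illustrative parameters)."""
    slack = deadline - wcet
    k_max = int(slack // recovery)   # how many re-executions fit before the deadline
    lam = fault_rate * deadline      # expected faults in one activation window
    # Deadline met iff at most k_max faults occur in the window:
    p_ok = sum(math.exp(-lam) * lam**k / math.factorial(k) for k in range(k_max + 1))
    return 1.0 - p_ok

p_miss = deadline_miss_prob(wcet=2.0, recovery=2.0, deadline=10.0, fault_rate=0.1)
print(f"deadline-miss probability: {p_miss:.6f}")
```

This shows why treating faults stochastically matters: a rigid worst-case assumption (any fault beyond the slack causes a miss) would declare the task unschedulable, whereas the probabilistic bound may satisfy the required reliability level.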
Styles APA, Harvard, Vancouver, ISO, etc.
24

Jakl, Jan. « Funkční analýza rizik (FHA) 4-místného letounu pro osobní dopravu ». Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2010. http://www.nusl.cz/ntk/nusl-229297.

Texte intégral
Résumé :
At the beginning, this master's thesis includes a comprehensive review of aircraft accidents in this category, 2-6 seat aircraft for passenger transport. Since the work focuses on the autopilot, there is a basic overview of the most common autopilots that can be found in these aircraft now and in the future. The functional hazard analysis (FHA) for the 4-seat passenger airplane primarily investigates catastrophic failure conditions, in most cases accompanied by likelihoods taken from different databases. The airplane considered for this analysis is equipped with instruments for IFR flights. There is also a brief overview of the regulations necessary for the installation of these systems in the airplane. The work concludes with a design of the instrument panel and a layout of equipment for the future aircraft, with an emphasis on maximum clarity.
Styles APA, Harvard, Vancouver, ISO, etc.
25

Ding, Kai [Verfasser], Klaus [Gutachter] Janschek et Antoine [Gutachter] Rauzy. « Zuverlässigkeitsorientierter Entwurf und Analyse von Steuerungssystemen auf Modellebene unter zufälligen Hardwarefehlern : Dependability-oriented Design and Analysis of Control Systems at the Model Level under Random Hardware Faults / Kai Ding ; Gutachter : Klaus Janschek, Antoine Rauzy ». Dresden : Technische Universitaet Dresden, 2021. http://d-nb.info/1236990455/34.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
26

Hoang, Victoria, et Kevin Ly. « Signalfel – Hur kan dessa reduceras ? : Analys av driftstörningar i signalsystem på Ostkustbanan ». Thesis, KTH, Byggvetenskap, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-174142.

Texte intégral
Résumé :
Over recent decades, train delays on the railway have increased significantly, together with decreasing operational reliability. The low reliability can be linked to the increased traffic volume and to lagging maintenance, i.e. worn track that remains in use for far too long. This increases the sensitivity to faults that cause stops in traffic, usually referred to as a "signal failure". A signal failure is a fault that can occur in a variety of components within the signalling system. These have been divided into six different parts: the signalling interlocking, track circuits, balises, train control systems, level crossings, and the signals themselves. A fault in any of these components causes the signals to go to a safe state, which means a halt in traffic. The causes and components that contribute to the low reliability of the railway are highlighted in this work to raise awareness of the problems found there. The focus has been on developing proposed actions for the component deemed most sensitive in the signalling system. The results showed that faults occurring in the signalling system are mostly generated by the track circuits. The most common track circuit fault is conduction across insulated joints, which are judged to be the most sensitive component in the signalling system. This is especially true in the Stockholm area, where train traffic is densest and disturbances affect a large number of travellers. The actions taken on track circuit faults are mainly short-term solutions such as cleaning, checks, or no action at all. Solutions are usually carried out only after a fault has occurred, which means that the "signal failure" and its consequences have already affected traffic. In order to increase reliability, more active and effective maintenance work is required.
Investments in innovative solutions and proposed actions should be made in order to reduce the frequency of disturbances.
Styles APA, Harvard, Vancouver, ISO, etc.
27

Simache, Cristina. « Evaluation de la sûreté de fonctionnement de systèmes Unix et Windows à partir de données opérationnelles : méthode et application ». Toulouse 3, 2004. http://www.theses.fr/2004TOU30280.

Texte intégral
Résumé :
Academic and industrial computing environments are mainly based on interconnected heterogeneous systems including a large number of Unix, Windows NT and Windows 2000 workstations and servers. These environments are designed to facilitate resource sharing and cooperative work between users. However, these benefits may be compromised by failures affecting the communication network, the applications or the host systems. There is no better way to understand the behavior of computing environments in the presence of faults than by direct measurement, analysis and assessment based on data obtained from the observation of their behavior in an operational environment. Our work focuses on the development and implementation of methods facilitating the collection and dependability analysis of log files automatically recorded by some operating systems. The target systems in our study are Unix, Windows NT and Windows 2000 machines interconnected through a local area network. Besides the definition and implementation of the data collection strategy, the data processing aims to extract the relevant information and to obtain quantitative measures that characterize the target systems from a dependability point of view. We also show how the measures estimated from operational data can be integrated into an analytical model for evaluating availability as perceived by users. The comparative analysis of the measures characterizing the systems and those reflecting the users' perception is another original result of our work.
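The kind of measure extraction described above can be sketched as follows; the log format and the event timestamps are hypothetical stand-ins for the filtered reboot/failure events one would extract from real Unix or Windows logs:

```python
from datetime import datetime

# Hypothetical, already-filtered event log: (timestamp, state entered).
LOG = [
    ("2004-03-01 08:00", "up"),
    ("2004-03-10 02:30", "down"),
    ("2004-03-10 04:00", "up"),
    ("2004-03-25 17:45", "down"),
    ("2004-03-25 18:15", "up"),
]

def dependability_measures(log):
    """Availability, MTTF and MTTR (hours) from an alternating up/down event log."""
    times = [datetime.strptime(ts, "%Y-%m-%d %H:%M") for ts, _ in log]
    up_hours = down_hours = 0.0
    for i in range(len(log) - 1):
        delta = (times[i + 1] - times[i]).total_seconds() / 3600.0
        if log[i][1] == "up":
            up_hours += delta
        else:
            down_hours += delta
    failures = sum(1 for _, state in log if state == "down")
    availability = up_hours / (up_hours + down_hours)
    return availability, up_hours / failures, down_hours / failures

A, mttf, mttr = dependability_measures(LOG)
print(f"availability={A:.4f}, MTTF={mttf:.1f} h, MTTR={mttr:.1f} h")
```

The hard part in practice, as the thesis emphasizes, is not this arithmetic but reliably extracting such clean up/down events from heterogeneous raw logs.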
Styles APA, Harvard, Vancouver, ISO, etc.
28

Sklenář, Filip. « Analýza provozních rizik nově zaváděných typů letadel ». Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2014. http://www.nusl.cz/ntk/nusl-231641.

Texte intégral
Résumé :
This thesis examines the process of introducing a new aircraft into service, in particular the steps from the initial vision of a new aircraft to its entry into operation. The thesis consists of seven parts. In the first four sections, I describe the organizations involved in aviation and reliability, the physical principles of aircraft systems, accident statistics, and regulatory requirements. The fifth section focuses on reliability and describes procedures for reliability analysis. The sixth part addresses the procedure for introducing a new aircraft into service and also includes a methodology for eliminating the element of lack of confidence, which was one of the main objectives of this work. The seventh part demonstrates the procedure for introducing an aircraft into operation.
Styles APA, Harvard, Vancouver, ISO, etc.
29

Ionescu, Dorina-Romina. « Évaluation quantitative de séquences d’événements en sûreté de fonctionnement à l’aide de la théorie des langages probabilistes ». Thesis, Université de Lorraine, 2016. http://www.theses.fr/2016LORR0309/document.

Texte intégral
Résumé :
Dependability studies are generally based on the assumption that failure and repair events are independent, and on the analysis of cut sets, which describe the subsets of components whose failure causes a system failure. For dynamic systems, where the order in which events occur has a direct impact on the dysfunctional behavior of the system, it is important to favor the use of event sequences, which allow a more precise evaluation of dependability indicators than cut sets. In the first part of our work, we propose a formal framework for determining the event sequences that describe the evolution of the system and for their quantitative evaluation, using the theory of probabilistic languages and the theory of Markov/semi-Markov processes. The quantitative evaluation of sequences integrates the computation of their probability of occurrence as well as their criticality (cost and length). For the evaluation of sequences describing the evolution of complex systems with multiple operating or failure modes, a modular approach based on composition operators (choice and concatenation) is proposed. It consists in computing the probability of a global event sequence from evaluations performed locally, mode by mode. The different contributions are applied to two case studies of increasing size and complexity.
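A minimal sketch of the sequence evaluation idea (the probability of an event sequence as a product of transition probabilities, combined with concatenation and choice operators) on an invented three-state Markov model; this is an illustration of the principle, not the thesis's formalism:

```python
# Transition probabilities of a small, hypothetical dependability Markov model.
P = {
    ("OK", "OK"): 0.9,
    ("OK", "degraded"): 0.1,
    ("degraded", "OK"): 0.7,        # repair
    ("degraded", "failed"): 0.3,
}

def seq_prob(states):
    """Concatenation: the probability of one event sequence is the product
    of its transition probabilities."""
    p = 1.0
    for a, b in zip(states, states[1:]):
        p *= P.get((a, b), 0.0)
    return p

def choice_prob(sequences):
    """Choice between mutually exclusive sequences: sum their probabilities."""
    return sum(seq_prob(s) for s in sequences)

# Probability of reaching 'failed' in at most three steps, starting from 'OK':
p_fail = choice_prob([
    ["OK", "degraded", "failed"],
    ["OK", "OK", "degraded", "failed"],
])
print(p_fail)
```

Note how the two sequences with the same event set but different interleavings would get different probabilities, which is exactly why sequences are more precise than cut sets for dynamic systems.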
Styles APA, Harvard, Vancouver, ISO, etc.
30

MATOS, JÚNIOR Rubens de Souza. « Identification of Availability and Performance Bottlenecks in Cloud Computing Systems : an approach based on hierarchical models and sensitivity analysis ». Universidade Federal de Pernambuco, 2016. https://repositorio.ufpe.br/handle/123456789/18702.

Texte intégral
Résumé :
CAPES
The cloud computing paradigm is able to reduce the costs of acquisition and maintenance of computer systems, and enables the balanced management of resources according to demand. Hierarchical and composite analytical models are suitable for describing the performance and dependability of cloud computing systems in a concise manner, dealing with the huge number of components which constitute such systems. That approach uses distinct sub-models for each system level, and the measures obtained in each sub-model are integrated to compute the measures for the whole system. The identification of bottlenecks in hierarchical models can still be difficult, however, due to the large number of parameters and their distribution among distinct modeling levels and formalisms. This thesis proposes methods for the evaluation and detection of bottlenecks in cloud computing systems. The methodology is based on hierarchical modeling and parametric sensitivity analysis techniques tailored for such a scenario. This research introduces methods to build unified sensitivity rankings when distinct modeling formalisms are combined. These methods are embedded in the Mercury software tool, providing an automated sensitivity analysis framework to support the process. Distinct case studies helped in testing the methodology, encompassing hardware and software aspects of cloud systems, from the basic infrastructure level to applications hosted in private clouds. The case studies showed that the proposed approach is helpful for guiding cloud system designers and administrators in the decision-making process, especially for tuning and architectural improvements. The methodology can also be employed through an optimization algorithm proposed here, called Sensitive GRASP. This algorithm aims at optimizing the performance and dependability of computing systems in scenarios where exploring all architectural and configuration possibilities to find the best quality of service is infeasible.
This is especially useful for cloud-hosted services and their complex underlying infrastructures.
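The unified sensitivity-ranking idea can be sketched with a toy two-level availability model. The model, the parameter ranges and the percentage-difference index below are illustrative assumptions, not Mercury's implementation:

```python
def system_availability(mttf, mttr, n_replicas):
    """Toy hierarchical model: node availability composed into a replicated system."""
    a = mttf / (mttf + mttr)
    return 1 - (1 - a) ** n_replicas

def sensitivity_index(f, base, param, values):
    """Percentage-difference index: spread of the model output over a parameter range."""
    outs = [f(**{**base, param: v}) for v in values]
    return (max(outs) - min(outs)) / max(outs)

base = {"mttf": 500.0, "mttr": 2.0, "n_replicas": 2}
ranges = {
    "mttf": [250.0, 500.0, 1000.0],
    "mttr": [1.0, 2.0, 4.0],
    "n_replicas": [1, 2, 3],
}
# A single ranking across all parameters, regardless of which sub-model they live in:
ranking = sorted(((sensitivity_index(system_availability, base, p, vs), p)
                  for p, vs in ranges.items()), reverse=True)
for score, param in ranking:
    print(f"{param}: {score:.6f}")
```

In this toy case the redundancy level dominates the ranking, which is the kind of bottleneck indication the methodology is meant to surface automatically.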
Styles APA, Harvard, Vancouver, ISO, etc.
31

Novák, Josef. « Metody analýzy spolehlivostních dat z provozu a zkoušek letadel ». Doctoral thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2011. http://www.nusl.cz/ntk/nusl-233972.

Texte intégral
Résumé :
The doctoral thesis deals with reliability (dependability) analyses of operation and testing data of airplanes. Requirements of airworthiness regulations on aircraft hydraulic systems (with a focus on the US FAR-23 and European CS-23 regulations) are taken into account. The mentioned regulations include requirements for structural design, design of systems, etc. They cover a wide range of airplanes, from small sport airplanes to 19-seat transport aircraft. Options for predictive reliability analyses (resources) and reliability tests are also discussed. A practical application is carried out on a small transport airplane (currently in development), and a failure report is designed. The expected major contribution of the work is the selection and practical application of the most suitable procedures for safety assessment in the field of aircraft hydraulic systems, with a focus on small transport aircraft. A comparison with different data sources is also shown.
Styles APA, Harvard, Vancouver, ISO, etc.
32

Brini, Manel. « Safety-Bag pour les systèmes complexes ». Thesis, Compiègne, 2018. http://www.theses.fr/2018COMP2444/document.

Texte intégral
Résumé :
Les véhicules automobiles autonomes sont des systèmes critiques. En effet, suite à leurs défaillances, ils peuvent provoquer des dégâts catastrophiques sur l'humain et sur l'environnement dans lequel ils opèrent. Le contrôle des véhicules autonomes robotisés est une fonction complexe, qui comporte de très nombreux modes de défaillances potentiels. Dans le cas de plateformes expérimentales qui n'ont suivi ni les méthodes de développement ni le cycle de certification requis pour les systèmes industriels, les probabilités de défaillances sont beaucoup plus importantes. En effet, ces véhicules expérimentaux se heurtent à deux problèmes qui entravent leur sûreté de fonctionnement, c'est-à-dire la confiance justifiée que l'on peut avoir dans leur comportement correct. Tout d'abord, ils sont utilisés dans des environnements ouverts, au contexte d'exécution très large. Ceci rend leur validation très complexe, puisque de nombreuses heures de test seraient nécessaires, sans garantie que toutes les fautes du système soient détectées puis corrigées. De plus, leur comportement est souvent très difficile à prédire ou à modéliser. Cela peut être dû à l'utilisation des logiciels d'intelligence artificielle pour résoudre des problèmes complexes comme la navigation ou la perception, mais aussi à la multiplicité de systèmes ou composants interagissant et compliquant le comportement du système final, par exemple en générant des comportements émergents. Une technique permettant d'augmenter la sécurité-innocuité (safety) de ces systèmes autonomes est la mise en place d'un composant indépendant de sécurité, appelé « Safety-Bag ». Ce système est intégré entre l'application de contrôle-commande et les actionneurs du véhicule, ce qui lui permet de vérifier en ligne un ensemble de nécessités de sécurité, qui sont des propriétés nécessaires pour assurer la sécurité-innocuité du système. 
Chaque nécessité de sécurité est composée d'une condition de déclenchement et d'une intervention de sécurité appliquée quand la condition de déclenchement est violée. Cette intervention consiste soit en une inhibition de sécurité qui empêche le système d'évoluer vers un état à risques, soit en une action de sécurité afin de remettre le véhicule autonome dans un état sûr. La définition des nécessités de sécurité doit suivre une méthode rigoureuse pour être systématique. Pour ce faire, nous avons réalisé dans nos travaux une étude de sûreté de fonctionnement basée sur deux méthodes de prévision des fautes : AMDEC (Analyse des Modes de Défaillances, leurs Effets et leur Criticité) et HazOp-UML (Etude de dangers et d'opérabilité) qui mettent l'accent respectivement sur les composants internes matériels et logiciels du système et sur l'environnement routier et le processus de conduite. Le résultat de ces analyses de risques est un ensemble d'exigences de sécurité. Une partie de ces exigences de sécurité peut être traduite en nécessités de sécurité implémentables et vérifiables par le Safety-Bag. D'autres ne le peuvent pas pour que le système Safety-Bag reste un composant relativement simple et validable. Ensuite, nous avons effectué des expérimentations basées sur l'injection de fautes afin de valider certaines nécessités de sécurité et évaluer le comportement de notre Safety-Bag. Ces expériences ont été faites sur notre véhicule robotisé de type Fluence dans notre laboratoire dans deux cadres différents, sur la piste réelle SEVILLE dans un premier temps et ensuite sur la piste virtuelle simulée par le logiciel Scanner Studio sur le banc VILAD. Le Safety-Bag reste une solution prometteuse mais partielle pour des véhicules autonomes industriels. Par contre, il répond à l'essentiel des besoins pour assurer la sécurité-innocuité des véhicules autonomes expérimentaux
Autonomous automotive vehicles are critical systems: their failures can cause catastrophic damage to people and to the environment in which they operate. Controlling an autonomous vehicle is a complex function, with many potential failure modes. In the case of experimental platforms, which have followed neither the development methods nor the certification cycle required for industrial systems, the probability of failure is much greater. Indeed, these experimental vehicles face two problems that impede their dependability, that is, the justified confidence that can be placed in their correct behavior. First, they are used in an open environment, with a very wide execution context. This makes their validation very complex, since many hours of testing would be necessary, with no guarantee that all faults in the system are detected and corrected. In addition, their behavior is often very difficult to predict or model. This may be due to the use of artificial-intelligence software to solve complex problems such as navigation or perception, but also to the multiplicity of interacting systems or components, which complicates the behavior of the final system, for example by generating emergent behaviors. One technique for increasing the safety of these autonomous systems is the introduction of an independent safety component called a "Safety-Bag". This component is inserted between the control application and the actuators of the vehicle, which allows it to check online a set of safety necessities, i.e. properties that are necessary to ensure the safety of the system. Each safety necessity is composed of a safety trigger condition and a safety intervention applied when the trigger condition is violated. This intervention consists of either a safety inhibition, which prevents the system from moving to a hazardous state, or a safety action that returns the autonomous vehicle to a safe state.
The definition of safety necessities must follow a rigorous method to be systematic. To this end, our work includes a dependability study based on two fault forecasting methods: FMEA and HazOp-UML, which focus respectively on the internal hardware and software components of the system and on the road environment and driving process. The result of these risk analyses is a set of safety requirements. Some of these safety requirements can be translated into safety necessities that are implementable and verifiable by the Safety-Bag. Others cannot be implemented in the Safety-Bag, which must remain simple enough to be easily validated. We then carried out experiments based on fault injection in order to validate some safety necessities and to evaluate the Safety-Bag's behavior. These experiments were performed on our robotized Fluence-type vehicle in two different settings: first on the real SEVILLE track, and then on the virtual track simulated by the Scanner Studio software on the VILAD test bench. The Safety-Bag remains a promising but partial solution for industrial autonomous vehicles. It nevertheless meets the essential safety needs of experimental autonomous vehicles.
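The trigger-condition/intervention structure of a safety necessity can be made concrete with a minimal Python sketch (illustrative only, not the thesis implementation; the overspeed rule, the signal names, and the 13.9 m/s limit are invented assumptions):

```python
from dataclasses import dataclass
from typing import Callable, Dict

Cmd = Dict[str, float]

@dataclass
class SafetyNecessity:
    name: str
    trigger: Callable[[Cmd], bool]      # True => safety condition violated
    intervention: Callable[[Cmd], Cmd]  # safety inhibition or safety action

def overspeed_violated(cmd: Cmd) -> bool:
    return cmd["speed_cmd"] > cmd["speed_limit"]

def inhibit_overspeed(cmd: Cmd) -> Cmd:
    # Safety inhibition: clamp the command instead of forwarding it unchanged.
    safe = dict(cmd)
    safe["speed_cmd"] = min(cmd["speed_cmd"], cmd["speed_limit"])
    return safe

def safety_bag(cmd: Cmd, necessities) -> Cmd:
    # The Safety-Bag sits between the control application and the actuators:
    # each outgoing command is checked against every safety necessity.
    for n in necessities:
        if n.trigger(cmd):
            cmd = n.intervention(cmd)
    return cmd

necessities = [SafetyNecessity("no-overspeed", overspeed_violated, inhibit_overspeed)]
out = safety_bag({"speed_cmd": 18.0, "speed_limit": 13.9}, necessities)  # clamped to 13.9
```

A real Safety-Bag would of course check many such necessities online and also implement safety actions (e.g. a controlled stop), not only inhibitions.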
Styles APA, Harvard, Vancouver, ISO, etc.
33

Vicenzutti, Andrea. « Innovative Integrated Power Systems for All Electric Ships ». Doctoral thesis, Università degli studi di Padova, 2016. http://hdl.handle.net/11577/3424463.

Full text
Abstract:
Nowadays, electric propulsion is a viable alternative to mechanical propulsion for large ships. At present, in fact, mechanical propulsion remains limited to ships with peculiar requirements, such as the need for a high cruise speed or the use of specific fuels. The use of electric propulsion, paired with the progressive electrification of onboard loads, led to the birth of the All Electric Ship (AES) concept. An AES is a ship where all onboard loads (propulsion included) are electrically powered by a single power system, called the Integrated Power System (IPS). The IPS is a key system in an AES, thus requiring both accurate design and careful management. Indeed, in an AES electricity powers almost everything, highlighting the issue of guaranteeing both proper Power Quality and Continuity of Service. The design of such a complex system has conventionally been done by considering each component separately, to simplify the process. However, this practice leads to poor performance, integration issues, and oversizing. Moreover, the separate design procedure heavily affects the system's reliability, owing to the difficulty of assessing the effect on the ship of a fault in a single subsystem. For these reasons a new design process is needed, able to consider the effect of all components and subsystems on the system as a whole, thus improving the most important drivers of ship design: efficiency, effectiveness, reliability, and cost saving. Therefore, the aim of this research has been to obtain a new design methodology, applicable to the AES's IPS, which considers the system as a whole, with all its internal interdependencies. The results of this research are presented in this thesis as a sub-process to be integrated into the IPS design process. A wide review of the state of the art is given, to convey the context, why such an innovative process is needed, and which innovative techniques can be used as an aid in design.
Each point is discussed with the aim of this thesis in mind, presenting topics, bibliography, and personal evaluations tailored to help the reader understand the impact of the proposed design process. In particular, after a first chapter dedicated to the introduction of All Electric Ships, describing how such ships have evolved and which applications are most affected, a reasoned discussion of the conventional ship-design process is given in the second chapter. In addition, an in-depth analysis of IPS design is carried out, to explain the context into which the proposed innovative design process has to be integrated. Several examples of issues arising from the conventional design process are given, to motivate the proposal of a new one. Besides the design issues mentioned above, the upcoming introduction of innovative distribution systems onboard ships and the recent emergence of new requirements with a significant impact on the IPS also call for a new design process. An excursus on these two topics is therefore given in the third chapter, referring to recent literature and research activities. Chapter four is dedicated to the description of the tools used to build the innovative design process. The first part is dedicated to dependability theory, which provides a systematic and coherent approach to determining the effects of faults on complex systems. Through dependability theory and its techniques it is possible: to assess the effect of single-component faults on the overall system; to identify all the possible causes of a given system failure; to evaluate mathematical figures of merit for the system in order to compare different design solutions; and to define where the designer must intervene to improve the system. The second part of the fourth chapter is dedicated to power-system software simulators and hardware-in-the-loop testing.
In particular, the use of such systems as an aid in designing power systems is discussed, to explain why these tools have been integrated into the innovative design process developed here. The fifth chapter is dedicated to the developed design process: how it works, how it should be integrated into the ship design process, and what impact it has on the design. In particular, the developed procedure involves both the application of dependability techniques (in particular Fault Tree Analysis) and the simulation of the dynamic behavior of the power system through a mathematical model tailored to electromechanical transients. Finally, to demonstrate the applicability of the proposed procedure, a case study is analyzed in chapter six: the IPS of a dynamically positioned offshore Oil & Gas drillship. This case was chosen because of the stringent requirements of these ships, whose impact on power-system design is significant. The analysis of the IPS through the Fault Tree Analysis technique is presented (though at a simplified level of detail), followed by the calculation of several dependability indexes. These results, together with the applicable rules and regulations, have been used to define the input data for simulations, carried out using a purpose-built mathematical model of the IPS. The simulation outcomes have in turn been used to evaluate the dynamic processes that bring the system from relevant faults to failure, in order to improve the system's response to fault events.
Nowadays, in large ships electric propulsion is a viable alternative to mechanical propulsion. Indeed, at present the latter remains limited to ships with particular requirements, such as the need for a high cruise speed or the use of specific fuels. The use of electric propulsion, together with the progressive electrification of onboard loads, has led to the birth of the All Electric Ship (AES) concept. An AES is a ship in which all onboard loads (propulsion included) are powered by a single electrical system, called the Integrated Power System (IPS). The IPS is a key system in an AES and therefore requires accurate design and management. Indeed, in an AES this system powers almost everything, highlighting the problem of guaranteeing both proper Power Quality and continuity of service. The design of such a complex system is conventionally done by considering the individual components separately, to simplify the process. However, this practice can lead to reduced performance, integration problems, and oversizing. Furthermore, the separate design procedure heavily affects the reliability of the system, owing to the difficulty of assessing the effect on the ship of a fault in a single subsystem. For these reasons a new design process is needed, capable of considering the effect of all the components and subsystems of the system, thus improving the most important drivers applied in ship design: efficiency, effectiveness, reliability, and cost reduction. Given these premises, the objective of the research was to obtain a new design methodology applicable to the integrated power system of AESs, capable of considering the system as a whole, including all its internal interdependencies.
The result of this research is described in this thesis work, and consists of a sub-process to be integrated into the conventional design process of the integrated power system. In this thesis a wide review of the state of the art is carried out, to allow the reader to understand the context, why such an innovative process is necessary, and which innovative techniques can be used as an aid in design. Each point is discussed focusing on the purpose of this thesis, thus presenting topics, bibliography, and personal evaluations aimed at guiding the reader to understand the impact of the proposed design process. In particular, after a first chapter dedicated to the introduction of AESs, describing how such ships have evolved and which applications are most significant, a reasoned discussion of the conventional ship design process is given in the second chapter. In addition, an in-depth analysis of the IPS design process is carried out, to explain the context into which the innovative design process has to be integrated. Some examples of problems arising from the traditional design process are given, to motivate the proposal of a new one. Besides the design-related problems, other motivations call for a renewed design process, such as the imminent introduction of innovative distribution systems onboard ships and the recent appearance of new requirements whose impact on the IPS is significant. For this reason, an excursus on these two topics is given in the third chapter, with reference to the most recent literature and research. The fourth chapter is dedicated to the description of the tools that will be used to build the innovative design process.
The first part of the chapter is dedicated to dependability theory, which provides a systematic and coherent approach to determining the effects of faults on complex systems. Through dependability theory and its techniques it is possible: to determine the effect on the system of faults of individual components; to evaluate all the possible causes of a given failure event; to evaluate mathematical indexes of the system, in order to compare different design solutions; and to define where and how the designer must intervene to improve the system. The second part of the fourth chapter is dedicated to software for simulating the behavior of the IPS and to hardware-in-the-loop testing. In particular, the use of such systems as an aid in the design of power systems is discussed, to explain why these tools have been integrated into the developed design process. The fifth chapter is dedicated to the design process developed during the research: how the process works, how it should be integrated into the conventional design process, and what impact it has on design. In particular, the developed procedure involves both the application of the techniques of dependability theory (in particular Fault Tree Analysis) and the simulation of the dynamic behavior of the IPS through a mathematical model of the system tuned to electromechanical transients. Finally, to demonstrate the applicability of the proposed procedure, the sixth chapter analyzes a case study: the IPS of an offshore oil & gas drillship with dynamic positioning. This case study was chosen because of the very stringent requirements of this class of ships, whose impact on the IPS design is significant.
The analysis of the IPS through the Fault Tree Analysis technique is presented (though at a simplified level of detail), followed by the calculation of several reliability indexes. These results, together with the applicable rules and regulations, have been used to define the input data for the simulations, carried out using a purpose-built mathematical model of the IPS. The simulation results made it possible to evaluate how the system dynamically evolves from relevant faults to failure, and therefore to propose improvements.
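As a reminder of the kind of computation a Fault Tree Analysis feeds on, the sketch below evaluates the top-event probability of a toy fault tree over independent basic events (the gate structure and the failure probabilities are invented for illustration, not taken from the drillship case study):

```python
# Toy fault tree: TOP = (GenA AND GenB) OR Switchboard, independent events.

def p_and(*ps):
    # probability that all independent events occur
    out = 1.0
    for p in ps:
        out *= p
    return out

def p_or(*ps):
    # probability that at least one independent event occurs
    none = 1.0
    for p in ps:
        none *= 1.0 - p
    return 1.0 - none

p_gen_a, p_gen_b, p_switchboard = 1e-2, 1e-2, 1e-4    # invented probabilities
p_top = p_or(p_and(p_gen_a, p_gen_b), p_switchboard)  # "loss of propulsion"
```

Here the single basic event `Switchboard` dominates the top event, which is exactly the kind of insight (a first-order minimal cut set) that drives design improvements.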
34

Bader, Kaci. « Tolérance aux fautes pour la perception multi-capteurs : application à la localisation d'un véhicule intelligent ». Thesis, Compiègne, 2014. http://www.theses.fr/2014COMP2161/document.

Full text
Abstract:
Perception is a fundamental input of robotic systems, in particular for localization, navigation, and interaction with the environment. However, the data perceived by robotic systems are often complex and subject to significant imprecision. To address these problems, the multi-sensor approach uses either several sensors of the same type, to exploit their redundancy, or sensors of different types, to exploit their complementarity, in order to reduce sensor imprecision and uncertainty. Validating this data-fusion approach raises two major problems. First, the behavior of fusion algorithms is difficult to predict, which makes them hard to verify with formal approaches. Moreover, the open environment of robotic systems leads to a very wide execution context, which makes testing difficult and costly. The purpose of this thesis is to propose an alternative to validation by introducing fault-tolerance mechanisms: since it is difficult to eliminate all faults from the perception system, we seek to limit their impact on its operation. We studied the fault tolerance intrinsically provided by data fusion by formally analyzing data-fusion algorithms, and we proposed detection and recovery mechanisms suited to multi-sensor perception. We then implemented the proposed mechanisms for a vehicle-localization application using Kalman-filter data fusion. Finally, we evaluated the proposed mechanisms using replay of real data and fault-injection techniques, and demonstrated their effectiveness against hardware and software faults.
Perception is a fundamental input for robotic systems, particularly for positioning, navigation, and interaction with the environment. But the data perceived by these systems are often complex and subject to significant imprecision. To overcome these problems, the multi-sensor approach uses either multiple sensors of the same type, to exploit their redundancy, or sensors of different types, to exploit their complementarity, in order to reduce sensor inaccuracies and uncertainties. The validation of this data-fusion approach raises two major problems. First, the behavior of fusion algorithms is difficult to predict, which makes them difficult to verify by formal approaches. In addition, the open environment of robotic systems generates a very large execution context, which makes testing difficult and costly. The purpose of this work is to propose an alternative to validation by developing fault-tolerance mechanisms: since it is difficult to eliminate all faults from the perception system, we try to limit their impact on its operation. We studied the fault tolerance inherently provided by data fusion by formally analyzing data-fusion algorithms, and we proposed detection and recovery mechanisms suitable for multi-sensor perception. We implemented the proposed mechanisms in a vehicle-localization application using Kalman-filtering data fusion. Finally, we evaluated the proposed mechanisms using replay of real data and the fault-injection technique, and demonstrated their effectiveness against hardware and software faults.
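One classic way to add fault detection on top of Kalman-filter fusion is an innovation (residual) test; the scalar sketch below illustrates the idea (an illustrative assumption, not the mechanisms proposed in the thesis; all noise values and the gate threshold are invented):

```python
def kf_step(x, P, z, F=1.0, H=1.0, Q=0.01, R=0.1, gate=9.0):
    # Scalar Kalman filter step with an innovation test: a measurement whose
    # normalized squared innovation exceeds `gate` is flagged as faulty and
    # rejected (the filter then keeps its prediction).
    x_pred = F * x
    P_pred = F * P * F + Q
    nu = z - H * x_pred            # innovation
    S = H * P_pred * H + R         # innovation variance
    if nu * nu / S > gate:
        return x_pred, P_pred, True
    K = P_pred * H / S             # Kalman gain
    return x_pred + K * nu, (1.0 - K * H) * P_pred, False

x, P = 0.0, 1.0
x, P, faulty = kf_step(x, P, z=0.1)     # consistent measurement: accepted
_, _, faulty2 = kf_step(x, P, z=50.0)   # injected outlier: rejected
```

In a multi-sensor setting the same test, applied per sensor, supports both detection and recovery (the faulty sensor is excluded from the fusion).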
35

Bartl, Michal. « Metodika vkládání kontrolních prvků do číslicového systému ». Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2009. http://www.nusl.cz/ntk/nusl-236627.

Full text
Abstract:
The topics described in this diploma thesis belong to the area of digital systems testability analysis. Basic concepts such as dependability, controllability, observability, and testability are explained. Methods for raising the testability and dependability of digital circuits are presented, including metrics that allow testability parameters to be evaluated. Furthermore, the thesis describes a formal model of digital systems, which introduces the implementation part of the thesis. Within this part, a program tool is demonstrated that can identify the components of digital circuits and their function. The tool can also create checker circuits that verify the correct function of such digital circuits.
36

Saied, Majd. « Fault-tolerant control of an octorotor unmanned aerial vehicle under actuators failures ». Thesis, Compiègne, 2016. http://www.theses.fr/2016COMP2287.

Full text
Abstract:
Dependability has become indispensable for all critical systems in which human lives are at stake (aeronautics, railways, etc.). This has led to the design and development of fault-tolerant architectures, whose objective is to maintain a correct service delivered by the system despite the presence of faults, and in particular to guarantee the safety and reliability of the system. Fault tolerance in multirotor aerial drones has recently received significant attention from the scientific community. In particular, several works have addressed the fault tolerance of quadrotors to partial actuator faults, and more recently research has considered the complete failure of one of the actuators. These studies have shown that the total failure of one actuator makes a quadrotor not fully controllable. A proposed solution is to consider multirotors with redundant actuators (hexarotors or octorotors). The inherent redundancy available in these vehicles is exploited, in the event of an actuator failure, to redistribute the control effort among the healthy motors so as to guarantee the stability and full controllability of the system. In this thesis, approaches for the design of fault-tolerant control systems for multirotor drones are studied and applied to the control of octorotors. The algorithms are nonetheless designed to be applicable to other types of multirotors with minor modifications. First, a controllability analysis of the octorotor after the occurrence of actuator failures is presented. Then, a motor-failure detection and isolation module based on a nonlinear observer and on the measurements of the inertial measurement unit is proposed.
The motor speeds and currents provided by the electronic speed controllers are also used in another detection algorithm to detect actuator failures and to distinguish motor failures from propeller losses. A recovery module based on the reconfiguration of the control mixing is proposed to redistribute the control effort optimally over the healthy actuators after the occurrence of failures in the system. A complete architecture, comprising fault detection and isolation followed by system recovery, is validated experimentally on a coaxial octorotor and then compared with other architectures based on control allocation and on passive fault tolerance by sliding mode.
With growing demands for safety and reliability, and an increasing awareness of the risks associated with system malfunction, dependability has become an essential concern in modern technological systems, particularly safety-critical systems such as aircraft or railway systems. This has led to the design and development of fault-tolerant control (FTC) systems. The main objective of an FTC architecture is to maintain the desirable performance of the system in the event of faults and to prevent local faults from causing failures. Recent years have witnessed many developments in the area of fault detection and diagnosis and fault-tolerant control for rotary-wing Unmanned Aerial Vehicles. In particular, there has been extensive work on stability improvements for quadrotors in the case of partial failures, and recently some works have addressed the problem of a complete propeller failure on a quadrotor. These studies demonstrated, however, that the complete loss of a quadrotor motor results in a vehicle that is not fully controllable. An alternative is then to consider multirotors with redundant actuators (octorotors or hexarotors). The inherent redundancy available in these vehicles can be exploited, in the event of an actuator failure, to redistribute the control effort among the remaining working actuators such that stability and complete controllability are retained. In this thesis, fault-tolerant control approaches for rotary-wing UAVs are investigated. The work focuses on developing algorithms for a coaxial octorotor UAV; these algorithms are nonetheless designed to be applicable to any redundant multirotor with minor modifications. A nonlinear model-based fault detection and isolation system for motor failures is constructed, based on a nonlinear observer and on the outputs of the inertial measurement unit.
Motor speeds and currents given by the electronic speed controllers are also used in another fault detection and isolation module, to detect actuator failures and to distinguish between motor failures and propeller damage. An offline rule-based reconfigurable control mixing is designed in order to redistribute the control effort over the healthy actuators in the case of one or more motor failures. A complete architecture, including fault detection and isolation followed by system recovery, is tested experimentally on a coaxial octorotor and compared with other architectures based on pseudo-inverse control allocation and on a robust controller using second-order sliding mode.
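The pseudo-inverse control-allocation idea mentioned above can be sketched as follows (a toy 4x8 effectiveness matrix is assumed, not the actual coaxial-octorotor model; after a motor failure its column is zeroed and the effort is redistributed over the healthy motors):

```python
import numpy as np

# Toy effectiveness matrix B mapping 8 motor thrusts to 4 virtual controls
# (total thrust, roll, pitch, yaw). The signs are invented, not the real geometry.
B = np.array([
    [1,  1,  1,  1,  1,  1,  1,  1],   # thrust
    [1,  1, -1, -1, -1, -1,  1,  1],   # roll
    [1, -1, -1,  1,  1, -1, -1,  1],   # pitch
    [1, -1,  1, -1,  1, -1,  1, -1],   # yaw
], dtype=float)

def allocate(B, v, failed=()):
    # Zero the columns of the failed motors, then redistribute the desired
    # virtual control v over the healthy motors via the pseudo-inverse.
    Bh = B.copy()
    for j in failed:
        Bh[:, j] = 0.0
    return np.linalg.pinv(Bh) @ v

v = np.array([8.0, 0.0, 0.0, 0.0])       # pure thrust demand, zero torques
u_nominal = allocate(B, v)               # all 8 motors share the effort
u_failed = allocate(B, v, failed=(0,))   # motor 0 lost: its command is zero
```

Because the pseudo-inverse returns the minimum-norm solution, the failed motor's command is exactly zero while the remaining seven still realize the demanded thrust and torques.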
37

Delmas, Adrien. « Contribution à l'estimation de la durée de vie résiduelle des systèmes en présence d'incertitudes ». Thesis, Compiègne, 2019. http://www.theses.fr/2019COMP2476/document.

Full text
Abstract:
Implementing a predictive maintenance policy is a major challenge in industry, which seeks to reduce maintenance costs as much as possible. Indeed, systems are increasingly complex and require ever more advanced monitoring in order to remain operational and safe. Predictive maintenance requires, on the one hand, evaluating the degradation state of the system's components and, on the other hand, prognosticating the future occurrence of a failure. More precisely, it consists of estimating the time remaining before a failure occurs, also called the Remaining Useful Life (RUL). RUL estimation is a real challenge, because the relevance and effectiveness of maintenance actions depend on the accuracy and precision of the results obtained. Many methods exist for prognosticating remaining useful life, each with its own specificities, advantages, and drawbacks. The work presented in this manuscript addresses a general methodology for estimating the RUL of a component. The objective is to propose a method applicable to a large number of different cases and situations without requiring major modifications. In addition, we also seek to handle several types of uncertainty in order to improve the accuracy of the prognostic results. In the end, the developed methodology constitutes a decision aid for planning maintenance operations: the estimated RUL makes it possible to decide the optimal time for the necessary interventions, and the treatment of uncertainties brings an additional level of confidence in the values obtained.
Predictive maintenance strategies can help reduce ever-growing maintenance costs, but their implementation represents a major challenge. Indeed, it requires evaluating the health state of the components of the system and prognosticating the occurrence of a future failure. This second step consists in estimating the remaining useful life (RUL) of the components, in other words, the time during which they will continue functioning properly. RUL estimation holds high stakes, because the precision and accuracy of the results influence the relevance and effectiveness of the maintenance operations. Many methods have been developed to prognosticate the remaining useful life of a component, each with its own particularities, advantages, and drawbacks. The present work proposes a general methodology for component RUL estimation. The objective is to develop a method that can be applied to many different cases and situations and does not require major modifications. Moreover, several types of uncertainty are dealt with in order to improve the accuracy of the prognostic. The proposed methodology can help in the maintenance decision-making process: the optimal moment for a required intervention can be selected thanks to the estimated RUL, and dealing with the uncertainties provides additional confidence in the prognostic results.
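As one concrete way to attach an uncertainty level to an RUL, the sketch below propagates an uncertain degradation rate through a linear degradation model by Monte Carlo simulation (the model, the distribution, and all numbers are invented assumptions, not the thesis methodology):

```python
import random

# Linear degradation d(t) = d0 + rate * t; failure when d(t) >= threshold.
# The rate is only known through a distribution, so the RUL is reported as a
# median plus a conservative lower percentile instead of a single value.

def estimate_rul(d0, threshold, rate_mean, rate_sd, n=10000, seed=1):
    rng = random.Random(seed)
    ruls = []
    for _ in range(n):
        rate = max(1e-9, rng.gauss(rate_mean, rate_sd))  # guard against <= 0
        ruls.append((threshold - d0) / rate)
    ruls.sort()
    return ruls[n // 2], ruls[n // 10]   # median RUL, 10th-percentile RUL

median_rul, lower_rul = estimate_rul(d0=2.0, threshold=10.0,
                                     rate_mean=0.4, rate_sd=0.1)
```

Planning a maintenance intervention on the lower percentile rather than the median is one simple way the uncertainty treatment feeds the decision-making step.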
38

Semotam, Petr. « Prediktivní systém údržby obráběcích strojů s využitím vibrodiagnostiky ». Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2018. http://www.nusl.cz/ntk/nusl-382193.

Full text
Abstract:
This diploma thesis concerns predictive and condition-based maintenance of machine tools using vibrodiagnostics. It studies the impact of vibrodiagnostics on the basic processes of the maintenance system and characterizes vibration diagnosis as its tool and means. The practical part of the thesis describes the process of putting condition-based maintenance into practice. The development is carried out at Siemens Ltd. Brno, with all its requirements and aspects: a maintenance audit, i.e. the decision on the suitability of condition-based maintenance within the current maintenance system; a technical analysis as part of the introduction of vibration diagnosis; and a practical example of acquiring, recording, and assessing measured vibrations. The thesis closes with an economic evaluation of the planned predictive maintenance system and the design of a general model for developing and implementing the maintenance system in practice.
39

Gorayeb, Diana Maria da Câmara. « Gestão de continuidade de negócios aplicada no ensino presencial mediado por recursos tecnológicos ». Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-08052012-115710/.

Full text
Abstract:
This work proposes Business Continuity Management (BCM) guidelines for the technology of classroom teaching mediated by technological resources (EPMRT), which relies, for its academic activities, on a complex system for transmitting lessons, and which demands a great effort to control its operations and to coordinate responses to errors, faults, and failures, or to any incidents that interrupt its activities. Maintaining this technological environment involves implementing efficient risk-management processes and a continuous-improvement cycle in the IT environment through the adoption of ITIL®, and building the guidelines of a Business Continuity Plan (BCP), documented by means of UML elements, using Business Impact Analysis (BIA), Risk Assessment (RA), and the dependability attributes of the technological elements: availability, reliability, safety, confidentiality, integrity, and maintainability.
This work proposes guidelines for Business Continuity Management (BCM) for a technology called Education System Mediated Classroom Resources Technology (SPMRT), which relies, for its academic activities, on a complex system for the transmission of lessons and requires a great effort to control its operations and to coordinate fast responses to errors, faults, attacks, and defects, or to any incidents that result in the disruption of its activities. Maintaining this technological environment involves implementing efficient risk-management processes and a continuous-improvement cycle in the IT environment with the adoption of ITIL®, and building a Business Continuity Plan (BCP), documented with UML elements, using Business Impact Analysis (BIA), Risk Assessment (RA), and the attributes of dependability: availability, reliability, security, confidentiality, integrity, and maintainability.
40

Sadou, Nabil. « Aide à la conception des systèmes embarqués sûrs de fonctionnement ». Phd thesis, INSA de Toulouse, 2007. http://tel.archives-ouvertes.fr/tel-00192045.

Full text
Abstract:
Technological advances in recent years have made embedded systems increasingly complex. They are responsible not only for controlling the various components but also for monitoring them. When an event occurs that could endanger users' lives, a specific reconfiguration of the system is executed in order to keep the system in a degraded but safe state. The reconfiguration may fail, driving the system into a state called the "feared state", with dramatic consequences for the system and the user. Describing the scenarios that lead the system from a 'normal' operating state to the feared state makes it possible to understand the reasons for this drift and to plan the reconfigurations needed to avoid it. In our approach to the dependability analysis of dynamic systems, scenarios are generated from a Petri net model. Relying on linear logic as a new, causality-based representation of the Petri net model, a qualitative analysis determines a partial order on transition firings and thereby extracts the feared scenarios. The approach focuses on the parts of the model that are relevant to reliability analysis, thus avoiding the exploration of every part of the system and the combinatorial explosion problem. The final objective is the determination of minimal scenarios. Indeed, a scenario may well lead to the feared state without being minimal: it may contain events that are not strictly necessary to reach the critical feared state. Just as the notion of minimal cut set was defined for fault trees, we propose a definition of a minimal scenario for Petri nets.
To take the hybrid nature of systems into account, we developed a hybrid simulator based on coupling the feared-scenario generation algorithm with a differential equation solver. The algorithm handles the discrete part, modelled by the Petri net, and the solver handles the continuous part, modelled by a set of differential equations. In order to take a system-level view of dependability analysis, we also propose an approach that incorporates safety requirements into the requirements engineering process, establishing a traceability model to ensure that these requirements are taken into account throughout the system life cycle. This approach is based on a systems engineering standard, namely EIA-632.
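As a flavour of the kind of analysis this abstract describes, the sketch below enumerates firing sequences of a toy Petri net that reach a feared marking and then filters the minimal ones (those whose event multiset strictly contains no other scenario's). The net, all place and transition names, and the depth bound are invented for illustration; the thesis's actual method extracts scenarios symbolically via linear logic rather than by exhaustive enumeration.

```python
from collections import Counter

# Toy net (all names invented): a pump with a spare; reconfiguration may fail.
PRE = {
    "pump_fails":     {"pump_ok": 1},
    "reconfig_ok":    {"pump_failed": 1, "spare_ok": 1},
    "reconfig_fails": {"pump_failed": 1, "spare_ok": 1},
    "spare_fails":    {"spare_active": 1},
}
POST = {
    "pump_fails":     {"pump_failed": 1},
    "reconfig_ok":    {"spare_active": 1},
    "reconfig_fails": {"feared": 1},
    "spare_fails":    {"feared": 1},
}

def enabled(marking, t):
    return all(marking.get(p, 0) >= n for p, n in PRE[t].items())

def fire(marking, t):
    m = dict(marking)
    for p, n in PRE[t].items():
        m[p] = m.get(p, 0) - n
    for p, n in POST[t].items():
        m[p] = m.get(p, 0) + n
    return m

def feared_scenarios(initial, depth=6):
    """Depth-bounded enumeration of firing sequences reaching the feared state."""
    found, stack = [], [(initial, [])]
    while stack:
        m, seq = stack.pop()
        if m.get("feared", 0) > 0:
            found.append(seq)
        elif len(seq) < depth:
            stack.extend((fire(m, t), seq + [t]) for t in PRE if enabled(m, t))
    return found

def minimal(scenarios):
    """Drop any scenario whose event multiset strictly contains another's."""
    bags = [Counter(s) for s in scenarios]
    def contains(big, small):
        return all(big[e] >= n for e, n in small.items())
    return [s for s, b in zip(scenarios, bags)
            if not any(o != b and contains(b, o) for o in bags)]

scenarios = feared_scenarios({"pump_ok": 1, "spare_ok": 1})
```

On this toy net, two minimal feared scenarios emerge: the reconfiguration itself failing, and the spare failing after a successful reconfiguration.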
APA, Harvard, Vancouver, ISO, etc. styles
41

NOSTRO, NICOLA. « MODEL-BASED APPROACHES TO DEPENDABILITY AND SECURITY ASSESSMENT IN CRITICAL AND DYNAMIC SYSTEMS ». Doctoral thesis, 2015. https://hdl.handle.net/2158/947559.

Full text
Abstract:
The thesis presents work based on the study of topics related to the resilience and security of complex systems, which are characterized by heterogeneity, dynamicity, evolvability, interdependencies, interconnections, and criticality with respect to their domain of application. Nowadays, the study of such systems and their characteristics is of paramount importance, because they are becoming more and more widespread, affecting our lives and our way of life. For this reason, it is crucial to design such systems with a high level of resilience and security. Indeed, their failures could lead, in the worst case, to catastrophic consequences, for instance if we consider Critical Infrastructures.
APA, Harvard, Vancouver, ISO, etc. styles
42

MONTECCHI, LEONARDO. « A Methodology and Framework for Model-Driven Dependability Analysis of Critical Embedded Systems and Directions Towards Systems of Systems ». Doctoral thesis, 2013. http://hdl.handle.net/2158/851697.

Full text
Abstract:
In different domains, engineers have long used models to assess the feasibility of system designs; compared with other evaluation techniques, modeling has the key advantage of not exercising a real instance of the system, which may be costly, dangerous, or simply unfeasible (e.g., if the system is still under design). In the development of critical systems, modeling is most often employed as a fault forecasting technique, since it can be used to estimate the degree to which a given design provides the required dependability attributes, i.e., to perform quantitative dependability analysis. More generally, models are employed in the evaluation of the Quality of Service (QoS) provided by the system, in the form of dependability, performance, or performability metrics. From an industrial perspective, modeling is also a valuable tool in the Verification & Validation (V&V) process, either as a support to the process itself (e.g., FTA), or as a means to verify specific quantitative or qualitative requirements. Modern computing systems have become very different from what they used to be in the past: their scale is growing, and they are becoming massively distributed, interconnected, and evolving. Moreover, a shift towards the use of off-the-shelf components is becoming evident in several domains. Such an increase in complexity makes model-based assessment a difficult and time-consuming task. In recent years, system development has increasingly adopted the Component-Based Development (CBD) and Model-Driven Engineering (MDE) philosophies as a way to reduce the complexity of system design and evaluation. CBD refers to the established practice of building a system out of reusable “black-box” components, while MDE refers to the systematic use of models as primary artefacts throughout the engineering lifecycle. 
Engineering languages like UML, BPEL, AADL, etc., not only allow a reasonably unambiguous specification of designs, but also serve as the input for subsequent development steps like code generation, formal verification, and testing. One of the core technologies supporting model-driven engineering is model transformation. Transformations can be used to refine models, apply design patterns, and project design models onto various mathematical analysis domains in a precise and automated way. In recent years, model-driven engineering approaches have also been used extensively for the analysis of the extra-functional properties of systems. To this purpose, language extensions were introduced and utilized to capture the required extra-functional concerns. Although several approaches propose model transformations for dependability analysis, there is still no standard approach for performing dependability analysis in an MDE environment. Indeed, when targeting critical embedded systems, the lack of support for dependability attributes, and extra-functional attributes in general, is one of the most recognized weaknesses of UML-based languages. Also, most of the approaches have been defined as extensions to a "general" system development process, often leaving the actual process unspecified. Similarly, supporting tools are typically detached from the design environment, and assume they receive as input a model satisfying certain constraints. While in principle such an approach avoids being bound to specific development methodologies, in practice it introduces a gap between the design of the functional system model, its enrichment with dependability information, and the subsequent analysis. Finally, the specification of properties out of components' context, which typically holds for functional properties, is much less understood for non-functional properties. 
The work in this thesis elaborates on the combined application of the CBD and MDE philosophies and technologies, with the aim of automating the dependability analysis of modern computing systems. A considerable part of the work described in this thesis has been carried out in the context of the ARTEMIS-JU “CHESS” project, which aimed at defining, developing and assessing a methodology for the component-based design and development of embedded systems, using model-driven engineering techniques. The work in this thesis defines and realizes an extension to the CHESS framework for the automated evaluation of quantitative dependability properties. The extension consists of: i) a set of UML language extensions, collectively referred to as DEP-UML, for modeling dependability properties relevant for quantitative analysis; ii) a set of model-transformation rules for the automated generation of Stochastic Petri Net (SPN) models from system designs enriched with DEP-UML; and iii) a model-transformation tool, realized as a plugin for the Eclipse platform, concretely implementing the approach. After introducing the approach, we detail its application in two case studies. While for embedded systems it is often possible, or even mandatory, to follow and control the whole design and development process, the same does not hold for other classes of systems and infrastructures. In particular, large-scale complex systems do not fit well into the paradigm proposed by the CHESS project, and alternative approaches are therefore needed. Following this observation, we then elaborate on a workflow for applying MDE approaches to support the modeling of large-scale complex systems. The workflow is based on a particular modeling technique, and a supporting domain-specific language, TMDL, which is defined in this thesis. 
After introducing a motivating example, the thesis details the workflow, introduces the TMDL language, describes a prototype realization of the approach, and describes its application to two examples. We then conclude with a discussion and a future view on how the contribution of this thesis can be extended to a comprehensive approach for dependability and performability evaluation in a "System of Systems" context. In more detail, this dissertation is organized as follows. Chapter 1 introduces the context of the work, describing the main concepts related to dependability and dependability evaluation, with a focus on model-based assessment. The foundations of CBD and MDE approaches, the role of the UML language, and the main related work are discussed in Chapter 2. Chapter 3 describes the CHESS project, and introduces the language extensions that have been defined to support dependability analysis. Moreover, the chapter details the entire process that drove us to such extensions, including the elicitation of language requirements and the evaluation of existing languages in the literature. The model-transformation algorithms for the generation of Stochastic Petri Nets are described in Chapter 4, while the adopted architecture for the concrete realization of the analysis plugin is described in Chapter 5. Chapter 6 describes the application of our approach to two case studies: a multimedia processing workstation and a fire detection system. The need for a complementary approach for the evaluation of large-scale complex systems is discussed in Chapter 7, with the aid of a motivating example of a distributed multimedia application. Chapter 8 describes our approach for the automated assembly of large dependability models through model transformation. The thesis then concludes with an outlook on the relevance of the work presented here towards a System of Systems approach to the evaluation of large-scale complex systems.
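The core idea such approaches share, projecting an enriched design model onto a stochastic Petri net, can be sketched in a few lines. Everything below (the component attributes, the net encoding) is a minimal invented example, not the actual DEP-UML profile or the CHESS transformation rules: each component yields an ok/failed place pair, a timed fault transition, and, where declared, a repair transition.

```python
# Hypothetical component model; rates and names are invented for illustration.
components = [
    {"name": "cpu",  "failure_rate": 1e-4, "repair_rate": 0.1},
    {"name": "disk", "failure_rate": 5e-4, "repair_rate": None},
]

def to_spn(model):
    """Transform a component list into a simple SPN description."""
    net = {"places": [], "transitions": [], "initial_marking": {}}
    for c in model:
        ok, failed = f"{c['name']}_ok", f"{c['name']}_failed"
        net["places"] += [ok, failed]
        net["initial_marking"][ok] = 1       # component starts healthy
        net["initial_marking"][failed] = 0
        net["transitions"].append(
            {"name": f"{c['name']}_fault", "rate": c["failure_rate"],
             "input": [ok], "output": [failed]})
        if c["repair_rate"]:                 # repair only where declared
            net["transitions"].append(
                {"name": f"{c['name']}_repair", "rate": c["repair_rate"],
                 "input": [failed], "output": [ok]})
    return net

spn = to_spn(components)
```

A real transformation would, of course, also map connectors, error propagation and analysis contexts, but the pattern, iterate over model elements and emit net fragments, is the same.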
APA, Harvard, Vancouver, ISO, etc. styles
43

Kabir, Sohag. « An overview of fault tree analysis and its application in model based dependability analysis ». 2017. http://hdl.handle.net/10454/17428.

Full text
Abstract:
Fault Tree Analysis (FTA) is a well-established and well-understood technique, widely used for dependability evaluation of a wide range of systems. Although many extensions of fault trees have been proposed, they suffer from a variety of shortcomings. In particular, even where software tool support exists, these analyses require a lot of manual effort. Over the past two decades, research has focused on simplifying dependability analysis by looking at how we can synthesise dependability information from system models automatically. This has led to the field of model-based dependability analysis (MBDA). Different tools and techniques have been developed as part of MBDA to automate the generation of dependability analysis artefacts such as fault trees. Firstly, this paper reviews the standard fault tree with its limitations. Secondly, different extensions of standard fault trees are reviewed. Thirdly, this paper reviews a number of prominent MBDA techniques where fault trees are used as a means for system dependability analysis and provides an insight into their working mechanism, applicability, strengths and challenges. Finally, the future outlook for MBDA is outlined, which includes the prospect of developing expert and intelligent systems for dependability analysis of complex open systems under the conditions of uncertainty.
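To make the core of standard FTA concrete, here is a minimal cut-set computation in the spirit of top-down expansion (MOCUS-like), on an invented three-gate tree; the gate and event names are hypothetical. AND gates combine one cut set from each child, OR gates take the union of the children's cut sets, and non-minimal sets are pruned.

```python
from itertools import product

# Hypothetical fault tree: node -> (gate, children); leaves are basic events.
TREE = {
    "TOP":    ("OR",  ["power_loss", "CTRL"]),
    "CTRL":   ("AND", ["sensor_fail", "BACKUP"]),
    "BACKUP": ("OR",  ["battery_flat", "sensor_fail"]),
}

def cut_sets(node):
    if node not in TREE:                     # basic event
        return [frozenset([node])]
    gate, kids = TREE[node]
    child = [cut_sets(k) for k in kids]
    if gate == "OR":
        sets = [cs for c in child for cs in c]
    else:  # AND: one cut set from each child, merged
        sets = [frozenset().union(*combo) for combo in product(*child)]
    # minimise: drop any cut set that contains another one
    return [s for s in sets if not any(o != s and o <= s for o in sets)]

mcs = cut_sets("TOP")
```

Note how the repeated event `sensor_fail` collapses `CTRL` to the single cut set `{sensor_fail}`: handling repeated events correctly is exactly where naive qualitative analysis needs the minimisation step.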
APA, Harvard, Vancouver, ISO, etc. styles
44

Sharvia, S., Sohag Kabir, M. Walker et Y. Papadopoulos. « Model-based dependability analysis : State-of-the-art, challenges, and future outlook ». 2015. http://hdl.handle.net/10454/17434.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
45

Mandak, Wayne S., et Charles A. Stowell. « Dynamic Assembly for System Adaptability, Dependability and Assurance (DASADA) project analysis ». Thesis, 2001. http://hdl.handle.net/10945/10926.

Full text
Abstract:
This thesis focuses on an analysis of the dynamic behavior of software designed for future Department of Defense systems. The DoD is aware that as software becomes more complex, it will become extremely critical for systems to be able to adapt themselves by swapping or modifying components, changing interaction protocols, or changing their topology. The Defense Advanced Research Projects Agency formed the Dynamic Assembly for Systems Adaptability, Dependability, and Assurance (DASADA) program in order to task academia and industry to develop dynamic gauges that can determine run-time composition, allow for the continual monitoring of software for adaptation, and ensure that all user-defined properties remain stable before and after composition and deployment. Through the study, all the DASADA technologies were identified and a thorough analysis of all 19 project demonstrations was carried out. This thesis includes a template built using the object-oriented methodologies of the Unified Modeling Language (UML) that allows for the functional and non-functional decomposition of any DASADA software technology project. In addition, this thesis includes insightful conclusions and recommendations on those DASADA projects that warrant further study and review.
APA, Harvard, Vancouver, ISO, etc. styles
46

Wang, Ding-Chau, et 王鼎超. « Dependability and performance analysis of distributed algorithms for managing replicated data ». Thesis, 2003. http://ndltd.ncl.edu.tw/handle/21474899552073101005.

Full text
Abstract:
PhD
National Cheng Kung University
Department of Computer Science and Information Engineering (Master's and Doctoral Program)
91
Data replication is a proven technique for improving the data availability of distributed systems. Historically, research focused mainly on the development of replicated data management algorithms that can be proven correct and result in improved data availability, with the performance issues associated with data maintenance largely ignored. In this thesis, we analyze both the dependability and performance characteristics of distributed algorithms for managing replicated data by developing generic modeling techniques based on Petri nets, with the goal of identifying environmental conditions under which these replicated data management algorithms can be used to satisfy system dependability and performance requirements. First, we investigate an effective technique for calculating the access time distribution for requests that access replicated data maintained by the distributed system, using majority voting as a case. The technique can be used to estimate the reliability of real-time applications which must access replicated data with a deadline requirement. Then we enhance this technique to analyze user-perceived dependability and performance properties of quorum-based algorithms. User-perceived dependability and performance metrics are very different from conventional ones in that the dependability and performance properties must be assessed from the perspective of the users accessing the system. A feature of the enhanced techniques is that no assumption is made regarding the interconnection topology, the number of replicas, or the quorum definition used by the replicated system, thus making them applicable to a wide class of quorum-based algorithms. Our analysis shows that when the user's perspective is taken into consideration, the effect of increasing the network connectivity and the number of replicas on the availability and dependability properties perceived by users is very different from that under conventional metrics. 
Thus, unlike conventional metrics, user-perceived metrics allow a tradeoff to be exploited between the hardware invested, i.e., higher network connectivity and number of replicas, and the performance and dependability properties perceived by users. Next, we analyze reconfigurable algorithms to determine how often the system should detect and react to failure conditions, so that reorganization operations can be performed by the system at the appropriate time to improve the availability of replicated data without adversely compromising the performance of the system. We use dynamic voting as a case study to reveal design trade-offs in such reconfigurable algorithms and to illustrate how often failure detection and reconfiguration activities should be performed, by means of dummy updates, so as to maximize data availability. Dummy updates are system-initiated maintenance updates that only update the state of the system regarding the availability of replicated data, without actually changing the value of the replicated data. However, because they use locks, dummy updates can hinder normal user-initiated updates during the execution of the conventional two-phase commit (2PC) protocol. We develop a modified 2PC protocol to be used by dummy updates and show that the modified 2PC protocol greatly improves the availability of replicated data compared to the conventional 2PC protocol. Lastly, we examine the availability and performance characteristics of replicated data in wireless cellular environments, in which users access replicated data through the base stations of the network as they roam in and out of cells. We address the issues of when, where and how to place replicas on the base stations by developing a performance model to analyze periodic maintenance strategies for managing replicated objects in mobile wireless client-server environments. 
Under a periodic maintenance strategy, the system periodically checks local cells to determine whether a replicated object should be allocated or deallocated in a cell to reduce the access cost. Our performance model considers the missing-read cost, the write-propagation cost and the periodic maintenance cost, with the objective of identifying optimal periodic maintenance intervals that minimize the overall cost. Our analysis results show that the overall cost is high when the user arrival-departure ratio and the read-write ratio work against each other, and is low otherwise. Under the fixed periodic maintenance strategy, i.e., when the maintenance interval is a constant, there exists an optimal periodic maintenance interval that yields the minimum cost. Further, the optimal periodic maintenance interval increases as the arrival-departure ratio and the read-write ratio work in harmony. We also discover that by adjusting the periodic intervals dynamically in response to state changes of the system at run time, the overall cost can be reduced below that obtainable by the fixed periodic maintenance strategy at its optimizing conditions.
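As a flavour of the quantitative questions studied in such work, the snippet below computes the availability of majority voting over n replicas, each independently up with probability p (a textbook closed form under an independence assumption; the thesis itself uses Petri-net models and user-perceived metrics rather than this simplification): data is accessible iff a majority quorum is alive.

```python
from math import comb

def majority_availability(n, p):
    """P(at least a majority of n replicas are up), each up with prob. p."""
    q = n // 2 + 1  # majority quorum size
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(q, n + 1))

# More replicas improve availability only while each replica is reliable
# enough; with p = 0.95, going from 3 to 5 replicas already helps.
a3 = majority_availability(3, 0.95)
a5 = majority_availability(5, 0.95)
```

For p below 0.5 the trend reverses, which is one reason quorum placement and reconfiguration policies matter in practice.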
APA, Harvard, Vancouver, ISO, etc. styles
47

Clark, Jeffrey Alan. « Dependability analysis of fault-tolerant multiprocessor architectures through simulated fault injection ». 1993. https://scholarworks.umass.edu/dissertations/AAI9408266.

Full text
Abstract:
This dissertation develops a new approach for evaluating the dependability of fault-tolerant computer systems. Dependability has traditionally been evaluated through combinatorial and Markov modeling. These analytical techniques have several limitations which can restrict their applicability. Simulation avoids many of the limitations, allowing for more precise representation of system attributes than feasible with analytical modeling. However, the computational demands of simulating a system in detail, at a low abstraction level, currently prohibit evaluation of high level dependability metrics such as reliability and availability. The new approach abstracts a system at the architectural level, and employs life testing through simulated fault-injection to accurately and efficiently measure dependability. The simulation models needed to implement this approach have been derived and integrated into a generalized software testbed called the REliable Architecture Characterization Tool (REACT). The effectiveness of REACT is demonstrated through the analysis of several alternative fault-tolerant multiprocessor architectures. Specifically, two dependability tradeoffs associated with triple-modular redundant (TMR) systems are investigated. The first explores the reliability-performance tradeoff made by voting unidirectionally, instead of bidirectionally, on either memory read or write accesses. The second examines the reliability-cost tradeoff made by duplicating, rather than triplicating, memory modules and comparing their outputs via error detecting codes. Both studies show that in many cases, acceptably little reliability is sacrificed for potentially large performance increases or cost reductions, in comparison to the original TMR system design.
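The idea of life testing through simulated fault injection can be illustrated on the simplest TMR case: sample independent module failures many times, apply a majority vote, and compare the estimated reliability against the closed form R_tmr = 3R² − 2R³. The parameters and trial count below are invented; REACT operates at the architectural level with far richer models.

```python
import random

def simulate_tmr(r_module, trials, seed=0):
    """Monte Carlo estimate of TMR reliability with per-module reliability r."""
    rng = random.Random(seed)  # seeded for reproducibility
    ok = sum(
        sum(rng.random() < r_module for _ in range(3)) >= 2  # majority vote
        for _ in range(trials)
    )
    return ok / trials

r = 0.9
estimate = simulate_tmr(r, 100_000)
analytic = 3 * r**2 - 2 * r**3  # closed-form TMR reliability
```

With 100,000 trials the Monte Carlo estimate lands within a fraction of a percent of the analytic value, which is exactly the kind of cross-check one wants before trusting a simulator on architectures with no closed form.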
APA, Harvard, Vancouver, ISO, etc. styles
48

de Vries, Ingrid. « AN ANALYSIS OF TEST CONSTRUCTION PROCEDURES AND SCORE DEPENDABILITY OF A PARAMEDIC RECERTIFICATION EXAM ». Thesis, 2012. http://hdl.handle.net/1974/7434.

Full text
Abstract:
High-stakes testing is used for the purpose of providing results that have important consequences such as certification, licensing, or credentialing. The purpose of this study was to examine aspects of an exam recently written by flight paramedics for recertification and to make recommendations for the development of future exams. In 2008, an unexpectedly high failure rate led to revisions in the exam development process for flight paramedics. Using principles of classical test theory and generalizability theory, I examined the decision consistency and dependability of the examination and found the decision consistency for dichotomous items to be within acceptable limits, yet the dependability was low. Discrimination was strong at the cut-score. An in-depth look into the process used to set the exam, as well as the psychometric properties of the exam and its items, has led to recommendations that will contribute to the future development of dependable exams in the industry, resulting in more valid interpretations with respect to paramedic competence.
Thesis (Master, Education) -- Queen's University, 2012-09-06 22:41:41.552
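One of the classical-test-theory quantities involved in this kind of analysis is internal consistency. Below is a self-contained Cronbach's alpha computation on an invented dichotomous score matrix (examinees × items; the real exam data is of course not reproduced here): alpha compares the sum of item variances with the variance of total scores.

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for a rows=examinees, cols=items score matrix."""
    n_items = len(scores[0])

    def var(xs):  # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [var([row[i] for row in scores]) for i in range(n_items)]
    total_var = var([sum(row) for row in scores])
    return n_items / (n_items - 1) * (1 - sum(item_vars) / total_var)

scores = [  # invented data: 1 = item answered correctly
    [1, 1, 1, 0],
    [1, 0, 1, 1],
    [0, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
]
alpha = cronbach_alpha(scores)
```

A low value like the one this toy matrix produces (about 0.53) would echo the low score dependability the study reports, though the study's actual analysis rests on generalizability theory rather than alpha alone.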
APA, Harvard, Vancouver, ISO, etc. styles
49

CECCARELLI, ANDREA. « Analysis of critical systems through rigorous, reproducible and comparable experimental assessment ». Doctoral thesis, 2012. http://hdl.handle.net/2158/596157.

Full text
Abstract:
The key role of computing systems and networks in a variety of high-valued and critical applications justifies the need to reliably and quantitatively assess their characteristics. It is well known that the quantitative evaluation of performance and dependability-related attributes is an important activity of fault forecasting, since it aims at probabilistically estimating the adequacy of a system with respect to the requirements given in its specification. Quantitative system assessment can be performed using several approaches, generally classified into three categories: analytic, simulative and experimental. Each of these approaches shows different peculiarities, which determine the suitability of the method for the analysis of a specific system aspect. The most appropriate method for quantitative assessment depends on the complexity of the system, its development stage, the specific aspects to be studied, the attributes to be evaluated, the accuracy required, and the resources available for the study. Focusing on experimental evaluation, increasing interest is being paid to quantitative evaluation based on the measurement of dependability attributes and metrics of computer systems and infrastructures. This is an attractive option for assessing an existing system or prototype, because it allows monitoring a system to obtain highly accurate measurements of the system executing in its real usage environment. A mandatory requirement of any experimental evaluation activity is to guarantee high confidence in the results provided: this implies that the measuring system (the instruments and features used to perform the measurements), the target system and all factors that may influence the results of the experiments (e.g., the environment) need to be investigated, and that possible sources of uncertainty in the results need to be addressed. 
The current situation is that, even when measuring systems are carefully designed and actually provide confident results, not altered by an intrusive set-up, badly designed experiments or measurement errors, little attention is paid to quantifying how well the measuring system (the tool) performs and what the uncertainty of the collected results is. Methodologies and tools for the evaluation and monitoring of distributed systems could benefit from the conceptual framework and the mathematical tools and techniques offered by metrology (measurement theory), the science devoted to studying measuring instruments and the processes of measuring. In fact, metrology has developed theories and good-practice rules for making measurements, evaluating measurement results and characterizing measuring instruments. Additionally, well-structured evaluation processes and methods are key elements for the success of the experimental evaluation activity. The approaches used to assess algorithms and systems typically differ from one another and lack commonly applied rules, making comparisons among different tools and results difficult. Despite the fact that sharing and comparing results is acknowledged to be of paramount importance by the current dependability research community, it is a matter of fact that in the field of dependability the approach to quantitatively assessing algorithms and systems is not univocal, but generally varies from one work to another, making the comparison among different tools and results quite difficult, if not meaningless. If structured, fully described and trusted results were shared, tools and experiments could be better compared. Starting from these observations, this thesis proposes a general conceptual methodology for the experimental evaluation of critical systems. The methodology, subdivided into iterative phases, addresses all activities of experimental evaluation, from the definition of objectives to conclusions and recommendations. 
The methodology tackles two key issues. The first is providing a metrological characterization of measurement results and measuring instruments, including the need to carefully report a description of such characterization. The second is proposing techniques and solutions (mainly from OLAP technologies) for the organization and archiving of the collected measurement results, to ease data retrieval and comparison. The applicability of the methodology to industrial practices and V&V processes compliant with standards is shown by introducing a framework for the support of the V&V process, and then discussing the interplay of the methodology and the framework in performing the experimental evaluation activities planned in a generic V&V process. The methodology is then applied to five case studies, where five very different kinds of systems are evaluated, ranging from COTS components to highly distributed and adaptive SOAs. These systems are (in ascending order of distributedness and complexity) i) the middleware service for resilient timekeeping R&SAClock, ii) low-cost GPS devices, iii) a safety-critical embedded system for railway train-borne equipment (a Driver Machine Interface), iv) a distributed algorithm prototyped and tested with an improved version of NekoStat, and v) a testing service for the runtime evaluation of dynamic SOAs. Case studies i), iv) and v) have been developed exclusively in the academic context (in University labs), while case studies ii) and iii) have been performed in cooperation with industry, to bring evidence of the effectiveness of the methodology in industrial V&V processes. These five case studies offer a comprehensive and exhaustive illustration of the methodology and its insights. They show how the methodology allows tackling the previous issues in different contexts, and prove its flexibility and generality.
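The metrological characterization the methodology calls for starts from basics like the GUM's "Type A" evaluation of uncertainty: repeated measurements give a best estimate together with a standard uncertainty of that estimate. The readings below are invented latency samples, used only to show the calculation.

```python
from math import sqrt

# Invented repeated readings of the same quantity (e.g. a latency, in ms).
readings_ms = [12.1, 11.8, 12.4, 12.0, 11.9, 12.2]

n = len(readings_ms)
mean = sum(readings_ms) / n                                   # best estimate
s = sqrt(sum((x - mean) ** 2 for x in readings_ms) / (n - 1)) # sample std dev
u = s / sqrt(n)  # standard uncertainty of the mean (Type A evaluation)
```

Reporting the result as `mean ± u` (here roughly 12.07 ± 0.09 ms), together with how the measuring instrument was characterized, is the kind of disclosure the thesis argues is usually missing from dependability experiments.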
APA, Harvard, Vancouver, ISO, etc. styles
50

Silva, Nuno Pedro de Jesus. « An Empirical Approach to Improve the Quality and Dependability of Critical Systems Engineering ». Doctoral thesis, 2018. http://hdl.handle.net/10316/79833.

Full text
Abstract:
Doctoral thesis in Information Sciences and Technologies, presented to the Department of Informatics Engineering of the Faculty of Sciences and Technology of the University of Coimbra
Critical systems, such as space, railway and avionics systems, are developed under strict requirements envisaging high integrity in accordance with specific standards. For such software systems, an independent assessment is generally put into effect (as a safety assessment or in the form of Independent Software Verification and Validation - ISVV) after the regular development lifecycle and V&V activities, aiming at identifying and correcting residual faults and raising confidence in the software. These systems are very sensitive to failures (which might cause severe impacts), and even if they today reach very low failure rates, there is always a need to guarantee higher quality and dependability levels. However, it has been observed that a significant number of defects still remain at the latest lifecycle phases, questioning the effectiveness of the preceding engineering processes and V&V techniques. This thesis proposes an empirical approach to identify the nature of defects (quality, dependability and safety gaps) and, based on that knowledge, to provide support to improve critical systems engineering. The work is based on knowledge about safety-critical systems and how they are specified/developed/validated (standards, processes and techniques, resources, lifecycles, technologies, etc.). Improvements are obtained from an orthogonal classification and further analysis of issues collected from real systems at all lifecycle phases. Such historical data (issues) have been studied, classified and clustered according to different properties, taking into account the issue introduction phase, the involved techniques, the applicable standards, and particularly the root causes. The identified improvements shall be reflected in the development and V&V techniques and in resource training or preparation, and shall drive the modification or adoption of standards. 
The first and most encompassing contribution of this work is the definition of a defects assessment process that can be used and applied in industry in a simple way, independently of the industrial domain. The process makes use of a dataset collected from existing issues reflecting process deficiencies, and supports the analysis of these data towards identifying the root causes of those problems and defining appropriate measures to avoid them in future systems. As part of the defect assessment process activities, we propose an adaptation of the Orthogonal Defect Classification (ODC) for critical issues. In practice, ODC was used as an initial classification and then tuned according to the gaps and difficulties found during the initial stages of our defect classification activities. The refinement was applied to the defect types, triggers and impacts. Improved taxonomies for these three parameters are proposed. A subsequent contribution of our work is the application and integration of a root cause analysis process to show the connection of the defects (or issue groups) with the engineering properties and environment. The engineering properties (e.g. human and technical resources, events, processes, methods, tools and standards) are, in fact, the principal input for the classes of root causes. A fishbone root cause analysis was proposed, integrated into the process and applied to the available dataset. A practical contribution of the work comprises the identification of a specific set of root causes and applicable measures to improve the quality of the engineered systems (removal of those causes). These root causes and proposed measures allow quick and specific feedback to be provided to the industrial engineering teams as soon as the defects are analyzed. The list/database has been compiled from the dataset and includes the feedback and contributions of the experts who responded to a process/framework validation survey. 
The root causes and the associated measures represent a valuable body of knowledge to support future defect assessments. The last key contribution of our work is the promotion of a cultural change towards making appropriate use of real defect data (the main input of the process), which must be properly documented and easily collected, cleaned and updated. The regular use of defect data with the application of the proposed defects assessment process will contribute to measuring quality evolution and the progress of implementation of the corrective actions or improvement measures that are the essential output of the process.
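The orthogonal classification step the thesis builds on can be sketched in a few lines: assign each issue one defect type and one trigger, then count the (type, trigger) clusters so root-cause analysis can start from the dominant pattern. Both the ODC-style category subset and the issue records below are invented, not taken from the thesis's dataset.

```python
from collections import Counter

# Invented issue records, each already classified with an ODC-style
# defect type and the activity (trigger) that surfaced it.
issues = [
    {"id": 1, "type": "Interface",  "trigger": "Design review"},
    {"id": 2, "type": "Algorithm",  "trigger": "Unit test"},
    {"id": 3, "type": "Interface",  "trigger": "Design review"},
    {"id": 4, "type": "Assignment", "trigger": "Code inspection"},
    {"id": 5, "type": "Interface",  "trigger": "Design review"},
]

# Cluster by (type, trigger) and pick the dominant pattern.
clusters = Counter((i["type"], i["trigger"]) for i in issues)
dominant, count = clusters.most_common(1)[0]
```

In a real assessment, the dominant cluster (here, interface defects caught only at design review) would be the entry point of the fishbone analysis, asking which engineering property keeps producing it.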
Critical systems, such as space, railway or avionics systems, are developed under strict requirements aimed at achieving high integrity in compliance with specific standards. For such software systems, an independent assessment (such as a safety assessment or an Independent Software Verification and Validation - ISVV) is usually applied after the development cycle and the respective V&V activities, with the goal of identifying and correcting residual faults and increasing confidence in the software. These systems are very sensitive to faults (since faults can have severe impacts), and although very low fault rates can currently be achieved, there is always a need to guarantee the highest system quality and the highest levels of dependability. However, a significant number of defects still remain in the late phases of the development cycle, which leads us to question the effectiveness of the engineering processes used and of the V&V techniques applied. This thesis proposes an empirical approach to identify the nature of the defects (quality, dependability, safety gaps) and, based on that knowledge, to improve the engineering of critical systems. The work builds on knowledge about critical systems and about how they are specified / developed / validated (standards, processes and techniques, resources, life cycle, technologies, etc.). The improvement recommendations for critical systems are derived from an orthogonal classification and subsequent analysis of defect data obtained from real systems covering all phases of the life cycle. These historical data (defects) were studied, classified and grouped according to different properties, considering the phase in which the defect was introduced, the techniques involved, the applicable standards and, in particular, the possible root causes.
The identified improvements should be reflected in the development / V&V techniques, in the training or preparation of human resources, and should guide changes to, or the adoption of, standards. The first and most encompassing contribution of this work is the definition of a defect assessment process that can be applied in industry in a simple way, independently of the industrial domain. The proposed process relies on the availability of a dataset of issues reflecting development process deficiencies, and supports the analysis of these data to identify their root causes and to define appropriate measures to avoid them in future systems. As part of the defect assessment activities, an adaptation of the Orthogonal Defect Classification (ODC) for critical systems is proposed. In practice, ODC was used as an initial classification and was then adjusted according to the gaps and difficulties found during the initial stages of the defect classification activities. The refinement was applied to the defect types, to the events (triggers) that exposed those defects, and to their impacts; improved versions of the taxonomies for these three parameters are proposed in this work. A subsequent contribution is the application and integration of a root cause analysis process that relates the defects (or issue groups) to the engineering properties and environment. The engineering properties (e.g. human and technical resources, events, processes, methods, tools and standards) are, in fact, the principal sources for identifying the classes of root causes. The proposed root cause analysis is based on fishbone diagrams, and was integrated into the process and applied to the available dataset. A practical contribution of our work is the identification of a specific set of root causes, together with applicable measures to improve the quality of the engineered systems (by eliminating those causes).
The root causes and proposed measures allow quick and specific feedback to be provided as soon as the defects are analyzed. The list / database was compiled from the defect dataset and includes the comments and contributions of experts who responded to a process validation survey. The root causes and the associated measures represent a valuable body of knowledge that can support future defect analyses. The last key contribution of our work is the promotion of a cultural change towards the appropriate use of real defect data (the main input of the process), which should be properly documented and easy to collect, clean and update. The regular use of defect data through the application of the proposed defect assessment process will help measure the evolution of quality and the progress of implementation of the corrective actions or improvement measures that are the main output of the process.
