Dissertations on the topic "Fault resilience"
Format your source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the top 40 dissertations for your research on the topic "Fault resilience".
Next to every work in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, when these are available in the metadata.
Browse dissertations on a wide variety of disciplines and organise your bibliography correctly.
Wilkes, Charles Thomas. "Programming methodologies for resilience and availability." Diss., Georgia Institute of Technology, 1987. http://hdl.handle.net/1853/8308.
Full text available
Nascimento, Flávia Maristela Santos. "A Simulation-Based Fault Resilience Analysis for Real-Time Systems." Escola Politécnica / Instituto de Matemática, 2009. http://repositorio.ufba.br/ri/handle/ri/21461.
Full text available
Real-time systems have been widely used in the context of mechatronic systems since, in order to control real-world entities, both their logical and their timing requirements must be considered. In such systems, fault tolerance mechanisms must be implemented, because faults may lead to considerable losses; an error in a flight control system, for example, may cost human lives. Several fault-tolerant scheduling approaches for real-time systems have been derived. However, most of them restrict the system and/or fault model in particular ways, or are tightly coupled to the system recovery model or to the scheduling policy. Moreover, there is no formal metric that allows existing approaches to be compared from the viewpoint of fault resilience. The main goal of this work is to fill this gap by providing a fault resilience metric for real-time systems that is as independent as possible of the system and/or fault models. To this end, a simulation-based analysis was developed to compute the resilience of all tasks in a system by simulating specific time intervals. Statistical inference techniques are then used to infer the resilience of the system. The results showed that the proposed metric can be used to compare, for example, two scheduling policies for real-time systems from the standpoint of fault resilience, which demonstrates that the developed approach is reasonably independent of the system model.
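The simulate-then-infer idea in this abstract can be sketched in a few lines of Python. The task set, fault arrival model and re-execution cost below are invented for illustration; only the overall shape (Monte Carlo trials followed by a confidence interval) mirrors the approach described above:

```python
import math
import random

# Hypothetical task model: (WCET, period). A fault forces one re-execution,
# so each fault adds one extra WCET of work within the simulated window.
TASKS = [(1.0, 5.0), (2.0, 10.0), (3.0, 20.0)]

def trial(fault_rate, window=100.0, rng=random):
    """One simulated window: inject Poisson-arrival faults, charge one
    re-execution per fault, and report whether total demand still fits."""
    demand = sum(wcet * (window / period) for wcet, period in TASKS)
    n_faults = 0
    t = rng.expovariate(fault_rate)
    while t < window:
        n_faults += 1
        t += rng.expovariate(fault_rate)
    # pessimistically assume each fault re-executes a job of the largest task
    demand += n_faults * max(wcet for wcet, _ in TASKS)
    return demand <= window

def resilience(fault_rate, trials=2000, seed=1):
    """Estimated survival probability over the window, with a normal-
    approximation 95% confidence interval (the statistical-inference step)."""
    rng = random.Random(seed)
    ok = sum(trial(fault_rate, rng=rng) for _ in range(trials))
    p = ok / trials
    half = 1.96 * math.sqrt(p * (1 - p) / trials)
    return p, (max(0.0, p - half), min(1.0, p + half))

p, ci = resilience(fault_rate=0.05)
print(f"estimated resilience: {p:.3f}, 95% CI {ci}")
```

Because the trial function is a black box to the inference step, swapping in a different scheduler or fault model changes only `trial`, which is the model-independence the metric aims for.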
Pai, Raikar Siddhesh Prakash Sunita. "Network Fault Resilient MPI for Multi-Rail Infiniband Clusters." The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1325270841.
Full text available
Monge, Solano Ignacio, and Enikő Matók. "Developing for Resilience: Introducing a Chaos Engineering tool." Thesis, Malmö universitet, Fakulteten för teknik och samhälle (TS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-20808.
Повний текст джерелаSouto, Laiz. "Data-driven approaches for event detection, fault location, resilience assessment, and enhancements in power systems." Doctoral thesis, Universitat de Girona, 2021. http://hdl.handle.net/10803/671402.
Full text available
This thesis presents the study and development of different data-driven techniques to support event detection, fault location, and resilience enhancement tasks in electric power systems. The contents are divided into three main parts, described below. The first part investigates improvements in power system monitoring and event detection methods, with a focus on dimensionality reduction techniques in wide-area monitoring systems. The second part focuses on contributions to fault location tasks in electric distribution networks, relying on information about the network topology and its electrical parameters to run short-circuit simulations over a variety of scenarios. The third part assesses improvements in the resilience of electric power systems against high-impact, low-probability events associated with extreme weather conditions and man-made attacks, relying on information about the system topology combined with simulations of representative scenarios for impact assessment and mitigation. Overall, the proposed data-driven algorithms contribute to event detection, fault location, and increased resilience of electric power systems, building on electrical measurements recorded by intelligent electronic devices, historical data of past events, and representative scenarios, together with information about the network topology, electrical parameters, and operating state. The validation of the algorithms, implemented in MATLAB, is based on computer simulations using network models implemented in OpenDSS and Simulink.
Bentria, Dounia. "Combining checkpointing and other resilience mechanisms for exascale systems." Thesis, Lyon, École normale supérieure, 2014. http://www.theses.fr/2014ENSL0971/document.
Full text available
In this thesis, we are interested in scheduling and optimization problems in probabilistic contexts. The contributions come in two parts. The first part is dedicated to the optimization of different fault-tolerance mechanisms for very large scale machines subject to a probability of failure, and the second part is devoted to the optimization of the expected sensor data acquisition cost when evaluating a query expressed as a tree of disjunctive Boolean operators applied to Boolean predicates. In the first chapter, we present the related work of the first part and then introduce some new general results that are useful for resilience on exascale systems. In the second chapter, we study a unified model for several well-known checkpoint/restart protocols. The proposed model is generic enough to encompass both extremes of the checkpoint/restart space, from coordinated approaches to a variety of uncoordinated checkpoint strategies. We propose a detailed analysis of several scenarios, including some of the most powerful currently available HPC platforms as well as anticipated exascale designs. In the third, fourth, and fifth chapters, we study the combination of different fault-tolerance mechanisms (replication, fault prediction, and detection of silent errors) with the traditional checkpoint/restart mechanism. We evaluated several models using simulations; our results show that these models are useful for a set of application models in the context of future exascale systems. In the second part of the thesis, we study the problem of minimizing the expected sensor data acquisition cost when evaluating a query expressed as a tree of disjunctive Boolean operators applied to Boolean predicates. The problem is to determine the order in which predicates should be evaluated so as to shortcut part of the query evaluation and minimize the expected cost. In the sixth chapter, we present the related work of the second part, and in the seventh chapter, we study the problem for queries expressed in disjunctive normal form. We consider the more general case where each data stream can appear in multiple predicates, under two models: one where each predicate can access a single stream and one where each predicate can access multiple streams.
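For the checkpoint/restart trade-off this abstract analyses, a classic reference point is Young's first-order approximation of the optimal checkpoint period. The sketch below (with made-up cost and MTBF figures) illustrates that textbook result, not the unified model of the thesis:

```python
import math

def expected_waste(period, checkpoint_cost, mtbf):
    """First-order waste model: checkpoint overhead per period plus the
    expected re-computation (half a period on average) per failure."""
    return checkpoint_cost / period + period / (2.0 * mtbf)

def young_period(checkpoint_cost, mtbf):
    """Young's first-order optimal checkpoint period: sqrt(2 * C * MTBF)."""
    return math.sqrt(2.0 * checkpoint_cost * mtbf)

C, MTBF = 60.0, 86400.0          # 1-minute checkpoints, 1-day platform MTBF
w_opt = young_period(C, MTBF)
print(f"optimal period ~ {w_opt / 60:.1f} min, "
      f"waste {expected_waste(w_opt, C, MTBF):.2%}")
```

At the optimum the two waste terms are equal, which is why checkpointing too often and too rarely are both penalized; uncoordinated protocols refine this model rather than replace it.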
Raja, Chandrasekar Raghunath. "Designing Scalable and Efficient I/O Middleware for Fault-Resilient High-Performance Computing Clusters." The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1417733721.
Full text available
Teixeira, André. "Toward Cyber-Secure and Resilient Networked Control Systems." Doctoral thesis, KTH, Reglerteknik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-154204.
Full text available
A resilient system has the ability to recover after a severe and unexpected disturbance. Resilience is an important property of industrial control systems, which are a core component of many critical infrastructures such as the process industry and electric power grids. The trend of using large-scale IT systems, such as the Internet, within control systems results in an increased vulnerability to cyber threats. Traditional IT security does not take into account the particular coupling between physical components and IT systems that exists in control systems. Traditional control theory, on the other hand, typically focuses on handling natural faults rather than cyber vulnerabilities. Theory and tools for resilient and cyber-secure control systems are therefore lacking and need to be developed. This thesis contributes towards a framework for analysing and designing such control systems. First, we derive a representative abstract model of networked control systems consisting of four components: the physical process with sensors and actuators, the communication network, the digital controller, and an anomaly detector. We then introduce a conceptual model of attacks against the networked control system, describing attacks that attempt to disturb the physical process while avoiding alarms in the anomaly detector. Moreover, the model assumes that the attacker has limited resources in terms of model knowledge and communication channels. The proposed framework is then used to study resilience against these attacks through a risk analysis, where risk is defined in terms of a threat's scenario, consequences, and likelihood. Quantitative methods for estimating the attacks' consequences and likelihoods are derived, and in particular we show how high-risk threats can be identified and mitigated. The results of the thesis are illustrated with several numerical and practical examples.
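The notion of an attack that disturbs the process while staying under the anomaly detector's alarm threshold can be illustrated with a toy scalar control loop. Every constant below is invented, and the detector is a bare one-step-ahead residual test rather than anything from the thesis:

```python
def simulate(attack_step=0.0, steps=300, threshold=0.3):
    """Scalar plant x+ = a*x + u with a proportional controller and a
    one-step-ahead residual detector. A slowly ramping sensor bias keeps
    the residual under the alarm threshold yet drags the state off target."""
    a, k, r = 0.9, 0.8, 10.0
    x, bias, alarms = 0.0, 0.0, 0
    y_prev, u_prev = 0.0, 0.0
    for _ in range(steps):
        y = x + bias                         # (possibly biased) measurement
        residual = abs(y - (a * y_prev + u_prev))
        if residual > threshold:             # detector's alarm test
            alarms += 1
        u = k * (r - y)                      # proportional control
        x = a * x + u                        # true plant dynamics
        bias += attack_step                  # stealthy bias ramp
        y_prev, u_prev = y, u
    return x, alarms

x_clean, a_clean = simulate(0.0)
x_att, a_att = simulate(0.005)
print(f"no attack: x={x_clean:.2f}, alarms={a_clean}; "
      f"stealthy attack: x={x_att:.2f}, alarms={a_att}")
```

Because the bias grows by a tiny increment each step, the one-step residual only ever sees the increment, not the accumulated offset, which is the intuition behind stealthy bias-injection attacks on residual-based detectors.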
Zounon, Mawussi. "On numerical resilience in linear algebra." Thesis, Bordeaux, 2015. http://www.theses.fr/2015BORD0038/document.
Full text available
As the computational power of high performance computing (HPC) systems continues to increase through huge numbers of cores or specialized processing units, HPC applications are increasingly prone to faults. This study covers a new class of numerical fault tolerance algorithms at the application level that do not require extra resources, i.e., computational units or computing time, when no fault occurs. Assuming that a separate mechanism ensures fault detection, we propose numerical algorithms to extract relevant information from available data after a fault. After data extraction, a well-chosen part of the missing data is regenerated through interpolation strategies to constitute meaningful inputs with which to numerically restart the algorithm. We have designed these methods, called Interpolation-restart techniques, for numerical linear algebra problems such as the solution of linear systems or eigenproblems, which are the innermost numerical kernels in many scientific and engineering applications and often among the most time-consuming parts. In the framework of Krylov subspace linear solvers, the lost entries of the iterate are interpolated using the entries available on the still-alive nodes to define a new initial guess before restarting the Krylov method. In particular, we consider two interpolation policies that preserve key numerical properties of well-known linear solvers, namely the monotonic decrease of the A-norm of the error in the conjugate gradient method and the residual norm decrease in GMRES. We assess the impact of the fault rate and the amount of lost data on the robustness of the resulting linear solvers. For eigensolvers, we revisited state-of-the-art methods for solving large sparse eigenvalue problems, namely the Arnoldi methods, subspace iteration methods, and the Jacobi-Davidson method, in the light of Interpolation-restart strategies. For each considered eigensolver, we adapted the Interpolation-restart strategies to regenerate as much spectral information as possible. Through intensive experiments, we illustrate the qualitative numerical behavior of the resulting schemes when the number of faults and the amount of lost data are varied, and we demonstrate that they exhibit a numerical robustness close to that of fault-free calculations. In order to assess the efficiency of our numerical strategies, we considered an actual fully-featured parallel sparse hybrid (direct/iterative) linear solver, MaPHyS, and we proposed numerical remedies to design a resilient version of the solver. The solver being hybrid, we focus in this study on the iterative solution step, which is often the dominant step in practice. The numerical remedies we propose are twofold. Whenever possible, we exploit the natural data redundancy between processes of the solver to perform an exact recovery through clever copies over processes. Otherwise, data that has been lost and is no longer available on any process is recovered through Interpolation-restart strategies. These numerical remedies have been implemented in the MaPHyS parallel solver so that we can assess their efficiency on a large number of processing units (up to 12,288 CPU cores) for solving large-scale real-life problems.
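The entry-interpolation step can be sketched with NumPy on a toy dense system. The block solve below follows the linear-interpolation idea of recovering the lost entries x_I from A[I,I] x_I = b_I - A[I,J] x_J with the surviving entries held fixed; the matrix, iterate, and fault pattern are fabricated for illustration:

```python
import numpy as np

def li_interpolate(A, b, x, lost):
    """Regenerate the lost entries of an iterate by solving the block of
    equations that couples them to the surviving entries:
    A[I,I] x_I = b_I - A[I,J] x_J  (an LI-style recovery sketch)."""
    kept = np.setdiff1d(np.arange(A.shape[0]), lost)
    x = x.copy()
    rhs = b[lost] - A[np.ix_(lost, kept)] @ x[kept]
    x[lost] = np.linalg.solve(A[np.ix_(lost, lost)], rhs)
    return x

# toy SPD system and a partially converged iterate
rng = np.random.default_rng(0)
n = 20
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)
x_true = rng.standard_normal(n)
b = A @ x_true
x = x_true + 0.01 * rng.standard_normal(n)   # iterate close to the solution

lost = np.array([3, 7, 11])                  # entries lost to a "node failure"
x_faulty = x.copy()
x_faulty[lost] = 0.0                         # that data is simply gone
x_rec = li_interpolate(A, b, x_faulty, lost)

print(np.linalg.norm(b - A @ x_faulty), np.linalg.norm(b - A @ x_rec))
```

The recovered iterate is not the pre-fault iterate, but its residual is back at the level of the surviving data, so a restarted Krylov method loses progress rather than correctness.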
Rink, Norman Alexander, and Jeronimo Castrillon. "Comprehensive Backend Support for Local Memory Fault Tolerance." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-215785.
Full text available
Liu, Jiaqi. "Handling Soft and Hard Errors for Scientific Applications." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1483632126075067.
Full text available
Jamal, Aygul. "A parallel iterative solver for large sparse linear systems enhanced with randomization and GPU accelerator, and its resilience to soft errors." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS269/document.
Full text available
In this PhD thesis, we address three challenges faced by linear algebra solvers in the perspective of future exascale systems: accelerating convergence using innovative techniques at the algorithm level, taking advantage of GPU (Graphics Processing Unit) accelerators to enhance the performance of computations on hybrid CPU/GPU systems, and evaluating the impact of errors in the context of an increasing level of parallelism in supercomputers. We are interested in studying methods that enable us to accelerate the convergence and execution time of iterative solvers for large sparse linear systems. The solver specifically considered in this work is the parallel Algebraic Recursive Multilevel Solver (pARMS), a distributed-memory parallel solver based on Krylov subspace methods. First, we integrate a randomization technique referred to as Random Butterfly Transformations (RBT) that has been successfully applied to remove the cost of pivoting in the solution of dense linear systems. Our objective is to apply this method in the ARMS preconditioner to solve the last Schur complement system in the application of the recursive multilevel process in pARMS more efficiently. The experimental results show an improvement of both convergence and accuracy. Due to memory concerns for some test problems, we also propose to use a sparse variant of RBT followed by a sparse direct solver (SuperLU), resulting in an improvement of the execution time. Then we explain how a non-intrusive approach can be applied to implement GPU computing in the pARMS solver, especially for the local preconditioning phase, which represents a significant part of the time to compute the solution. We compare the CPU-only and hybrid CPU/GPU variants of the solver on several test problems coming from physical applications. The performance results of the hybrid CPU/GPU solver using ARMS preconditioning combined with RBT, or ILU(0) preconditioning, show a performance gain of up to 30% on the test problems considered in our experiments. Finally, we study the effect of soft faults on the convergence of the commonly used flexible GMRES (FGMRES) algorithm, which is also used to solve the preconditioned system in pARMS. The test problem in our experiments is an elliptic PDE problem on a regular grid. We consider two types of preconditioners: an incomplete LU factorization with dual threshold (ILUT), and the ARMS preconditioner combined with RBT randomization. We consider two soft-fault modeling approaches in which we perturb the matrix-vector multiplication and the application of the preconditioner, and we compare their potential impact on the convergence of the solver.
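The soft-fault experiment described above (perturbing a matrix-vector product and watching the effect on convergence) can be reproduced in miniature with a plain conjugate gradient solver. The system, fault timing, and magnitude below are arbitrary choices, not the thesis's FGMRES setup:

```python
import numpy as np

def cg(A, b, n_iter=50, fault_at=None, fault_mag=1.0, seed=0):
    """Plain conjugate gradient; optionally perturb one matrix-vector
    product to mimic a transient soft fault, recording true residual norms."""
    rng = np.random.default_rng(seed)
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    hist = []
    for k in range(n_iter):
        Ap = A @ p
        if k == fault_at:                       # transient soft fault
            Ap = Ap + fault_mag * rng.standard_normal(Ap.shape)
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
        hist.append(np.linalg.norm(b - A @ x))  # true residual, not recurrence
    return x, hist

rng = np.random.default_rng(1)
M = rng.standard_normal((30, 30))
A = M @ M.T + 30 * np.eye(30)                   # well-conditioned SPD system
b = rng.standard_normal(30)

_, clean = cg(A, b)
_, faulty = cg(A, b, fault_at=10)
print(f"final residuals: clean {clean[-1]:.2e}, faulty {faulty[-1]:.2e}")
```

Tracking the true residual (rather than the solver's internal recurrence) is what exposes the fault: after the corrupted product, the recurrence residual keeps shrinking while the true residual stagnates at the error injected by the fault.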
Stoicescu, Miruna. "Architecting Resilient Computing Systems : a Component-Based Approach." Thesis, Toulouse, INPT, 2013. http://www.theses.fr/2013INPT0120/document.
Full text available
Evolution during service life is mandatory, particularly for long-lived systems. Dependable systems, which continuously deliver trustworthy services, must evolve to accommodate changes, e.g., new fault tolerance requirements or variations in available resources. The addition of this evolutionary dimension to dependability leads to the notion of resilient computing. Among the various aspects of resilience, we focus on adaptivity. Dependability relies on fault-tolerant computing at runtime, applications being augmented with fault tolerance mechanisms (FTMs). As such, on-line adaptation of FTMs is a key challenge towards resilience. In related work, on-line adaptation of FTMs is most often performed in a preprogrammed manner or consists in tuning some parameters. Besides, FTMs are replaced monolithically: all the envisaged FTMs must be known at design time and deployed from the beginning. However, dynamics occurs along multiple dimensions, and developing a system for the worst-case scenario is impossible. According to runtime observations, new FTMs can be developed off-line but integrated on-line. We denote this ability as agile adaptation, as opposed to the preprogrammed one. In this thesis, we present an approach for developing flexible fault-tolerant systems in which FTMs can be adapted at runtime in an agile manner through fine-grained modifications that minimize the impact on the initial architecture. We first propose a classification of a set of existing FTMs based on criteria such as fault model, application characteristics, and necessary resources. Next, we analyze these FTMs and extract a generic execution scheme which pinpoints the common parts and the variable features between them. Then, we demonstrate the use of state-of-the-art tools and concepts from the field of software engineering, such as component-based software engineering and reflective component-based middleware, for developing a library of fine-grained adaptive FTMs. We evaluate the agility of the approach and illustrate its usability through two examples of integration of the library: first, in a design-driven development process for applications in pervasive computing and, second, in a toolkit for developing applications for WSNs.
Excoffon, William. "Résilience des systèmes informatiques adaptatifs : modélisation, analyse et quantification." Phd thesis, Toulouse, INPT, 2018. http://oatao.univ-toulouse.fr/20791/1/Excoffon_20791.pdf.
Full text available
Lauret, Jimmy. "Prévention et détection des interférences inter-aspects : méthode et application à l'aspectisation de la tolérance aux fautes." Phd thesis, Institut National Polytechnique de Toulouse - INPT, 2013. http://tel.archives-ouvertes.fr/tel-01067471.
Full text available
Psiakis, Rafail. "Performance optimization mechanisms for fault-resilient VLIW processors." Thesis, Rennes 1, 2018. http://www.theses.fr/2018REN1S095/document.
Full text available
Embedded processors in critical domains require a combination of reliability, performance, and low energy consumption. Very Long Instruction Word (VLIW) processors provide performance improvements through Instruction Level Parallelism (ILP) exploitation while keeping cost and power at low levels. Since the ILP is highly application-dependent, the processor does not use all its resources constantly, and these idle resources can thus be utilized for redundant instruction execution. This thesis presents a fault injection methodology for VLIW processors and three hardware mechanisms to deal with soft, permanent, and long-term faults, leading to four contributions. The first contribution presents an Architectural Vulnerability Factor (AVF) and Instruction Vulnerability Factor (IVF) analysis scheme for VLIW processors. A fault injection methodology targeting different memory structures is proposed to extract the architectural/instruction masking capabilities of the processor, and a high-level failure classification scheme is presented to categorize the output of the processor. The second contribution explores heterogeneous idle resources at run-time, both inside and across consecutive instruction bundles. To achieve this, a hardware-optimized instruction scheduling technique is applied in parallel with the pipeline to efficiently control the replication and the scheduling of the instructions. Following the trend of increasing parallelization, a cluster-based design is also proposed to tackle the issues of scalability while maintaining a reasonable area/power overhead. The proposed technique achieves a speed-up of 43.68% in performance with a ~10% area and power overhead over existing approaches. AVF and IVF analyses evaluate the vulnerability of the processor with the proposed mechanism. The third contribution deals with persistent faults. A hardware mechanism is proposed which replicates the instructions at run-time and schedules them in the idle slots considering the resource constraints. If a resource becomes faulty, the proposed approach efficiently rebinds both the original and replicated instructions during execution. Early evaluation results show up to 49% performance gain over existing techniques. In order to further decrease the performance overhead and to support single and multiple Long-Duration Transient (LDT) error mitigation, a fourth contribution is presented. We propose a hardware mechanism which detects the faults that are still active during execution and re-schedules the instructions to use not only the healthy function units but also the fault-free components of the affected function units. Once the fault fades, the affected function unit components can be reused. The scheduling window of the proposed mechanism spans two instruction bundles, so mitigation solutions can be explored in the current and the next instruction execution. The obtained fault injection results show that the proposed approach can mitigate a large number of faults with low performance, area, and power overhead.
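The idea of placing replicated instructions into idle issue slots can be sketched as a toy scheduler in Python. The bundle model, issue width, and two-bundle window below are simplifications invented for illustration, not the proposed hardware:

```python
# Hypothetical bundle model: each bundle is a list of instruction mnemonics,
# at most ISSUE_WIDTH per cycle; unused slots are implicit.
ISSUE_WIDTH = 4

def schedule_replicas(bundles):
    """Duplicate each instruction into an idle slot of its own or the next
    bundle (a two-bundle window); replicas that find no idle slot spill
    into extra bundles at the end, costing additional cycles."""
    out = [list(b) for b in bundles]
    spill = []
    for i, bundle in enumerate(bundles):
        for instr in bundle:
            replica = instr + "'"
            for j in (i, i + 1):            # current bundle, then the next
                if j < len(out) and len(out[j]) < ISSUE_WIDTH:
                    out[j].append(replica)
                    break
            else:
                spill.append(replica)       # no idle slot in the window
    # pack spilled replicas into fresh bundles
    for k in range(0, len(spill), ISSUE_WIDTH):
        out.append(spill[k:k + ISSUE_WIDTH])
    return out

bundles = [["add", "mul"], ["ld"], ["sub", "xor", "or", "and"]]
for b in schedule_replicas(bundles):
    print(b)
```

When ILP is low (half-empty bundles), replication is nearly free; only fully packed bundles force extra cycles, which is the performance argument sketched in the abstract.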
Lacerda, Felipe Gomes. "Classical leakage-resilient circuits from quantum fault-tolerant computation." reponame:Repositório Institucional da UnB, 2015. http://repositorio.unb.br/handle/10482/19594.
Full text available
Physical implementations of cryptographic algorithms leak information, which makes them vulnerable to so-called side-channel attacks. Cryptography is now used in an ever-increasing variety of scenarios, and the assumption that the execution of cryptosystems is physically isolated is often not realistic. The field of leakage resilience proposes to mitigate side-channel attacks by designing protocols that are secure even if information leaks during execution. In this work, we study leakage-resilient computation, which concerns the problem of performing secure universal computation in the presence of leakage. Fault-tolerant quantum computation is concerned with the problem of noise in quantum computers. Since it is very hard to isolate quantum systems from noise, fault tolerance proposes schemes for performing computations correctly even if some noise is present. It turns out that there exists a connection between leakage resilience and fault tolerance. In this work, we show that leakage in a classical circuit is a form of noise, when the circuit is interpreted as a quantum circuit. We then prove that for an arbitrary leakage model, there exists a corresponding noise model in which a circuit that is fault-tolerant against the noise model is also resilient against the given leakage model. We also show how to use constructions for fault tolerance to implement classical circuits that are secure in specific leakage models. This is done by establishing criteria under which quantum circuits can be converted into classical circuits in such a way that the leakage resilience property is preserved. Using these criteria, we convert an implementation of universal fault-tolerant quantum computation into a classical leakage-resilient compiler, i.e., a scheme that compiles an arbitrary circuit into a circuit of the same functionality that is leakage-resilient.
Butler, Bryan P. (Bryan Philip). "A fault-tolerant shared memory system architecture for a Byzantine resilient computer." Thesis, Massachusetts Institute of Technology, 1989. http://hdl.handle.net/1721.1/13360.
Full text available
Includes bibliographical references (leaves 145-147).
by Bryan P. Butler.
M.S.
Abbaspour, Ali Reza. "Active Fault-Tolerant Control Design for Nonlinear Systems." FIU Digital Commons, 2018. https://digitalcommons.fiu.edu/etd/3917.
Full text available
Biswas, Shuchismita. "Power Grid Partitioning and Monitoring Methods for Improving Resilience." Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/104684.
Full text available
Doctor of Philosophy
The modern power grid faces multiple threats, including extreme-weather events, solar storms, and potential cyber-physical attacks. Towards the larger goal of enhancing power systems resilience, this dissertation develops strategies to mitigate the impact of such extreme events. The proposed schemes broadly aim to (a) improve grid performance in the immediate aftermath of a disruptive event, and (b) enhance grid monitoring to identify precursors of impending failures. To improve grid performance after a disruption, we propose a proactive islanding strategy for the bulk power grid, aimed at arresting the propagation of cascading failures. For the distribution network, a mixed-integer linear program is formulated for identifying optimal sub-networks with load and distributed generators that may be retrofitted to operate as self-adequate microgrids if supply from the bulk power system is lost. To address the question of enhanced monitoring, we develop model-agnostic, computationally efficient recovery algorithms for archived and streamed data from Phasor Measurement Units (PMUs) with data drops and additive noise. PMUs are highly precise sensors that provide high-resolution insight into grid dynamics. We also illustrate an application where PMU data is used to identify the location of temporary line faults.
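A minimal, model-agnostic sketch of recovering a measurement stream with data drops and additive noise (interpolate the gaps, then smooth) is shown below. The signal and parameters are fabricated, and real PMU recovery algorithms such as those in the dissertation are considerably more sophisticated:

```python
import numpy as np

def recover(series):
    """Toy gap-fill sketch: linearly interpolate dropped samples (NaNs),
    then attenuate additive noise with a short moving average."""
    x = np.asarray(series, dtype=float)
    idx = np.arange(x.size)
    ok = ~np.isnan(x)
    filled = np.interp(idx, idx[ok], x[ok])   # fill drops from neighbours
    kernel = np.ones(5) / 5                   # 5-sample moving average
    return np.convolve(filled, kernel, mode="same")

# synthetic "measurement": a sinusoid with noise and a burst of data drops
t = np.linspace(0.0, 2.0 * np.pi, 200)
clean = np.sin(t)
rng = np.random.default_rng(0)
obs = clean + 0.05 * rng.standard_normal(t.size)
obs[40:50] = np.nan                           # dropped samples
rec = recover(obs)
print(f"RMSE after recovery: {np.sqrt(np.mean((rec - clean) ** 2)):.3f}")
```

This is model-agnostic in the same loose sense as the abstract's methods: nothing about the grid enters the algorithm, only the time series itself.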
Souza, Gisele Pinheiro. "Tuplebiz : um espaço de tuplas distribuido e com suporte a transações resilientes a falhas bizantinas." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2012. http://hdl.handle.net/10183/70239.
Full text available
Coordination models enable communication among the processes in a distributed system. The shared data model, represented by tuple spaces, is decoupled in both time and reference, which is why tuple spaces are used by parallel and pervasive applications. Fault tolerance is very important for both types of application; in a healthcare application, a fault can cost a life. In this context, this work introduces Tuplebiz, a distributed tuple space that supports transactions in environments where Byzantine faults, which cover many types of system fault, can occur. Tuplebiz is split into partitions; the main idea behind it is to distribute the tuple space among servers. Each partition guarantees fault tolerance by means of state machine replication. Furthermore, Tuplebiz provides transaction support following the ACID properties (atomicity, consistency, isolation, durability); the transaction manager is responsible for maintaining isolation. Performance and fault injection tests were carried out to evaluate Tuplebiz. The latency of Tuplebiz is approximately 2.8 times that of a non-replicated system. The injection tests were based on a fault injection framework for Byzantine faults; the faults applied were message loss, message delay, message corruption, system suspension, and crash. Latency was worse in those cases, but Tuplebiz was able to deal with all of them. A case study is also presented, showing the integration of Tuplebiz with Guaraná, a domain-specific language used for designing Enterprise Application Integration solutions, whose integration tasks are currently centralized; the integration approach aims to distribute those tasks among servers.
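The Linda-style tuple space operations underlying such a system can be sketched in a few lines of Python. This toy version is centralised and offers neither replication nor transactions, which is precisely what Tuplebiz adds on top:

```python
import threading

class TupleSpace:
    """Minimal associative tuple space sketch: out() deposits a tuple,
    rd() copies a matching tuple, inp() removes one; None in a template
    acts as a wildcard, as in classic Linda-style coordination."""
    def __init__(self):
        self._tuples = []
        self._lock = threading.Lock()

    def out(self, tup):
        with self._lock:
            self._tuples.append(tup)

    def _match(self, template, tup):
        return len(template) == len(tup) and all(
            t is None or t == v for t, v in zip(template, tup))

    def rd(self, template):
        with self._lock:
            return next((t for t in self._tuples
                         if self._match(template, t)), None)

    def inp(self, template):
        with self._lock:
            for i, t in enumerate(self._tuples):
                if self._match(template, t):
                    return self._tuples.pop(i)
            return None

ts = TupleSpace()
ts.out(("temp", "room1", 21.5))
ts.out(("temp", "room2", 19.0))
print(ts.rd(("temp", "room2", None)))   # -> ('temp', 'room2', 19.0)
print(ts.inp(("temp", None, None)))     # removes ('temp', 'room1', 21.5)
```

Because producers and consumers never name each other and need not be alive at the same time, the model is decoupled in both reference and time, the property the abstract highlights.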
Decouchant, Jérémie. "Collusions and Privacy in Rational-Resilient Gossip." Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAM034/document.
Full text of the source
Gossip-based content dissemination protocols are a scalable and cheap alternative to centralised content sharing systems. However, it is well known that these protocols suffer from rational nodes, i.e., nodes that aim at downloading the content without contributing their fair share to the system. While the problem of rational nodes that act individually has been well addressed in the literature, colluding rational nodes remains an open issue. In addition, previous rational-resilient gossip-based solutions require nodes to log their interactions with others and disclose the content of their logs, which may reveal sensitive information. Nowadays, a consensus exists on the necessity of reinforcing the control of users over their personal information. Nonetheless, to the best of our knowledge, no privacy-preserving rational-resilient gossip-based content dissemination system exists. The contributions of this thesis are twofold. First, we present AcTinG, a protocol that prevents rational collusions in gossip-based content dissemination protocols while guaranteeing zero false positive accusations. AcTinG makes nodes maintain secure logs and mutually check each other's correctness through verifiable but non-predictable audits. As a consequence of its design, it is shown to be a Nash equilibrium. A performance evaluation shows that AcTinG is able to deliver all messages despite the presence of colluders, and exhibits similar scalability properties to standard gossip-based dissemination protocols. Second, we describe PAG, the first accountable and privacy-preserving gossip protocol. PAG builds on a monitoring infrastructure and homomorphic cryptographic procedures to provide privacy to nodes while making sure that nodes forward the content they receive. The theoretical evaluation of PAG shows that breaking the privacy of interactions is difficult, even in the presence of a global and active opponent.
We assess this protocol in terms of both privacy and performance using a deployment on a cluster of machines, simulations involving up to a million nodes, and theoretical proofs. The bandwidth overhead is much lower than that of existing anonymous communication protocols, while remaining practical in terms of CPU usage.
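As a rough intuition for why gossip-based dissemination scales to the node counts mentioned above, the following toy simulation (not AcTinG or PAG themselves; the fanout and seed are invented parameters) runs round-based push gossip and counts how many rounds it takes to inform every node:

```python
import random

def push_gossip_rounds(n_nodes, fanout=3, seed=1):
    """Round-based push gossip: every informed node forwards the message
    to `fanout` uniformly random peers each round. Returns the number of
    rounds until all nodes are informed, which grows roughly
    logarithmically with n_nodes."""
    rng = random.Random(seed)
    informed = {0}  # node 0 initially holds the content
    rounds = 0
    while len(informed) < n_nodes:
        for _ in list(informed):
            informed.update(rng.sample(range(n_nodes), fanout))
        rounds += 1
    return rounds
```

With these toy parameters, `push_gossip_rounds(1000)` completes within a handful of rounds, illustrating the logarithmic spreading that makes gossip cheap compared to centralised dissemination.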
Calugaru, Vladimir. "Earthquake Resilient Tall Reinforced Concrete Buildings at Near-Fault Sites Using Base Isolation and Rocking Core Walls." Thesis, University of California, Berkeley, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=3616424.
Full text of the source
This dissertation pursues three main objectives: (1) to investigate the seismic response of tall reinforced concrete core wall buildings, designed following current building codes, subjected to pulse type near-fault ground motion, with special focus on the relation between the characteristics of the ground motion and the higher-modes of response; (2) to determine the characteristics of a base isolation system that results in nominally elastic response of the superstructure of a tall reinforced concrete core wall building at the maximum considered earthquake level of shaking; and (3) to demonstrate that the seismic performance, cost, and constructability of a base-isolated tall reinforced concrete core wall building can be significantly improved by incorporating a rocking core-wall in the design.
First, this dissertation investigates the seismic response of tall cantilever wall buildings subjected to pulse type ground motion, with special focus on the relation between the characteristics of ground motion and the higher-modes of response. Buildings 10, 20, and 40 stories high were designed such that inelastic deformation was concentrated at a single flexural plastic hinge at their base. Using nonlinear response history analysis, the buildings were subjected to near-fault seismic ground motions as well as simple closed-form pulses, which represented distinct pulses within the ground motions. Euler-Bernoulli beam models with lumped mass and lumped plasticity were used to model the buildings.
Next, this dissertation investigates numerically the seismic response of six seismically base-isolated (BI) 20-story reinforced concrete buildings and compares their response to that of a fixed-base (FB) building with a similar structural system above ground. Located in Berkeley, California, 2 km from the Hayward fault, the buildings are designed with a core wall that provides most of the lateral force resistance above ground. For the BI buildings, the following are investigated: two isolation systems (both implemented below a three-story basement), isolation periods equal to 4, 5, and 6 s, and two levels of flexural strength of the wall. The first isolation system combines tension-resistant friction pendulum bearings and nonlinear fluid viscous dampers (NFVDs); the second combines low-friction tension-resistant cross-linear bearings, lead-rubber bearings, and NFVDs.
Finally, this dissertation investigates the seismic response of four 20-story buildings hypothetically located in the San Francisco Bay Area, 0.5 km from the San Andreas fault. One of the four studied buildings is fixed-base (FB), two are base-isolated (BI), and one uses a combination of base isolation and a rocking core wall (BIRW). Above the ground level, a reinforced concrete core wall provides the majority of the lateral force resistance in all four buildings. The FB and BI buildings satisfy requirements of ASCE 7-10. The BI and BIRW buildings use the same isolation system, which combines tension-resistant friction pendulum bearings and nonlinear fluid viscous dampers. The rocking core-wall includes post-tensioning steel, buckling-restrained devices, and at its base is encased in a steel shell to maximize confinement of the concrete core. The total amount of longitudinal steel in the wall of the BIRW building is 0.71 to 0.87 times that used in the BI buildings. Response history two-dimensional analysis is performed, including the vertical components of excitation, for a set of ground motions scaled to the design earthquake and to the maximum considered earthquake (MCE). While the FB building at MCE level of shaking develops inelastic deformations and shear stresses in the wall that may correspond to irreparable damage, the BI and the BIRW buildings experience nominally elastic response of the wall, with floor accelerations and shear forces which are 0.36 to 0.55 times those experienced by the FB building. The response of the four buildings to two historical and two simulated near-fault ground motions is also studied, demonstrating that the BIRW building has the largest deformation capacity at the onset of structural damage.
(Abstract shortened by UMI.)
Moriam, Sadia [Verfasser], Gerhard [Gutachter] Fettweis, and Andreas [Gutachter] Herkersdorf. "On Fault Resilient Network-on-Chip for Many Core Systems / Sadia Moriam ; Gutachter: Gerhard Fettweis, Andreas Herkersdorf." Dresden : Technische Universität Dresden, 2019. http://d-nb.info/1226899838/34.
Full text of the source
Leipnitz, Marcos Tomazzoli. "Resilient regular expression matching on FPGAs with fast error repair." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2017. http://hdl.handle.net/10183/168788.
Full text of the source
The Network Function Virtualization (NFV) paradigm promises to make computer networks more scalable and flexible by decoupling network functions (NFs) from dedicated and vendor-specific hardware. However, network- and compute-intensive NFs may be difficult to virtualize without performance degradation. In this context, Field-Programmable Gate Arrays (FPGAs) have been shown to be a good option for hardware acceleration of virtual NFs that require high throughput, without deviating from the concept of an NFV infrastructure, which aims at high flexibility. Regular expression matching is an important and compute-intensive mechanism used to perform Deep Packet Inspection, and it can be FPGA-accelerated to meet performance constraints. This solution, however, introduces new challenges regarding dependability requirements. Particularly for SRAM-based FPGAs, soft errors in the configuration memory are a significant dependability threat. In this work we present a comprehensive fault tolerance mechanism to deal with configuration faults affecting the functionality of FPGA-based regular expression matching engines. Moreover, a placement-aware scrubbing mechanism is introduced to reduce the system repair time, improving system reliability and availability. Experimental results show that the overall failure rate and the system mean time to repair can be reduced by 95% and 90%, respectively, with manageable area and performance costs.
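The benefit of placement-aware scrubbing can be conveyed with a back-of-the-envelope model. This is an illustrative sketch, not the dissertation's actual model: the frame counts and per-frame write time are invented parameters, and real scrubbers also account for error detection latency.

```python
def scrub_repair_time_us(total_frames, module_frames, frame_write_us,
                         placement_aware):
    """Toy repair-time model for FPGA configuration-memory scrubbing:
    a blind scrubber rewrites every configuration frame, while a
    placement-aware scrubber only rewrites the frames occupied by the
    faulty module."""
    frames_to_write = module_frames if placement_aware else total_frames
    return frames_to_write * frame_write_us

# Hypothetical device: 10,000 frames total, faulty engine occupies 800.
blind = scrub_repair_time_us(10_000, 800, 5, placement_aware=False)  # 50,000 us
aware = scrub_repair_time_us(10_000, 800, 5, placement_aware=True)   #  4,000 us
```

Under these invented numbers the placement-aware repair is 92% faster, in the spirit of the 90% mean-time-to-repair reduction reported above.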
Ozturk, Erdinc. "Efficient and tamper-resilient architectures for pairing based cryptography." Worcester, Mass. : Worcester Polytechnic Institute, 2009. http://www.wpi.edu/Pubs/ETD/Available/etd-010409-225223/.
Full text of the source
Keywords: Pairing Based Cryptography; Identity Based Cryptography; Tate Pairing; Montgomery Multiplication; Robust Codes; Fault Detection; Tamper-Resilient Architecture. Includes bibliographical references (leaves 97-104).
Li, Yi [Verfasser], Martin [Akademischer Betreuer] Kappas, Heiko [Akademischer Betreuer] Faust, Christoph [Akademischer Betreuer] Dittrich, Daniela [Akademischer Betreuer] Sauer, Renate [Akademischer Betreuer] Bürger-Arndt, and Hans [Akademischer Betreuer] Ruppert. "Integrated approaches of social-ecological resilience assessment and urban resilience management : Resilience thinking, transformations and implications for sustainable city development in Lianyungang, China / Yi Li. Betreuer: Martin Kappas. Gutachter: Martin Kappas ; Heiko Faust ; Christoph Dittrich ; Daniela Sauer ; Renate Bürger-Arndt ; Hans Ruppert." Göttingen : Niedersächsische Staats- und Universitätsbibliothek Göttingen, 2016. http://d-nb.info/1082425575/34.
Full text of the source
Li, Yi [Verfasser], Martin [Akademischer Betreuer] Kappas, Heiko [Akademischer Betreuer] Faust, Christoph [Akademischer Betreuer] Dittrich, Daniela [Akademischer Betreuer] Sauer, Renate [Akademischer Betreuer] Bürger-Arndt, and Hans [Akademischer Betreuer] Ruppert. "Integrated approaches of social-ecological resilience assessment and urban resilience management : Resilience thinking, transformations and implications for sustainable city development in Lianyungang, China / Yi Li. Betreuer: Martin Kappas. Gutachter: Martin Kappas ; Heiko Faust ; Christoph Dittrich ; Daniela Sauer ; Renate Bürger-Arndt ; Hans Ruppert." Göttingen : Niedersächsische Staats- und Universitätsbibliothek Göttingen, 2016. http://nbn-resolving.de/urn:nbn:de:gbv:7-11858/00-1735-0000-0028-86BB-7-6.
Full text of the source
Shoker, Ali. "Byzantine fault tolerance from static selection to dynamic switching." Toulouse 3, 2012. http://thesesups.ups-tlse.fr/1924/.
Full text of the source
Byzantine Fault Tolerance (BFT) is becoming crucial with the revolution of online applications and the increasing number of innovations in computer technologies. Although dozens of BFT protocols have been introduced in the previous decade, their adoption by practitioners remains disappointing. To some extent, this indicates that existing protocols are perhaps not yet convincing or satisfactory. The problem is that researchers are still trying to establish 'the best protocol' using traditional methods, e.g., by designing new protocols. However, theoretical and experimental analyses demonstrate that it is hard to achieve a one-size-fits-all BFT protocol. Indeed, we believe that looking for smarter tactics, like 'fastening fragile sticks with a rope to achieve a solid stick', is necessary to circumvent the issue. In this thesis, we introduce the first BFT selection model and algorithm that automate and simplify the election of the 'preferred' BFT protocol among a set of candidates. The selection mechanism operates in three modes: Static, Dynamic, and Heuristic. For the two latter modes, we present a novel BFT system, called Adapt, that reacts to potential changes in system conditions and switches dynamically between existing BFT protocols, i.e., seeking adaptation. The Static mode allows BFT users to choose a single BFT protocol only once. This is quite useful in Web Services and Clouds, where BFT can be sold as a service (and signed into the SLA contract). This mode is designed for systems whose state does not fluctuate much. In this mode, an evaluation process is in charge of matching the user preferences against the profiles of the nominated BFT protocols, considering both reliability and performance. The elected protocol is the one that achieves the highest evaluation score. The mechanism is automated via mathematical matrices and produces selections that are reasonable and close to reality.
Some systems, however, may experience fluctuating conditions, like variable contention or message payloads. In this case, the Static mode will not be efficient, since a chosen protocol might not fit the new conditions. The Dynamic mode solves this issue: Adapt combines a collection of BFT protocols and switches between them, thus adapting to changes of the underlying system state. Consequently, the 'preferred' protocol is always selected for each system state, yielding an optimal quality of service, i.e., reliability and performance. Adapt monitors the system state through its Event System and uses a Support Vector Regression method to conduct run-time predictions of the protocols' performance (e.g., throughput, latency, etc.). Adapt also operates in a Heuristic mode: using predefined heuristics, this mode adjusts user preferences to improve the selection process. The evaluation of our approach shows that selecting the 'preferred' protocol is automated and close to reality in the Static mode. In the Dynamic mode, Adapt always achieves the optimal performance among available protocols. The evaluation demonstrates that the overall system performance can be improved significantly too. Other cases show that it is not always worthwhile to switch between protocols. This is made possible by conducting predictions with high accuracy, which can reach more than 98% in many cases. Finally, the thesis shows that Adapt can be made smarter through the use of heuristics.
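The Static mode's matrix-based evaluation can be pictured as a weighted score over protocol profiles. The sketch below is a hypothetical simplification: the protocol names, metric names, profile values, and weights are all invented for illustration, and Adapt's real evaluation matrices are richer.

```python
def select_bft_protocol(profiles, preferences):
    """Return the candidate protocol whose profile maximizes the user's
    weighted score, mimicking a static-mode evaluation that matches user
    preferences against protocol profiles."""
    def score(name):
        return sum(weight * profiles[name].get(metric, 0.0)
                   for metric, weight in preferences.items())
    return max(profiles, key=score)

# Invented profiles: each metric is a normalized score in [0, 1].
profiles = {
    "PBFT":    {"reliability": 0.9, "throughput": 0.6, "latency": 0.7},
    "Zyzzyva": {"reliability": 0.6, "throughput": 0.9, "latency": 0.8},
}
preferences = {"reliability": 0.7, "throughput": 0.2, "latency": 0.1}
```

With reliability weighted heavily, `select_bft_protocol(profiles, preferences)` elects "PBFT"; shifting most of the weight to throughput flips the choice to "Zyzzyva", which is the kind of preference-sensitive election the Static mode automates.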
Kuentzer, Felipe Augusto. "More than a timing resilient template : a case study on reliability-oriented improvements on blade." Pontifícia Universidade Católica do Rio Grande do Sul, 2018. http://tede2.pucrs.br/tede2/handle/tede/8093.
Full text of the source
Approved for entry into archive by Sheila Dias (sheila.dias@pucrs.br) on 2018-06-01T12:13:22Z (GMT) No. of bitstreams: 1 FELIPE_AUGUSTO_KUENTZER_TES.pdf: 3277301 bytes, checksum: 7e77c5eb72299302d091329bde56b953 (MD5)
Made available in DSpace on 2018-06-01T12:33:57Z (GMT). No. of bitstreams: 1 FELIPE_AUGUSTO_KUENTZER_TES.pdf: 3277301 bytes, checksum: 7e77c5eb72299302d091329bde56b953 (MD5) Previous issue date: 2018-03-28
As the VLSI design moves into ultra-deep-submicron technologies, timing margins added due to variabilities in the manufacturing process, operation temperature and supply voltage become a significant part of the clock period in traditional synchronous circuits. Timing resilient architectures emerged as a promising solution to alleviate these worst-case timing margins, improving system performance and/or reducing energy consumption. These architectures embed additional circuits for detecting and recovering from timing violations that may arise after designing the circuit with reduced time margins. Asynchronous systems, on the other hand, have the potential to improve energy efficiency and performance due to the absence of a global clock. Moreover, asynchronous circuits are known to be robust to process, voltage and temperature variations. Blade is an asynchronous timing resilient template that leverages the advantages of both asynchronous and timing resilient techniques. However, Blade still presents challenges regarding its testability, which hinders its commercial or large-scale application. Although design for testability with scan chains is widely applied in the industry, the high silicon costs associated with its use in Blade can be prohibitive. Asynchronous circuits can also present advantages for functional testing, and the timing resilient characteristic provides continuous feedback during normal circuit operation, which can be applied for concurrent testing. In this Thesis, Blade's testability is evaluated from a different perspective, where circuits implemented with Blade present reliability properties that can be exploited for stuck-at and delay fault testing. Initially, a fault classification method that relates behavioral patterns with structural faults inside the error detection logic and a new test-driven implementation of this detection module are proposed.
The control part is analyzed for internal faults, and a new design is proposed in which test coverage is improved and the circuit can be further optimized by the design flow. An original method for timing the delay lines is also presented. Finally, delay fault testing of critical paths in the data path is explored as a natural consequence of a Blade circuit, where the continuous monitoring for timing violations provides the necessary feedback for online detection of these delay faults. The integration of all the contributions provides satisfactory fault coverage for an area overhead that, for the circuits evaluated in this Thesis, varies from 4.24% to 6.87%, while the scan approach for the same circuits implies an area overhead varying from 50.19% to 112.70%, respectively. The contributions of this Thesis demonstrate that, with a few improvements to the Blade architecture, it is possible to expand its reliability beyond tolerating delay violations in the data path, advancing toward fault testing (including online fault detection) of the entire circuit, as well as improving yield and addressing aging.
Cunha, Hugo Assis. "An architecture to resilient and highly available identity providers based on OpenID standard." Universidade Federal do Amazonas, 2014. http://tede.ufam.edu.br/handle/handle/4431.
Full text of the source
Approved for entry into archive by Divisão de Documentação/BC Biblioteca Central (ddbc@ufam.edu.br) on 2015-07-20T14:08:11Z (GMT) No. of bitstreams: 1 Dissertação - Hugo Assis Cunha.pdf: 4753834 bytes, checksum: 4304c038b5fb3c322af4b88ba5d58195 (MD5)
Made available in DSpace on 2015-07-20T14:12:26Z (GMT). No. of bitstreams: 1 Dissertação - Hugo Assis Cunha.pdf: 4753834 bytes, checksum: 4304c038b5fb3c322af4b88ba5d58195 (MD5) Previous issue date: 2014-09-26
Secure authentication services and systems typically follow one of two main approaches. The first seeks to defend against every kind of attack; most current services use this approach, which is known to fail and to be infeasible. Our proposal uses the second approach, which defends against some specific attacks but assumes that the system may eventually suffer an intrusion or fault. Hence, the system does not try to avoid these problems, but tolerates them through mechanisms that keep it executing in a trustworthy and safe state. This research presents a resilient architecture for authentication services based on OpenID that uses fault and intrusion tolerance protocols, as well as a functional prototype. The tests performed show that our system delivers better performance than a standard OpenID service, with additional resilience, high availability, protection of sensitive data, and fault and intrusion tolerance, all while keeping compatibility with current OpenID clients.
Araújo, José. "Design, Implementation and Validation of Resource-Aware and Resilient Wireless Networked Control Systems." Doctoral thesis, KTH, Reglerteknik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-152535.
Full text of the source
QC 20140929
Quistrebert, Yohann. "Pour un statut fondateur de la victime psychologique en droit de la responsabilité civile." Thesis, Rennes 1, 2018. http://www.theses.fr/2018REN1G001.
Full text of the source
The psychological impact of events that give rise to liability, be they acts of terrorism, the loss of a loved one, or psychological harassment, has characteristics that are both protean and invisible. The first is due to the fact that, in psychological matters, injuries and the resulting suffering are both varied. From the injury point of view, certain events prove more traumatizing than others, principally those during which the subject has been faced with his own death. Concerning suffering, a subject can suffer emotionally from a change in his own integrity, for example a diagnosis of serious illness, as much as from damage affecting a loved one (e.g., death or handicap). Second, the impact is invisible: it is much simpler to identify harm to physical integrity than harm to psychic integrity. Moreover, certain psychological harms are totally imperceptible by reason of their eminently diffuse character. The object of this study is therefore to understand how civil liability law apprehends the victim of such a psychological impact. Its comprehension is particular, given the inevitable interaction between the judicial and psychological spheres. To better understand this, we first propose a conceptualization of the psychological victim grounded in psychopathological reality. Two major distinctions feed this thought: one legal in nature, relating to the distinction between prejudice and harm; the other psychopathological, opposing emotional shock and psychic trauma. Their intertwining allows us to elaborate different cases of manifestation of psychological suffering and to define the contours of the qualities of the victim. Secondly, regarding compensation for a psychological victim, both the appreciation and the evaluation of these prejudices are examined.
The repercussions of psychic trauma, or even of emotional shock, can sometimes be so grave that compensation cannot restrict itself to the experienced suffering alone. Consequences of different natures, for example patrimonial ones, must be taken into consideration. To this end, a division of the prejudices of the psychological victim should be put in place, with distinct rules of compensation established according to the prejudice endured. A presumed prejudice, originating notably from a harm, cannot logically be compensated in the same fashion as non-presumable prejudices that require a forensic assessment. In short, the system of compensation must be in phase with the system of disclosure of suffering established beforehand. As a result, this study proposes to construct a true founding status of the psychological victim. Once this principal notion has been fully conceptualized, we can use it to create a rational compensation scheme.
Shehaj, Marinela. "Robust dimensioning of wireless optical networks with multiple partial link failures." Thesis, Compiègne, 2020. http://www.theses.fr/2020COMP2540.
Full text of the source
This thesis summarizes our work on the optimization of wireless optical networks. More specifically, the main goal is to propose network dimensioning algorithms for managing demand and ensuring traffic satisfaction in a network under partial link failures (i.e., when some links and/or nodes operate with reduced capacity), caused mostly by weather conditions. The primary criterion for judging the efficiency of the proposed algorithms is the dimensioning cost of the network while keeping traffic satisfaction at reasonably high levels. The main application area we have in mind is networks that apply Free Space Optics (FSO), a well established broadband wireless optical transmission technology in which communication links are provided by a laser beam sent from the transmitter to a receiver placed in the line of sight. FSO networks exhibit several important advantages, but their biggest disadvantage is the vulnerability of FSO links to weather conditions, which causes substantial loss of transmission power over the optical channel. This makes the problem of network dimensioning important and, as a matter of fact, difficult. A proper approach to FSO network dimensioning should therefore take such losses into account so that the level of carried traffic is satisfactory under all observed weather conditions. In the first part of the thesis, we introduce a relevant dimensioning problem and present a robust optimization algorithm for such enhanced dimensioning. To construct our approach, we start by building a reference failure set from weather data records for a given time period, against which the network must be protected. A mathematical formulation of the robust network dimensioning problem then uses this failure set.
Yet the reference set obtained this way will most likely contain an excessive number of states while still not containing all the states that appear in reality. Hence, we propose to approximate the reference failure set with a special kind of virtual failure set called a K-set, parameterized by an integer K less than or equal to the number of links in the network. For a given K, the K-set contains the states corresponding to all combinations of K or fewer simultaneously affected links. For situations where the weather is extremely bad, we also propose a hybrid network model composed of FSO and fiber links. The second part of this thesis is devoted to improving the so-called uncertainty sets (or uncertainty polytopes). Having introduced link K-sets in the first part, we extend the idea by considering simultaneous degradations of K nodes (meaning degradation of all adjacent links). Finally, inspired by the hitting set problem, a new idea is to find a large number of subsets of two or three affected links and to use all possible combinations (composed of 2 or at most 3 of these subsets) to build a new virtual failure set that covers as much as possible of the reference failure set obtained from real weather data records. This new failure set then serves as input to our cut-generation algorithm, so that we can dimension the network at minimum cost for satisfactory demand realization. A substantial part of the work presents a numerical study on different network instances that illustrates the effectiveness of the proposed approach. Dedicated space is given to the construction of a realistic network instance called the Paris Metropolitan Area Network (PMAN).
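The link K-set described above is simply the family of all failure states with at most K simultaneously degraded links, which can be enumerated directly. This is a toy enumeration for intuition; real dimensioning instances would use such a set inside a cut-generation loop rather than materializing it, and the link names here are invented.

```python
from itertools import combinations

def link_k_set(links, k):
    """Enumerate the virtual failure set: every state in which K or
    fewer links are simultaneously affected, including the
    failure-free (empty) state."""
    return [frozenset(c) for r in range(k + 1)
            for c in combinations(links, r)]
```

For a 4-link network and K = 2, this yields 1 + 4 + 6 = 11 states, e.g. `link_k_set(["e1", "e2", "e3", "e4"], 2)` contains the empty state, each single-link state, and each pair.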
Huang, Chung-Hao, and 黃重豪. "Model Checking Collaboration, Competition and Dense Fault Resilience." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/22398042253671313459.
Повний текст джерела
National Taiwan University
Graduate Institute of Electronics Engineering
104
In this thesis, I introduce BSIL (basic strategy-interaction logic) and TCL (temporal cooperation logic), which help to formally define and verify strategy-interaction properties of a game. The former, BSIL, is an extension of ATL (alternating-time logic) for specifying the interaction of player strategies in a system. BSIL can describe one system strategy that cooperates with several strategies of the environment for different requirements. Such properties are important in practice, and I show that they are not expressible in ATL*, GL (game logic), and AMC (alternating μ-calculus). Specifically, BSIL is more expressive than ATL but incomparable with ATL*, GL, and AMC in expressiveness. I show that fulfilling a BSIL specification may require a memoryful strategy. I also show that the model-checking complexity of BSIL is PSPACE-complete, lower than those of ATL*, GL, AMC, and the general strategy logics, which suggests that BSIL can help close the gap between large-scale real-world projects and time-consuming game-theoretical results. I then demonstrate the feasibility of these techniques by implementing and experimenting with our PSPACE model-checking algorithm for BSIL. TCL, on the other hand, allows the successive definition of strategies for agents and agencies. Like BSIL, TCL is incomparable with ATL*, GL, and AMC in expressiveness; unlike BSIL, it can describe deterministic Nash equilibria. I prove that the model-checking complexity of TCL is EXPTIME-complete. TCL enjoys this relatively low complexity by disallowing too close an entanglement between cooperation and competition; allowing such entanglement leads to non-elementary complexity. I have implemented a model checker for TCL and shown the feasibility of model checking in experiments on some benchmarks.
Although BSIL and TCL have decent expressive power and benefit from relatively low complexity, PSPACE-completeness and EXPTIME-completeness are still not good enough for real problems. To apply game concepts to real-world problems, I introduce an algorithm that calculates the highest degree of fault tolerance achievable by the controller of a safety-critical system. This can be reduced to solving a game between a malicious environment and a controller: during the play, the environment tries to break the system by injecting failures, while the controller tries to keep the system safe by making correct decisions. I propose a control objective that offers a better balance between complexity and precision for such systems: we seek systems that are k-resilient. A system is k-resilient if it can rapidly recover from blocks of a small number of local faults, up to k, infinitely many times, provided the blocks of up to k faults are separated by short recovery periods in which no fault occurs. k-resilience is a simple abstraction from the precise distribution of local faults, but I believe it is much more refined than the traditional objective of maximizing the number of tolerated local faults. I give a detailed argument for why this is the right level of abstraction for safety-critical systems in which local faults are few and far between. I prove that the computational complexity of constructing optimal control with respect to resilience is low, and demonstrate feasibility through an implementation and experimental results in the following chapters.
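The k-resilience condition above admits a simple discrete-time reading. The sketch below is ours, with hypothetical helper names, and checks only one ingredient of the definition: whether a given fault trace respects the assumption that blocks of at most k faults are separated by fault-free recovery periods (the thesis itself solves a game against the environment, not a trace check).

```python
def fault_blocks(trace, recovery):
    """Group a discrete-time fault trace (1 = local fault at that step)
    into fault blocks: two faults belong to the same block unless they
    are separated by at least `recovery` consecutive fault-free steps."""
    blocks, count, gap = [], 0, recovery  # the system starts recovered
    for fault in trace:
        if fault:
            if gap >= recovery and count > 0:
                blocks.append(count)  # the previous block has ended
                count = 0
            count += 1
            gap = 0
        else:
            gap += 1
    if count > 0:
        blocks.append(count)
    return blocks

def admissible(trace, k, recovery):
    """A fault trace is admissible for a k-resilient controller if every
    fault block contains at most k local faults."""
    return all(b <= k for b in fault_blocks(trace, recovery))

# Two blocks of 2 faults separated by a 3-step recovery period:
print(fault_blocks([1, 1, 0, 0, 0, 1, 0, 1], recovery=3))      # [2, 2]
print(admissible([1, 1, 0, 0, 0, 1, 0, 1], k=2, recovery=3))   # True
print(admissible([1, 1, 0, 0, 0, 1, 0, 1], k=1, recovery=3))   # False
```

A k-resilient controller must keep the system safe and recovering on every admissible trace, which is what makes the abstraction coarser than the exact fault distribution yet finer than a plain fault count.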
Herrmann, Linda. "Formal Configuration of Fault-Tolerant Systems." 2018. https://tud.qucosa.de/id/qucosa%3A34074.
Повний текст джерелаHamouda, Sara S. "Resilience in high-level parallel programming languages." Phd thesis, 2019. http://hdl.handle.net/1885/164137.
Повний текст джерелаMoriam, Sadia. "On Fault Resilient Network-on-Chip for Many Core Systems." Doctoral thesis, 2018. https://tud.qucosa.de/id/qucosa%3A34064.
Повний текст джерелаAmaro, Luís Alberto Pires. "Resilient Artificial Neural Networks." Master's thesis, 2020. http://hdl.handle.net/10316/92519.
Повний текст джерела
Every day we depend on Artificial Neural Networks to solve complex problems. They are present in technology that is crucial to our society today, such as credit card fraud detection and medical devices, and they will be present in technology that will be crucial to our society in the future, such as autonomous vehicles. Artificial Neural Networks are part of safety-critical systems, systems on which many human lives depend. A system is considered safety-critical if its failure leads to loss of life or causes substantial damage to property or the environment. Currently, a large number of safety-critical systems rely on Artificial Neural Networks. For this reason, it is essential to ensure that they have a level of robustness/resilience adequate to the fact that human lives depend on the results these systems produce. In this work, we have two main objectives. The first is to study the impact of the Dropout technique on the resilience/robustness of Artificial Neural Networks. The second is to develop a new technique and study its impact on the resilience/robustness of Artificial Neural Networks. The new technique is called Stimulated Dropout. To achieve these goals, several neural networks were trained with different Dropout and Stimulated Dropout probabilities. The networks were trained and tested using two different datasets. During testing, we perform bit flips in memory, and to understand the impact of both techniques on the resilience of Artificial Neural Networks, the results were analyzed using three parameters: the number of Silent Data Corruptions (SDCs), accuracy, and training time. The results show that the number of SDCs decreases in networks trained with Dropout and Stimulated Dropout.
On the MNIST dataset, a network trained without either technique has an SDC percentage of 6.77%, whereas with a dropout probability of 80% the percentage is 3.76%, about 45% less. For Stimulated Dropout, the lowest SDC percentage occurs with a probability of 20%, at 5.15%, about 24% less. On the Fashion MNIST dataset, a network trained without either technique has an SDC percentage of 7.93%, whereas with a probability of 80% the percentage is 4.8%, about 46% less. For Stimulated Dropout, the lowest SDC percentage occurs with a probability of 50%, at 4.22%, about 47% less. For Dropout, we observed a tradeoff between accuracy and the number of SDCs: the lowest number of SDCs on both datasets occurs with a dropout probability of 80%, while accuracy decreases as the dropout probability increases. For Stimulated Dropout, we observed a tradeoff between training time and the number of SDCs: the training time of networks trained with different Stimulated Dropout probabilities is considerably higher than that of the network trained without either technique.
Artificial Neural Networks (ANNs) are used daily to help humans solve complex problems in real-life situations. They are present in technology that is key to our society in the present, such as credit card fraud detection and medical devices, and they will be present in technology that will be key to our society in the future, such as Autonomous Vehicles (AV). A system is considered safety-critical if its failure leads to a loss of life or substantial damage to property or the environment. There are currently a large number of Safety-Critical Systems (SCSs) that rely on ANNs. For this reason, it is crucial to ensure that the ANNs have a level of robustness/resilience adequate to the fact that human lives depend on the results produced by them. In this work, we have two main objectives. The first is to study the impact that the Dropout technique has on the resilience/robustness of ANNs. The second is to develop a new technique and study its impact on the resilience/robustness of ANNs. The new technique is named Stimulated Dropout. To achieve these goals, we train multiple neural networks with different dropout and stimulated dropout probabilities. The networks were trained and tested using two different datasets. During the tests of these neural networks we perform memory bit flips, and to understand the impact of both techniques on the resilience of ANNs, the results were analyzed using three parameters: the number of SDCs, accuracy, and training time. Our results show that adding Dropout and Stimulated Dropout to the neural networks decreases the number of SDCs, which means that both techniques have a positive impact on the resilience/robustness of ANNs. For the MNIST dataset, when we have a network trained without either technique, we have a percentage of SDCs of 6.77%, whereas with a dropout probability of 80% the percentage of SDCs is 3.76%, about 45% less.
In the case of Stimulated Dropout, the lowest percentage of SDCs occurs at a probability of 20%, where we have a percentage of SDCs of 5.15%, about 24% less. As for the Fashion MNIST dataset, when we have a network trained without either technique, we have a percentage of SDCs of 7.93%, whereas with a dropout probability of 80% we have a percentage of 4.8%, about 46% less. In the case of Stimulated Dropout, the lowest percentage of SDCs occurs at a probability of 50%, where we have a percentage of SDCs of 4.22%, about 47% less. In the case of Dropout, we observed a tradeoff between accuracy and the number of SDCs: the lowest number of SDCs on both datasets occurs with a dropout probability of 80%, and comparing the accuracy across dropout probabilities shows that accuracy decreases as the dropout probability increases. Regarding Stimulated Dropout, we observed a tradeoff between training time and the number of SDCs: the training time of the networks trained with different stimulated dropout probabilities is far higher than that of the network trained without either technique.
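The memory bit-flip injection used in experiments of this kind can be illustrated with a short sketch. This is our own minimal illustration, not the thesis's code, and the helper names are hypothetical; it flips one bit of a weight's 32-bit IEEE-754 encoding, and counting an SDC would additionally require running the corrupted network and comparing its output against the fault-free one.

```python
import random
import struct

def flip_bit(value, bit):
    """Flip one bit (0..31) in the IEEE-754 single-precision
    encoding of a weight and return the corrupted float."""
    (bits,) = struct.unpack("<I", struct.pack("<f", value))
    (flipped,) = struct.unpack("<f", struct.pack("<I", bits ^ (1 << bit)))
    return flipped

def inject_fault(weights, rng):
    """Flip a random bit of a randomly chosen weight in place,
    mimicking a single-event upset in memory; returns the index hit."""
    i = rng.randrange(len(weights))
    weights[i] = flip_bit(weights[i], rng.randrange(32))
    return i

# Flipping the sign bit (bit 31) negates the weight; flipping any bit twice
# restores the original value.
print(flip_bit(1.0, 31))              # -1.0
print(flip_bit(flip_bit(0.5, 7), 7))  # 0.5
```

The severity of the corruption depends heavily on which bit is hit: a flipped exponent bit can change a weight by many orders of magnitude, while a low mantissa bit is often masked, which is part of why only some injections surface as SDCs.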
Other - This thesis is part of the AI4EU project
Kulkarni, Sameer G. "Resource Management for Efficient, Scalable and Resilient Network Function Chains." Doctoral thesis, 2018. http://hdl.handle.net/11858/00-1735-0000-002E-E477-8.
Повний текст джерела