
Dissertations / Theses on the topic 'Multi-Fault'


Consult the top 50 dissertations / theses for your research on the topic 'Multi-Fault.'


1

Black, Natanya Maureen. "Fault length, multi-fault rupture, and earthquakes in California." Diss., Restricted to subscribing institutions, 2008. http://proquest.umi.com/pqdweb?did=1581647191&sid=1&Fmt=2&clientId=1564&RQT=309&VName=PQD.

2

Navarro, Martin. "Fault roughness and fault complexity: field study, multi-scale analysis and numerical fault model." [S.l.]: [s.n.], 2002. http://deposit.ddb.de/cgi-bin/dokserv?idn=966415809.

3

Roman, Felipe de Fraga. "Fault supervision for multi robotics systems." Pontifícia Universidade Católica do Rio Grande do Sul, 2015. http://hdl.handle.net/10923/7732.

Abstract:
As robotics becomes more common and people start to use it in routine tasks, dependability becomes ever more relevant to creating trustworthy solutions. A commonly used approach to providing reliability and availability is the use of multiple robots instead of a single robot, owing to their intrinsic redundancy. However, with large teams of robots (tens or more), determining the system status can be a challenge. This work presents a runtime monitoring solution for multi-robot systems that integrates the Nagios IT monitoring tool with the ROS robotic middleware. One potential advantage of this approach is that using a consolidated IT-infrastructure tool enables the reuse of several relevant features developed for monitoring large datacenters. Another important advantage is that the solution requires no additional software on the robot side. The experimental results demonstrate that the proposed monitoring system has a small performance impact on the robot, and that the monitoring server can easily support hundreds or even thousands of monitored robots.
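Nagios drives this kind of monitoring through its plugin convention: a check is any program whose exit code is 0 (OK), 1 (WARNING) or 2 (CRITICAL). The thesis does not publish its checks, so the following is a minimal hypothetical sketch of a per-robot heartbeat check; the function name and thresholds are assumptions, not taken from the work:

```python
# Nagios plugin exit codes (standard convention).
OK, WARNING, CRITICAL = 0, 1, 2

def check_heartbeat(age_seconds, warn=5.0, crit=15.0):
    """Classify the age of a robot's last heartbeat message.

    The warn/crit thresholds are illustrative, not from the thesis.
    Returns (exit_code, status_message) as a Nagios plugin would.
    """
    if age_seconds >= crit:
        return CRITICAL, "last heartbeat %.1fs ago" % age_seconds
    if age_seconds >= warn:
        return WARNING, "last heartbeat %.1fs ago" % age_seconds
    return OK, "heartbeat fresh (%.1fs)" % age_seconds
```

A real deployment would wrap logic like this in a small script that the Nagios server executes for each monitored robot, which is what lets the server side scale without installing anything on the robots themselves.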
4

Covella, Vito Vincenzo. "Multi-node Fault Classification using Machine Learning." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/22867/.

Abstract:
An HPC system, a system with much more computational power than general computing systems, is a complex system made up of different sections and many computing nodes. In such systems, failures can arise for different reasons: interactions among the components, the specific technologies used, or bugs in the software. In order to reach Exascale performance and guarantee availability and reliability, it is important to detect and recover from these anomalies. This thesis proposes a fault classification method based on machine learning. Other researchers have worked in this field, but their work mainly relies on per-node models. Per-node models are impractical, however, because they require too much data and fault injection would be hard to control. For this reason, this research investigates single multi-node models, since a single general model requires less operational effort to train and is easier to maintain over time. More specifically, the methodology focuses not only on metaparameter exploration but also on understanding how many nodes are necessary for training and which specific nodes are the best candidates. To that end, two approaches are compared: incremental training with randomly selected nodes, and incremental training with nodes that are representative of a chosen number of clusters. In both cases the end result is a single general model that can be used on different nodes for fault detection. Using the dataset provided by LRZ, covering roughly 32 compute nodes, it is shown that classification performance stabilizes when a small subset of compute nodes is used as the training set, and that both selection methods outperform node-specific classifiers when more than one training node is used. Finally, the clustering approach is more reliable and stable with more training nodes, while the random approach performs better with fewer training nodes.
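The cluster-representative selection can be pictured with a small stdlib-only sketch: each node is summarized by one feature value, a tiny k-means groups the nodes, and the node nearest each centroid joins the training set. The one-dimensional feature, the deterministic initialisation, and the data are all illustrative assumptions, not the thesis's actual pipeline:

```python
def kmeans_1d(points, k, iters=25):
    """Naive 1-D Lloyd's k-means with deterministic quantile initialisation."""
    pts = sorted(points)
    if k == 1:
        centroids = [sum(pts) / len(pts)]
    else:
        centroids = [pts[round(i * (len(pts) - 1) / (k - 1))] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda c: (p - centroids[c]) ** 2)].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

def representative_nodes(node_feature, k):
    """Pick one training node per cluster: the node closest to each centroid.

    node_feature: dict mapping node name -> scalar summary feature.
    """
    cents = kmeans_1d(list(node_feature.values()), k)
    return sorted({min(node_feature, key=lambda n: (node_feature[n] - c) ** 2)
                   for c in cents})
```

Random selection would simply be `random.sample(sorted(node_feature), k)`; the abstract's finding is that the clustered pick pays off once several training nodes are used, while the random pick does better with very few.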
5

Teixeira, André Herdeiro. "Multi-Agent Systems with Fault and Security Constraints." Thesis, KTH, Reglerteknik, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-106224.

Abstract:
Over the last few years, one of the classical paradigms in control systems has been changing. Partly due to the increase in computational power, it is now possible to have intelligent sensors and actuators that take actions in a distributed fashion. There is thus no longer a need for a centralized controller or data-fusion station, which suits Networked Control Systems well. This has led to the development of distributed control and sensor-fusion frameworks, and a great amount of research remains to be done in this field. Since there is no longer a central station containing all the information, but instead several distinct agents controlling or monitoring the system, communication between these agents becomes necessary. One important aspect of such systems is therefore security, regarding not only the communications between agents but also the possible malfunction of one or several agents. The scope of this thesis is to study methods to detect possible misbehaviors and security breaches in the communications between agents from a control-theoretic perspective. In this sense, a specific kind of distributed controller, the consensus protocol, is analyzed under the effect of faults and communication attacks. A fault detection and isolation method is used to detect these events using only the locally available information. An example from power systems is also given, where a decentralized state estimator is used to detect faults within an area, requiring only local information.
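As a toy illustration of a consensus protocol under a fault, the sketch below runs the standard update x_i ← x_i + ε Σ_{j∈N_i}(x_j − x_i) with one "stuck" agent, and lets a healthy agent flag a neighbour that keeps disagreeing without ever moving. This simple local residual heuristic stands in for the thesis's fault detection and isolation machinery; it is an assumption for illustration, not the actual method:

```python
def consensus_step(x, neighbors, eps=0.2, stuck=None):
    """One synchronous consensus update; `stuck` models a faulty agent
    that ignores the protocol and holds its value."""
    return {i: xi if i == stuck
            else xi + eps * sum(x[j] - xi for j in neighbors[i])
            for i, xi in x.items()}

def suspect_neighbors(prev, curr, neighbors, i, tol=1e-9):
    """Local check run by agent i using only neighbour states:
    a neighbour that still disagrees with us but did not move at all
    over the last step is suspicious."""
    return [j for j in neighbors[i]
            if abs(curr[j] - prev[j]) < tol and abs(curr[j] - curr[i]) > 1e-3]
```

On a three-agent line a–b–c with c stuck at 10, agent b moves toward c each step while c never changes, so b's local check singles out c.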
6

Kamiel, Berli Paripurna. "Vibration-based multi-fault diagnosis for centrifugal pumps." Thesis, Curtin University, 2015. http://hdl.handle.net/20.500.11937/1532.

Abstract:
The thesis proposes a new method for vibration-based fault diagnosis of centrifugal pumps by combining statistical features, the Symlet wavelet transform, Principal Component Analysis (PCA), and k-Nearest Neighbors (kNN). Six statistical features were extracted from the low-frequency part of the wavelet decomposition and used as input features for the PCA model. Fault detection used the T² and Q statistics, while fault classification and identification were carried out using score matrices and k-Nearest Neighbors, respectively.
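The feature-extraction and kNN stages of such a pipeline can be sketched with the standard library alone. The six features below (mean, standard deviation, RMS, peak, crest factor, kurtosis) are common vibration statistics chosen for illustration; they are not necessarily the thesis's six, and the wavelet and PCA stages are omitted:

```python
import math

def stat_features(frame):
    """Six simple statistical features of one vibration frame."""
    n = len(frame)
    mean = sum(frame) / n
    var = sum((s - mean) ** 2 for s in frame) / n
    rms = math.sqrt(sum(s * s for s in frame) / n)
    peak = max(abs(s) for s in frame)
    crest = peak / rms if rms else 0.0                      # impulsiveness
    kurt = (sum((s - mean) ** 4 for s in frame) / n) / var ** 2 if var else 0.0
    return [mean, math.sqrt(var), rms, peak, crest, kurt]

def knn_label(train, query, k=3):
    """train: list of (feature_vector, label); majority vote among k nearest."""
    nearest = sorted(train, key=lambda t: math.dist(t[0], query))[:k]
    votes = [label for _, label in nearest]
    return max(set(votes), key=votes.count)
```

Impacts in a bearing or impeller raise the peak, crest factor and kurtosis sharply, which is why even this crude feature space separates smooth running from impulsive faults.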
7

Abdelaal, Ahmed Abdelmalek Abdelhafez. "Active rectifier control for multi-phase fault-tolerant generators." Thesis, University of Manchester, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.489542.

Abstract:
This research is concerned with the solid-state power conversion system that interfaces a multi-phase, fault-tolerant, permanent-magnet generator to a high-voltage DC bus. The generator is being developed for future aerospace applications; it has five independent phases, a 3:1 speed range, and a one-per-unit output impedance.
8

Ozcerit, Ahmet Turan. "Fault-tolerant embedded multi-processing system with bus switching." Thesis, University of Sussex, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.285122.

9

Deshpande, Chaitanya. "Multi-Agent Based Fault Localization and Isolation in Active Distribution Networks." Thesis, KTH, Industriella informations- och styrsystem, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-169221.

Abstract:
Liberalized electricity markets, increased awareness of clean energy resources, and their decreasing costs have resulted in large numbers of distributed power generators being installed on the distribution network. The installation of distributed generation has altered the passive nature of the distribution grid. The concept of an Active Distribution Network has been proposed, which will enable present-day infrastructure to host renewable energy resources reliably. Fault management, which includes fault localization, isolation, and service restoration, is part of the active management of distribution networks. This thesis introduces a distributed protection methodology for fault localization and isolation, with the objective of enhancing the reliability of the network. Faults are identified based on root-mean-square values of current measurements, compared against preset thresholds. The multi-agent method can be used to locate the faulty section of a distribution network and to select the faulty phases. Each nodal Bus Agent controls the breakers associated with it. On indication of a fault, adjacent Bus Agents communicate with each other to identify the fault location. A trip signal is then issued to the corresponding breakers of the adjacent Bus Agents, isolating the faulty section of line. A case study was carried out to verify the suitability of the proposed method: a meshed network model and the multi-agent protection scheme were simulated in Simulink SimPowerSystems. Given the nature of the distribution network, separate breakers for each phase are considered. The protection system correctly identified the fault introduced in the network and interrupted the fault current.
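The per-phase logic described above is simple enough to sketch: each Bus Agent computes the RMS of its sampled phase currents, flags phases above a preset pickup, and trips only the phases that the adjacent agent also flags, which localizes the fault to the shared line section. This is a stdlib-only illustration; the pickup value, sampling, and class shape are assumptions, not the thesis's implementation:

```python
import math

def rms(samples):
    """Root-mean-square of one cycle of sampled current."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

class BusAgent:
    """Minimal sketch of a nodal Bus Agent."""

    def __init__(self, name, pickup_amps):
        self.name = name
        self.pickup = pickup_amps  # preset overcurrent threshold (assumed)

    def faulted_phases(self, phase_currents):
        """phase_currents: dict phase -> list of current samples (A)."""
        return {ph for ph, samples in phase_currents.items()
                if rms(samples) > self.pickup}

    def coordinate(self, my_flags, neighbor_flags):
        """Trip only phases both adjacent agents flag: the fault must
        lie on the line section between them."""
        return my_flags & neighbor_flags
```

With a 1000 A fault on phase a and normal 200 A load on phases b and c, only phase a exceeds a 400 A pickup, and a trip is issued only when the neighbouring agent agrees.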
10

Miller, Dawn Elizabeth. "Underground cable fault location using multi-element gas sensing." Thesis, University of Manchester, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.681492.

11

Pai, Raikar Siddhesh Prakash Sunita. "Network Fault Resilient MPI for Multi-Rail Infiniband Clusters." The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1325270841.

12

Martin, Robert Rohan. "Multi-Level Cell Flash Memory Fault Testing and Diagnosis." University of Cincinnati / OhioLINK, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1120232606.

13

Valero Masa, Alicia. "High impedance fault detection method in multi-grounded distribution networks." Doctoral thesis, Université Libre de Bruxelles, 2012. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209580.

Abstract:
High Impedance Faults (HIFs) are undetectable by conventional protection technology under certain conditions. These faults occur when an energized conductor makes undesired contact with a quasi-insulating object, such as a tree or a road. This contact restricts the fault current to a very low value, from a few mA up to 75 A. In solidly grounded distribution networks, where the residual current under normal conditions is considerable, overcurrent devices do not protect against HIFs. Such protection is nevertheless essential for guaranteeing public safety, because of the possibility of contact with the fallen conductor and the risk of fire.
Doctorat en Sciences de l'ingénieur (Doctorate in Engineering Sciences)
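The numbers in the abstract make the detection gap easy to state: an HIF of at most 75 A sits far below the pickup current of a conventional feeder overcurrent relay, so the relay never operates. A tiny sketch of that comparison, where the 600 A pickup is an assumed typical setting, not a figure from the thesis:

```python
def overcurrent_relay_operates(fault_current_amps, pickup_amps=600.0):
    """A conventional overcurrent relay only operates above its pickup.
    The 600 A default is an assumed, illustrative feeder setting."""
    return fault_current_amps > pickup_amps

# HIF currents quoted in the abstract: from a few mA up to 75 A.
hif_currents = [0.005, 1.0, 75.0]
```

Every current in the HIF range stays below pickup, while a bolted fault of a few kA trips immediately, which is why dedicated HIF detection methods are needed.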

14

Hossack, John A. "A multi-agent system for automated post-fault disturbance analysis." Thesis, University of Strathclyde, 2005. http://oleg.lib.strath.ac.uk:80/R/?func=dbin-jump-full&object_id=21606.

Abstract:
Within today's privatised electricity industry, post-fault disturbance analysis is becoming an increasingly challenging prospect for protection engineers. Not only must they be proficient at operating a diverse range of data-gathering tools, but they must also be able to spend the time necessary to interpret the large volumes of data generated by modern network monitoring devices. Although a degree of automated assistance is provided by existing intelligent decision support tools, it remains for the protection engineer to manually collate and interpret the output of each system in order to compile a comprehensive understanding of each disturbance. As detailed in this thesis, the requirement for manual intervention has been eliminated through the development of the Protection Engineering Diagnostic Agents (PEDA) decision support architecture, which is capable of automating all aspects of post-fault disturbance analysis. An essential component of this architecture is an alarm processor developed specifically to assist protection engineers with the early stages of post-fault disturbance analysis. The novel reasoning methodology employed emulates a protection engineer's approach to alarm analysis, providing automatic identification of transmission system disturbances and events. PEDA achieves fully automated post-fault disturbance analysis through the novel use of Multi-Agent Systems (MAS) to integrate the alarm processor with other automated systems for fault record retrieval, fault record interpretation and protection validation. As described in the thesis, achieving systems integration using MAS provides levels of architectural flexibility and extensibility not previously realised within existing integrated decision support architectures. The PEDA architecture was developed following a comprehensive eleven-stage methodology, created as part of the reported research, to assist with the specification of MAS for decision support within the power industry. Each stage of the PEDA specification process is detailed together with its implementation. Finally, the implemented architecture has been shown to offer automated retrieval, interpretation, collation and archiving of disturbance information within five minutes of a disturbance occurring. The beneficiaries of this near-real-time provision of disturbance information need not be limited to protection engineers.
15

Mellouli, Sehl. "FATMAS: A Methodology to Design Fault-tolerant Multi-agent Systems." Thesis, Université Laval, 2005. http://www.theses.ulaval.ca/2005/22674/22674.pdf.

Abstract:
A multi-agent system (MAS) consists of several agents interacting together. In a MAS, each agent performs several tasks, but each agent is also prone to individual failures that can leave it unable to perform its tasks, which can lead the MAS as a whole to fail. Ideally, the MAS should be able to identify the possible sources of failure and try to overcome them in order to continue operating correctly; that is, it should be fault-tolerant. There are two kinds of sources of failure for an agent: errors originating from the environment with which the agent interacts, and programming exceptions. Several works on fault-tolerant systems deal with programming exceptions; however, these techniques do not allow the MAS to identify errors originating from an agent's environment. In this thesis, we propose a design methodology, called FATMAS, which allows a MAS designer to identify errors originating from agents' environments. In doing so, the designer can determine which sources of failure the system can control and which it cannot, and hence which errors it can prevent. Consequently, this allows the designer to delimit the system's boundary from its environment: the boundary is the area within which the decision-taking process of the MAS has power to make things happen, or prevent them from happening. We distinguish between the system's environment and an agent's environment. An agent's environment is characterized by the components (hardware or software) that the agent does not control. The system, however, may control some of an agent's environment components; consequently, some of an agent's environment components may not be part of the system's environment.
The development of a fault-tolerant MAS (FTMAS) requires both a design methodology and a reorganization technique that allow the MAS designer to identify and control, if possible, the different sources of system failure. Current MAS design methodologies do not integrate such a technique. FATMAS provides four models used to design and implement the target system, together with a reorganization technique to assist the designer in identifying and controlling different sources of system failure. FATMAS also provides a macro-process covering the entire life cycle of the system development, as well as several micro-processes that guide the designer when developing each model. The macro-process is iterative and based on a cost/benefit evaluation that helps the designer decide whether to go from one iteration to the next. The methodology has three phases: analysis, design, and implementation. The analysis phase develops the task-environment model, which identifies the different tasks the agents will perform, their resources, and their preconditions, and thereby identifies several possible sources of system failure. The design phase develops the agent model and the agent interaction model: the agent model describes the agents and their resources, with each agent performing several tasks identified in the task-environment model, while the agent interaction model describes the message exchanges between agents. The implementation phase develops the implementation model and allows automatic code generation of Java agents; the implementation model describes the infrastructure upon which the MAS will operate and the development environment to be used.
The reorganization technique includes the three techniques required to design a fault-tolerant system: fault prevention, fault recovery, and fault tolerance. The fault-prevention technique assists the designer in delimiting the system's boundary. The fault-recovery technique proposes a MAS architecture that allows failures to be detected. The fault-tolerance technique is based on agent and task redundancy: contrary to existing fault-tolerance techniques, it replicates tasks as well as agents, not only agents, and thus reduces system complexity by minimizing the number of agents operating in the system. Furthermore, FATMAS helps the designer deal with possible failures of the physical components on which the MAS operates, either by controlling these components or by distributing the agents over them in such a way that if a component fails, the MAS can continue operating properly.
The FATMAS methodology presented in this dissertation assists a designer in building fault-tolerant systems. Its main contributions are: 1. it identifies different sources of system failure; 2. it introduces new tasks into a MAS to control the identified sources of failure; 3. it proposes a mechanism which automatically determines which tasks (agents) should be replicated and into which other agents; 4. it reduces system complexity by minimizing the replication of agents; 5. it proposes a MAS reorganization technique, embedded within the designed MAS, which assists the designer in determining the system's boundary, proposes a MAS architecture to detect and recover from failures originating from the system boundary, and proposes a way to distribute agents over the physical components so that the MAS can continue operating properly in case of a component failure, making the MAS more robust in fault-prone environments. FATMAS determines the different sources of failure of a MAS: the MAS controls the sources of failure situated within its boundary, but not those situated in its environment. Consequently, the reorganization technique proposed in this dissertation is proven valid only in the case where the sources of failure are controlled by the MAS. It cannot be proven that the future system is fault-tolerant, since faults originating from the environment or from coding are not dealt with.
16

Zain, Ali Noohul Basheer. "An investigation of delay fault testing for multi voltage design." Thesis, University of Southampton, 2009. https://eprints.soton.ac.uk/66389/.

Abstract:
Multi-Voltage Design (MVD) has been successfully applied in contemporary processors as a technique to reduce energy consumption. This work aims to find a generalised delay-testing method for MVD. There has been little work to date on testing such systems, yet testing at the smallest number of operating voltages reduces testing costs. In the initial stage, the impact of varying the supply voltage on different types of physical defects is analysed. Simulation results indicate that it is necessary to test at more than one operating voltage, and that the lowest operating voltage does not necessarily give the best fault coverage. The second part of this work concerns the testing of level shifters in an MVD environment. The testing of level shifters was analysed to determine whether high test coverage can be achieved at a single supply voltage. Resistive opens and shorts were considered, and it was shown that, for testing purposes, consideration of purely digital fault effects is sufficient. Multiple faults were also considered. In all cases, it can be concluded that a single supply voltage is sufficient to test the level shifters. To further enhance test quality, fault modelling and simulation using VHDL-AMS are proposed. The simulation results show that the model derived using simplified VHDL-AMS gives acceptable results and significantly reduces fault simulation time.
17

Cabezas, Rodríguez Juan Pablo. "Generative adversarial network based model for multi-domain fault diagnosis." Tesis, Universidad de Chile, 2019. http://repositorio.uchile.cl/handle/2250/170996.

Abstract:
Thesis submitted for the degree of Ingeniero Civil Mecánico (Mechanical Engineer)
With the use of deep neural networks gaining notoriety in the prognostics and health management (PHM) field, sensors getting progressively cheaper and algorithms improving, the lack of data has become a major issue for data-driven models. Labelled data applicable to specific scenarios is scarce at best. The purpose of this work is to develop a method to diagnose the health state of a bearing in limited-data situations. Nowadays, most techniques focus on improving diagnostic accuracy and on estimating remaining useful life for well-documented components; as it stands, current methods are ineffective in limited-data scenarios. A method was developed wherein vibration signals are used to create scalograms and spectrograms, which in turn are used to train generative and classification neural networks, with the goal of diagnosing a partially or totally unknown dataset based on a fully labelled one. Results were compared to a simpler method in which a classification network is trained on the labelled dataset and used directly to diagnose the unknown dataset. The Case Western Reserve University Bearing Dataset (CWR) and the Society for Machine Failure Prevention Technology Bearing Dataset were used as inputs; both datasets served as the labelled set as well as the unknown one. For classification, a convolutional neural network (CNN) was designed. A generative adversarial network (GAN), based on the network introduced in the paper StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation, was used as the generative model. Results were favourable for the CNN network whilst generally negative for the GAN network. Result analysis suggests that the cost function is unsuitable for the proposed problem. 
It is concluded that cycle-based image-to-image translation does not work correctly on vibration signals for bearing diagnosis.
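The signal-to-image step described above can be sketched as follows; this is an illustrative example (assuming NumPy/SciPy, with a synthetic tone plus noise standing in for a real bearing record), not the thesis's actual pipeline:

```python
import numpy as np
from scipy import signal

def vibration_to_spectrogram(x, fs, nperseg=256):
    """Log-magnitude spectrogram of a vibration record, usable as a CNN input image."""
    _, _, Sxx = signal.spectrogram(x, fs=fs, nperseg=nperseg)
    return np.log1p(Sxx)  # compress the dynamic range

# Synthetic stand-in for a bearing signal: a 1 kHz tone plus noise, 1 s at 12 kHz.
fs = 12000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 1000 * t) + 0.1 * np.random.randn(fs)
S = vibration_to_spectrogram(x, fs=fs)
print(S.shape)  # (frequency bins, time frames)
```

Stacks of such 2-D arrays, one per labelled signal segment, would then form the training images for the classifier.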
APA, Harvard, Vancouver, ISO, and other styles
18

Ghaeini, Bentolhoda. "A Fault-Aware Resource Manager for Multi-Processor System-on-Chip." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-101010.

Full text
Abstract:
The semiconductor technology development empowers fabrication of extremely complex integrated circuits (ICs) that may contain billions of transistors. Such high integration density enables designing an entire system onto a single chip, commonly referred to as a System-on-Chip (SoC). In order to boost performance, it is increasingly common to design SoCs that contain a number of processors, so-called multi-processor system-on-chips (MPSoCs). While on one hand recent semiconductor technologies enable fabrication of devices such as MPSoCs which provide high performance, on the other hand there is a drawback that these devices are becoming increasingly susceptible to faults. These faults may occur due to escapes from manufacturing test, aging effects or environmental impacts. When present in a system, faults may disrupt functionality and can cause incorrect system operation. Therefore, it is very important when designing systems to consider methods to tolerate potential faults. To cope with faults, there is a need for fault handling, which implies automatic detection, identification and recovery from faults which may occur during the system's operation. This work is about the design and implementation of a fault handling method for an MPSoC. A fault-aware Resource Manager (RM) is designed and implemented to obtain correct system operation and maximize the system's throughput in the presence of faults. The RM has the responsibility of scheduling jobs to available resources, collecting fault states from resources in the system and performing fault handling tasks based on fault states. The RM is also employed in multiple experiments in order to study its behavior in different situations.
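A minimal sketch of the RM idea, dispatching queued jobs only to resources not marked faulty, might look like the following; the class, method and resource names are hypothetical, not taken from the thesis:

```python
from collections import deque

class ResourceManager:
    """Toy fault-aware scheduler: jobs are assigned only to healthy resources."""
    def __init__(self, resources):
        self.fault_state = {r: False for r in resources}  # resource -> faulty?
        self.queue = deque()

    def report_fault(self, resource):
        """Record a fault state collected from a resource."""
        self.fault_state[resource] = True

    def submit(self, job):
        self.queue.append(job)

    def dispatch(self):
        """Assign queued jobs round-robin across the currently healthy resources."""
        healthy = [r for r, faulty in self.fault_state.items() if not faulty]
        plan = {r: [] for r in healthy}
        i = 0
        while self.queue and healthy:
            plan[healthy[i % len(healthy)]].append(self.queue.popleft())
            i += 1
        return plan

rm = ResourceManager(["core0", "core1", "core2"])
rm.report_fault("core1")            # core1 flagged as faulty
for j in range(4):
    rm.submit(f"job{j}")
schedule = rm.dispatch()
print(schedule)  # jobs land on core0 and core2 only
```

In the actual system the fault states would come from on-chip detection mechanisms rather than explicit calls, but the scheduling decision has the same shape.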
APA, Harvard, Vancouver, ISO, and other styles
19

Kim, Bruce Chang-Shik. "A fault detection and diagnosis technique for multi-chip module interconnects." Diss., Georgia Institute of Technology, 1996. http://hdl.handle.net/1853/13736.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Khalili, Mohsen. "Distributed Adaptive Fault-Tolerant Control of Nonlinear Uncertain Multi-Agent Systems." Wright State University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=wright1503622016617833.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Davoodi, Samirmi Farhad. "Multi-agent and knowledge-based system for power transformer fault diagnosis." Thesis, University of Liverpool, 2013. http://livrepository.liverpool.ac.uk/14455/.

Full text
Abstract:
Transformer reliability and stability are key concerns. In order to increase their efficiency, automatic monitoring and fault diagnosis of power transformers are required. Dissolved Gas Analysis (DGA) is one of the most important tools for diagnosing the condition of an oil-immersed transformer. Agent technology, as a new, robust and helpful technique, has been successfully applied to various applications. Integration of a Multi-Agent System (MAS) with a knowledge base provides a robust system for applications such as fault diagnosis and automated action performing. For this purpose, the present study was conducted on a MAS based on the Gaia methodology and a knowledge base. The developed MAS, built following the Gaia methodology, represents a generic framework that is capable of managing agent executions and message delivery. Real-time data is sampled from a power transformer and saved into a database, and it is also available to the user on request. Three types of knowledge-based systems, namely rule-based reasoning, ontology and fuzzy ontology, were applied in the MAS. The developed MAS is thus shown to be successfully applicable to condition monitoring of power transformers using real-time data. The Rogers ratio method was used with all of the knowledge-based systems named above, and the accuracy of the results was compared and discussed. Of the knowledge-based systems studied, fuzzy ontology is found to be the best performing one in terms of result accuracy, compared to rule-based reasoning and ontology. The application of the developed fuzzy ontology improved the accuracy by over 22%. Unlike previous works in this field, which were not capable of dealing with uncertain situations, the present work based on fuzzy ontology has a clear advantage in successfully solving problems with some degree of uncertainty. This is especially important, as most real-world situations involve some uncertainty. 
Overall, the work contributes the use of a knowledge base and a multi-agent system for fault diagnosis of power transformers, including the novel application of fuzzy ontology for dealing with uncertain situations. The advantages of the proposed method are ease of upgrade, flexibility, efficient fault diagnosis and reliability. The application of the proposed technique would benefit power system reliability, as it would reduce the number of engineering experts required, lower maintenance expenses and extend the lifetime of power transformers.
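As an illustration of DGA ratio diagnosis, a Rogers-style ratio computation can be sketched as follows; the thresholds and labels here are simplified placeholders for illustration only, not the actual codes from the thesis or from the IEC/IEEE interpretation tables:

```python
def rogers_ratios(h2, ch4, c2h6, c2h4, c2h2):
    """Three gas ratios used in Rogers-style DGA (gas concentrations in ppm)."""
    return ch4 / h2, c2h2 / c2h4, c2h4 / c2h6

def diagnose(r1, r2, r5):
    """Grossly simplified Rogers-style interpretation with illustrative thresholds."""
    if r2 > 1.0:
        return "arcing suspected"
    if r5 > 3.0:
        return "high-temperature thermal fault"
    if 0.1 <= r1 <= 1.0 and r5 < 1.0:
        return "normal"
    return "indeterminate - refer to expert system"

healthy = rogers_ratios(h2=30, ch4=20, c2h6=30, c2h4=20, c2h2=1)
print(diagnose(*healthy))  # normal
faulty = rogers_ratios(h2=50, ch4=30, c2h6=20, c2h4=60, c2h2=120)
print(diagnose(*faulty))   # arcing suspected
```

In the thesis, rule sets of this kind are encoded in the rule-based, ontology and fuzzy-ontology knowledge bases rather than hard-coded, which is what allows graded, uncertain cases to be handled.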
APA, Harvard, Vancouver, ISO, and other styles
22

Poluri, Pavan Kamal Sudheendra. "Fault Tolerant Network-on-Chip Router Architectures for Multi-Core Architectures." Diss., The University of Arizona, 2014. http://hdl.handle.net/10150/338752.

Full text
Abstract:
As the feature size scales down to deep nanometer regimes, it has enabled the designers to fabricate chips with billions of transistors. The availability of such abundant computational resources on a single chip has made it possible to design chips with multiple computational cores, resulting in the inception of Chip Multiprocessors (CMPs). The widespread use of CMPs has resulted in a paradigm shift from computation-centric architectures to communication-centric architectures. With the continuous increase in the number of cores that can be fabricated on a single chip, communication between the cores has become a crucial factor in its overall performance. Network-on-Chip (NoC) paradigm has evolved into a standard on-chip interconnection network that can efficiently handle the strict communication requirements between the cores on a chip. The components of an NoC include routers, that facilitate routing of data between multiple cores and links that provide raw bandwidth for data traversal. While diminishing feature size has made it possible to integrate billions of transistors on a chip, the advantage of multiple cores has been marred with the waning reliability of transistors. Components of an NoC are not immune to the increasing number of hard faults and soft errors emanating due to extreme miniaturization of transistor sizes. Faults in an NoC result in significant ramifications such as isolation of healthy cores, deadlock, data corruption, packet loss and increased packet latency, all of which have a severe impact on the performance of a chip. This has stimulated the need to design resilient and fault tolerant NoCs. This thesis handles the issue of fault tolerance in NoC routers. Within the NoC router, the focus is specifically on the router pipeline that is responsible for the smooth flow of packets. In this thesis we propose two different fault tolerant architectures that can continue to operate in the presence of faults. 
In addition to these two architectures, we also propose a new reliability metric for evaluating soft error tolerant techniques targeted towards the control logic of the NoC router pipeline. First, we present Shield, a fault tolerant NoC router architecture that is capable of handling both hard faults and soft errors in its pipeline. Shield uses techniques such as spatial redundancy, exploitation of idle resources and bypassing of faulty resources to achieve hard fault tolerance. The use of these techniques reveals that Shield is six times more reliable than the baseline unprotected router. To handle soft errors, Shield uses a selective hardening technique that hardens specific gates of the router pipeline to increase its soft error tolerance. To quantify the soft error tolerance improvement, we propose a new metric called Soft Error Improvement Factor (SEIF) and use it to show that Shield's soft error tolerance is three times better than that of the baseline unprotected router. Then, we present the Soft Error Tolerant NoC Router (STNR), a low overhead fault tolerant NoC router architecture that can tolerate soft errors in the control logic of its pipeline. STNR achieves soft error tolerance based on the idea of dual execution, comparison and rollback. It exploits idle cycles in the router pipeline to perform the redundant computation and comparison necessary for soft error detection. Upon the detection of a soft error, the pipeline is rolled back to the stage that was affected by the soft error. Salient features of STNR include a high level of soft error detection, fault containment and minimal impact on latency. Simulations show that STNR was able to detect all injected single soft errors in the router pipeline. To perform a quantitative comparison between STNR and existing similar architectures, we propose a new reliability metric called Metric for Soft error Tolerance (MST) in this thesis. 
MST is unique in that it encompasses four crucial factors, namely soft error tolerance, area overhead, power overhead and pipeline latency overhead, in a single metric. Analysis using MST shows that STNR provides better reliability while incurring lower overhead compared to existing architectures.
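The dual execution, comparison and rollback idea behind STNR can be caricatured in a few lines (purely illustrative; the real design operates on router pipeline stages and reuses idle cycles for the redundant pass):

```python
def route_compute(header):
    """Stand-in for one control-logic stage of a router pipeline: pick an output port."""
    return header % 5

def protected_stage(header, flip=None):
    """Dual execution with comparison and rollback, in the spirit of STNR."""
    first = route_compute(header)
    if flip is not None:            # model a soft error corrupting the first result
        first ^= flip
    second = route_compute(header)  # redundant re-execution
    if first != second:
        # mismatch -> roll back and re-execute; report that an error was caught
        return route_compute(header), True
    return first, False

print(protected_stage(7))          # fault-free pass
print(protected_stage(7, flip=1))  # injected error detected and corrected
```

Any single upset that corrupts only one of the two executions is caught by the comparison, which is the containment property the thesis evaluates.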
APA, Harvard, Vancouver, ISO, and other styles
23

Einafshar, Atefeh. "Fault tolerant reconfiguration of multi-satellite interactions using high-level petri nets." Thesis, University of British Columbia, 2015. http://hdl.handle.net/2429/52798.

Full text
Abstract:
An integrated reconfiguration performance model for interacting satellite networks is an important tool in analyzing reliability and developing protocols for uninterrupted operation. However, such a quantitative model is not easy to develop, since it involves many parameters related to the network's operation and all the earth-linked operational information communicated through it. The aim of this study is to propose an integrated communication model for a network of interacting satellites using high-level Petri nets, which permits sub-network reconfiguration without loss of communication ability whenever there are satellite communication faults or full failures. To quantify the Vulnerability, Uncertainty and Probability (VUP) in a network, a Stochastic Petri Net (SPN) based model is developed. Three indicators are proposed to determine the VUP definitions in an interacting network of satellites. To model the overall reconfiguration schemes of a network of interacting satellites, the Colored Petri Net (CPN) paradigm is used to simulate the reconfiguration operation of the integrated Networked Control System (NCS). A modular representation of the interacting satellites is provided in terms of the satellites' subsystems and their interconnection, both with one another and through the network. The transmission network is modeled through senders and receivers, including packet-data transmission. Four reconfiguration methods are developed and used to recover the network in case partial or full failures occur in the system. The proposed approaches are then used to study the overall response time of a given NCS of interacting satellites, as well as the delays between the mutual senders and receivers. 
Simulations of the detailed models show that the networked control performance of the interacting satellites, in particular with reference to any satellite failure, can be improved with the inclusion of appropriate monitors within the networked system, as represented by sub-networks in the CPN model. The suggested integrated networked control schemes can be used to obtain fault tolerant reconfiguration for a required network performance.
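The basic firing rule underlying both SPN and CPN models can be shown with a minimal place/transition net; the places and the reconfiguration transition below are hypothetical examples, and the thesis's models add stochastic timing and colored tokens on top of this rule:

```python
class PetriNet:
    """Minimal place/transition net: a transition fires when all input places hold tokens."""
    def __init__(self, marking):
        self.marking = dict(marking)   # place -> token count
        self.transitions = {}          # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) > 0 for p in inputs)

    def fire(self, name):
        inputs, outputs = self.transitions[name]
        assert self.enabled(name), f"{name} is not enabled"
        for p in inputs:
            self.marking[p] -= 1       # consume input tokens
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1  # produce output tokens

# Toy reconfiguration step: a faulty satellite link is rerouted via an idle backup.
net = PetriNet({"link_ok": 0, "link_fault": 1, "backup_idle": 1})
net.add_transition("reconfigure",
                   inputs=["link_fault", "backup_idle"],
                   outputs=["link_ok"])
net.fire("reconfigure")
print(net.marking)
```

Reachable markings of nets like this are what the simulations explore when assessing response times under failures.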
Applied Science, Faculty of
Mechanical Engineering, Department of
Graduate
APA, Harvard, Vancouver, ISO, and other styles
24

Murphy, Jess McNeff. "A multi-dimensional approach to fault protection in deep space software systems." Diss., Connect to online resource, 2006. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:1439429.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Taylor, David George. "Multi-scale imaging of the North Anatolian Fault Zone using seismic interferometry." Thesis, University of Leeds, 2018. http://etheses.whiterose.ac.uk/21717/.

Full text
Abstract:
Seismic imaging allows us to examine the subsurface structure of fault zones. Accurate knowledge of the structure of fault zones is critical for our understanding of earthquake hazard, and the processes of strain accumulation within the crust and upper mantle. The North Anatolian Fault Zone is a ∼ 1200 km long continental strike-slip fault zone located in northern Turkey. In the 20th century, the North Anatolian Fault accommodated a westward propagating sequence of twelve Mw > 6.5 earthquakes. The most recent of these earthquakes occurred at Izmit and Duzce in 1999, 86 km south-east of Istanbul. In this thesis I use techniques from seismic interferometry to create seismic images of the crustal and upper mantle structure along the Izmit-Adapazari section of the North Anatolian Fault, in the vicinity of the 1999 Izmit rupture. I develop methods for observing P-wave reverberations from the free surface that are contained within the ambient seismic noise field and the P-wave coda of teleseismic earthquakes. By autocorrelating the seismic records from a dense seismic array in north-western Turkey, I use these reverberations to create high resolution seismic reflection images of the crust and upper mantle beneath the North Anatolian Fault Zone. In addition, I calculate inter-station cross-correlations to observe Rayleigh and Love waves propagating between stations in the Izmit-Adapazari region. I then use Rayleigh and Love wave phase velocity measurements to perform surface wave tomography and construct an S-wave velocity model of the top 10 km of the crust in the Izmit-Adapazari region. In the reflection images, I observe a clear arrival associated with a Moho reflected P-wave (PPmP). A ~ 3 s variation in travel time of the PPmP arrival suggests that the Moho is vertically offset beneath the northern branch of the North Anatolian Fault Zone. 
The vertical offset in the Moho occurs over a region less than 7 km wide approximately 16 km north of the surface trace of the North Anatolian Fault. The location of the vertical offsets indicates that the North Anatolian Fault is a localised structure that dips at an angle between 60◦ and 70◦ through the entire crust and enters the upper mantle as a narrow shear zone. I also note a reduction in the amplitude of the PPmP phase beneath both the northern and southern branches of the North Anatolian Fault Zone. This amplitude reduction could result from the presence of fluids and serpentinite minerals in the upper mantle which reduce Moho reflectivity beneath the North Anatolian Fault. The surface wave tomography shows that the North Anatolian Fault Zone is a vertical zone of low S-wave velocity (2.8 – 3.0 km s−1) in the top 10 km of the crust. I also detect further low velocity anomalies (1.2 – 1.6 km s−1) associated with ~ 3 km deep pull-apart sedimentary basins along both branches of the North Anatolian Fault Zone. Both branches of the North Anatolian Fault appear to skirt the edges of the Armutlu Block, a tectonic unit of crystalline rocks that exhibits high S-wave velocity (3.2 – 3.6 km s−1). It is likely that the Armutlu Block has a strong rheology, and localises strain along the faults at its northern and southern edges. I also measure the azimuthal anisotropy of the phase velocity observations, which displays an average magnitude of ~ 2.5% with a fast direction of 70◦ from north. The 70◦ fast direction aligns parallel with the direction of maximum extension in the Izmit-Adapazari region, and indicates that deformation-aligned mineral fabrics may dominate the anisotropy signal in the top 10 km of the crust.
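The core interferometric operation, autocorrelating a record so that a free-surface reverberation appears as a peak at its two-way travel time, can be illustrated with synthetic data (a toy NumPy example, not the thesis workflow; the delayed, attenuated copy of the noise stands in for the reverberation):

```python
import numpy as np

def autocorrelate(trace):
    """One-sided, zero-lag-normalized autocorrelation of a seismic trace."""
    trace = trace - trace.mean()
    ac = np.correlate(trace, trace, mode="full")[len(trace) - 1:]
    return ac / ac[0]

# Synthetic record: white noise plus a delayed, attenuated copy of itself,
# mimicking a reverberation with a two-way travel time of 50 samples.
rng = np.random.default_rng(0)
noise = rng.standard_normal(2000)
lag = 50
trace = noise.copy()
trace[lag:] += 0.5 * noise[:-lag]
ac = autocorrelate(trace)
peak = int(np.argmax(ac[10:200])) + 10  # skip the zero-lag peak
print(peak)  # the reverberation shows up as a peak near lag 50
```

Stacking many such autocorrelations over time and across a dense array is what turns this single-trace idea into a reflection image.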
APA, Harvard, Vancouver, ISO, and other styles
26

Lii, Neal Yi-Sheng. "Fault-tolerant multi-sensor fusion for drive-by-wire driver interface design." Thesis, University of Cambridge, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.611434.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Vishnu, Abhinav. "High performance and network fault tolerant MPI with multi-pathing over infiniBand." The Ohio State University, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=osu1196262906.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Garg, Mohit. "Generalized Consensus for Practical Fault-Tolerance." Thesis, Virginia Tech, 2018. http://hdl.handle.net/10919/85049.

Full text
Abstract:
Despite extensive research on Byzantine Fault Tolerant (BFT) systems, overheads associated with such solutions preclude widespread adoption. Past efforts such as the Cross Fault Tolerance (XFT) model address this problem by making a weaker assumption that a majority of processes are correct and communicate synchronously. Although XPaxos of Liu et al. (using the XFT model) achieves similar performance as Paxos, it does not scale with the number of faults. Also, its reliance on a single leader introduces considerable downtime in case of failures. This thesis presents Elpis, the first multi-leader XFT consensus protocol. By adopting the Generalized Consensus specification from the Crash Fault Tolerance model, we were able to devise a multi-leader protocol that exploits the commutativity property inherent in the commands ordered by the system. Elpis maps accessed objects to non-faulty processes during periods of synchrony. Subsequently, these processes order all commands which access these objects. Experimental evaluation confirms the effectiveness of this approach: Elpis achieves up to 2x speedup over XPaxos and up to 3.5x speedup over state-of-the-art Byzantine Fault-Tolerant Consensus Protocols.
Master of Science
Online services like Facebook, Twitter, Netflix and Spotify, as well as cloud services like Google and Amazon, serve millions of users, including individuals as well as organizations. They use many distributed technologies to deliver a rich experience. The distributed nature of these technologies has removed geographical barriers to accessing data, services, software, and hardware. An essential aspect of these technologies is the concept of shared state. Distributed databases with multiple replicated data nodes are an example of this shared state. Maintaining replicated data nodes provides several advantages, such as (1) availability, so that in case one node goes down the data can still be accessed from other nodes, (2) quick response times, since placing data nodes closer to the user allows data to be obtained quickly, and (3) scalability, by enabling multiple users to access different nodes so that a single node does not become a bottleneck. To maintain this shared state, some mechanism is required to guarantee consistency; that is, the copies of the shared state must be identical on all the data nodes. This mechanism is called consensus, and several such mechanisms exist in practice today which use the Crash Fault Tolerance (CFT) model. The CFT model implies that these mechanisms provide consistency in the presence of nodes crashing. While the state of the art in security has moved from assuming a trusted environment inside a firewall to a perimeter-less and semi-trusted environment with every service living on the internet, only the application layer is typically secured, while the core is built with only crashes in mind. While there exists comprehensive research on secure consensus mechanisms which utilize the Byzantine Fault Tolerance (BFT) model, the extra costs required to implement these mechanisms and their comparatively lower performance in a geographically distributed setting have impeded widespread adoption. 
A recently proposed model, Cross Fault Tolerance (XFT), tries to find a cross between these models: achieving security while paying no extra cost. This thesis presents Elpis, a consensus mechanism which uses precisely this model and secures the shared state from its core, without modifications to existing setups, while delivering high performance and low response times. We perform a comprehensive evaluation on AWS and demonstrate that Elpis achieves a 3.5x speedup over the state-of-the-art while improving response times by as much as 50%.
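The commutativity property that Elpis exploits, namely that only conflicting commands need to be totally ordered, can be sketched as a simple conflict check; the command representation below is a hypothetical illustration of the Generalized Consensus idea, not the protocol's actual data structures:

```python
def conflict(a, b):
    """Two commands commute unless they touch a common object and at least one writes."""
    return bool(a["objs"] & b["objs"]) and (a["write"] or b["write"])

read_x  = {"objs": {"x"}, "write": False}
write_x = {"objs": {"x"}, "write": True}
write_y = {"objs": {"y"}, "write": True}

print(conflict(read_x, write_x))   # must be ordered: same object, one write
print(conflict(write_x, write_y))  # disjoint objects: can commit concurrently
print(conflict(read_x, read_x))    # reads always commute
```

Mapping each object to a single ordering process, as Elpis does, guarantees that any two conflicting commands are serialized by the same owner.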
APA, Harvard, Vancouver, ISO, and other styles
29

Schwarz, Thoralf A. "Uncertainty Analysis of a Fault Detection and Isolation Scheme for Multi-Agent Systems." Thesis, KTH, Reglerteknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-104013.

Full text
Abstract:
Diagnostic techniques in model-based fault detection and isolation approaches are often based on residuals. If the residuals become greater than a certain threshold, an alarm can be triggered. However, disturbances, such as those caused by model uncertainty, affect the behavior of the residuals and therefore the performance of the diagnostic system. Fault detection becomes a matter of security when applied in multi-agent systems, since their distributed nature offers adversaries possibilities to attack the system. This thesis considers disturbances caused by model uncertainty, which is often encountered during implementation. Their influence on a model-based fault detection and isolation scheme in multi-agent systems is analyzed, and an evaluation technique for the residuals is proposed. Different attack scenarios are considered and their influence on the residuals is discussed. Finally, experimental results substantiate the proposed approaches.
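The residual-threshold scheme described above can be sketched in a few lines; the signals and the threshold value are illustrative, and in practice the threshold must be tuned to the model uncertainty so that disturbances alone do not raise alarms:

```python
import numpy as np

def residual_alarm(measured, predicted, threshold):
    """Flag time steps where the residual (measurement minus model prediction)
    exceeds a threshold chosen to accommodate model uncertainty."""
    residual = np.abs(np.asarray(measured) - np.asarray(predicted))
    return residual > threshold

predicted = np.zeros(6)  # nominal model output
measured = np.array([0.02, -0.03, 0.01, 0.9, 0.85, 0.04])  # a fault appears at step 3
alarms = residual_alarm(measured, predicted, threshold=0.1)
print(alarms.tolist())
```

Too small a threshold makes the small model-uncertainty residuals (steps 0 to 2) trigger false alarms; too large a threshold lets an attack or fault hide below it, which is exactly the trade-off the thesis analyzes.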
APA, Harvard, Vancouver, ISO, and other styles
30

Al-Hinai, Suleiman Mohammed. "Multi-phase fluid flow properties of fault rocks : implication for production simulation models." Thesis, University of Leeds, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.581870.

Full text
Abstract:
It is becoming increasingly common practice to model the impact of faults on fluid flow within petroleum reservoirs by applying transmissibility multipliers, calculated from the single-phase permeability of fault rocks, to the grid-blocks adjacent to faults in production simulations. The multi-phase flow properties (e.g. relative permeability and capillary pressure) of fault rocks are not considered, because special core analysis has never previously been conducted on fault rock samples. The principal aim of this thesis is to fill this knowledge gap. Two distinct approaches have been adopted. First, a considerable number of experiments have been conducted to measure the multi-phase flow properties of faults. The measurements represent different types of fault rocks: cataclastic fault rocks, and fault rocks in impure sandstone; a significant amount of effort was needed to evaluate and design new experimental methods. Second, an attempt has also been made to numerically model the multi-phase flow behaviour of fault rocks; several numerical techniques (lattice Boltzmann method, pore-scale network modelling) have been used. In addition, production simulation modelling has been conducted to investigate the implications of the results. The relative permeability measurements were made using a gas pulse-decay technique on samples whose water saturation was varied using vapour chambers. The measurements indicate that if the same cataclastic fault rocks were present in gas reservoirs of the southern Permian Basin they would have k_rg values of < 0.02. Such a large reduction in gas effective permeability was also seen for tight gas sandstones and siltstones. However, the steady-state oil relative permeability measurements for a kaolin-rich sample, which represents an analogue for faults in impure sandstone, were found to be higher than those for the cataclastic fault rocks. The samples studied also show different sensitivities to effective stress. 
The gas relative permeability measurements proved far more stress sensitive than the single-phase permeability values. Pore-scale network models have a strong capability in modelling the relative permeability and capillary pressure curves for such low permeability rocks. The results predicted by the model were in good agreement with the experimental data presented in this work. Similarly, the lattice Boltzmann method was found to have a strong capability for modelling multi-phase fluid flow in a variety of situations.
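For illustration, the qualitative shape of a gas relative permeability curve can be sketched with a Corey-type model; the exponent and end-point saturations below are hypothetical placeholders, whereas the thesis measures k_rg experimentally on fault rock samples:

```python
def corey_krg(sw, swc=0.2, sgr=0.05, ng=2.0, krg_max=1.0):
    """Corey-type gas relative permeability as a function of water saturation sw.
    swc: connate water saturation; sgr: residual gas saturation (illustrative values)."""
    s_eff = (1.0 - sw - sgr) / (1.0 - swc - sgr)  # normalized gas saturation
    s_eff = min(max(s_eff, 0.0), 1.0)
    return krg_max * s_eff ** ng

# Gas relative permeability collapses as water saturation rises:
for sw in (0.2, 0.5, 0.8):
    print(round(corey_krg(sw), 3))
```

The steep drop at high water saturation is why multi-phase fault rock properties, and not just single-phase permeability, matter for transmissibility multipliers.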
APA, Harvard, Vancouver, ISO, and other styles
31

Jiang, Chen. "MB-FICA: An ADL framework for multi-bit fault injection and coverage analysis." Thesis, McGill University, 2014. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=123312.

Full text
Abstract:
Safety-critical systems (SCS) may experience soft errors due to upsets caused by external events such as cosmic rays, packaging radiation and thermal neutrons. Traditional error modeling techniques often only address single-bit corruptions, and analysis based on popular techniques such as the architectural vulnerability factor (AVF) treats each bit as independent. However, recent studies have shown a dramatic increase in multi-bit upsets (MBU), where the failure of a single bit is highly correlated with its neighboring bits. This phenomenon is due to shrinking transistors and the resulting increase in transistor density, making a particle strike capable of corrupting multiple bits at a time. To assist designers with MBU mitigation in microprocessor register files (RF), we have developed a novel framework (available at http://bhm.ece.mcgill.ca/~mb-fica) to simulate and analyze the effect of MBU and the effectiveness of fault tolerance techniques. Unlike prior work, our approach performs fault injection in the microarchitecture, including mitigation technologies, and simulates the consequent behavior of the system running various benchmarks. In this framework, we (a) consider the effect of SRAM layout on MBU patterns, (b) account for the data-dependent nature of transient upsets, and (c) run benchmarks to completion to accurately evaluate fault coverage under different mitigation techniques. Fault injection is computationally expensive, especially in the context of MBU; consequently, we propose a suite of fault injection acceleration techniques that reduce the execution time of individual trials by only simulating mitigation techniques when faults are present, and stopping simulation entirely when all errors have been detected or corrected. When evaluating parity, SECDED, and 2-bit 2D ECC, our results demonstrate a speedup in fault injection performance of 14x on average, and up to nearly 60x in one case.
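The correlated nature of MBUs, and why simple parity cannot catch an even number of adjacent flips, can be illustrated with a small bit-flip sketch (illustrative only; the framework itself injects faults into a simulated microarchitecture rather than into raw integers):

```python
def inject_mbu(word, start_bit, width, num_bits=2):
    """Flip num_bits adjacent bits starting at start_bit in a width-bit word,
    modelling a spatially correlated multi-bit upset."""
    mask = 0
    for i in range(num_bits):
        mask |= 1 << ((start_bit + i) % width)
    return word ^ mask

def parity(word):
    """Even parity: detects any odd number of flipped bits, misses even counts."""
    return bin(word).count("1") % 2

value = 0b1010_1100
corrupted = inject_mbu(value, start_bit=2, width=8, num_bits=2)
print(f"{corrupted:08b}")
# An adjacent 2-bit upset flips an even number of bits, so parity misses it:
print(parity(value) == parity(corrupted))  # True -> undetected by parity
```

This is precisely the failure mode that motivates evaluating SECDED and 2-bit 2D ECC, which can detect or correct such adjacent double flips.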
APA, Harvard, Vancouver, ISO, and other styles
32

Emmanouilidis, Christos. "Evolutionary multi-objective feature selection and its application to industrial machinery fault diagnosis." Thesis, University of Sunderland, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.391024.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Hamilton, Kevin Michael. "Heuristic based multi-manipulator motion planning in time varying environments with fault tolerance." Thesis, Queen's University Belfast, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.388093.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Taras, Petrica. "Investigation on multi-physics modelling of fault tolerant stator mounted permanent magnet machines." Thesis, University of Sheffield, 2018. http://etheses.whiterose.ac.uk/22449/.

Full text
Abstract:
This thesis investigates stator-mounted permanent magnet machines from the point of view of fault-tolerant capability. The topologies studied are switched flux (and its C-Core, E-Core and modular derivatives), doubly salient and flux reversal permanent magnet machines. The study focuses on fault-mode operation of these machines, looking at severe conditions such as short-circuit and irreversible demagnetization. The temperature dependence of the permanent magnet properties is taken into account. A complex multi-physics model is developed in order to assess the thermal state evolution of the switched flux machine during both healthy and faulty operation modes. This model couples the electro-mechanical domain with the thermal one, and is thus able to cover a large range of operating conditions. It also addresses issues such as large computational time and resource requirements while still maintaining accuracy. Experimental results are also provided for each chapter. A hierarchy in terms of fault-tolerant capability is established. A good compromise can be reached between performance and fault-tolerant capability. The mechanism of the magnet irreversible demagnetization process is explained based on the magnetic circuit configuration. It is also found that the studied topologies are extremely resilient against the demagnetizing influence of the short-circuit current, and the magnet demagnetization is almost only affected by temperature.
APA, Harvard, Vancouver, ISO, and other styles
35

Huang, Eric Guang Jye M. S. "An Improved Fault Detection Methodology for Semiconductor Applications Based on Multi-regime Identification." University of Cincinnati / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1377870901.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Partin, Michael. "Scalable, Pluggable, and Fault Tolerant Multi-Modal Situational Awareness Data Stream Management Systems." Wright State University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=wright1567073723628721.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Väyrynen, Mikael. "Fault-Tolerant Average Execution Time Optimization for General-Purpose Multi-Processor System-On-Chips." Thesis, Linköping University, Linköping University, Linköping University, Department of Computer and Information Science, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-17705.

Full text
Abstract:

Due to semiconductor technology development, fault tolerance has become important not only for safety-critical systems but also for general-purpose (non-safety-critical) systems. However, instead of guaranteeing that deadlines are always met, for general-purpose systems it is important to minimize the average execution time (AET) while ensuring fault tolerance. For a given job and a soft (transient) no-error probability, we define mathematical formulas for AET using voting (active replication), rollback-recovery with checkpointing (RRC) and a combination of these (CRV), where bus communication overhead is included. And, for a given multi-processor system-on-chip (MPSoC), we define integer linear programming (ILP) models that minimize the AET, including bus communication overhead, when: (1) selecting the number of checkpoints when using RRC or a combination that includes RRC, (2) finding the number of processors and the job-to-processor assignment when using voting or a combination that includes voting, and (3) selecting the fault-tolerance scheme (voting, RRC or CRV) to use for each job. Experiments demonstrate significant savings in AET.
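The checkpoint-count trade-off described in the abstract can be illustrated with a generic textbook-style checkpointing model; this is not the thesis's formulation, and the symbols (`T`, `c`, `lam`) and values are illustrative assumptions only:

```python
import math

def aet_rrc(T: float, n: int, c: float, lam: float) -> float:
    """Modelled average execution time of a job of fault-free length T,
    split into n segments by rollback-recovery with checkpointing (RRC).

    A segment of length T/n fails with probability 1 - exp(-lam * T/n)
    (Poisson-distributed soft errors at rate lam) and is retried until it
    succeeds, so the expected number of attempts is exp(lam * T/n); each
    attempt also pays the checkpoint overhead c."""
    seg = T / n
    return n * (seg + c) * math.exp(lam * seg)

def best_checkpoint_count(T: float, c: float, lam: float, n_max: int = 100) -> int:
    """Segment count minimising the modelled AET (exhaustive search)."""
    return min(range(1, n_max + 1), key=lambda n: aet_rrc(T, n, c, lam))

if __name__ == "__main__":
    T, c, lam = 100.0, 1.0, 0.05
    n_opt = best_checkpoint_count(T, c, lam)
    print(n_opt, round(aet_rrc(T, n_opt, c, lam), 1))
```

More checkpoints shorten the re-executed segment after a fault but add overhead, so the modelled AET has an interior minimum; with a negligible fault rate, the optimum collapses to a single segment.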

APA, Harvard, Vancouver, ISO, and other styles
38

Manzoor, Umar. "An intelligent fault tolerant multi-agent framework for automated node monitoring and software deployment." Thesis, University of Salford, 2011. http://usir.salford.ac.uk/26799/.

Full text
Abstract:
Computer networks today are far more complex than in the 1980s, and managing such networks is a challenging job for the network management team. With the ever-growing complexity of computer networks and the limitations of the available assistance software/tools, it has become difficult, hectic, and time-consuming for the network management team to execute tasks such as traffic monitoring, node monitoring, performance monitoring and software deployment over the network. To address these issues, researchers as well as leading IT companies have moved towards a new paradigm called Autonomic Computing, whose main aim is the development of self-managing systems. An autonomic system makes decisions autonomously, constantly optimizes its status and adapts itself to changing conditions. This research proposes a new autonomic framework based on the multi-agent paradigm for autonomous network management. In this study, we particularly focused on monitoring node activities and software deployment; the aims were 1) to minimize the human interaction required to perform these tasks, which optimizes task processing time and reduces human resource requirements, and 2) to overcome some of the major problems related to these tasks (such as autonomous monitoring and autonomous installation of any type/kind of software). The proposed framework is fully autonomous, has an effective mechanism for achieving the said tasks and is based on a layered architecture. Once initialized with given rules/domain knowledge, it accomplishes the task(s) autonomously without human interaction/intervention. It uses mobile agents for task execution, and faults/failures can affect the performance of the system; therefore, to make the system robust, fault tolerance mechanisms are incorporated at different levels of the system.
The framework is implemented in Java using the Java Agent Development (JADE) framework and supports platform independence; however, it has been tested and evaluated only in a Microsoft Windows environment. In this research, the major challenges faced were 1) capturing unknown malicious applications running over the network, 2) developing a generic approach that works for any type/kind of software set, 3) automatically generating the events required in the software deployment process, and 4) developing an efficient approach for application setup transfer over the network. The first challenge, related to monitoring node activities, was addressed by analysing application content (i.e. text, images and video) using text analysis / image processing algorithms. A domain-specific ontology was developed and populated with known malicious application content for categorization purposes. The concepts extracted in the content analysis phase were mapped to domain-specific ontology concepts and assigned a score; the application was assigned the ontology class (if any) with the highest score. The other challenges, related to software deployment, were addressed by launching the application setup autonomously; for each step, window content (i.e. text and controls) was extracted, filtered using a text processing algorithm and classified using a rule-based classifier. After classification, the appropriate window event was generated autonomously. The reason for using a rule-based classifier is that the software deployment process is standardized and every installer follows the same standard. Furthermore, an exponential file transfer algorithm was incorporated in the framework to transfer application setups smartly and efficiently over the network. We have run this system on an experimental basis at the university campus, in seven labs equipped with 20-300 PCs running Microsoft Windows (any version).
For the automated node monitoring evaluation, one hundred volunteers were initially selected for experimentation in these labs, and all of them were told about the system. After the initial experimentation, we announced the system on the university blackboard and on the walls/doors of the labs, and opened the labs to all users. The announcement clearly stated: "Your activities will be monitored and the collected data will be used only for educational/research purposes". Activities were monitored for one month and the monitored data was stored in a database for analysis. For the software deployment evaluation, some popular software packages (such as Microsoft Office, Adobe Reader and Firefox) were deployed. The proposed framework has been tested on different scenarios, and the results show that the overall performance of the proposed approach, in terms of efficiency and time, is far better than existing approaches/frameworks.
APA, Harvard, Vancouver, ISO, and other styles
39

Davies, Jessica. "Modelling, control and monitoring of high redundancy actuation." Thesis, Loughborough University, 2010. https://dspace.lboro.ac.uk/2134/6288.

Full text
Abstract:
The High Redundancy Actuator (HRA) project investigates a novel approach to fault tolerant actuation, which uses a high number of small actuation elements, assembled in series and parallel in order to form a single intrinsically fault tolerant actuator. Element faults affect the maximum capability of the overall actuator, but through control techniques, the required performance can be maintained. This allows higher levels of reliability to be attained in exchange for less over-dimensioning in comparison to conventional redundancy techniques. In addition, the combination of both serial and parallel elements provides intrinsic accommodation of both lock-up and loose faults. Research to date has concentrated on HRAs based on electromechanical technology, of relatively low order, controlled through passive Fault Tolerant Control (FTC) methods. The objective of this thesis is to expand upon this work. HRA configurations of higher order, formed from electromagnetic actuators are considered. An element model for a moving coil actuator is derived from first principles and verified experimentally. This element model is then used to form high-order, non-linear HRA models for simulation, and reduced-order representations for control design. A simple, passive FTC law is designed for the HRA configurations, the results of which are compared to a decentralised, active FTC approach applied through a framework based upon multi-agent concepts. The results indicate that limited fault tolerance can be achieved through simple passive control, however, performance degradation occurs, and requirements are not met under theoretically tolerable fault levels. Active FTC offers substantial performance improvements, meeting the requirements of the system under the vast majority of theoretically tolerable fault scenarios. However, these improvements are made at the cost of increased system complexity and a reliance on fault detection. 
Fault Detection (FD) and health monitoring of the HRA is explored. A simple rule-based FD method, for use within the active FTC, is described and simulated. An interacting multiple model FD method is also examined, which is more suitable for health monitoring in a centralised control scheme. Both of these methods provide the required level of fault information for their respective purposes. However, they achieve this through the introduction of complexity. The rule-based method increases system complexity, requiring high levels of instrumentation, and conversely the interacting multiple model approach involves complexity of design and computation. Finally, the development of a software demonstrator is described. Experimental rigs at the current project phase are restricted to relatively low numbers of elements for practical reasons such as cost, space and technological limitations. Hence, a software demonstrator has been developed in Matlab/Simulink which provides a visual representation of HRAs with larger numbers of elements, and varied configuration for further demonstration of this concept.
APA, Harvard, Vancouver, ISO, and other styles
40

Ahmaida, Anwar M. "Condition monitoring and fault diagnosis of a multi-stage gear transmission using vibro-acoustic signals." Thesis, University of Huddersfield, 2018. http://eprints.hud.ac.uk/id/eprint/34755/.

Full text
Abstract:
Gearbox condition monitoring (CM) plays a vital role in ensuring the reliability and operational efficiency of a wide range of industrial facilities such as wind turbines and helicopters. Many technologies have been investigated intensively for more accurate CM of rotating machines using vibro-acoustic signature analysis. However, a comparison of CM performance between surface vibrations and airborne acoustics has not been carried out with the use of emerging signal processing techniques. This research has focused on a systematic evaluation of CM performance using vibrations obtained from the surface of a multi-stage gearbox housing and the airborne sound obtained remotely but close to the gearbox, in conjunction with state-of-the-art signal processing techniques, in order to provide efficient and effective CM for gear transmissions subject to gradual and progressive deterioration. By completing the comparative studies, this research has resulted in a number of new findings that make significant contributions to knowledge, detailed as follows. In general, through a comprehensive review of advancement in the subject, the research has been carried out by integrating improved dynamic modelling, more realistic experimental verification and more advanced signal processing approaches. The improved modelling has led to an in-depth understanding of the nonlinear modulation in vibro-acoustic signals due to wear effects. Thereafter, Time Synchronous Averaging (TSA) and the Modulation Signal Bispectrum (MSB) are identified as the most promising signal processing methods to fulfil the evaluation because of their unique properties of simultaneous noise reduction and modulation enhancement.
The more realistic tests have demonstrated that a run-to-failure test is necessary to develop effective diagnostic tools, as it produces datasets from gear transmissions where deterioration naturally progresses over long operation, rather than faults created artificially in gear systems, which is common in the majority of studies and makes their results unreliable. In particular, the evaluation studies have clarified a number of key issues in the realisation of gearbox diagnostics based on TSA and MSB analysis of the vibrations from two accelerometers and the acoustics from two microphones in monitoring the run-to-failure process, which showed slight gear wear of two back-to-back multi-stage helical gearboxes under variable load and speed operations. TSA analysis of vibration and acoustic signals allows accurate monitoring and diagnosis of the gradual deterioration in the lower-speed transmission of both tested gearboxes. However, it cannot give a correct indication for the higher-speed stages in the second gearbox, as the reference angle signal is too erroneous due to the distortion of long transmission trains. In addition, acoustic signals can indicate that there is a small deterioration in the higher-speed transmission of the first gearbox. MSB analysis of vibration and sound signals yields more accurate monitoring and diagnostic results for the deterioration in the four transmission stages of the two tested gearboxes. MSB magnitudes of both lower-speed transmissions show monotonic increases with operational time, and the increments over a longer period are in excess of three times the baselines; the deteriorations are therefore regarded as severe. For the two higher-speed transmissions, the MSB of vibrations and acoustics shows small deteriorations in the later operating hours.
Comparatively, acoustic-signal-based diagnostics can outperform vibration-based diagnostics, as it can provide an early indication of deterioration and a correct diagnosis of the faults: microphones perceive a large area of dynamic responses from the gearbox housing, whereas accelerometers collect a very localised response which can be distorted by transmission paths. In addition, MSB analysis can outperform conventional TSA, as it retains all diagnostic information regarding the rotating systems and can be implemented without any additional reference channels.
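Of the two techniques compared throughout this abstract, Time Synchronous Averaging is simple enough to sketch: the record is cut into whole shaft revolutions and averaged, so components synchronous with the shaft survive while asynchronous noise cancels. A minimal numpy sketch with synthetic data follows; the sample counts and signal model are illustrative assumptions, not the thesis's measurements:

```python
import numpy as np

def tsa(signal: np.ndarray, samples_per_rev: int) -> np.ndarray:
    """Time Synchronous Average: average the signal over whole revolutions.

    Assumes the signal has already been resampled to a fixed number of
    samples per shaft revolution (in practice a tachometer or encoder
    signal drives this angular resampling)."""
    n_revs = len(signal) // samples_per_rev
    frames = signal[: n_revs * samples_per_rev].reshape(n_revs, samples_per_rev)
    return frames.mean(axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    spr, revs = 256, 200
    t = np.arange(spr * revs)
    mesh = np.sin(2 * np.pi * 8 * t / spr)       # shaft-synchronous tone, 8 cycles/rev
    noisy = mesh + rng.normal(0.0, 1.0, t.size)  # broadband noise
    avg = tsa(noisy, spr)
    print(np.abs(avg - mesh[:spr]).max())        # far smaller than the raw noise level
```

Averaging over `n_revs` revolutions suppresses incoherent noise by roughly a factor of `sqrt(n_revs)` while leaving shaft-synchronous components untouched, which is exactly the property the abstract attributes to TSA.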
APA, Harvard, Vancouver, ISO, and other styles
41

Tzelepis, Dimitrios. "Protection, fault location & control in high voltage multi terminal direct current (HV-MTDC) grids." Thesis, University of Strathclyde, 2017. http://digitool.lib.strath.ac.uk:80/R/?func=dbin-jump-full&object_id=28877.

Full text
Abstract:
With an increased penetration of renewable energy sources, balancing supply and demand is likely to be one of the major challenges in future power systems. Consequently, there is a growing need for meshed interconnections between countries in order to effectively share the available power capacity and thereby increase operational flexibility and security of supply. This has been raised as a major issue in Europe, but also in Asia and the United States of America. The concept of a supergrid has been identified as a possible solution towards a new backbone transmission system, permitting massive integration of renewable energy sources. High voltage direct current (HVDC) links, utilising voltage-source converters (VSCs), are expected to become the preferred technology for the realisation of such a supergrid. This is due to the fact that such systems offer improvements in terms of system stability, lower cost and lower operational losses. A natural extension of the existing point-to-point HVDC transmission technology is a multi-terminal direct-current (MTDC) system which utilises more than two VSC stations, effectively forming a DC grid. Such a configuration can provide further technological and economic advantages and hence accelerate the realisation of a supergrid. However, technical limitations still exist, and it is not yet a straightforward task to construct and operate an MTDC grid, as several outstanding issues need to be solved. Consequently, it is essential to study, analyse and address potential challenges imposed by MTDC systems in order to enable widespread adoption. Even though numerous challenges are introduced by the practical implementation of MTDC networks, this thesis deals with the challenges related to DC-side faults, which are the main issue when considering HVDC technology.
DC-side faults in HVDC systems are characterised by large inrush currents caused by the discharge of trapped energy in the system capacitances, escalating over a very short period of time. These include lumped capacitors installed on the DC side of converters, transmission line capacitances, and also the sub-module capacitors contained within modular multi-level converters. When faults occur in multi-terminal HVDC grids, the DC protection system is expected to minimise the detrimental effects by disconnecting only the faulted section while permitting the remaining healthy part of the grid to continue normal operation. Such requirements introduce the need for transient DC fault characterisation and the subsequent development of a discriminative, fast, sensitive and reliable DC protection method. Therefore, one of the main objectives of this thesis is to provide demonstrable solutions to the key challenges involved in protecting MTDC grids, and hence enable the realisation of HVDC-based supergrids. Two alternative, novel protection schemes are proposed, designed and assessed with the aid of transient simulation. The key advantages of the proposed schemes consist in enhanced reliability, fast fault detection, superior stability, and a high level of selectivity. To further validate the practical feasibility of the schemes, small-scale laboratory prototypes have been developed to test their performance under real-time fault conditions. It should also be highlighted that when a permanent fault occurs in an HVDC transmission system, accurate estimation of its location is of major importance in order to accelerate restoration, reduce system down-time, minimise repair cost, and hence increase the overall availability and reliability of HVDC grids. As such, another contribution of this thesis relates to the challenges involved in accurate fault location in HVDC networks, including non-homogeneous transmission media (i.e. lines which include multiple segments of both underground cables and overhead lines).
Two novel fault location methods have been developed and systematically assessed. It is demonstrated that the schemes can reliably identify the faulted segment of the line while consistently maintaining high accuracy of fault location across a wide range of fault scenarios. Further sensitivity analysis demonstrates that the proposed schemes are robust against noisy inputs. In its concluding section, the thesis also outlines a few possible avenues of further research in this area.
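The abstract does not detail the two fault-location methods themselves; as background, the classic two-ended travelling-wave estimate, on which such schemes typically build, can be sketched as follows. The formula is the standard textbook one, not the thesis's method, and all figures are illustrative assumptions:

```python
def two_ended_fault_location(length_km: float, wave_speed_km_ms: float,
                             t_a_ms: float, t_b_ms: float) -> float:
    """Classic two-ended travelling-wave estimate of the fault distance
    from terminal A: x = (L + v * (tA - tB)) / 2, where tA and tB are the
    surge-wavefront arrival times recorded at the two line ends
    (textbook formula only, not the thesis's method)."""
    return 0.5 * (length_km + wave_speed_km_ms * (t_a_ms - t_b_ms))

if __name__ == "__main__":
    # hypothetical fault 60 km from A on a 100 km line; surge speed ~290 km/ms
    v, L, x_true = 290.0, 100.0, 60.0
    t_a, t_b = x_true / v, (L - x_true) / v
    print(two_ended_fault_location(L, v, t_a, t_b))  # close to 60 km
```

On non-homogeneous lines the wave speed differs between cable and overhead segments, which is precisely why the faulted segment must be identified first, as the abstract describes.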
APA, Harvard, Vancouver, ISO, and other styles
42

Norris, Natasha Louise. "Implementation of Multi-Constellation Baseline Fault Detection and Exclusion Algorithm Utilizing GPS and GLONASS Signals." Ohio University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1535028817622931.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Alves, André Nunes Gomes. "Healing replicas in a software component replication system." Master's thesis, Faculdade de Ciências e Tecnologia, 2013. http://hdl.handle.net/10362/11353.

Full text
Abstract:
Dissertation submitted to obtain the degree of Master in Computer Engineering
Replication is a key technique for improving the performance, availability and fault tolerance of systems. Replicated systems exist in different settings, from large geo-replicated cloud systems to replicated databases running on multi-core machines. One feature that is often important is a mechanism to verify that replica contents remain in sync, despite any problem that may occur, e.g. silent bugs that corrupt service state. Traditional techniques for summarizing service state require that the internal service state be exactly the same after executing the same set of operations. However, for many applications this does not hold, especially if operations are allowed to execute in different orders or if different implementations are used in different replicas. In this work we propose a new approach for summarizing and recovering the state of a replicated service. Our approach is based on a novel data structure, the Scalable Counting Bloom Filter. This data structure combines the ideas in Counting Bloom Filters and Scalable Bloom Filters to create a Bloom Filter variant that allows both a delete operation and the structure to grow in size, thus adapting to the size of any service state. We propose an approach that uses this data structure to summarize the state of a replicated service while allowing concurrent operations to execute. We further propose a strategy to recover replicas in a replicated system and describe how to implement our proposed solution in two in-memory databases: H2 and HSQL. The results of the evaluation show that our approach computes the same summary when executing the same set of operations in both databases, thus allowing our solution to be used in diverse replication scenarios. The results also show that additional work on performance optimization is necessary to make our solution practical.
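The abstract describes the Scalable Counting Bloom Filter only at a high level: counters give deletion, a growable chain of filters gives unbounded size. A minimal sketch of that combination is given below; the hash scheme, growth factor and fill ratio are illustrative assumptions, not the thesis's implementation:

```python
import hashlib

class CountingBloomFilter:
    """Bloom filter with per-slot counters, so items can also be removed."""
    def __init__(self, size: int, n_hashes: int = 4):
        self.size, self.n_hashes = size, n_hashes
        self.counters = [0] * size
        self.n_items = 0

    def _positions(self, item: str):
        # derive n_hashes independent slot indices from salted SHA-256 digests
        for i in range(self.n_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item: str):
        for pos in self._positions(item):
            self.counters[pos] += 1
        self.n_items += 1

    def remove(self, item: str):
        for pos in self._positions(item):
            self.counters[pos] -= 1
        self.n_items -= 1

    def __contains__(self, item: str) -> bool:
        return all(self.counters[p] > 0 for p in self._positions(item))

class ScalableCountingBloomFilter:
    """Grow-on-demand chain of counting filters: deletion (from the
    counters) plus unbounded growth (from the scalable chain)."""
    def __init__(self, initial_size: int = 1024, growth: int = 2,
                 fill_ratio: float = 0.5):
        self.growth, self.fill_ratio = growth, fill_ratio
        self.filters = [CountingBloomFilter(initial_size)]

    def add(self, item: str):
        last = self.filters[-1]
        if last.n_items >= self.fill_ratio * last.size:
            last = CountingBloomFilter(last.size * self.growth)
            self.filters.append(last)
        last.add(item)

    def remove(self, item: str):
        # decrement counters in the first filter that (probably) holds the item
        for f in self.filters:
            if item in f:
                f.remove(item)
                return

    def __contains__(self, item: str) -> bool:
        return any(item in f for f in self.filters)
```

As with any Bloom filter, membership answers can be false positives but never false negatives for items actually added, which is what makes the structure usable as a compact state summary.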
APA, Harvard, Vancouver, ISO, and other styles
44

Ke, Ziwei. "Single-Submodule Open-Circuit Fault Diagnosis for a Modular Multi-level Converter Using Artificial Intelligence-based Techniques." The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu156262961593976.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Kendeck, Clement Ndjewel. "Fault ride-through capability of multi-pole permanent magnet synchronous generator for wind energy conversion system." Thesis, Cape Peninsula University of Technology, 2019. http://hdl.handle.net/20.500.11838/3060.

Full text
Abstract:
Thesis (MEng (Electrical Engineering))--Cape Peninsula University of Technology, 2019
Wind has become one of the renewable energy technologies with the fastest rate of growth. Consequently, global wind power generating capacity is also experiencing a tremendous increase. This tendency is expected to continue as time goes by, with the continuously growing energy demand, rising fossil fuel costs combined with their scarcity, and most importantly pollution and climate change concerns. However, as the penetration level increases, instabilities in the power system are also more likely to occur, especially in the event of grid faults. It is therefore necessary that wind farms comply with grid code requirements in order to prevent the power system from collapsing. One of these requirements is that wind generators should have fault ride-through (FRT) capability, that is, the ability not to disconnect from the grid during a voltage dip. In other words, wind turbines must withstand grid faults up to certain levels and durations without completely cutting off their production. Moreover, a controlled amount of reactive power should be supplied to the grid in order to support voltage recovery at the connection point. Variable-speed wind turbines are more likely to achieve the FRT requirement because of the type of generators they use and their advanced power electronic controllers. In this category, the permanent magnet synchronous generator (PMSG) concept seems to stand out because of its numerous advantages, amongst which is its capability to meet FRT requirements compared to other topologies. In this thesis, a 9 MW grid-connected wind farm model is developed with the aim of achieving FRT according to the South African grid code specifications. The wind farm consists of six 1.5 MW direct-drive multi-pole PMSG wind turbines connected to the grid through a fully rated, two-level back-to-back voltage source converter. The model is developed using the SimPowerSystems component of MATLAB/Simulink.
To reach the FRT objectives, the grid-side controller is designed in such a way that the system can inject reactive current into the grid to support voltage recovery in the event of a grid voltage dip. Additionally, a braking resistor circuit is designed as a protection measure for the power converter, thereby ensuring safe continuous operation during grid disturbances.
APA, Harvard, Vancouver, ISO, and other styles
46

Velaga, Srikirti. "Fault Modeling and Analysis for Multiple-Voltage Power Supplies in Low-Power Design." University of Cincinnati / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1368026670.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Heiner, Brandon D. "Multi-Scale Neotectonic Study of the Clear Lake Fault Zone in the Sevier Desert Basin (Central Utah)." BYU ScholarsArchive, 2014. https://scholarsarchive.byu.edu/etd/3840.

Full text
Abstract:
A multi-scale, high-resolution geophysical and geological study was conducted in the Sevier Desert, central Utah, within the Colorado Plateau-Basin and Range Transition Zone. The region is marked by Quaternary volcanics and faulting as young as 660 yr B.P., with many fault scarps thought to have the potential for magnitude 7+ earthquakes. Three locations within the Sevier Desert, representing three different tectonic expressions of possible faulting at the surface, were selected. These include a location within surface sedimentation, a location with surface sedimentation and sub-surface basalts, and a location with basalts at the surface and very limited sedimentation. A suite of geophysical data was obtained, including P-wave and SH-wave seismic surveys and ground-penetrating radar (GPR). Auger holes, microprobe glass analysis, and mapping were also completed in order to constrain and gain a more complete understanding of the sub-surface structure. These data were used to determine whether there are sub-surface expressions of the possible surface scarps and whether all the faults within the fault zone share the same structural style. The possible surface fault expressions were found to be connected to sub-surface fault expressions, but with differing results within both sediments and basalts. Our data show that a multi-scale approach is needed to obtain a complete view of tectonic activity. Faulting in the Sevier Desert penetrates at depth and involves multiple complex styles, including some faulting that cuts recent lava flows and some that does not. The evidence also indicates that in at least some areas faulting was episodic, while in others it may reflect single events, with implications for the level of activity and hazard.
APA, Harvard, Vancouver, ISO, and other styles
48

Nguyen, Thi Thanh Quynh. "Diagnostic distribué et commande tolérante aux défauts pour les systèmes multi-agents." Thesis, Reims, 2020. http://www.theses.fr/2020REIMS006.

Full text
Abstract:
Un système multi-agents (MAS) peut-être défini par un groupe d'agents qui communiquent entre eux. Au cours de la dernière décennie, les MAS se sont révélés être une solution efficace et économique à de nombreux problèmes d'ingénierie complexes, difficiles voire impossibles à résoudre par un seul agent.Malgré l’abondance de résultats dans la littérature concernant sur le contrôle coopératif des MAS, il y a encore des points d’amélioration, en particulier en termes de fiabilité et de performance de fonctionnement du contrôle coopératif en cas de panne. Cette thèse vise une contribuer à la résolution des problèmes de diagnostic de défaut distribué et de FTC pour les MAS non homogènes/hétérogènes à topologies commutées. Dans un premier temps, une approche basée sur un observateur de détection de défaut distribué (FD) pour un réseau d'agents non homogènes ayant des topologies de commutation est proposé. Nous avons commencé par la formalisation d'un modèle virtuel correspondant à chaque agent. Ce modèle prend en compte toutes les informations locales disponibles pour l'agent, à savoir le modèle virtuel, ainsi que la fonction de commutation de topologie. Cette représentation se présente sous la forme d'un système commuté impulsif continu. Par la suite, nous présentons une approche basée sur l'IMT pour concevoir un filtre FD distribué. Dans cette approche proposée, nous utilisons des indices H_ /Hinf pour garantir la sensibilité du résidu aux défauts ainsi que sa robustesse à la perturbation. Nous utilisons également plusieurs fonctions de Lyapunov qui satisfont à la contrainte de commutation lente pour assurer la convergence des observateurs synthétisés.Par la suite, notre étude porte sur l’estimation distribuée des défauts (FE) pour un réseau d'agents non homogènes avec des défauts actionneurs et des topologies de commutation. Dans ce travail, nous continuons à utiliser le modèle virtuel commuté résultant de nos travaux sur FD pour représenter le modèle de chaque agent. 
Nous proposons une nouvelle méthode de décomposition qui permet de décomposer l'état de l'agent et de ses voisins en deux sous-états, l'un est affecté par les défauts actionneur et l'autre n'est pas affecté par les défauts. Un observateur distribué pour chaque agent est également proposé pour estimer les sous-ensembles d'état. Enfin, les estimations de défaut sont obtenues en utilisant simultanément l'estimation d'état et un différenciateur exact robuste. Il convient de noter que cette approche proposée est distribuée à la fois dans la conception et la mise en œuvre. En effet, elle n'a pas besoin des informations de l'ensemble des systèmes et elle permet également à chaque agent d'estimer ses défauts et ceux de ses voisins. Par conséquent, nous pouvons ainsi réduire les temps de calcul et de communication lors d’une mise en œuvre dans des applications pratiques.Enfin, le développement de FE et FTC pour un réseau d'agents hétérogènes soumis à des défauts actionneurs et à un consensus de sortie est abordé. L'objectif est d'améliorer par FTC la fiabilité et les performances lors d’un fonctionnement coopératif des MAS hétérogènes avec présence de défauts. Cette approche est basée sur des modèles de référence internes et un observateur d'estimation des défauts. Les agents s'appuient sur les informations fournies par les modules FE et ne nécessitent aucune connaissance a priori sur le défaut. Un FE décentralisé basé sur l'observateur est synthétisé pour estimer les états et les défauts de l'actionneur. La conception des observateurs est donnée après des décompositions d'état en utilisant des matrices de transformation. Ensuite, un contrôleur à consensus tolérant aux défauts est proposé. Il utilise l'état estimé et les défauts estimés résultant de l'observateur d'estimation des défauts. L’accord entre les agents est obtenu en résolvant le problème du consensus des références internes
A multi-agent system (MAS) can be defined as a group of agents that communicate with each other. Over the past decade, MAS have proven to be an effective and economical solution to many complex engineering problems that are difficult or even impossible to solve with a single agent. Despite the abundance of results in the literature on cooperative control of MAS, there is still room for improvement, in particular in terms of the reliability and performance of cooperative control in the event of a failure. This thesis aims to contribute to solving the problems of distributed fault diagnosis and FTC for non-homogeneous/heterogeneous MAS with switched topologies. First, an approach based on a distributed fault detection (FD) observer for a network of non-homogeneous agents with switching topologies is proposed. We start by formalizing a virtual model corresponding to each agent. This model takes into account all the local information available to the agent, namely the virtual model as well as the topology switching function, and takes the form of a continuous-time impulsive switched system. Next, we present an LMI-based approach to design a distributed FD filter. In this approach, we use H_/H∞ indices to guarantee both the sensitivity of the residual to faults and its robustness to disturbances. We also use multiple Lyapunov functions satisfying a slow-switching constraint to ensure the convergence of the synthesized observers. Subsequently, our study focuses on distributed fault estimation (FE) for a network of non-homogeneous agents with actuator faults and switching topologies. In this work, we continue to use the switched virtual model resulting from our work on FD to represent the model of each agent.
We propose a new decomposition method which splits the state of the agent and its neighbors into two sub-states, one affected by the actuator faults and the other unaffected by them. A distributed observer for each agent is also proposed to estimate these sub-states. Finally, the fault estimates are obtained by combining the state estimates with a robust exact differentiator. It should be noted that this approach is distributed both in design and in implementation: it does not need information about the whole system, and it allows each agent to estimate its own faults and those of its neighbors. As a result, computation and communication times can be reduced when the method is implemented in practical applications. Finally, the development of FE and FTC for a network of heterogeneous agents subject to actuator faults and an output-consensus objective is addressed. The goal is to use FTC to improve the reliability and performance of the cooperative operation of heterogeneous MAS in the presence of faults. The approach is based on internal reference models and a fault-estimation observer. The agents rely on the information provided by the FE modules and require no prior knowledge of the fault. A decentralized observer-based FE scheme is synthesized to estimate the states and the actuator faults, the observers being designed after state decompositions using transformation matrices. A fault-tolerant consensus controller is then proposed; it uses the estimated state and the estimated faults provided by the fault-estimation observer. Agreement between the agents is achieved by solving the consensus problem for the internal reference models.
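The residual-based fault-estimation idea summarized in this abstract can be caricatured with a toy sketch (hypothetical and not the thesis algorithm; the scalar dynamics, the complete communication graph, and the constant fault are all illustrative assumptions): agents run a discrete-time consensus law, an additive actuator fault is injected on one agent, and each agent recovers the fault from the residual between its fault-free prediction and the measured state.

```python
# Toy illustration (not the thesis algorithm): scalar agents running a
# consensus protocol over a complete graph, with an additive actuator
# fault on agent 0 that a simple residual-based estimator recovers.

N = 4
fault = 0.5                      # constant actuator fault on agent 0
x = [0.0, 1.0, 2.0, 3.0]         # initial agent states
eps = 0.2                        # consensus step size (eps * N < 2 for stability)
f_hat = [0.0] * N                # fault estimates

for k in range(200):
    # consensus input: each agent moves toward the average of its neighbors
    u = [eps * sum(x[j] - x[i] for j in range(N)) for i in range(N)]
    x_pred = [x[i] + u[i] for i in range(N)]               # fault-free prediction
    x = [x_pred[i] + (fault if i == 0 else 0.0) for i in range(N)]
    # residual between measured and predicted state reveals the fault
    f_hat = [x[i] - x_pred[i] for i in range(N)]

print(f_hat)   # agent 0's estimate converges to the injected fault, 0.5
```

In the thesis the estimation is done per sub-state after the decomposition and with a robust exact differentiator; this sketch only conveys the residual-generation principle.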
APA, Harvard, Vancouver, ISO, and other styles
49

Dai, Pre Michele. "Analysis and design of Fault-Tolerant drives." Doctoral thesis, Università degli studi di Padova, 2008. http://hdl.handle.net/11577/3425500.

Full text
Abstract:
The field of fault-tolerant applications is surely among the most exciting and potentially innovative areas of modern electrical-motor research, where the designer has great freedom and new solutions can be explored. The cost of permanent magnets and of the drives makes it possible to develop new solutions, in particular surface-mounted permanent-magnet machines with fractional-slot windings and permanent-magnet-assisted reluctance motors. The reliability of these machines allows them to be applied in critical applications where electrical or mechanical redundancy is required. On this subject, the literature compares the performance of different solutions. In this thesis I apply a different approach, in which a mathematical model is combined with the finite element method. This approach combines the flexibility of the analytical model with the precision of the finite element method. The larger part of my research activity concerned motors with fractional-slot windings and multi-phase machines. The final part of the thesis describes the work carried out during the period spent at ABB Corporate Research Sweden. The aim of that research was to design several electric-generator solutions for wave energy, and in particular to design the optimal system as a compromise among the different components: generator, mechanical converter, inverter, etc.
APA, Harvard, Vancouver, ISO, and other styles
50

Arun, Balaji. "A Low-latency Consensus Algorithm for Geographically Distributed Systems." Thesis, Virginia Tech, 2017. http://hdl.handle.net/10919/79945.

Full text
Abstract:
This thesis presents Caesar, a novel multi-leader Generalized Consensus protocol for geographically replicated systems. Caesar achieves near-perfect availability, provides high performance (low latency and high throughput) compared to the existing state of the art, and tolerates replica failures. Recently, a number of state-of-the-art consensus protocols that implement the Generalized Consensus definition have been proposed. However, the major limitation of these existing approaches is significant performance degradation when the application workload produces conflicting requests. Caesar's main goal is to overcome this limitation by changing the way a fast decision is taken: its ordering protocol does not reject a fast decision for a client request if a quorum of nodes reply with different dependency sets for that request; it only switches to a slow decision if there is no chance to agree on the proposed order for that request. Caesar achieves this using a combination of wait conditions and logical timestamping. The effectiveness of Caesar is demonstrated through an evaluation study performed on Amazon's EC2 infrastructure using 5 geo-replicated sites. Caesar outperforms other multi-leader competitors (e.g., EPaxos) by as much as 1.7x in the presence of 30% conflicting requests, and single-leader ones (e.g., Multi-Paxos) by as much as 3.5x. The protocol is also resilient to heavy client loads, unlike existing protocols.
Master of Science
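The fast/slow decision rule described in this abstract can be sketched as follows (a hypothetical caricature, not Caesar's actual message flow; the function `decide`, the reply format, and the quorum size are all illustrative assumptions). The key point is that disagreement on dependency sets alone does not force the slow path, only rejection of the proposed order does.

```python
# Hypothetical sketch in the spirit of Caesar's decision rule: a request
# is fast-decided if a quorum accepts its proposed timestamp, even when
# replicas report different dependency sets; it falls back to a slow
# decision only when the proposed order itself cannot be agreed upon.

def decide(replies, quorum):
    """replies: list of (ts_accepted: bool, deps: set) from replicas."""
    accepted = [deps for ok, deps in replies if ok]
    if len(accepted) >= quorum:
        # union of dependency sets; disagreement does NOT force a slow path
        return "fast", set().union(*accepted)
    return "slow", None

# Three of four replicas accept the timestamp with differing dependency
# sets: the request is still fast-decided, with the merged dependencies.
replies = [(True, {"a"}), (True, {"b"}), (True, {"a", "b"}), (False, set())]
mode, deps = decide(replies, quorum=3)
print(mode, deps)
```

In the real protocol the slow path runs an extra round to agree on a new timestamp; this sketch only isolates the quorum check that distinguishes the two paths.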
APA, Harvard, Vancouver, ISO, and other styles
