Dissertations / Theses on the topic 'Electrical resilience'

Consult the top 50 dissertations / theses for your research on the topic 'Electrical resilience.'

1

Sarkar, Tuhin. "Understanding resilience in large networks." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/107374.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis: S.M. in Electrical Engineering, Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 63-64).
This thesis focuses on the analysis of robustness in large interconnected networks. Many real-life systems in transportation, economics, finance and social sciences can be represented as networks. The individual constituents, or nodes, of the network may represent vehicles in the case of vehicular platoons, production sectors in the case of economic networks, banks in the case of the financial sector, or people in the case of social networks. Because of the interconnections between constituents, a disturbance to any one constituent may propagate to other nodes of the network. In any stable network, an incident noise, or disturbance, to any node eventually fades away. However, in most real-life situations, the object of interest is a finite-time analysis of individual node behavior in response to input shocks, or noise, i.e., how the effect of an incident disturbance fades away with time. Such transient behavior depends heavily on the interconnections between the nodes of the network. In this thesis we build a framework to assess the transient behavior of large interconnected networks. Based on this formulation, we categorize each network into one of two broad classes - resilient or fragile. Intuitively, a network is resilient if the transient trajectory of every node of the network remains sufficiently close to the equilibrium, even as the network dimension grows. This differs from the standard notion of stability, wherein the trajectory excursion may grow arbitrarily with the network size. In order to quantify these transient excursions, we introduce a new notion of resilience that explicitly captures the effect of network interconnections on the resilience properties of the network. We further show that the framework presented here generalizes notions of robustness studied in many other applications, e.g., economic input-output production networks, vehicular platoons and consensus networks.
The main contribution of this thesis is that it builds a general framework to study resilience in arbitrary networks, thus aiding in more robust network design.
by Tuhin Sarkar.
S.M. in Electrical Engineering
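The stability-versus-resilience distinction in the abstract above can be illustrated with a toy model (not the thesis's framework): a directed chain of nodes, each damped but driven by its predecessor, much like a vehicular platoon. The dynamics matrix and gains below are invented for illustration.

```python
def chain_matrix(n, a=0.5, b=0.9):
    # Lower-bidiagonal dynamics: node i is damped (gain a) and driven
    # by its predecessor i-1 (gain b). Eigenvalues are all a, so the
    # system is stable for every n.
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        A[i][i] = a
        if i > 0:
            A[i][i - 1] = b
    return A

def peak_excursion(A, steps=200):
    # Apply an impulse disturbance at node 0 and track the largest
    # transient deviation of the last node in the chain.
    n = len(A)
    x = [0.0] * n
    x[0] = 1.0
    peak = abs(x[-1])
    for _ in range(steps):
        x = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        peak = max(peak, abs(x[-1]))
    return peak
```

Every such chain is stable (the disturbance eventually fades), yet the peak excursion of the last node grows without bound as the chain lengthens — the hallmark of a fragile network in the sense described above.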
2

Lewis, John Arundel. "Carrier grade resilience in geographically distributed software defined networks." Master's thesis, University of Cape Town, 2017. http://hdl.handle.net/11427/24975.

Abstract:
The Internet is a fundamental infrastructure in modern life, supporting many different communication services. One of the most critical properties of the Internet is its ability to recover from failures, such as link or equipment failure. The goal of network resilience heavily influenced the design of the Internet, leading to the use of distributed routing protocols. While distributed algorithms largely solve the issue of network resilience, other concerns remain. A significant concern is network management, as it is a complex and error-prone process. In addition, network control logic is tightly integrated into the forwarding devices, making it difficult to upgrade the logic to introduce new features. Finally, the lack of a common control platform requires new network functions to provide their own solutions to common, but challenging, issues related to operating in a distributed environment. A new network architecture, software-defined networking (SDN), aims to alleviate many of these network challenges by introducing useful abstractions into the control plane. In an SDN architecture, control functions are implemented as network applications, and run in a logically centralized network operating system (NOS). The NOS provides the applications with abstractions for common functions, such as network discovery, installation of forwarding behaviour, and state distribution. Network management can be handled programmatically instead of manually, and new features can be introduced by simply updating or adding a control application in the NOS. Given proper design, an SDN architecture could improve the performance of reactive approaches to restoring traffic after a network failure. However, it has been shown in this dissertation that a reactive approach to traffic restoration will not meet the requirements of carrier grade networks, which require that traffic is redirected onto a back-up route less than 50 ms after the failure is detected. 
To achieve 50 ms recovery, a proactive approach must be used, where back-up rules are calculated and installed before a failure occurs. Several different protocols implement this proactive approach in traditional networks, and some work has also been done in the SDN space. However, current SDN solutions for fast recovery are not necessarily suitable for a carrier grade environment. This dissertation proposes a new failure recovery strategy for SDN, based on existing protocols used in traditional carrier grade networks. The use of segment routing allows for back-up routes to be encoded into the packet header when a failure occurs, without needing to inform other switches of the failure. Back-up routes follow the post-convergence path, meaning that they will not violate traffic engineering constraints on the network. An MPLS (multiprotocol label switching) data plane is used to ensure compatibility with current carrier networks, as MPLS is currently a common protocol in carrier networks. The proposed solution was implemented as a network application, on top of an open-source network operating system. A geographically distributed network testbed was used to verify the suitability for a geographically distributed carrier network. Proof of concept tests showed that the proposed solution provides complete protection for any single link, link aggregate or node failure in the network. In addition, communication latencies in the network do not influence the restoration time, as they do in reactive approaches. Finally, analysis of the back-up path metrics, such as back-up path lengths and number of labels required, showed that the application installed efficient back-up paths.
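The post-convergence backup path described above can be sketched with plain shortest-path routing: remove the failed link from the topology and recompute the route, which is exactly the path the routing protocol would converge to after the failure. The topology and link costs below are hypothetical, and this ignores the MPLS/segment-routing encoding itself.

```python
import heapq

def dijkstra(graph, src, dst):
    # graph: {node: {neighbour: cost}}; returns (cost, path), or
    # (inf, []) when dst is unreachable.
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    seen = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in seen:
            continue
        seen.add(u)
        if u == dst:
            path = [u]
            while u in prev:
                u = prev[u]
                path.append(u)
            return d, path[::-1]
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    return float("inf"), []

def post_convergence_backup(graph, src, dst, failed_link):
    # Recompute the route on the topology with the failed link removed
    # (both directions) -- the post-convergence path.
    a, b = failed_link
    pruned = {u: {v: w for v, w in nbrs.items()
                  if (u, v) not in {(a, b), (b, a)}}
              for u, nbrs in graph.items()}
    return dijkstra(pruned, src, dst)

# Hypothetical four-node topology with symmetric link costs.
net = {"A": {"B": 1, "C": 5}, "B": {"A": 1, "C": 1, "D": 4},
       "C": {"A": 5, "B": 1, "D": 1}, "D": {"B": 4, "C": 1}}
primary = dijkstra(net, "A", "D")
backup = post_convergence_backup(net, "A", "D", ("B", "C"))
```

Because the backup is computed on the post-failure topology, it respects the same cost (and hence traffic-engineering) structure the network will settle into, which is the property the proposed scheme relies on.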
3

Mustafi, Urmi. "Investigating system resilience in distributed evolutionary GAN training." Thesis, Massachusetts Institute of Technology, 2021. https://hdl.handle.net/1721.1/130707.

Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, February, 2021
Cataloged from the official PDF of thesis.
Includes bibliographical references (pages 57-58).
Generative Adversarial Networks (GANs) provide a useful approach to generating new data, but suffer from common problems such as mode collapse and oscillating behavior. Lipizzaner improves the performance of distributed GAN training with the use of a spatially distributed coevolutionary algorithm and gradient-based optimizers. However, in its current state the use of Lipizzaner is limited by its vulnerabilities on systems that encounter frequent node failures. When faced with a single node failure, Lipizzaner's entire experiment comes to a halt and must be restarted. To increase Lipizzaner's resilience to such failures, we apply a combination of uncoordinated checkpointing, attempted reconnecting, and restarting nodes to form a simple and efficient solution for system resilience in Lipizzaner. We find that checkpointing and reconnecting are essential and simple solutions to failure recovery in Lipizzaner, while restarting nodes requires a more nuanced approach that shows promising results when used correctly to address node failures.
by Urmi Mustafi.
M. Eng.
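The uncoordinated checkpoint-and-restart idea above can be sketched as follows. This is a schematic stand-in (a counter in place of real GAN weights, an in-memory store in place of disk), not Lipizzaner's implementation: each node saves its own state on its own schedule, so a crashed node can resume from its last checkpoint without coordinating with neighbours.

```python
import copy

class CheckpointedWorker:
    """One node of a distributed training run that checkpoints its own
    state independently (uncoordinated checkpointing)."""

    def __init__(self, node_id, interval=5):
        self.node_id = node_id
        self.interval = interval
        self.step = 0
        self.state = {"weights": 0.0}
        self._checkpoint = None  # latest saved (step, state)

    def train_step(self):
        self.step += 1
        self.state["weights"] += 0.1  # stand-in for a real GAN update
        if self.step % self.interval == 0:
            self._checkpoint = (self.step, copy.deepcopy(self.state))

    def crash_and_restart(self):
        # Lose all in-memory progress; resume from the last checkpoint
        # if one exists, otherwise restart from scratch.
        if self._checkpoint is None:
            self.step, self.state = 0, {"weights": 0.0}
        else:
            self.step = self._checkpoint[0]
            self.state = copy.deepcopy(self._checkpoint[1])
```

A node that crashes at step 12 with a checkpoint interval of 5 resumes at step 10, losing only the work since the last save rather than the whole experiment.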
4

Pourvali, Mahsa. "Resilience of Cloud Networking Services for Large Scale Outages." Scholar Commons, 2017. http://scholarcommons.usf.edu/etd/6664.

Abstract:
Cloud infrastructure services are enabling organizations and enterprises to outsource a wide range of computing, storage, and networking needs to external service providers. These offerings make extensive use of underlying network virtualization, i.e., virtual network (VN) embedding, techniques to provision and interconnect customized storage/computing resource pools across large network substrates. However, as cloud-based services continue to gain traction, there is a growing need to address a range of resiliency concerns, particularly with regards to large-scale outages. These conditions can be triggered by events such as natural disasters, malicious man-made attacks, and even cascading power failures. Overall, a wide range of studies have looked at network virtualization survivability, with most efforts focusing on pre-fault protection strategies to set aside backup datacenter and network bandwidth resources. These contributions include single node/link failure schemes as well as recent studies on correlated multi-failure "disaster" recovery schemes. However, pre-fault provisioning is very resource-intensive and imposes high costs for clients. Moreover, this approach cannot guarantee recovery under generalized multi-failure conditions. Although post-fault restoration (remapping) schemes have also been studied, the effectiveness of these methods is constrained by the scale of infrastructure damage. As a result, there is a pressing need to investigate longer-term post-fault infrastructure repair strategies to minimize VN service disruption. However, this is a largely unexplored area and requires specialized consideration as damaged infrastructures will likely be repaired in a time-staged, incremental manner, i.e., progressive recovery. Furthermore, more specialized multicast VN (MVN) services are also being used to support a range of content distribution and real-time streaming needs over cloud-based infrastructures.
In general, these one-to-many services impose more challenging requirements in terms of geographic coverage, delay, delay variation, and reliability. Some recent studies have looked at MVN embedding and survivability design. In particular, the latter contributions cover both pre-fault protection and post-fault restoration methods, and also include some multi-failure recovery techniques. Nevertheless, there are no known efforts that incorporate risk vulnerabilities into the MVN embedding process. Indeed, there is a strong need to develop such methods in order to reduce the impact of large-scale outages, and this remains an open topic area. In light of the above, this dissertation develops some novel solutions to further improve the resiliency of network virtualization services in the presence of large outages. Foremost, new multi-stage (progressive) infrastructure repair strategies are proposed to improve the post-fault recovery of VN services. These contributions include advanced simulated annealing metaheuristics as well as more scalable polynomial-time heuristic algorithms. Furthermore, enhanced "risk-aware" mapping solutions are also developed to achieve more reliable multicast (MVN) embedding, providing a further basis to develop more specialized repair strategies in the future. The performance of these various solutions is also evaluated extensively using custom-developed simulation models.
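The progressive-recovery idea (time-staged repair) pairs naturally with the simulated-annealing metaheuristics mentioned above. The sketch below uses a made-up disruption model — one component repaired per stage, disruption measured as offline capacity summed over stages — and anneals over repair orders; it illustrates the approach, not the dissertation's algorithm.

```python
import math
import random

def disruption(order, capacity):
    # Components later in the repair order stay offline for more
    # stages, so they contribute to the total over more terms.
    total = 0.0
    down = sum(capacity)
    for comp in order:
        total += down          # capacity still offline this stage
        down -= capacity[comp]
    return total

def anneal_repair_order(capacity, iters=5000, t0=10.0, seed=1):
    # Simulated annealing over permutations: random pairwise swaps,
    # always accept improvements, accept worsenings with Boltzmann
    # probability that shrinks as the temperature cools.
    rng = random.Random(seed)
    order = list(range(len(capacity)))
    rng.shuffle(order)
    best = cur = disruption(order, capacity)
    best_order = order[:]
    for k in range(iters):
        t = t0 * (1 - k / iters) + 1e-9
        i, j = rng.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]
        new = disruption(order, capacity)
        if new <= cur or rng.random() < math.exp((cur - new) / t):
            cur = new
            if new < best:
                best, best_order = new, order[:]
        else:
            order[i], order[j] = order[j], order[i]  # revert the swap
    return best_order, best
```

For distinct capacities the optimum of this toy objective is simply to repair the largest-capacity components first; the annealer recovers that order on a small instance, which makes it a convenient sanity check before tackling realistic repair models.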
5

Black, Travis Glenn. "Resilience of Microgrid during Catastrophic Events." Thesis, University of North Texas, 2018. https://digital.library.unt.edu/ark:/67531/metadc1157603/.

Abstract:
Today, a growing number of buildings in neighborhoods and business parks utilize renewable energy generation to reduce their electric bill and carbon footprint. The usual way of implementing renewable energy generation is to use solar panels or a wind turbine to generate power, then use a charge controller connected to a battery bank to store it. Once the power is stored, the user can draw on a clean source of power from these batteries instead of the main power grid. This type of power structure is a single-module system serving one building. As renewable power generation continues to grow, we start to see a new way of implementing the infrastructure of the power system. Instead of individual buildings only generating, storing, using, and selling power, a fifth step can be added: sharing power. The idea of multiple buildings connected to each other to share power has been named a microgrid by the power community. With this ability to share power in a microgrid system, a catastrophic event that causes shutdowns of power production can be better managed. This thesis then discusses data from simulations and from a physical model of a resilient microgrid built on these principles.
6

Bal, Aatreyi. "Revamping Timing Error Resilience to Tackle Choke Points at NTC." DigitalCommons@USU, 2019. https://digitalcommons.usu.edu/etd/7456.

Abstract:
The growing market of portable devices and smart wearables has contributed to innovation and development of systems with longer battery-life. While Near Threshold Computing (NTC) systems address the need for longer battery-life, they have certain limitations. NTC systems are prone to be significantly affected by variations in the fabrication process, commonly called process variation (PV). This dissertation explores an intriguing effect of PV, called choke points. Choke points are especially important due to their multifarious influence on the functional correctness of an NTC system. This work shows why novel research is required in this direction and proposes two techniques to resolve the problems created by choke points, while maintaining the reduced power needs.
7

Arjona Villicaña, Pedro David. "Chain Routing: A novel routing framework for increasing resilience and stability in the Internet." Thesis, University of Birmingham, 2010. http://etheses.bham.ac.uk//id/eprint/434/.

Abstract:
This study investigates the Internet's resilience to instabilities caused by the mismatch between its topological state and routing information. A first numerical analysis proves that the Internet possesses unused path diversity which could be employed to strengthen its resilience to failures. Therefore, a new routing framework called Chain Routing, which takes advantage of this path diversity, is proposed. This novel idea is based on the mathematical concept of a complete order, which is a binary relation that is irreflexive, asymmetric, transitive and complete. More importantly, complete orders, when represented as graphs, are the most connected digraphs that contain no cycles. Consequently, a complete order can be applied to route information from a source to a destination with the guarantee that cycles will not develop in a path. A second numerical analysis demonstrates the feasibility of implementing Chain Routing as part of a routing protocol. Finally, an analysis is presented of how network stability could be maintained if a routing protocol integrates complete orders in time and topology.
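The key property claimed above — that a complete (strict total) order, drawn as a digraph, is maximally connected yet acyclic — is easy to check computationally. A small sketch on an arbitrary four-node example:

```python
from itertools import permutations

def complete_order_digraph(nodes, rank):
    # One directed edge u -> v for every pair with rank[u] < rank[v]:
    # the complete order represented as a graph.
    return {(u, v) for u, v in permutations(nodes, 2) if rank[u] < rank[v]}

def has_cycle(nodes, edges):
    # Kahn's algorithm: a digraph is acyclic iff every node can be
    # peeled off in zero-in-degree order.
    indeg = {n: 0 for n in nodes}
    for _, v in edges:
        indeg[v] += 1
    frontier = [n for n in nodes if indeg[n] == 0]
    peeled = 0
    while frontier:
        u = frontier.pop()
        peeled += 1
        for a, b in edges:
            if a == u:
                indeg[b] -= 1
                if indeg[b] == 0:
                    frontier.append(b)
    return peeled != len(nodes)

# Arbitrary four-node example: rank the nodes alphabetically.
nodes = list("ABCD")
rank = {n: i for i, n in enumerate(nodes)}
edges = complete_order_digraph(nodes, rank)
```

With n nodes this yields n(n-1)/2 edges and no cycle, and adding any single reversed edge immediately creates one — matching the "most connected acyclic digraph" characterization, and hence the loop-freedom guarantee for paths routed along the order.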
8

Watson, Eileen B. "Modeling Electrical Grid Resilience under Hurricane Wind Conditions with Increased Solar Photovoltaic and Wind Turbine Power Generation." Thesis, The George Washington University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10844532.

Abstract:

The resource mix for the U.S. electrical power grid is undergoing rapid change with increased levels of solar photovoltaic (PV) and wind turbine electricity generating capacity. There are potential negative impacts to grid resilience resulting from hurricane damage to wind and solar power stations connected to the power transmission grid. Renewable power sources are exposed to the environment more so than traditional thermal power sources. To our knowledge, damage to power generating stations is not included in studies on hurricane damage to the electrical power grid in the literature. The lack of a hurricane wind damage prediction model for power stations will cause underestimation of predicted hurricane wind damage to the electrical grid with high percentages of total power generation capacity provided by solar photovoltaic and wind turbine power stations.

Modeling hurricane wind damage to the transmission grid and power stations can predict damage to electrical grid components including power stations, the resultant loss in power generation capacity, and restoration costs for the grid. This Praxis developed models for hurricane exposure, fragility curve-based damage to electrical transmission grid components and power generating stations, and restoration cost to predict resiliency factors including power generation capacity lost and the restoration cost for electrical transmission grid and power generation system damages. Synthetic grid data were used to model the Electric Reliability Council of Texas (ERCOT) electrical grid. A case study was developed based on Hurricane Harvey. This work is extended to evaluate the changes to resiliency as the percentage of renewable sources is increased from 2017 levels to levels corresponding to the National Renewable Energy Lab (NREL) Futures Study 2050 Texas scenarios for 50% and 80% renewable energy.
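Fragility-curve damage modeling of the kind described can be sketched with a lognormal curve, a common functional form in the wind-fragility literature. The median wind speeds, dispersions, and fleet composition below are invented for illustration, not the Praxis's fitted values.

```python
import math

def fragility(v, median, beta):
    # Lognormal fragility curve: P(component damaged | wind speed v),
    # with median capacity `median` (m/s) and log-space dispersion
    # `beta`. The lognormal CDF is written via the error function.
    if v <= 0:
        return 0.0
    return 0.5 * (1.0 + math.erf(math.log(v / median) / (beta * math.sqrt(2.0))))

def expected_capacity_lost(stations, v):
    # stations: list of (capacity_MW, median, beta) tuples.
    # Expected generation capacity offline at wind speed v.
    return sum(mw * fragility(v, median, beta) for mw, median, beta in stations)

# Invented fleet: (MW, median damage wind speed m/s, dispersion).
fleet = [(200.0, 55.0, 0.35),   # wind farm, most exposed
         (150.0, 70.0, 0.40),   # solar PV plant
         (500.0, 90.0, 0.30)]   # hardened thermal station
```

Sweeping the wind speed through a hurricane track then yields the expected capacity lost per grid bus, which is the quantity a resiliency study of this kind aggregates with transmission damage and restoration cost.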

9

Austin, Kate. "The Queensland community’s propensity to invest in the resilience of their community and the electrical distribution network." Masters by Coursework thesis, Murdoch University, 2019. https://researchrepository.murdoch.edu.au/id/eprint/50292/.

Abstract:
Electricity supply is vital for community response and recovery in the aftermath of a disaster. Everything from disaster response coordination, communication, public lighting and safety, as well as the provision of health services, basic household operations and the economic recovery of the community, relies on electricity to function. This dependency, coupled with the vulnerability of our electricity networks, highlights the need to establish resilient distribution networks. The notion that small-scale solar PV (SSPV) and battery energy storage systems (BESS) might contribute to network resilience has become a popular avenue of investigation with the growing uptake of these technologies. Beyond the technical challenges of establishing a smart grid network and reaching the required uptake of the technology to have sufficient storage capacity, a third factor, relating to householders' willingness to share stored energy with their community, remains largely unexplored. In a marked departure from the existing literature, this thesis investigates the use of SSPV and BESS for distribution network resilience and the community's attitudes towards sharing energy resources. The research focusses not on the technical and regulatory aspects of network resilience favoured by researchers, but on the behavioural component founded in the social sciences. A model for network resilience utilising SSPV and BESS is presented, which argues that a key component of resilience in the aftermath of a disaster event hinges on the community's commitment to conservation of energy resources and their willingness to share their stored reserves for the common good. This research investigates the community's perspectives on this resilience approach by exploring attitudinal and behavioural aspects associated with helping the community, to determine the viability of pursuing SSPV and BESS as a practical network resilience option.
10

Lai, Kexing. "Security Improvement of Power System via Resilience-oriented Planning and Operation." The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1556872200222431.

11

Gonzalez, Villasanti Hugo Jose. "Feedback Controllers as Financial Advisors for Low Income Individuals." The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1429614036.

12

Rodriguez, Velasquez Rusber Octavio. "Impact characterisation on the low-voltage electrical networks resilience level facing the integration of photovoltaic generation and hydrogen-based energy storage." Electronic Thesis or Diss., Bourgogne Franche-Comté, 2023. http://www.theses.fr/2023UBFCD047.

Abstract:
The installed capacity of on-grid photovoltaic (PV) solar systems is growing in medium-voltage (MV) and low-voltage (LV) networks composed of residential and commercial users. In addition, energy storage systems (ESS) are being used to improve the performance of distributed generation and self-generation systems that incorporate renewable energy. The unplanned and inadequate interconnection of PV and ESS can cause alterations in the operation of electrical networks. These could also alter the response of the electrical system to low- or high-impact disturbance events. This fact can be favourable or harmful and can be overcome by the power grid without requiring interventions such as reconfigurations or corrective manoeuvres. The ability to withstand, absorb and overcome adverse events can be defined as "network electrical resilience". Resilience is a concept gaining strength in power systems, microgrids and low-power electrical installations. It evaluates their performance against disruptive events. The approaches mainly correspond to high-impact, low-probability (HILP) events such as natural disasters and intentional attacks affecting the electrical systems infrastructure. However, resilience can encompass medium- and low-impact events such as minor infrastructure accidents, light faults, and supply disturbances. Advances in resilience assessment of LV networks include vulnerability to natural disasters, the probability of power outages, and service quality. These studies usually use approaches independent of each other, leaving a gap between their relationship and interpretation. There is therefore a need to consolidate a resilience assessment strategy to guide the analysis of vulnerabilities and strengths in the same direction. Thus, this thesis proposes a comprehensive approach to evaluate electrical resilience for LV networks. It compiles quantitative strategies for studying electrical resilience, focusing on LV systems.
It presents a methodology integrating the fragility of the electrical system infrastructure, the continuity of supply, and the quality of service. The potential favourable effects of integrating hydrogen-based ESS (H2-ESS) in LV networks are also considered to increase the reliability of the LV networks. The proposed approach is applied in the LV network of the Electrical Engineering Building (EEB-UIS) at the Universidad Industrial de Santander (UIS), Colombia. For this purpose, the EEB-UIS network has been equipped with smart meters at the supply nodes, the PV system coupling point and the load board circuits. Information on power supply outages during 2012-2021 was also collected. The case study analysis allows for testing the effectiveness of the comprehensive resilience evaluation proposed for LV networks. The assessment regarding the actual conditions of the EEB-UIS indicates that its electrical infrastructure has a low risk of collapse due to HILP events. Its reliability could be strengthened by increasing the backup system's coverage of the non-critical loads. Operation resilience analysis shows a general alert for overvoltage issues and load unbalance. Then, a feedback analysis is developed to determine ways to strengthen electrical resilience. The proposed strategies are sizing the H2-ESS as a power backup system and implementing an energy management strategy. The EEB-UIS power grid is modelled in MATLAB & Simulink, and quasi-static power flows are run. The simulations allow evaluation of the influence of the H2-ESS's location, installed capacity and operation mode on the LV network performance. It is identified that proper management of distributed sources can strengthen electrical resilience, mainly in reliability and operation quality. The overall result shows a comprehensive resilience analysis and the possibility of extending the methodology to microgrids and LV distribution networks.
13

Gong, Ning. "Resilient Control Strategy and Analysis for Power Systems using (n, k)-Star Topology." Diss., Temple University Libraries, 2016. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/410406.

Abstract:
Electrical Engineering
Ph.D.
This research focuses on developing novel approaches to load balancing and restoration problems in electrical power distribution systems. The first approach introduces an inter-connected network topology, referred to as the (n, k)-star topology. While power distribution systems can be constructed in different communication network topologies, the performance and fault assessment of the networked systems can be challenging to analyze. The (n, k)-star topologies have well-defined performance and stability analysis metrics. Typically, these metrics are defined based on: i) degree, ii) diameter, and iii) conditional diagnosability of a faulty node. These parameters can be evaluated and assessed before a physical (n, k)-star topology power distribution system is constructed. Moreover, in the second approach, we evaluate load balancing problems by using a decentralized algorithm, i.e., the Multi-Agent System (MAS) based consensus algorithm, on an (n, k)-star power topology. With the aforementioned approaches, an (n, k)-star power distribution system can be assessed with the proposed metrics, with encouraging results compared to systems networked in other topologies. Other encouraging results are found in efficiency and performance enhancement during information exchange using the decentralized algorithm. It has been proven that a load balance solution is convergent and asymptotically stable with a simple gain controller. The analysis can be achieved without constructing a physical network to help evaluate the design. Using the (n, k)-star topology and MAS, the load balancing/restoration problems can be solved much more quickly and accurately compared to other approaches shown in the literature.
Temple University--Theses
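The MAS consensus idea for load balancing can be sketched with a plain average-consensus iteration with a constant gain. The hub-and-leaves neighbour map below is a generic illustration, not the (n, k)-star construction or the gain controller analyzed in the dissertation.

```python
def consensus_step(loads, neighbors, eps=0.2):
    # Each agent nudges its load toward its neighbours' loads. With a
    # symmetric neighbour map the total load is conserved, so the
    # iteration converges to the balanced (average) allocation.
    return {i: x + eps * sum(loads[j] - x for j in neighbors[i])
            for i, x in loads.items()}

def balance(loads, neighbors, eps=0.2, tol=1e-9, max_iters=100000):
    # Iterate until the per-step change is negligible.
    for _ in range(max_iters):
        nxt = consensus_step(loads, neighbors, eps)
        if max(abs(nxt[i] - loads[i]) for i in loads) < tol:
            return nxt
        loads = nxt
    return loads

# Illustrative hub-and-leaves graph with unbalanced initial loads.
neighbors = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
final = balance({0: 10.0, 1: 2.0, 2: 4.0, 3: 0.0}, neighbors)
```

The gain `eps` plays the role of the simple gain controller mentioned above: it must be small enough for stability (below 2 divided by the largest Laplacian eigenvalue of the graph), and within that range every agent converges to the network-wide average load.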
14

Beyene, Mussie Abraham. "Modelling the Resilience of Offshore Renewable Energy System Using Non-constant Failure Rates." Thesis, Uppsala universitet, Institutionen för elektroteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-445650.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Offshore renewable energy systems, such as Wave Energy Converters or Offshore Wind Turbines, must be designed to withstand extreme weather environments. For this, it is crucial to have a good understanding both of the wave and wind climate at the intended offshore site, and of the system's reaction and possible failures under different weather scenarios. Based on these considerations, the first objective of this thesis was to model and identify the extreme wind speed and significant wave height at an offshore site, based on measured wave and wind data. The extreme wind speeds and wave heights were characterized as return values after 10, 25, 50, and 100 years, using the Generalized Extreme Value method. Based on a literature review, fragility curves for wave and wind energy systems were identified as functions of significant wave height and wind speed. For a wave energy system, a varying failure rate as a function of wave height was obtained from the fragility curves and used to model the resilience of a wave energy farm as a function of the wave climate. The cases of non-constant and constant failure rates were compared, and it was found that the non-constant failure rate had a high impact on the wave energy farm's resilience: when a non-constant failure rate as a function of wave height was applied, the number of Wave Energy Converters available in the farm and the energy absorbed by the farm were nearly zero. The non-constant failure rate was also compared against an averaged constant failure rate derived from the instantaneous non-constant rate, and it was found that modelling the farm with this averaged constant failure rate predicts noticeably better resilience.
So, based on the findings of this thesis, it is recommended to identify and characterize offshore extreme weather climates, to maintain a high repair rate, and to use repair vessels with a high operational threshold that can withstand the harsh offshore weather environment.
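The return-value characterization above reduces, for annual-maxima data fitted to a GEV distribution, to evaluating the standard return-level formula. The GEV parameters below are illustrative placeholders, not values from the thesis:

```python
import math

def gev_return_level(mu, sigma, xi, T):
    """Level exceeded on average once every T years, for annual maxima
    following GEV(location mu, scale sigma, shape xi)."""
    y = -math.log(1.0 - 1.0 / T)      # reduced variate for return period T
    if abs(xi) < 1e-9:                # Gumbel limit as xi -> 0
        return mu - sigma * math.log(y)
    return mu + (sigma / xi) * (y ** (-xi) - 1.0)

# Illustrative GEV parameters for significant wave height [m]:
mu, sigma, xi = 5.0, 0.8, 0.1
for T in (10, 25, 50, 100):
    print(f"{T}-year return level: {gev_return_level(mu, sigma, xi, T):.2f} m")
```

With a positive shape parameter, as here, the return levels grow without bound as the return period increases; a negative shape would bound them, which is why the fitted shape matters for 100-year design values.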
15

Yuan, Chen. "RESILIENT DISTRIBUTION SYSTEMS WITH COMMUNITY MICROGRIDS." The Ohio State University, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=osu1480478081556766.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Hu, Yao. "The Modeling, Analysis and Control of Resilient Manufacturing Enterprises." UKnowledge, 2013. http://uknowledge.uky.edu/ece_etds/15.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The resilience of manufacturing enterprises is an important research topic, since disruptions have severe effects on the normal operation of manufacturing enterprises, especially as manufacturing supply chains become global. Although many case studies have addressed resilience in organizations, a systematic method to model and analyze resilience dynamics in manufacturing enterprises is not well developed. This study conducts research on quantitative analysis and control for resilience. After reviewing the literature on resilience, a modeling framework is presented to characterize the resilience of a manufacturing enterprise responding to disruptive events, including inventory flow between enterprise nodes, different costs, resources, demand, etc. Each node within the network is represented as a dynamic model with associated production and inventory costs. This mathematical model is the foundation of quantitative analysis and control. With this model, an optimal control problem is formulated, from which the control achieving minimum cost can be solved. Several different types of systems are defined and analyzed in this work. We develop an aggregation approach to simplify the network structures. The study focuses on two categories of network systems: serial networks and assembly tree networks. The analysis of these two categories covers two conditions: the discrete-time domain without capacity constraints, and the continuous-time domain with capacity constraints. Methods for determining optimal operations are developed under the different conditions. In the serial network analysis, a practical case study illustrates the corresponding method. Finally, open problems are discussed for future research. Based on the results of these analyses, we present optimal control policies for resilience.
Our method can support the analysis of the impact of disruptions, and the development of control strategies that reduce the impact of the disruption.
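As a toy illustration of the inventory-flow dynamics described above, a two-node serial line can be rolled forward period by period and costed. The dynamics, cost rates, and production plans below are illustrative assumptions, not the thesis's model or its optimal policy:

```python
# Two-node serial line sketch: node 1 produces into inventory, node 2
# ships min(available, demand). Costs: unit production + unit holding.

def simulate(demand, plan, prod_cost=1.0, hold_cost=0.5):
    """Roll the line forward: `plan` is node 1's production each period.
    Returns (total cost, total demand served)."""
    inv1 = 0.0
    cost = served = 0.0
    for d, u1 in zip(demand, plan):
        inv1 += u1
        u2 = min(inv1, d)        # node 2 limited by upstream inventory
        inv1 -= u2
        served += u2
        cost += prod_cost * (u1 + u2) + hold_cost * inv1
    return cost, served

# A disruption halves node 1's output in period 2. Pre-building
# inventory in period 1 keeps all demand served, at a holding cost.
print(simulate([3, 3, 3], [3, 1, 3]))  # disruption: some demand lost
print(simulate([3, 3, 3], [5, 1, 3]))  # pre-build: all demand served
```

The comparison mirrors the resilience trade-off in the abstract: hedging inventory against a disruption raises cost but preserves service, and an optimal control policy balances the two.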
17

Mohammadi, Darestani Yousef. "Hurricane Resilience Quantification and Enhancement of Overhead Power Electric Systems." The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1565910362117519.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Andersen, David G. (David Godbe) 1975. "Resilient overlay networks." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/86657.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Dodis, Yevgeniy 1976. "Exposure-resilient cryptography." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/86613.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Ma, Rui. "Error resilient multiple description coding." Thesis, McGill University, 2010. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=86757.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In order to combat channel failures in data communications, multiple description coding (MDC) produces two or more equally important bitstreams or descriptions, and transmits them independently over erasure channels. If only one description is correctly received, a coarse copy of the source is obtained. The more descriptions correctly received, the finer the accomplished quality. When all descriptions are correctly received, the transmitted signal can be completely reconstructed.
In this work, we apply MDC to accommodate multimedia transmission over hybrid wireline-wireless networks, which require low delay and high robustness against both packet losses and bit errors. In addition to the classical MDC channel model, i.e., on/off channels, we study channels that also suffer from bit errors. Based on this channel model, we design what we call ERMDC, or error resilient multiple description coding.
The proposed ERMDC encoder maximizes the Hamming distance between the codewords used in MDC, so as to make as many errors as possible detectable at the decoder. In order to reduce the reconstruction distortion, the proposed ERMDC decoder can detect binary transmission errors and estimate the reconstructed values in two ways: (i) one is MSE-optimal, but requires information about channel conditions; (ii) the other is suboptimal, but does not require channel-condition information. The ERMDC achieves graceful performance degradation as the bit error rate (BER) grows, and outperforms classic MDC in the presence of both packet losses and bit errors.
To avoid lengthy design optimization, a simplified index assignment (IA) algorithm for easy ERMDC encoder design is developed. This algorithm obtains ``close-to-optimal'' solutions at low computational complexity. Furthermore, this IA algorithm can be extended to embedded coding for progressive transmission.
Moreover, we study the performance of the ERMDC over Rayleigh fading channels using modulated signals as inputs. We also discuss uses of the ERMDC and its system-level performance over channels with both packet losses and bit errors. Experimental results show that, in general, the ERMDC system outperforms classic MDC systems.
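The encoder-side idea of maximizing the Hamming distance between used codewords can be sketched with a greedy selection over short binary words. The word length and minimum distance here are illustrative; the actual ERMDC index assignment is more elaborate:

```python
# Greedy sketch: keep only codewords whose pairwise Hamming distance is
# at least d_min, so single-bit channel errors land on unused words and
# become detectable at the decoder. Parameters are illustrative.

def hamming(a, b):
    """Hamming distance between two integers viewed as bit strings."""
    return bin(a ^ b).count("1")

def pick_codewords(n_bits=4, d_min=2):
    chosen = []
    for w in range(1 << n_bits):
        if all(hamming(w, c) >= d_min for c in chosen):
            chosen.append(w)
    return chosen

codes = pick_codewords()
print([format(c, "04b") for c in codes])
# 8 even-weight 4-bit words; any single bit flip leaves the valid set
```

With minimum distance 2, half of the 4-bit words are sacrificed to gain single-error detection, which is the rate-versus-robustness trade-off the abstract describes.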
21

Alves, Alexandre Eberle. "Habilidades de resiliência em distribuidora de energia elétrica : recrutamento, seleção e treinamento de eletricistas e operadores do centro de operações da distribuição." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2016. http://hdl.handle.net/10183/149797.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The convergence of technical skills with resilience skills contributes to fostering an environment that operates safely and efficiently. The aim of this pursuit is a system in which the process keeps functioning during misfortunes, especially unexpected ones. Furthermore, the study draws on resilience engineering, its core subject, understood as the capacity of a system to adjust its performance so as to deal with critical situations. The main goal of this thesis is to identify the resilience skills used while performing emergency maintenance activities on the electrical network of an energy distributor. The work covers the activities of electricians and of operators of the Distribution Operations Center (COD) of the company under study. The specific objectives are: (1) to investigate how the filters used in the company's recruitment and selection process are applied, to verify whether resilience skills are contemplated, and (2) to propose improvements to the training process, based on the information and results obtained, as well as to the company's processes, to facilitate and minimize the need for the identified resilience skills. Thus, from the Resilience Engineering perspective, this study aims at a better understanding of the recruitment and selection process, as well as at improving the training of these professionals.
22

Girish, Deeptha S. "Action Recognition in Still Images and Inference of Object Affordances." University of Cincinnati / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1595500102337155.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Zhang, Sizhuo. "WMM : a resilient Weak Memory Model." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/103667.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 61-65).
A good memory model should have a precise definition that any computer architect can readily understand. It should also be resilient in the sense that it does not break when new microarchitectural optimizations are introduced to improve single-threaded performance. We introduce WMM, a new weak memory model which meets these criteria. WMM permits all load-store reorderings except that a store is not allowed to overtake a load. WMM also permits both memory dependency speculation and load-value prediction. We define the operational semantics of WMM using a novel conceptual device called the invalidation buffer, which achieves the effect of out-of-order instruction execution even when instructions are executed in order and one at a time. We show via examples where memory fences need to be inserted for different programming paradigms. We highlight the differences between WMM and other weak memory models, including Release Consistency and Power. Our preliminary performance evaluation using the SPLASH benchmarks shows that the WMM implementation performs significantly better than aggressive implementations of SC. WMM holds the promise of being a vendor-independent, stable memory model that will not stifle microarchitectural innovations.
by Sizhuo Zhang.
S.M.
24

Fonkwe, Fongang Edwin. "Towards resilient plug-and-play microgrids." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122685.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 159-164).
Microgrids have the potential to increase renewable energy penetration, reduce costs, and improve reliability of the electric grid. However, today's microgrids are unreliable, lack true modularity, and operate with rudimentary control systems. This thesis research makes contributions in the areas of microgrid modeling and simulation; microgrid testing and model validation; and advanced control design and tools in microgrids. These contributions are a step toward design, commissioning, and operation of resilient plug-and-play (pnp) microgrids, which will pave the way towards a more sustainable and electric energy abundant future for all.
"Facebook Inc. funded a portion of my PhD trajectory (2017 - 2019) by way of a Research Fellowship"
by Edwin Fonkwe Fongang.
Ph. D.
Ph.D. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
25

Maucho, Geoffrey Sunday. "Weighted distortion methods for error resilient video coding." Thesis, McGill University, 2012. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=110392.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Wireless and Internet video applications are hampered by bit errors and packet errors, respectively. In addition, packet losses in best-effort Internet service limit video communication applications. Because video compression uses temporal prediction, compressed video is especially susceptible to transmission errors in one frame propagating into subsequent frames. It is therefore necessary to develop methods that improve the performance of compressed video in the face of channel impairments. Recent work in this area has focused on estimating the end-to-end distortion, which is shown to be useful in building an error resilient encoder. However, these techniques require an accurate estimate of the channel conditions, which is not always accessible for some applications. Recent video compression standards have adopted a Rate Distortion Optimization (RDO) framework to determine coding options that address the trade-off between rate and distortion. In this dissertation, error robustness is added to the RDO framework as a design consideration. This dissertation studies the behavior of motion-compensated prediction (MCP) in a hybrid video coder, and presents techniques for improving its performance in an error-prone environment. An analysis of the motion trajectory gives us insight into how to improve MCP without explicit knowledge of the channel conditions. Information from the motion trajectory analysis is used in a novel way to bias the distortion used in RDO, resulting in an encoded bitstream that is both error resilient and bitrate efficient. We also present two low-complexity solutions that exploit past inter-frame dependencies. In order to avoid error propagation, regions of a frame are classified according to their potential for containing propagated errors. Using this method, we are then able to steer the MCP engine towards areas that are considered ``safe'' for prediction.
Considering the impact error propagation may have in an RDO framework, our work enhances the overall perceived quality of compressed video while maintaining high coding efficiency. Comparisons with other error resilient video coding techniques show the advantages offered by the weighted distortion techniques we present in this dissertation.
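The weighted-distortion idea can be illustrated with a toy RDO mode decision that minimizes the Lagrangian J = D + λR, where the weighted variant inflates the distortion of modes whose reference region is likely to carry propagated errors. The mode statistics and risk weights below are invented for illustration, not taken from the dissertation:

```python
# Toy weighted-distortion RDO: pick the mode minimizing
# (1 + risk) * D + lambda * R, where `risk` reflects how likely the
# mode's prediction reference is to contain propagated errors.

def rdo_choose(modes, lmbda, use_risk=True):
    """modes: list of (name, distortion, rate, risk). Returns the name
    of the mode with the smallest Lagrangian cost."""
    def cost(m):
        name, dist, rate, risk = m
        w = 1.0 + (risk if use_risk else 0.0)
        return w * dist + lmbda * rate
    return min(modes, key=cost)[0]

modes = [
    ("intra", 40.0, 120.0, 0.0),  # self-contained: no propagation risk
    ("inter", 25.0,  60.0, 1.5),  # references a possibly corrupt frame
]
print(rdo_choose(modes, lmbda=0.2, use_risk=False))  # plain RDO: inter
print(rdo_choose(modes, lmbda=0.2))                  # weighted: intra
```

Plain RDO picks the cheaper inter mode, while the risk weight steers the decision toward the self-contained intra mode, which is precisely the resilience-versus-efficiency shift the dissertation studies.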
26

Naghdinezhad, Amir. "Error resilient methods in scalable video coding (SVC)." Thesis, McGill University, 2014. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=121379.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
With the rapid development of multimedia technology, video transmission over unreliable channels, such as the Internet and wireless networks, is widely used. Channel errors can result in a mismatch between the encoder and the decoder, and because of the predictive structures used in video coding, the errors propagate both temporally and spatially. Consequently, the quality of the received video at the decoder may degrade significantly. In order to improve the quality of the received video, several error resilient methods have been proposed. Furthermore, in addition to compression efficiency and error robustness, flexibility has become a new requirement in advanced multimedia applications. In applications such as video conferencing and video streaming, compressed video is transmitted over heterogeneous networks to a broad range of clients with different requirements and capabilities in terms of power, bandwidth, and display resolution, all simultaneously accessing the same coded video. The scalable video coding concept was proposed to address this flexibility issue by generating a single bit stream that meets the requirements of all these users. This dissertation makes novel contributions in the area of error resilience for the scalable extension of H.264/AVC. The first part of the dissertation focuses on modifying the conventional prediction structure to reduce the propagation of errors to succeeding frames. We propose two new prediction structures that can be used in the temporal and spatial scalability of SVC. The proposed techniques improve on previous methods by efficiently exploiting the Intra macroblocks (MBs) in the reference frames and the exponential decay of error propagation introduced by leaky prediction. In order to satisfy both coding efficiency and error resilience over error-prone channels, we combine an error resilience mode decision technique with the proposed prediction structures.
The end-to-end distortion of the proposed prediction structure is estimated and used instead of the source coding distortion in the rate distortion optimization. Furthermore, accurately analysing the utility of each video packet in unequal error protection techniques is a critical and usually very complex process. We present an accurate low complexity utility estimation technique. This technique estimates the utility of each network abstraction layer (NAL) by considering the error propagation to future frames. Also, a low delay version of this technique, which can be used in delay constrained applications, is presented.
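The exponential decay of propagated error under leaky prediction, and the packet-utility idea built on it, can be sketched as a geometric sum over future frames. The leak factor and horizon below are illustrative assumptions, not values from the dissertation:

```python
# Leaky prediction attenuates a propagated error by a factor alpha < 1
# at each prediction step, so an error of energy e0 in frame 0
# contributes roughly e0 * alpha**k to frame k. A packet's utility can
# then be scored by the total distortion its loss would spread forward.

def propagated_error(e0, alpha, horizon):
    """Per-frame error contributions over the next `horizon` frames."""
    return [e0 * alpha ** k for k in range(horizon)]

def packet_utility(e0, alpha, horizon):
    """Total distortion a lost packet would cause across the horizon."""
    return sum(propagated_error(e0, alpha, horizon))

# Stronger leak (smaller alpha) means less propagated damage, hence a
# lower protection priority for the packet:
print(packet_utility(100.0, 0.8, 10))
print(packet_utility(100.0, 0.5, 10))
```

A low-delay variant, as the abstract mentions, simply truncates the horizon to the frames available before the deadline.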
27

Kross, Cory Kenneth. "A Method for Evaluating Aircraft Electric Power System Sizing and Failure Resiliency." DigitalCommons@CalPoly, 2017. https://digitalcommons.calpoly.edu/theses/1709.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
With the More Electric Aircraft paradigm, commercial commuter aircraft are increasing the size and complexity of electrical power systems by increasing the number of electrical loads. With this increase in complexity comes a need to analyze electrical power systems using new tools. The Hybrid Power System Optimizer (HyPSO) developed by Airbus SAS is a simulator designed to analyze new aircraft power systems. This thesis first provides a method to assess the reliability of complex aircraft electrical power systems before and after failure and reconfiguration events. Next, an add-on to HyPSO is developed to integrate the reliability calculations. Proofs of concept, including new data visualizations, are performed and provided.
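A reliability assessment of this kind ultimately composes component reliabilities in series (all must work) and parallel (any redundant path suffices). A minimal sketch of that bookkeeping follows; the component values and the generator-bus-load layout are invented for illustration and are not from the thesis or HyPSO:

```python
# Series/parallel reliability composition for a small power chain.
from functools import reduce

def series(*rel):
    """All components must work: reliabilities multiply."""
    return reduce(lambda a, b: a * b, rel, 1.0)

def parallel(*rel):
    """Any redundant component suffices: unreliabilities multiply."""
    return 1.0 - reduce(lambda a, b: a * (1.0 - b), rel, 1.0)

# Illustrative chain: generator -> two redundant buses -> load feeder.
r_system = series(0.99, parallel(0.95, 0.95), 0.98)
print(round(r_system, 4))  # redundancy lifts the bus stage to 0.9975
```

Assessing resiliency after a failure event then amounts to recomputing the same expression with the failed component removed from its parallel group.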
28

Neff, Clayton. "Analysis of Printed Electronic Adhesion, Electrical, Mechanical, and Thermal Performance for Resilient Hybrid Electronics." Scholar Commons, 2018. https://scholarcommons.usf.edu/etd/7551.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Today’s state of the art additive manufacturing (AM) systems have the ability to fabricate multi-material devices with novel capabilities that were previously constrained by traditional manufacturing. AM machines fuse or deposit material in an additive fashion only where necessary, thus unlocking advantages of mass customization, no part-specific tooling, near arbitrary geometric complexity, and reduced lead times and cost. The combination of conductive ink micro-dispensing AM process with hybrid manufacturing processes including: laser machining, CNC machining, and pick & place enables the fabrication of printed electronics. Printed electronics exploit the integration of AM with hybrid processes and allow embedded and/or conformal electronics systems to be fabricated, which overcomes previously limited multi-functionality, decreases the form factor, and enhances performance. However, AM processes are still emerging technologies and lack qualification and standardization, which limits widespread application, especially in harsh environments (i.e. defense and industrial sectors). This dissertation explores three topics of electronics integration into AM that address the path toward qualification and standardization to evaluate the performance and repeatable fabrication of printed electronics for resilience when subjected to harsh environments. These topics include: (1) the effect of smoothing processes to improve the as-printed surface finish of AM components with mechanical and electrical characterization—which highlights the lack of qualification and standardization within AM printed electronics and paves the way for the remaining topics of the dissertation, (2) harsh environmental testing (i.e. 
mechanical shock, thermal cycling, die shear strength) and initiation of a foundation for qualification of printed electronic components to demonstrate survivability in harsh environments, and (3) the development of standardized methods to evaluate the adhesion of conductive inks while also analyzing the effect of surface treatments on the adhesive failure mode of conductive inks. The first topic of this dissertation addresses the as-printed surface roughness from individually fusing lines in AM extrusion processes that create semi-continuous components. In this work, the impact of surface smoothing on mechanical properties and electrical performance was measured. For the mechanical study, vapor smoothing decreased surface roughness by 70% while maintaining dimensional accuracy and improving the hermetic seal to overcome the inherent porosity. However, there was little impact on the mechanical properties. For the electrical study, a vapor smoothing and a thermal smoothing process reduced the surface roughness of extruded substrates by 90% and 80%, while also reducing measured dissipative losses by up to 24% and 40% at 7 GHz, respectively. The second topic of this dissertation addresses the survivability of printed electronic components under harsh environmental conditions by adapting test methods and conducting preliminary evaluation of multi-material AM components to initialize qualification procedures. A few of the material sets show resilience to high-G impacts up to 20,000 G and thermal cycling at extreme temperatures (-55 to 125 °C). It was also found that matching coefficients of thermal expansion is an important consideration for multi-material printed electronics, and that adhesion of the conductive ink is a prerequisite for antenna survivability in harsh environments.
The final topic of this dissertation addresses the development of semi-quantitative and quantitative measurements for standardizing adhesion testing of conductive inks while also evaluating the effect of surface treatments. Without standard adhesion measurements for conductive inks, comparisons between materials or against application requirements cannot be made, which limits the adoption of printed electronics. The semi-quantitative method evolved from manual cross-hatch scratch testing by designing, printing, and testing a semi-automated tool, coined the scratch adhesion tester (SAT). By cross-hatch scratch testing with a semi-automated device, the SAT bypasses operator-to-operator variance and allows more repeatable and finer analysis and comparison across labs. Alternatively, single lap shear testing permits quantitative adhesion measurements by providing a numerical value for the nominal interfacial shear strength of a coating upon testing, while also showing that surface treatments can improve adhesion and alter the adhesive (i.e., delamination) failure mode of conductive inks.
29

Stine, Daniel E. (Daniel Evans). "Digital signatures for a Byzantine resilient computer system." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/36578.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Sutherland, Andrew 1980. "Towards RSEAM : resilient serial execution on amorphous machines." Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/87884.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2003.
Includes bibliographical references (p. 94-95).
by Andrew Sutherland.
M.Eng.
31

Nightingale, Todd 1979. "A simulation study of reordering-resilient TCP enhancements." Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/29693.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2003.
Includes bibliographical references (leaves 57-59).
TCP traffic makes up a large portion of the Internet's load. The throughput TCP connections are able to obtain depends heavily on the underlying network providing in-order packet delivery. IP networks do not guarantee in-order delivery, but the design of hardware, networks, and protocols has been influenced by TCP's in-order requirement. Despite this, the Internet today does reorder packets on some links. More importantly, more throughput could be achieved if techniques such as multipath routing could be used. Unfortunately, the parallelism in these schemes results in packet reordering and a consequent TCP performance loss. This work examines methods for allowing TCP connections to obtain high throughput in the presence of packet reordering. We review the existing proposals, describe a new receiver-based proposal, and provide a detailed simulation-based evaluation. In this thesis we present results showing that our modified receiver with an unmodified Reno sender was able to perform as well as or better than any of the other proposed solutions. In addition, Eifel is able to consistently outperform DSACK despite using much less packet overhead and internal state.
by Todd Nightingale.
M.Eng.
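The core problem the thesis targets, reordering masquerading as loss, can be illustrated with a toy cumulative-ACK receiver (a hypothetical sketch, not the thesis's simulator): a packet that arrives a few positions late generates the same duplicate ACKs a genuine loss would, so a standard Reno sender spuriously fast-retransmits after three of them.

```python
def max_dup_acks(arrival_order):
    """Simulate a cumulative-ACK TCP receiver and return the longest
    run of duplicate ACKs (>= 3 triggers Reno fast retransmit)."""
    received, next_expected = set(), 0
    acks = []
    for seq in arrival_order:
        received.add(seq)
        while next_expected in received:   # advance over in-order data
            next_expected += 1
        acks.append(next_expected - 1)     # cumulative ACK value
    longest = run = 0
    for prev, cur in zip(acks, acks[1:]):
        run = run + 1 if cur == prev else 0
        longest = max(longest, run)
    return longest

# In-order delivery: no duplicate ACKs.
print(max_dup_acks([0, 1, 2, 3, 4]))   # 0
# Packet 1 delayed by three positions: three duplicate ACKs of 0,
# enough to trigger a spurious fast retransmit with no actual loss.
print(max_dup_acks([0, 2, 3, 4, 1]))   # 3
```

This is why reordering-resilient schemes either delay the retransmit decision or, as in the receiver-based proposal above, suppress the misleading duplicate ACKs.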
32

Simaie, Babak. "Integrated error resilient solutions for Motion JP2 video streaming." Thesis, University of Ottawa (Canada), 2007. http://hdl.handle.net/10393/27485.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Two of the main challenges and critical issues facing real-time wireless multimedia transmission are packet loss and transmission errors. This thesis selects Motion JPEG 2000, the video extension of JPEG 2000, the newest standard for still image coding, for real-time video streaming. JPEG 2000 offers a number of quality-of-service and revenue strategies that make it a leading contender for image streaming applications. We offer extra protection for the wireless transmission of JPEG 2000 code streams because of error-prone and changeable wireless channel conditions, and because the baseline JPEG 2000 error resilience tools are not sufficient in the context of wireless transmission. Also, to accommodate video streaming, avoid packet loss, and achieve efficiency, a payload format for JPEG 2000 video streams over the Real-time Transport Protocol (RTP) has been defined. The main contribution of this thesis is adopting packetization strategies and proposing an unequal error protection approach to improve the error resilience tools of baseline JPEG 2000 for wireless streaming over RTP. Our unequal error protection solution is supplied directly by a mechanism embedded in the JPEG 2000 syntax. Finally, the proposed approach is evaluated subjectively and objectively against existing wireless imaging techniques to demonstrate the achieved quality.
33

Jamal, Alden Mohammed Kais. "Robust and Resilient Control for Time Delayed Power Systems." Thesis, Southern Illinois University at Edwardsville, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=1588452.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:

The power system is the backbone of modern society. Traditionally, over 90% of electrical energy has been produced by power generation systems driven by steam turbines. Recently, with the development of renewable energy resources, wind energy conversion systems have proven to be solutions for the next generation of sustainable energy resources. Stability and performance of these power systems are the primary concerns of power system engineers. To better characterize the dynamical behavior of power systems in practical applications, time delays in the feedback state variables, system modeling uncertainties, and external disturbances are included in the state-space model of the power system in this work. Linear matrix inequality (LMI) based robust and resilient controllers satisfying the H-infinity performance objective for time-delayed power systems are proposed. Fixed time delays are assumed to exist within the system state and input signals. The system model is assumed to have unstructured bounded uncertainties and L2-type disturbances. Furthermore, controller gain perturbations are assumed to be of additive type. The proposed control techniques have been applied to variable-speed permanent magnet synchronous generator based wind energy conversion systems and to electrical power generation systems driven by steam turbines. Computer simulations conducted in MATLAB show the effectiveness of the proposed control algorithms.
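The abstract does not reproduce the LMIs themselves; for flavor, the classical delay-free bounded real lemma below shows how an H-infinity norm bound gamma is certified by a linear matrix inequality, which the thesis extends with delay and uncertainty terms.

```latex
% For \dot{x} = Ax + Bw, z = Cx + Dw, the bound \|T_{zw}\|_\infty < \gamma
% holds if and only if there exists P = P^{\top} \succ 0 such that
\begin{bmatrix}
A^{\top}P + PA & PB & C^{\top}\\
B^{\top}P & -\gamma I & D^{\top}\\
C & D & -\gamma I
\end{bmatrix} \prec 0 .
```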

34

Barbar, Marc (Marc F.). "Resiliency and reliability planning of the electric grid in natural disaster affected areas." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122752.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 101-102).
The recent spike in the frequency of hurricanes in Central America has severely damaged the conventional electrical grid. Notably, the government of Puerto Rico laid out a plan to reinvent its energy sector to improve its level of resiliency against natural disasters. Better planning and preparation can minimize the damage that needs to be repaired and speed its repair. For instance, when necessary facilities, such as hospitals, lose access to electricity, the ability to manage a displaced population after a hurricane is diminished. Computational planning tools allow policymakers and planners to take reliability metrics, resource constraints, and interactions between off-grid and traditional grid-extension projects into account when designing contingency plans for the electric grid. The goal of this thesis is to explore the role of a hybrid decentralized structure of the electrical grid in improving reliability during extraordinary circumstances. In this thesis, I develop algorithms that are demonstrated via several case studies. Given the proper input data, these algorithms can provide insight into the technical feasibility of where to deploy microgrids given the existing infrastructure. This research emphasizes the need for granular spatial data at the distribution level to make better planning decisions.
by Marc Barbar.
S.M.
S.M. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
35

Clark, Anne L. (Anne Lauren). "An architecture study of a Byzantine-resilient processor using authentication." Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/34101.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1994.
Includes bibliographical references (p. 121-123).
This architecture study provides the groundwork for implementing a new generation of Byzantine resilient processors using authentication. The use of authentication allows a significant reduction in the theoretical requirements necessary for providing Byzantine resilience, or the ability to continue correct operation in the presence of arbitrary or even malicious faults. This decrease in requirements led to a goal of providing a system which combines the stringent standards embodied by Byzantine resilience with the lower costs necessary to make the system viable for more markets than previous Byzantine resilient processors. A layering scheme is proposed which can be placed between the user and the hardware. These layers consist of protocols which provide the basic building blocks of the architecture. The proposed authentication protocol, which provides the digital signatures used to verify the origin and contents of messages, is a public-key protocol using 32-bit Cyclic Redundancy Codes (CRCs) to encode the message, with 32-bit modular inverse key pairs to sign and authenticate the CRC. An interactive consistency protocol responsible for correctly distributing single-source data between processors is built using the SM(m) algorithm from [LSP82] with improvements suggested in [Dol83]. A voting protocol responsible for generating a group consensus value guaranteed to be the same on all nonfaulty processors is built by exchanging unsigned messages and then using a full-set majority vote choice() function to calculate the group consensus value. Finally, the proposed synchronization protocol needed to provide synchronized virtual clocks on all nonfaulty processors is placed on top of a full message exchange (FME), known as a From_all exchange, to read the clocks on other processors. A time adjustment is then calculated using a technique suggested in [LM84].
by Anne L. Clark.
M.S.
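The signing primitive described, a 32-bit CRC "signed" with a 32-bit modular inverse key pair, can be sketched roughly as follows. This is a toy reconstruction, not the thesis's exact construction, and a linear scheme like this offers no cryptographic strength by modern standards; it only illustrates the mechanics and the low cost that motivated the design.

```python
import zlib

MOD = 1 << 32  # all arithmetic is mod 2**32, matching the 32-bit keys

def keygen(secret):
    """Derive a (private, public) modular inverse pair. The private key
    must be odd to be invertible mod 2**32. Illustrative derivation only."""
    priv = (secret | 1) % MOD
    pub = pow(priv, -1, MOD)   # priv * pub == 1 (mod 2**32)
    return priv, pub

def sign(message, priv):
    """Signature = CRC of the message multiplied by the private key."""
    return (zlib.crc32(message) * priv) % MOD

def verify(message, sig, pub):
    """Multiplying by the public key recovers the CRC if untampered."""
    return (sig * pub) % MOD == zlib.crc32(message)

priv, pub = keygen(0xDEADBEEF)
sig = sign(b"sensor reading: 42", priv)
print(verify(b"sensor reading: 42", sig, pub))  # True
print(verify(b"sensor reading: 43", sig, pub))  # False
```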
36

Heng, Brian A. 1977. "Adaptive multiple description mode selection for error resilient video communications." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/34463.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005.
Includes bibliographical references (p. 127-132).
Streaming video applications must be able to withstand the potentially harsh conditions present on best-effort networks like the Internet, including variations in available bandwidth, packet losses, and delay. Multiple description (MD) video coding is one approach that can be used to reduce the detrimental effects caused by transmission over best-effort networks. In a multiple description system, a video sequence is coded into two or more complementary streams in such a way that each stream is independently decodable. The quality of the received video improves with each received description, and the loss of any one of these descriptions does not cause complete failure. A number of approaches have been proposed for MD coding, where each provides a different tradeoff between compression efficiency and error resilience. How effectively each method achieves this tradeoff depends on network conditions as well as on the characteristics of the video itself. This thesis proposes an adaptive MD coding approach that adapts to changing conditions through the use of MD mode selection. The encoder in this system is able to accurately estimate the expected end-to-end distortion, accounting for both compression and packet-loss-induced distortions, as well as for the bursty nature of channel losses and the effective use of multiple transmission paths.
With this model of the expected end-to-end distortion, the encoder selects between MD coding modes in a rate-distortion (R-D) optimized manner to most effectively trade off compression efficiency for error resilience. We show how this approach adapts to both the local characteristics of the video and to network conditions, and demonstrate the resulting gains in performance using an H.264-based adaptive MD video coder. We also analyze the sensitivity of this system to imperfect knowledge of channel conditions and explore the benefits of using such a system with both single and multiple paths.
by Brian A. Heng.
Ph.D.
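The basic multiple-description property, that each stream is independently decodable and quality improves with each received description, can be illustrated with a minimal hypothetical example: temporal splitting of a 1-D signal into even and odd samples, with nearest-neighbor concealment when one description is lost.

```python
def md_encode(samples):
    """Split a signal into two independently decodable descriptions."""
    return samples[0::2], samples[1::2]

def md_decode(d0=None, d1=None):
    """Both descriptions: perfect reconstruction by interleaving.
    One description: conceal the lost one by repeating neighbors."""
    if d0 is not None and d1 is not None:
        out = [0] * (len(d0) + len(d1))
        out[0::2], out[1::2] = d0, d1
        return out
    desc = d0 if d0 is not None else d1
    return [s for s in desc for _ in (0, 1)]  # nearest-neighbor concealment

signal = [10, 12, 14, 16, 18, 20]
d0, d1 = md_encode(signal)
print(md_decode(d0, d1))   # [10, 12, 14, 16, 18, 20]  (no loss)
print(md_decode(d0=d0))    # [10, 10, 14, 14, 18, 18]  (d1 lost, degraded)
```

Losing either description degrades quality gracefully instead of causing complete failure; the thesis's contribution is choosing, per region and per channel state, how much redundancy of this kind to spend.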
37

Jevtić, Ana Ph D. Massachusetts Institute of Technology. "Cyber-attack detection and resilient state estimation in power systems." Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/127025.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, May, 2020
Cataloged from the official PDF of thesis.
Includes bibliographical references (pages 99-108).
Many critical infrastructures, such as transportation networks, electric energy networks, and health care, are becoming highly integrated with information and communication technology in order to be more efficient and reliable. These cyber-physical systems (CPS) now face an increasing threat of cyber-attacks. Intelligent attackers can leverage their knowledge of the system and their disruption and disclosure resources to critically damage the system while remaining undiscovered. In this dissertation, we develop a defense strategy with the ability to uncover malicious and intelligent attacks and enable resilient operation of cyber-physical systems. Specifically, we apply this defense strategy to power systems, described by linear frequency dynamics around the nominal operating point. Our methodology is based on the notion of data aggregation as a tool for extracting internal information about the system that may be unknown to the attacker. As the first step toward resilience and security, we propose several methods for active attack detection in cyber-physical systems. In one approach we design a clustering-based moving-target active detection algorithm and evaluate it against stealthy attacks on the 5-bus and 24-bus power grids. Next, we consider an approach based on Interaction Variables (IntVar), another intuitive way to extract internal information in power grids. We evaluate the effectiveness of this approach on Automatic Generation Control (AGC), a vital control mechanism in today's power grid. After an attack has been detected, mitigation procedures must be put in place to allow continued reliable operation or graceful degradation of the power grid. To that end, we develop a resilient state estimation algorithm that provides the system operator with situational awareness in the presence of widespread coordinated cyber-attacks, when many system measurements may become unavailable.
by Ana Jevtić.
Ph. D.
Ph.D. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
38

Nilamboor, Sanjay N. "A Study on Performance Binning in Error Resilient Circuits." University of Cincinnati / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1427798251.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Fargo, Farah Emad. "Resilient Cloud Computing and Services." Diss., The University of Arizona, 2015. http://hdl.handle.net/10150/347137.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Cloud computing is emerging as a new paradigm that aims at delivering computing as a utility. For the cloud computing paradigm to be fully adopted and effectively used, it is critical that the security mechanisms are robust and resilient to malicious faults and attacks. Securing the cloud is a challenging research problem because the cloud suffers from the current cybersecurity problems of computer networks and data centers, plus the additional complexity introduced by virtualization, multi-tenant occupancy, remote storage, and cloud management. It is widely accepted that we cannot build software and computing systems that are free from vulnerabilities and that cannot be penetrated or attacked. Furthermore, it is widely accepted that cyber-resilient techniques are the most promising solutions to mitigate cyberattacks and change the game to the advantage of the defender over the attacker. Moving Target Defense (MTD) has been proposed as a mechanism to make it extremely challenging for an attacker to exploit existing vulnerabilities by varying different aspects of the execution environment. By continuously changing the environment (e.g., programming language, operating system, etc.), we can reduce the attack surface; consequently, attackers will have very limited time to figure out the current execution environment and the vulnerabilities to be exploited. In this dissertation, we present a methodology to develop an Autonomic Resilient Cloud Management (ARCM) framework based on MTD and autonomic computing. The proposed research utilizes the following capabilities: Software Behavior Obfuscation (SBO), replication, diversity, and Autonomic Management (AM). SBO employs spatiotemporal behavior hiding or encryption and MTD to make software components change their implementation versions and resources randomly to avoid exploitation and penetration. 
Diversity and random execution are achieved by using AM to randomly "hot"-shuffle multiple functionally equivalent, behaviorally different software versions at runtime (e.g., a software task can have multiple versions implemented in different languages and/or run on different platforms). The execution environment encryption makes it extremely difficult for an attacker to disrupt normal operation of the cloud. In this work, we evaluated the performance overhead and effectiveness of the proposed ARCM approach in securing and protecting a wide range of cloud applications such as MapReduce and scientific and engineering applications.
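The "hot" shuffling of functionally equivalent, behaviorally different versions can be sketched as follows. This is a hypothetical miniature: real ARCM variants would differ in language and platform, not merely in implementation.

```python
import random

# Three behaviorally different but functionally equivalent variants
# of the same task (here: the sum of squares below n).
def variant_loop(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

def variant_comprehension(n):
    return sum(i * i for i in range(n))

def variant_closed_form(n):
    return (n - 1) * n * (2 * n - 1) // 6

class MovingTargetExecutor:
    """Autonomic manager that randomly hot-shuffles the active variant
    after every call, shrinking the attacker's reconnaissance window."""
    def __init__(self, variants, seed=None):
        self.variants = list(variants)
        self.rng = random.Random(seed)
        self.active = self.rng.choice(self.variants)

    def __call__(self, *args):
        result = self.active(*args)
        self.active = self.rng.choice(self.variants)  # hot shuffle
        return result

executor = MovingTargetExecutor(
    [variant_loop, variant_comprehension, variant_closed_form], seed=1)
# Results are identical regardless of which variant happens to run.
print([executor(10) for _ in range(5)])  # [285, 285, 285, 285, 285]
```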
40

Bangalore, Satyan Ramdas. "Novel prediction and end-to-end estimation techniques for error resilient video coding." Thesis, McGill University, 2011. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=104640.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In recent years, video communication over packet-switched networks such as the Internet has become universal and very important. These networks currently provide very limited or no end-to-end quality of service. Transmission of compressed video over these networks is thus highly susceptible to errors and packet losses. This problem is severe and leads to a substantial deterioration of received video quality. This dissertation investigates end-to-end distortion estimation techniques and novel prediction schemes for enhancing the error resilience of video transmission over unreliable networks, such as the Internet and wireless networks. The major objective is to balance the tradeoff between coding efficiency and robustness within a rate-distortion framework, so that a good reproduction quality is achieved under the bit-rate budget and network constraints. The overall quality of video transmission depends on several factors, including the source and channel coding schemes, channel constraints, and decoder recovery schemes. The first part of this dissertation is motivated by this fact and focuses on an end-to-end approach. An efficient algorithm is derived to estimate the end-to-end distortion of compressed video at the encoder. This algorithm accounts for the effects of quantization, packet loss, temporal and spatial error propagation, and error concealment. This estimate is used to choose the optimal mode for coding each macroblock in a video frame in order to minimize the end-to-end distortion. The next part of the dissertation concentrates on a low-complexity end-to-end distortion model for applications and devices with less computational power. Specifically, a low-complexity frame-level transmission distortion model is proposed for the latest state-of-the-art video coding standard, H.264/AVC. This model is suitable for performing error control through suitable resource allocation for real-time video applications. 
Comparisons are made with similar methods in the literature, revealing the increased accuracy of our proposed solution. The final part of the dissertation focuses on modifying the conventional motion-compensated prediction (MCP) framework of a video codec to improve the error resilience of H.264/AVC when transmitted over these networks. The entire MCP is redesigned at both the encoder and decoder, and three novel prediction schemes are proposed to achieve a good tradeoff between efficiency and robustness.
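The loss-aware mode decision at the heart of the first part can be sketched as a Lagrangian minimization over the estimated end-to-end distortion. The mode names, rates, and distortion numbers below are purely illustrative, not values from the thesis.

```python
def select_mode(modes, p_loss, lam):
    """Pick the macroblock coding mode minimizing
    expected end-to-end distortion + lambda * rate.
    modes: list of (name, rate, coding_distortion, loss_distortion)."""
    def lagrangian_cost(mode):
        name, rate, d_coding, d_loss = mode
        expected_d = (1 - p_loss) * d_coding + p_loss * d_loss
        return expected_d + lam * rate
    return min(modes, key=lagrangian_cost)[0]

modes = [
    ("intra", 10.0, 2.0, 4.0),   # costly but stops error propagation
    ("inter", 3.0, 2.0, 40.0),   # cheap but propagates channel errors
]
print(select_mode(modes, p_loss=0.0, lam=0.5))  # inter (clean channel)
print(select_mode(modes, p_loss=0.3, lam=0.5))  # intra (lossy channel)
```

As the loss probability rises, the optimizer shifts spending from compression efficiency toward robustness, which is exactly the tradeoff the dissertation balances.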
41

Butler, Bryan P. (Bryan Philip). "A fault-tolerant shared memory system architecture for a Byzantine resilient computer." Thesis, Massachusetts Institute of Technology, 1989. http://hdl.handle.net/1721.1/13360.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1989.
Includes bibliographical references (leaves 145-147).
by Bryan P. Butler.
M.S.
42

Xu, Zhiheng. "Cross-Layer Design for Secure and Resilient Control of Cyber-Physical Systems in Smart Cities." Thesis, New York University Tandon School of Engineering, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=10840627.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:

With the rapid development of smart cities, there is a growing need to integrate physical systems, ranging from large-scale infrastructures to small embedded systems, with networked communications. The integration of the physical and cyber systems forms a Cyber-Physical System (CPS). The architecture of cyber-physical systems brings many advantages; for example, the cyber networks facilitate information exchange among multiple systems. Despite the benefits of a CPS, its cyber-physical nature exposes the system to cyber-physical attacks, which aim to damage the physical layer (e.g., physical devices and equipment) through the cyber network. Even though researchers have studied cybersecurity issues for decades, it is challenging to use traditional technologies to protect CPSs because of their cyber-physical nature. For instance, conventional information technologies are, in general, insufficient to guarantee the control performance of the physical layer.

Due to the new challenges in CPSs, in Part I we introduce a cross-layer design to achieve security and resilience for CPSs. In our basic framework, we combine various technical tools and methods to capture the differing properties of the cyber and physical layers. In Part II, we address the challenges of cloud-enabled systems (e.g., networked sensing systems or control systems), which outsource their massive computations to a cloud server with extensive computational resources. Cloud-enabled systems introduce new challenges, which arise from the trustworthiness of the cloud and the cyber-physical connections between the control system and the cloud. To address these issues, we leverage control theory and cryptography to develop secure mechanisms for the different layers. For control systems, we use a Model Predictive Control (MPC) approach to develop the controller. For large-scale sensing networks, we use a Kalman filter to achieve massive data assimilation. To guarantee security in the outsourcing process, we establish homomorphic encryption based on customized and standard encryption schemes. The homomorphic encryption allows the cloud-enabled systems to achieve data privacy during the outsourcing process. Finally, we use an Unmanned Aerial Vehicle (UAV) and a large-scale sensing network in our numerical experiments to corroborate our analytical results.
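A minimal sketch of the homomorphic-outsourcing idea, using textbook Paillier encryption with toy parameters (the thesis's customized scheme may differ, and the primes here are far too small to be secure): the cloud can add encrypted measurements without ever seeing the plaintexts.

```python
import math
import random

# Toy Paillier keypair (insecure demonstration primes).
p, q = 61, 53
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
mu = pow(lam, -1, n)

rng = random.Random(0)

def encrypt(m):
    r = rng.randrange(1, n)
    while math.gcd(r, n) != 1:          # blinding factor coprime to n
        r = rng.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

a, b = 42, 58
# The untrusted cloud multiplies ciphertexts; the owner decrypts the sum.
aggregate = (encrypt(a) * encrypt(b)) % n2
print(decrypt(aggregate))  # 100 == a + b, computed without exposing a or b
```

Additive homomorphism of this kind is what lets a cloud run sums and weighted averages, e.g. pieces of a Kalman update, over data it cannot read.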

The growing complexity of CPSs makes it challenging and costly to achieve perfect security. Hence, we aim to find the optimal protection for the systems given limited resources. Game theory provides mathematical tools and models for investigating strategic decision making among multiple players, where decision makers compete for resources. In Part III, we use game-analytical tools to develop cross-layer strategies to defend CPSs from specific attacks. Reflecting the features of the specific applications, we use different game models to establish security mechanisms based on the various requirements.

The first application of the game framework is the networked 3D printer. As a result of the high costs of 3D-printing infrastructure, outsourcing production to third parties specializing in the 3D-printing process becomes necessary. The integration of a 3D-printing system with networked communications constitutes a cyber-physical system, bringing new security challenges. Adversaries can exploit the vulnerabilities of networks to damage the physical parts of the system. To address these issues, at the physical layer, we use a Markov jump system to model the system and develop a robust control policy to deal with uncertainties. At the cyber layer, we use a FlipIt game to model the contention between the defender and the attacker for control of the 3D-printing system. To connect these two layers, we develop a Stackelberg framework to capture the interactions between the cyber-layer attacker-defender game and the physical-layer controller-disturbance game, and define a new equilibrium concept that captures the interdependence of the zero-sum and FlipIt games. We present numerical examples to demonstrate the computation of the equilibria and design defense strategies for 3D printers as a tradeoff between security and robustness.

The second application is the train control system. To meet increasing railway-transportation demand, researchers have designed a new train control system, the communication-based train control (CBTC) system, to maximize the capacity of train lines by reducing the headway between trains. However, the wireless communications expose the CBTC system to new security threats. Due to the cyber-physical nature of the CBTC system, a jamming attack can damage the physical part of the train system by disrupting the communications. To address this issue, we develop a secure framework to mitigate the impact of jamming attacks based on a security criterion. At the cyber layer, we use a multi-channel model to enhance the reliability of the communications and develop a zero-sum stochastic game to capture the interactions between the transmitter and the jammer. We present analytical results and use dynamic programming to find the equilibrium of the stochastic game. (Abstract shortened by ProQuest.)

43

George, Jason. "Harnessing resilience: biased voltage overscaling for probabilistic signal processing." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/42812.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
A central component of modern computing is the idea that computation requires determinism. Contrary to this belief, the primary contribution of this work shows that useful computation can be accomplished in an error-prone fashion. Focusing on low-power computing and the increasing push toward energy conservation, the work seeks to sacrifice accuracy in exchange for energy savings. Probabilistic computing forms the basis for this error-prone computation by diverging from the requirement of determinism and allowing for randomness within computing. Implemented as probabilistic CMOS (PCMOS), the approach realizes enormous energy savings in applications that require probability at an algorithmic level. Extending probabilistic computing to applications that are inherently deterministic, the biased voltage overscaling (BIVOS) technique presented here constrains the randomness introduced through PCMOS. Doing so, BIVOS is able to limit the magnitude of any resulting deviations and realizes energy savings with minimal impact to application quality. Implemented for a ripple-carry adder, array multiplier, and finite-impulse-response (FIR) filter, a BIVOS solution substantially reduces energy consumption and does so with improved error rates compared to an energy-equivalent reduced-precision solution. When applied to H.264 video decoding, a BIVOS solution is able to achieve a 33.9% reduction in energy consumption while maintaining a peak signal-to-noise ratio of 35.0 dB (compared to 14.3 dB for a comparable reduced-precision solution). While the work presented here focuses on a specific technology, the technique realized through BIVOS has far broader implications. It is the departure from the conventional mindset that useful computation requires determinism that represents the primary innovation of this work. With applicability to emerging and yet-to-be-discovered technologies, BIVOS has the potential to contribute to computing in a variety of fashions.
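The BIVOS intuition, spending the error budget on low-order bits where deviations are small, can be illustrated with a hypothetical noisy ripple-carry adder in which each sum bit flips with a bit-position-dependent probability (a behavioral sketch, not a circuit-level model).

```python
import random

def noisy_add(a, b, flip_probs, rng):
    """8-bit ripple-carry adder whose i-th sum bit flips with
    probability flip_probs[i] (modeling voltage-overscaling errors)."""
    carry, result = 0, 0
    for i, prob in enumerate(flip_probs):
        abit, bbit = (a >> i) & 1, (b >> i) & 1
        s = abit ^ bbit ^ carry
        carry = (abit & bbit) | (carry & (abit ^ bbit))
        if rng.random() < prob:
            s ^= 1  # bit error induced by the overscaled supply voltage
        result |= s << i
    return result

def mean_error(flip_probs, trials=4000, seed=0):
    """Average absolute deviation from the exact sum over random inputs."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        a, b = rng.randrange(128), rng.randrange(128)
        total += abs(noisy_add(a, b, flip_probs, rng) - (a + b))
    return total / trials

uniform = [0.1] * 8            # overscale every bit stage equally
biased = [0.8] + [0.0] * 7     # same average error rate, LSB only
print(mean_error(uniform) > mean_error(biased))  # True: bias shrinks deviations
```

With the same average bit-error rate, concentrating errors in the least significant bit keeps deviations near one count, while uniform overscaling lets errors land in high-order bits and blow up the output error, which is the effect BIVOS exploits.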
44

Graham, John Kyle. "A payload-centric approach towards resilient and robust electric-propulsion enabled constellation mission design." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/112591.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis: S.B., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2017.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 187-189).
Recent studies have shown that distributed spacecraft missions, or constellations, can offer similar performance to monolithic satellite missions for lower cost and less risk. Additionally, recent developments in and implementation of electric propulsion (EP) technologies further the case for the use of constellations because they enable operational possibilities otherwise unavailable to satellites with chemical thrusters by reducing costly fuel requirements. Through more efficient fuel usage, EP allows for wide-scale rendezvous of satellites for refueling/maintenance as well as constellation reshuffling and orbit raising to recover system performance after losing a satellite. With these constellation-wide maneuvers at an operator's disposal, distributed spacecraft missions will be able to operate longer and will have more flexibility to adapt and respond to malfunctions in the constellation. This thesis analyzes the performance gains of distributed spacecraft missions that utilize EP by analyzing satellite constellations at both microscopic and macroscopic levels - first, by understanding how payloads of different types, when combined with higher power requirements for EP systems, impact and influence an individual satellite's design and mass, and then exploring how, within a 2D orbital plane, this individual satellite can use its greater endurance to move within the network and influence entire constellation performance. Together, these different levels of understanding provide the necessary framework to effectively design and analyze robust and effective constellations, regardless of mission type. A case study of the OneWeb global internet mission demonstrates that use of currently available electric propulsion technologies can save up to 3000 kg per plane over chemical thrusters and can completely eliminate the need for spare satellites for lifetime failure rates of up to 10%.
by John Kyle Graham.
S.B.
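The fuel-mass advantage that the abstract above attributes to electric propulsion follows directly from the Tsiolkovsky rocket equation: propellant mass falls off exponentially with specific impulse. The satellite mass, maneuver budget, and Isp values below are illustrative assumptions, not figures from the thesis.

```python
from math import exp

G0 = 9.80665  # standard gravity, m/s^2

def propellant_mass(dry_mass_kg, delta_v_ms, isp_s):
    """Tsiolkovsky rocket equation: propellant needed to deliver delta_v."""
    return dry_mass_kg * (exp(delta_v_ms / (isp_s * G0)) - 1.0)

# Hypothetical case: a 150 kg satellite performing 500 m/s of reshuffling
# and orbit-raising maneuvers over its lifetime.
chemical = propellant_mass(150.0, 500.0, isp_s=300.0)   # chemical thruster
electric = propellant_mass(150.0, 500.0, isp_s=1500.0)  # EP (e.g. Hall thruster)
print(f"chemical: {chemical:.1f} kg, electric: {electric:.1f} kg")
```

Roughly a fivefold reduction in propellant per satellite at these assumed values, which, multiplied across a full orbital plane, is the scale of savings the case study reports.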
45

Chia, Daniel Kim Boon. "Simulation of physical and media access control (MAC) for resilient and scalable wireless sensor networks." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2006. http://library.nps.navy.mil/uhtbin/hyperion/06Mar%5FChia.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (M.S. in Electrical Engineering)--Naval Postgraduate School, March 2006.
Thesis Advisor(s): Tri T. Ha, Weilian Su. "March 2006." Includes bibliographical references (p. 83-90). Also available online.
46

Xiao, Jimin. "Real-time interactive video streaming over lossy networks : high performance low delay error resilient algorithms." Thesis, University of Liverpool, 2013. http://livrepository.liverpool.ac.uk/14293/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
According to Cisco's latest forecast, two-thirds of the world's mobile data traffic and 62 percent of the consumer Internet traffic will be video data by the end of 2016. However, wireless networks and the Internet are unreliable, and video traffic may undergo packet loss and delay. Robust video streaming over unreliable networks, i.e., the Internet and wireless networks, is therefore of great importance in facing this challenge. Specifically, for real-time interactive video streaming applications, such as video conferencing and video telephony, the allowed end-to-end delay is limited, which makes robust video streaming an even more difficult task. This thesis investigates robust video streaming for real-time interactive applications, where the tolerated end-to-end delay is limited. Intra macroblock refreshment is an effective tool to stop error propagation in the prediction loop of the video decoder, whereas redundant coding is a commonly used method to prevent errors from occurring in video transmission over lossy networks. In this thesis two schemes that jointly use intra macroblock refreshment and redundant coding are proposed. In these schemes, in addition to intra coding, two redundant coding methods are added to enhance the transmission robustness of the coded bitstreams. The selection of error resilient coding tools, i.e., intra coding and/or redundant coding, and the parameters for redundant coding are determined using end-to-end rate-distortion optimization. Another category of methods providing error resilience is forward error correction (FEC) codes. FEC is widely studied for protecting streamed video over unreliable networks, with Reed-Solomon (RS) erasure codes as its most common implementation.
As a block-based error correcting code, on the one hand, enlarging the block size can enhance the performance of RS codes; on the other hand, a large block size leads to long delay, which is not tolerable for real-time video applications. In this thesis two sub-GOP (Group of Pictures, formed by an I-frame and all the following P/B-frames) based FEC schemes are proposed to improve the performance of Reed-Solomon codes for real-time interactive video applications. The first one, named DSGF (Dynamic Sub-GOP FEC Coding), is designed for the ideal case, where no transmission network delay is taken into consideration. The second one, named RVS-LE (Real-time Video Streaming scheme exploiting the Late- and Early-arrival packets), is more practical: the video transmission network delay is considered, and the late- and early-arrival packets are fully exploited. In both approaches, the sub-GOP, which contains more than one video frame, is dynamically tuned and used as the RS coding block to obtain optimal performance. Although the overall error resilient performance of the proposed DSGF approach is higher than that of conventional FEC schemes, which protect the streamed video frame by frame, its video quality fluctuates within the sub-GOP. To mitigate this problem, another real-time video streaming scheme, using a randomized expanding Reed-Solomon code, is proposed. In this scheme, the Reed-Solomon coding block includes not only the video packets of the current frame, but also all the video packets of previous frames in the current group of pictures (GOP). At the decoding side, the parity-check equations of the current frame are jointly solved with all the parity-check equations of the previous frames. Since video packets of the following frames are not encompassed in the RS coding block, no delay is incurred waiting for the video or parity packets of following frames at either the encoding or decoding side.
The main contribution of this thesis is investigating the trade-off between the video transmission delay caused by FEC encoding/decoding dependency, the FEC error-resilient performance, and the computational complexity. By leveraging the methods proposed in this thesis, proper error-resilient tools and system parameters can be selected based on the video sequence characteristics, the application requirements, and the available channel bandwidth and computational resources. For example, for applications that can tolerate relatively long delay, the sub-GOP based approach is a suitable solution; for applications where the end-to-end delay is stringent and computational resources are sufficient (e.g., a fast CPU), the randomized expanding Reed-Solomon code is the wiser choice.
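The block-size trade-off described in this abstract can be made concrete with a short sketch: an RS(n, k) erasure code fails when more than n - k of its n packets are lost, so pooling several frames into one sub-GOP block, at the same parity overhead, lowers the failure probability at the cost of added delay. The packet counts below are illustrative, not taken from the thesis.

```python
from math import comb

def rs_failure_prob(n, k, p):
    """P(block unrecoverable) for an RS(n, k) erasure code under
    i.i.d. packet loss with probability p: more than n - k losses."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(n - k + 1, n + 1))

p = 0.10                                # 10% packet loss rate
per_frame = rs_failure_prob(10, 8, p)   # one frame: 8 data + 2 parity packets
sub_gop = rs_failure_prob(40, 32, p)    # 4 frames pooled, same 25% overhead
print(f"per-frame: {per_frame:.4f}, sub-GOP: {sub_gop:.4f}")
```

At identical redundancy ratios, the pooled block rides out loss bursts that defeat per-frame protection, which is exactly the delay-versus-robustness tension the proposed schemes optimize.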
47

Pawar, Sohum(Sohum Parag). "Resilient decarbonization for the United States : lessons for electric systems from a decade of extreme weather." Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/127175.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis: S.M. in Technology and Policy, Massachusetts Institute of Technology, School of Engineering, Institute for Data, Systems, and Society, May 2020.
Cataloged from the official PDF of thesis.
Includes bibliographical references (pages 159-171).
The past decade has seen an unprecedented surge of climate change-driven extreme weather events that have wrought over $800 billion in damage and taken more than 5,200 lives across the United States -- a trend that appears poised to intensify. At the same time, the need for a large-scale effort to decarbonize the U.S. electric power system has become clear, along with the growing climate risks and impacts that any such effort will face. This thesis argues that the principles of resilience can play a valuable role by enabling the decarbonization of the U.S. electric system in the face of the escalating risks and impacts of climate-driven extreme weather. By emphasizing targeted hardening, proactive planning, graceful failure, and effective recoveries in the design, operation, and oversight of electric systems in the United States, we can both protect against growing climate risks and catalyze decarbonization efforts -- an integrated process we call resilient decarbonization. This work seeks to inform present and future resilient decarbonization efforts by examining the lessons of the past decade of extreme weather and its impact on electric systems in the United States. To do so, we consider three cases: Hurricane Maria, which struck Puerto Rico in 2017, causing the world's second-largest blackout; the 2017-2019 Northern California wildfire seasons, which sent the nation's largest investor-owned utility into bankruptcy and remain the most devastating on record; and Superstorm Sandy, which served as a wakeup call for the New York/New Jersey area when it made a sudden left turn towards the region in 2012. We find that resilient decarbonization, while a challenging process to set into motion, does in fact meet its dual mission of protecting electric systems against growing climate risks while enabling their decarbonization.
We also examine the ways in which electric system institutions take climate risks into account, the strengths and weaknesses of resilience-based measures for electric systems, and overarching questions about the role of electricity and electric utilities in American society today.
by Sohum Pawar.
S.M. in Technology and Policy
Massachusetts Institute of Technology, School of Engineering, Institute for Data, Systems, and Society
48

Cano-Andrade, Sergio. "Thermodynamic Based Framework for Determining Sustainable Electric Infrastructures as well as Modeling of Decoherence in Quantum Composite Systems." Diss., Virginia Tech, 2014. http://hdl.handle.net/10919/25878.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In this dissertation, applications of thermodynamics at the macroscopic and quantum levels of description are developed. Within the macroscopic level, an upper-level Sustainability Assessment Framework (SAF) is proposed for evaluating the sustainable and resilient synthesis/design and operation of sets of small renewable and non-renewable energy production technologies coupled to power production, transmission, and distribution networks via microgrids. The upper-level SAF is developed in accord with the four pillars of sustainability, i.e., economic, environmental, technical, and social. A superstructure of energy producers with a fixed transmission network initially available is synthesized based on the day with the highest energy demand of the year, resulting in an optimum synthesis, design, and off-design network configuration. The optimization is developed in a quasi-stationary manner on an hourly basis, including partial-load behavior for the producers. Since sustainability indices are typically not expressed in the same units, multicriteria decision making methods are employed to obtain a composite sustainability index. Within the quantum level of description, steepest-entropy-ascent quantum thermodynamics (SEA-QT) is used to model the phenomenon of decoherence. The two smallest microscopic composite systems encountered in Nature are studied. The first of these is composed of two two-level-type particles, while the second one is composed of a two-level-type particle and an electromagnetic field. Starting from a non-equilibrium state of the composite and for each of the two different composite systems, the time evolution of the state of the composite, as well as that of the reduced and locally-perceived states of the constituents, is traced along the relaxation towards stable equilibrium at constant system energy.
The modeling shows how the initial entanglement and coherence between constituents are reduced during the relaxation towards a state of stable equilibrium. When the constituents are non-interacting, the initial coherence is lost once stable equilibrium is reached. When they are interacting, the coherence in the final stable equilibrium state is only that due to the interaction. For the atom-photon field composite system, decoherence is compared with data obtained experimentally by the CQED group at Paris. The SEA-QT method applied in this dissertation provides an alternative and comprehensive explanation to that obtained with the "open system" approach of Quantum Thermodynamics (QT) and its associated quantum master equations of the Kossakowski-Lindblad-Gorini-Sudarshan type.
Ph. D.
49

Demirtas, Ali Murat. "Error Resilient Coding Using Flexible Macroblock Ordering In Wired And Wireless Communications." Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/12609860/index.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Error resilient coding tools are methods to avoid or reduce the amount of corruption in video by altering the encoding algorithm. One of them is Flexible Macroblock Ordering (FMO), which allows the macroblocks of a frame to be ordered flexibly. Six FMO types have predefined ordering patterns, while the last, called the explicit type, can take any order. In this thesis two explicit-type algorithms, one of which is new, are explained, and the performance of different FMO types in wired and wireless communication is evaluated. The first algorithm separates the important blocks into separate packets, thereby equalizing the importance of the packets. The proposed method allocates the important macroblocks according to a checkerboard pattern and employs unequal error protection to protect them more strongly. The simulations are performed for wired and wireless communication, and Forward Error Correction (FEC) is used in the second stage of the simulations. Lastly, the results of the new algorithms are compared with the performance of the other FMO types. According to the simulations, the proposed algorithm performs better than the others when the error rate is very high and FEC is employed.
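The checkerboard allocation this abstract describes can be sketched as an explicit FMO macroblock-to-slice-group map. This is a simplified illustration of the pattern only, not the thesis's encoder integration or its unequal-error-protection stage.

```python
def checkerboard_fmo_map(mb_cols, mb_rows):
    """Explicit FMO map assigning each macroblock (x, y) to slice group
    0 or 1 in a checkerboard pattern, so that if the packet carrying one
    group is lost, every missing macroblock still has its four spatial
    neighbours available in the other group for error concealment."""
    return [[(x + y) % 2 for x in range(mb_cols)] for y in range(mb_rows)]

# A QCIF frame (176x144 pixels) is 11 x 9 macroblocks of 16x16 pixels.
fmo_map = checkerboard_fmo_map(11, 9)
for row in fmo_map[:3]:
    print(row)
```

Each slice group is then packetized separately, which is what makes per-packet losses concealable rather than catastrophic.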
50

Souto, Laiz. "Data-driven approaches for event detection, fault location, resilience assessment, and enhancements in power systems." Doctoral thesis, Universitat de Girona, 2021. http://hdl.handle.net/10803/671402.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis presents the study and development of distinct data-driven techniques to support event detection, fault location, and resilience assessment towards enhancements in power systems. It is divided into three main parts as follows. The first part investigates improvements in power system monitoring and event detection methods, with a focus on dimensionality reduction techniques in wide-area monitoring systems. The second part focuses on contributions to fault location tasks in power distribution networks, relying on information about the network topology and its electrical parameters for short-circuit simulations over a range of scenarios. The third part assesses enhancements in power system resilience to high-impact, low-probability events associated with extreme weather conditions and human-made attacks, relying on information about the system topology combined with simulations of representative scenarios for impact assessment and mitigation. Overall, the proposed data-driven algorithms contribute to event detection, fault location, and resilience assessment, relying on electrical measurements recorded by intelligent electronic devices, historical data of past events, and representative scenarios, together with information about the network topology, electrical parameters, and operating status. The validation of the algorithms, implemented in MATLAB, is based on computer simulations using network models implemented in OpenDSS and Simulink.

To the bibliography