Dissertations / Theses on the topic 'Computers Reliability Testing'

Consult the top 26 dissertations / theses for your research on the topic 'Computers Reliability Testing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Kamara, Elisha Tingha. "Testing and inverse modelling for solder joint reliability assessment." Thesis, University of Greenwich, 2013. http://gala.gre.ac.uk/11996/.

Full text
Abstract:
As the trends in green manufacturing, miniaturization, and enhanced functionality of electronic devices continue without any sign of slowing down, the reliability of lead-free solder joints of diminishing size has become an ever greater challenge for design engineers and the electronics manufacturing industry. In order to predict the reliability of solder joints accurately, it is necessary to develop techniques that test solder joints efficiently under conditions comparable to those of the application environment. Now that computer simulation has become an indispensable tool in many areas, it is also very important that suitable material models are available for solder materials, so that virtual design tools can be used to predict device reliability accurately. The aim of this work was to develop vibration and cyclic shear test methods and equipment, to use computer modelling techniques in the analysis of lead-free solder joints in microelectronic devices, and to develop an inverse Finite Element technique that uses experimental data to obtain constitutive laws for lead-free solder alloys.

In the development of the vibration test machine, a prototype that uses piezoelectric cells as actuators for the loading was modelled using the Finite Element Analysis method, and the behaviour of the test specimen, which is similar to a BGA solder joint in dimensions, was analysed. The static and dynamic response of the equipment was modelled and compared with experimental results. A novel multi-joint test specimen, in which the solder deformation is similar to that in the solder joints of BGAs under thermal loading, was analysed so that test results can be interpreted and the specimens and loading conditions can be improved. The response of the joints reinforced the understanding that the interface between the solder and the copper or printed circuit board is the most likely region for crack growth and hence failure of the package. In the inverse Finite Element Analysis of solder joints, cyclic shear test data and Finite Element Analysis methods were used to improve Anand's visco-plastic constitutive law for the SAC solder specimens under the test conditions. To reduce the possibility of spurious experimental data skewing the entire analysis, a technique was employed that uses limited experimental datasets in determining the material parameters. Simulation results using the new constitutive law showed a significant improvement in accuracy.

The main contributions of this research work to the manufacturing, testing and virtual design of solder joints can be summarised as follows. (1) A unique, dedicated high-cycle fatigue test machine, especially suited to testing very small solder joints and other surface-mount technologies under vibration conditions, has been successfully designed and manufactured; this is expected to enhance the capability of the industry in solder joint testing. (2) The behaviour of individual solder joints in a BGA-like multi-joint test specimen under isothermal cyclic loading has been characterised, making the prediction of solder properties more accurate and efficient. (3) A novel procedure based on inverse Finite Element Analysis for obtaining the nonlinear creep parameters of, for example, Anand's model has been proposed and demonstrated. This method reduces the effect of spurious datasets and the reliance on the skill of the individuals who perform the analysis, and makes it possible for small institutions with limited resources to obtain the necessary model parameters for virtual product design and reliability analysis.
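For readers unfamiliar with inverse parameter identification, the loop can be pictured, in a greatly simplified form, as a least-squares fit of constitutive parameters against measured cyclic shear data, with the finite-element solver wrapped inside the objective function. The sketch below is only illustrative: simulate_cycle, its two parameters and all numbers are stand-ins, not the thesis's FE model or Anand parameter set.

```python
# Greatly simplified sketch of inverse parameter identification: fit constitutive
# parameters by least squares against measured cyclic shear data, with the FE
# solver replaced by a toy stand-in. simulate_cycle, its two parameters and all
# numbers are illustrative only, not the thesis's FE model or Anand parameter set.
import numpy as np
from scipy.optimize import least_squares

def simulate_cycle(params, strain_history):
    """Stand-in for an FE simulation of one shear cycle: returns predicted stress.
    A toy saturating stress-strain law keeps the example runnable; a real study
    would call the finite-element package here instead."""
    s0, h0 = params  # illustrative: saturation stress and hardening rate
    return s0 * np.tanh(h0 * strain_history)

def residuals(params, strain_history, measured_stress):
    # Spurious points could be down-weighted or excluded here, mirroring the use
    # of limited, trusted experimental datasets described above.
    return simulate_cycle(params, strain_history) - measured_stress

# Synthetic "experimental" data for the demonstration.
strain = np.linspace(0.0, 0.02, 50)
measured = simulate_cycle([30.0, 150.0], strain) + np.random.normal(0.0, 0.3, strain.size)

fit = least_squares(residuals, x0=[10.0, 50.0], args=(strain, measured))
print("identified parameters:", fit.x)
```

In the actual procedure, the stand-in function would invoke the FE package, and the residual would be evaluated only on the trusted subsets of the experimental data mentioned above.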
2

Powers, Brenda Joy. "A test methodology for reliability assessment of collaborative tools." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2004. http://library.nps.navy.mil/uhtbin/hyperion/04Sep%5FPowers.pdf.

Full text
3

Mondal, Subhajit. "Extension of E(Θ) metric for evaluation of reliability." Kansas State University, 2005. http://hdl.handle.net/2097/144.

Full text
Abstract:
Master of Science
Department of Computing and Information Sciences
David A. Gustafson
Reliability calculated from running test cases refers to the probability of the software not generating faulty output after the testing process. The metric used to measure this reliability is the E(Θ) value. The concept of E(Θ) gives precise formulae for calculating the probability of failure of software after testing, whether debug or operational testing. This report aims at extending E(Θ) into the realm of multiple faults spread across multiple sub-domains. This generalization involves the introduction of a new set of formulae for E(Θ) calculation that can account for faults spread over both single and multiple sub-domains in the code. The validity of the formulae is verified by matching the theoretical results against empirical data generated by running a test case simulator. The report further examines the possibility of an upper-bound calculation on the derived formulae and its possible ramifications.
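The kind of validation described above can be pictured with a small Monte Carlo simulator. The sketch below is generic: it estimates the expected post-testing failure probability of a program whose input domain is partitioned into sub-domains with given failure rates and an operational usage profile. It does not reproduce the E(Θ) formulae derived in the report, and all rates, profiles and test counts are invented.

```python
# Generic Monte Carlo sketch: estimate the expected probability that software which
# passed a number of random tests per sub-domain still fails on an operational
# input. The failure rates, usage profile and test counts are invented; this does
# not reproduce the E(Theta) formulae derived in the report.
import random

def post_test_failure_probability(fail_rates, tests_per_subdomain, usage_profile):
    # Simulate testing: if any test in a sub-domain hits a fault, assume the fault
    # is found and fixed, so that sub-domain no longer contributes to failure.
    remaining = []
    for rate in fail_rates:
        detected = any(random.random() < rate for _ in range(tests_per_subdomain))
        remaining.append(0.0 if detected else rate)
    # Probability that one operational input, drawn by the usage profile, fails.
    return sum(p * r for p, r in zip(usage_profile, remaining))

fail_rates = [0.05, 0.0, 0.2, 0.01]   # per-sub-domain failure rates (invented)
profile = [0.4, 0.3, 0.2, 0.1]        # operational usage profile (invented)
trials = [post_test_failure_probability(fail_rates, 10, profile) for _ in range(20000)]
print("estimated expected post-test failure probability:", sum(trials) / len(trials))
```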
4

Koneru, Narendra. "Quantitative analysis of domain testing effectiveness." [Johnson City, Tenn. : East Tennessee State University], 2001. http://etd-submit.etsu.edu/etd/theses/available/etd-0404101-011933/unrestricted/koneru0427.pdf.

Full text
5

Vasudev, R. Sashin, and Ashok Reddy Vanga. "Accuracy of Software Reliability Prediction from Different Approaches." Thesis, Blekinge Tekniska Högskola, Avdelningen för interaktion och systemdesign, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-1298.

Full text
Abstract:
Many models have been proposed for software reliability prediction, but none of these models can capture a sufficient range of software characteristics. We have proposed a mixed approach, using both analytical and data-driven models, for assessing the accuracy of reliability prediction through a case study. This report follows a qualitative research strategy. Data was collected from a case study conducted at three different companies. Based on the case study, an analysis is made of the approaches used by the companies, supplemented by other data related to the organizations' Software Quality Assurance (SQA) teams. Of the three organizations, the first two used in the case study are working on reliability prediction, while the third is a growing company developing a product with less focus on quality. Data collection was carried out by interviewing an employee of each organization who leads a team and has been in a managerial position for at least the last two years.
6

Sekgweleo, Tefo Gordon. "A decision support system framework for testing and evaluating software in organisations." Thesis, Cape Peninsula University of Technology, 2018. http://hdl.handle.net/20.500.11838/2772.

Full text
Abstract:
Thesis (DPhil (Informatics))--Cape Peninsula University of Technology, 2018.
Increasingly, organisations in South Africa and across the world rely on software for various reasons, such as competitiveness and sustainability. The software is either developed in-house or purchased off the shelf. Irrespective of how the software was acquired, organisations encounter challenges with it, from the implementation stage through to support and use. The challenges sometimes hinder, and are prohibitive to, the processes and activities that the software is intended to enable and support. The majority of the challenges encountered with software are attributed to the fact that it was not tested, or not appropriately tested, before implementation. Some of the challenges have been costly to many organisations, particularly in South Africa. As a result, some organisations have fallen short in their efforts towards growth, competitiveness and sustainability. The challenges manifest from the fact that there are no testing tools and methods that can be easily customised for an organisation's purposes. As a result, some organisations adopt multiple tools and methods for the same testing purposes, which has not solved the problem, as the challenges continue among South African organisations. This study was undertaken on the basis of the challenges stated above. The aim was to develop a decision support system framework which can be used for software testing by any organisation, owing to its flexibility for customisation. Interpretivist and inductive approaches were employed. Qualitative methods and the case study design approach were applied. Three South African organisations, a private enterprise, a public enterprise and a small to medium enterprise (SME), were used as cases in this study. A set of criteria was used to select the organisations. The analysis of the data was guided by two sociotechnical theories, actor network theory (ANT) and diffusion of innovation (DOI). The theories were applied complementarily because of their different focuses. Actor network theory focuses on actors, which are both human and non-human, on the heterogeneity of networks, and on the relationships between the actors within networks. This includes the interactions that happen at different moments as translated within the heterogeneous networks. Thus, ANT was employed to examine and gain a better understanding of the factors that influence software testing in organisations. DOI focuses on how new (fresh) ideas are diffused in an environment, with particular focus on the innovation-decision process, which comprises five stages: knowledge, persuasion, decision, implementation and confirmation. Findings from the data analysis of the three cases were further interpreted. Based on the interpretation, a decision support system framework was developed. The framework is intended to be of interest to software developers, software project managers and other stakeholders and, most importantly, to provide a guide to software testers in their task of testing software. Thus, this research is intended to be of interest and benefit to organisations and academia through its theoretical, practical and methodological contributions, as detailed in chapter seven (conclusion). In conclusion, even though this research is rigorous, comprehensive and holistic, there is room for future studies. I would like to propose that future research be undertaken in the area of measuring software testing. Also, sociotechnical theories such as structuration theory and the technology acceptance model should be considered in the analysis of such studies.
7

Rowberry, Hayden Cole. "A Soft-Error Reliability Testing Platform for FPGA-Based Network Systems." BYU ScholarsArchive, 2019. https://scholarsarchive.byu.edu/etd/7739.

Full text
Abstract:
FPGAs are frequently used in network systems to provide the performance and flexibility that is required of modern computer networks while allowing network vendors to bring products to market quickly. Like all electronic devices, FPGAs are vulnerable to ionizing radiation, which can cause applications operating on an FPGA to fail. These low-level failures can have a wide range of negative effects on the performance of a network system. As computer networks play a larger role in modern society, it becomes increasingly important that these soft errors are addressed in the design of network systems.

This work presents a framework for testing the soft-error reliability of FPGA-based networking systems. The framework consists of the NetFPGA development board, a custom traffic generator, and a custom high-speed JTAG configuration device. The NetFPGA development board is versatile and can be used to implement a wide range of network applications. The traffic generator is used to exercise the network system on the NetFPGA and to determine the health of that system. The JTAG configuration device is used to manage reliability experiments, to perform fault injection into the FPGA, and to monitor the NetFPGA during radiation tests.

This thesis includes soft-error reliability tests that were performed on an Ethernet switch network system. Using both fault injection and accelerated radiation testing, the soft-error sensitivity of the Ethernet switch was measured. The Ethernet switch design was then mitigated using triple modular redundancy (TMR) and duplication with compare (DWC). These mitigated designs were also tested and compared against the baseline design. Radiation testing shows that TMR provides a 5.05x improvement in reliability over the baseline design. DWC provides a 5.22x improvement in detectability over the baseline design without reducing the reliability of the system.
8

Shieh, Jung-Sheng. "Some applications of the Bechhofer-Kiefer-Sobel generalized sequential probability ratio test to software reliability testing." Diss., Georgia Institute of Technology, 1992. http://hdl.handle.net/1853/28928.

Full text
9

Kwok, Wing-hong, and 郭永康. "Streamlined and prioritized hierarchical relations: a technique for improving the effectiveness of the classification-tree methodology." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2001. http://hub.hku.hk/bib/B2975107X.

Full text
10

Thiraviam, Amar Raja. "Accelerated life testing of subsea equipment under hydrostatic pressure." Doctoral diss., University of Central Florida, 2010. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4525.

Full text
Abstract:
Accelerated Life Testing (ALT) is an effective method of demonstrating and improving product reliability in applications where the products are expected to perform for a long period of time. ALT accelerates a given failure mode by testing at amplified stress level(s) in excess of operational limits. Statistical analysis (parameter estimation) is then performed on the data, based on an acceleration model, to make life predictions at use level. The acceleration model thus forms the basis of the accelerated life testing methodology. Well-established acceleration models, such as the Arrhenius model and the Inverse Power Law (IPL) model, exist for key stresses such as temperature and voltage. But there are other stresses, like subsea pressure, where there is no clear model of choice. This research proposes, for the first time, a pressure-life (acceleration) model for life prediction under subsea pressure for key mechanical/physical failure mechanisms. Three independent accelerated tests were conducted and their results analyzed to identify the best model for the pressure-life relationship. The testing included material tests on standard coupons to investigate the effect of subsea pressure on key physical, mechanical, and electrical properties. Tests were also conducted at the component level on critical components that function as a pressure barrier. By comparing the likelihood values of multiple reasonable candidate models for the individual tests, the exponential model was identified as a good model for the pressure-life relationship. In addition to consistently providing a good fit across the three tests, the exponential model was also consistent with field data (validation with over 10 years of field data) and demonstrated several characteristics that enable robust life predictions in a variety of scenarios. In addition, the research also used Bayesian analysis to incorporate prior information from field and test data, to bolster the results and increase confidence in the predictions from the proposed model.
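The model comparison described above can be sketched, very roughly, as fitting candidate life-stress relationships to pressure-versus-life data and comparing how well each explains the data. Everything in the snippet below (pressures, lives, and the use-level prediction point) is invented for illustration rather than taken from the dissertation.

```python
# Illustrative comparison of candidate life-stress models for hydrostatic pressure,
# in the spirit of the likelihood comparison described above. All data values and
# the use-level pressure are invented for the example.
import numpy as np

pressure = np.array([30.0, 45.0, 60.0, 75.0, 90.0])      # stress level, MPa (invented)
life = np.array([5200.0, 2100.0, 900.0, 380.0, 150.0])   # hours to failure (invented)

# Exponential life-stress model: ln(life) is linear in pressure.
exp_fit = np.polyfit(pressure, np.log(life), 1)
exp_rss = np.sum((np.log(life) - np.polyval(exp_fit, pressure)) ** 2)

# Inverse power law: ln(life) is linear in ln(pressure).
ipl_fit = np.polyfit(np.log(pressure), np.log(life), 1)
ipl_rss = np.sum((np.log(life) - np.polyval(ipl_fit, np.log(pressure))) ** 2)

# With normally distributed log-life and equal parameter counts, the model with the
# smaller residual sum of squares also has the larger maximised likelihood.
print("exponential model RSS:      ", float(exp_rss))
print("inverse power law model RSS:", float(ipl_rss))

# Use-level life prediction with the exponential model (e.g. at 10 MPa).
use_pressure = 10.0
print("predicted use-level life (h):", float(np.exp(np.polyval(exp_fit, use_pressure))))
```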
ID: 029051131; System requirements: World Wide Web browser and PDF reader; Mode of access: World Wide Web; Thesis (Ph.D.)--University of Central Florida, 2010; Includes bibliographical references (p. 165-173).
Ph.D.
Doctorate
Department of Industrial Engineering and Management Systems
Engineering and Computer Science
11

Classen, Elizabeth Maria. "Investigation of the optimal response scale for personality measurement : computer–based testing / Elizabeth Maria Classen." Thesis, North-West University, 2011. http://hdl.handle.net/10394/6918.

Full text
Abstract:
To be able to use personality tests in the most reliable and valid manner, there are many considerations to be taken into account. Variables such as the population used, the culture of the test-takers, the mode of administration (whether pencil-and-paper or computer-based testing procedures), familiarity with computers when using computer-based tests, and the response format to be used when administering the personality questionnaire are but some of these considerations. Within South Africa it is that much more important to consider the mode of administration, whether pencil-and-paper or computer-based tests, as there are many groups of individuals who have been historically disadvantaged when it comes to the use of computers as a testing method. It is just as important to consider the response scale to be utilised when administering personality testing, as this may influence the results obtained and can affect the reliability and validity of these results. The objective of this study was to determine which response scale, dichotomous or polytomous, is the best to use when conducting computer-based personality testing. The questionnaire that was utilised was the South African Personality Inventory (SAPI); however, only items from the Soft-Heartedness cluster were employed, as the objective was not to test the questionnaire but to identify the most reliable and valid response scale to be used in conjunction with it. A convenience sampling approach was utilised and the questionnaire was administered to students who were available and able to take the test (N = 724). Descriptive statistics, factor analysis and Cronbach alpha coefficients were used to analyse the data obtained.
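Since the reliability analysis relies on Cronbach's alpha, a brief sketch of how that coefficient is computed may help. The item scores below are invented; the snippet is a generic illustration, not taken from the study.

```python
# Illustrative sketch: Cronbach's alpha, the internal-consistency statistic used to
# assess the reliability of a response scale. The item responses are invented.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: respondents x items matrix of numeric item responses."""
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

responses = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
])
print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")
```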
Thesis (M.Com. (Industrial Psychology))--North-West University, Potchefstroom Campus, 2012.
12

Sun, Boya. "PRECISION IMPROVEMENT AND COST REDUCTION FOR DEFECT MINING AND TESTING." Case Western Reserve University School of Graduate Studies / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=case1321827962.

Full text
13

Stoddard, Aaron Gerald. "Configuration Scrubbing Architectures for High-Reliability FPGA Systems." BYU ScholarsArchive, 2015. https://scholarsarchive.byu.edu/etd/5704.

Full text
Abstract:
Field Programmable Gate Arrays (FPGAs) are being used more frequently in space applications because of their reconfigurability and intensive processing capabilities. FPGAs in environments like space are susceptible to ionizing radiation, which can cause Single Event Upsets (SEUs) in the FPGA's configuration memory. These upsets may cause the programmed user design on the FPGA to deviate from its normal behavior. Space missions cannot afford to allow important data processing applications to become corrupted by these radiation upsets.

Configuration scrubbing is an upset mitigation technique that detects and corrects upsets in an FPGA's configuration memory. A configuration scrubber periodically monitors an FPGA's configuration memory, utilizing mechanisms such as Error Correction Codes (ECCs), Cyclic Redundancy Checks (CRCs), a protected golden file, and partial reconfiguration to detect and correct upset memory bits. This work presents improved Xilinx 7-Series configuration scrubbing architectures that achieve minimal hardware footprints, competitive performance metrics, and robust detection and correction capabilities. The two principal scrubbing architectures presented in this work are the readback and hybrid scrubbers, which detect and correct Single Bit Upsets (SBUs) and Multi-Bit Upsets (MBUs). Harnessing the performance advantages granted by the 7-Series internal Readback CRC scan, a hybrid scrubber built in software for the Zynq XZC07020 FPGA has been measured to correct SBUs in 8.024 ms, even-numbered MBUs in 13.38 ms, and odd-numbered MBUs in 21.40 ms. It can also perform a full readback scrub of the entire device in under two seconds. These scrubbing architectures were validated in radiation beam tests, where one of the architectures corrected MBUs as large as sixteen bits in a single frame.
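The readback-scrubbing idea can be illustrated with a toy model in which configuration memory is a table of frames compared against a protected golden copy. The frame-access helpers below stand in for the real configuration interface (ICAP or JTAG readback and partial reconfiguration) and are purely illustrative; a real scrubber would also mask out dynamic memory bits before comparing.

```python
# Highly simplified readback-scrubbing sketch. The "device" here is an in-memory
# dictionary standing in for configuration memory; a real scrubber would read and
# write frames through the device's configuration interface (ICAP or JTAG).
import copy
import random

FRAME_WORDS = 101  # a Xilinx 7-Series configuration frame is 101 32-bit words

# Golden copy of the configuration (frame address -> list of words), plus a
# "live" device image that radiation may corrupt.
golden = {addr: [random.getrandbits(32) for _ in range(FRAME_WORDS)] for addr in range(4)}
device = copy.deepcopy(golden)

def read_frame(addr):
    """Stand-in for configuration readback of one frame."""
    return list(device[addr])

def write_frame(addr, words):
    """Stand-in for partial reconfiguration of one frame."""
    device[addr] = list(words)

def scrub_pass():
    """One full readback scrub: compare every frame against the golden copy and
    rewrite any frame that differs. Returns the number of frames repaired."""
    repaired = 0
    for addr, golden_words in golden.items():
        if read_frame(addr) != golden_words:
            write_frame(addr, golden_words)   # corrects SBUs and MBUs alike
            repaired += 1
    return repaired

# Inject a two-bit upset (an MBU) into one frame, then scrub it away.
device[2][17] ^= 0b11
print("frames repaired:", scrub_pass())       # -> 1
print("device matches golden:", device == golden)
```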
14

Shewell, Justin Reed. "Hearing the Difference: A Computer-Based Speech-Perception Diagnostic Tool for Non-Native Speakers of English." Diss., Brigham Young University, 2004. http://contentdm.lib.byu.edu/ETD/image/etd456.pdf.

Full text
15

Ellsworth, Kevin M. "Understanding Design Requirements for Building Reliable, Space-Based FPGA MGT Systems Based on Radiation Test Results." BYU ScholarsArchive, 2012. https://scholarsarchive.byu.edu/etd/3159.

Full text
Abstract:
Space-based computing applications often demand reliable, high-bandwidth communication systems. FPGAs with Multi-Gigabit Transceivers (MGTs) provide an effective platform for such systems, but it is important that system designers understand the possible susceptibilities MGTs present to the system. Previous work has provided a foundation for understanding the susceptibility of raw FPGA MGTs but has fallen short of testing MGTs as part of a larger system. This work focuses on answering the questions MGT system designers need answered in order to build a reliable space-based MGT system. Two radiation tests were performed with a test architecture built on the Aurora protocol. These tests were specifically designed to discover system susceptibilities and effective mechanisms for upset detection, recovery, and recovery detection. Test results reveal that the Aurora protocol serves as an effective basis for simple point-to-point communication for space-based systems, but that some additional logic is necessary for high reliability. In particular, additional upset detection and recovery mechanisms are necessary, as well as additional status indicators. These additions are minimal, however, and not all are necessary depending on system requirements. The most susceptible part of the MGT system is the set of MGT tile components on the RX data path. Upsets to these components most often result in data corruption only and do not affect system operation or disrupt the communication link. Most other upsets which do disrupt normal system operation can be recovered from automatically by the Aurora protocol with built-in mechanisms. Only 1% of observed events in testing required additional recovery mechanisms not supplied by Aurora. In addition to test data, this work also provides suggestions for system designers based on various system requirements, and a proposed MGT system design based on the Aurora protocol. The proposed system serves as an example to illustrate how test data can be used to guide the system design and determine system availability. With this knowledge, designers are able to build reliable MGT systems for a variety of space-based applications.
16

Harding, Alexander Stanley. "Single Event Mitigation for Aurora Protocol Based MGT FPGA Designs in Space Environments." BYU ScholarsArchive, 2014. https://scholarsarchive.byu.edu/etd/4117.

Full text
Abstract:
This work extends an existing Aurora protocol implementation for high-speed serial I/O between FPGAs to provide greater fault recovery in the presence of high-energy radiation. To improve on the Aurora protocol, additional resets that affect larger portions of the system were used, and detection logic was designed for additional error modes that occurred but were not detected by the Aurora protocol. Radiation testing was performed on the Aurora protocol with the additional mitigation hardware. The test gathered large amounts of data on the various error modes of the Aurora protocol and on how the additional mitigation circuitry affected the system. The test results showed that the addition of the recovery circuitry greatly enhanced the Aurora protocol's ability to recover from errors: the recovery circuit recovered from all but 0.01% of the errors that the Aurora protocol could not handle on its own. The recovery circuit further increased the availability of the transmission link by proactively applying resets at much shorter intervals than used in previous testing. This quick recovery caused the recovery mechanism to fix some errors that might have recovered automatically given enough time. However, the system still showed an increase in performance, and unrecoverable errors were reduced 100x. The estimated unrecoverable error rate of the system is 5.9E-07 in geosynchronous orbit. The bit error rate of the enhanced system was 8.47754E-15, an order of magnitude improvement.
17

Holmberg, Daniel, and Victor Nyberg. "Functional and Security Testing of a Mobile Client-Server Application." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-148710.

Full text
Abstract:
Today's massive usage of smartphones has placed high security demands on application developers. For us to be able to keep using existing and new applications, a process that removes significant security vulnerabilities is essential. In this thesis, we identify six methods for functional and security testing of client-server applications running Android and Python Flask. Regarding functional testing, we implement Espresso testing and RESTful API testing. Regarding the security testing of the system, we not only implement fuzz testing, sniffing, reverse engineering and SQL injection testing on a system developed by a student group in a parallel project, but also discover a significant security vulnerability that directly affects the integrity and reliability of this system. Out of the six identified testing techniques, reverse engineering exposed the vulnerability. In conjunction with this, we verified that the system's functionality works as intended.
18

Baldwin, Andrew Lockett. "A Fault-Tolerant Alternative to Lockstep Triple Modular Redundancy." PDXScholar, 2012. https://pdxscholar.library.pdx.edu/open_access_etds/331.

Full text
Abstract:
Semiconductor manufacturing defects adversely affect yield and reliability. Manufacturers expend vast resources to reduce defects within their processes. As the minimum feature size gets smaller, defects become increasingly difficult to prevent. Defects can change the behavior of a logic circuit, resulting in a fault. Manufacturers and designers may improve yield, reliability, and profitability by using design techniques that make products robust even in the presence of faults. Triple modular redundancy (TMR) is a fault-tolerant technique commonly used to mask faults by voting on the outcomes from three processing elements (PEs). TMR is effective at masking errors as long as no more than a single processing element is faulty. Time distributed voting (TDV) is proposed as an active fault-tolerant technique. TDV addresses the shortcomings of TMR in the presence of multiple faulty processing elements. A faulty PE may not be incorrect 100% of the time; when a faulty element generates correct results, a majority is formed with the healthy PE. TDV observes voting outcomes over time to make a statistical decision about whether a PE is healthy or faulty. In simulation, fault coverage is extended to 98.6% of multiple-faulty-PE cases. As an active fault-tolerant technique, TDV identifies faulty PEs so that actions may be taken to replace or disable them in the system. TDV may provide a positive impact to semiconductor manufacturers by improving yield and reliability even as fault frequency increases.
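A minimal sketch of the TDV idea follows, under assumed window and threshold values (the thesis's actual statistical decision rule is not reproduced here): each output is majority-voted as in TMR, while per-PE disagreement counts accumulated over a window are used to declare a PE healthy or faulty.

```python
# Illustrative sketch of time distributed voting (TDV) over three processing
# elements (PEs): each output is majority-voted as in TMR, while per-PE
# disagreement counts accumulated over a window drive a health verdict.
# The window length and fault threshold are illustrative assumptions.
import random
from collections import Counter

class TDVVoter:
    def __init__(self, window=1000, fault_threshold=0.05):
        self.window = window
        self.fault_threshold = fault_threshold
        self.samples = 0
        self.disagreements = [0, 0, 0]

    def vote(self, outputs):
        """Return the majority value and, once per window, a per-PE health verdict."""
        majority, _ = Counter(outputs).most_common(1)[0]
        self.samples += 1
        for i, value in enumerate(outputs):
            if value != majority:
                self.disagreements[i] += 1
        verdict = None
        if self.samples == self.window:
            verdict = ["faulty" if d / self.samples > self.fault_threshold else "healthy"
                       for d in self.disagreements]
            self.samples = 0
            self.disagreements = [0, 0, 0]
        return majority, verdict

# Example: PE 2 produces a wrong value about 10% of the time.
voter = TDVVoter()
for _ in range(1000):
    correct = random.randint(0, 255)
    outputs = [correct, correct, correct ^ 0xFF if random.random() < 0.1 else correct]
    value, verdict = voter.vote(outputs)
print("health verdict:", verdict)   # expected: ['healthy', 'healthy', 'faulty']
```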
19

Perkins, Andrew Eugene. "Investigation and Prediction of Solder Joint Reliability for Ceramic Area Array Packages under Thermal Cycling, Power Cycling, and Vibration Environments." Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/14518.

Full text
Abstract:
Microelectronic systems are subjected to thermal cycling, power cycling, and vibration environments in various applications. These environments, whether applied sequentially or simultaneously, affect the solder joint reliability. Literature is scarce on predicting solder joint fatigue failure under such multiple loading environments. This thesis aims to develop a unified modeling methodology to study the reliability of electronic packages subjected to thermal cycling, power cycling, and vibration loading conditions. Such a modeling methodology is comprised of an enriched material model to accommodate time-, temperature-, and direction-dependent behavior of various materials in the assembly, and at the same time, will have a geometry model that can accommodate thermal- and power-cycling induced low-cycle fatigue damage mechanism as well as vibration-induced high-cycle fatigue damage mechanism. The developed modeling methodology is applied to study the reliability characteristics of ceramic area array electronic packages with lead-based solder interconnections. In particular, this thesis aims to study the reliability of such solder interconnections under thermal, power, and vibration conditions individually, and validate the model against these conditions using appropriate experimental data either from in-house experiments or existing literature. Once validated, this thesis also aims to perform a design of simulations study to understand the effect of various materials, geometry, and thermal parameters on solder joint reliability of ceramic ball grid array and ceramic column grid array packages, and use such a study to develop universal polynomial predictive equations for solder joint reliability. The thesis also aims to employ the unified modeling methodology to develop new understanding of the acceleration factor relationship between power cycling and thermal cycling. Finally, this thesis plans to use the unified modeling methodology to study solder joint reliability under the sequential application of thermal cycling and vibration loading conditions, and to validate the modeling results with first-of-its-kind experimental data. A nonlinear cumulative damage law is developed to account for the nonlinearity and effect of sequence loading under thermal cycling, power cycling, and vibration loading.
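For context, the conventional baseline that a cumulative damage law of this kind generalises is Miner's linear rule, where n_i is the number of cycles applied under loading condition i and N_i is the number of cycles to failure under that condition alone. The second expression below is only a generic illustration of a nonlinear, sequence-dependent form, not the specific law derived in this thesis.

```latex
% Linear (Miner) cumulative damage: failure is predicted when D reaches 1.
D = \sum_{i} \frac{n_i}{N_i} \ge 1
% A generic nonlinear, sequence-dependent extension (illustrative only):
D = \sum_{i} \left( \frac{n_i}{N_i} \right)^{\alpha_i}, \qquad
\alpha_i = f(\text{loading type and order of application})
```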
20

Bohman, Matthew Kendall. "Compiler-Assisted Software Fault Tolerance for Microcontrollers." BYU ScholarsArchive, 2018. https://scholarsarchive.byu.edu/etd/6724.

Full text
Abstract:
Commercial off-the-shelf (COTS) microcontrollers can be useful for non-critical processing on spaceborne platforms. Many of these microprocessors are inexpensive and consume little power. However, the software running on these processors is vulnerable to radiation upsets, which can cause unpredictable program execution or corrupt data. Space missions cannot allow these errors to interrupt functionality or destroy gathered data. As a result, several techniques have been developed to reduce the effect of these upsets. Some proposed techniques involve altering the processor hardware, which is impossible for a COTS device. Alternately, the software running on the microcontroller can be modified to detect or correct data corruption. There have been several proposed approaches for software mitigation. Some take advantage of advanced architectural features, others modify software by hand, and still others focus their techniques on specific microarchitectures. However, these approaches do not consider the limited resources of microcontrollers and are difficult to use across multiple platforms. This thesis explores fully automated software-based mitigation to improve the reliability of microcontrollers and microcontroller software in a high radiation environment. Several difficulties associated with automating software protection in the compilation step are also discussed. Previous mitigation techniques are examined, resulting in the creation of COAST (COmpiler-Assisted Software fault Tolerance), a tool that automatically applies software protection techniques to user code. Hardened code has been verified by a fault injection campaign; the mean work to failure increased, on average, by 21.6x. When tested in a neutron beam, the neutron cross sections of programs decreased by an average of 23x, and the average mean work to failure increased by 5.7x.
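One of the protections such a tool can insert automatically is duplication with compare (DWC). The sketch below shows the idea by hand, in Python and at the function level rather than at the compiler's intermediate representation, so it is an illustration of the transformation rather than COAST's implementation.

```python
# Hand-written illustration of duplication with compare (DWC), the kind of
# protection a compiler-assisted tool can insert automatically. Python sketch for
# exposition only; real targets are C programs running on microcontrollers.
class DataCorruptionDetected(Exception):
    pass

def dwc(func):
    """Run func twice and compare the results; a mismatch signals a detected upset."""
    def protected(*args, **kwargs):
        first = func(*args, **kwargs)
        second = func(*args, **kwargs)
        if first != second:
            # A deployed system would trigger a retry, reset, or notification here.
            raise DataCorruptionDetected(f"{func.__name__}: {first} != {second}")
        return first
    return protected

@dwc
def checksum(samples):
    return sum(samples) & 0xFFFF

print(checksum([10, 20, 30, 40]))
```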
21

MOURA, JOAO A. "Desenvolvimento e construção de sistema automatizados para controle de qualidade na produção de sementes de iodo-125." reponame:Repositório Institucional do IPEN, 2015. http://repositorio.ipen.br:8080/xmlui/handle/123456789/26454.

Full text
Abstract:
Thesis (Doctorate in Nuclear Technology)
IPEN/T
Instituto de Pesquisas Energeticas e Nucleares - IPEN-CNEN/SP
22

Keller, Andrew Mark. "Using On-Chip Error Detection to Estimate FPGA Design Sensitivity to Configuration Upsets." BYU ScholarsArchive, 2017. https://scholarsarchive.byu.edu/etd/6302.

Full text
Abstract:
SRAM-based FPGAs provide valuable computation resources and reconfigurability; however, ionizing radiation can cause designs operating on these devices to fail. The sensitivity of an FPGA design to configuration upsets, or its SEU sensitivity, is an indication of a design's failure rate. SEU mitigation techniques can reduce the SEU sensitivity of FPGA designs in harsh radiation environments. The reliability benefits of these techniques must be determined before they can be used in mission-critical applications, and can be determined by comparing the SEU sensitivity of an FPGA design with and without these techniques applied to it. Many approaches can be taken to evaluate the SEU sensitivity of an FPGA design. This work describes a low-cost, easier-to-implement approach for evaluating the SEU sensitivity of an FPGA design. This approach uses additional logic resources on the same FPGA as the design under test to determine when the design has failed, or deviated from its specified behavior. Three SEU mitigation techniques were evaluated using this approach: triple modular redundancy (TMR), configuration scrubbing, and user-memory scrubbing. Significant reduction in SEU sensitivity is demonstrated through fault injection and radiation testing. Two LEON3 processors operating in lockstep are compared against each other using on-chip error detection logic on the same FPGA. The design's SEU sensitivity is reduced by 27x when TMR and configuration scrubbing are applied, and by approximately 50x when TMR, configuration scrubbing, and user-memory scrubbing are applied together. Using this approach, an SEU sensitivity comparison is made of designs implemented on both an Altera Stratix V FPGA and a Xilinx Kintex 7 FPGA. Several instances of a finite state machine are compared against each other and against a set of golden output vectors, all on the same FPGA. Instances of an AES cryptography core are chained together and the outputs of two chains are compared using on-chip error detection. Fault injection and neutron radiation testing reveal several similarities between the two FPGA architectures. SEU mitigation techniques reduce the SEU sensitivity of the two designs by between 4x and 728x. Protecting the on-chip functional error detection logic with TMR is compared against protecting it with duplication with compare (DWC). Fault injection results suggest that it is more favorable to protect on-chip functional error detection logic with DWC than with TMR for error detection.
23

Gruwell, Ammon Bradley. "High-Speed Programmable FPGA Configuration Memory Access Using JTAG." BYU ScholarsArchive, 2017. https://scholarsarchive.byu.edu/etd/6321.

Full text
Abstract:
Over the past couple of decades Field Programmable Gate Arrays (FPGAs) have become increasingly useful in a variety of domains. This is due to their low cost and flexibility compared to custom ASICs. This increasing interest in FPGAs has driven the need for tools that both qualify and improve the reliability of FPGAs for applications where the reconfigurability of FPGAs makes them vulnerable to radiation upsets such as in aerospace environments. Such tools ideally work with a wide variety of devices, are highly programmable but simple to use, and perform tasks at relatively high speeds. Of the various FPGA configuration interfaces available, the Joint Test Action Group (JTAG) standard for serial communication is the most universally compatible interface due to its use for verifying integrated circuits and testing printed circuit board connectivity. This universality makes it a good interface for tools seeking to access FPGA configuration memory. This thesis introduces a new tool architecture for high-speed, programmable JTAG access to FPGA configuration memory. This tool, called the JTAG Configuration Manager (JCM), is made up of a large C++ software library that runs on an embedded micro-processor coupled with a hardware JTAG controller module implemented in programmable logic. The JCM software library allows for the development of custom JTAG communication of any kind, although this thesis focuses on applications related to FPGA reliability. The JCM hardware controller module allows these software-generated JTAG sequences to be streamed out at very high speeds. Together the software and hardware provide the high-speed and programmability that is important for many JTAG applications.
24

Chakraborty, Ashis Kumar. "Software Quality Testing And Remedies." Thesis, 1996. http://etd.iisc.ernet.in/handle/2005/1887.

Full text
25

"Coverage-based testing strategies and reliability modeling for fault-tolerant software systems." Thesis, 2006. http://library.cuhk.edu.hk/record=b6074302.

Full text
Abstract:
Software permeates our modern society, and its complexity and criticality are ever increasing. Thus the capability to tolerate software faults, particularly for critical applications, is evident. While fault-tolerant software is seen as a necessity, it also remains a controversial technique, and there is a lack of conclusive assessment of its effectiveness.
This thesis aims at providing a quantitative assessment scheme for a comprehensive evaluation of fault-tolerant software, including reliability model comparisons and trade-off studies with software testing techniques. First of all, we propose a comprehensive procedure for assessing fault-tolerant software for software reliability engineering, which is composed of four tasks: modeling, experimentation, evaluation and economics. Our ultimate objective is to construct a systematic approach to predicting the achievable reliability based on the software architecture and testing evidence, through an investigation of testing and modeling techniques for fault-tolerant software.
Motivated by the lack of real-world project data for investigating software testing and fault tolerance techniques together, we conduct a real-world project and engage multiple programming teams to independently develop program versions based on an industry-scale avionics application. Detailed experimentation is conducted to study the nature, source, type, detectability, and effect of the faults uncovered in the program versions, and to learn the relationship among these faults and the correlation of their resulting failures. Coverage-based testing as well as mutation testing techniques are adopted to reproduce mutants with real faults, which facilitates the investigation of the effectiveness of data flow coverage, mutation coverage, and fault coverage for design diversity.
Then, based on the preliminary experimental data, further experimentation and detailed analyses of the correlations among these faults and the relation to their resulting failures are carried out. The results are further applied to current reliability modeling techniques for fault-tolerant software to examine their effectiveness and accuracy.
Next, we investigate the effect of code coverage on fault detection, which is the underlying intuition of coverage-based testing strategies. From our experimental data, we find that code coverage is a moderate indicator of the capability of fault detection on the whole test set, but the effect of code coverage on fault detection varies under different testing profiles. The correlation between the two measures is high with exceptional test cases, but weak in normal testing. Moreover, our study shows that code coverage can be used as a good filter to reduce the size of the effective test set, although this is more evident for exceptional test cases.
Furthermore, to investigate some "variants" as well as "invariants" of fault-tolerant software, we perform an empirical investigation evaluating reliability features by a comprehensive comparison between two projects: our project and the NASA 4-University project. Based on the same specification for program development, these two projects exhibit some common as well as different features. The testing results of two comprehensive operational testing procedures, involving hundreds of thousands of test cases, are collected and compared. Similar as well as dissimilar faults are observed and analyzed, indicating common problems related to the same application in both projects. The small number of coincident failures in the two projects nevertheless provides supportive evidence for N-version programming, while the observed reliability improvement implies some trends in software development over the past twenty years.
Finally, we formulate the relationship between code coverage and fault detection. Although our two current models are in simple mathematical forms, they can predict the percentage of faults detected from the code coverage achieved by a certain test set. We further incorporate such formulations into traditional reliability growth models, not only for fault-tolerant software but also for general software systems. Our empirical evaluations show that our new reliability model can achieve more accurate reliability assessment than the traditional Non-homogeneous Poisson model.
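One simple way to picture how coverage can be folded into a reliability growth model is to drive a classic Goel-Okumoto NHPP mean value function, m(x) = a(1 - e^(-bx)), by the coverage reached at each observation instead of by testing time. The snippet below fits both variants to invented failure data; it is a generic illustration, not the model proposed in the thesis.

```python
# Generic sketch: a Goel-Okumoto NHPP mean value function m(x) = a * (1 - exp(-b*x))
# fitted to cumulative failure counts, once against testing time and once against
# the code coverage reached at each observation. All data are invented and the
# coverage-driven variant is illustrative, not the thesis's actual formulation.
import numpy as np
from scipy.optimize import curve_fit

def goel_okumoto(x, a, b):
    # Expected cumulative number of failures after "exposure" x (time or coverage).
    return a * (1.0 - np.exp(-b * x))

hours = np.array([10, 20, 40, 80, 160, 320], dtype=float)
coverage = np.array([0.35, 0.50, 0.62, 0.73, 0.82, 0.88])  # block coverage reached
failures = np.array([4, 7, 11, 15, 18, 20], dtype=float)   # cumulative failures observed

time_params, _ = curve_fit(goel_okumoto, hours, failures, p0=[25.0, 0.01])
cov_params, _ = curve_fit(goel_okumoto, coverage, failures, p0=[25.0, 2.0])

print("time-driven fit:     a=%.1f, b=%.4f" % tuple(time_params))
print("coverage-driven fit: a=%.1f, b=%.4f" % tuple(cov_params))
# Expected residual faults under each fitted model:
print("residual faults (time model):    ", time_params[0] - failures[-1])
print("residual faults (coverage model):", cov_params[0] - failures[-1])
```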
Cai Xia.
"September 2006."
Adviser: Rung Tsong Michael Lyu.
Source: Dissertation Abstracts International, Volume: 68-03, Section: B, page: 1715.
Thesis (Ph.D.)--Chinese University of Hong Kong, 2006.
Includes bibliographical references (p. 165-181).
Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Abstracts in English and Chinese.
School code: 1307.
26

Kumar, Rakesh. "The Mystery of the Failing Jobs: Insights from Operational Data from Two University-Wide Computing Systems." Thesis, 2019.

Find full text
Abstract:
Node downtime and failed jobs in a computing cluster translate into wasted resources and user dissatisfaction. Therefore, understanding why nodes and jobs fail in HPC clusters is essential. This paper provides analyses of node and job failures in two university-wide computing clusters at two Tier I US research universities. We analyzed approximately 3.0M job execution records from System A and 2.2M from System B, with data sources comprising accounting logs, resource usage for all primary local and remote resources (memory, IO, network), and node failure data. We observe different kinds of correlations between failures and resource usage, and propose a job failure prediction model to trigger event-driven checkpointing and avoid wasted work. We also provide generalizable insights for cluster management to improve reliability, such as that for some execution environments local contention dominates, while for others system-wide contention dominates.
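A job failure predictor of the kind mentioned above can be sketched as a classifier over per-job resource usage features whose predicted failure probability triggers a checkpoint. The features, synthetic data, model choice (logistic regression) and threshold below are all assumptions for illustration, not the paper's actual model.

```python
# Illustrative sketch of a job-failure prediction model used to trigger
# event-driven checkpointing. Features, synthetic data, and threshold are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic training data: [memory_utilisation, io_wait_fraction, network_retries]
n = 2000
X = rng.random((n, 3))
# Jobs with high memory utilisation and many network retries fail more often.
p_fail = 1 / (1 + np.exp(-(4 * X[:, 0] + 3 * X[:, 2] - 4)))
y = rng.random(n) < p_fail

model = LogisticRegression().fit(X, y)

def maybe_checkpoint(current_usage, threshold=0.7):
    """Trigger a checkpoint when the predicted failure probability is high."""
    prob = model.predict_proba(np.asarray(current_usage).reshape(1, -1))[0, 1]
    if prob > threshold:
        print(f"predicted failure probability {prob:.2f}: checkpointing now")
    return prob

maybe_checkpoint([0.95, 0.2, 0.9])   # heavily contended job
maybe_checkpoint([0.30, 0.1, 0.1])   # lightly loaded job
```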