Dissertations / Theses on the topic 'Form Error Evaluation'


Consult the top 50 dissertations / theses for your research on the topic 'Form Error Evaluation.'


You can also download the full text of each academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

WANG, ZHUO. "MODELING AND SAMPLING OF WORK PIECE PROFILES FOR FORM ERROR EVALUATION." University of Cincinnati / OhioLINK, 2000. http://rave.ohiolink.edu/etdc/view?acc_num=ucin975356333.

2

SARAVANAN, SHANKAR. "EVALUATION OF SPHERICITY USING MODIFIED SEQUENTIAL LINEAR PROGRAMMING." University of Cincinnati / OhioLINK, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1132343760.

3

YUSUPOV, JAMBUL. "On the assessment of the form error using Probabilistic Approach based on Symmetry Classes." Doctoral thesis, Politecnico di Torino, 2015. http://hdl.handle.net/11583/2588833.

Abstract:
Nowadays, a Coordinate Measuring Machine (CMM) is one of the essential tools used in the product verification process. Measurement points provided by a CMM are conveyed to the CMM data analysis software. As a matter of fact, the software can contribute significantly to the measurement uncertainty, which is very important from the metrological point of view. Mainly, this is related to the association algorithm used in the software, which is intended to find an optimum fitting solution necessary to ensure that the calculations performed satisfy functional requirements. There are various association methods which can be used in these algorithms (such as Least squares, Minimum zone, etc.); however, the current standards do not specify which of them has to be adopted. Moreover, there are different techniques for the evaluation of uncertainty (such as experimental resampling, Monte Carlo simulations, theoretical approaches based on gradients, etc.), which can be used with association methods for further processing. The uncertainty evaluated by a combination of an association method and an uncertainty evaluation technique is termed implementation uncertainty, which is in turn a contributor to measurement uncertainty according to the Geometrical Product Specification and Verification (GPS) project. This work focuses on the analysis of the impact of the association method on the implementation uncertainty, assuming that all the other factors (such as the sampling strategy, the measurement equipment parameters, etc.) are fixed and chosen according to standards, within the GPS framework. The focus of the study is the Probabilistic method (PM), which is based on the classification of continuous subgroups of rigid motions (a mathematical principle of the GPS language) and on non-parametric density estimation techniques. The method was essentially developed to decompose complex surfaces and has shown promise in shape partitioning; however, it comprises geometric fitting procedures, which are considered in this work in more detail. The methodology of the research is based on the comparison of PM with another statistical association method, namely the Least squares method (LS), in terms of parameter estimation and uncertainty evaluation. For the uncertainty evaluation, two different techniques, the Gradient-based and Bootstrap methods, are used in combination with both association methods, PM and LS. The comparison is performed through the analysis of the parameter estimation results and through analysis of variance, with the variances of the estimated parameters and of the estimated form error as the response variables. The case study is restricted to the evaluation of the roundness geometric tolerance. Although the measurement process was simulated, the methodology can be applied to real measurement data. The results obtained in this work are of interest from both theoretical and practical points of view.
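As a rough illustration of the ideas discussed in this abstract, the sketch below fits a least-squares reference circle to simulated CMM roundness points and uses bootstrap resampling of the points to gauge the spread of the estimated form error. It is only a minimal sketch under assumed data and names; the Kåsa-style algebraic fit and the NumPy-based implementation are the editor's assumptions, not the method developed in the thesis.

```python
import numpy as np

def ls_circle(x, y):
    """Algebraic least-squares (Kasa) circle fit: returns centre (xc, yc) and radius."""
    A = np.column_stack([x, y, np.ones_like(x)])
    b = x**2 + y**2
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    xc, yc = sol[0] / 2.0, sol[1] / 2.0
    r = np.sqrt(sol[2] + xc**2 + yc**2)
    return xc, yc, r

def roundness_error(x, y):
    """Out-of-roundness: peak-to-valley of radial deviations about the LS centre."""
    xc, yc, _ = ls_circle(x, y)
    radii = np.hypot(x - xc, y - yc)
    return radii.max() - radii.min()

# Simulated CMM points on a nominally round profile with a 3-lobed deviation and noise (mm)
rng = np.random.default_rng(0)
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
r_true = 10.0 + 0.005 * np.sin(3 * theta)
x = r_true * np.cos(theta) + rng.normal(0, 0.001, theta.size)
y = r_true * np.sin(theta) + rng.normal(0, 0.001, theta.size)

# Bootstrap spread of the estimated form error as a simple uncertainty proxy
estimates = []
for _ in range(500):
    idx = rng.integers(0, theta.size, theta.size)
    estimates.append(roundness_error(x[idx], y[idx]))
print(f"LS roundness error: {roundness_error(x, y):.4f} mm, "
      f"bootstrap std: {np.std(estimates):.4f} mm")
```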
4

PANCIANI, GIUSY DONATELLA. "Intelligent procedures for workpiece inspection: the role of uncertainties in design, production and measurement phases." Doctoral thesis, Politecnico di Torino, 2013. http://hdl.handle.net/11583/2543345.

Abstract:
The actors involved in the manufacturing process need a common technical language from ideation to verification. The Geometrical Product Specification and Verification standards, defined by ISO/TC 213, lay the groundwork for a new operator-based language (ISO/TS 17450-2), able to manage a verification coherent with specifications and, therefore, to reduce the total uncertainty arising during the product lifecycle. Manufactured parts are necessarily affected by size and form errors, whose control relies on the analysis of the manufacturing technological signature and the evaluation of the associated uncertainty. Since the actual shape of a workpiece cannot be known before it is measured, five roundness profiles affected by systematic deviation have been simulated. The impact of simplified verification operators, the relative uncertainty contribution to measurement uncertainty and the ability to properly assess roundness deviation have been evaluated. Then, since the literature proposes different approaches for the evaluation of implementation uncertainty, but a standardized method has not yet been established, an analysis of the various elements to be considered when choosing a specific approach has been carried out. The most common manufacturing signatures have been considered, together with the number of points necessary for a reliable estimation of implementation uncertainty [2]. Finally, in the case of flatness, a statistical predictive model combined with adaptive sampling strategies has been implemented, as the best “simplified verification operator”, in order to obtain accuracy in the estimation of the flatness error of a clamp for an industrial air cushion guide [1]. Only the most relevant points have been measured, instead of inspecting the whole surface, as required by the “perfect verification operator” defined in the ISO standards. This substantial reduction is due to the effectiveness of the adaptive methods in the presence of technological signatures.
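For orientation only, the sketch below shows the simplest possible flatness evaluation: a least-squares reference plane fitted to measured points, with the peak-to-valley residual taken as the flatness error. The statistical predictive model and adaptive sampling strategy described in the abstract are not reproduced; the simulated data, function names and NumPy usage are the editor's assumptions.

```python
import numpy as np

def flatness_ls(points):
    """Least-squares reference plane z = a*x + b*y + c; flatness as peak-to-valley residual.

    Vertical residuals are used, a common small-tilt approximation of the deviation
    measured along the plane normal.
    """
    x, y, z = points.T
    A = np.column_stack([x, y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)
    residuals = z - (a * x + b * y + c)
    return residuals.max() - residuals.min()

# Simulated measurement points (mm) on a nominally flat surface with a mild signature
rng = np.random.default_rng(1)
xy = rng.uniform(0, 50, size=(300, 2))
z = 0.002 * np.sin(xy[:, 0] / 8) + rng.normal(0, 0.0005, 300)
points = np.column_stack([xy, z])
print(f"estimated flatness error: {flatness_ls(points):.4f} mm")
```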
5

Zhang, Zhi. "Error-rate evaluation and optimization for space-time codes." Click to view the E-thesis via HKUTO, 2007. http://sunzi.lib.hku.hk/hkuto/record/B39634218.

6

Zhang, Zhi, and 張治. "Error-rate evaluation and optimization for space-time codes." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2007. http://hub.hku.hk/bib/B39634218.

7

Dawson, Phillip Eng. "Evaluation of human error probabilities for post-initiating events." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/42339.

Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Nuclear Science and Engineering, 2007.
Includes bibliographical references (leaves 84-85).
The United States Nuclear Regulatory Commission is responsible for the safe operation of the United States nuclear power plant fleet, and human reliability analysis forms an important portion of the probabilistic risk assessment that demonstrates the safety of sites. The treatment of post-initiating event human error probabilities by three human reliability analysis methods is compared to determine the strengths and weaknesses of the methodologies and to identify how they may best be used. A Technique for Human Event Analysis (ATHEANA) has a unique approach because it searches and screens for deviation scenarios in addition to the nominal failure cases that most methodologies concentrate on. The quantification method of ATHEANA also differs from most methods because the quantification is dependent on expert elicitation to produce data instead of relying on a database or set of nominal values. The Standardized Plant Analysis Risk Human Reliability Analysis (SPAR-H) method uses eight performance shaping factors to modify nominal values in order to represent the quantification of the specifics of a situation. The Electric Power Research Institute Human Reliability Analysis Calculator is a software package that uses a combination of five methods to calculate human error probabilities. Each model is explained before comparing aspects such as the scope, treatment of time available, performance shaping factors, recovery and documentation. Recommendations for future work include creating a database of values based on the nuclear data and emphasizing the documentation of human reliability analysis methods in the future to improve traceability of the process.
by Phillip E. Dawson.
S.M.
8

Hassanien, Mohamed A. M. "Error rate performance metrics for digital communications systems." Thesis, Swansea University, 2011. https://cronfa.swan.ac.uk/Record/cronfa42497.

Abstract:
In this thesis, novel error rate performance metrics and transmission solutions are investigated for delay limited communication systems and for co-channel interference scenarios. The following four research problems in particular were considered. The first research problem is devoted to the analysis of the higher order ergodic moments of error rates for digital communication systems with time-unlimited ergodic transmissions; the statistics of the conditional error rates of digital modulations over fading channels are considered. The probability density function and the higher order moments of the conditional error rates are obtained. Non-monotonic behavior of the moments of the conditional bit error rates versus some channel model parameters is observed for a Ricean distributed channel fading amplitude at the detector input. Properties and possible applications of the second central moments are proposed. The second research problem is the non-ergodic error rate analysis and signaling design for communication systems processing a single finite length received sequence. A framework to analyze the error rate properties of non-ergodic transmissions is established. Bayesian credible intervals are used to estimate the instantaneous bit error rate. A novel degree of ergodicity measure is introduced using the credible interval estimates to quantify the level of ergodicity of the received sequence with respect to the instantaneous bit error rate and to describe the transition of the data detector from the non-ergodic to ergodic zone of operation. The developed non-ergodic analysis is used to define adaptive forward error correction control and adaptive power control policies that can guarantee, with a given probability, the worst case instantaneous bit error rate performance of the detector in its transition from the non-ergodic to ergodic zone of operation. In the third research problem, novel retransmission schemes are developed for delay-limited retransmissions. The proposed scheme relies on a reliable reverse link for error-free feedback message delivery. Unlike conventional automatic repeat request schemes, the proposed scheme does not require the use of cyclic redundancy check bits for error detection. In the proposed scheme, random permutations are exploited to locate the bits for retransmission in a predefined window within the packet. The retransmitted bits are combined using maximal-ratio combining. The complexity-performance trade-offs of the proposed scheme are investigated by mathematical analysis as well as computer simulations. The bit error rate of the proposed scheme is independent of the packet length, while the throughput is dependent on the packet length. Three practical techniques suitable for implementation are proposed. The performance of the proposed retransmission scheme was compared to the block repetition code corresponding to a conventional ARQ retransmission strategy. It was shown that, for the same number of retransmissions and the same packet length, the proposed scheme always outperforms such repetition coding, and, in some scenarios, the performance improvement is found to be significant. Most of the analysis has been done for the case of the AWGN channel; however, the case of a slow Rayleigh block fading channel was also investigated. The proposed scheme appears to provide the throughput and BER reduction gains only for medium to large SNR values.
Finally, the last research problem investigates the link error rate performance with a single co-channel interference. A novel metric to assess whether the standard Gaussian approximation of a single interferer underestimates or overestimates the link bit error rate is derived. This metric is a function of the interference channel fading statistics. However, it is otherwise independent of the statistics of the desired signal. The key step in derivation of the proposed metric is to construct the standard Gaussian approximation of the interference by a non-linear transformation. A closed form expression of the metric is obtained for a Nakagami distributed interference fading amplitude. Numerical results for the case of Nakagami and lognormal distributed interference fading amplitude confirm the validity of the proposed metric. The higher moments, interval estimators and non-linear transformations were investigated to evaluate the error rate performance for different wireless communication scenarios. The synchronization channel is also used jointly with the communication link to form a transmission diversity and subsequently, to improve the error rate performance.
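As a concrete aside on the credible-interval idea mentioned in the second research problem, the sketch below computes an equal-tailed Bayesian credible interval for the bit error rate of a finite received sequence using a Beta posterior. The Beta-Binomial construction and the SciPy call are the editor's assumptions and are not taken from the thesis.

```python
from scipy import stats

def ber_credible_interval(n_bits, n_errors, level=0.95, prior=(1.0, 1.0)):
    """Equal-tailed Bayesian credible interval for the bit error rate.

    With a Beta(a, b) prior and k errors observed in n bits, the posterior is
    Beta(a + k, b + n - k); the default prior here is uniform.
    """
    a, b = prior
    posterior = stats.beta(a + n_errors, b + n_bits - n_errors)
    lo, hi = posterior.ppf([(1 - level) / 2, (1 + level) / 2])
    return lo, hi

# Example: 12 bit errors observed in a 10,000-bit sequence
print(ber_credible_interval(10_000, 12))
```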
9

Rymer, J. W. "ERROR DETECTION AND CORRECTION -- AN EMPIRICAL METHOD FOR EVALUATING TECHNIQUES." International Foundation for Telemetering, 2000. http://hdl.handle.net/10150/606802.

Abstract:
International Telemetering Conference Proceedings / October 23-26, 2000 / Town & Country Hotel and Conference Center, San Diego, California
This paper describes a method for evaluating error correction techniques for applicability to the flight testing of aircraft. No statistical or math assumptions about the channel or sources of error are used. An empirical method is shown which allows direct “with and without” comparative evaluation of correction techniques. A method was developed to extract error sequences from actual test data independent of the source of the dropouts. Hardware was built to allow a stored error sequence to be repetitively applied to test data. Results are shown for error sequences extracted from a variety of actual test data. The effectiveness of Reed-Solomon (R-S) encoding and interleaving is shown. Test bed hardware configuration is described. Criteria are suggested for worthwhile correction techniques and suggestions are made for future investigation.
10

Wennbom, Marika. "Impact of error : Implementation and evaluation of a spatial model for analysing landscape configuration." Thesis, Stockholms universitet, Institutionen för naturgeografi och kvartärgeologi (INK), 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-79214.

Abstract:
Quality and error assessment is an essential part of spatial analysis which with the increasing amount of applications resulting from today’s extensive access to spatial data, such as satellite imagery and computer power is extra important to address. This study evaluates the impact of input errors associated with satellite sensor noise for a spatial method aimed at characterising aspects of landscapes associated with the historical village structure, called the Hybrid Characterisation Model (HCM), that was developed as a tool to monitor sub goals of the Swedish Environmental Goal “A varied agricultural landscape”. The method and error simulation method employed for generating random errors in the input data, is implemented and automated as a Python script enabling easy iteration of the procedure. The HCM is evaluated qualitatively (by visual analysis) and quantitatively comparing kappa index values between the outputs affected by error. Comparing the result of the qualitative and quantitative evaluation shows that the kappa index is an applicable measurement of quality for the HCM. The qualitative analysis compares impact of error for two different scales, the village scale and the landscape scale, and shows that the HCM is performing well on the landscape scale for up to 30% error and on the village scale for up to 10% and shows that the impact of error differs depending on the shape of the analysed feature. The Python script produced in this study could be further developed and modified to evaluate the HCM for other aspects of input error, such as classification errors, although for such studies to be motivated the potential errors associated with the model and its parameters must first be further evaluated.
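Since the quantitative evaluation in this abstract relies on kappa index values between error-affected model outputs, a small sketch of how such an agreement index can be computed between two classified rasters is given below. The NumPy implementation and the toy data are the editor's assumptions, not the author's Python script.

```python
import numpy as np

def cohens_kappa(map_a, map_b):
    """Cohen's kappa agreement between two equally shaped categorical rasters."""
    a = np.asarray(map_a).ravel()
    b = np.asarray(map_b).ravel()
    classes = np.union1d(a, b)
    n = a.size
    # Confusion matrix between the reference output and the error-affected output
    conf = np.array([[np.sum((a == ca) & (b == cb)) for cb in classes] for ca in classes])
    p_obs = np.trace(conf) / n                                    # observed agreement
    p_exp = np.sum(conf.sum(axis=0) * conf.sum(axis=1)) / n**2    # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

# Reference classification and a copy degraded by 10% simulated input error
reference = np.random.default_rng(2).integers(0, 3, size=(100, 100))
noisy = reference.copy()
flip = np.random.default_rng(3).random(reference.shape) < 0.10
noisy[flip] = (noisy[flip] + 1) % 3
print(f"kappa = {cohens_kappa(reference, noisy):.3f}")
```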
11

Sharp, Gary David. "Lag length selection for vector error correction models." Thesis, Rhodes University, 2010. http://hdl.handle.net/10962/d1002808.

Abstract:
This thesis investigates the problem of model identification in a Vector Autoregressive framework. The study reviews the existing research and conducts an extensive simulation-based analysis of thirteen information theoretic criteria (IC), one of which is a novel derivation. The simulation exercise considers the evaluation of seven alternative error-restricted vector autoregressive models with four different lag lengths. Alternative sample sizes and parameterisations are also evaluated and compared to results in the existing literature. The results of the comparative analysis provide strong support for the efficiency-based criterion of Akaike, and in particular the selection capability of the novel criterion, referred to as a modified corrected Akaike information criterion, demonstrates useful finite sample properties.
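To make the information-criterion comparison concrete, the sketch below fits a small simulated VAR at several lag lengths and tabulates standard criteria (AIC, BIC, HQIC) with statsmodels. The thesis's modified corrected AIC is its own derivation and is not reproduced here; the simulated process and the statsmodels calls are the editor's assumptions.

```python
import numpy as np
from statsmodels.tsa.api import VAR

# Simulate a small bivariate VAR(2) process to choose a lag length for
rng = np.random.default_rng(4)
T, K = 400, 2
A1 = np.array([[0.5, 0.1], [0.0, 0.4]])
A2 = np.array([[0.2, 0.0], [0.1, 0.1]])
y = np.zeros((T, K))
for t in range(2, T):
    y[t] = A1 @ y[t - 1] + A2 @ y[t - 2] + rng.normal(0, 1, K)

model = VAR(y)
for p in range(1, 6):
    res = model.fit(p)
    print(f"lag {p}: AIC={res.aic:.3f}  BIC={res.bic:.3f}  HQIC={res.hqic:.3f}")

# statsmodels can also tabulate several criteria directly
print(model.select_order(maxlags=8).summary())
```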
12

Lu, Bin. "Energy Usage Evaluation and Condition Monitoring for Electric Machines using Wireless Sensor Networks." Diss., Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/14152.

Abstract:
Energy usage evaluation and condition monitoring for electric machines are important in industry for overall energy savings. Traditionally these functions are realized only for large motors in wired systems formed by communication cables and various types of sensors. The unique characteristics of the wireless sensor networks (WSN) make them the ideal wireless structure for low-cost energy management in industrial plants. This work focuses on developing nonintrusive motor-efficiency-estimation methods, which are essential in the wireless motor-energy-management systems in a WSN architecture that is capable of improving overall energy savings in U.S. industry. This work starts with an investigation of existing motor-efficiency-evaluation methods. Based on the findings, a general approach of developing nonintrusive efficiency-estimation methods is proposed, incorporating sensorless rotor-speed detection, stator-resistance estimation, and loss estimation techniques. Following this approach, two new methods are proposed for estimating the efficiencies of in-service induction motors, using air-gap torque estimation and a modified induction motor equivalent circuit, respectively. The experimental results show that both methods achieve accurate efficiency estimates within ±2-3% errors under normal load conditions, using only a few cycles of input voltages and currents. The analytical results obtained from error analysis agree well with the experimental results. Using the proposed efficiency-estimation methods, a closed-loop motor-energy-management scheme for industrial plants with a WSN architecture is proposed. Besides the energy-usage-evaluation algorithms, this scheme also incorporates various sensorless current-based motor-condition-monitoring algorithms. A uniform data interface is defined to seamlessly integrate these energy-evaluation and condition-monitoring algorithms. Prototype wireless sensor devices are designed and implemented to satisfy the specific needs of motor energy management. A WSN test bed is implemented. The applicability of the proposed scheme is validated from the experimental results using multiple motors with different physical configurations under various load conditions. To demonstrate the validity of the measured and estimated motor efficiencies in the experiments presented in this work, an in-depth error analysis on motor efficiency measurement and estimation is conducted, using maximum error estimation, worst-case error estimation, and realistic error estimation techniques. The conclusions, contributions, and recommendations are summarized at the end.
13

Bates, Alan T. "Abnormal error-processing : evaluation of a potential trait marker for schizophrenia." Thesis, University of Nottingham, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.415787.

14

Presley, Mary R. "On the evaluation of human error probabilities for post-initiating events." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/41274.

Abstract:
Thesis (S.M. and S.B.)--Massachusetts Institute of Technology, Dept. of Nuclear Science and Engineering, 2006.
Includes bibliographical references (p. 109-111).
Quantification of human error probabilities (HEPs) for the purpose of human reliability assessment (HRA) is very complex. Because of this complexity, the state of the art includes a variety of HRA models, each with its own objectives, scope and quantification method. In addition to varying methods of quantification, each model is replete with its own terminology and categorizations, therefore making comparison across models exceedingly difficult. This paper demonstrates the capabilities and limitations of two prominent HRA models: the Electric Power Research Institute (EPRI) HRA Calculator (using the HRC/ORE and Cause Based Decision Tree methods), used widely in industry, and A Technique for Human Error Analysis (ATHEANA), developed by the US Nuclear Regulatory Commission. This demonstration includes a brief description of the two models, a comparison of what they incorporate in HEP quantification, a "translation" of terminologies, and examples of their capabilities via the Halden Task Complexity experiments. Possible ways to incorporate learning from simulator experiments, such as those at Halden, to improve the quantification methods are also addressed. The primary difference between ATHEANA and the EPRI HRA Calculator is in their objectives. EPRI's objective is to provide a method that is not overly resource intensive and can be used by a PRA analyst without significant HRA experience. Consequently, EPRI quantifies HEPs using time reliability curves (TRCs) and cause based decision trees (CBDT). ATHEANA attempts to find contexts where operators are likely to fail without recovery and quantify the associated HEP. This includes finding how operators can further degrade the plant condition while still believing their actions are correct. ATHEANA quantifies HEPs through an expert judgment elicitation process.
ATHEANA and the EPRI Calculator are very similar in the contexts they consider in HEP calculation: both factor the accident sequence context, performance shaping factors (PSFs), and cognitive factors into HEP calculation. However, stemming from the difference in objectives, there is a difference in how deeply into a human action each model probes. ATHEANA employs an HRA team (including an HRA expert, operations personnel and a thermo-hydraulics expert) to examine a broad set of PSFs and contexts. It also expands the accident sequences to include the consequences of a misdiagnosis beyond simple failures in implementing the procedures (what will the operator likely do next given a specific misdiagnosis?). To limit the resource burden, the EPRI Calculator is prescriptive and limits the PSFs and cognitive factors for consideration, thus enhancing consistency among analysts and reducing needed resources. However, CBDT and ATHEANA have the same approach to evaluating the cognitive context. The Halden Task Complexity experiments looked at different factors that would increase the probability of human failures, such as the effects of time pressure/information load and masked events. EPRI and ATHEANA could use the design of the Halden experiments as a model for future simulations because they produced results that showed important differences in crew performance under certain conditions. Both models can also use the Halden experiments and results to sensitize the experts and analysts to the real effects of an error forcing context.
by Mary R. Presley.
S.M. and S.B.
15

Fung, Casey Kin-Chee. "A methodology for the collection and evaluation of software error data /." The Ohio State University, 1985. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487263399022312.

16

Santos, Fernando Fernandes dos. "Reliability evaluation and error mitigation in pedestrian detection algorithms for embedded GPUs." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2017. http://hdl.handle.net/10183/159210.

Abstract:
Pedestrian detection reliability is a fundamental problem for autonomous or aided driving. Methods that use object detection algorithms such as Histogram of Oriented Gradients (HOG) or Convolutional Neural Networks (CNN) are today very popular in automotive applications. Embedded Graphics Processing Units (GPUs) are exploited to perform object detection in a very efficient manner. Unfortunately, the GPU architecture has been shown to be particularly vulnerable to radiation-induced failures. This work presents an experimental evaluation and analytical study of the reliability of two types of object detection algorithms: HOG and CNNs. The aim of this research is not just to quantify but also to qualify the radiation-induced errors on object detection applications executed on embedded GPUs. HOG experimental results were obtained using two different embedded GPU architectures (Tegra and AMD APU), each exposed for about 100 hours to a controlled neutron beam at Los Alamos National Lab (LANL). Precision and Recall metrics are considered to evaluate the error criticality. The reported analysis shows that, while being intrinsically resilient (65% to 85% of output errors only slightly impact detection), HOG experienced some particularly critical errors that could result in undetected pedestrians or unnecessary vehicle stops. This work also evaluates the reliability of two Convolutional Neural Networks for object detection: You Only Look Once (YOLO) and Faster RCNN. Three different GPU architectures were exposed to controlled neutron beams (Kepler, Maxwell, and Pascal), detecting objects in both the Caltech and Visual Object Classes data sets. By analyzing the corrupted neural network output, it is possible to distinguish between tolerable errors and critical errors, i.e., errors that could impact detection. Additionally, extensive GDB-level and architectural-level fault-injection campaigns were performed to identify critical HOG and YOLO procedures. Results show that not all stages of the object detection algorithms are critical to the final classification reliability. Thanks to the fault injection analysis it is possible to identify HOG and Darknet portions that, if hardened, are more likely to increase reliability without introducing unnecessary overhead. The proposed HOG hardening strategy is able to detect up to 70% of errors with a 12% execution time overhead.
17

Eidestedt, Richard, and Stefan Ekberg. "Evaluating forecast accuracy for Error Correction constraints and Intercept Correction." Thesis, Uppsala universitet, Statistiska institutionen, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-194423.

Abstract:
This paper examines the forecast accuracy of an unrestricted Vector Autoregressive (VAR) model for GDP, relative to a comparable Vector Error Correction (VEC) model that recognizes that the data are characterized by co-integration. In addition, an alternative forecast method, Intercept Correction (IC), is considered for further comparison. Recursive out-of-sample forecasts are generated for both models and forecast techniques. The generated forecasts for each model are objectively evaluated by a selection of evaluation measures and equal accuracy tests. The results show that the VEC models consistently outperform the VAR models. Further, IC enhances the forecast accuracy when applied to the VEC model, while there is no such indication when applied to the VAR model. For certain forecast horizons there is a significant difference in forecast ability between the VEC IC model and the VAR model.
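As a small illustration of the kind of out-of-sample evaluation described here, the sketch below compares two hypothetical sets of recursive forecast errors by RMSE and a simple Diebold-Mariano-type equal-accuracy statistic. The specific measures and tests used in the paper are not identified in the abstract, so everything below (including the simplifying assumption of serially uncorrelated loss differentials) is the editor's assumption.

```python
import numpy as np
from scipy import stats

def compare_forecasts(errors_a, errors_b):
    """RMSE comparison plus a simple Diebold-Mariano-type test on squared-error loss.

    For one-step-ahead forecasts the loss differential is treated as serially
    uncorrelated, which is a simplification of the full DM test.
    """
    e_a, e_b = np.asarray(errors_a), np.asarray(errors_b)
    rmse_a = np.sqrt(np.mean(e_a ** 2))
    rmse_b = np.sqrt(np.mean(e_b ** 2))
    d = e_a ** 2 - e_b ** 2                          # loss differential
    dm = d.mean() / np.sqrt(d.var(ddof=1) / d.size)
    p_value = 2 * (1 - stats.norm.cdf(abs(dm)))
    return rmse_a, rmse_b, dm, p_value

# Hypothetical recursive out-of-sample GDP forecast errors from a VAR and a VEC model
rng = np.random.default_rng(5)
var_errors = rng.normal(0, 1.0, 60)
vec_errors = rng.normal(0, 0.8, 60)
print(compare_forecasts(var_errors, vec_errors))
```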
18

wang, yubing. "Modeling and Evaluating Feedback-Based Error Control for Video Transfer." Digital WPI, 2008. https://digitalcommons.wpi.edu/etd-dissertations/397.

Abstract:
"Packet loss can be detrimental to real-time interactive video over lossy networks because one lost video packet can propagate errors to many subsequent video frames due to the encoding dependency between frames. Feedback-based error control techniques use feedback information from the decoder to adjust coding parameters at the encoder or retransmit lost packets to reduce the error propagation due to data loss. Feedback-based error control techniques have been shown to be more effective than trying to conceal the error at the encoder or decoder alone since they allow the encoder and decoder to cooperate in the error control process. However, there has been no systematic exploration of the impact of video content and network conditions on the performance of feedback-based error control techniques. In particular, the impact of packet loss, round-trip delay, network capacity constraint, video motion and reference distance on the quality of videos using feedback-based error control techniques have not been systematically studied. This thesis presents analytical models for the major feedback-based error control techniques: Retransmission, Reference Picture Selection (both NACK and ACK modes) and Intra Update. These feedback-based error control techniques have been included in H.263/H.264 and MPEG4, the state of the art video in compression standards. Given a round-trip time, packet loss rate, network capacity constraint, our models can predict the quality for a streaming video with retransmission, Intra Update and RPS over a lossy network. In order to exploit our analytical models, a series of studies has been conducted to explore the effect of reference distance, capacity constraint and Intra coding on video quality. The accuracy of our analytical models in predicting the video quality under different network conditions is validated through simulations. These models are used to examine the behavior of feedback-based error control schemes under a variety of network conditions and video content through a series of analytic experiments. Analysis shows that the performance of feedback-based error control techniques is affected by a variety of factors including round-trip time, loss rate, video content and the Group of Pictures (GOP) length. In particular: 1) RPS NACK achieves the best performance when loss rate is low while RPS ACK outperforms other repair techniques when loss rate is high. However RPS ACK performs the worst when loss rate is low. Retransmission performs the worst when the loss rate is high; 2) for a given round-trip time, the loss rate where RPS NACK performs worse than RPS ACK is higher for low motion videos than it is for high motion videos; 3) Videos with RPS NACK always perform the same or better than videos without repair. However, when small GOP sizes are used, videos without repair perform better than videos with RPS ACK; 4) RPS NACK outperform Intra Update for low-motion videos. However, the performance gap between RPS NACK and Intra Update drops when the round-trip time or the intensity of video motion increases. 5) Although the above trends hold for both VQM and PSNR, when VQM is the video quality metric the performance results are much more sensitive to network loss. 6) Retransmission is effective only when the round-trip time is low. When the round-trip time is high, Partial Retransmission achieves almost the same performance as Full Retransmission. 
These insights derived from our models can help determine appropriate choices for feedback-based error control techniques under various network conditions and video content. "
19

Wang, Yubing. "Modeling and evaluating feedback-based error control for video transfer." Worcester, Mass. : Worcester Polytechnic Institute, 2008. http://www.wpi.edu/Pubs/ETD/Available/etd-102408-150542/.

20

Dutta, Rahul Kumar. "A Framework for Software Security Testing and Evaluation." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-121645.

Abstract:
Security in the automotive industry is a growing concern these days. As more smart electronic devices are getting connected to each other, the dependency on these devices is urging us to connect them with moving objects such as cars, buses, trucks, etc. As such, safety and security issues related to automotive objects are becoming more relevant in the realm of internet-connected devices and objects. In this thesis, we emphasize certain factors that introduce security vulnerabilities in the implementation phase of the Software Development Life Cycle (SDLC). Input invalidation is one of them, and it is the one we address in our work. We implement a security evaluation framework that allows us to improve security in automotive software by identifying and removing software security vulnerabilities that arise due to input invalidation during the SDLC. We propose to use this framework in the implementation and testing phases so that critical security-by-design deficiencies of the software can be easily addressed and mitigated.
21

Koniakowski, Isabella. "When Should Feedback be Provided in Online Forms? : Using Revisits as a Measurement of Optimal Scanpath Disruption and Re-evaluating the Modal Theory of Form Completion." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-138800.

Abstract:
In web forms, feedback can be provided to users at different points in time: immediately, after the user leaves a field, or after form submission. This study investigates these three ways of providing feedback to find which results in the shortest completion time, which results in the lowest number of gaze revisits to input fields, and which type of feedback the users prefer. This was investigated through the development of prototypes that were tested with 30 participants in a within-group design, after which they were interviewed about their experiences. Providing feedback instantly or after form submission resulted in significantly shorter completion times than providing feedback after users left a field. Providing feedback instantly also resulted in significantly fewer revisits to input fields compared to providing feedback after leaving a field. Through a thematic analysis, users’ experiences were shown to be the most negative when given feedback after form submission, while the most positive experiences occurred when users were given feedback immediately. The results indicate that providing feedback immediately may be an equally good or better alternative to earlier research recommendations to provide feedback after form submission, and that revisits to areas of interest may, with further research, be a measurement of optimal scanpath disruption.
22

Li, Yingjie. "Bit error rate simulation of a CDMA system for personal communications." Thesis, This resource online, 1993. http://scholar.lib.vt.edu/theses/available/etd-07282008-135717/.

23

Miura, Tomoaki. "Evaluation and characterization of vegetation indices with error/uncertainty analysis for EOS-MODIS." Diss., The University of Arizona, 2000. http://hdl.handle.net/10150/284157.

Abstract:
A set of error/uncertainty analyses were performed on several "improved" vegetation indices (VIs) planned for operational use in the Moderate Resolution Imaging Spectroradiometer (MODIS) VI products onboard the Terra (EOS AM-1) and Aqua (EOS PM-1) satellite platforms. The objective was to investigate the performance and accuracy of the satellite-derived VI products under improved sensor characteristics and algorithms. These include the "atmospheric resistant" VIs that incorporate the "blue" band for normalization of aerosol effects and the most widely-used, normalized difference vegetation index (NDVI). The analyses were conducted to evaluate specifically: (1) the impact of sensor calibration uncertainties on VI accuracies, (2) the capabilities of the atmospheric resistant VIs and various middle-infrared (MIR) derived VIs to minimize smoke aerosol contamination, and (3) the performances of the atmospheric resistant VIs under "residual" aerosol effects resulting from the assumptions in the MODIS aerosol correction algorithm. The results of these studies showed both the advantages and disadvantages of using the atmospheric resistant VIs for operational vegetation monitoring. The atmospheric resistant VIs successfully minimized optically thin aerosol smoke contamination (aerosol optical thickness (AOT) at 0.67 μm < 1.0) but not optically thick smoke (AOT at 0.67 μm > 1.0). On the other hand, their resistances to "residual" aerosol effects were greater when the effects resulted from the correction of optically-thick aerosol atmosphere. The atmospheric resistant VIs did not successfully minimize the residual aerosol effects from optically-thin aerosol atmosphere (AOT at 0.67 μm ≤ ∼0.15), which was caused mainly by the possible wrong choice of aerosol model used for the AOT estimation and correction. The resultant uncertainties of the atmospheric resistant VIs associated with calibration, which were twice as large as that of the NDVI, increased with increasing AOT. These results suggest that the atmospheric resistant VIs be computed from partially (Rayleigh/O₃) corrected reflectances under normal atmospheric conditions (e.g., visibility > 10 km). Aerosol corrections should only be performed when biomass burning, urban/industrial pollution, and dust storms (larger AOT) are detected.
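For readers unfamiliar with the index formulas being compared, the sketch below computes the NDVI and a blue-band "atmospheric resistant" index (the MODIS-style EVI) from toy reflectance values. The coefficients shown are the commonly published EVI constants, and the NumPy usage is the editor's assumption rather than code from the dissertation.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def evi(nir, red, blue, G=2.5, C1=6.0, C2=7.5, L=1.0):
    """Enhanced Vegetation Index: an 'atmospheric resistant' index that uses the
    blue band to normalize residual aerosol effects (standard MODIS coefficients)."""
    return G * (nir - red) / (nir + C1 * red - C2 * blue + L)

# Toy surface reflectances (unitless, 0-1) for a vegetated pixel
nir, red, blue = np.array([0.45]), np.array([0.08]), np.array([0.04])
print(ndvi(nir, red), evi(nir, red, blue))
```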
24

Erikmats, John, and Johan Sjösten. "Sustainable Investment Strategies : A Quantitative Evaluation of Sustainable Investment Strategies For Index Funds." Thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-160941.

Abstract:
Modern society is faced with the complex and intractable challenge of global warming, along with other environmental issues that could potentially alter our way of life if not managed properly. Is it possible that financial markets and equity investors could have a huge part to play in the transformation towards a greener and more sustainable world? Previous studies about investment strategies regarding sustainability have for the most part been centered around possibly less objective ESG-scores or around carbon and GHG-emissions only, with little or no consideration for water usage and waste management. This thesis aims to amend the previous work on carbon-reducing strategies and ESG-investing with the addition of water usage and waste management, specifically using raw data for these measures instead of ESG-ratings. Index-replicating portfolios have become more and more popular as it proves harder and harder to beat the index, offering good returns along with cheap and uncomplicated portfolio construction and management. In a trending market, the fear of missing out and the demand for market return can make an index-replicating strategy a way for investors to have market exposure but still remain diversified and without confusion about which horses to bet on. This thesis studies the relationship between tracking-error and the increase of sustainability in a portfolio through reduction of the intensity of carbon emissions, water usage and poor waste management. To be able to make a fair comparison, these measures are normalized by dividing each measure by the reported annual revenue. These three obtained intensities are then implemented individually, as well as all together, into index-replicating portfolios in order to study the effect of decreasing them. First and foremost we study the effect on the tracking-error, but also the effects on returns and volatility. We also study the effect on liquidity and turnover in the portfolios to show that it is possible to implement extensive sustainability-increasing methods into an index replication equity portfolio. We follow the UCITS directive to avoid overweighting specific companies and only allow the portfolios to overweight a sector by a maximum of 2%, in order to avoid an unwanted exposure to sectors with naturally lower intensities. The portfolios are obtained by using a multi-factor risk model to predict the expected statistical behaviour in relation to the chosen factors, followed by applying Markowitz Modern Portfolio Theory through a convex optimization problem with the objective function to minimize tracking-error. All displayed portfolios had stable and convex optimizations and were compliant with the UCITS directive. We limited our study to North American stocks only and chose the index "MSCI NA" to replicate. Only stocks that were part of the index were allowed to be invested in, and we did not allow negative weights for any stocks. The portfolios were constructed and backtested for the period 2014-12-01 until 2019-03-01 with quarterly rebalancing at the same points in time that the index is rebalanced by MSCI. We found that it was possible to implement extensive sustainability considerations into the portfolios and still keep a high correlation with the index whilst keeping low tracking-errors. We believe that most index-replicating investors should be able to implement reductions of the above-mentioned intensities of about 40-60% without compromising tracking-errors, returns and volatility too much.
We found evidence that during this time and in this market our low-intensity portfolios would have outperformed the index. We also found that returns increased and volatility decreased as we increased the reduction of each individual measure and all three collectively. Reducing carbon intensity seemed to drive positive returns and lower volatility the most, but we also observed a positive effect from the reduction of all intensities. Our belief before conducting this study was that sustainability should have a negative effect on returns due to the limitation of the feasible area of investing. This motivated us to build portfolios with intent to make up for these lesser returns and hopefully "beat the index". This failed in almost all cases, and the only way we were able to beat the index was through implementing sustainability in our portfolios.
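To illustrate the core optimization step mentioned in this abstract (minimizing ex-ante tracking error against a benchmark subject to long-only constraints and exclusions), a reduced sketch is given below. It uses box bounds instead of the thesis's UCITS sector constraints and a toy covariance matrix; the SciPy-based formulation and all names are the editor's assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def min_tracking_error(cov, bench_w, excluded, max_w=0.25):
    """Long-only index-tracking weights with some assets excluded.

    Minimizes the ex-ante tracking variance (w - b)' C (w - b) subject to full
    investment; excluded assets (e.g. the worst carbon/water/waste intensities)
    are forced to zero weight. The UCITS sector caps used in the thesis are omitted.
    """
    n = len(bench_w)
    bounds = [(0.0, 0.0) if i in excluded else (0.0, max_w) for i in range(n)]
    cons = [{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}]
    obj = lambda w: (w - bench_w) @ cov @ (w - bench_w)
    x0 = np.array([0.0 if i in excluded else 1.0 for i in range(n)])
    x0 /= x0.sum()                                   # feasible equal-weight start
    res = minimize(obj, x0, method="SLSQP", bounds=bounds, constraints=cons)
    return res.x, np.sqrt(obj(res.x))                # weights and per-period tracking error

# Toy example: 8 assets, exclude the two most carbon-intensive ones
rng = np.random.default_rng(6)
B = rng.normal(0, 0.02, size=(8, 8))
cov = B @ B.T + 1e-4 * np.eye(8)                     # positive-definite return covariance
bench = np.full(8, 1 / 8)
weights, te = min_tracking_error(cov, bench, excluded={3, 7})
print(weights.round(3), round(te, 4))
```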
25

Al-Liabi, Majda Majeed. "Computational support for learners of Arabic." Thesis, University of Manchester, 2012. https://www.research.manchester.ac.uk/portal/en/theses/computational-support-for-learners-of-arabic(abd20b76-3ba2-4e11-8aa5-459ec6d8d7d2).html.

Abstract:
This thesis documents the use of Natural Language Processing (NLP) in Computer Assisted Language Learning (CALL) and its contribution to the learning experience of students studying Arabic as a foreign language. The goal of this project is to build an Intelligent Computer Assisted Language Learning (ICALL) system that provides computational assistance to learners of Arabic by teaching grammar, producing homework and issuing students with immediate feedback. To produce this system we use the Parasite system, which produces morphological, syntactic and semantic analysis of textual input, and extend it to provide error detection and diagnosis. The methodology we adopt involves relaxing constraints on unification so that correct information contained in a badly formed sentence may still be used to obtain a coherent overall analysis. We look at a range of errors, drawn from experience with learners at various levels, covering word internal problems (addition of inappropriate affixes, failure to apply morphotactic rules properly) and problems with relations between words (local constraints on features, and word order problems). As feedback is an important factor in learning, we look into different types of feedback that can be used to evaluate which is the most appropriate for the aim of our system.
26

Boone, Amanda Carrie. "Methodology for evaluating and reducing medication administration errors." Master's thesis, Mississippi State : Mississippi State University, 2003. http://library.msstate.edu/etd/show.asp?etd=etd-07202003-190139.

27

Nakagawa, N., H. Okada, T. Wada, T. Yamazato, and M. Katayama. "Performance Evaluation for Error Correcting Scheme on Multiple Routes in Wireless Multi-hop Networks." IEEE, 2004. http://hdl.handle.net/2237/7753.

28

Romann, Alexandra. "Evaluating the performance of simulation extrapolation and Bayesian adjustments for measurement error." Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/5236.

Abstract:
Measurement error is a frequent issue in many research areas. For instance, in health research it is often of interest to understand the relationship between an outcome and an exposure, which is often mismeasured if the study is observational or a gold standard is costly or absent. Measurement error in the explanatory variable can have serious effects, such as biased parameter estimation, loss of power, and masking of the features of the data. The structure of the measurement error is usually not known to the investigators, leading to many difficulties in finding solutions for its correction. In this thesis, we consider problems involving a correctly measured continuous or binary response, a mismeasured continuous exposure variable, along with another correctly measured covariate. We compare our proposed Bayesian approach to the commonly used simulation extrapolation (SIMEX) method. The Bayesian model incorporates the uncertainty of the measurement error variance, and the posterior distribution is generated by using the Gibbs sampler as well as the random walk Metropolis algorithm. The comparison between the Bayesian and SIMEX approaches is conducted using different cases of simulated data including validation data, as well as the Framingham Heart Study data, which provides replicates but no validation data. The Bayesian approach is more robust to changes in the measurement error variance or validation sample size, and consistently produces wider credible intervals as it incorporates more uncertainty. The underlying theme of this thesis is the uncertainty involved in the estimation of the measurement error variance. We investigate how accurately this parameter has to be estimated, and how confident one has to be about this estimate, in order to produce better results by choosing the Bayesian measurement error correction over the naive analysis where measurement error is ignored.
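As a compact illustration of the SIMEX idea the thesis compares against, the sketch below adds extra measurement error to an error-prone covariate at increasing multiples lambda, records the naive regression slope, and extrapolates back to lambda = -1 with a quadratic. The simple linear-regression setting, the quadratic extrapolant and the NumPy implementation are the editor's assumptions, not the thesis's code.

```python
import numpy as np

def simex_slope(w, y, sigma_u, lambdas=(0.5, 1.0, 1.5, 2.0), B=200, seed=0):
    """SIMEX-corrected slope for simple linear regression with an error-prone covariate.

    Observed w = x + u with u ~ N(0, sigma_u^2). For each lambda, extra noise with
    variance lambda * sigma_u^2 is added B times; the averaged naive slopes are then
    extrapolated back to lambda = -1 with a quadratic in lambda.
    """
    rng = np.random.default_rng(seed)
    lam = np.concatenate(([0.0], np.asarray(lambdas, dtype=float)))
    mean_slopes = []
    for l in lam:
        reps = B if l > 0 else 1
        slopes = [np.polyfit(w + rng.normal(0, np.sqrt(l) * sigma_u, w.size), y, 1)[0]
                  for _ in range(reps)]
        mean_slopes.append(np.mean(slopes))
    quad = np.polyfit(lam, mean_slopes, 2)           # quadratic extrapolant in lambda
    return np.polyval(quad, -1.0)

# Simulated data: true slope 2.0, attenuated by measurement error in the covariate
rng = np.random.default_rng(7)
x = rng.normal(0, 1, 500)
y = 2.0 * x + rng.normal(0, 0.5, 500)
w = x + rng.normal(0, 0.6, 500)                      # observed, error-prone covariate
print(f"naive slope: {np.polyfit(w, y, 1)[0]:.3f}  SIMEX slope: {simex_slope(w, y, 0.6):.3f}")
```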
29

JIA, BIN. "An Empirical Study on OCLC Catalog Record Errors for the Copy Cataloging of Chinese Monographs." Thesis, School of Information and Library Science, 2007. http://hdl.handle.net/1901/376.

Abstract:
This paper explores and analyzes the errors in the sampled catalog records of Chinese monographs in the OCLC database. The author examined the errors by content, frequency, type and MARC field location to study the effect on both copy cataloging activities and library patrons’ access to an online catalog record. Finally, a number of recommendations are proposed for future research and error decreasing measures.
30

Nassr, Husam, and Kurt Kosbar. "PERFORMANCE EVALUATION FOR DECISION-FEEDBACK EQUALIZER WITH PARAMETER SELECTION ON UNDERWATER ACOUSTIC COMMUNICATION." International Foundation for Telemetering, 2017. http://hdl.handle.net/10150/626999.

Abstract:
This paper investigates the effect of parameter selection for the decision feedback equalization (DFE) on communication performance through a dispersive underwater acoustic wireless channel (UAWC). A DFE based on minimum mean-square error (MMSE-DFE) criterion has been employed in the implementation for evaluation purposes. The output from the MMSE-DFE is input to the decoder to estimate the transmitted bit sequence. The main goal of this experimental simulation is to determine the best selection, such that the reduction in the computational overload is achieved without altering the performance of the system, where the computational complexity can be reduced by selecting an equalizer with a proper length. The system performance is tested for BPSK, QPSK, 8PSK and 16QAM modulation and a simulation for the system is carried out for Proakis channel A and real underwater wireless acoustic channel estimated during SPACE08 measurements to verify the selection.
31

Chitwood, Tara Marshall. "SECOND VICTIM: SUPPORT FOR THE HEALTHCARE TEAM." Case Western Reserve University Doctor of Nursing Practice / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=casednp1554820138107259.

32

Su, QingLang. "Automatic image alignment for clinical evaluation of patient setup errors in radiotherapy." Thesis, University of Central Lancashire, 2004. http://clok.uclan.ac.uk/20692/.

Abstract:
In radiotherapy, the treatment is typically pursued by irradiating the patient with high energy x-ray beams conformed to the shape of the tumour from multiple directions. Rather than administering the total dose in one session, the dose is often delivered in twenty to thirty sessions. For each session several settings must be reproduced precisely (treatment setup). These settings include machine setup, such as energy, direction, size and shape of the radiation beams as well as patient setup, such as position and orientation of the patient relative to the beams. An inaccurate setup may result in not only recurrence of the tumour but also medical complications. The aim of the project is to develop a novel image processing system to enable fast and accurate evaluation of patient setup errors in radiotherapy by automatic detection and alignment of anatomical features in images acquired during treatment simulation and treatment delivery. By combining various image processing and mathematical techniques, the thesis presents the successful development of an effective approach which includes detection and separation of collimation features for establishment of image correspondence, region based image alignment based on local mutual information, and application of the least-squares method for exhaustive validation to reject outliers and for estimation of global optimum alignment. A complete software tool was developed and clinical validation was performed using both phantom and real radiotherapy images. For the former, the alignment accuracy is shown to be within 0.06 cm for translation and 1.14 degrees for rotation. More significantly, the translation is within the ±0.1 cm machine setup tolerance and the setup rotation can vary between ±1 degree. For the latter, the alignment was consistently found to be similar or better than those based on manual methods. Therefore, a good basis is formed for consistent, fast and reliable evaluation of patient setup errors in radiotherapy.
33

Onuorah, Chinedum Anthony. "Evaluation of pavement roughness and vehicle vibrations for road surface profiling." Thesis, University of Hertfordshire, 2018. http://hdl.handle.net/2299/21107.

Abstract:
The research explores aspects of road surface measurement and monitoring, targeting some of the main challenges in the field, including cost and portability of high-speed inertial profilers. These challenges are due to the complexities of modern profilers to integrate various sensors while using advanced algorithms and processes to analyse measured sensor data. Novel techniques were proposed to improve the accuracy of road surface longitudinal profiles using inertial profilers. The thesis presents a Half-Wavelength Peak Matching (HWPM) model, designed for inertial profilers that integrate a laser displacement sensor and an accelerometer to evaluate surface irregularities. The model provides an alternative approach to drift correction in accelerometers, which is a major challenge when evaluating displacement from acceleration. The theory relies on using data from the laser displacement sensor to estimate a correction offset for the derived displacement. The study also proposes an alternative technique to evaluating vibration velocity, which improves on computational factors when compared to commonly used methods. The aim is to explore a different dimension to road roughness evaluation, by investigating the effect of surface irregularities on vehicle vibration. The measured samples show that the drift in the displacement calculated from the accelerometer increased as the vehicle speed at which the road measurement was taken increased. As such, the significance of the HWPM model is more apparent at higher vehicle speeds, where the results obtained show noticeable improvements to current techniques. All results and analysis carried out to validate the model are based on real-time data obtained from an inertial profiler that was designed and developed for the research. The profiler, which is designed for portability, scalability and accuracy, provides a Power Over Ethernet (POE) enabled solution to cope with the demand for high data transmission rates.
APA, Harvard, Vancouver, ISO, and other styles
34

Bjurenfalk, Jonatan, and August Johnson. "Automated error matching system using machine learning and data clustering : Evaluating unsupervised learning methods for categorizing error types, capturing bugs, and detecting outliers." Thesis, Linköpings universitet, Programvara och system, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-177280.

Full text
Abstract:
For large and complex software systems, manually inspecting the error logs produced by their test suites is a time-consuming process. Whether it is for identifying abnormal faults or finding bugs, it is a process that limits development progress and requires experience. An automated solution for such processes could potentially lead to efficient fault identification and bug reporting, while also enabling developers to spend more time on improving system functionality. Three unsupervised clustering algorithms are evaluated for the task: HDBSCAN, DBSCAN, and X-Means. In addition, HDBSCAN, DBSCAN and an LSTM-based autoencoder are evaluated for outlier detection. The dataset consists of error logs produced by a robotic test system. These logs are cleaned and pre-processed using stopword removal, stemming, term frequency-inverse document frequency (tf-idf) and singular value decomposition (SVD). Two domain experts were tasked with evaluating the results produced from clustering and outlier detection. Results indicate that X-Means outperforms the other clustering algorithms when tasked with automatically categorizing error types and capturing bugs. Furthermore, none of the outlier detection methods yielded sufficient results. However, it was found that X-Means clusters containing a single data point gave an accurate representation of outliers occurring in the error log dataset. Conclusively, the domain experts deemed X-Means to be a helpful tool for categorizing error types, capturing bugs, and detecting outliers.
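A minimal sketch of the kind of pre-processing and clustering pipeline described here (stopword removal, tf-idf, SVD, clustering) is given below; scikit-learn offers no X-Means implementation, so ordinary KMeans stands in, and the example log lines are invented.

```python
# Hedged sketch of an error-log clustering pipeline in the spirit of the thesis.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans
from sklearn.pipeline import make_pipeline

error_logs = [
    "timeout waiting for robot axis three response",
    "timeout waiting for robot axis five response",
    "null reference in motion planner module",
    "null reference in motion planner module on startup",
]

pipeline = make_pipeline(
    TfidfVectorizer(stop_words="english"),   # stopword removal + tf-idf weighting
    TruncatedSVD(n_components=2),            # SVD dimensionality reduction
    KMeans(n_clusters=2, n_init=10, random_state=0),  # stand-in for X-Means
)
labels = pipeline.fit_predict(error_logs)
print(labels)   # logs describing the same error type share a cluster label
```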
APA, Harvard, Vancouver, ISO, and other styles
35

Adolfsson, Jonathan. "Lane Following for Autonomous Vehicles - Novel Heading Error Definition and Evaluation of Discrete vs. Continuous Time Control." Thesis, KTH, Skolan för elektro- och systemteknik (EES), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-214712.

Full text
Abstract:
An autonomous vehicle has a multilayer control architecture for determining its movements: in the top layer a planner reads the environment and decides on a path and a speed at which to drive; it is then up to a quicker and less complex controller to make sure that the generated lane is followed. The thesis considers a controller system for lane following of an autonomous vehicle. The first purpose of the thesis is to analyze the performance of a lane following controller using a new heading error definition. The second is to investigate the difference in performance between a controller that accounts for, and one that does not account for, the sample rate of the discrete-time system representing the actual sampled digital controller. It is shown that the new heading error definition gives reliable results in both normal and extreme situations, and that the difference in performance with and without considering the sample rate is very small.
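The sample-rate question can be illustrated with a small sketch: a simple continuous-time lateral/heading error model is discretized with a zero-order hold at two sample rates and the resulting closed loop is checked for stability. The kinematic model and the gains are assumptions for illustration, not the thesis's controller or its heading error definition.

```python
# Hedged sketch: discretizing a continuous lane-keeping error model so that
# feedback design can account for the controller's sample rate.
import numpy as np
from scipy.signal import cont2discrete

v = 10.0                      # vehicle speed [m/s]
A = np.array([[0.0, v],       # d(lateral error)/dt = v * heading error
              [0.0, 0.0]])    # d(heading error)/dt = steering input (simplified)
B = np.array([[0.0], [1.0]])
C = np.eye(2)
D = np.zeros((2, 1))

for dt in (0.01, 0.1):        # fast vs slow sampling
    Ad, Bd, *_ = cont2discrete((A, B, C, D), dt, method="zoh")
    K = np.array([[0.5, 2.0]])                 # illustrative state-feedback gains
    closed_loop = Ad - Bd @ K
    radius = max(abs(np.linalg.eigvals(closed_loop)))
    print(f"dt={dt}: spectral radius {radius:.3f} (stable if < 1)")
```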
APA, Harvard, Vancouver, ISO, and other styles
36

Neese, Jay M. "Evaluating Atlantic tropical cyclone track error distributions for use in probabilistic forecasts of wind distribution." Thesis, Monterey, California. Naval Postgraduate School, 2010. http://hdl.handle.net/10945/5150.

Full text
Abstract:
Approved for public release; distribution is unlimited
This thesis investigates whether the National Hurricane Center (NHC) operational product for producing probabilistic forecasts of tropical cyclone (TC) wind distributions could be further improved by examining the distributions of track errors it draws upon to calculate probabilities. The track spread/skill relationship for several global ensemble prediction system forecasts is examined as a condition for describing a full probability distribution function. The 2007, 2008, and 2009 NHC official track forecasts are compared to the ensemble prediction system along-, cross-, and forecast-track errors. Significant differences in statistical properties were identified among the groups to determine whether conditioning on geographic location was warranted. Examination of each regional distribution interval suggests that differences in distributions existed for along-track and cross-track errors. Because ensemble mean and deterministic forecasts typically have larger mean errors and larger variance than official forecast errors, it is unlikely that independent error distributions based on these models would refine the PDFs used in the probabilistic model. However, this should be tested with a sensitivity analysis and verified against the probability swath. Overall, the conditioning results suggest that the NHC probability product may be improved if the Monte Carlo (MC) model were to draw from refined distributions of track errors based on TC location.
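The Monte Carlo idea described in this abstract can be illustrated as follows: forecast positions are perturbed with along-track and cross-track errors drawn from assumed distributions, and the fraction of realizations that place a site inside the wind radius approximates the wind probability. All distances and spreads below are invented for the example.

```python
# Hedged sketch of a track-error Monte Carlo for wind probabilities.
import numpy as np

rng = np.random.default_rng(0)
n_realizations = 1000

forecast_pos = np.array([0.0, 0.0])          # forecast TC centre [km, km]
track_dir = np.array([1.0, 0.0])             # unit vector along forecast track
cross_dir = np.array([0.0, 1.0])             # unit vector across track

along_err = rng.normal(0.0, 120.0, n_realizations)   # km, broader spread
cross_err = rng.normal(0.0, 80.0, n_realizations)    # km, narrower spread

positions = (forecast_pos
             + along_err[:, None] * track_dir
             + cross_err[:, None] * cross_dir)

site = np.array([100.0, 50.0])               # location of interest [km]
wind_radius = 150.0                          # radius of 34-kt winds [km]

hits = np.linalg.norm(positions - site, axis=1) <= wind_radius
print(f"P(34-kt winds at site) ~ {hits.mean():.2f}")
```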
APA, Harvard, Vancouver, ISO, and other styles
37

White, Robert J. "Exploration of a Strategy for Reducing Gear Noise in Planetary Transmissions and Evaluation of Laser Vibrometry as a Means for Measuring Transmission Error." Case Western Reserve University School of Graduate Studies / OhioLINK, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=case1129928063.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Harite, Shibani. "Evaluation of 10-fold cross validation and prediction error sums of squares statistic for population pharmacokinetic model validation." Scholarly Commons, 2003. https://scholarlycommons.pacific.edu/uop_etds/585.

Full text
Abstract:
The objective of the current study was to evaluate the ability of 10-fold cross-validation and the prediction error sum of squares (PRESS) statistic to distinguish population pharmacokinetic models (PPKM) estimated from data without influence observations from PPKMs estimated from data containing influence observations. The evaluation of 10-fold cross-validation and the PRESS statistic from leave-one-out cross-validation for PPK model validation was performed in three phases. In Phase I, model parameters (theta and clearance) were estimated for datasets with and without influence observations; it was found that influence observations caused an over-estimation of the model parameters. In Phase II, the statistics from the 10-fold and leave-one-out cross-validation methods were used to detect models developed from influence data. The metrics of choice are the RATIOK and RATIOPR statistics, which can be used to identify models developed from influence data and may then find applicability across differing drugs and models. A cut-off value of 1.05 for RATIOK and RATIOPR was proposed as a discrete breakpoint to classify models generated from influence data versus non-influence data. In Phase III, data analysis was carried out using logistic regression, and the sensitivity and specificity of the leave-one-out and 10-fold cross-validation methods were evaluated. It was found that RATIOK and RATIOPR were significant predictors when used individually in the model. Multicollinearity was detected when RATIOK and RATIOPR were present in the model at the same time. In terms of sensitivity and specificity, both 10-fold cross-validation and leave-one-out cross-validation showed similar performance.
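For readers unfamiliar with PRESS, the sketch below computes it from leave-one-out predictions of an ordinary regression model and contrasts it with a 10-fold analogue; the simulated data and the linear model are placeholders, and the thesis's RATIOK and RATIOPR metrics are not reproduced.

```python
# Hedged sketch of the PRESS statistic from leave-one-out and 10-fold CV.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, KFold, cross_val_predict

rng = np.random.default_rng(1)
X = rng.uniform(0, 10, size=(40, 1))
y = 2.0 * X.ravel() + rng.normal(0, 1, 40)      # simulated observations

model = LinearRegression()

loo_pred = cross_val_predict(model, X, y, cv=LeaveOneOut())
press_loo = np.sum((y - loo_pred) ** 2)          # PRESS from leave-one-out

kfold_pred = cross_val_predict(model, X, y,
                               cv=KFold(n_splits=10, shuffle=True, random_state=0))
press_10fold = np.sum((y - kfold_pred) ** 2)     # 10-fold analogue

print(f"PRESS (LOO)    : {press_loo:.2f}")
print(f"PRESS (10-fold): {press_10fold:.2f}")
```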
APA, Harvard, Vancouver, ISO, and other styles
39

Hagiwara, Magnus. "Development and Evaluation of a Computerised Decision Support System for use in pre-hospital care." Doctoral thesis, Hälsohögskolan, Högskolan i Jönköping, HHJ. Kvalitetsförbättring och ledarskap inom hälsa och välfärd, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-23781.

Full text
Abstract:
The aim of the thesis was to develop and evaluate a Computerised Decision Support System (CDSS) for use in pre-hospital care. The thesis was guided by a theoretical framework for developing and evaluating a complex intervention. The four studies used different designs and methods. The first study was a systematic review of randomised controlled trials. The second and the last studies had experimental and quasi-experimental designs, in which the CDSS was evaluated in a simulation setting and in a clinical setting. The third study had a qualitative case study design. The main findings of the thesis were that there is a weak evidence base for the use of CDSS in pre-hospital care; no previous studies have evaluated the effect of CDSS in this setting. Because of its context, pre-hospital care is dependent on protocol-based care to deliver safe, high-quality care. The physical format of the current paper-based guidelines and protocols is the main obstacle to their use, and both clinicians and leaders of the ambulance organisations ask for guidelines and protocols in an electronic format. The use of CDSS in the pre-hospital setting has a positive effect on compliance with pre-hospital guidelines, with the largest effect in the primary survey and in the anamnesis of the patient. The CDSS also increases the amount of information collected in the basic pre-hospital assessment process, while having only a limited effect on on-scene time. The developed and evaluated CDSS can increase pre-hospital patient safety by reducing the risk of cognitive bias: standardising the assessment process and enabling explicit decision support in the form of checklists, assessment rules, differential diagnosis lists and rule-out-worst-case-scenario strategies reduces the risk of premature closure in the assessment of the pre-hospital patient.
APA, Harvard, Vancouver, ISO, and other styles
40

Chaudhari, Pragat. "Analytical Methods for the Performance Evaluation of Binary Linear Block Codes." Thesis, University of Waterloo, 2000. http://hdl.handle.net/10012/904.

Full text
Abstract:
The modeling of the soft-output decoding of a binary linear block code using a Binary Phase Shift Keying (BPSK) modulation system (with reduced noise power) is the main focus of this work. With this model, it is possible to provide bit error performance approximations to aid in evaluating the performance of binary linear block codes. The model can also be used in the design of communication systems that require knowledge of the channel characteristics, such as combined source-channel coding. Assuming an Additive White Gaussian Noise (AWGN) channel model, the soft-output Log Likelihood Ratio (LLR) values are modeled as Gaussian distributed. The bit error performance of a binary linear code over an AWGN channel can then be approximated using the Q-function that is used for BPSK systems. Simulation results are presented which show that the actual bit error performance of the code is very well approximated by the LLR approximation, especially at low signal-to-noise ratios (SNR). A new measure of the coding gain achievable through the use of a code is introduced by comparing the LLR variance to that of an equivalently scaled BPSK system. Furthermore, arguments are presented which show that the approximation requires fewer samples than conventional simulation methods to obtain the same confidence in the bit error probability value. This translates into fewer computations and therefore less time to obtain performance results. Additional work uses a discrete Fourier transform technique to calculate the weight distribution of a linear code. The weight distribution of a code is defined by the number of codewords that have a given number of ones. For codeword lengths of small to moderate size, this method is faster than other methods and provides an easily implementable and methodical approach. The technique has the added advantage of being able to methodically calculate the number of codewords of a particular Hamming weight instead of calculating the entire weight distribution of the code.
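A small sketch of the Q-function approximation mentioned in the abstract: uncoded BPSK error probability over AWGN, and the error probability implied by Gaussian-modeled LLR statistics. The example LLR mean and variance are hypothetical, not values from the thesis.

```python
# Hedged sketch of Q-function bit error rate approximations.
import numpy as np
from scipy.special import erfc

def q_function(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * erfc(x / np.sqrt(2.0))

def bpsk_ber(ebno_db):
    """Uncoded BPSK bit error rate over AWGN: Q(sqrt(2 Eb/N0))."""
    ebno = 10.0 ** (ebno_db / 10.0)
    return q_function(np.sqrt(2.0 * ebno))

def llr_ber(llr_mean, llr_var):
    """BER predicted from Gaussian-modeled LLRs: P(LLR < 0) for a transmitted +1."""
    return q_function(llr_mean / np.sqrt(llr_var))

for ebno_db in (0, 2, 4, 6):
    print(f"Eb/N0 = {ebno_db} dB -> uncoded BPSK BER ~ {bpsk_ber(ebno_db):.2e}")

# Hypothetical LLR statistics: mean 6, variance 12 -> approximate coded BER.
print(f"LLR-model BER ~ {llr_ber(6.0, 12.0):.2e}")
```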
APA, Harvard, Vancouver, ISO, and other styles
41

Eriksson, Daniel. "Portable BizTalk solutions : Evaluating portable solutions to search for errors in BizTalkplatforms." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-142407.

Full text
Abstract:
This report evaluates possible infrastructures for creating portable BizTalk solutions. BizTalk is integration software mostly used at larger companies. Errors can occur in BizTalk, and experts need an easy and portable solution to identify them. No such solution exists today, and this report focuses on how it could be realized. The results show that various tools need to be used to access information from BizTalk. Information about BizTalk must be protected by access rights, which are preferably controlled from a cloud portal. The cloud portal used in this project is Windows Azure, but other solutions have been considered. Azure has a specialized service for accessing secured locations, which other providers lack. Finally, a prototype application for Windows Phone 8 was developed. The solution has been shown to BizTalk experts, who were enthusiastic about the proposed solution and have proceeded with the project. They are currently analyzing what it would cost to develop a product and what could be charged for such a service.
BizTalk Server is an integration system that simplifies the integration of different systems. Using various adapters, BizTalk can easily connect services that do not speak the same language. Errors occur in BizTalk, and experts need a convenient, portable solution to identify them. The report evaluates different infrastructures for creating mobile monitoring solutions for BizTalk. The results show that several tools must be used to connect to BizTalk. The information these tools access must be protected by access rights, preferably implemented in the cloud. The cloud portal used in this project is Windows Azure, but other cloud solutions have also been evaluated. Azure has a service specialized in reaching protected networks behind firewalls, which other providers lack. Finally, a prototype was developed in Windows Phone 8 to demonstrate the solution. The demonstration was given to BizTalk experts, who were interested in the solution and have chosen to continue the project. They have taken the next step in the process, which involves analyzing what it would cost to develop a commercial product and what could be charged for such a product.
APA, Harvard, Vancouver, ISO, and other styles
42

Hedin, Rasmus. "Evaluation of an Appearance-Preserving Mesh Simplification Scheme for CET Designer." Thesis, Linköpings universitet, Informationskodning, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-152583.

Full text
Abstract:
To decrease the rendering time of a mesh, levels of detail can be generated by reducing the number of polygons based on some geometrical error. While this works well for most meshes, it is not suitable for meshes with an associated texture atlas. By iteratively collapsing edges based on an extended version of the Quadric Error Metric that takes both spatial and texture coordinates into account, textured meshes can also be simplified. Results show that constraining edge collapses at the seams of a mesh gives poor geometrical appearance when the mesh is reduced to a few polygons. By allowing seam edge collapses and by using a pull-push algorithm to fill areas located outside the seam borders of the texture atlas, the appearance of the mesh is better preserved.
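As background for the extended metric mentioned here, the sketch below evaluates the standard (geometry-only) Quadric Error Metric cost of collapsing one edge shared by two triangles; the vertex coordinates are arbitrary, and the thesis's texture-coordinate extension is not shown.

```python
# Hedged sketch of the classic Quadric Error Metric for edge collapse.
import numpy as np

def plane_quadric(p0, p1, p2):
    """Fundamental quadric K = n n^T for the plane through three points."""
    normal = np.cross(p1 - p0, p2 - p0)
    normal = normal / np.linalg.norm(normal)
    d = -normal @ p0
    n = np.append(normal, d)                 # plane as [a, b, c, d]
    return np.outer(n, n)

def collapse_cost(q_sum, target):
    """Summed squared plane distances of placing the merged vertex at 'target'."""
    v = np.append(target, 1.0)
    return float(v @ q_sum @ v)

# Two triangles sharing the edge (v0, v1); candidate collapse to the midpoint.
v0, v1, v2, v3 = (np.array(p, float) for p in
                  [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0.2)])
Q = plane_quadric(v0, v1, v2) + plane_quadric(v1, v3, v2)
midpoint = 0.5 * (v0 + v1)
print("collapse cost at midpoint:", collapse_cost(Q, midpoint))
```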
APA, Harvard, Vancouver, ISO, and other styles
43

Zhou, Xiaoqin. "Evaluation of instantaneous and cumulative models for reactivity ratio estimation with multiresponse scenarios." Thesis, Waterloo, Ont. : University of Waterloo, 2004. http://etd.uwaterloo.ca/etd/x5zhou2004.pdf.

Full text
Abstract:
Thesis (M.A.Sc.)--University of Waterloo, 2004.
"A thesis presented to the University of Waterloo in fulfillment of the thesis requirement for the degree of Master of Applied Science in Chemical Engineering". Includes bibliographical references. Issued also in PDF fomat and available via the World Wide Web. Requires internet connectivity, World Wide Web browser, and Adobe Acrobat Reader to view and print files.
APA, Harvard, Vancouver, ISO, and other styles
44

Gruner, Greg L. "The design and evaluation of a computer-assisted error detection skills development program for beginning conductors utilizing synthetic sound sources." Virtual Press, 1993. http://liblink.bsu.edu/uhtbin/catkey/861377.

Full text
Abstract:
The purposes of this study were to design and to evaluate an online system intended to enhance communication skills and project tracking in computer software courses at Ball State University (BSU). The Student Online Project Planning and Tracking System (SOPPTS) was designed and field tested to provide real-time feedback from faculty on student progress, offer online guidance for software project planning, produce tracking automation, and facilitate communication between faculty and students. SOPPTS technology was designed under the supervision of W. Zage and D. Zage, professors in the Computer Science Department at BSU. Participants in this study included six BSU undergraduate students, six BSU graduate students and seven BSU faculty members. Each participant was interviewed for one hour in an instructor's office in the BSU Computer Science Department. With the participants' permission, each interview was audio-taped and coded with a letter and number. Data evaluation consisted of narrative summaries of the interviews, an analysis of the evidence in terms of the research questions and the compilation of data to show both emerging themes and major trends. Analysis of the data showed that learning was definitely enhanced, and that faculty evaluations were also strongly enhanced. Participants recommended more SOPPTS applications, both industrial and academic. The emerging themes showed that faculty and students: 1) had more and easier access to information; students' work was better organized, student team spirit grew, and students were more accurately evaluated by instructors; 2) had more efficient methods for report submission and record keeping; students' interaction with teachers increased and students found SOPPTS better than email; 3) could work from various locations, with greater access to record retrieval and submission of reports, so that submitted documents were available to all instead of getting lost; 4) were motivated by the nature of online task assignment and tracking because of greater accountability, and faculty members were happy to see students' project progress online; 5) improved time and project management through greater awareness of milestones, deadlines and date/time "stamping" of report submissions. Major trends show that improved access to information and communication facilitated learning, and that planning and tracking skills improved.
School of Music
APA, Harvard, Vancouver, ISO, and other styles
45

Feng, Yunyi. "Identification of Medical Coding Errors and Evaluation of Representation Methods for Clinical Notes Using Machine Learning." Ohio University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1555421482252775.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Durand, Marcus L. "The evaluation of methods for the prospective patient safety hazard analysis of ward-based oxygen therapy." Thesis, Cranfield University, 2009. http://dspace.lib.cranfield.ac.uk/handle/1826/4480.

Full text
Abstract:
When even seemingly benign and routine processes fail in healthcare, people sometimes die. The profound effect on the patients' families and the healthcare staff involved is clear (Vincent and Coulter, 2002), while further consequences are felt by the institution involved, both financially and through damage to its reputation. The trend in healthcare of learning through experience of adverse events is no longer a viable philosophy (Department of Health, Sir Ian Carruthers OBE and Pauline Philip, 2006). In order to make progress towards preventative learning, three Prospective Hazard Analysis (PHA) methods used in other industries were evaluated for use in ward-based healthcare. Failure Modes and Effects Analysis (FMEA), Fault Tree Analysis (FTA) and Hazard and Operability Analysis (HAZOP) were compared to each other in terms of ease of use, the information they provide and the manner in which it is presented. Their results were also compared to baseline data produced through empirical research. Oxygen therapy was used in this research as an example of a common ward-based therapy. The resulting analysis listed 186 hazards, almost all of which could lead to death, especially if combined. FTA and FMEA provided better system coverage than HAZOP and identified more hazards than were contained in the initial hazard identification method common to both techniques. FMEA and HAZOP needed some modification before use, with HAZOP requiring the most extensive adjustment. FTA has a very useful graphical presentation and was the only method capable of displaying causal linkage, but required that hazards be translated into events for analysis. It was concluded that formal PHA is applicable to this area of healthcare and adds value through a combination of detailed information on possible hazards and accurate risk assessment based on a combination of expert opinion and empirical data. This provides a mechanism for evidence-based identification of hazard barriers and safeguards, as well as a method for formal communication of results at any stage of an analysis. It may further provide a valuable vehicle for documented learning through prospective analysis incorporating feedback from previous experience and adverse incidents. The clear definition of systems and processes that forms part of these methods provides a valuable opportunity for learning and for the enduring capture and dissemination of tacit knowledge that can be continually updated and used to formulate strategies for safety and quality improvement.
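The quantitative side of FTA referred to in this abstract can be illustrated with a toy calculation that combines basic-event probabilities through OR and AND gates; the oxygen-therapy events and probabilities below are invented for the example.

```python
# Hedged illustration of fault-tree gate arithmetic (independent basic events).
def or_gate(probabilities):
    """P(at least one independent basic event occurs)."""
    p = 1.0
    for q in probabilities:
        p *= (1.0 - q)
    return 1.0 - p

def and_gate(probabilities):
    """P(all independent basic events occur)."""
    p = 1.0
    for q in probabilities:
        p *= q
    return p

# Hypothetical oxygen-therapy fault tree fragment:
flow_not_set = 0.02
cylinder_empty = 0.01
supply_failure = or_gate([flow_not_set, cylinder_empty])   # either stops supply
check_missed = 0.10
top_event = and_gate([supply_failure, check_missed])       # unnoticed loss of oxygen
print(f"P(top event) ~ {top_event:.4f}")
```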
APA, Harvard, Vancouver, ISO, and other styles
47

Holzschuch, Nicolas. "Le contrôle de l'erreur dans la méthode de radiosité hiérarchique." Phd thesis, Université Joseph Fourier (Grenoble), 1996. http://tel.archives-ouvertes.fr/tel-00004994.

Full text
Abstract:
We present several improvements to an algorithm for modelling illumination, the radiosity method. First, a detailed analysis of the hierarchical radiosity method highlights its weak points and suggests two simple improvements: lazy evaluation of the interactions between objects, and a new refinement criterion that eliminates most unnecessary refinements. A brief review of the properties of multivariate functions and their derivatives follows, which first allows the radiosity expression to be rewritten for more precise numerical computation. Methods for estimating the error produced during the light-modelling process are then introduced. We then show how the concavity properties of the radiosity function allow, through the computation of the successive derivatives of radiosity, complete control of the error committed in modelling the interactions between objects, and therefore tight bounds on the radiosity. We present a refinement criterion based on this modelling of the interactions, and a complete hierarchical radiosity algorithm integrating this refinement criterion, thereby allowing the error committed on the radiosity to be controlled throughout the solution. Finally, we present practical methods for computing the successive derivatives of radiosity (gradient and Hessian), first for a constant emitter without obstacles, then for a constant emitter in the presence of obstacles, and finally for an emitter over which the radiosity varies linearly.
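A schematic sketch of the kind of refinement decision used in hierarchical radiosity follows: a link between two patches is subdivided only when a bound on the energy it transports exceeds a tolerance. The crude form-factor bound used here is a textbook-style stand-in, not the derivative-based (gradient/Hessian) bounds developed in the thesis.

```python
# Hedged sketch of a hierarchical-radiosity refinement criterion.
from dataclasses import dataclass

@dataclass
class Patch:
    area: float          # surface area of the patch
    radiosity: float     # current radiosity estimate

def form_factor_bound(emitter: Patch, distance: float) -> float:
    """Crude upper bound on the point-to-patch form factor (no occlusion)."""
    return min(1.0, emitter.area / (3.14159265 * distance * distance))

def should_refine(emitter: Patch, receiver: Patch, distance: float,
                  tolerance: float) -> bool:
    """Refine the link if the bound on transported energy exceeds the tolerance."""
    error_bound = form_factor_bound(emitter, distance) * emitter.radiosity
    return error_bound * receiver.area > tolerance

source = Patch(area=4.0, radiosity=100.0)
wall = Patch(area=10.0, radiosity=0.0)
print(should_refine(source, wall, distance=2.0, tolerance=1.0))   # True: subdivide
print(should_refine(source, wall, distance=50.0, tolerance=1.0))  # False: keep link
```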
APA, Harvard, Vancouver, ISO, and other styles
48

Devadi, Anil Kumar Reddy. "Evaluating Cache Vulnerability to Transient Errors for The Uni-processor and Multi-processor Systems." Available to subscribers only, 2009. http://proquest.umi.com/pqdweb?did=1967969471&sid=4&Fmt=2&clientId=1509&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Krome, Lesly R. "The influence of core self-evaluations on determining blame for workplace errors: an ANOVA-attribution-model approach." Thesis, Kansas State University, 2013. http://hdl.handle.net/2097/16221.

Full text
Abstract:
Master of Science
Department of Psychological Sciences
Patrick Knight
The current study examined attributions of blame for workplace errors through the lens of Kelley’s (1967) ANOVA model of attribution-making, which addresses the consensus, consistency, and distinctiveness of a behavior. Consensus and distinctiveness information were manipulated in the description of a workplace accident. It was expected that participants would make different attributions regarding the cause of the event due to these manipulations. This study further attempted to determine if an individual’s core self-evaluations (CSE) impact how she or he evaluates a workplace accident and attributes blame, either from the perspective of the employee who made the error or that of a co-worker. Because CSE are fundamental beliefs about an individual’s success, ability, and self-worth, they may contribute to how the individual attributes blame for a workplace accident. It was found that CSE were positively related to participants’ inclination to make internal attributions of blame for a workplace error. Contrary to expectations, manipulations of the consensus and distinctiveness of the workplace error did not moderate participants’ attributions of blame. Explanations for these findings are discussed, as are possible applications of this research.
APA, Harvard, Vancouver, ISO, and other styles
50

Sindler, Petr. "Study of an error in engine ECU data collected for in-use emissions testing and development and evaluation of a corrective procedure." Morgantown, W. Va. : [West Virginia University Libraries], 2007. https://eidr.wvu.edu/etd/documentdata.eTD?documentid=5246.

Full text
Abstract:
Thesis (M.S.)--West Virginia University, 2007.
Title from document title page. Document formatted into pages; contains vii, 63 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 58).
APA, Harvard, Vancouver, ISO, and other styles
