To see the other types of publications on this topic, follow the link: Error Analysis.

Dissertations / Theses on the topic 'Error Analysis'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations / theses for your research on the topic 'Error Analysis.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Recski, Leonardo Juliano. "Computer-assisted error analysis." Florianópolis, SC, 2002. http://repositorio.ufsc.br/xmlui/handle/123456789/83383.

Full text
Abstract:
Dissertação (mestrado) - Universidade Federal de Santa Catarina, Centro de Comunicação e Expressão.
Text-analysis programs for microcomputers have already been available for some time. The technique of computer-assisted prepositional error analysis, a new…
APA, Harvard, Vancouver, ISO, and other styles
2

Ozores, Ana Luiza Festa. "Entendendo alguns erros do Ensino Fundamental II que os alunos mantêm ao final do Ensino Médio." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/45/45135/tde-28102017-073627/.

Full text
Abstract:
The error is commonly regarded as something to be avoided, an indicator of unsatisfactory performance. From childhood, people are taught to seek the right answer, so that when their reasoning is wrong they are expected to redo it. This outcome is demanded at home by the family and at school by the teachers. However, the error is the oldest element in the learning process and, in addition to being a performance indicator, it also shows what the student knows or thinks he or she has understood. It is possible to notice that some high-school students still make mistakes, or hold doubts, that were supposed to be resolved during elementary school. This work analyses why these doubts are still present, because the analysis of such errors can help both students and teachers: the student gains feedback on what was done and can try to improve, and the teacher is led to design new teaching strategies and lesson plans that better suit the target audience.
APA, Harvard, Vancouver, ISO, and other styles
3

Attwal, Preet Singh. "Objective error measure techniques for error analysis and control within the finite element analysis process." Thesis, Cranfield University, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.340874.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Lo, Sau Yee. "Measurement error in logistic regression model /." View abstract or full-text, 2004. http://library.ust.hk/cgi/db/thesis.pl?MATH%202004%20LO.

Full text
Abstract:
Thesis (M. Phil.)--Hong Kong University of Science and Technology, 2004.
Includes bibliographical references (leaves 82-83). Also available in electronic version. Access restricted to campus users.
APA, Harvard, Vancouver, ISO, and other styles
5

Masani, Deekshitha. "Analysis of radiation induced errors in transistors in memory elements." OpenSIUC, 2020. https://opensiuc.lib.siu.edu/theses/2791.

Full text
Abstract:
From the first integrated circuit, a 16-transistor chip built by Heiman and Steven Hofstein in 1962, to the 39.54 billion MOSFETs fabricated in 7 nm FinFET technology as of 2019, the scaling of transistors remains challenging. Scaling must always satisfy constraints of minimal power, minimal area, and the highest possible speed. As of 2020, the world's smallest transistor, built by a team at Lawrence Berkeley National Laboratory, is 1 nm long. In current 14 nm and 7 nm technologies a single die holds more than a billion transistors, so fabricating a die at 1 nm is far more challenging still. Scaling continues, and if silicon cannot satisfy the requirements, designers switch to carbon nanotubes, molybdenum disulfide, or other newer materials. As transistor sizes shrink, radiation effects put increasing pressure on circuits to tolerate errors: a high-energy radiation strike can hit a node and flip its value. No material can guarantee an error-free circuit, but it is possible to preserve a value before an error occurs and to recover it afterwards. In advanced technologies, transistor scaling makes multiple simultaneous radiation-induced errors an issue, and different latch designs have been proposed to address it. Using a 90 nm CMOS technology, this work proposes latch designs that recover the stored value even after an error strikes the latch. Originally, the dominant errors were single-event upsets (SEUs), in which a high-radiation particle strikes only one transistor; with continued scaling, multiple simultaneous radiation errors have become common.
A common multiple-node error is the double-node upset (DNU), which occurs when a high-radiation particle strikes two transistors, since scaling has effectively replaced one transistor with several. Existing SEU- and DNU-hardened designs accurately determine the error rates in a circuit. The HRDNUT latch proposed in the dissertation of Dr. Adam Watkins, in the paper "Analysis and mitigation of multiple radiation induced errors in modern circuits", can recover its value within 2.13 ps. Two circuits are introduced here to increase the speed of recovering the value after a high-energy particle strikes a node. In evaluating past designs, it is often unclear how the error is introduced into the circuit: some designs used a pass gate to introduce the error as a logic value, but not in terms of voltage. This thesis introduces a method of injecting errors with reduced power and delay overhead compared to previous circuits. The delay and power of circuits from the literature survey are compared with and without error injection, and the two new circuits are likewise compared with and without injected errors.
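The hardened-latch idea in the abstract, keeping a stored value alive through a single-event upset, can be illustrated in a deliberately simplified form by triple modular redundancy with a majority vote. This Python sketch is a generic illustration, not the thesis's latch circuits; the word width and the upset model are arbitrary assumptions.

```python
import random

def majority(a, b, c):
    """Bitwise majority vote of three redundant copies of a value."""
    return (a & b) | (a & c) | (b & c)

def inject_seu(value, bit):
    """Flip one bit to model a single-event upset striking one node."""
    return value ^ (1 << bit)

stored = 0b1011
copies = [stored, stored, stored]
# A particle strike corrupts one randomly chosen copy at a random bit.
copies[random.randrange(3)] = inject_seu(stored, random.randrange(4))
recovered = majority(*copies)
assert recovered == stored  # a single upset is always out-voted
```

A double-node upset corrupting two copies at the same bit would defeat this vote, which is exactly why the DNU-tolerant latch designs discussed above need more elaborate feedback structures.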
APA, Harvard, Vancouver, ISO, and other styles
6

Tang, Stanley C. "Robot positioning error analysis and correction." Thesis, This resource online, 1987. http://scholar.lib.vt.edu/theses/available/etd-04122010-083623/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Juretic, Franjo. "Error analysis in finite volume CFD." Thesis, Imperial College London, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.420616.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Miguel, Angela Ruth. "Human error analysis for collaborative work." Thesis, University of York, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.441020.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Bonnot, Justine. "Error Analysis for Approximate Computing Systems." Thesis, Rennes, INSA, 2019. https://partages.insa-rennes.fr/alfresco/s/api/node/content/workspace/SpacesStore/4b8b3149-5fc4-439b-abc5-ef663a070daf/2019ISAR0008_BONNOT_Justine.pdf?a=true.

Full text
Abstract:
Approximate computing is an energy-aware computing technique that relies on exploiting an application's tolerance to imprecision. Developed to face the end of Moore's law, it answers the growing demand for computing capacity. Approximation techniques have been proposed at different abstraction levels, from circuit to system level. This thesis focuses on the development of methods and tools to quickly evaluate the impact of different approximate-computing techniques on the application quality metric; studying the induced errors is critical for using approximations in industry. Approximation is considered at two levels: the hardware level, with the study of inexact arithmetic operators, and the data level, with the study of fixed-point arithmetic. First, efficient simulation-based characterization methods are proposed to derive statistics on the errors induced by a considered approximation. Inferential statistics are used to reduce the time needed for error characterization: the proposed methods are based on adaptive simulations and statistically characterize the approximation error according to user-defined confidence requirements. Then, the obtained error metrics are linked with the application quality metric. For inexact operators, a simulator supports the approximation design-space exploration process, selecting the best approximation for the considered application. For fixed-point arithmetic, the proposed error model is implemented in a fixed-point refinement algorithm that determines optimized word-lengths for the internal variables of an application. The results of this thesis propose concrete methods to ease the implementation of approximate computing in industrial applications, speeding up state-of-the-art methods by one to three orders of magnitude.
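The adaptive, confidence-driven characterization described above can be sketched in Python. The inexact operator here (an adder that truncates low-order bits) and the stopping tolerance are stand-in assumptions, not the thesis's operators or tool; sampling simply continues until the 95% confidence interval on the mean error is tight enough.

```python
import math
import random

def truncated_add(a, b, drop_bits=4):
    """Toy inexact adder: discard the low-order bits of the exact sum."""
    mask = ~((1 << drop_bits) - 1)
    return (a + b) & mask

def characterize_error(tol=0.5, z=1.96, batch=1000, max_samples=200_000):
    """Sample errors adaptively until the 95% CI half-width drops below tol."""
    errors = []
    while len(errors) < max_samples:
        for _ in range(batch):
            a = random.randrange(1 << 16)
            b = random.randrange(1 << 16)
            errors.append((a + b) - truncated_add(a, b))
        n = len(errors)
        mean = sum(errors) / n
        var = sum((e - mean) ** 2 for e in errors) / (n - 1)
        half_width = z * math.sqrt(var / n)
        if half_width < tol:
            break
    return mean, half_width, n
```

For this toy adder the dropped low bits are roughly uniform on 0..15, so the estimated mean error should settle near 7.5 after only a batch or two, which is the point of the adaptive stopping rule: no more simulations than the confidence requirement demands.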
APA, Harvard, Vancouver, ISO, and other styles
10

Hirst, William Mark. "Outcome measurement error in survival analysis." Thesis, University of Liverpool, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.366352.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Apascaritei, Bogdan. "Error analysis for energy process simulations." Tönning Lübeck Marburg Der Andere Verl, 2008. http://d-nb.info/99184694X/04.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Justino, Júlia Maria da Rocha Vilaverde. "Nonstandard linear algebra with error analysis." Doctoral thesis, Universidade de Évora, 2013. http://hdl.handle.net/10174/16316.

Full text
Abstract:
Systems of linear equations with coefficients having uncertainties of type o(.) or O(.), called flexible systems, are studied from the point of view of nonstandard analysis. Uncertainties of this kind are given in the form of so-called neutrices, for instance the set of all infinitesimals. In some cases an exact solution of a flexible system may not exist. This work presents conditions that guarantee the existence of an admissible solution, in the sense of inclusion, and conditions that guarantee the existence of a maximal solution. These conditions concern restrictions on the size of the uncertainties appearing in the matrix of coefficients and in the constant-term vector of the system. Applying Cramer's rule under these conditions, one obtains at least an admissible solution of the system.
In the case where a maximal solution is produced by Cramer's rule, it is proved to be the same solution produced by Gauss-Jordan elimination.
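For readers unfamiliar with it, Cramer's rule for a 2×2 system, which the abstract applies under neutrix-sized uncertainties, looks as follows in Python. The eps perturbation at the end is only a crude stand-in for an o(.) uncertainty; modelling neutrices properly is beyond a short sketch.

```python
def cramer_2x2(a11, a12, a21, a22, b1, b2):
    """Solve [[a11, a12], [a21, a22]] x = [b1, b2] by Cramer's rule."""
    det = a11 * a22 - a12 * a21
    if det == 0:
        raise ValueError("singular coefficient matrix")
    x1 = (b1 * a22 - a12 * b2) / det
    x2 = (a11 * b2 - b1 * a21) / det
    return x1, x2

# Exact system: x1 + 2*x2 = 5, 3*x1 + 4*x2 = 11  ->  (1.0, 2.0)
print(cramer_2x2(1, 2, 3, 4, 5, 11))

# o(.)-style uncertainty: perturbing the coefficients by eps shifts the
# solution only slightly; a neutrix would collect all such tiny shifts.
eps = 1e-6
x1p, x2p = cramer_2x2(1 + eps, 2, 3, 4 - eps, 5, 11)
```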
APA, Harvard, Vancouver, ISO, and other styles
13

Ali-Mounla, Ibrahim. "Contrastive and error analysis : a case study." Thesis, Bangor University, 1992. https://research.bangor.ac.uk/portal/en/theses/contrastive-and-error-analysis--a-case-study(46dcdd55-cc53-4be2-a73f-878e1d5bd4ab).html.

Full text
Abstract:
Chapter One presents an up-to-date account of Contrastive Analysis (CA) and Error Analysis (EA). Chapter Two deals with the syntactic descriptions of the Inflectional Phrase (IP) in English and Syrian Arabic respectively. The descriptions of the IP system are carried out within the framework of X-bar syntax in the version outlined in Chomsky (1970 and 1986b) and Radford (1988). They focus on the various syntactic movements that take place within the maximal categories referred to as IP, all of which play an important role in the formation of Y/N and Wh-questions. For the purposes of this study, only three types of movement are considered: I-movement, V-movement, and Wh-movement. Chapter Three describes the syntactic movements that take place within the maximal categories referred to as the Complementiser Phrase (CP) of the two languages, within the same framework, focusing on I-to-C and Wh-movement. Chapter Four deals with English Small Clauses (SCs) and Syrian Verbless Clauses (VCs), also within the same framework. Chapter Five contrasts the interrogative patterns of the two languages as identified in Chapters 2, 3, and 4, and formulates predictions on the basis of the contrasts identified. Chapter Six describes the methodology of the experiment conducted, i.e. data collection, design of the elicitation instruments, etc. Chapter Seven analyses the elicited errors in the light of the predictions, comparing the CA predictions with the attested errors to evaluate the success of the predictions and hypotheses. Chapter Eight discusses disconfirmed predictions and errors irrelevant to the predictions. Chapter Nine contains conclusions, pedagogical implications, and recommendations for further research.
APA, Harvard, Vancouver, ISO, and other styles
14

Alasfour, Aisha Saud. "Grammatical Errors by Arabic ESL Students: An Investigation of L1 Transfer through Error Analysis." Thesis, Portland State University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10826886.

Full text
Abstract:

This study investigated the effect of first language (L1) transfer on Arabic ESL learners’ acquisition of the relative clauses, the passive voice and the definite article. I used Contrastive Analysis (CA) and Error Analysis (EA) to analyze 50 papers written by Arabic ESL students at the ACTFL Advanced Mid proficiency level. The analysis was paired with interviews with five advanced students to help determine whether L1 transfer was, in fact, influencing students’ errors predicted by CA.

Students in this study made L1 transfer errors along with other errors. Although no statistically significant difference was found between the frequency of transfer and other (non-transfer) errors, L1 transfer errors were still common for many learners in these data. The frequency of L1 transfer errors for relative clauses was slightly higher than that of other errors; passive-voice L1 errors were as frequent as other errors, whereas definite-article L1 errors were slightly less frequent. The analysis of the interviews suggested that the L1 still played a crucial role in influencing learners' errors.

The analysis also suggested that the frequency of transfer errors in the papers used in this study might have been influenced by CA-informed instruction students received and students’ language level. Specifically, learners reported that both factors helped them reduce the frequency of L1 transfer errors in their writing.

The teaching implications of this study include familiarizing language instructors with possible sources of errors for Arabic ESL learners. Language instructors should try to identify sources of errors by conducting their own analyses or consulting existing literature on CA paired with EA. Finally, I recommend adopting a CA-informed instruction to help students reduce and overcome errors that are influenced by their L1.

APA, Harvard, Vancouver, ISO, and other styles
15

Ljung, Mikael. "DREAM : Driving Reliability and Error Analysis Method." Thesis, Linköping University, Department of Computer and Information Science, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2033.

Full text
Abstract:

This thesis concerns traffic safety and, more specifically, the influence of the human factor on accidents and incidents. The work was written within the FICA project, whose goal is to understand driver behaviour in order to develop active safety technology for vehicles. The first part (chapters 2-6) presents a theoretical framework for understanding driver behaviour in different contexts. The second part (chapters 7-8) presents a method for analysing accident and incident sequences, called DREAM (Driving Reliability and Error Analysis Method). The method is demonstrated with a walkthrough of an example accident. The thesis concludes with a discussion of how analyses made with DREAM can inform the design of different active safety technologies.


This thesis concerns traffic safety, and more specifically, the way in which the human factor influence the development of accidents and incidents. A theoretical framework for understanding the human factor in the complex traffic environment is introduced. Then a method for analysing the interaction between driver, vehicle and traffic environment in accidents/incidents is presented, called DREAM (Driving Reliability and Error Analysis Method). The thesis concludes with a discussion of how DREAM can influence research and development of new active traffic safety technologies.

APA, Harvard, Vancouver, ISO, and other styles
16

Lourens, Rencia, Nico Molefe, and Karin Brodie. "Workshop: Error Analysis of Mathematics Test Items." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-82693.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Eibner, Tino, and Jens Markus Melenk. "p-FEM quadrature error analysis on tetrahedra." Universitätsbibliothek Chemnitz, 2007. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-200702059.

Full text
Abstract:
In this paper we consider the p-FEM for elliptic boundary value problems on tetrahedral meshes where the entries of the stiffness matrix are evaluated by numerical quadrature. Such a quadrature can be done by mapping the tetrahedron to a hexahedron via the Duffy transformation. We show that for tensor product Gauss-Lobatto-Jacobi quadrature formulas with q+1=p+1 points in each direction and shape functions that are adapted to the quadrature formula, one again has discrete stability for the fully discrete p-FEM. The present error analysis complements the work [Eibner/Melenk 2005] for the p-FEM on triangles/tetrahedra where it is shown that by adapting the shape functions to the quadrature formula, the stiffness matrix can be set up in optimal complexity.
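The Duffy transformation the abstract relies on can be illustrated in two dimensions: mapping the unit square onto the reference triangle by x = u, y = v(1-u) introduces the Jacobian factor (1-u), after which a tensor-product Gauss rule on the square integrates over the triangle. This Python sketch uses a plain 3-point Gauss-Legendre rule rather than the Gauss-Lobatto-Jacobi rules of the paper, and a triangle rather than a tetrahedron.

```python
import math

# 3-point Gauss-Legendre rule rescaled to [0, 1] (exact for degree <= 5)
_r = math.sqrt(3.0 / 5.0)
NODES = [(1 - _r) / 2, 0.5, (1 + _r) / 2]
WEIGHTS = [5.0 / 18.0, 8.0 / 18.0, 5.0 / 18.0]

def duffy_quad_triangle(f):
    """Integrate f over the reference triangle {x, y >= 0, x + y <= 1}
    via the Duffy map x = u, y = v*(1 - u), whose Jacobian is (1 - u)."""
    total = 0.0
    for ui, wu in zip(NODES, WEIGHTS):
        for vi, wv in zip(NODES, WEIGHTS):
            total += wu * wv * f(ui, vi * (1 - ui)) * (1 - ui)
    return total

print(duffy_quad_triangle(lambda x, y: 1.0))  # ~0.5, the triangle's area
print(duffy_quad_triangle(lambda x, y: x))    # ~1/6
```

Because the Jacobian is polynomial, the mapped integrand of a polynomial f stays polynomial on the square, which is why tensor-product rules of matching order remain exact, the property the paper exploits on tetrahedra.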
APA, Harvard, Vancouver, ISO, and other styles
18

Beaulieu, Martin Ronald. "Launch detection satellite system engineering error analysis." Thesis, Monterey, California. Naval Postgraduate School, 1996. http://hdl.handle.net/10945/8611.

Full text
Abstract:
Approved for public release; distribution is unlimited.
An orbiting detector of infrared (IR) energy may be used to detect the rocket plumes generated by ballistic missiles during the powered segment of their trajectory. By measuring the angular directions of the detections over several observations, the trajectory properties, launch location, and impact area may be estimated using a nonlinear least-squares iteration procedure. Observations from two or more sensors may be combined to form stereoscopic lines of sight (LOS), increasing the accuracy of the estimation algorithm. The focus of this research has been to develop a computer model of an estimation algorithm and to determine which parameter, or combination of parameters, significantly affects the error of the tactical parameter estimation. The model is coded in MATLAB; it generates observation data, produces estimates of time, position, and heading at launch and at burnout, and calculates an impact time and position. The effects of timing errors, LOS measurement errors, and satellite position errors on the estimation accuracy were then determined using analytical and Monte Carlo simulation techniques.
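The stereoscopic line-of-sight idea can be sketched outside MATLAB. The Python example below uses an invented two-sensor 2D geometry, not the thesis's satellite model: it triangulates a target from two bearings and runs a Monte Carlo loop to see how Gaussian bearing noise propagates into position error.

```python
import math
import random

def triangulate(p1, th1, p2, th2):
    """Intersect two lines of sight given sensor positions and bearings."""
    d1 = (math.cos(th1), math.sin(th1))
    d2 = (math.cos(th2), math.sin(th2))
    # Solve p1 + t1*d1 = p2 + t2*d2 for t1 (a 2x2 linear system).
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (rx * (-d2[1]) - (-d2[0]) * ry) / det
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

# Hypothetical geometry: true target and two ground sensors.
target, s1, s2 = (3.0, 4.0), (0.0, 0.0), (10.0, 0.0)
th1 = math.atan2(target[1] - s1[1], target[0] - s1[0])
th2 = math.atan2(target[1] - s2[1], target[0] - s2[0])

# Monte Carlo: perturb each bearing by 1 mrad of Gaussian noise.
random.seed(0)
est = [triangulate(s1, th1 + random.gauss(0, 1e-3),
                   s2, th2 + random.gauss(0, 1e-3)) for _ in range(2000)]
ex = sum(x for x, _ in est) / len(est)
ey = sum(y for _, y in est) / len(est)
spread = max(math.hypot(x - ex, y - ey) for x, y in est)
```

Sweeping the noise levels (bearing noise here; timing and sensor-position noise in the thesis) and watching the spread of the estimates is the same sensitivity analysis the abstract describes, reduced to its simplest form.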
APA, Harvard, Vancouver, ISO, and other styles
19

Hiller, Jean-François 1974. "On error estimators in finite element analysis." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/88830.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Ibáñez Jiménez, Jorge, Daniela Jiménez Cid, and Naiomi Vera Merino. "Error Analysis in Chilean Tourist Text Translations." Tesis, Universidad de Chile, 2014. http://www.repositorio.uchile.cl/handle/2250/129945.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Woodhouse, Geoffrey M. "Adjustment for measurement error in multilevel analysis." Thesis, University College London (University of London), 1998. http://discovery.ucl.ac.uk/10019113/.

Full text
Abstract:
Measurements in educational research are often subject to error. Where it is desired to base conclusions on underlying characteristics rather than on the raw measurements of them, it is necessary to adjust for measurement error in the modelling process. In this thesis it is shown how the classical model for measurement error may be extended to model the more complex structures of error variance and covariance that typically occur in multilevel models, particularly multivariate multilevel models, with continuous response. For these models parameter estimators are derived, with adjustment based on prior values of the measurement error variances and covariances among the response and explanatory variables. A straightforward method of specifying these prior values is presented. In simulations using data with known characteristics, the new procedure is shown to be effective in reducing the biases in parameter estimates that result from unadjusted estimation. Improved estimates of the standard errors are also demonstrated. In particular, random coefficients of variables with error are successfully estimated. The estimation procedure is then used in a two-level analysis of an educational data set. It is shown how estimates and conclusions can vary depending on the degree of measurement error that is assumed to exist in explanatory variables at level 1 and level 2. The importance of obtaining satisfactory prior estimates of measurement error variances and covariances, and of correctly adjusting for them during the analysis, is demonstrated.
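The core phenomenon the thesis addresses, bias from unadjusted measurement error, can be seen in the classical single-level case: regressing on a noisy covariate attenuates the slope by the reliability ratio λ = σx²/(σx² + σu²), and dividing by a prior value of λ undoes the bias. The Python simulation below is a single-level sketch with assumed variances, not the thesis's multilevel estimator.

```python
import random

random.seed(1)
beta, sigma_x, sigma_u, n = 2.0, 1.0, 0.5, 20000

# True covariate x, noisy measurement w = x + u, and a response y
# that depends on the *true* x rather than the measured w.
x = [random.gauss(0, sigma_x) for _ in range(n)]
w = [xi + random.gauss(0, sigma_u) for xi in x]
y = [beta * xi + random.gauss(0, 0.1) for xi in x]

def slope(u, v):
    """Ordinary least-squares slope of v on u."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    var = sum((a - mu) ** 2 for a in u)
    return cov / var

naive = slope(w, y)                                   # attenuated towards 0
reliability = sigma_x**2 / (sigma_x**2 + sigma_u**2)  # lambda = 0.8 here
corrected = naive / reliability                       # close to beta = 2.0
```

The correction requires a prior value for the measurement-error variance, which mirrors the thesis's reliance on prior values of the error variances and covariances in the multilevel setting.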
APA, Harvard, Vancouver, ISO, and other styles
22

Kang, Chunmei. "Meteor radar signal processing and error analysis." Connect to online resource, 2008. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3315846.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Li, Huitang. "Production log analysis and statistical error minimization." Digital version:, 2000. http://wwwlib.umi.com/cr/utexas/fullcit?p9992850.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Herman, Eric. "Efficient Error Analysis Assessment in Optical Design." Thesis, The University of Arizona, 2014. http://hdl.handle.net/10150/321608.

Full text
Abstract:
When designing a lens, cost and manufacturing concerns are extremely challenging, especially with radical optical designs. The tolerance process is the bridge between design and manufacturing. Three techniques which improve the interaction between lens design and engineers are successfully shown in this thesis along with implementation of these techniques. First, a method to accurately model optomechanical components within lens design is developed and implemented. Yield improvements are shown to increase by approximately 3% by modeling optomechanical components. Second, a method utilizing aberration theory is applied to discover potential tolerance sensitivity of an optical system through the design process. The use of aberration theory gives an engineer ways to compensate for errors. Third, a method using tolerance grade mapping is applied to error values of an optical system. This mapping creates a simplified comparison method between individual tolerances and lens designs.
APA, Harvard, Vancouver, ISO, and other styles
25

Wu, Zili. "Error bounds for an inequality system." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/NQ62533.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Lai, Pik-ying, and 黎碧瑩. "Lp regression under general error distributions." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2004. http://hub.hku.hk/bib/B30287844.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Ndamase-, Nzuzo Pumla Patricia. "Numerical error analysis in foundation phase (Grade 3) mathematics." Thesis, University of Fort Hare, 2014. http://hdl.handle.net/10353/5893.

Full text
Abstract:
The focus of the research was on numerical errors committed in foundation-phase mathematics. It therefore explored: (1) the numerical errors learners encounter in foundation-phase mathematics, (2) the relationships underlying numerical errors, and (3) implementable strategies suitable for understanding numerical error analysis in foundation-phase (Grade 3) mathematics. From the 375 learners who formed the population of the study in the 16 primary schools, the researcher selected, by means of a simple random sampling technique, 80 learners as the sample, constituting 10% of the population. On the basis of the research questions, and informed by a positivist paradigm, a quantitative approach was used, with tables, graphs, and percentages, to address the research questions. A Likert scale was used with four categories of response: Agree (A), Strongly Agree (SA), Disagree (D), and Strongly Disagree (SD). The results revealed that: (1) the underlying numerical errors that learners encounter include the inability to count backwards and forwards, number sequencing, mathematical signs, problem solving, and word sums; (2) there was a relationship between committing errors and (a) copying numbers, (b) confusing mathematical or operational signs, and (c) reading numbers containing more than one digit; (3) teachers need frequent professional development training, topics need to change, and government needs to involve teachers at grassroots level prior to policy changes on how to implement strategies for numerical errors in the foundation phase. It is recommended that attention be paid to the use of language and word sums in order to improve cognitive processes in foundation-phase mathematics, and that learners be assisted regularly when reading or copying their work, so that they make fewer errors.
Additionally, it is recommended that teachers be trained in how to implement strategies of numerical error analysis in foundation-phase mathematics. Furthermore, teachers can use tests to identify learners who may be at risk of developing mathematical difficulties in the foundation phase.
APA, Harvard, Vancouver, ISO, and other styles
28

Kim, Hyang-Ok. "A descriptive analysis of errors and error patterns in consecutive interpretation from Korean into English." Normal, Ill.: Illinois State University, 1994. http://wwwlib.umi.com/cr/ilstu/fullcit?p9521335.

Full text
Abstract:
Thesis (Ed. D.)--Illinois State University, 1994.
Title from title page screen, viewed April 11, 2006. Dissertation Committee: Larry Kennedy (chair), Kenneth Jerich, Marilyn Moore, Irene Brosnahan. Includes bibliographical references (leaves 90-96) and abstract. Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
29

Lundberg, Molly. "Error Identification in Tourniquet Use : Error analysis of tourniquet use in trained and untrained populations." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-171588.

Full text
Abstract:
The number of prehospital deaths caused by large bleedings could be decreased if civilians acted in time to help the injured patient. One way to help is to stop the bleeding with a tourniquet application. However, the tourniquet needs to be placed correctly in order to stop the bleeding. Therefore laypersons need to be educated in bleeding control to increase the rate of successful tourniquet application. This study used human error identification techniques such as Hierarchical Task Analysis and the Systematic Human Error Reduction and Prediction Approach to identify possible errors of four commonly used tourniquet models: the CAT-7, Delfi-EMT, SAM-X and SWAT-T. The results show that many predicted errors are time-oriented and critical. Video analysis of tourniquet application was performed to map observed use errors from the videos to the predicted ones. The goal was to identify problems that could be solved by training or redesigns of the tourniquets. The results show that the most common errors for all participants during tourniquet application were of six error types. The errors were failing to check or write down the time of application, taking too much time to place the tourniquet around the limb, placing the tourniquet upside down, placing the tourniquet band over the securing mechanism instead of between, and failing to secure the tourniquet correctly before transporting the patient. The untrained laypersons made more errors than the trained laypersons and the professional emergency personnel group. The trained laypersons also made fewer errors in a calm setting than in a stressed setting, compared with the professional group, which made the same error types in both settings. The results indicate that untrained laypersons not only make more errors but also more critical errors than trained laypersons and professional emergency personnel.
Future research should empirically test other tourniquet models than the CAT in the goal of finding use errors to be reduced. Overall the results are in line with previous studies that show the need for education of bleeding control techniques in the civilian population.
APA, Harvard, Vancouver, ISO, and other styles
30

Huang, Fang-Lun. "Error analysis and tractability for multivariate integration and approximation." HKBU Institutional Repository, 2004. http://repository.hkbu.edu.hk/etd_ra/515.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Kawelke, Jens. "Perturbation and error analysis considerations in robust control." Thesis, University of Leicester, 1997. http://hdl.handle.net/2381/30177.

Full text
Abstract:
This thesis deals with perturbation and error analysis in robust control, mainly H∞ control, but the H2 norm is also considered. Perturbation analysis investigates the sensitivity of a solution or structure to perturbations or uncertainties in the input data. Error analysis is used to make statements about the numerical stability of an algorithm and uses results from perturbation analysis. Although perturbation and error analysis is a well-developed field in linear algebra, very little work has been done to introduce these concepts into the field of control. This thesis attempts to improve this situation. The main emphasis of the thesis is on H∞ norm computations. Nonlinear and linear perturbation bounds are derived for the H∞ norm. A rigorous error analysis is presented for two methods of computing the H∞ norm: the Hamiltonian method and the SVD method. Numerical instability of the Hamiltonian method is shown with several examples. The SVD method, which is shown to be numerically stable, is updated with new upper and lower bounds for the frequency response between two given frequency points. Then using an upper frequency bound, a new algorithm is presented. This new algorithm can be implemented in a parallel process and has a similar performance to the Hamiltonian method in terms of computing time. In addition, nonlinear and linear perturbation bounds are derived for the H2 norm, and for the solutions of Lyapunov equations. Finally the H∞ control problem is considered and perturbation bounds for the corresponding parameterized Riccati equations are derived. This leads to an estimation of the norm of the perturbation in the H∞ controller.
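The SVD method discussed above can be illustrated with a small sketch. The state-space matrices below are invented for illustration (not from the thesis): the H∞ norm is approximated by gridding the frequency axis and taking the peak over frequency of the largest singular value of the frequency response, the quantity whose inter-grid upper and lower bounds the thesis refines.

```python
import numpy as np

# hypothetical stable state-space system (A, B, C, D) -- illustrative only
A = np.array([[-1.0, 2.0], [0.0, -3.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

def hinf_norm_grid(A, B, C, D, freqs):
    """Frequency-gridded SVD method: the H-infinity norm is the peak over
    frequency of the largest singular value of G(jw) = C (jwI - A)^-1 B + D."""
    best = 0.0
    I = np.eye(A.shape[0])
    for w in freqs:
        G = C @ np.linalg.solve(1j * w * I - A, B) + D
        best = max(best, np.linalg.svd(G, compute_uv=False)[0])
    return best

freqs = np.logspace(-2, 2, 2000)
norm = hinf_norm_grid(A, B, C, D, freqs)
```

For this system G(s) = (s+5)/((s+1)(s+3)), whose gain peaks at w → 0 with value 5/3; a finite grid necessarily underestimates the true peak slightly, which is exactly the error the bound-refinement in the SVD method controls.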
APA, Harvard, Vancouver, ISO, and other styles
32

Thom, Nguyen Thi, and n/a. "Error analysis and English language teaching in Vietnam." University of Canberra. Information Sciences, 1985. http://erl.canberra.edu.au./public/adt-AUC20061109.131913.

Full text
Abstract:
This field study report covers four major areas : 1. Error analysis in language teaching and learning and its procedures 2. The relevance of error analysis to the teaching of English as a foreign language in the Vietnamese situation 3. Analysis of errors made by Vietnamese speakers 4. The use of error analysis in teaching English to Vietnamese speakers. Error analysis can be a useful adjunct to second language teaching, since it serves two related but distinct functions : the one, practical and applied in everyday teaching, and the other, theoretical, leading to a better understanding of the second language learning acquisition process. This study emphasizes the practical uses of error analysis in teaching and correction techniques, materials development and syllabus design. It is hoped that error analysis will make some contribution to the teaching of English as a foreign language to Vietnamese speakers, whose language is quite different from English and whose culture is far from being similar to that of English native speakers. This study is aimed at helping Vietnamese teachers of English to change their attitude to students' errors and see them in a more positive way, rather than as signs of failure on the students' part. It is suggested that a teacher of English must be able to recognize errors when they occur, to form some idea of the kind of error made and also why they occur. Finally, he must then be able to draw, from the analysis thus made, some conclusions as to what and how he should teach.
APA, Harvard, Vancouver, ISO, and other styles
33

Kocak, Umut, Palmerius Karljohan Lundin, and Matthew Cooper. "An Error Analysis Model for Adaptive Deformation Simulation." Linköpings universitet, Medie- och Informationsteknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-79904.

Full text
Abstract:
With the widespread use of deformation simulations in medical applications, the realism of the force feedback has become an important issue. In order to reach real-time performance with sufficient realism, the approach of adaptivity, the solution of different parts of the system with different resolutions and refresh rates, has been commonly deployed. The change in accuracy resulting from the use of adaptivity, however, has been paid scant attention in the deformation simulation field. Presentation of error metrics is rare, while more focus is given to real-time stability. We propose an abstract pipeline to perform error analysis for different types of deformation techniques which can consider different simulation parameters. A case study is also performed using the pipeline, and the various uses of the error estimation are discussed.
APA, Harvard, Vancouver, ISO, and other styles
34

Putman, Edward R. J. "Digital particle image velocimetry (DPIV) : systematic error analysis." Thesis, Loughborough University, 2011. https://dspace.lboro.ac.uk/2134/9105.

Full text
Abstract:
Digital Particle Image Velocimetry (DPIV) is a flow diagnostic technique that is able to provide velocity measurements within a fluid whilst also offering flow visualisation during analysis. Whole field velocity measurements are calculated by using cross-correlation algorithms to process sequential images of flow tracer particles recorded using a laser-camera system. This technique is capable of calculating velocity fields in both two and three dimensions and is the most widely used whole field measurement technique in flow diagnostics. With the advent of time-resolved DPIV it is now possible to resolve the 3D spatio-temporal dynamics of turbulent and transient flows as they develop over time. Minimising the systematic and random errors associated with the cross-correlation of flow images is essential in providing accurate quantitative results for DPIV. This research has explored a variety of cross-correlation algorithms and techniques developed to increase the accuracy of DPIV measurements. It is shown that these methods are unable to suppress either the inherent errors associated with the random distribution of particle images within each interrogation region or the background noise of an image. This has been achieved through a combination of both theoretical modelling and experimental verification for a uniform particle image displacement. The study demonstrates that normalising the correlation field by the signal strength that contributes to each point of the correlation field suppresses both the mean bias and RMS error. A further enhancement to this routine has led to the development of a robust cross-correlation algorithm that is able to suppress the systematic errors associated with the random distribution of particle images and background noise.
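As a toy illustration of the normalisation idea described above (a sketch, not the author's algorithm; the window size, noise level and displacement are invented), the following code cross-correlates two interrogation windows, dividing each correlation value by the signal strength of the overlapping region so that partially overlapping shifts are not biased, and reads the displacement off the correlation peak:

```python
import numpy as np

def window_correlation(a, b, max_shift=4):
    """Direct normalized cross-correlation of two square interrogation
    windows. Each value is divided by the signal strength of the shifted
    overlap region, suppressing the bias toward small shifts."""
    n = a.shape[0]
    a = a - a.mean()
    b = b - b.mean()
    corr = np.zeros((2 * max_shift + 1, 2 * max_shift + 1))
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # overlapping sub-windows for this shift (dy, dx)
            a_sub = a[max(0, -dy):n + min(0, -dy), max(0, -dx):n + min(0, -dx)]
            b_sub = b[max(0, dy):n + min(0, dy), max(0, dx):n + min(0, dx)]
            denom = np.sqrt((a_sub ** 2).sum() * (b_sub ** 2).sum())
            corr[dy + max_shift, dx + max_shift] = (a_sub * b_sub).sum() / denom
    return corr

# synthetic particle image pair: one bright particle displaced by (2, 1)
rng = np.random.default_rng(0)
frame1 = rng.normal(0.0, 0.05, (16, 16))
frame2 = rng.normal(0.0, 0.05, (16, 16))
frame1[6, 5] = 1.0
frame2[8, 6] = 1.0  # moved 2 px down, 1 px right
c = window_correlation(frame1, frame2)
peak = np.unravel_index(np.argmax(c), c.shape)
dy, dx = peak[0] - 4, peak[1] - 4
```

Repeating such an experiment over many random particle distributions and imposed uniform displacements is one way to estimate the mean bias and RMS error that the thesis analyses.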
APA, Harvard, Vancouver, ISO, and other styles
35

Alboabidallah, Ahmed Hussein Hamdullah. "Error propagation analysis for remotely sensed aboveground biomass." Thesis, University of Plymouth, 2018. http://hdl.handle.net/10026.1/13074.

Full text
Abstract:
Above-Ground Biomass (AGB) assessment using remote sensing has been an active area of research since the 1970s. However, improvements in the reported accuracy of wide-scale studies remain relatively small. Therefore, there is a need to improve error analysis to answer the question: Why is AGB assessment accuracy still under doubt? This project aimed to develop and implement a systematic quantitative methodology to analyse the uncertainty of remotely sensed AGB, including all perceptible error types and reducing the associated costs and computational effort required in comparison to conventional methods. An accuracy prediction tool was designed based on previous study inputs and their outcome accuracy. The methodology used included training a neural network tool to emulate human decision making for the optimal trade-off between cost and accuracy for forest biomass surveys. The training samples were based on outputs from a number of previous biomass surveys, including 64 studies based on optical data, 62 on Lidar data, 100 on Radar data, and 50 on combined data. The tool showed promising convergent results of medium predictive ability. However, it might take many years until enough studies are published to provide sufficient samples for accurate predictions. To provide field data for the next steps, 38 plots within six sites were scanned with a Leica ScanStation P20 terrestrial laser scanner. The Terrestrial Laser Scanning (TLS) data analysis used existing techniques such as 3D voxels and applied allometric equations, alongside exploring new features such as non-plane voxel layers, parent-child relationships between layers and skeletonising tree branches to speed up the overall processing time. The results were two maps for each plot, a tree trunk map and a branch map. An error analysis tool was designed to work in three stages.
Stage 1 uses a Taylor method to propagate errors from remote sensing data for the products that were used as direct inputs to the biomass assessment process. Stage 2 applies a Monte Carlo method to propagate errors from the direct remote sensing and field inputs to the mathematical model. Stage 3 includes generating an error estimation model that is trained on the error behaviour of the training samples. The tool was applied to four biomass assessment scenarios, and the results show that the relative error of AGB, represented by the RMSE of the model fitting, was high (20-35% of the AGB) in spite of the relatively high correlation coefficients. About 65% of the RMSE is due to the remote sensing and field data errors, with the remaining 35% due to the ill-defined relationship between the remote sensing data and AGB. The error component with the largest influence was the remote sensing error (50-60% of the propagated error), with both the spatial and spectral error components having a clear influence on the total error. The influence of field data errors was close to that of the remote sensing data errors (40-50% of the propagated error), with contributions from both its spatial and non-spatial components. Overall, the study successfully traced the errors and applied certainty scenarios using the software tool designed for this purpose. The applied novel approach allowed for a relatively fast solution when mapping errors outside the fieldwork areas.
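The Monte Carlo stage can be sketched in miniature. The allometric model, coefficient values and uncertainty magnitudes below are invented for illustration, not taken from the thesis; the point is how sampling uncertain inputs through a model yields an empirical output-error distribution:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# hypothetical allometric model: AGB = a * D**b  (D: trunk diameter, cm)
a, b = 0.05, 2.5

# illustrative input uncertainties: diameter measured with 5% relative
# error, coefficient a with its own spread from model fitting
D = rng.normal(30.0, 1.5, N)       # 30 cm +/- 5%
a_s = rng.normal(a, 0.002, N)      # a +/- 4%

agb = a_s * D ** b                 # one AGB estimate per Monte Carlo draw

mean, std = agb.mean(), agb.std()
relative_error = std / mean
```

Because the exponent b amplifies the diameter uncertainty by a factor of b, the ~13% output spread here is dominated by the diameter term, mirroring how the thesis attributes shares of the propagated error to individual inputs.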
APA, Harvard, Vancouver, ISO, and other styles
36

van, de Beek C. Z., H. Leijnse, P. Hazenberg, and R. Uijlenhoet. "Close-range radar rainfall estimation and error analysis." COPERNICUS GESELLSCHAFT MBH, 2016. http://hdl.handle.net/10150/621508.

Full text
Abstract:
Quantitative precipitation estimation (QPE) using ground-based weather radar is affected by many sources of error. The most important of these are (1) radar calibration, (2) ground clutter, (3) wet-radome attenuation, (4) rain-induced attenuation, (5) vertical variability in rain drop size distribution (DSD), (6) non-uniform beam filling and (7) variations in DSD. This study presents an attempt to separate and quantify these sources of error in flat terrain very close to the radar (1–2 km), where (4), (5) and (6) only play a minor role. Other important errors exist, like beam blockage, WLAN interferences and hail contamination and are briefly mentioned, but not considered in the analysis. A 3-day rainfall event (25–27 August 2010) that produced more than 50 mm of precipitation in De Bilt, the Netherlands, is analyzed using radar, rain gauge and disdrometer data.

Without any correction, it is found that the radar severely underestimates the total rain amount (by more than 50 %). The calibration of the radar receiver is operationally monitored by analyzing the received power from the sun. This turns out to cause a 1 dB underestimation. The operational clutter filter applied by KNMI is found to incorrectly identify precipitation as clutter, especially at near-zero Doppler velocities. An alternative simple clutter removal scheme using a clear sky clutter map improves the rainfall estimation slightly. To investigate the effect of wet-radome attenuation, stable returns from buildings close to the radar are analyzed. It is shown that this may have caused an underestimation of up to 4 dB. Finally, a disdrometer is used to derive event and intra-event specific Z–R relations due to variations in the observed DSDs. Such variations may result in errors when applying the operational Marshall–Palmer Z–R relation.

Correcting for all of these effects has a large positive impact on the radar-derived precipitation estimates and yields a good match between radar QPE and gauge measurements, with a difference of 5–8 %. This shows the potential of radar as a tool for rainfall estimation, especially at close ranges, but also underlines the importance of applying radar correction methods as individual errors can have a large detrimental impact on the QPE performance of the radar.
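The sensitivity of rain-rate estimates to the reflectivity biases quantified above can be sketched with the standard Marshall–Palmer relation Z = 200 R^1.6 (the 40 dBZ example value is illustrative, not from the study):

```python
def rain_rate_mm_per_h(dbz, a=200.0, b=1.6):
    """Invert a Z = a * R**b power-law Z-R relation (Marshall-Palmer
    coefficients a=200, b=1.6 by default). dbz is reflectivity in dBZ."""
    z = 10.0 ** (dbz / 10.0)       # linear reflectivity (mm^6 m^-3)
    return (z / a) ** (1.0 / b)    # rain rate R in mm/h

# effect of uncorrected biases: a 1 dB calibration offset plus up to
# 4 dB wet-radome attenuation lowers the measured reflectivity
true_dbz = 40.0
r_true = rain_rate_mm_per_h(true_dbz)
r_biased = rain_rate_mm_per_h(true_dbz - 5.0)  # 1 dB + 4 dB deficit
underestimation = 1.0 - r_biased / r_true
```

With b = 1.6, a 5 dB reflectivity deficit maps to a rain-rate factor of 10^(−5·0.625/10) ≈ 0.49, i.e. an underestimation of roughly 50%, consistent with the more-than-50% underestimation reported for the uncorrected radar.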
APA, Harvard, Vancouver, ISO, and other styles
37

Wan, Abdullah Wan Saffiey. "Analysis of error functions in speckle shearing interferometry." Thesis, Loughborough University, 2001. https://dspace.lboro.ac.uk/2134/33652.

Full text
Abstract:
Electronic Speckle Pattern Shearing Interferometry (ESPSI), or shearography, has successfully been used in NDT for slope (δw/δx and/or δw/δy) measurement, while strain measurement (δu/δx, δv/δy, δu/δy and δv/δx) is still under investigation. This method is well accepted in industrial applications, especially in the aerospace industry. Demand for this method is increasing due to the complexity of test materials and objects. ESPSI has so far been applied successfully in NDT only for qualitative measurement, whilst quantitative measurement is the current aim of many manufacturers. Industrial use of such equipment is carried out without considering the errors arising from numerous sources, including wavefront divergence. The majority of commercial systems are operated with diverging object illumination wavefronts, without considering the curvature of the object illumination wavefront or the object geometry when calculating the interferometer fringe function and quantifying data. This thesis reports a novel approach to quantified maximum phase change difference analysis for the derivative out-of-plane (OOP) and in-plane (IP) cases arising from a divergent illumination wavefront, compared with collimated illumination.
APA, Harvard, Vancouver, ISO, and other styles
38

Casagrande, Luiz Gustavo. "Soft error analysis with and without operating system." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2016. http://hdl.handle.net/10183/149633.

Full text
Abstract:
A complexidade dos sistemas integrados em chips bem como a arquitetura de processadores comerciais vem crescendo dramaticamente nos últimos anos. Com isto, a dificuldade de avaliarmos a suscetibilidade às falhas em decorrência da incidência de partículas espaciais carregadas nestes dispositivos cresce com a mesma taxa. Este trabalho apresenta uma análise comparativa da susceptibilidade à erros de software em um microprocessador embarcado ARM Cortex-A9 single core de larga escala comercial, amplamente utilizado em aplicações críticas, executando um conjunto de 11 aplicações desenvolvidas para um ambiente bare metal e para o sistema operacional Linux. A análise de soft errors é executada por injeção de falhas na plataforma de simulação OVPSim juntamente com o injetor OVPSim-FIM, capaz de sortear o momento e local de injeção de uma falha. A campanha de injeção de falhas reproduz milhares de bit-flips no banco de registradores do microprocessador durante a execução do conjunto de benchmarks que possuem um comportamento de código diverso, desde dependência de fluxo de controle até aplicações intensivas em dados. O método de análise consiste em comparar execuções da aplicação onde falhas foram injetadas com uma execução livre de falhas. Os resultados apresentam a taxa de falhas que são classificadas em: mascaradas (UNACE), travamento ou perda de controle de fluxo (HANG) e erro nos resultados (SDC). Adicionalmente, os erros são classificados por registradores, separando erros latentes por sua localização nos resultados e por exceções detectadas pelo sistema operacional, provendo novas possibilidades de análise para um processador desta escala. O método proposto e os resultados obtidos podem ajudar a orientar desenvolvedores de software na escolha de diferentes arquiteturas de código, a fim de aprimorar a tolerância à falhas do sistema embarcado como um todo.
The complexity of integrated systems-on-chip, as well as of commercial processor architectures, has increased dramatically in recent years. Thus, the effort required for assessing the susceptibility to faults due to the incidence of charged particles from space in these devices has grown at the same rate. This work presents a comparative analysis of soft error susceptibility in the commercial large-scale embedded microprocessor ARM Cortex-A9 single core, widely used in critical applications, running a set of 11 applications developed for a bare metal environment and for the Linux operating system. The soft error analysis is performed by fault injection in the OVPSim simulation platform along with the OVPSim-FIM fault injector, able to randomly select the time and place to inject the fault. The fault injection campaign reproduces thousands of bit-flips in the microprocessor register file during the execution of the benchmark set, whose code behavior ranges from control flow dependency to data-intensive applications. The analysis method is based on comparing executions where faults were injected with a fault-free execution. The results show the error rate classified by effect as: masked (UNACE), crash or loss of control flow (HANG) and silent data corruption (SDC); and by register location. By separating latent errors by their location in the results, and exceptions detected by the operating system, one can provide better observability for a large-scale processor. The proposed method and the results can guide software developers in choosing different code architectures in order to improve the fault tolerance of the embedded system as a whole.
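The campaign's classification logic can be sketched in miniature. The workload, register model and fault distribution below are invented stand-ins for the OVPSim-FIM setup, and HANG detection (which needs a timeout or watchdog) is omitted:

```python
import random

def run_workload(values, fault=None):
    """Toy workload: find the maximum of a list. `fault` = (iteration, bit)
    flips one bit of the value read at that iteration, mimicking a register
    bit-flip; flips in values that lose the comparison anyway are masked."""
    best = 0
    for i, v in enumerate(values):
        if fault is not None and fault[0] == i:
            v ^= 1 << fault[1]  # simulated single-event upset
        if v > best:
            best = v
    return best

def classify(golden, faulty):
    # UNACE: fault masked; SDC: silent data corruption (wrong output).
    return "UNACE" if faulty == golden else "SDC"

random.seed(1)
data = list(range(100))
golden = run_workload(data)           # fault-free ("golden") run

counts = {"UNACE": 0, "SDC": 0}
for _ in range(1000):                 # fault injection campaign
    fault = (random.randrange(len(data)), random.randrange(8))
    counts[classify(golden, run_workload(data, fault))] += 1
```

Scaling this idea up, injecting into a real register file at a random cycle and comparing against a golden run, yields the UNACE/SDC/HANG rates reported above.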
APA, Harvard, Vancouver, ISO, and other styles
39

Toprakseven, Suayip. "Error Analysis of Extended Discontinuous Galerkin (XdG) Method." University of Cincinnati / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1418733307.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Tsai, Rung-Shiou, and 蔡榮修. "Error analysis and error compensation in a phase-shifting interferometry." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/r4ab4v.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Mechanical Engineering
94
This thesis is about using an optical interference method to measure surface geometry. Phase-shifting techniques with simple optical elements were employed to increase the resolution of a Michelson interference system. The phase height of each surface point was obtained through the four-frame technique. The required computer software was developed to compensate phase values between any two interference fringes and also across the entire surface. The principle for determining whether a surface is concave or convex is also provided. An optical flat was used as a sample to determine the error of the system. How to compensate the system error in a measurement is illustrated in detail.
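The four-frame phase extraction can be sketched as follows. The background, modulation and test phase values are invented; the π/2 steps and the arctangent formula are the standard four-step scheme, and the height conversion assumes a double-pass (reflection) interferometer:

```python
import numpy as np

# four interferograms with pi/2 phase shifts:
#   I_k = A + B * cos(phi + k*pi/2),  k = 0, 1, 2, 3
A, B = 2.0, 1.0          # background and modulation (illustrative)
phi_true = 1.2           # phase to recover, radians
I = [A + B * np.cos(phi_true + k * np.pi / 2) for k in range(4)]

# four-step formula: I3 - I1 = 2B sin(phi), I0 - I2 = 2B cos(phi),
# so phi = atan2(I3 - I1, I0 - I2), independent of A and B
phi = np.arctan2(I[3] - I[1], I[0] - I[2])

# surface height for a double-pass setup at wavelength lam
lam = 632.8e-9           # He-Ne laser, metres
height = phi * lam / (4 * np.pi)
```

Because A and B cancel out of the arctangent, the recovered phase is insensitive to uniform background and modulation, which is what makes the four-frame technique robust in practice; the remaining phase is wrapped to (−π, π] and must be unwrapped across fringes, as the thesis's compensation software does.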
APA, Harvard, Vancouver, ISO, and other styles
41

葉培青. "Kinematic Error Modelling and Error Sensitivity Analysis of A Robot." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/69919830503191720508.

Full text
Abstract:
Master's thesis
Da-Yeh University
Graduate Institute of Mechatronic Automation (Master's Program)
92
The application areas for robots are increasing. A variety of robots are being developed for different applications and environments. The kinematic analysis, inverse kinematic analysis and positioning error analysis of a two-axis FPD transfer robot used in the LCD manufacturing industry are studied in this thesis. The kinematic model of the robot is obtained by modeling each linkage as a homogeneous transformation matrix. The position and orientation of the end effector can therefore be represented as the product of these matrices. However, the position and orientation of the end effector calculated by the homogeneous transformation can be erroneous due to sizing and geometric errors of each link, backlash in the drive train, etc. A new homogeneous transformation that takes these linkage errors into account is developed to calculate the correct position and orientation of the end effector. A model for estimating the sensitivity of the positioning error with respect to the linkage errors is then proposed. According to the positioning error sensitivity analysis, the influence of each error factor on the position accuracy can be determined. As a result, a reasonable tolerance design of the robot linkages that results in high positioning accuracy can be achieved by using this positioning error sensitivity information.
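The modeling idea, each link as a homogeneous transformation with link errors perturbing the product, can be sketched for a hypothetical two-link planar arm (the link lengths, joint angles and 1 mm length error below are invented, not the thesis's robot):

```python
import numpy as np

def htm(theta, length):
    """Planar homogeneous transform: rotate by theta, then translate
    along the rotated x-axis by the link length."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, length * c],
                     [s,  c, length * s],
                     [0,  0, 1]])

def end_effector(thetas, lengths):
    """End-effector position as the product of the link transforms."""
    T = np.eye(3)
    for th, L in zip(thetas, lengths):
        T = T @ htm(th, L)
    return T[:2, 2]

thetas = [np.pi / 4, -np.pi / 6]
nominal = end_effector(thetas, [0.5, 0.4])
# a 1 mm sizing error on link 1 propagates to the tool position
perturbed = end_effector(thetas, [0.501, 0.4])
position_error = np.linalg.norm(perturbed - nominal)
```

Differencing the perturbed and nominal products for each error factor in turn gives a numerical sensitivity of the tool position to that factor, the same information the thesis derives analytically for tolerance design.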
APA, Harvard, Vancouver, ISO, and other styles
42

Kgafela, Regina Gwendoline. "Error analysis, contrastive analysis and cohesive writing." Thesis, 2012. http://hdl.handle.net/10210/6698.

Full text
Abstract:
M.A.
This study wants to outline some of the errors made by the Motswana child when she/he communicates in English. I also want to look at the origin of the errors and how the errors affect cohesive writing and interpretation. This research also aims at making second language teachers aware of the influence a first language can have in the learning of a second language. Note: At school level, the student/child learns the language and does not acquire it, and thus language learning skills or strategies should not be confused with language acquisition skills or strategies. Transfer has long been a controversial issue, but recent studies support the view that crosslinguistic influences can have an important impact on second language learning. To elaborate on the above issue, the article, the pronoun and number will be looked at. I want to establish how much influence a learner's native language can have in making the learning of a new language easy or difficult. I want us to look at the following questions, some of which will develop into our hypotheses: Will knowledge of the origin of errors eliminate or reduce the errors? Which errors will be eliminated and at what rate? Will the remedial lessons have an effect on the elimination or reduction of errors? Is the contrastive analysis method the best way to handle such a situation? To what extent do errors affect interpretation and connectivity? The study is conducted on the following language categories: the pronoun (English vs Tswana); the article (English vs Tswana); number (English vs Tswana); cohesive writing (misconception/ambiguity)
APA, Harvard, Vancouver, ISO, and other styles
43

Tencer, John Thomas. "Error analysis for radiation transport." 2013. http://hdl.handle.net/2152/23247.

Full text
Abstract:
All relevant sources of error in the numerical solution of the radiative transport equation are considered. Common spatial discretization methods are discussed for completeness. The application of these methods to the radiative transport equation is not substantially different than for any other partial differential equation. Several of the most prevalent angular approximations within the heat transfer community are implemented and compared. Three model problems are proposed. The relative accuracy of each of the angular approximations is assessed for a range of optical thickness and scattering albedo. The model problems represent a range of application spaces. The quantified comparison of these approximations on the basis of accuracy over such a wide parameter space is one of the contributions of this work. The major original contribution of this work involves the treatment of errors associated with the energy-dependence of intensity. The full spectrum correlated-k distribution (FSK) method has received recent attention as being a good compromise between computational expense and accuracy. Two approaches are taken towards quantifying the error associated with the FSK method. The Multi-Source Full Spectrum k–Distribution (MSFSK) method makes use of the convenient property that the FSK method is exact for homogeneous media. It involves a line-by-line solution on a coarse grid and a number of k-distribution solutions on subdomains to effectively increase the grid resolution. This yields highly accurate solutions on fine grids and a known rate of convergence as the number of subdomains increases. The stochastic full spectrum k-distribution (SFSK) method is a more general approach to estimating the error in k-distribution solutions. The FSK method relies on a spectral reordering and scaling which greatly simplify the spectral dependence of the absorption coefficient. This reordering is not necessarily consistent across the entire domain which results in errors. 
The SFSK method involves treating the absorption line blackbody distribution function not as deterministic but rather as a stochastic process. The mean, covariance, and correlation structure are all fit empirically to data from a high resolution spectral database. The standard deviation of the heat flux prediction is found to be a good error estimator for the k-distribution method.
APA, Harvard, Vancouver, ISO, and other styles
44

Huang, Shiao-ling, and 黃小玲. "Error Analysis and Teaching Composition." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/44904303841136532075.

Full text
Abstract:
Master's thesis
National Tsing Hua University
Department of Foreign Languages and Literature
89
This study aimed to investigate the nature and distribution of different kinds of grammatical errors made in English compositions written by students of the Foreign Languages and Literature Department at National Tsing Hua University, Taiwan. Its purposes were to find specific errors made more or less by a certain grade of students, to discover common errors made by students by calculating the frequencies of error types, to find possible explanations for students' grammatical errors, and to provide teachers with implications for how to reduce learners' errors in English composition teaching. 15 freshmen, 15 sophomores, and 16 juniors participated in this study. Each of them wrote two compositions: the first written in the first semester, and the second in the second semester. Numbers and frequencies of written errors were identified, calculated, and compared in terms of error types and the error types of each grade. A total of 1700 written errors was found and categorized into 3 classes and 13 subcategorized error types. The hierarchy of difficulty of error types, in descending order, is listed as follows: (1) Verb, (2) Noun, (3) Spelling, (4) Article, (5) Preposition, (6) Word choice, (7) Pronoun, (8) Redundancy, (9) Adjective, (10) Conjunction, (11) Adverb, (12) Word order, and (13) Unclear. The average error rate per 100 words is 4.62%. Freshmen committed the most errors, and juniors committed more than sophomores. Comparing the first semester with the second semester of each grade showed that all three levels made progress in terms of their error percentages. Besides, there were six major causes of errors made by students. The causes identified in this study, in descending order, were (1) overgeneralization, (2) ignorance of rule restrictions, (3) simplification, (4) incomplete application of rules, (5) L1 transfer, and (6) carelessness.
The author found that interference from Chinese is not the major factor in the way students construct sentences and use the language. Rather, overgeneralization, ignorance of rule restrictions, and simplification comprised the largest part of the error causes. This study suggests that composition teachers make use of the hierarchy of difficulty of error types to help them decide what should be taught and learned with more emphasis. Besides, the identified causes of errors can serve as aids to help teachers design remedial work.
APA, Harvard, Vancouver, ISO, and other styles
45

Hong, Shang-Ming, and 洪尚銘. "Error Analysis of Laser Tracker." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/41991073961338041014.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Department of Mechanical Engineering (Master's and PhD Program)
94
Laser tracking techniques have been widely used in many fields. For measurement, Coordinate Measurement Laser Trackers (CMLT) based on laser tracking can be applied to precision measuring and surface modeling of large objects. Although the CMLT is more convenient than the CMM, its precision is compromised by errors during manufacture and operation, so the CMLT cannot take the place of the CMM completely. This study first introduces the CMLT, and then presents a skew ray tracing method and homogeneous coordinate transformation matrices for optical modeling of the CMLT. The study also uses the laser tracker alignment parameters to align the errors of the CMLT. Finally, we simulate (1) the horizontal angle of the encoder, (2) the vertical angle of the encoder, and (3) the distance between the CMLT and reflectors, with and without the laser tracker alignment parameters. We hope the study can help optimize the measurement precision of the CMLT.
APA, Harvard, Vancouver, ISO, and other styles
46

林國珍. "Computation Error Analysis of CORDIC." Thesis, 1997. http://ndltd.ncl.edu.tw/handle/59559244300482804942.

Full text
Abstract:
Master's thesis
Chung Hua University
Institute of Electrical Engineering
85
CORDIC (Coordinate Rotation Digital Computer) is an algorithm that evaluates elementary functions through a sequence of coordinate-rotation iterations, using only a simple set of adders and shifters. It is therefore well suited to VLSI implementation of high-performance chips for digital signal processing and image processing, which demand high speed and massive function computation. Over the past thirty years, CORDIC algorithms and implementations have been discussed in many fields, but few papers have addressed the computation error of CORDIC, even though this is essential: knowing where the errors arise allows the hardware to be designed with the error budget in mind, so as to achieve the best cost-performance and the desired accuracy. In this thesis we analyze the computation error of CORDIC systematically, splitting it into several kinds. Errors are first divided according to whether the input range is expanded (before and after expansion), then according to whether scale-factor compensation is applied (before and after iteration). Each kind is further split by CORDIC operating mode (rotation and vectoring), and by error type into approximation error and truncation error, with truncation error treated separately for fixed-point and floating-point arithmetic. Across the three coordinate systems, this analysis yields 108 error-analysis formulas (72 from an overall view). We also tabulate the reference error for each function, together with design suggestions under a given error tolerance. The result is a broad and thorough error analysis of CORDIC: we account for several previously neglected errors and formulas, and present the results in a form that involves only the iteration count and the word length.
We also made some important discoveries in the course of the error analysis. In addition, we give examples of applying the error computation, showing how the results of the error analysis apply directly to the design and implementation of CORDIC.
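As a rough illustration of the shift-and-add iterations the abstract describes, here is a minimal rotation-mode CORDIC sketch in Python (the function name, iteration count, and use of floating point are illustrative choices, not taken from the thesis). The residual angle left in `z` after `n_iter` iterations corresponds to the approximation error discussed above, and the final multiplication by `k` is the scale-factor compensation:

```python
import math

def cordic_sin_cos(theta, n_iter=24):
    """Approximate (cos(theta), sin(theta)) for |theta| <= pi/2
    using only add/subtract and 2**-i scalings (the shifts)."""
    # Precompute the elementary rotation angles atan(2**-i)
    # and the accumulated scale factor K.
    angles = [math.atan(2.0 ** -i) for i in range(n_iter)]
    k = 1.0
    for i in range(n_iter):
        k /= math.sqrt(1.0 + 2.0 ** (-2 * i))

    x, y, z = 1.0, 0.0, theta
    for i in range(n_iter):
        d = 1.0 if z >= 0 else -1.0          # rotation direction
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]                   # residual angle -> approximation error
    return x * k, y * k                      # compensate the scale factor

c, s = cordic_sin_cos(math.pi / 6)
# c ≈ cos(30°) ≈ 0.8660, s ≈ sin(30°) ≈ 0.5000
```

With 24 iterations the residual angle is bounded by atan(2^-23), so the result is accurate to roughly 1e-7 here; in fixed-point hardware the truncation error analyzed in the thesis would be added on top of this.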
APA, Harvard, Vancouver, ISO, and other styles
47

Schroeder, Bernard L. "Errors involving violations of the zero-product principle an error analysis study /." 1989. http://catalog.hathitrust.org/api/volumes/oclc/20317281.html.

Full text
Abstract:
Thesis (Ph. D.)--University of Wisconsin--Madison, 1989.
Typescript. Vita. eContent provider-neutral record in process. Description based on print version record. Includes bibliographical references (leaves 177-181).
APA, Harvard, Vancouver, ISO, and other styles
48

Chen, Jia-Yan, and 陳佳彥. "Tool Path Planning and Geometric Errors Analysis for Five-Axis Error Measurement." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/j36chk.

Full text
Abstract:
Master's thesis
National Formosa University
Graduate Institute of Mechanical Design Engineering
100 (ROC academic year)
This thesis investigates the geometric errors of machine tools. The kinematic equations and geometric error model of a five-axis machine tool are derived using homogeneous transformation matrices (HTM). The static-error model comprises 43 static errors of the linear and rotary axes. The proposed method first analyzes the error model to identify the critical errors, and the corresponding offsets, that dominate the overall error of the five-axis machine tool. NC test paths for the tool center point (TCP), such as K2, are then obtained from the inverse kinematics equations. The NC paths are simulated and verified in Matlab, and geometric-error simulation and analysis are carried out for the five-axis machine tool. Simulations confirm the feasibility of the geometric-error model, and experiments demonstrate that the geometric errors can be compensated to obtain a precise five-axis machine tool.
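The HTM modeling mentioned in this abstract composes 4×4 homogeneous rotation and translation matrices along the kinematic chain, so a small geometric error on one axis propagates to a deviation at the tool center point. The pure-Python sketch below (the 100 mm offset and the 1e-4 rad angular error are hypothetical values for illustration, not figures from the thesis) shows the mechanism:

```python
import math

def rot_z(theta):
    """4x4 homogeneous rotation about the z-axis."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0, 0],
            [s,  c, 0, 0],
            [0,  0, 1, 0],
            [0,  0, 0, 1]]

def trans(dx, dy, dz):
    """4x4 homogeneous translation."""
    return [[1, 0, 0, dx],
            [0, 1, 0, dy],
            [0, 0, 1, dz],
            [0, 0, 0, 1]]

def matmul(a, b):
    """Plain 4x4 matrix product (chain composition of HTMs)."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Ideal pose vs. pose with a small angular error delta on the rotary axis:
# the tool-tip deviation scales with the offset from the axis (here 100 mm).
delta = 1e-4                      # rad, a hypothetical angular error
ideal = matmul(rot_z(math.pi / 4), trans(100.0, 0.0, 0.0))
actual = matmul(rot_z(math.pi / 4 + delta), trans(100.0, 0.0, 0.0))
tip_error = [actual[i][3] - ideal[i][3] for i in range(3)]
# |tip_error| ≈ 100 * delta = 0.01 mm
```

A full 43-error model as described in the abstract would insert one small error transform of this kind per error source along the chain; the principle of composing HTMs and comparing ideal versus perturbed TCP positions is the same.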
APA, Harvard, Vancouver, ISO, and other styles
49

Thella, 'Mamashome Amelia. "Refining geography teaching : an error analysis." Thesis, 2009. http://hdl.handle.net/10539/5925.

Full text
Abstract:
Sexual abstinence has become the primary response to the prevention of sexually transmitted infections (STIs) and unplanned pregnancies among young people. However, not much is known about the perceptions young men hold of sexual abstinence. The central aim of this study was to explore perceptions of sexual abstinence among young black males. The research examines men’s understandings of their own sexuality and the ways these might influence their decisions about sexual abstinence. A total of 10 in-depth, semi-structured interviews were conducted individually with young men aged between 18 and 25 years, studying at the University of the Witwatersrand. All data collected were then qualitatively analysed through thematic content analysis (TCA). Findings show that, in constructing their masculinities, participants predominantly endorsed discourses of male hegemony. In some instances the young men retreated to subjective alternative masculinities, although there was a stronger need to fit in with their peers and to protect themselves from being ridiculed or rejected; as such, conforming to hegemonic masculinity was expected. The young men constructed women as sexual objects and as a means of affirming their masculinity. A key conclusion drawn was that some traditional notions of manhood still held sway; these tied in strongly with how the participants constructed their masculinity, and influenced most of them not to abstain sexually.
APA, Harvard, Vancouver, ISO, and other styles
50

Chen, Huadong Sam. "Error analysis of camera-space manipulation." 2007. http://etd.nd.edu/ETD-db/theses/available/etd-08282007-102007/.

Full text
Abstract:
Thesis (Ph. D.)--University of Notre Dame, 2007.
Thesis directed by Steven B. Skaar for the Department of Aerospace and Mechanical Engineering. "August 2007." Includes bibliographical references (leaves 150-152).
APA, Harvard, Vancouver, ISO, and other styles