To view the other types of publications on this topic, follow the link: Accuracy of testing.

Dissertations on the topic "Accuracy of testing"

Create a reference in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the top 50 dissertations for research on the topic "Accuracy of testing".

Next to every work in the list of references there is an "Add to bibliography" option. Use it, and the bibliographic reference to the chosen work will be generated automatically in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read an online abstract of the work, whenever the relevant parameters are available in its metadata.

Browse dissertations from a wide variety of disciplines and compile your bibliography correctly.

1

Novela, George. „Testing maquiladora forecast accuracy“. ProQuest Dissertations and Theses @ UTEP, 2008. http://0-proquest.umi.com.lib.utep.edu/login?COPT=REJTPTU0YmImSU5UPTAmVkVSPTI=&clientId=2515.

2

Ahmed, Anwar. „COST AND ACCURACY COMPARISONS IN MEDICAL TESTING USING SEQUENTIAL TESTING STRATEGIES“. VCU Scholars Compass, 2010. http://scholarscompass.vcu.edu/etd/103.

Annotation:
The practice of sequential testing is followed by the evaluation of accuracy, but often not by the evaluation of cost. This research described and compared three sequential testing strategies: believe the negative (BN), believe the positive (BP) and believe the extreme (BE), the latter being a less-examined strategy. All three strategies were used to combine results of two medical tests to diagnose a disease or medical condition. Descriptions of these strategies were provided in terms of accuracy (using the maximum receiver operating curve or MROC) and cost of testing (defined as the proportion of subjects who need 2 tests to diagnose disease), with the goal of minimizing the number of tests needed for each subject while maintaining test accuracy. It was shown that the cost of the test sequence could be reduced without sacrificing accuracy beyond an acceptable range by setting an acceptable tolerance (q) on maximum test sensitivity. This research introduced a newly developed ROC curve reflecting this reduced sensitivity and cost of testing, called the Minimum Cost Maximum Receiver Operating Characteristic (MCMROC) curve. Within these strategies, four different parameters that could influence the performance of the combined tests were examined: the area under the curve (AUC) of each individual test, the ratio of standard deviations (b) from assumed underlying disease and non-disease populations, correlation (rho) between underlying disease populations, and disease prevalence. The following patterns were noted: Under all parameter settings, the MROC curve of the BE strategy never performed worse than the BN and BP strategies, and it most frequently had the lowest cost. The parameters tended to have less of an effect on the MROC and MCMROC curves than they had on the cost curves, which were affected greatly. The AUC values and the ratio of standard deviations both had a greater effect on cost curves, MROC curves, and MCMROC curves than prevalence and correlation. The use of BMI and plasma glucose concentration to diagnose diabetes in Pima Indians was presented as an example of a real-world application of these strategies. It was found that the BN and BE strategies were the most consistently accurate and least expensive choices.
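To make the trade-off described above concrete, the combined accuracy and cost of the BN and BP sequences have simple closed forms when the two tests are assumed conditionally independent given disease status. The sketch below is an illustration under that independence assumption, not the dissertation's code; all parameter values are hypothetical.

```python
# Illustrative sketch: accuracy and cost of two-test sequences, assuming the
# tests are conditionally independent given disease status (an assumption of
# this example, not necessarily of the dissertation).

def believe_the_positive(se1, sp1, se2, sp2, prev):
    """Stop if test 1 is positive; otherwise run test 2.
    The sequence is positive if either test is positive."""
    sens = se1 + (1 - se1) * se2                 # a miss on test 1 can be caught by test 2
    spec = sp1 * sp2                             # both tests must clear a non-diseased subject
    cost = prev * (1 - se1) + (1 - prev) * sp1   # fraction needing a 2nd test (test 1 negative)
    return sens, spec, cost

def believe_the_negative(se1, sp1, se2, sp2, prev):
    """Stop if test 1 is negative; otherwise run test 2.
    The sequence is positive only if both tests are positive."""
    sens = se1 * se2
    spec = sp1 + (1 - sp1) * sp2
    cost = prev * se1 + (1 - prev) * (1 - sp1)   # fraction needing a 2nd test (test 1 positive)
    return sens, spec, cost

# example: two moderately accurate tests at 10% prevalence
print(believe_the_positive(0.85, 0.80, 0.90, 0.75, prev=0.10))
print(believe_the_negative(0.85, 0.80, 0.90, 0.75, prev=0.10))
```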
3

Ashton, Triss A. „Accuracy and Interpretability Testing of Text Mining Methods“. Thesis, University of North Texas, 2013. https://digital.library.unt.edu/ark:/67531/metadc283791/.

Annotation:
Extracting meaningful information from large collections of text data is problematic because of the sheer size of the database. However, automated analytic methods capable of processing such data have emerged. These methods, collectively called text mining, first began to appear in 1988. A number of additional text mining methods quickly developed in independent research silos, each based on unique mathematical algorithms. How good each of these methods is at analyzing text is unclear. Method development typically evolves from some research-silo-centric requirement, with the success of the method measured by a custom requirement-based metric. Results of the new method are then compared to another method that was similarly developed. The proposed research introduces an experimentally designed testing method for text mining that eliminates research silo bias and simultaneously evaluates methods from all of the major context-region text mining method families. The proposed research method follows a random block factorial design with two treatments consisting of three and five levels (RBF-35) with repeated measures. The contribution of the research is threefold. First, the users perceived a difference in the effectiveness of the various methods. Second, while still not clear, there are characteristics within the text collection that affect the algorithms' ability to extract meaningful results. Third, this research develops an experimental design process for testing the algorithms that is adaptable to other areas of software development and algorithm testing. This design eliminates the bias-based practices historically employed by algorithm developers.
4

Davison, Wayne. „Establishment of Accuracy Testing Facilities for Terrestrial Laser Scanners“. Master's thesis, University of Cape Town, 2018. http://hdl.handle.net/11427/29572.

Annotation:
Measurement instruments that are required for high precision and reliable work need regular checks to ensure they are always performing at the required level of accuracy. A Terrestrial Laser Scanner is one such instrument, and with the vast amount of information that this machine is able to capture, it is especially important to run regular accuracy checks. This research builds on the work that has been done by previous researchers on the assessment of instrument accuracy and the establishment of facilities specialized for this assessment. Theoretical principles are investigated in the form of Least Squares Adjustments, similarities to panorama photography, and photogrammetric accuracy. Terrestrial Laser Scanners are reviewed with respect to their scanning principles and data acquisition. The methodology incorporated in this research encompasses the positioning of targets, their survey to establish high accuracy coordinates through various methods of adjustment, and thereafter the scanning of those targets. Comparisons were done using derived angles and distances between the targets to discover the point accuracy of the Laser Scanner. This was done for two facilities: a short-range facility (1 to 15 meters) and a medium-range facility (1 to 75 meters). The medium-range facility also included a range testing baseline for distance accuracy assessments. The outcomes from the comparisons between the surveyed control data and the laser scanner observed data indicated that the laser scanner is performing below the accuracy of the surveyed data. The laser scanner was further compared against the manufacturer quoted performance specifications, and this revealed the laser scanner to be performing below the quoted values. The laser scanner in question showed stronger results in the horizontal measurements than in the vertical measurements. All results suggested the laser scanner was delivering weak results in the vertical observations due to a misalignment of individual scan halves. This research was able to establish two accuracy assessment facilities specialized for Terrestrial Laser Scanners under these same conditions. Both facilities were used in conjunction to analyze the Z+F Imager 5010C laser scanner and determine the point accuracy in terms of the observed angles and distances from this machine. The results also identify errors in the performance of the laser scanner, and whether or not it is performing within the manufacturer specifications, by flagging any large values such as those seen in the vertical observations for this instrument.
5

Wu, Chen. „Testing the predictive accuracy of possibly misspecified binary choice models /“. Full text available from ProQuest UM Digital Dissertations, 2008. http://0-proquest.umi.com.umiss.lib.olemiss.edu/pqdweb?index=0&did=1850449331&SrchMode=1&sid=2&Fmt=2&VInst=PROD&VType=PQD&RQT=309&VName=PQD&TS=1279224700&clientId=22256.

Annotation:
Thesis (Ph.D.)--University of Mississippi, 2008.
Typescript. Vita. "April 2008." Major professor: Walter Mayer. Includes bibliographical references (leaves 68-71). Also available online via ProQuest to authorized users.
6

Gillett, Simon D. „Accuracy in mechanistic pavement design consequent upon unbound material testing“. Thesis, University of Nottingham, 2002. http://eprints.nottingham.ac.uk/12226/.

Annotation:
As part of a European Union funded research study (the "SCIENCE" project) performed between 1990 and 1993, granular road construction material and subgrade soil specimens were tested in the four participating laboratories of the project: Laboratório Nacional de Engenharia Civil (Portugal), University of Nottingham (United Kingdom), Laboratoire Central des Ponts et Chaussées (France), and Delft University of Technology (The Netherlands). The author was based at the first of these and visited the other participating laboratories, performing the majority of the work described. Inaccuracies in repeated load triaxial testing based on the use of different apparatus and instrumentation are identified. A detailed instrumentation comparison is undertaken, which results in the magnitude of potential errors being quantified. The author has derived material parameters and model coefficients for the materials tested using a number of previously published material models. In order to establish these parameters, a method was developed for removing outliers from test data based on the difference between the modelled and experimental material parameters for each stress path applied. The consequences of repeatability and reproducibility, variability and inaccuracies in the output of repeated load triaxial testing on the parameters and, hence, on computed pavement design thicknesses or life are investigated using a number of material models and the South African mechanistic pavement design method. Overall, it is concluded that:
• Instrumentation differences are not as critical as variations in results obtained from different specimens tested in a single repeated load triaxial apparatus. It was found that specimen manufacture differences yielded greater variation than instrumentation differences.
• Variation in results has some effect on the upper granular layers, where higher stress levels are experienced, but even quite considerable variation in the results from materials used in the lower layers has little effect on pavement life.
• Analytical methods to determine the stresses and strains vary considerably, as do the predicted pavement thicknesses consequent on using these methods.
The inaccuracies in testing (large discrepancies are found when the same material is tested in the same laboratory) and the limitations of the available material models severely limit the usefulness of advanced testing and non-linear modelling in routine pavement design. On the basis of this study it is recommended that a more simplistic pavement design approach be taken, keeping in line with future developments in testing, modelling and field validation.
7

Vasudev, R. Sashin, and Ashok Reddy Vanga. „Accuracy of Software Reliability Prediction from Different Approaches“. Thesis, Blekinge Tekniska Högskola, Avdelningen för interaktion och systemdesign, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-1298.

Annotation:
Many models have been proposed for software reliability prediction, but none of these models could capture a sufficient range of software characteristics. We have proposed a mixed approach using both analytical and data-driven models for assessing the accuracy of reliability prediction, built around a case study. This report follows a qualitative research strategy. Data were collected from a case study conducted at three different companies. Based on the case study, an analysis is made of the approaches used by the companies, supplemented by other data related to the organizations' Software Quality Assurance (SQA) teams. Of the three organizations, the first two used for the case study are working on reliability prediction, while the third is a growing company developing a product with less focus on quality. Data collection was by means of interviewing an employee of each organization who leads a team and has been in a managing position for at least the last 2 years.
8

Almowanes, Abdullah. „GENERATING RANDOM SHAPES FOR MONTE CARLO ACCURACY TESTING OF PAIRWISE COMPARISONS“. Thesis, Laurentian University of Sudbury, 2013. https://zone.biblio.laurentian.ca/dspace/handle/10219/2097.

Annotation:
This thesis shows highly encouraging results, as the gain in accuracy reached 18.4% when the pairwise comparisons method was used instead of the direct method for comparing random shapes. The thesis describes a heuristic for generating random but nice shapes, called placated shapes. Random, but visually nice, shapes are often needed for cognitive experiments and processes. These shapes are produced by applying a Gaussian blur to randomly generated polygons. Afterwards, a threshold is set to transform pixels from different shades of gray to black and white. This transformation produces placated shapes for easier estimation of areas. Randomly generated placated shapes are used in the Monte Carlo method to test the accuracy of cognitive processes by using pairwise comparisons. An on-line questionnaire was implemented, and participants were asked to estimate the areas of five shapes using a provided unit of measure. They were also asked to compare the shapes in pairs. Such a Monte Carlo experiment had never been conducted for the 2D case. The results obtained are of considerable importance.
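The blur-and-threshold heuristic described in this abstract is easy to reproduce. The sketch below is an illustration with assumed parameter values (image size, blur radius, threshold), not the thesis's code: it rasterizes a random polygon, blurs it, and re-thresholds it into a binary "placated" shape whose pixel area can then be estimated.

```python
# Illustrative sketch (assumed details, not the thesis code): generate a random
# polygon, rasterize it, apply a Gaussian blur, then re-threshold to black/white.
import numpy as np
from scipy.ndimage import gaussian_filter
from matplotlib.path import Path

def placated_shape(n_vertices=8, size=256, sigma=6.0, threshold=0.5, rng=None):
    rng = np.random.default_rng(rng)
    # random polygon: sorted angles with random radii around the image centre
    angles = np.sort(rng.uniform(0, 2 * np.pi, n_vertices))
    radii = rng.uniform(0.2, 0.45, n_vertices) * size
    verts = np.c_[size / 2 + radii * np.cos(angles),
                  size / 2 + radii * np.sin(angles)]
    # rasterize: 1.0 inside the polygon, 0.0 outside
    yy, xx = np.mgrid[0:size, 0:size]
    inside = Path(verts).contains_points(np.c_[xx.ravel(), yy.ravel()])
    img = inside.reshape(size, size).astype(float)
    # blur, then threshold back to a binary (black/white) image
    return gaussian_filter(img, sigma) > threshold

shape = placated_shape(rng=42)
print(f"area = {shape.sum()} pixels")   # the quantity participants estimated
```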
9

Arnold, Theresa Faye. „TESTING THE ACCURACY OF LIDAR FOREST MEASUREMENT REPLICATIONS IN OPERATIONAL SETTINGS“. MSSTATE, 2009. http://sun.library.msstate.edu/ETD-db/theses/available/etd-03232009-100909/.

Annotation:
The repeatability of stand measurements derived from LiDAR data was tested in east-central Mississippi. Data collected from LiDAR missions and from ground plots were analyzed to estimate stand parameters. Two independent LiDAR missions were flown in approximately orthogonal directions. Field plots were generated where the missions overlapped, and tree data were taken in these plots. LiDAR data found 86-100% of mature pine trees, 64-81% of immature pine trees, and 63-72% of mature hardwood trees. Immature and mature pine tree heights measured from LiDAR were found to be significantly different (α = 0.05) from field-measured heights. Individual tree volumes and plot volume for mature pines were precisely predicted in both flight directions. The results of this study showed that LiDAR repeatability can be achieved in mature pines, but immature pine and hardwood plots were unable to match the repeatability of the mature pine plots.
10

Ogawa, Hiroyuki. „Testing the accuracy of a three-dimensional acoustic coupled mode model“. Thesis, Monterey, California. Naval Postgraduate School, 1991. http://hdl.handle.net/10945/26806.

11

Yoon, Jun Young Ph D. Massachusetts Institute of Technology. „Design and testing of a high accuracy robotic single-cell manipulator“. Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/68574.

Annotation:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2011.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 137-139).
We have designed, built and tested a high accuracy robotic single-cell manipulator able to pick individual cells from an array of microwells, each 30 µm or 50 µm cubed. Design efforts have been made for higher accuracy, higher throughput, and compactness. The proposed system is designed with a T-drive mechanism using two linear stages for XY-plane positioning, giving higher stiffness and less structurally inherent error. Precision is especially required in the Z-axis movement for a successful cell-retrieval procedure, so a rotational mechanism with a voice coil actuator, among many options, is selected for the Z-axis motion, because it produces a relatively smaller reaction on the system and has the advantages of direct drive. The prototype of the robotic single-cell picker integrates the Z-axis and XY stage motion, real-time microscopy imaging, and cell manipulation, centered on an NI PXI controller as the main real-time controller. This prototype was built to test the performance of the proposed system in terms of single-cell retrieval, and this thesis also discusses the experiments on the cell-retrieval process with microbeads of the equivalent size, as well as their results. The proposed system will be used to help select and isolate an individual hybridoma from a polyclonal mixture of cells producing various types of antibodies. It is important to be able to do this cell-retrieval task, since a single isolated hybridoma cell produces a monoclonal antibody that only recognizes specific antigens, and this monoclonal antibody can be used to develop cures and treatments for many diseases. Our research's development of an accurate and dedicated mechatronics solution will contribute to more rapid and reliable investigation of cell properties. Such analysis techniques will act as a catalyst for quicker discovery of treatments and vaccines for a wide range of diseases, including HIV infection, tuberculosis, hepatitis C, and malaria, with potential impact on society.
12

Larsson, David. „Accuracy Assessment of Shear Wave Elastography for Arterial Applications by Mechanical Testing“. Thesis, KTH, Hållfasthetslära (Avd.), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-160091.

Annotation:
Arterial stiffness is an important biometric in predicting cardiovascular diseases, since mechanical properties serve as indicators of several pathologies such as atherosclerosis. Shear Wave Elastography (SWE) could serve as a valuable non-invasive diagnostic tool for assessing arterial stiffness, the technique having proven efficient in large homogeneous tissue. However, the accuracy within arterial applications is still uncertain, owing to the lack of proper validation. Therefore, the aim of this study was to assess the accuracy of SWE in arterial phantoms of poly(vinyl alcohol) cryogel by developing an experimental setup with an additional mechanical testing setup as a reference method. The two setups were developed to generate identical stress states on the mounted phantoms, with a combination of axial loads and static intraluminal pressures. The acquired radiofrequency data were analysed in the frequency domain, with retrieved dispersion curves fitted to a Lamb-wave based wave propagation model. The results indicated a significant correlation between SWE and mechanical measurements for the arterial phantoms, with an average relative error of 10% for elastic shear moduli in the range of 23 to 108 kPa. The performed accuracy quantification implies a satisfactory performance level as well as the general feasibility of SWE in arterial vessels, indicating the potential of SWE as a future cardiovascular diagnostic tool.
13

Madsen, Lars Bang. „The utility and accuracy of post-conviction polygraph testing with sex offenders“. Thesis, University of Newcastle Upon Tyne, 2006. http://hdl.handle.net/10443/236.

Annotation:
The aims of the present research were two-fold: firstly, to investigate the utility of post-conviction polygraphy with community-based sex offenders; and secondly, to examine the accuracy of the polygraph in this context. The initial study examined whether periodic polygraph testing acted as a deterrent to engaging in risk behaviour. Fifty adult male sex offenders taking part in community treatment programs were allocated into 2 groups: "Polygraph Aware" subjects were told they would receive a polygraph examination in 3 months regarding their high-risk behaviours, while "Polygraph Unaware" subjects were told their behaviour would be reviewed in 3 months. Relevant behaviours for each subject were established at baseline interviews, following which both groups were polygraphed at 3 months. All subjects were polygraphed again at 6 months. Thirty-two subjects (64%) attended the first polygraph examination, with 31 (97%) disclosing an average of 2.45 high-risk behaviours each previously unknown to supervising probation officers. There was no significant difference between the two groups. Twenty-one subjects (42%) completed the second polygraph test, with 71% disclosing an average of 1.57 behaviours, a significant decrease compared with the first test. Disclosures to treatment providers and probation officers also increased. Polygraph testing resulted in offenders engaging in less high-risk behaviour, although the possibility that offenders fabricated reports of high-risk behaviours to satisfy examiners is also considered; similarly, offenders seemed to be more honest with their supervisors, but this only occurred after the experience of the test itself. The second study examined the accuracy of the polygraph as used in a post-conviction context with sex offenders. One hundred and seventy-six sex offenders were engaged in treatment and required to complete biannual polygraph tests focused upon offending and other risk behaviours. Each participant's regular polygraph maintenance test was used for the study; however, in addition to the regular issues covered in this test, the examiner included 'drug use' over the preceding three months as a relevant question. Immediately after the polygraph test a hair specimen was collected and subsequently analysed for drugs. The polygraph was reasonably accurate at identifying truth telling (79%), while 21% were wrongly accused of drug use. Only a small number of offenders (n = 5) were found to be taking drugs and lying about having done so. The blind scorers correctly identified all of these individuals (100%). The area under the curve index was .88. The inter-rater reliability between the blind scorers and the original examiners was poor. The original examiners were less accurate than the blind scorers (area under the curve index = .68) and only correctly identified two of the five liars (40%). False positives were associated with lower intelligence and with having experienced a sanction due to a polygraph result. False negatives were not associated with demographic characteristics, personality variables or intelligence. The majority of offenders found the polygraph to be helpful in both treatment and supervision. Nine per cent of offenders claimed to have made false disclosures; these individuals had higher scores on ratings of Neuroticism and lower scores on ratings of Conscientiousness. The implications of these results are discussed.
Overall, the findings support the view that the polygraph is both useful and accurate in the treatment and supervision of sex offenders.
14

Wang, Daodang, Sen Zhang, Rengmao Wu, Chih Yu Huang, Hsiang-Nan Cheng, and Rongguang Liang. „Computer-aided high-accuracy testing of reflective surface with reverse Hartmann test“. Optical Society of America, 2016. http://hdl.handle.net/10150/621802.

Annotation:
Deflectometry provides a feasible way for surface testing with a high dynamic range, and calibration is a key issue in the testing. A computer-aided testing method based on the reverse Hartmann test, a fringe-illumination deflectometry, is proposed for high-accuracy testing of reflective surfaces. The virtual "null" testing of surface error is achieved based on ray tracing of the modeled test system. Due to the off-axis configuration of the test system, ultra-high requirements are placed on the calibration of the system geometry. The system modeling error can introduce significant residual systematic error in the testing results, especially in the cases of a convex surface and a small working distance. A calibration method based on computer-aided reverse optimization with iterative ray tracing is proposed for the high-accuracy testing of reflective surfaces. Both computer simulation and experiments have been carried out to demonstrate the feasibility of the proposed measurement method, and good measurement accuracy has been achieved. The proposed method can achieve measurement accuracy comparable to the interferometric method, even with large system geometry calibration error, providing a feasible way to address the uncertainty in the calibration of system geometry. (C) 2016 Optical Society of America
15

Jensen, Anne. „The accuracy and precision of kinesiology-style manual muscle testing : designing and implementing a series of diagnostic test accuracy studies“. Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:4fd95394-e812-402e-9195-6c82643eaa15.

Annotation:
Introduction: Kinesiology-style manual muscle testing (kMMT) is a non-invasive assessment method used by various types of practitioners to detect a wide range of target conditions. It is distinctly different from the muscle testing performed in orthopaedic/neurological settings and from Applied Kinesiology. Despite being estimated to be used by over 1 million people worldwide, the usefulness of kMMT has not yet been established. The aim of this thesis was to assess the validity of kMMT by examining its accuracy and precision. Methods: A series of 5 diagnostic test accuracy studies was undertaken. In the first study, the index test was kMMT, and the target condition was deceit in verbal statements spoken by Test Patients (TPs). The comparator reference standard was a true gold standard: the actual verity of the spoken statement. The outcomes of the muscle tests were interpreted consistently: a weak result indicated a Lie and a strong result indicated a Truth. A secondary index test was included as a comparator: Intuition, where Practitioners used intuition (without using kMMT) to ascertain if a Lie or Truth was spoken. Forty-eight Practitioners were recruited and paired with 48 unique kMMT-naïve TPs. Each Pair performed 60 kMMTs broken up into 6 blocks of 10, which alternated with blocks of 10 Intuitions. For each Pair, an overall percent correct was calculated for both kMMT and Intuition, and their means were compared. Also calculated for both tests were sensitivity, specificity, positive predictive value and negative predictive value. The second study was a replication of the first, using a sample size of 20 Pairs and a less complex procedure. In the third study, grip strength dynamometry replaced kMMT as the primary index test. In the fourth study, the reproducibility and repeatability of kMMT were examined. In the final study, TPs were presented with emotionally-arousing stimuli in addition to the affect-neutral stimuli used in previous studies, to assess if stimulus valence impacted kMMT accuracy. Results: Throughout this series of studies, mean kMMT accuracies (95% Confidence Intervals; CIs) ranged from 0.594 (0.541 – 0.647) to 0.659 (0.623 - 0.695) and mean Intuition accuracies, from 0.481 (0.456 - 0.506) to 0.526 (0.488 - 0.564). In all studies, mean kMMT accuracies were found to be significantly different from mean Intuition accuracies (p ≤ 0.01), and from Chance (p < 0.01). On the other hand, no difference was found between grip strength following False statements and grip strength following True statements (p = 0.61). In addition, the Practitioner-TP complex accounted for 57% of the variation in kMMT accuracy, with 43% unaccounted for. Also, there was no difference in the mean kMMT accuracy when using emotionally-arousing stimuli compared to when using affect-neutral stimuli (p = 0.35). Mean sensitivities (95% CI) ranged from 0.503 (0.421 - 0.584) to 0.659 (0.612 - 0.706), while mean specificities (95% CI) ranged from 0.638 (0.430 - 0.486) to 0.685 (0.616 - 0.754). Finally, while a number of participant characteristics seemed to influence kMMT accuracy during one study or another, no one specific characteristic was found to influence kMMT accuracy consistently (i.e. across the series of studies). Discussion: This series of studies has shown that kMMT can be investigated using rigorous evidence-based health care methods. Furthermore, for distinguishing lies from truths, kMMT has repeatedly been found to be significantly more accurate than both Intuition and Chance.
Practitioners appear to be an integral part of the kMMT dynamic, because when they were replaced by a mechanical device (i.e. a grip strength dynamometer), distinguishing Lies from Truths was not possible. In addition, since specificities seemed to be greater than sensitivities, Truths may have been easier to detect than Lies. A limitation of this series of studies is that I have a potential conflict of interest, in that I am a practitioner of kMMT who gets paid to perform kMMT. Another limitation is that these results are not generalisable to other applications of kMMT, such as its use in other paradigms or using muscles other than the deltoid. Also, these results suggest that kMMT may be about 60% accurate, which is statistically different from Intuition and Chance; however, it has not been established whether 60% correct is "good enough" in a clinical context. As such, further research is needed to assess its clinical utility, such as randomised controlled trials investigating the effectiveness of whole kMMT technique systems. Future investigators may also want to explore what factors, such as specific Practitioner and TP characteristics, influence kMMT accuracy, and to investigate the validity of using kMMT to detect other target conditions, using other reference standards and muscles other than the deltoid. Summary: This series of diagnostic test accuracy studies has found that kMMT can be investigated using rigorous methods, and that kMMT used to distinguish Lies from Truths is significantly more accurate than both Intuition and Chance. Further research is needed to assess kMMT's clinical utility.
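The accuracy figures quoted above come from standard 2×2 diagnostic-test arithmetic. As a reminder of how such numbers are derived, the sketch below is a generic illustration with hypothetical counts, not the study's analysis code; it treats a spoken Lie as the positive target condition.

```python
# Illustrative sketch (hypothetical counts, not the study's analysis code):
# diagnostic accuracy of a lie-detection index test against a gold standard,
# with "Lie" as the positive target condition.

def accuracy_metrics(tp, fp, fn, tn):
    return {
        "accuracy":    (tp + tn) / (tp + fp + fn + tn),
        "sensitivity": tp / (tp + fn),   # lies correctly called lies
        "specificity": tn / (tn + fp),   # truths correctly called truths
        "ppv":         tp / (tp + fp),   # positive predictive value
        "npv":         tn / (tn + fn),   # negative predictive value
    }

# hypothetical tallies for one Practitioner-TP pair's 60 muscle tests
print(accuracy_metrics(tp=20, fp=10, fn=10, tn=20))
```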
16

Parson, Lindsay. „Improving the Accuracy of the VO2 max Prediction Obtained from Submaxial YMCA Testing“. TopSCHOLAR®, 2004. http://digitalcommons.wku.edu/theses/510.

Annotation:
Maximal oxygen uptake (VO2 max) is the best criterion measure for aerobic fitness and for the prescription of exercise intensity in programs designed to enhance cardiorespiratory fitness. There are two ways of obtaining VO2 max: maximal tests, which require subjects to exercise to the point of volitional exhaustion and provide the most accurate measure; and submaximal tests, which are less physically strenuous but have lower accuracy. A popular submaximal protocol is the YMCA bike test. Steady state heart rate (HR) is measured at multiple submaximal workloads and extrapolated to the subject's estimated maximal HR (220 - age). The VO2 corresponding to the estimated maximal HR is accepted as the estimated VO2 max. The accuracy of this submaximal testing protocol affects the ability to estimate a subject's actual aerobic capacity. To investigate the YMCA protocol more closely, submaximal measures (HR, VO2) at specific workloads were utilized in an attempt to improve the accuracy of the prediction. The standard YMCA protocol was completed and then extended to actual maximal exertion. Submaximal measures (HR, VO2, etc.) were used to develop a regression equation predicting VO2 max. T-tests were used to compare VO2 data between protocols. Multiple regression analyses were performed to generate regression equations to enhance the accuracy of VO2 max estimations from the YMCA submaximal protocol. Success was defined as no significant difference between the new regression equation and the actual measured VO2 max. Because submaximal measures (HR, VO2) could not be utilized to improve the accuracy of the prediction of the YMCA protocol, the original purpose was deemphasized and redirected. Considering the apparent utility of anthropometric measures in estimating VO2 max, this study sought to improve the accuracy of the YMCA protocol by adding anthropometric measures (BMI, skinfolds) to develop two separate regression models. Results were significantly different (p < 0.05) between measured VO2 (MVO2) and VO2 estimated from the YMCA protocol (YVO2). Additionally, results were significantly different (p = 0.003) between the Houston non-exercise test and MVO2. In conclusion, although a significant correlation resulted between MVO2 and YVO2, it was not stronger than other submaximal estimations, and the Houston non-exercise test was not a strong predictor, given its significant difference from MVO2. Therefore, by adding BMI and skinfolds to the popular YMCA formula, r-values were increased (r = 0.817 and r = 0.822), allowing better estimation of a subject's VO2 max than solely using graphic plots of steady state HR responses at protocol-determined workloads.
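The extrapolation step the abstract describes is a simple linear fit. The sketch below is a minimal illustration of the YMCA-style calculation with made-up sample values, not the thesis's analysis code: it regresses VO2 on steady-state HR and evaluates the fitted line at the age-predicted maximal HR.

```python
# Illustrative sketch (made-up sample values, not the thesis code): the
# YMCA-style extrapolation of steady-state HR vs. VO2 out to 220 - age.
import numpy as np

def ymca_vo2max(hr, vo2, age):
    """hr: steady-state heart rates (bpm) at the submaximal workloads;
    vo2: VO2 values (ml/kg/min) at those same workloads."""
    slope, intercept = np.polyfit(hr, vo2, 1)   # linear HR-VO2 relationship
    hr_max = 220 - age                          # age-predicted maximal HR
    return slope * hr_max + intercept           # extrapolated VO2 max

# three submaximal stages for a hypothetical 30-year-old subject
print(ymca_vo2max(hr=[105, 125, 145], vo2=[14.0, 21.5, 29.0], age=30))
```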
17

Miao, Jessica. „The effects of gamification on engagement and response accuracy in discriminatory sensory testing“. The Ohio State University, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=osu161900284591034.

18

Chipanga, Tendai. „Determination of the accuracy of non-destructive residual stress measurements methods“. Thesis, [S.l. : s.n.], 2009. http://dk.cput.ac.za/cgi/viewcontent.cgi?article=1100&context=td_cput.

19

Horn, Sandra L. „Aggregating Form Accuracy and Percept Frequency to Optimize Rorschach Perceptual Accuracy“. University of Toledo / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1449513233.

20

Ryan, Keegan P. „Experimental Testing of the Accuracy of Attitude Determination Solutions for a Spin-Stabilized Spacecraft“. DigitalCommons@USU, 2011. https://digitalcommons.usu.edu/etd/1007.

Annotation:
Spin-stabilized spacecraft generally rely on sun and three-axis magnetic field sensor measurements for attitude determination. This study experimentally determines the total accuracy of attitude determination solutions using modest quality sensors. This was accomplished by having a test spacecraft collect data during spinning motions. The data were then post-processed to find the attitude estimates, which were compared to the experimentally measured attitude. This same approach will be used to test the accuracy of the attitude determination system of the DICE spacecraft to be built by SDL/USU.
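With two reference directions such as the sun vector and the magnetic field, a standard deterministic attitude solution is the TRIAD method. The sketch below is a textbook illustration of that method, not necessarily the estimator used in the thesis; it builds the rotation from reference frame to body frame out of the two vector pairs.

```python
# Illustrative sketch of the classic TRIAD two-vector attitude solution,
# a standard method for sun-sensor + magnetometer data (the thesis's actual
# estimator may differ).
import numpy as np

def triad(b1, b2, r1, r2):
    """b1, b2: two directions measured in the spacecraft body frame;
    r1, r2: the same directions expressed in the reference (inertial) frame.
    Returns the rotation matrix taking reference-frame vectors into the body frame."""
    def frame(v1, v2):
        v1 = np.asarray(v1, float)
        v2 = np.asarray(v2, float)
        t1 = v1 / np.linalg.norm(v1)           # primary axis
        t2 = np.cross(v1, v2)
        t2 /= np.linalg.norm(t2)               # normal to the two observations
        return np.column_stack([t1, t2, np.cross(t1, t2)])
    return frame(b1, b2) @ frame(r1, r2).T

# sanity check: a pure 90-degree rotation about z is recovered exactly
R = triad(b1=[0, 1, 0], b2=[0, 0, 1], r1=[1, 0, 0], r2=[0, 0, 1])
print(np.round(R, 3))   # rows: [0 -1 0], [1 0 0], [0 0 1]
```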
21

Cavanagh, Daniele. „Developing soft tissue thickness values for South African black females and testing its accuracy“. Diss., University of Pretoria, 2010. http://hdl.handle.net/2263/25716.

Annotation:
In forensic science one frequently has to deal with unidentified skeletonised remains. When conventional methods of identification have proven unsuccessful, forensic facial reconstruction (FFR) may be used, often as a last resort, to assist the process. FFR relies on the relationships between the facial features, subcutaneous soft tissues and underlying bony structure of the skull. The aim of this study was to develop soft tissue thickness (STT) values for South African black females for application to FFR, to compare these values to existing literature or databases, and to test the accuracy and recognisability of reconstructions using these values. It also established whether population-specific STT values are necessary for FFR. Computerised tomography scanning was used to determine average population-specific STT values at 28 facial landmarks of 154 black females. The Manchester method of facial reconstruction was employed to build faces, for which antemortem photographs were available, on two skulls that were provided by the South African Police Service's (SAPS) Forensic Science Laboratory. Different data sets of STT values, namely values from this study, two sets of data from American blacks, and a South African mixed ancestry group, were used to build four faces for each of the skulls. Two identification sessions were then held. In the first session, 30 observers were asked to select matches from a random group of 20 photographs of black females which included the two actual images. The identification rates calculated for each photograph revealed that the highest rates of a positive match were for the reconstructions based on South African values. In the second session, another group of 30 volunteers was asked to match to each photograph the most similar of the four reconstructions made of that particular individual. The reconstructions with STT values from the current (South African) study were selected more often than those based on the other data sets. Although shortcomings do exist, the identification sessions indicated that FFR can be of value. Furthermore, population-specific STT values are important, since skulls reconstructed using these values were selected or identified statistically significantly more often than the others.
Dissertation (MSc)--University of Pretoria, 2011.
22

Mulelid, Tor Inge. „Testing the use and accuracy of satellite imagery for land registration in Angot Yedegera, Ethiopia“. Thesis, Norges teknisk-naturvitenskapelige universitet, Geografisk institutt, 2013. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-22945.

Annotation:
Aim: The aim of this thesis was to study the suitability of using satellite imagery for land registration in Ethiopia. The primary focus was to investigate whether the accuracy of the coordinates derived from a WorldView-1 satellite image met the 1 m requirement for rural cadastral mapping in Ethiopia. Another aim was to examine whether different border types could affect the accuracy. A final aim was to investigate the relationship between slope and the accuracy of the derived coordinates, to examine whether there was a correlation between the two. Methods: A total of 42 Ground Control Points (GCPs) were surveyed for the orthorectification of the satellite image. Static surveying of second order points was conducted prior to the Real-Time Kinematic Global Positioning System (RTK GPS) surveying. A total of 210 parcel corners in Angot Yedegera, Ethiopia, were surveyed using RTK GPS. These RTK GPS data served as the basis for comparison with the coordinates derived from the satellite image. Statistical analysis of the discrepancies was performed by analyzing values of central tendency and dispersion. In addition, outlier tests were conducted using boxplot and percentile values, as well as a Moran I autocorrelation test. A Pearson r correlation test was performed between slope and the accuracy of the derived coordinates. Results: 46.4% of the coordinate values derived from the satellite image had discrepancies below the 1 m requirement. The median of the discrepancies was 1.088 m. Further, the 75th percentile was 2.386 m, and the maximum deviation was 10.103 m. It was found that the deviations varied according to border type, both in central tendency and in dispersion. The median for the border types 'fence', 'pasture land' and 'parcel' was below the 1 m requirement, whereas the other border types had medians varying from 1.777 m to 2.367 m. The correlation test indicated that slope was not related to the accuracy achieved (Pearson r = 0.029). Conclusion: It was found that the coordinates derived from the WorldView-1 satellite image do not meet the requirement of 1 m accuracy. It was also found that border type has a large influence on the accuracy achieved. The border types 'fence', 'pasture land' and 'parcel' achieved the highest accuracy, while the border types 'path', 'forest' and 'diffuse' achieved the lowest accuracy. Slope was not shown to affect the accuracy of the coordinates to any positive or negative extent.
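The summary statistics reported above are straightforward to reproduce. The sketch below is an illustration with assumed variable names, not the thesis's scripts: it computes the share of discrepancies within the 1 m requirement, the median, 75th percentile and maximum, and the Pearson correlation between slope and discrepancy.

```python
# Illustrative sketch (assumed variable names, not the thesis's scripts).
import numpy as np
from scipy.stats import pearsonr

def summarise(discrepancies_m, slopes):
    d = np.asarray(discrepancies_m, float)
    print(f"within 1 m requirement: {np.mean(d < 1.0):.1%}")
    print(f"median: {np.median(d):.3f} m")
    print(f"75th percentile: {np.percentile(d, 75):.3f} m")
    print(f"maximum: {d.max():.3f} m")
    r, p = pearsonr(slopes, d)   # slope vs. positional discrepancy
    print(f"Pearson r = {r:.3f} (p = {p:.3f})")
```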
23

Ramudzuli, Zwivhuya Romeo. „Investigation into a GPS time pulse radiator for testing time-stamp accuracy of a radio telescope“. Master's thesis, Faculty of Engineering and the Built Environment, 2018. http://hdl.handle.net/11427/29995.

Annotation:
The MeerKAT radio telescope in South Africa is required to tag the arrival time of a signal to within 10 ns of Coordinated Universal Time (UTC). The telescope has a local atomic clock ensemble and uses satellite-based remote clock comparison techniques to compare the telescope time to UTC. The master clock timing edge is distributed to each telescope antenna via an optical-fibre precise time transfer. Although the timing accuracy of the telescope time is measured internally by the telescope, there is a need for an independent method to verify how well each antenna and its associated processing stages are aligned to UTC. A portable GNSS time-pulse radiator (GTR) device for testing the time-stamp accuracy was developed. The GTR was calibrated at the National Metrology Institute of South Africa, and laboratory characterisation tests measured its RF timing pulse to be 1.32 ± 0.100 µs ahead of the UTC second. The telescope's time and frequency reference clock ensemble consists of two hydrogen masers, an ultrastable crystal, and GPS-disciplined Rubidium clocks. During operation, the GTR radiates a broadband GPS-time-synchronised RF timing signal at a known distance from the telescope antennas, and the corresponding timestamps are compared to the expected value. Recent GTR timing tests performed on one of the MeerKAT antennas showed that the telescope's generated timestamps associated with the GTR's RF timing signal coincided with the expected delay of approximately 16 ± 0.1 µs, measured from an antenna 4.8 km away from the telescope's master clock transmitter. Ultimately we used the GTR to verify that the telescope time and UTC were aligned to within 100 ns. Future work is planned to improve the profile of the transmitted signal and the timing-critical hardware in order to reduce the GTR's error budget.
24

Kraljevic, Matija. „Character recognition in natural images : Testing the accuracy of OCR and potential improvement by image segmentation“. Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-187991.

Annotation:
In recent years, reading text from natural images has gained renewed research attention. One of the main reasons for this is the rapid growth of camera-based applications on smartphones and other portable devices. With the increasing availability of high-performance, low-priced, image-capturing devices, the application of scene text recognition is rapidly expanding and becoming increasingly popular. Despite many efforts, character recognition in natural images is still considered a challenging and unresolved problem. The difficulties stem from the fact that natural images suffer from a wide variety of obstacles such as complex backgrounds, font variation, uneven illumination, resolution problems, occlusions and perspective effects, to mention just a few. This paper aims to test the accuracy of OCR in character recognition of natural images, as well as to test the possible improvement in accuracy after implementing three different segmentation methods. The results showed that the accuracy of OCR was very poor and no improvements in accuracy were found after implementing the chosen segmentation methods.
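A test harness of the kind the abstract describes can be kept very small. The thesis does not name its OCR engine or segmentation code, so the sketch below is an assumption-laden illustration: Tesseract via pytesseract as the OCR engine and Otsu thresholding (one plausible segmentation step) via OpenCV.

```python
# Illustrative sketch (Tesseract and Otsu thresholding are assumptions, not
# the thesis's tools): single-character OCR accuracy with/without segmentation.
import cv2
import pytesseract

def recognise(path, segment=False):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if segment:
        # simple segmentation: Otsu threshold separates glyph from background
        _, img = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # --psm 10: treat the image as a single character
    return pytesseract.image_to_string(img, config="--psm 10").strip()

def accuracy(samples, segment=False):
    """samples: list of (image_path, true_character) pairs."""
    hits = sum(recognise(p, segment) == truth for p, truth in samples)
    return hits / len(samples)
```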
25

Damodara, Eswar Keran C. „Clinical trial to determine the accuracy of prefabricated trays for making alginate impressions“. Thesis, Birmingham, Ala. : University of Alabama at Birmingham, 2008. https://www.mhsl.uab.edu/dt/2009r/damodara.pdf.

26

Shaver, Jonathan A. „PC104 control environment development and use for testing the dynamic accuracy of the MicroStrain 3DM-GX1 sensor“. Thesis, Monterey, Calif. : Naval Postgraduate School, 2007. http://bosun.nps.edu/uhtbin/hyperion-image.exe/07Jun%5FShaver.pdf.

Annotation:
Thesis (M.S. in Electrical Engineering)--Naval Postgraduate School, June 2007.
Thesis Advisor(s): Xiaoping Yun, Matthew Feemster. "June 2007." Includes bibliographical references (p. 113-114). Also available in print.
27

Deckert, Christopher J. „Canopy, terrain, and distance effects on Global Positioning System position accuracy“. Thesis, This resource online, 1994. http://scholar.lib.vt.edu/theses/available/etd-09052009-040816/.

28

Donkers, Adriane Martina. „Observer accuracy in usability testing: the effects of obviousness of usability problems, prior knowledge of problems, and training“. Carleton University Dissertation (Psychology), Ottawa, 1992.

29

Wolf, Lisa Adams. „Testing and refinement of an integrated, ethically-driven environmental model of clinical decision-making in emergency settings“. Thesis, Boston College, 2011. http://hdl.handle.net/2345/2224.

Annotation:
Thesis advisor: Dorothy A. Jones
Thesis advisor: Pamela J. Grace
The purpose of the study was to explore the relationships among multiple variables within a model of critical thinking and moral reasoning, and to support and refine the elements that significantly correlate with accuracy in clinical decision-making. Background: Research to date has identified multiple factors that are integral to clinical decision-making. The interplay among suggested elements within the decision-making process particular to the nurse, the patient, and the environment remains unknown. Determining the clinical usefulness and predictive capacity of an integrated, ethically-driven environmental model of decision making (IEDEM-CD) in emergency settings in facilitating accuracy in problem identification is critical to initial interventions and safe, cost-effective, quality patient care outcomes. Extending the literature on accuracy and clinical decision making can inform utilization, determination of staffing ratios, and the development of evidence-driven care models. Methodology: The study used a quantitative descriptive correlational design to examine the relationships between multiple variables within the IEDEM-CD model. A purposive sample of emergency nurses was recruited to participate in the study, resulting in a sample size of 200, calculated to yield a power of 0.80, significance of .05, and a moderate effect size. The dependent variable, accuracy in clinical decision-making, was measured by scores on clinical vignettes. The independent variables of moral reasoning, perceived environment of care, age, gender, certification in emergency nursing, educational level, and years of experience in emergency nursing were measured by the Defining Issues Test, version 2, the Revised Professional Practice Environment scale, and a demographic survey. These instruments were identified to test and refine the elements within the IEDEM-CD model. Data collection occurred via internet survey over a one-month period. Rest's Defining Issues Test, version 2 (DIT-2), the Revised Professional Practice Environment tool (RPPE), clinical vignettes, and a demographic survey were made available as an internet survey package using Qualtrics™. Data from each participant were scored and entered into a PASW database. The analysis plan included bivariate correlation analysis using Pearson's product-moment correlation coefficients, followed by chi-square and multiple linear regression analyses. Findings: The elements as identified in the IEDEM-CD model supported moral reasoning and environment of care as factors significantly affecting accuracy in decision-making. Findings reported that in complex clinical situations, higher levels of moral reasoning significantly affected accuracy in problem identification. Attributes of the environment of care, including teamwork, communication about patients, and control over practice, also significantly affected nurses' critical cue recognition and selection of appropriate interventions. Study results supported the conceptualization of the IEDEM-CD model and its usefulness as a framework for predicting clinical decision making accuracy for emergency nurses in practice, with further implications for education, research and policy.
Thesis (PhD) — Boston College, 2011
Submitted to: Boston College. Connell School of Nursing
Discipline: Nursing
30

Matzke, Nicholas J. „Probabilistic Historical Biogeography| New Models for Founder-Event Speciation, Imperfect Detection, and Fossils Allow Improved Accuracy and Model-Testing“. Thesis, University of California, Berkeley, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=3616487.

Annotation:

Historical biogeography has a diversity of methods for inferring ancestral geographic ranges on phylogenies, but many of the methods have conflicting assumptions, and there is no common statistical framework by which to judge which models are preferable. Probabilistic modeling of geographic range evolution, pioneered by Ree and Smith (2008, Systematic Biology) in their program LAGRANGE, could provide such a framework, but this potential has not been implemented until now.

I have created an R package, "BioGeoBEARS," described in chapter 1 of the dissertation, that implements in a likelihood framework several commonly used models, such as the LAGRANGE Dispersal-Extinction-Cladogenesis (DEC) model and the Dispersal-Vicariance Analysis (DIVA, Ronquist 1997, Systematic Biology) model. Standard DEC is a model with two free parameters specifying the rate of "dispersal" (range expansion) and "extinction" (range contraction). However, while dispersal and extinction rates are free parameters, the cladogenesis model is fixed, such that the geographic range of the ancestral lineage is inherited by the two daughter lineages through a variety of scenarios fixed to have equal probability. This fixed nature of the cladogenesis model means that it has been indiscriminately applied in all DEC analyses, and has not been subjected to any inference or formal model testing.

BioGeoBEARS also adds a number of features not previously available in most historical biogeography software, such as distance-based dispersal, a model of imperfect detection, and the ability to include fossils either as ancestors or tips on a time-calibrated tree.

Several important conclusions may be drawn from this research. First, formal model selection procedures can be applied in phylogenetic inferences of historical biogeography, and the relative importance of different processes can be measured. These techniques have great potential for strengthening quantitative inference in historical biogeography. No longer are biogeographers forced to simply assume, consciously or not, that some processes (such as vicariance or dispersal) are important and others are not; instead, this can be inferred from the data. Second, founder-event speciation appears to be a crucial explanatory process in most clades, the only exception being some intracontinental taxa showing a large degree of sympatry across widespread ranges. This is not the same thing as claiming that founder-event speciation is the only important process; founder event speciation as the only important process is inferred in only one case (Microlophus lava lizards from the Galapagos). The importance of founder-event speciation will not be surprising to most island biogeographers. However, the results are important nonetheless, as there are still some vocal advocates of vicariance-dominated approaches to biogeography, such as Heads (2012, Molecular Panbiogeography of the Tropics), who allows vicariance and range-expansion to play a role in his historical inferences, but explicitly excludes founder-event speciation a priori. The commonly-used LAGRANGE DEC and DIVA programs actually make assumptions very similar to those of Heads, even though many users of these programs likely consider themselves dispersalists or pluralists. Finally, the inclusion of fossils and imperfect detection within the same likelihood and model-choice framework clears the path for integrating paleobiogeography and neontological biogeography, strengthening inference in both.

Model choice is now standard practice in phylogenetic analysis of DNA sequences: a program such as ModelTest is used to compare models such as Jukes-Cantor, HKY, and GTR+I+G, and to select the best model before inferring phylogenies or ancestral states. The same should now happen in phylogenetic biogeography, and BioGeoBEARS enables this procedure. Perhaps more important, however, is the potential for users to create and test new models. Probabilistic modeling of geographic range evolution on phylogenies is still in its infancy, and undoubtedly there are better models waiting to be discovered. Different clades and different regions will likely favor different processes, and further improvements will come from linking the evolution of organismal traits (e.g., loss of flight) with the evolution of geographic range within a common inference framework. In a world of rapid climate change and habitat loss, biogeographical methods must maximize both flexibility and statistical rigor if they are to remain relevant. This research takes several steps in that direction.
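To make the model-choice step concrete: comparing biogeographic models in a likelihood framework reduces to comparing penalized log-likelihoods, for example via AIC and Akaike weights. The sketch below uses invented log-likelihood values for a DEC and a DEC+J fit; the numbers are illustrative only, not results from the dissertation.

```python
import math

def aic(log_likelihood, k):
    """Akaike information criterion: 2k - 2 ln L for a model with k free parameters."""
    return 2 * k - 2 * log_likelihood

# Hypothetical maximized log-likelihoods and parameter counts (illustration only)
models = {"DEC": (-79.2, 2), "DEC+J": (-73.1, 3)}
aics = {name: aic(lnL, k) for name, (lnL, k) in models.items()}
best = min(aics.values())
raw = {name: math.exp(-(a - best) / 2) for name, a in aics.items()}
total = sum(raw.values())
for name, a in sorted(aics.items(), key=lambda kv: kv[1]):
    print(f"{name}: AIC = {a:.1f}, Akaike weight = {raw[name] / total:.3f}")
```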

BioGeoBEARS is open-source and is freely available at the Comprehensive R Archive Network (http://cran.r-project.org/web/packages/BioGeoBEARS/index.html). A step-by-step tutorial, using the Psychotria dataset, is available at PhyloWiki (http://phylo.wikidot.com/biogeobears).

(Abstract shortened by UMI.)

31

McKillip, Kassandra. „Determination of the repeatability and accuracy of the Pressed Juice Percentage (PJP) method at sorting beef strip loin steaks into categories of known juiciness“. Thesis, Kansas State University, 2016. http://hdl.handle.net/2097/32578.

Annotation:
Master of Science
Department of Animal Sciences and Industry
Travis G. O'Quinn
The objectives of this study were to determine the effect of enhancement on consumer and trained beef palatability scores for three quality grades cooked to three degrees of doneness (DOD), and to determine the accuracy and repeatability of the Pressed Juice Percentage (PJP) method. Strip loins of USDA Prime, Low Choice, and Low Select quality grades were used. To maximize variation in juiciness, steaks were either enhanced (formulated for a 108% pump with a solution of water, salt, and alkaline phosphates) or non-enhanced, and cooked to one of three degrees of doneness (Rare: 60°C, Medium: 71°C, or Very Well-Done: 82°C). All samples were evaluated for Warner-Bratzler shear force (WBSF), Slice Shear Force (SSF), PJP, and palatability traits by consumer and trained panelists. Consumer panelists rated all enhanced treatments similar (P > 0.05) to each other and greater (P < 0.05) than all non-enhanced treatments for juiciness, tenderness, flavor liking, and overall liking. Consumer ratings of juiciness, tenderness, and overall liking increased (P < 0.05) as DOD decreased. All enhanced treatments were also similar (P > 0.05) to each other and had a greater (P < 0.05) percentage of steaks classified as premium quality. For trained-panel initial juiciness, all enhanced treatments and non-enhanced Prime samples were similar (P > 0.05) to each other and greater (P < 0.05) than the other treatments cooked to Medium and Very Well-Done. PJP had a relatively high repeatability coefficient (0.70), indicating that only 30% of the observed variation was due to measurement differences. The PJP threshold values evaluated accurately segregated steaks by the probability of a sample being rated “juicy” by consumers, with the actual percentage of “juicy” samples determined to be 41.67%, 72.31%, 89.33%, and 98.08% for the <50%, 50–75%, 75–90%, and >90% categories, respectively. Therefore, enhancement has a substantial, positive effect on beef palatability. Enhancing higher-quality beef does not provide an additional palatability benefit; hence the greatest economic advantage lies in enhancing lower-quality beef products. The results indicate that the PJP method is both repeatable and accurate at sorting steaks by the likelihood of being “juicy”.
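The repeatability coefficient reported above is, in essence, an intraclass correlation: the share of total variance attributable to true differences between samples rather than to measurement error. A minimal sketch under that assumption, with invented duplicate PJP readings rather than the study's data:

```python
import numpy as np

def repeatability(x):
    """One-way random-effects estimate of repeatability (an intraclass
    correlation): fraction of total variance due to true sample-to-sample
    differences rather than measurement error.
    x: (n_samples, n_replicates) array of repeated readings."""
    n, r = x.shape
    msb = r * np.var(x.mean(axis=1), ddof=1)   # between-sample mean square
    msw = x.var(axis=1, ddof=1).mean()         # within-sample mean square
    var_between = max((msb - msw) / r, 0.0)
    return var_between / (var_between + msw)

# Hypothetical duplicate PJP readings (%) on five steaks, illustration only
pjp = np.array([[12.1, 13.0], [18.4, 17.9], [22.0, 20.5], [9.8, 10.6], [15.2, 16.1]])
print(f"repeatability ≈ {repeatability(pjp):.2f}")
```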
32

Fu, Tiffany Szu-Ting. „Mood states, cognitive characteristics and accuracy of confidence assessments : testing the validity of the depressive realism vs. the negativity hypothesis“. Thesis, University of Reading, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.501353.

Annotation:
This study tested the validity of the depressive realism versus the negativity hypotheses by investigating the relation between confidence and accuracy in clinically depressed, dysphoric and control participants when performing decision tasks. In Experiment 1, depressed and dysphoric recovered depressed patients showed systematic PTPE underconfidence across several tasks regardless of performance accuracy but displayed item-by-item overconfidence equivalent to that of controls on tests of general knowledge and object recognition. In Experiment 2, control participants demonstrated both item-by-item overconfidence and PTPE underconfidence on a newly developed self-relevant adjective recognition task, thus enabling a stronger contrasting test of the two hypotheses in subsequent experiments.
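Item-by-item over- or underconfidence of the kind measured here is typically scored as mean confidence minus proportion correct. A minimal sketch with hypothetical ratings, not data from the experiments:

```python
def overconfidence(confidences, correct):
    """Calibration score: mean confidence minus proportion correct.
    Positive values indicate overconfidence, negative underconfidence."""
    mean_conf = sum(confidences) / len(confidences)
    accuracy = sum(correct) / len(correct)
    return mean_conf - accuracy

# Hypothetical confidence ratings (0-1) and correctness (1/0), illustration only
conf = [0.90, 0.80, 0.70, 0.95, 0.60]
hits = [1, 0, 1, 1, 0]
print(f"over/underconfidence = {overconfidence(conf, hits):+.2f}")  # prints +0.19
```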
33

Nosek, Jakub. „Testování metody Precise Point Positioning“. Master's thesis, Vysoké učení technické v Brně. Fakulta stavební, 2020. http://www.nusl.cz/ntk/nusl-414313.

Annotation:
This diploma thesis deals with the Precise Point Positioning (PPP) method in its various variants. The thesis describes the theoretical foundations of the PPP method and the most important systematic errors that affect its accuracy. The accuracy of the PPP method was evaluated using data from the permanent GNSS station CADM, which is part of the AdMaS research centre, covering the period 2018–2019. Results for different GNSS combinations and for different observation periods were compared. Finally, the accuracy was verified at 299 IGS GNSS stations.
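Accuracy evaluations of this kind generally reduce to summary statistics over coordinate differences. A minimal sketch computing a 3D RMS position error, with invented coordinates rather than the thesis data:

```python
import math

def rmse_3d(reference, measured):
    """Root-mean-square 3D position error between reference and test coordinates."""
    sq_errors = [sum((r - m) ** 2 for r, m in zip(ref_pt, test_pt))
                 for ref_pt, test_pt in zip(reference, measured)]
    return math.sqrt(sum(sq_errors) / len(sq_errors))

# Hypothetical E/N/U coordinate pairs in metres, illustration only
ref = [(0.00, 0.00, 0.00), (10.00, 5.00, 1.00), (3.00, 7.00, 2.00)]
ppp = [(0.01, -0.02, 0.03), (10.02, 4.99, 1.05), (2.98, 7.01, 2.02)]
print(f"3D RMSE = {rmse_3d(ref, ppp):.3f} m")
```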
34

Rathi, Nakul H. „Comparing the Accuracy of Intra-Oral Scanners for Implant Level Impressions Using Different Scanable Abutments“. The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1407200647.

35

Al-Rousan, Naief Mahmoud. „System calibration, geometric accuracy testing and validation of DEM and orthoimage data extracted from SPOT stereo-pairs using commercially available image processing systems“. Thesis, University of Glasgow, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.264262.

36

Sincock, Brenna Peggy. „Clinical applicability of adaptive speech testing : a comparison of the administration time, accuracy, efficiency and reliability of adaptive speech tests with conventional speech audiometry“. Thesis, University of Canterbury. Communication Disorders, 2008. http://hdl.handle.net/10092/2157.

Annotation:
Adaptive procedures are a common method of investigating sensory abilities in research settings; however, their use in clinical settings is more limited. Little research has investigated the implementation of adaptive procedures in audiological speech tests, and to date no studies have compared and evaluated adaptive speech tests against current clinical speech audiometry. This study investigated the advantages of using both closed-set and open-set adaptive speech tests in the clinical audiology setting, with respect to administration time, accuracy, efficiency and reliability. Preliminary testing of the two major adaptive procedures (staircase and maximum-likelihood procedures) was conducted using a selection of different parameters chosen on the basis of previous research (Kaernbach, 1991; García-Pérez, 1998) to determine the optimal procedures and parameters for clinical speech tests. Focus was given to the staircase procedures, with comparisons made between tests using constant step sizes and tests using larger step sizes at the beginning, and between different termination criteria. Both adaptive closed-set staircase tests (with either step-size variation) performed similarly, whereas the adaptive open-set staircase test with larger step sizes at the beginning showed advantages over the equivalent constant-step-size test in terms of administration time, accuracy and efficiency. The maximum-likelihood QUEST procedure showed advantages over the staircase procedures in terms of administration time; however, the reliability of both this test and conventional speech audiometry was poor, indicating that these tests are not the most suitable for a clinical setting. Subsequent clinical testing of the optimal adaptive speech tests with participants with varying degrees of hearing loss found that administration time was similar between conventional speech audiometry and the adaptive closed-set staircase tests when the optimal termination criteria identified in the preliminary testing phase were employed. The adaptive open-set staircase test with larger step sizes at the beginning showed the best accuracy of any of the tests when using the pure-tone average as a reference, while the efficiency of all the adaptive staircase tests was similar. Overall, the results highlight some of the potential advantages of adaptive speech testing in the clinical audiology setting; however, further studies are required to determine the specific parameters that achieve the best results.
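To illustrate the staircase logic compared above, with larger steps at the beginning that shrink at reversals, here is a minimal simulated adaptive track; the listener model and all parameters are invented for illustration, not taken from the thesis:

```python
import random

random.seed(1)  # reproducible illustration

def staircase(true_threshold, start=60.0, step=8.0, min_step=2.0, n_reversals=8):
    """Minimal up-down staircase sketch: the level drops after a correct
    response and rises after an incorrect one; the step size halves at each
    reversal (large steps early, small steps late) down to a floor, and the
    threshold is estimated as the mean level at the last four reversals."""
    level, direction, reversals = start, 0, []
    while len(reversals) < n_reversals:
        # Simulated listener: probability correct grows as level exceeds threshold
        p_correct = 1.0 / (1.0 + 10 ** ((true_threshold - level) / 10.0))
        new_dir = -1 if random.random() < p_correct else 1
        if direction != 0 and new_dir != direction:
            reversals.append(level)
            step = max(step / 2.0, min_step)
        direction = new_dir
        level += new_dir * step
    return sum(reversals[-4:]) / 4.0

print(f"estimated threshold ≈ {staircase(40.0):.1f} dB")
```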
37

Bongioanni, Vincent Italo. „Enhancing Network-Level Pavement Macrotexture Assessment“. Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/89326.

Annotation:
Pavement macrotexture has been shown to influence a range of safety and comfort issues, including wet-weather friction, splash and spray, ambient and in-vehicle noise, tire wear, and rolling resistance. While devices and general guidance exist to measure macrotexture, wide-scale collection and use of macrotexture data is neither mandated nor typically practiced in the United States. This work seeks to improve the methods used to calibrate, collect, pre-process, and distill macrotexture data into information useful to pavement managers. This is accomplished by (1) developing a methodology to evaluate and compare candidate data collection devices; (2) developing plans and procedures to evaluate the accuracy of high-speed network data collection devices against reference surfaces and measurements; (3) developing a method to remove erroneous data from emerging 3-D macrotexture sensors; (4) developing a model describing the change in macrotexture as a function of traffic; and (5) distilling the collected pavement surface profiles into parameters for predicting the aforementioned pavement surface properties. Various high-speed macrotexture measurement devices were shown to have good repeatability (between 0.06 and 0.09 mm MPD), and interchangeability of single-spot laser devices was demonstrated via a limits-of-agreement analysis. The operational factors of speed and acceleration were shown to affect the resulting MPD of several devices, and guidelines are given for vehicle speed and sensor exposure settings. Devices with single-spot and line lasers were shown to reproduce reference waveforms on manufactured surfaces within predefined tolerances. A model was developed that predicts future macrotexture levels (as measured by RMS) for pavements prone to bleeding due to rich asphalt content. Finally, several previously published macrotexture parameters, along with a suite of novel parameters, were evaluated for their effectiveness in predicting wet-weather friction and certain types of road noise. Many of the parameters evaluated outperformed the current metrics of MPD and RMS.
Doctor of Philosophy
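The MPD metric referred to above (Mean Profile Depth, standardised in ISO 13473-1) can be sketched as follows on a synthetic profile: detrend a segment, take the peak of each half, average the two peaks and subtract the mean level. Segment length and filtering details are simplified here, and the profile is invented for illustration:

```python
import numpy as np

def mpd(profile):
    """Mean Profile Depth sketch (after ISO 13473-1): remove the linear trend
    from a profile segment, split it into two halves, average the two
    half-segment peaks, then subtract the mean profile level."""
    x = np.asarray(profile, dtype=float)
    idx = np.arange(len(x))
    x = x - np.polyval(np.polyfit(idx, x, 1), idx)  # detrend (slope suppression)
    half = len(x) // 2
    return (x[:half].max() + x[half:].max()) / 2 - x.mean()

# Hypothetical 1 mm-spaced texture profile (mm), illustration only
rng = np.random.default_rng(0)
profile = 0.4 * np.sin(np.linspace(0, 12, 100)) + 0.1 * rng.standard_normal(100)
print(f"MPD ≈ {mpd(profile):.2f} mm")
```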
38

Nováčková, Soňa. „Testování přesnosti mobilního mapovacího systému MOMAS“. Master's thesis, Vysoké učení technické v Brně. Fakulta stavební, 2012. http://www.nusl.cz/ntk/nusl-225361.

Annotation:
The aim of this thesis is to introduce the mobile mapping system MOMAS, owned by Geodis Brno, spol. s r.o., and to test the accuracy of the system. Data were collected and processed at the Geodis company workplace. In addition, identical target points were surveyed, their coordinates determined and compared with the coordinates obtained by the MOMAS system, and the resulting coordinate differences were statistically processed.
39

Manda, David. „Mobilní mapování“. Master's thesis, Vysoké učení technické v Brně. Fakulta stavební, 2013. http://www.nusl.cz/ntk/nusl-226350.

Annotation:
The aim of this thesis is to introduce the mobile mapping system IP-S2, used by the company GEODIS BRNO, and to perform data collection with this system. Identical points were measured, their coordinates determined and compared with the coordinates obtained by the mobile mapping system. The conclusion of the thesis focuses on testing the accuracy of the mobile mapping system.
40

Duarte, Cláudia Filipa Pires. „Essays on mixed-frequency data : forecasting and unit root testing“. Doctoral thesis, Instituto Superior de Economia e Gestão, 2016. http://hdl.handle.net/10400.5/11662.

Annotation:
Doctorate in Economics
Over the last decades, researchers have had access to more comprehensive datasets, released on a more frequent and timely basis. Nevertheless, some variables, namely some key macroeconomic indicators, are released with a significant time delay and at low frequencies. This raises the question of how to deal with series released at different, mixed time frequencies. Over the years and for different purposes, several techniques have been put forward. This essay focuses on a particular technique: the MI(xed) DA(ta) S(ampling) framework proposed by Ghysels et al. (2004). In Chapter 1 I use MIDAS for forecasting euro area GDP growth using a small set of selected indicators in an environment with different sampling frequencies and asynchronous releases of information. I run a horse race between a wide set of MIDAS regressions and evaluate their performance, in terms of root mean squared forecast error, against AR and quarterly bridge models. The issue of how to include autoregressive terms in MIDAS regressions is disentangled. Different combinations of variables, through forecast pooling and multi-variable regressions, and different time frequencies are also considered. The results suggest that, in general, using MIDAS regressions increases forecast accuracy. In addition, I propose new unit root tests that exploit mixed-frequency information. Unit root tests typically suffer from low power in small samples; to overcome this shortcoming, tests exploiting information from stationary covariates have been proposed. I assess whether the power of some of these tests can be improved by exploiting mixed-frequency data through the MIDAS approach. In Chapter 2 I put forward a new class of mixed-frequency covariate-augmented Dickey-Fuller (DF) tests, extending the covariate-augmented DF (CADF) test proposed by Hansen (1995) and its modified version proposed by Pesavento (2006), which is similar to the GLS generalisation of the univariate ADF test in Elliott et al. (1996). As an alternative to the CADF tests, Elliott and Jansson (2003) proposed a feasible point-optimal unit root test in the presence of deterministic components (hereafter the EJ test), extending the univariate results in Elliott et al. (1996). In Chapter 3 I go one step further and include mixed-frequency data in the EJ testing framework. Given that implementing the EJ test requires estimating VAR models, I propose an unconstrained though parsimonious stacked skip-sampled reduced-form VAR-MIDAS model for plugging mixed-frequency data into the test regression, estimated using standard econometric techniques. The results of a Monte Carlo exercise indicate that mixed-frequency tests have better power than low-frequency tests. The gains are robust to the sample size, to the lag specification of the test regressions and to different combinations of time frequencies. Moreover, the EJ family of tests tends to have better power than the CADF family, with either low- or mixed-frequency data. An empirical illustration using the US unemployment rate is presented.
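A standard ingredient of MIDAS regressions is a parsimonious lag polynomial, such as the exponential Almon weighting, that compresses many high-frequency lags into a few parameters. A minimal sketch of that weighting scheme with invented numbers, not the chapters' actual specifications:

```python
import numpy as np

def exp_almon_weights(theta1, theta2, n_lags):
    """Exponential Almon lag weights used in MIDAS regressions to map
    high-frequency lags onto a low-frequency regressor; normalised to sum to 1."""
    j = np.arange(1, n_lags + 1)
    w = np.exp(theta1 * j + theta2 * j ** 2)
    return w / w.sum()

# Hypothetical example: 12 monthly indicator values feeding one quarter
monthly_lags = np.array([0.2, 0.3, 0.1, 0.4, 0.5, 0.3, 0.2, 0.1, 0.0, 0.2, 0.3, 0.4])
w = exp_almon_weights(0.05, -0.01, 12)
quarterly_regressor = float(w @ monthly_lags)
print(f"weighted low-frequency regressor = {quarterly_regressor:.3f}")
```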
41

Cookson, Jeremy L. „A method for testing the dynamic accuracy of Microelectro-Mechanical Systems (MEMS) Magnetic, Angular Rate, and Gravity (MARG) sensors for Inertial Navigation Systems (INS) and human motion tracking applications“. Thesis, Monterey, California : Naval Postgraduate School, 2010. http://edocs.nps.edu/npspubs/scholarly/theses/2010/Jun/10Jun%5FCookson.pdf.

Annotation:
Thesis (M.S. in Electrical Engineering)--Naval Postgraduate School, June 2010.
Thesis advisor: Yun, Xiaoping; second reader: Romano, Marcello. Author's subject terms: micro-electro-mechanical systems, MEMS, magnetic, angular rate, gravity sensor, MARG sensors, inertial navigation system, INS, inertial test, MicroStrain, 3DM-GX1, 3DM-GX3, CompactRIO, MATLAB GUI, dynamic accuracy test. Includes bibliographical references (p. 187-189). Also available in print.
42

Uher, Daniel. „Analýza procesu testování bezpečnostních prvků s airbagy v automobilech“. Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2012. http://www.nusl.cz/ntk/nusl-230175.

43

Bailey, Matthew Marlando. „An Extended Calibration and Validation of a Slotted-Wall Transonic Wall-Interference Correction Method for the National Transonic Facility“. Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/95882.

Annotation:
Correcting wind tunnel data for wall interference is a critical part of relating the acquired data to a free-air condition. Accurately determining and correcting for the interference caused by the presence of boundaries can be difficult, especially for facilities employing ventilated boundaries. In this work, three levels of ventilation at the National Transonic Facility (NTF) were modeled and calibrated with a general slotted wall (GSW) linear boundary condition to validate the computational model used to determine wall-interference corrections. Free-air lift, drag, and pitching-moment coefficient predictions were compared over a range of lift-production and Mach conditions to determine the uncertainty of the corrections process and its expected domain of applicability. Exploiting a previously designed statistical validation method, this effort extends the calibration and validation of a boundary-pressure wall-interference correction method: the foundational work covered blockage interference only, while the present work extends the assessment to encompass both blockage and lift interference. The validation method establishes independent cases that are then compared to rigorously determine the degree to which the correction method can converge free-air solutions for differing interference fields. The process first established an empty-tunnel calibration to obtain both a centerline Mach profile of the facility at various ventilation settings and a baseline wall pressure signature undisturbed by a test article. The wall boundary condition parameters were then calibrated with a test article producing both blockage and lift interference, and final corrected performance coefficients were compared across test-section ventilation configurations to validate the corrections process and assess its domain of applicability. During validation, homogeneous and discrete implementations of the boundary condition were discriminated, and the results indicated the comparative strength of the discrete implementation in capturing the experimental flow physics. Final results indicate that a discrete implementation of the General Slotted Wall boundary condition significantly reduces variations caused by differing interference fields: corrections performed with it collapse differing measurements of lift coefficient to within 0.0027, drag coefficient to within 0.0002, and pitching-moment coefficient to within 0.0020.
Doctor of Philosophy
The purpose of conducting experimental tests in wind tunnels is often to acquire a quantitative measure of test-article aerodynamic characteristics in such a way that those characteristics can be accurately translated into performance characteristics of the real vehicle the test article is intended to simulate. The difficulty of accurately simulating the real flow problem may not be readily apparent, but scientists and engineers have been working to improve this desired equivalence for the better part of the last half-century. The primary aspects of experimental aerodynamic simulation that make equivalence hard to attain are geometric fidelity, accurate scaling, and accounting for the presence of walls. The problem of scaling has been largely addressed by adequately matching conditions of similarity such as compressibility (Mach number) and viscous effects (Reynolds number). Accounting for the presence of walls, however, has presented ongoing challenges for ventilated boundaries; these challenges include difficulties in the correction process and extend to the determination of correction uncertainties.
44

Hájková, Alena. „Návrh interní metodiky pro měření výrobků a dílů na přístroji CMM UPMC Zeiss na pracovišti ČMI Brno“. Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2020. http://www.nusl.cz/ntk/nusl-417433.

Annotation:
This diploma thesis deals with the design of an internal methodology for measuring products and parts on the CMM UPMC Zeiss at CMI Brno. The first part analyses the current state of knowledge in the field of accurate measurement on coordinate measuring machines (CMMs), including definitions of basic metrological concepts, the methodology for determining and expressing measurement uncertainty, and a general description of CMMs. The thesis also contains a detailed description of the UPMC 850 CARAT S-ACC machine from Zeiss and summarises the requirements placed on a testing laboratory by the standard ČSN EN ISO/IEC 17025:2018. The next part focuses on defining and determining the measurement uncertainties for this CMM and on developing a testing procedure for measurements on the machine. The final part summarises the achieved results and gives recommendations for practice.
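Uncertainty methodologies of this kind rest on the GUM convention of combining standard-uncertainty components in quadrature and applying a coverage factor. A minimal sketch with a hypothetical budget, not the thesis's actual components:

```python
import math

def expanded_uncertainty(components, k=2.0):
    """Combine independent standard-uncertainty components in quadrature
    (GUM law of propagation for a simple additive model) and expand with
    coverage factor k (k = 2 gives roughly 95 % coverage)."""
    u_combined = math.sqrt(sum(u ** 2 for u in components))
    return k * u_combined

# Hypothetical budget in micrometres: repeatability, scale error, temperature
print(f"U = {expanded_uncertainty([0.30, 0.20, 0.15]):.2f} um (k=2)")
```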
45

Suba, Madeleine, und Mattias Lundgren. „Utvärdering av sensitivitet och specificitet för Acro Biotech Multitest 15 vid drogscreening“. Thesis, Hälsohögskolan, Högskolan i Jönköping, HHJ, Avd. för naturvetenskap och biomedicin, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-44931.

Annotation:
The emergency and psychiatric wards at the county hospital Ryhov in Jönköping use on-site drug testing of varying quality during evenings and night-time, when no staff are operating the chemistry analyzer Konelab Prime 30i. The aim of the study was to evaluate the sensitivity and specificity of the Acro Biotech Multitest 15 and to compare results from two different reading times. In total, 272 urine samples were collected for analysis. Positive and negative urine samples with drug concentrations within ±50% of each drug's cut-off were collected; later, concentrations outside this range were also included. The samples were tested with Multitest 15 at the laboratory for clinical chemistry at Ryhov after analysis with Konelab Prime 30i, which provided the reference results. The drugs tested were amphetamine, methamphetamine, ecstasy, benzodiazepines, buprenorphine, cocaine, methadone, morphine, THC, oxycodone and tramadol. Across all drugs, the sensitivity was 86.7%–100%, the specificity 33.3%–100% and the accuracy 71.4%–94.7%. The sample selection within ±50% of the cut-off value was limited, which significantly affected these calculations, and Konelab Prime 30i uses a semi-quantitative method providing only approximate concentration values for reference.
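The reported measures follow directly from a 2×2 confusion table against the reference method. A minimal sketch with hypothetical counts, not the study's data:

```python
def diagnostics(tp, fp, tn, fn):
    """Sensitivity, specificity and accuracy from a 2x2 confusion table,
    using the reference method's results as ground truth."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Hypothetical counts for a single drug, illustration only
sens, spec, acc = diagnostics(tp=26, fp=4, tn=38, fn=3)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}, accuracy {acc:.1%}")
```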
46

Hoffmannová, Lada. „Testování přesnosti mobilního laserového skenování“. Master's thesis, Vysoké učení technické v Brně. Fakulta stavební, 2020. http://www.nusl.cz/ntk/nusl-414306.

Annotation:
This diploma thesis describes data collection with the mobile mapping system Riegl VMX-450. The AdMaS science centre was captured with the system, and a calibration field was built there for the purpose of accuracy testing. The main part of the thesis deals with testing the accuracy of the resulting point cloud: reference coordinates of the calibration field points, determined by least-squares adjustment of a geodetic network, were compared with coordinates of the same points determined from the point cloud. A further part of the work tests the accuracy when targets are positioned at different height levels.
47

Haick, Angela. „Testing irregularities: are we getting accurate scores?“. La Verne, Calif.: University of La Verne, 2003. http://0-wwwlib.umi.com.garfield.ulv.edu/dissertations/fullcit/3076863.

48

Fulová, Silvia. „Stanovení nejistoty měření optického měřicí stroje pomocí laserinterferometru“. Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2021. http://www.nusl.cz/ntk/nusl-443250.

Annotation:
This thesis deals with determining the measurement uncertainty of the optical measuring machine Micro-Vu Sol 311, located at the Faculty of Mechanical Engineering in Brno. It gives an overview of coordinate measuring machines (CMMs) and analyses the present state of optical CMMs, including basic metrological concepts and the methodology for determining the uncertainty of a measuring instrument. The following parts describe in detail the Micro-Vu SOL 311 machine and the etalons used in determining the expanded measurement uncertainty: gauge blocks, a laser interferometer and a glass scale. The last part summarises the achieved results and gives recommendations for practice.
49

Yellowhair, Julius Eldon. „Advanced Technologies for Fabrication and Testing of Large Flat Mirrors“. Diss., The University of Arizona, 2007. http://hdl.handle.net/10150/195245.

Annotation:
Classical fabrication methods alone do not enable the manufacture of flat mirrors much larger than 1 meter. This dissertation presents the development of enabling technologies for manufacturing large, high-performance flat mirrors and lays the foundation for manufacturing very large flats. The enabling fabrication and testing methods were developed during the manufacture of a 1.6 m flat; the key advantage over classical methods is that the approach scales to flat mirrors up to 8 m in diameter. Large tools were used during surface grinding and coarse polishing of the 1.6 m flat. During this stage, electronic levels provided efficient measurements of global surface changes in the mirror. The electronic levels measure surface inclination, or slope, very accurately; from slope changes measured across the mirror surface, surface information can be obtained. Over 2 m, the electronic levels can measure low-order aberrations, including power and astigmatism, to 50 nm rms. The use of electronic levels for flatness measurements is analyzed in detail. Surface figuring was performed with smaller tools (15 cm to 40 cm in diameter). A radial stroker was developed to drive the smaller tools, providing variable tool stroke and rotation (up to 8 revolutions per minute). Polishing software, initially developed for stressed laps, enabled computer-controlled polishing and was used to generate simulated removal profiles by optimizing tool stroke and dwell to reduce the high zones on the mirror surface; the resulting simulations were then applied to the real mirror. The scanning pentaprism and the 1 m vibration-insensitive Fizeau interferometer provided accurate and efficient surface testing to guide the remaining fabrication. The scanning pentaprism, another slope test, measured power to 9 nm rms over 2 meters. The Fizeau interferometer measured 1 m subapertures and measured the 1.6 m flat to 3 nm rms; the 1 m reference flat was also calibrated to 3 nm rms. Both test systems are analyzed in detail. During surface figuring, fabrication and testing operated in a closed loop, resulting in rapid convergence of the mirror surface (11 nm rms power and 6 nm rms surface irregularity). At present, the surface figure of the finished 1.6 m flat is state of the art for 2-meter-class flat mirrors.
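The electronic-level principle described above, recovering surface shape from slope readings, amounts to integrating slopes over the measurement spacing. A minimal sketch with invented readings:

```python
def slopes_to_heights(slopes, spacing):
    """Integrate successive surface-slope readings (radians) taken at a
    uniform spacing (metres) into relative surface heights (metres)."""
    heights = [0.0]
    for s in slopes:
        heights.append(heights[-1] + s * spacing)
    return heights

# Hypothetical electronic-level readings across a flat, illustration only
print(slopes_to_heights([1e-6, -2e-6, 0.5e-6], spacing=0.1))
```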
50

Raclavský, David. „3D model vybraného objektu“. Master's thesis, Vysoké učení technické v Brně. Fakulta stavební, 2020. http://www.nusl.cz/ntk/nusl-414316.

Annotation:
The result of this diploma thesis is a photogrammetrically evaluated, georeferenced 3D model of an object and its surroundings, located in the AdMaS complex. The thesis describes in detail all phases of creating the 3D model, from camera selection and calibration to editing the finished model, and discusses software and methods for evaluating 3D models, including the optimal settings of the ContextCapture software. The accuracy of the resulting 3D model is tested according to the methodology of ČSN 01 3410 on the basis of control measurements.