To see the other types of publications on this topic, follow the link: Comparison assessment models.

Dissertations / Theses on the topic 'Comparison assessment models'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 29 dissertations / theses for your research on the topic 'Comparison assessment models.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses across a wide variety of disciplines and organise your bibliography correctly.

1

Patil, Sumeet Rajshekhar. "Identification, Application, and Comparison of Sensitivity Analysis Methods for Food Safety Risk Assessment Models." NCSU, 2001. http://www.lib.ncsu.edu/theses/available/etd-20011206-174616.

Full text
Abstract:

This paper presents the identification and qualitative comparison of sensitivity analysis methods that have been used across various disciplines and that merit consideration for application to food safety risk assessment models. Sensitivity analysis can help in identifying critical control points, prioritizing additional data collection or research, and verifying and validating a model. Ten sensitivity analysis methods are identified, including four mathematical methods, five statistical methods and one graphical method. The application of these methods is also illustrated with examples from various fields. The methods were compared on the basis of their applicability to different types of models; computational issues such as initial data requirements, time requirements, and complexity of application; representation of the sensitivity; and the specific uses of the methods. No one method is clearly best for food safety risk models. In general, the use of two or more methods may be needed to increase confidence in the rank ordering of key inputs. To identify specific issues with respect to application to a typical food safety risk model, the sensitivity analysis methods were applied to the risk assessment model of the public health impact of Vibrio parahaemolyticus (the Vp model). The Vp model was modified so that proper sensitivity analysis could be done on independent inputs. The results of the sensitivity analyses were interpreted and discussed in detail. The rank ordering of key inputs was reasonably similar for most of the methods. For example, five of the seven methods ranked water temperature, the number of oysters per meal, and a new input IUR in the top three. Time on water and an input IG were identified as the least important inputs by six methods.
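The abstract's point that agreement between two or more methods raises confidence in a rank ordering of inputs can be illustrated with a small sketch. The toy risk function, input names, ranges, and coefficients below are invented for illustration and are not the actual Vp model; the sketch simply contrasts one statistical method (sample correlation under Monte Carlo sampling) with one mathematical method (a one-at-a-time nominal-range swing).

```python
import random

# Toy illustration of comparing two sensitivity analysis methods on one model.
# The risk function, input names, ranges, and coefficients are invented and
# are NOT the actual Vp model; they only show why agreement between methods
# raises confidence in a rank ordering of inputs.
def risk(water_temp, oysters_per_meal, time_on_water):
    return 0.05 * water_temp + 0.3 * oysters_per_meal + 0.01 * time_on_water

names = ["water_temp", "oysters_per_meal", "time_on_water"]
bounds = {"water_temp": (10, 30), "oysters_per_meal": (1, 12), "time_on_water": (1, 8)}

random.seed(0)
samples = [tuple(random.uniform(*bounds[n]) for n in names) for _ in range(1000)]
outputs = [risk(*s) for s in samples]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Statistical method: rank inputs by |correlation| with the output.
corr_rank = sorted(
    names,
    key=lambda nm: -abs(pearson([s[names.index(nm)] for s in samples], outputs)),
)

# Mathematical method: nominal range (one-at-a-time swing from low to high bound).
base = [20.0, 6.0, 4.0]
def swing(nm):
    i = names.index(nm)
    lo, hi = bounds[nm]
    at_lo, at_hi = list(base), list(base)
    at_lo[i], at_hi[i] = lo, hi
    return abs(risk(*at_hi) - risk(*at_lo))

swing_rank = sorted(names, key=lambda nm: -swing(nm))
print(corr_rank, swing_rank)
```

When both methods produce the same ordering, as they do here, the analyst can be more confident that the ordering reflects the model rather than an artifact of one method.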

APA, Harvard, Vancouver, ISO, and other styles
2

Plevrakis, Viktor. "Comparison of risk assessment methods for polluted soils in Sweden, Norway and Denmark." Thesis, Stockholms universitet, Institutionen för naturgeografi och kvartärgeologi (INK), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-109376.

Full text
Abstract:
Land contamination is an acknowledged problem around the world due to its potentially adverse impacts on human health and the environment. In Europe specifically, there are an estimated 2,500,000 potentially contaminated sites. The risk that contaminated sites pose is investigated through risk assessments. The methods and models used in risk assessments, however, vary at both the national and the international level. In this study, the risk assessment methods and models for polluted soils used in Scandinavia and issued by the Environmental Protection Agencies were compared. The comparison aimed to (i) identify similarities and differences in the risk assessment methodology and risk assessment methods and (ii) investigate to what extent these differences can impact the results of the models and the implications regarding mitigation measures. The method and model comparison showed that Sweden and Norway have great similarities in assessing risks for contaminated soil; however, there are differences with Denmark on a conceptual level. When a common hypothetical petrol station with 20 soil samples was assessed, the results and conclusions of the three risk assessments were quite different: the site was judged to pose a risk to human health by the Danish model while complying with the quality criteria issued by the Norwegian model. The Swedish risk assessment concluded that the contaminant concentration in 3 out of 20 samples was potentially harmful for the environment but not for human health. The demonstrated divergence in the conclusions of the risk assessments has major implications and is of interest to four main groups: (1) land owners, who may be called on to cover the expenses for remedial action; (2) consultants and companies who perform risk assessments and land remediation; (3) the countries that have to meet national and international environmental goals and may also share or cover the cost of remedial action; and (4) people exposed to environments that could be deemed potentially harmful by a neighbouring country. The study was conducted in collaboration with URS Nordic.
APA, Harvard, Vancouver, ISO, and other styles
3

Nemeth, Lyle John. "A Comparison of Risk Assessment Models for Pipe Replacement and Rehabilitation in a Water Distribution System." DigitalCommons@CalPoly, 2016. https://digitalcommons.calpoly.edu/theses/1599.

Full text
Abstract:
A water distribution system is composed of thousands of pipes of varying materials, sizes, and ages. These pipes experience physical, environmental, and operational factors that cause deterioration and ultimately lead to their failure. Pipe deterioration results in increased break rates, decreased hydraulic capacity, and adverse effects on water quality. Pipe failures result in economic losses to the governing municipality due to loss of service, cost of pipe repair/replacement, damage incurred due to flooding, and disruptions to normal business operations. Inspecting the entire water distribution system for deterioration is difficult and economically unfeasible; therefore, it benefits municipalities to utilize a risk assessment model to identify the most critical components of the system and develop an effective rehabilitation or replacement schedule. This study compared two risk assessment models, a statistically complex model and a simplified model. Based on the physical, environmental, and operational conditions of each pipe, these models estimate the probability of failure, quantify the consequences of a failure, and ultimately determine the risk of failure of a pipe. The models differ in their calculation of the probability of failure. The statistically complex model calculates the probability of failure based on pipe material, diameter, length, internal pressure, land use, and age. The simplified model only accounts for pipe material and age in its calculation of probability of failure. Consequences of a pipe failure include the cost to replace the pipe, service interruption, traffic impact, and customer criticality impact. The risk of failure of a pipe is determined as the combination of the probability of failure and the consequences of a failure. Based on the risk of failure of each pipe within the water distribution system, a ranking system is developed, which identifies the pipes with the most critical risk. 
Utilization of this ranking system allows municipalities to effectively allocate funds for rehabilitation. This study analyzed the 628-pipe water distribution system in the City of Buellton, California. Four analyses were completed on the system: an original analysis and three sensitivity analyses. The sensitivity analyses displayed the worst-case scenarios for the water distribution system for each assumed variable. The results of the four analyses are summarized below.

Risk analysis | Simplified model | Complex model
Original analysis | All pipes were low risk | All pipes were low risk
Sensitivity analysis: older pipe age | Identified 2 medium-risk pipes | Identified 2 medium-risk pipes
Sensitivity analysis: lower anticipated service life | Identified 2 medium-risk pipes | Identified 9 high-risk and 283 medium-risk pipes
Sensitivity analysis: older pipe age and lower anticipated service life | Identified 1 high-risk and 330 medium-risk pipes | Identified 111 critical-risk, 149 high-risk, and 137 medium-risk pipes

Although the results appeared similar in the original analysis, it was clear that the statistically complex model incorporated additional deterioration factors into its analysis, which increased the probability of failure and ultimately the risk of failure of each pipe. With sufficient data, it is recommended that the complex model be utilized to more accurately account for the factors that cause pipe failures. This study showed that a risk assessment model is effective in identifying critical components and developing a pipe maintenance schedule. Utilization of a risk assessment model will allow municipalities to effectively allocate funds and optimize their water distribution system. Keywords: Water Distribution System/Network, Risk of Failure, Monte Carlo Simulation, Normal Random Variable, Conditional Assessment, Sensitivity Analysis.
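The core calculation the abstract describes, risk of failure as the combination of probability of failure and consequence of failure, followed by ranking, can be sketched in a few lines. The pipe records, material base rates, and ageing factor below are invented stand-ins, in the spirit of the simplified model (material and age only), not the thesis's actual parameters.

```python
from dataclasses import dataclass

# Sketch of the core calculation described above: risk of failure =
# probability of failure x consequence of failure, then rank pipes by risk.
# Pipe records, material base rates, and the ageing factor are invented.
@dataclass
class Pipe:
    pipe_id: str
    material: str
    age: float            # years in service
    replace_cost: float   # consequence components, arbitrary score units
    service_impact: float

MATERIAL_BASE = {"cast_iron": 0.04, "ductile_iron": 0.02, "pvc": 0.01}

def probability_of_failure(pipe: Pipe) -> float:
    # "Simplified model" style: material and age only.
    return min(1.0, MATERIAL_BASE[pipe.material] * (1.0 + pipe.age / 50.0))

def consequence(pipe: Pipe) -> float:
    return pipe.replace_cost + pipe.service_impact

def risk_of_failure(pipe: Pipe) -> float:
    return probability_of_failure(pipe) * consequence(pipe)

pipes = [
    Pipe("P1", "cast_iron", 80, 5.0, 4.0),
    Pipe("P2", "pvc", 15, 2.0, 1.0),
    Pipe("P3", "ductile_iron", 40, 3.0, 5.0),
]
ranking = sorted(pipes, key=risk_of_failure, reverse=True)
print([p.pipe_id for p in ranking])
```

The statistically complex model differs only in the probability term, which would also condition on diameter, length, internal pressure, and land use; the consequence and ranking steps stay the same.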
APA, Harvard, Vancouver, ISO, and other styles
4

Westerberg, Erik. "AI-based Age Estimation using X-ray Hand Images : A comparison of Object Detection and Deep Learning models." Thesis, Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-19598.

Full text
Abstract:
Bone age assessment can be useful in a variety of ways. It can help pediatricians predict growth and puberty entrance, identify diseases, and assess whether a person lacking proper identification is a minor or not. It is a time-consuming process that is also prone to intra-observer variation, which can cause problems in many ways. This thesis attempts to improve and speed up bone age assessments by using different object detection methods to detect and segment the bones anatomically important for the assessment, and by using these segmented bones to train deep learning models to predict bone age. A dataset consisting of 12,811 X-ray hand images of persons ranging from infancy to 19 years of age was used. For the first research question, we compared the performance of three state-of-the-art object detection models: Mask R-CNN, YOLO, and RetinaNet. We chose the best-performing model, YOLO, to segment all the growth plates in the phalanges of the dataset. We proceeded to train four different pre-trained models: Xception, InceptionV3, VGG19, and ResNet152, using both the segmented and unsegmented datasets, and compared the performance. We achieved good results using both the unsegmented and segmented datasets, although the performance was slightly better using the unsegmented dataset. The analysis suggests that we might be able to achieve higher accuracy using the segmented dataset by adding the detection of growth plates from the carpal bones, epiphysis, and diaphysis. The best-performing model was Xception, which achieved a mean average error of 1.007 years using the unsegmented dataset and 1.193 years using the segmented dataset.
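For readers unfamiliar with the error figure quoted above, the sketch below computes it in the usual way, assuming "mean average error" denotes the mean of absolute differences between predicted and true bone ages in years; the ages used are invented for illustration.

```python
# Mean absolute prediction error in years, assuming "mean average error"
# above denotes the mean of |true - predicted|. Ages here are invented.
def mean_absolute_error(true_ages, pred_ages):
    return sum(abs(t - p) for t, p in zip(true_ages, pred_ages)) / len(true_ages)

true_ages = [5.0, 11.5, 16.0, 8.25]   # hypothetical ground-truth bone ages
pred_ages = [5.8, 10.9, 17.1, 8.0]    # hypothetical model predictions
print(round(mean_absolute_error(true_ages, pred_ages), 4))
```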

The presentation was given online via Zoom.

APA, Harvard, Vancouver, ISO, and other styles
5

Corti, Rachele. "Benchmarking the ability of different stock-assessment models to capture the highly-fluctuating dynamics of small pelagics." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017.

Find full text
Abstract:
The dynamics of small pelagics are characterised by extreme variability owing to environmental factors, fishing and natural mortality. Because of these highly fluctuating dynamics, it is difficult to evaluate stock status through models. To assess these evaluation difficulties, a model comparison framework based on the Management Strategy Evaluation (MSE) approach was developed and tested on the Gulf of Cadiz anchovy stock. We used a minimum realistic model (MRM) as the operating model, including well-documented environmental drivers for this stock, to simulate abundance indices and catches, as well as a TAC value based on population size that serves as a reference. Outputs from the simulations were used as inputs for the implementation of a Gadget integrated model and some data-limited methods. This simulation approach allows testing how well Gadget and the data-limited methods capture the highly fluctuating dynamics of anchovy, measured as the distance from the TAC value estimated by the different models to the known reference. The results indicate that the Gadget TAC estimate was closer to the reference than those of the other methods in all the simulations. This high estimation power of Gadget suggests its suitability for the stock assessment of other small pelagics. This work presents a measure of how well this model accounts for external sources of variability coming from the effect of the environment, and a methodology that is flexible enough to be used with different models in other fisheries assessments.
APA, Harvard, Vancouver, ISO, and other styles
6

Mhlongo, Nanikie Charity. "Competency-Based assessment in Australia - does it work?" University of Canberra. Education and Community Studies, 2002. http://erl.canberra.edu.au./public/adt-AUC20050530.094237.

Full text
Abstract:
Since liberation in 1994, South Africa has faced many changes, including becoming a member of the international community. As part of the international community, South Africa finds itself largely faced with the challenges associated with this position. Looking at other countries, South Africa is realizing that the world is seeking better ways of educating people and organizing education and training systems in order to gain an edge in an increasingly competitive global economic environment. Success and survival in such a world demand that South Africa have a national education and training system that provides quality learning and promotes the development of a nation committed to life-long learning. Institutions of higher education in South Africa are currently changing their present education system to conform to a Competency-Based Training (CBT) system. This system has only been planned, not yet implemented, and it is not clear how CBT will be implemented, especially how learners are going to be assessed. Competency-Based Assessment (CBA) is an integral part of CBT that needs particular attention if the new system is to succeed. The key aims of this thesis are to investigate the current assessment policy and practice at the Canberra Institute of Technology (CIT), underpinned by a Competency-Based Training system. The project describes and analyzes the Competency-Based Assessment system used within CIT's CBT system, focusing on: observing classroom practice of CBA, analyzing students' and teachers' perceptions of their involvement with CBA, and analyzing employers' perceptions of the effectiveness of CBA. The main aim of this thesis is to suggest recommendations for an assessment model suitable for implementation within hospitality training institutions in South Africa.
APA, Harvard, Vancouver, ISO, and other styles
7

Chee, Yenlai. "Remote sensing analysis of cratered surfaces: Mars landing hazard assessment, comparison to terrestrial crater analogs, and Mars crater dating models." To access this resource online via ProQuest Dissertations and Theses @ UTEP, 2007. http://0-proquest.umi.com.lib.utep.edu/login?COPT=REJTPTU0YmImSU5UPTAmVkVSPTI=&clientId=2515.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Negretto, Giacomo. "The impact of spatial representation on flood hazard assessment: a comparison between 1D, quasi-2D and fully 2D hydrodynamic models of Rio Marano (Rimini)." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/17600/.

Full text
Abstract:
Four hydrodynamic numerical models were constructed and compared for a case study along the Marano stream (Rimini) using the software HEC-RAS. The models include a 1D model with extended cross-sections to represent the floodplain flow; a 1D model with the floodplain represented as hydrostatic Storage Areas; a coupled 1D/2D model, with a 1D representation of the main channel and a 2D representation of the floodplain inundation; and a fully 2D model, with both the main channel and floodplains represented through a two-dimensional hydrodynamic numerical scheme. First, 1D steady flow simulations were performed to obtain a conservative estimate of the maximum water levels along the stream. Second, unsteady flow simulations were performed with all four hydraulic models in order to assess flood attenuation associated with the routing. Third, the floodplain inundation dynamics resulting from each model were compared. The results of the steady flow simulations showed that six bridges are inadequate and that most of the natural floodplains of the stream are inundated in proportion to the flood discharge considered. Concerning the unsteady flow simulations, each model returned different results in terms of flood attenuation and floodplain inundation dynamics. The 1D model proved inadequate for modeling channel-floodplain interactions and floodplain inundation dynamics. The 1D model with Storage Areas proved suitable for assessing the flood attenuation induced by the introduction of levees separating the floodplain from the main channel. Regarding the coupled 1D/2D model, the results showed that the elevation profile of the structure coupling the 1D and 2D flow areas has a significant impact on modelling results. The 2D model returned the most detailed information regarding flood propagation in both the main channel and the floodplains.
APA, Harvard, Vancouver, ISO, and other styles
9

Shen, Hui. "Model comparison and assessment by cross validation." Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/1286.

Full text
Abstract:
Cross validation (CV) is widely used for model assessment and comparison. In this thesis, we first review and compare three v-fold CV strategies: best single CV, repeated and averaged CV, and double CV. The mean squared errors of the CV strategies in estimating the best predictive performance are illustrated using simulated and real data examples. The results show that repeated and averaged CV is a good strategy and outperforms the other two CV strategies for finite samples, in terms of both the mean squared error in estimating prediction accuracy and the probability of choosing an optimal model. In practice, when we need to compare many models, conducting the repeated and averaged CV strategy is not computationally feasible. We develop an efficient sequential methodology for model comparison based on CV that also takes into account the randomness in CV. The number of models is reduced via an adaptive, multiplicity-adjusted sequential algorithm, where poor performers are quickly eliminated. By exploiting matching of individual observations, it is sometimes even possible to establish the statistically significant inferiority of some models with just one execution of CV. This adaptive and computationally efficient methodology is demonstrated on a large cheminformatics data set from PubChem. The cross-validated mean squared error (CVMSE) is widely used to estimate the prediction mean squared error (MSE) of statistical methods. For linear models, we show how CVMSE depends on the number of folds v used in cross validation, the number of observations, and the number of model parameters. We establish that the bias of CVMSE in estimating the true MSE decreases with v and increases with model complexity. In particular, the bias may be very substantial for models with many parameters relative to the number of observations, even if v is large. These results are used to correct CVMSE for its bias.
We compare our proposed bias correction with that of Burman (1989), through simulated and real examples. We also illustrate that our method of correcting for the bias of CVMSE may change the results of model selection.
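The "repeated and averaged" strategy the abstract favours can be sketched in a few lines. This is a minimal illustration with a deliberately trivial predictor (the training-fold mean); the data, fold count, and repeat count are invented, and no bias correction of the kind the thesis develops is applied.

```python
import random

# Minimal sketch of "repeated and averaged" v-fold CV, assuming a deliberately
# trivial predictor (the training-fold mean). Data, v, and the repeat count
# are illustrative only; no bias correction is applied.
def cvmse_once(data, v, rng):
    idx = list(range(len(data)))
    rng.shuffle(idx)
    folds = [idx[i::v] for i in range(v)]      # v roughly equal folds
    sq_errs = []
    for fold in folds:
        held_out = set(fold)
        train = [data[i] for i in idx if i not in held_out]
        pred = sum(train) / len(train)         # "fit" on the training folds
        sq_errs += [(data[i] - pred) ** 2 for i in fold]
    return sum(sq_errs) / len(sq_errs)

def repeated_cvmse(data, v=5, repeats=20, seed=0):
    # Average CVMSE over many random fold assignments to damp CV randomness.
    rng = random.Random(seed)
    return sum(cvmse_once(data, v, rng) for _ in range(repeats)) / repeats

rng = random.Random(1)
data = [rng.gauss(0.0, 1.0) for _ in range(100)]
est = repeated_cvmse(data)
print(round(est, 3))
```

Averaging over repeated random fold assignments reduces the variance contributed by a single arbitrary split, which is the property the thesis credits for this strategy's good finite-sample behaviour.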
APA, Harvard, Vancouver, ISO, and other styles
10

Peterson, Viktor, and Zihao Wang. "Cross-comparison of Non-Linear Seismic Assessment Methods for Unreinforced Masonry Structures in Groningen." Thesis, KTH, Betongbyggnad, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-289386.

Full text
Abstract:
A large number of low-rise unreinforced masonry (URM) structures can be found in Groningen, the Netherlands. More and more induced earthquakes of short duration have been detected in this region due to gas exploitation. Local unreinforced masonry (URM) buildings were initially not designed to withstand seismic actions, so unexpected damage may occur due to their vulnerability, raising insecurity among residents. Existing low-rise masonry buildings in Groningen can be divided into different categories based on their characteristics. Two types of residential masonry buildings that fulfil the prerequisites for performing non-linear seismic assessment were chosen for study in this thesis project: the terraced house and the detached house. The seismic assessment of structures requires the use of both a discretization method and a seismic assessment method. The discretization method is used to translate the mechanical model into a finite element model used for the numerical analysis. Several methods have previously been shown to be applicable for seismic assessment, but this work investigates the implications of using a continuum model (CM) and an equivalent frame model (EFM) approach to discretization in the general-purpose finite element package described in DIANA-FEA-BV (2017). The continuum model approach adopted was validated against experimental results in a previous work by Schreppers et al. (2017) and is as such deemed representative of the physical behaviour of the mechanical models investigated. An equivalent frame model approach to be used with DIANA is proposed in the work by Nobel (2017). The continuum model approach uses continuum elements with a constitutive model developed for the seismic assessment of masonry structures. This constitutive model captures both shear and flexural failure mechanisms. The equivalent frame model approach uses a combination of numerically integrated beam elements and nodal interfaces, each with a distinct constitutive model, thus decoupling the description of the flexural and shear behaviour. This approach aims to capture the macro-behaviour at the structural level. The applicability of the proposed equivalent frame model approach is evaluated by how well it replicates the validated continuum model approach results. The two discretization methods described are evaluated using two types of seismic assessment methods. The first seismic assessment method consists of performing a quasi-static non-linear pushover analysis (NLPO) on the model. This results in the pushover curve, which describes the global behaviour of the model under an equivalent lateral load based on the fundamental mode shape of the structure. The pushover curve is then used with the N2-method described in EN1998-1 (2004) to assess at which peak ground acceleration (PGA) the model reaches the near-collapse (NC) limit state. The second seismic assessment method consists of performing dynamic non-linear time-history analyses (NLTH). This method uses recorded accelerograms to impose the inertial forces. The PGA for the accelerogram where the near-collapse limit state is reached is compared to the PGA from the use of the N2-method. The applicability of the pushover analysis in conjunction with the N2-method is evaluated by how well it replicates the PGA found from the time-history analyses and by how well it replicates local failure mechanisms. Therefore, the main objectives of this project can be described by the following two questions: (i) To what extent can the equivalent frame method be applied as a proper discretization method for pushover analyses and time-history analyses of low-rise unreinforced masonry residential buildings in the Groningen region? (ii) To what extent can the non-linear pushover method be adopted to assess the seismic behaviour of low-rise unreinforced masonry residential buildings in the Groningen region? The applicability of the equivalent frame model was found to vary. For describing local failure mechanisms its applicability is poor, and further work on connecting the edge piers to transverse walls is needed. For seismic assessment using the N2-method, the applicability of the equivalent frame model approach is sensible. The conservative displacement capacity counteracts the fact that it is worse at describing local unloading, which produced a larger initial equivalent stiffness of the bi-linear curves in comparison to the continuum model. For seismic assessment using the time-history signals, its applicability is possible. While it could show different behaviour in terms of displacement and damping forces, it still showed a similar PGA at the near-collapse limit state for the cases at hand. The seismic assessment of the terraced and detached houses by the N2-method is similar to the seismic prediction from the time-history analyses. However, there are still some variations in the initial stiffness, force capacity and displacement capacity between these two assessment methods due to the assumptions and limitations in this study. Overall, considering the pros and cons of the quasi-static pushover method, it is deemed applicable for the seismic assessment of unreinforced masonry structures in the Groningen area.
APA, Harvard, Vancouver, ISO, and other styles
11

Rajele, Molefi Joseph. "A comparison of SAAS and chemical monitoring of the rivers of the Lesotho Highlands Water Project." Thesis, University of the Western Cape, 2004. http://etd.uwc.ac.za/index.php?module=etd&amp.

Full text
Abstract:
The Lesotho Highlands Development Authority routinely uses the South African Scoring System version 4 (SASS4) in conjunction with water chemistry to monitor water quality of rivers in the Lesotho Highlands Water Project areas. The objective of this study was to test the efficiency of SASS4 in these areas.
APA, Harvard, Vancouver, ISO, and other styles
12

Fitzgibbon, Daniel Nathan. "Assessment and comparison of osseointegration in conventionally and immediately restored titanium implants in a sheep model." University of Otago. School of Dentistry, 2008. http://adt.otago.ac.nz./public/adt-NZDU20081201.161832.

Full text
Abstract:
Objectives: The present work was undertaken to compare osseointegration of immediately and delayed restored implants in a sheep model, and to compare methods of assessing osseointegration. Methods: Twenty wide-platform implants were placed in the posterior mandibles of 10 sheep, 3 months after premolar extractions. Ten were control implants placed and restored after 3 months of submerged healing. Ten were test implants placed contralaterally and immediately restored. Animals were sacrificed after a further 3 months of healing. At each experimental stage implant stability was measured with resonance frequency analysis (RFA) and standardized radiographs were taken. Tissue blocks with the implants were embedded in acrylic resin. The specimens were analysed on three-dimensional micro-tomographic (micro-CT) images. Ground sections of the tissue blocks were then prepared for light microscopy and quantitative morphometry. Morphometric parameters computed by both methods were mean percent bone-to-implant contact (BIC) and mean percent bone density (BD). Radiographic, stability and morphometric measurements were compared statistically. Results: The survival rate was 60% (controls) versus 40% (test) (p=0.28). Mean crestal bone levels after three months of restoration did not differ significantly between the control (5.54 ± 0.92) and test groups (4.35 ± 1.61) (p=0.56). All surviving implants were stable at stage three and RFA values in implant stability quotient (ISQ) did not differ significantly between the two groups (test 82.3 ± 3.9 versus control 78.8 ± 4.3, p=0.36). No correlation was found between crestal bone loss and RFA (Spearman's rho = -0.27, p=0.46). Histomorphometric analysis found no statistical difference (%BIC test 65.65 ± 12.7%, control 53.36 ± 6.41%, p=0.18; and %BD test 54.84 ± 8.45%, control 64.69 ± 13.57%, p=0.11). A similar trend was observed for mean micro-CT (%BIC test 65.72 ± 72, control 50.84 ± 4.19, p=0.11). Histology revealed high-density inflammatory infiltrates beneath the sulcular and pocket epithelium. No significant difference was found between histomorphometric (HMA) and micro-CT analysis (%BIC p=0.08, %BD p=0.08). A statistically significant correlation was observed between HMA and micro-CT for %BIC (Spearman's rho = 0.89, p=0.02) but not %BD (Spearman's rho = 0.51, p=0.30). Conclusions: The results suggest that the sheep mandibular model has limited potential for the evaluation of implants designed for poor-quality bone and for the assessment of implant loading protocols. This thesis does highlight the potential for the use of this model in peri-implantitis studies. The results suggest that morphometric variables determined by HMA and micro-CT analysis are comparable; however, further studies are required to standardize the micro-CT protocol to reduce metal artifacts and enhance bone-implant contrast.
APA, Harvard, Vancouver, ISO, and other styles
13

Gomez, Vera Gabriela. "Languages as factors of reading achievement in PIRLS assessments." Phd thesis, Université de Bourgogne, 2011. http://tel.archives-ouvertes.fr/tel-00563710.

Full text
Abstract:
The starting point of this research is the question: may reading acquisition be more or less effective depending on the language in which it is performed? Two categories for classifying languages have been developed. First, the notion of linguistic family is employed to describe languages from a cultural and historical perspective. Second, the notion of orthographic depth is used to differentiate languages according to the correspondence between orthography and phonetics. These categories have been related to the PIRLS 2001 and 2006 databases (international assessments of reading developed by the IEA), the aim being to connect reading achievement to the language in which students answered the test. However, it is clear that language is not an isolated factor, but part of a complex structure of determinants of reading. Therefore, factors related to students and schools have also been incorporated into this research. Moreover, the multidimensionality of the reading process has been taken into account by distinguishing in the analysis the different aspects that make up the process according to PIRLS: informative reading, literary reading, and high- and low-order comprehension processes. To answer the questions posed by this research, a hierarchical (multilevel) statistical model was developed, capable of accounting for the connection between reading achievement, language and other associated factors. As a result, contextual factors (home and school) proved more significant than language. Moreover, the determinants may vary across educational systems.
APA, Harvard, Vancouver, ISO, and other styles
14

Sims, Maureen Estelle. "Rubric Rating with MFRM vs. Randomly Distributed Comparative Judgment: A Comparison of Two Approaches to Second-Language Writing Assessment." BYU ScholarsArchive, 2018. https://scholarsarchive.byu.edu/etd/7312.

Full text
Abstract:
The purpose of this study is to explore a potentially more practical approach to direct writing assessment using computer algorithms. Traditional rubric rating (RR) is a common yet highly resource-intensive evaluation practice when performed reliably. This study compared the traditional rubric model of ESL writing assessment and many-facet Rasch modeling (MFRM) to comparative judgment (CJ), the new approach, which shows promising results in terms of reliability and validity. We employed two groups of raters, novice and experienced, and used essays that had been previously double-rated, analyzed with MFRM, and selected with fit statistics. We compared the results of the novice and experienced groups against the initial ratings using raw scores, MFRM, and a modern form of CJ: randomly distributed comparative judgment (RDCJ). Results showed that the CJ approach, though not appropriate for all contexts, can be valid and as reliable as RR while requiring less time to generate procedures, train and norm raters, and rate the essays. Additionally, the CJ approach is more easily transferable to novel assessment tasks while still providing context-specific scores. Results from this study will not only inform future studies but can help guide ESL programs to determine which rating model best suits their specific needs.
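As a concrete, simplified picture of what a comparative-judgment engine does with pairwise decisions, the sketch below fits a basic Bradley-Terry model by minorization-maximization. The essays and judgment counts are invented, and RDCJ as studied in the thesis involves more than this; the code only shows how pairwise preferences become a rank order.

```python
# Minimal Bradley-Terry sketch fitted by minorization-maximization:
# pairwise "essay A was preferred over essay B" counts become one scale
# score per essay. The essays and judgment counts are invented; RDCJ as
# studied in the thesis involves more than this basic model.
def bradley_terry(items, wins, iters=200):
    strength = {it: 1.0 for it in items}
    for _ in range(iters):
        new = {}
        for i in items:
            w_i = sum(wins.get((i, j), 0) for j in items if j != i)
            denom = sum(
                (wins.get((i, j), 0) + wins.get((j, i), 0))
                / (strength[i] + strength[j])
                for j in items if j != i
            )
            new[i] = w_i / denom if denom else strength[i]
        total = sum(new.values())                    # renormalize each pass
        strength = {i: v * len(items) / total for i, v in new.items()}
    return strength

essays = ["A", "B", "C"]
judgments = {("A", "B"): 8, ("B", "A"): 2,
             ("A", "C"): 7, ("C", "A"): 3,
             ("B", "C"): 6, ("C", "B"): 4}
scores = bradley_terry(essays, judgments)
ranked = sorted(essays, key=scores.get, reverse=True)
print(ranked)
```

The appeal for assessment programs is that each judgment is a quick binary decision rather than a full rubric pass, and the model turns many such decisions into a reliable scale.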
APA, Harvard, Vancouver, ISO, and other styles
15

Gómez, Vera Gabriela. "Languages as factors of reading achievement in PIRLS assessments." Thesis, Dijon, 2011. http://www.theses.fr/2011DIJOL013/document.

Full text
Abstract:
Le point de départ de cette recherche concerne la question suivante : l’acquisition de la lecture, peut-elle être plus ou moins efficace en fonction de la langue dans laquelle elle s’effectue ? Deux catégories pour classer les langues ont été définies dans ce travail. Premièrement, la notion de famille linguistique est à la base d’une description des langues à partir d'une perspective historique et culturelle. Deuxièmement, la notion de profondeur orthographique est mobilisée, celle-ci différencie les langues en fonction de la correspondance entre l'orthographe et la phonétique. Ces catégories ont été mises en rapport avec les bases de données PIRLS 2001 et 2006 (étude internationale sur la lecture menée par l'IEA), afin de relier la performance en lecture et la langue dans laquelle les élèves ont répondu au test. Toutefois, il est clair que la langue n'est pas un facteur isolé, car elle fait partie d'un ensemble complexe de déterminants; ainsi, des facteurs liés aux élèves et au milieu scolaire ont également été incorporés dans l'étude. En outre, il a été tenu compte de la multidimensionnalité du processus de lecture, en distinguant dans les analyses les différents domaines mesurés par l’enquête : lecture informative, lecture littéraire, et compréhension des processus d'ordre complexe et simple. Pour répondre aux questions de cette recherche nous avons élaboré un modèle statistique hiérarchique capable de rendre compte de la relation entre la compréhension de la lecture, la langue et les facteurs qui y sont associés. En dernière analyse, les facteurs contextuels (individuels et scolaires) se sont révélés être plus importants que la langue elle-même. En outre, les déterminants du niveau en lecture dépendent des systèmes éducatifs observés dans cette enquête.
The starting point of this research is the question: can reading acquisition be more or less effective depending on the language in which it is performed? Two categories for classifying the languages have been developed. First, the notion of linguistic family is employed to describe the languages from a cultural and historical perspective. Secondly, the notion of orthographic depth is used to differentiate the languages according to the correspondence between orthography and phonetics. These categories have been related to the PIRLS 2001 and 2006 databases (international assessments of reading developed by the IEA), the aim being to connect reading achievement to the language in which students answered the test. However, it is clear that language is not an isolated factor, but part of a complex structure of determinants of reading. Therefore, factors related to students and schools have also been incorporated into this research. Moreover, the multidimensionality of the reading process has been taken into account by distinguishing in the analysis the different aspects that make up the process according to PIRLS: informative reading, literary reading, and comprehension processes of high and low order. To answer the questions posed by this research, a hierarchical (multilevel) statistical model was developed, able to account for the connection between reading achievement, language and other associated factors. As a result, contextual factors (home and school) proved more significant than language. Moreover, the determinants may vary across educational systems.
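The hierarchical (multilevel) modelling described here rests on partitioning score variance between levels; the school-level share of variance is the intraclass correlation (ICC). A minimal pure-Python sketch on simulated data (the numbers and the one-way ANOVA estimator are illustrative assumptions, not the PIRLS analysis):

```python
import random
from statistics import mean

def simulate_scores(n_schools, n_students, school_sd, student_sd, seed=1):
    """Two-level data: score = grand mean + school effect + student noise."""
    rng = random.Random(seed)
    data = []
    for _ in range(n_schools):
        school_effect = rng.gauss(0, school_sd)
        data.append([500 + school_effect + rng.gauss(0, student_sd)
                     for _ in range(n_students)])
    return data

def icc(data):
    """One-way ANOVA estimate of the intraclass correlation: the share
    of total variance attributable to the school level (balanced design)."""
    k = len(data)          # schools
    n = len(data[0])       # students per school
    grand = mean(x for g in data for x in g)
    msb = n * sum((mean(g) - grand) ** 2 for g in data) / (k - 1)
    msw = sum((x - mean(g)) ** 2 for g in data for x in g) / (k * (n - 1))
    var_school = max((msb - msw) / n, 0.0)   # clamp negative estimates
    return var_school / (var_school + msw)
```

When school effects dominate, the ICC approaches 1; when contextual (school) factors matter little, it approaches 0.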
APA, Harvard, Vancouver, ISO, and other styles
16

Blum-Evitts, Shemariah. "Designing a foodshed assessment model: guidance for local and regional planners in understanding local farm capacity in comparison to local food needs." Amherst, Mass. : University of Massachusetts Amherst, 2009. http://scholarworks.umass.edu/theses/288/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Martinez, Nicole. "Selected techniques in radioecology| Model development and comparison for internal dosimetry of rainbow trout (Oncorhynchus mykiss) and feasibility assessment of reflectance spectroscopy use as a tool in phytoremediation." Thesis, Colorado State University, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=3624304.

Full text
Abstract:

Over the past five to ten years, public interest in nuclear energy, decommissioning, and waste management and stewardship has increased, leading to a renewed interest in radioecology (Kuhne 2012), or the study of the relationships between ionizing radiation and the environment (Whicker and Shultz 1982a). Several groups supporting collaborative radioecological research have recently been established, including the European Radioecology ALLIANCE in 2009 (Hinton et al. 2013), the Strategy for Allied Radioecology (STAR) network in 2011 (Kuhne 2012), and the National Center for Radioecology (NCoRE) in the United States in 2011 (Kuhne 2012). The earthquake, tsunami, and subsequent nuclear accident at Fukushima in March of 2011 further emphasized the importance of radioecology in providing timely and technically sound information (such as the transport and fate of radionuclides, potential doses and risks, etc.) for decision making in emergency response as well as in clean up and recovery (Kuhne 2012; Hinton et al. 2013) for both humans and their environment. Although the original and primary aims of the ICRP radiation protection recommendations have been to prevent deterministic effects and minimize stochastic effects to human beings from radiation exposure, the protection framework has recently been extended to include protecting the environment from harmful effects of radiation as well (ICRP 2007, 2008b, 2009). Radioecology is an interdisciplinary science that encompasses a wide array of topics, including, among others, radiation transport, effects, risk assessment, and remediation (Whicker and Shultz 1982a; Hinton et al. 2013). I consider two topics from different areas of radioecology in this dissertation: radionuclide uptake and dosimetry as well as an assessment of a technique for potential use in remediation. 
Part 1 outlines the development of empirical and computational models for prediction of activity concentration and subsequent radiation dose, respectively, in relevant rainbow trout (Oncorhynchus mykiss) organs for selected radionuclides. Radiation dose rates to biota are typically approximated utilizing dose conversion factors (DCF), which are values for absorbed dose rate per activity concentration in the body or organ (i.e., mGy d⁻¹ per Bq g⁻¹). The current methodology employed by both the International Commission on Radiological Protection (ICRP) and within the Environmental Risks from Ionizing Radiation in the Environment (ERICA) Integrated Approach for calculating dose conversion coefficients is to use Monte Carlo modeling of a homogeneously distributed radionuclide within an ellipsoidal phantom chosen to represent a particular organism. However, more accurate estimates can be made based on specific absorbed fractions and activity concentrations. The first study in Part 1 examines the effects of lake trophic structure on the uptake of iodine-131 (131I) in rainbow trout and considers a simple computational model for the estimation of resulting radiation dose. Iodine-131 is a major component of the atmospheric releases following reactor accidents, and the passage of 131I through food chains from grass to human thyroids has been extensively studied. By comparison, the fate and effects of 131I deposition onto lakes and other aquatic systems have been less studied. In this study we reanalyze 1960s data from experimental releases of 131I into two small lakes and compare the effects of differences in lake trophic structures on 131I accumulation in fish. The largest concentrations in the thyroids of trout (Oncorhynchus mykiss) may occur from 8 to 32 days post initial release. DCFs for trout for whole body as well as thyroid were computed using Monte Carlo modeling with an anatomically appropriate model of trout thyroid structure. 
Activity concentration data were used in conjunction with the calculated DCFs to estimate dose rates and ultimately determine cumulative radiation dose (Gy) to the thyroids after 32 days. The estimated cumulative thyroid doses at 32 days post-release ranged from 6 mGy to 18 mGy per 1 Bq mL⁻¹ of initial 131I in the water, depending upon fish size. The subsequent studies in Part 1 seek to develop and compare different, increasingly detailed anatomical phantoms for O. mykiss for the purpose of estimating organ radiation dose and dose rates from 131I uptake and from molybdenum-99 (99Mo) uptake. Model comparison and refinement is important to the process of determining both dose rates and dose effects, and we develop and compare three models for O. mykiss: a simplistic geometry considering a single organ, a more specific geometry employing anatomically relevant organ size and location, and voxel reconstruction of internal anatomy obtained from CT imaging (referred to as CSUTROUT). Dose Conversion Factors (DCFs) for whole body as well as selected organs of O. mykiss were computed using Monte Carlo modeling, and combined with the empirical models for predicting activity concentration, to estimate dose rates and ultimately determine cumulative radiation dose (µGy) to selected organs after several half-lives of either 131I or 99Mo. The different computational models provided similar results, especially for organs that were both the source and target of radiation (less than 30% difference between estimated doses). Although CSUTROUT was the most anatomically realistic phantom, it required much more resource dedication to develop than did the stylized phantom for similar results. Additionally, the stylized phantom can be scaled to represent trout sizes whereas CSUTROUT cannot be. 
There may be instances where a detailed phantom such as CSUTROUT is appropriate, as it will provide the most accurate radiation dose and dose rate information, but generally, the stylized phantom appears to be the best choice for an ideal balance between accuracy and resource requirements. Part 2 considers the use of reflectance spectroscopy as a remediation tool through its potential to determine plant stress from metal contaminants. Reflectance spectroscopy is a rapid and non-destructive analytical technique that may be used for assessing plant stress and has potential applications for use in remediation. Changes in reflectance such as that due to metal stress may occur before damage is visible, and existing studies have shown that metal stress does cause changes in plant reflectance. The studies in Part 2 further investigate the potential use of reflectance spectroscopy as a method for assessing metal stress in plants. In the first study, Arabidopsis thaliana plants were treated twice weekly in a laboratory setting with varying levels (0 mM, 0.5 mM, or 5 mM) of cesium chloride (CsCl) solution, and reflectance spectra were collected every week for three weeks using an ASD FieldSpec Pro spectroradiometer with both a contact probe and a field of view probe at 36.8 and 66.7 cm above the plant. As metal stress is known to mimic drought stress, plants were harvested each week after spectra collection for determination of relative water content and chlorophyll content. A visual assessment of the plants was also conducted using point observations on a uniform grid of 81 points. Two-way ANOVAs were performed on selected vegetation indices (VI) to determine the significance of the effects of treatment level and length of treatment. Linear regression was used to relate the most appropriate vegetation indices to the aforementioned endpoints and to compare results provided by the three different spectra collection techniques. 
One-way ANOVAs were performed on selected VI at each time point to determine which, if any, indices offered a significant prediction of the overall extent of Cs toxicity. Of the 14 vegetation indices considered, the two most significant were the slope at the red edge position (SREP) and the ratio of reflectance at 950 nm to the reflectance at 750 nm (R950/R750). Contact probe readings and field of view readings differed significantly. Field of view measurements were generally consistent at each height. The second study investigated the potential use of reflectance spectroscopy as a method for assessing metal stress across four different species of plants, namely Arabidopsis thaliana, Helianthus annuus, Brassica napus var. rapa, and Zea mays. The purpose of this study was to determine whether a quantifiable relationship exists between reflectance spectra and lithium (Li) contamination in each species of plant considered, and if such a relationship exists similarly across species. Reflectance spectra were collected every week for three weeks using an ASD FieldSpec Pro Spectroradiometer with a contact probe and a field of view probe for plants treated twice weekly in a laboratory setting with 0 mM or 15 mM of lithium chloride (LiCl) solution. Plants were harvested each week immediately after spectra collection for determination of relative water content and chlorophyll content. Linear regression was used to relate the most appropriate vegetation indices (determined by the Pearson correlation coefficient) to the aforementioned endpoints and to compare results provided by the different spectra collection techniques. Two-way ANOVAs were performed on 12 selected vegetation indices (VI) for each species individually to determine the significance of the effects of treatment level and length of treatment on a species basis. Balanced ANOVAs were conducted across all species to determine significance of treatment, time, and species. 
LiCl effects and corresponding reflectance shifts were significant for A. thaliana, but Z. mays and H. annuus showed little response to LiCl at the treatment level considered in this study, with no significant differences in relative water content or chlorophyll content by treatment level. B. rapa reflectance spectra responded similarly to Li exposure as Z. mays, but B. rapa did have significant differences in relative water content by treatment level. All species demonstrated a potential stimulatory effect of LiCl, with at least one week of increased reflectance in the near-IR. Different VIs proved to be the best predictors of endpoint values for each species, with only SIPI and the ratio of reflectance at 1390 nm to the reflectance at 1454 nm (R1390/R1454) common between species. The most significant VI considering all species together was SIPI, although A. thaliana effects dominate this result. VIs determined separately by CP and FOV were occasionally well-related, but this relationship was inconsistent between species, further supporting the conclusion in the previous study that CP and FOV are not interchangeable. These techniques should either be used as complements or independently, depending on the application.
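The cumulative doses reported in Part 1 follow from integrating dose rate over time: dose rate is the DCF times the activity concentration, and the concentration decays exponentially. A hedged sketch of that arithmetic (the DCF and concentration values below are placeholders, not the study's data):

```python
import math

def cumulative_dose(dcf, c0, half_life_d, days, steps=10000):
    """Numerically integrate dose rate DCF * C(t), where
    C(t) = c0 * exp(-lambda * t), with the trapezoid rule.

    dcf: dose conversion factor, mGy/d per Bq/g
    c0:  initial activity concentration, Bq/g
    Returns cumulative dose in mGy."""
    lam = math.log(2) / half_life_d
    rate = lambda t: dcf * c0 * math.exp(-lam * t)
    dt = days / steps
    total = 0.0
    for i in range(steps):
        t = i * dt
        total += 0.5 * (rate(t) + rate(t + dt)) * dt
    return total

def cumulative_dose_analytic(dcf, c0, half_life_d, days):
    """Closed form of the same integral:
    DCF * c0 / lambda * (1 - exp(-lambda * days))."""
    lam = math.log(2) / half_life_d
    return dcf * c0 / lam * (1.0 - math.exp(-lam * days))
```

For 131I (half-life about 8.02 d), integrating over 32 days recovers most of the infinite-time dose, since roughly four half-lives have elapsed.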

APA, Harvard, Vancouver, ISO, and other styles
18

Beauchet, Sandra. "Evaluation multicritère d'itinéraires techniques viticoles associant l'évaluation environnementale par Analyse du Cycle de Vie avec l'évaluation de la qualité du raisin. : Contribution au choix des pratiques pour une amélioration des itinéraires techniques viticoles." Electronic Thesis or Diss., Angers, 2016. http://www.theses.fr/2016ANGE0078.

Full text
Abstract:
La production du raisin à l’origine de vins AOC (Appellation d’Origine Contrôlée) est soumise à des cahiers de charges imposant des exigences en termes de rendement, de qualité des produits et de pratiques. En plus de ces exigences, le viticulteur doit désormais faire évoluer ses itinéraires techniques viticoles pour en améliorer les performances environnementales. Or, définir des lignes directrices d’action pour une amélioration des pratiques viticoles en s'appuyant sur les évaluations environnementales et de qualité du raisin est complexe tant ces évaluations fournissent un nombre important d'indicateurs. L’objectif de la thèse est la construction d’une méthode d’évaluation prenant en compte en parallèle l’évaluation de la performance environnementale des itinéraires techniques viticoles avec la qualité du raisin et permettant d’aider le viticulteur et son conseiller à identifier les pratiques assurant le meilleur compromis entre « performance environnementale » et « qualité de production ». Cette méthode permet d’analyser un itinéraire technique viticole mais aussi de comparer ce dernier à d’autres. Cette méthode a été développée et testée à l’aide de cinq itinéraires techniques viticoles aux pratiques différenciées sur le cépage Chenin blanc en moyenne vallée de la Loire pendant deux années aux climats contrastés. Les travaux ont permis de (i) faire une adaptation du calcul d’Analyse du Cycle de Vie (ACV) spécifique au système de production viticole, (ii) montrer l’importance de la variabilité interannuelle dans les résultats d’évaluation environnementale par ACV. Les travaux ont aussi abouti à l’élaboration d’un modèle explicatif de la qualité du raisin à partir des pratiques viticoles et des facteurs pédoclimatiques permettant d’étudier l’incidence potentielle d’un changement de pratiques sur les critères d’évaluation de la qualité du raisin. 
La construction de la méthode multicritères CONTRA-QUALENVIC pour la viticulture, principale issue de ce travail, comporte (i) la construction de règles de décision et de fonctions mathématiques pour y répondre et (ii) des réunions d’experts pour caractériser les critères à agréger et les pondérer. La méthode CONTRA-QUALENVIC a été éprouvée en la comparant à d’autres méthodes. Pour conclure, la méthode CONTRA-QUALENVIC est une méthode pertinente pour l’aide à la décision dans le cadre d’une amélioration continue des pratiques viticoles vers un meilleur respect de l’environnement tout en préservant la qualité du raisin
Grape production for PDO wines (Protected Designation of Origin) is subject to specifications imposing requirements in terms of yield, product quality and practices. In addition to these requirements, the winemaker must now make his viticultural technical management routes evolve to improve their environmental performance. Yet defining guidelines for improving viticultural practices on the basis of environmental and grape-quality assessments is complex, since each of these assessments provides a large number of indicators. The aim of the thesis is to construct an evaluation method that considers the environmental performance of viticultural technical management routes alongside grape quality, helping the winemaker and his advisor identify the practices that ensure the best compromise between "environmental performance" and "product quality". This method makes it possible to analyze a technical management route and also to compare it with others. It was developed and tested on five technical management routes with differentiated practices, on the Chenin Blanc grape variety in the middle Loire Valley, over two years with contrasting climates. The study made it possible to (i) adapt the Life Cycle Assessment (LCA) calculation specifically to the viticultural production system, and (ii) show the importance of interannual variability in the results of environmental assessment by LCA. The work also led to the development of a model explaining grape quality from viticultural practices and soil and climate factors, allowing the potential impact of a change of practices on the grape quality evaluation criteria to be studied. The construction of the CONTRA-QUALENVIC multi-criteria method for viticulture, the main outcome of this study, includes (i) the construction of decision rules and mathematical functions to meet them, and (ii) expert meetings to characterize the criteria to aggregate and to weight them. 
The CONTRA-QUALENVIC method has been tested by comparing it to other methods. To conclude, the CONTRA-QUALENVIC method is a relevant method for decision support as part of a continuous improvement of viticultural practices towards a better respect of the environment, while preserving grape quality.
APA, Harvard, Vancouver, ISO, and other styles
19

"A Comparison of Fuzzy Models in Similarity Assessment of Misregistered Area Class Maps." Master's thesis, 2010. http://hdl.handle.net/2286/R.I.8672.

Full text
Abstract:
Spatial uncertainty refers to unknown error and vagueness in geographic data. It is relevant to land change and urban growth modelers, soil and biome scientists, geological surveyors and others, who must assess thematic maps for similarity, or categorical agreement. In this paper I build upon prior map comparison research, testing the effectiveness of similarity measures on misregistered data. Though several methods compare uncertain thematic maps, few methods have been tested on misregistration. My objective is to test five map comparison methods for sensitivity to misregistration, including sub-pixel errors in both position and rotation. The methods included four fuzzy categorical models: the fuzzy kappa model, fuzzy inference, cell aggregation, and the epsilon band. The fifth method used conventional crisp classification. I applied these methods to a case study map and simulated data in two sets: a test set with misregistration error, and a control set with equivalent uniform random error. For all five methods, I used raw accuracy or the kappa statistic to measure similarity. Rough-set epsilon bands report the most similarity increase in test maps relative to control data. Conversely, the fuzzy inference model reports a decrease in test map similarity.
Dissertation/Thesis
M.A. Geography 2010
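The crisp-classification baseline in comparisons like the one above reduces to chance-corrected cell-by-cell agreement, i.e. Cohen's kappa; the fuzzy variants relax the exact-match requirement. A minimal sketch of the crisp statistic on flattened category maps (illustrative only, not the thesis code):

```python
from collections import Counter

def cohens_kappa(map_a, map_b):
    """Chance-corrected categorical agreement between two equally sized
    flattened category maps. Undefined when both maps are constant and
    identical (expected agreement = 1)."""
    assert len(map_a) == len(map_b)
    n = len(map_a)
    observed = sum(a == b for a, b in zip(map_a, map_b)) / n
    freq_a = Counter(map_a)
    freq_b = Counter(map_b)
    # expected agreement under independent category frequencies
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1.0 - expected)
```

Identical maps score 1.0; agreement at chance level scores 0.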
APA, Harvard, Vancouver, ISO, and other styles
20

Gripp, Natalie Mary. "A comparison of three brief analysis models with the inclusion of contingency reversals." Thesis, 2011. http://hdl.handle.net/2152/ETD-UT-2011-12-4851.

Full text
Abstract:
Functional Analysis is a widely used and effective tool for the assessment of challenging behavior. However, there are several practical issues associated with analogue functional analysis, including the reinforcement of challenging behavior and the extended duration of the assessment process. These issues have been addressed in several modified functional analysis models, including the brief functional analysis. The brief functional analysis allows practitioners and researchers to complete an assessment of challenging behavior within a 90-minute period, thus addressing the practical issue of extended duration. It does not, however, address the potential issues associated with the reinforcement of challenging behavior. The current study evaluated the efficacy of three modified functional analysis methods, including a brief antecedent-based analysis (A-B), a brief latency-based analysis, and a brief functional analysis (A-B-C). Results from each assessment were compared, and high levels of correspondence were observed between the respective assessment models. Results are discussed in terms of the relative strengths and limitations of each of the models.
APA, Harvard, Vancouver, ISO, and other styles
21

Feryandi, Faus Tinus Handi. "Landslide susceptibility assessment in Karanganyar regency - Indonesia - Comparison of knowledge-based and Data-driven Models." Master's thesis, 2011. http://hdl.handle.net/10362/8277.

Full text
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Disaster management requires spatial information as a backbone of the preparedness and mitigation process. In that context, an assessment of landslide susceptibility becomes essential in an area that is prone to landslides due to its geographical condition. The Tawangmangu, Jenawi and Ngargoyoso Subdistricts in Karanganyar Regency are such areas, and are the areas most frequently hit by landslides in the Central Java Province of Indonesia. In this study, three different methods were applied to examine landslide susceptibility in that area: heuristic, statistical logistic regression, and Artificial Neural Network (ANN). The heuristic method is a knowledge-based approach, whereas the latter two are categorized as data-driven methods due to the involvement of a landslide inventory in their analysis. Eight available, site-specific and commonly used landslide-influencing factors (slope, aspect, topographical shape, curvature, lithology, land use, distance to road and distance to river) were preprocessed in a GIS environment and then analyzed using statistical and GIS tools to understand the relationship and significance of each to landslide occurrence, and to generate landslide susceptibility maps. ILWIS, Idrisi and ArcGIS software were used to prepare the dataset and visualize the models, while PASW was employed to run the prediction models (logistic regression for the statistical method and a multi-layer perceptron for the ANN). The study employed degree of fit and Receiver Operating Characteristic (ROC) analysis to assess model performance. The region was mapped into five landslide susceptibility classes: very low, low, moderate, high and very high. The results also showed that lithology, land use and topographical shape are the three most influential factors (i.e., most significant in controlling where landslides take place). According to the degree-of-fit analysis applied to all models, the ANN performed better than the other models when predicting landslide susceptibility of the study area. 
Meanwhile, according to the ROC analysis applied to the data-driven methods, the ANN shows better performance (AUC 0.988) than statistical logistic regression (AUC 0.959).
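The AUC values quoted above have a simple probabilistic reading: the chance that a randomly chosen landslide cell receives a higher susceptibility score than a randomly chosen stable cell. A small self-contained sketch of that rank-based computation (the scores are invented):

```python
def roc_auc(scores_pos, scores_neg):
    """ROC AUC as the Mann-Whitney probability
    P(score_pos > score_neg), with ties counted as 0.5.

    scores_pos: model scores for positive cases (e.g. landslide cells)
    scores_neg: model scores for negative cases (e.g. stable cells)"""
    wins = 0.0
    for p in scores_pos:
        for q in scores_neg:
            if p > q:
                wins += 1.0
            elif p == q:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

An AUC of 1.0 means the model ranks every landslide cell above every stable cell; 0.5 is no better than chance.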
APA, Harvard, Vancouver, ISO, and other styles
22

Svensson, Josefin. "Dispersion of Drilling Discharges : A comparison of two dispersion models and consequences for the risk picture of cold water corals." Thesis, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-210077.

Full text
Abstract:
One of the ocean’s greatest resources is its coral reefs, providing unique habitats for a large variety of organisms. During offshore drilling operations, many activities may potentially harm these sensitive habitats. Det Norske Veritas (DNV) has developed a risk-based approach for the planning of drilling operations, called Coral Risk Assessment (CRA), to reduce the risk of negative effects upon cold water corals (Lophelia pertusa) on the Norwegian Continental Shelf (NCS). In order to get a good risk assessment, a modelled dispersion plume of the drilling discharges is recommended. This study concerned a drilling case at the Pumbaa field (NOCS 6407/12-2) on the NCS, and used two different dispersion models, the DREAM model and the MUDFATE model, in order to investigate how to perform good risk assessments. In the drill planning process, a decision was made to move the discharge location 300 m north-west from the actual drilling location and to reduce the amount of drilling discharges, in order to reduce the risk for the coral targets in the area. The CRA analysis indicated that these decisions minimised the risk for the corals, and showed that the environmental actions in the drill planning process are necessary in order to reduce the risk for the coral targets and that the analysis method is a preferable tool to use. The amount of discharges, the ocean current data, the discharge location and the condition of the coral targets are the factors with the most important impact on the CRA results. From the monitoring analysis of the case study, it can be seen that a pile builds up around the discharge location. The dispersion models do not seem to take this build-up into account and thereby overestimate the dispersion of drilling discharges. This observation was made when the modelled barite deposit was compared with barium concentrations measured in the sediment after the drilling operation. 
The overestimation is the case for the DREAM model, but has not been seen in the simulations with the MUDFATE model. Results from the modelling also indicated a higher overestimation for the DREAM model when using a cutting transport system (CTS) to release the drilling discharges, compared to releasing the discharges without using the CTS.
Korallrev består av ett skelett av kalciumkarbonat som bygger upp unika habitat på havsbotten. Dessa utnyttjas av flera olika organismer och är en av havets största och viktigaste resurser. Under prospekteringsborrningar till havs sker stora mängder utsläpp som kan påverka de känsliga miljöerna negativt. Det Norske Veritas (DNV) har utvecklat en riskbaserad strategi för planering av prospekteringsborrning i områden med koraller kallad Coral Risk Assessment (CRA). I CRA-analysen utvärderas risken för korallstrukturer (Lophelia pertusa) att påverkas av olika borrningsaktiviteter. Spridningsmodellering av det förväntade utsläppet från borrningsoperationen är ett viktigt hjälpmedel för att kunna utföra riskanalysen på ett tillfredsställande sätt. Studien har studerat en tidigare utförd prospekteringsborrning på Pumbaa-fältet (NOCS 6407/12-2) på den norska kontinentalsockeln och två olika spridningsmodeller DREAM och MUDFATE har jämförts i studien med syfte att förbättre riskbedömningen. I planeringsstadiet av prospekteringsborrningen togs ett beslut att flytta utsläppspunkten för det producerade borrslammet 300 m nordväst från brunnen samt att mängden borrslam skulle reduceras för att minska risken för påverkan på korallstrukturerna i området. CRA-analysen som utfördes i denna studie visade att dessa beslut minskat risken för korallstrukturerna att bli påverkade. Detta indikerar således att analysmetoden är ett viktigt verktyg att använda vid miljöundersökningar i planeringsstadiet för att minska risken för oönskad påverkan från aktiviteter i samband med prospekteringsborrning. De faktorer som har störst påverkan på CRA-analysen är mängden borrslam, strömdata, utsläppspunkt och tillståndet på korallstrukturerna. Under miljöövervakningen i samband med borrningsprocessen påvisades det att vallar av borrslam byggdes upp nära utsläppspunkten, vilket skedde relativt snabbt efter det att utsläppet startat. 
Spridningsmodellerna verkar inte ta hänsyn till denna uppbyggnad utan överestimerar spridningen och depositionen av borrslam. Detta har påvisats vid jämförelser av modellerade och uppmätta värden av bariumkoncentrationer i sedimentet. Överestimeringen är påvisad för DREAM, men slutsatsen är mer osäker för MUDFATE. Spridningsmodelleringen med DREAM indikerar även en större överestimering av resultaten om utsläppen sker med en så kallad CTS (Cutting Transport System).
APA, Harvard, Vancouver, ISO, and other styles
23

Brock, Terry A. "A comparison of deterministic and probabilistic radiation dose assessments at three fictitious ¹³⁷Cs contaminated sites in California, Colorado, and Florida." Thesis, 1997. http://hdl.handle.net/1957/34111.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

"Visual Analytics Tool for the Global Change Assessment Model." Master's thesis, 2015. http://hdl.handle.net/2286/R.I.35998.

Full text
Abstract:
The Global Change Assessment Model (GCAM) is an integrated assessment tool for exploring consequences and responses to global change. However, the current iteration of GCAM relies on NetCDF file outputs which need to be exported for visualization and analysis purposes. Such a requirement limits the uptake of this modeling platform for analysts who may wish to explore future scenarios. This work has focused on a web-based geovisual analytics interface for GCAM. Challenges of this work include enabling both domain experts and model experts to functionally explore the model. Furthermore, scenario analysis has been widely applied in climate science to understand the impact of climate change on the future human environment. The inter-comparison of scenario analyses remains a big challenge in both the climate science and visualization communities. In close collaboration with the Global Change Assessment Model team, I developed the first visual analytics interface for GCAM with a series of interactive functions to help users understand the simulated impact of climate change on sectors of the global economy, and at the same time allow them to explore inter-comparison of scenario analyses with GCAM models. This tool implements a hierarchical clustering approach to allow inter-comparison and similarity analysis among multiple scenarios over space, time, and multiple attributes through a set of coordinated multiple views. After working with this tool, the scientists from the GCAM team agree that the geovisual analytics tool can facilitate scenario exploration and enable the process of gaining scientific insight into scenario comparison. To demonstrate my work, I present two case studies: one explores the potential impact that the China South-to-North Water Diversion Project in the Yangtze River basin will have on projected water demands. 
The other case study using GCAM models demonstrates how the impact of spatial variations and scales on similarity analysis of climate scenarios varies at world, continental, and country scales.
Dissertation/Thesis
Masters Thesis Computer Science 2015
APA, Harvard, Vancouver, ISO, and other styles
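The hierarchical clustering approach this thesis describes for scenario similarity can be illustrated with a minimal sketch. The scenario names and water-demand trajectories below are invented for illustration; they are not data from the GCAM tool itself.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

# Hypothetical scenarios: each row is a projected water-demand
# trajectory over four time steps (made-up numbers).
scenarios = {
    "reference":      [10.0, 12.0, 15.0, 18.0],
    "low_emissions":  [10.0, 11.0, 12.0, 13.0],
    "high_growth":    [10.0, 13.0, 17.0, 22.0],
    "water_transfer": [10.0, 11.5, 12.5, 13.5],
}
names = list(scenarios)
X = np.array([scenarios[n] for n in names])

# Agglomerative (hierarchical) clustering on pairwise Euclidean
# distances between scenario trajectories.
Z = linkage(pdist(X), method="average")

# Cut the dendrogram into two groups of similar scenarios.
labels = fcluster(Z, t=2, criterion="maxclust")
for name, lab in zip(names, labels):
    print(name, lab)
```

Here the two low-demand scenarios end up in one cluster and the two high-demand scenarios in the other; the same idea extends to clustering over space and multiple attributes, as in the thesis.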
25

Mokilane, Paul Moloantoa. "The application and empirical comparison of item parameters of Classical Test Theory and Partial Credit Model of Rasch in performance assessments." Diss., 2014. http://hdl.handle.net/10500/18362.

Full text
Abstract:
This study empirically compares the Classical Test Theory (CTT) and the Partial Credit Model (PCM) of Rasch, focusing on the invariance of item parameters. The invariance concept, a consequence of the principle of specific objectivity, was tested in both CTT and PCM using the results of learners who wrote the National Senior Certificate (NSC) Mathematics examinations in 2010. The difficulty levels of the test items were estimated from independent samples of learners. The same samples of learners used to calibrate the item difficulty levels under the PCM were also used for the calibration under CTT. The difficulty levels were estimated using RUMM2030 in the case of PCM and SAS in the case of CTT; both are statistical software packages. Analysis of variance (ANOVA) was used to compare the four design groups of test takers, and where the ANOVA showed a significant difference between the group means, Tukey's groupings were used to establish where the difference came from. The research findings were that item difficulty parameter estimates based on the CTT theoretical framework were not invariant across the independent sample groups; overall, the CTT framework was unable to produce invariant item difficulty estimates. The PCM estimates were very stable, in the sense that for most items there was no significant difference between the means of at least three design groups, and the one that deviated from the rest did not deviate much.
The item parameters of the group that was representative of the population (proportional allocation) and of the group in which the same number of learners (50) was taken from each performance category did not differ significantly for any item except item 6.6 in examination question paper 2. It is apparent that for the item parameters to be invariant of the group of test takers under PCM, the group must be heterogeneous and each performance category must be large enough for proper calibration of the item parameters. The higher estimated item parameter values in CTT were consistently found in the sample dominated by learners highly proficient in Mathematics (the group labelled "bad"), and the lowest values were consistently found in the design group dominated by less proficient learners. This phenomenon was not apparent in the Rasch model.
Mathematical Sciences
M.Sc. (Statistics)
APA, Harvard, Vancouver, ISO, and other styles
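The ANOVA-based invariance check described above can be sketched in a few lines. The function below computes a one-way ANOVA F statistic directly from the sums of squares; the item-difficulty samples are simulated, not the NSC data, and the thesis itself used RUMM2030 and SAS rather than Python.

```python
import numpy as np

def one_way_anova(groups):
    """Return the F statistic for a one-way ANOVA across groups."""
    all_x = np.concatenate([np.asarray(g) for g in groups])
    grand = all_x.mean()
    k, n = len(groups), all_x.size
    ss_between = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(((np.asarray(g) - np.mean(g)) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical difficulty estimates for one item, calibrated in four
# independent design groups of test takers (numbers are made up).
rng = np.random.default_rng(0)
invariant = [rng.normal(0.55, 0.02, 30) for _ in range(4)]          # stable item
shifted = [rng.normal(0.55 + 0.05 * i, 0.02, 30) for i in range(4)]  # drifting item

print(one_way_anova(invariant))  # small F: estimates invariant across groups
print(one_way_anova(shifted))    # large F: difficulty depends on the group
```

A significant F for an item (as in the `shifted` case) is what flags non-invariance; a follow-up Tukey comparison would then identify which design groups differ.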
26

Ally, Idrees Abdul Latif. "Comparison of hr-pQCT & MRTA to DXA & QUS for the Ex-vivo Assessment of Bone Strength." Thesis, 2010. http://hdl.handle.net/1807/24527.

Full text
Abstract:
There is a pressing need for better assessment of bone strength as current clinical tools do not directly measure bone mechanical properties, but offer only surrogate measures of bone strength. We conducted an ex-vivo study of emu bones to examine how two investigative devices, hr-pQCT and MRTA, compare to current clinical tools (DXA and QUS) in predicting true bone mechanical properties. We found that hr-pQCT parameters were able to assess bone strength as well as DXA and better than QUS, while MRTA was able to predict bone strength well in low-density but not high-density bones. Our results suggest that both hr-pQCT, which has the unique ability to specifically assess the various determinants of bone strength, and MRTA, which measures a bone mechanical property (stiffness), have great potential for use as clinical tools that can assess various components of bone strength not measured by current devices.
APA, Harvard, Vancouver, ISO, and other styles
27

"A Comparison of DIMTEST and Generalized Dimensionality Discrepancy Approaches to Assessing Dimensionality in Item Response Theory." Master's thesis, 2013. http://hdl.handle.net/2286/R.I.18167.

Full text
Abstract:
Dimensionality assessment is an important component of evaluating item response data. Existing approaches to evaluating common assumptions of unidimensionality, such as DIMTEST (Nandakumar & Stout, 1993; Stout, 1987; Stout, Froelich, & Gao, 2001), have been shown to work well under large-scale assessment conditions (e.g., large sample sizes and item pools; see e.g., Froelich & Habing, 2007). It remains to be seen how such procedures perform in the context of small-scale assessments characterized by relatively small sample sizes and/or short tests. The fact that some procedures come with minimum allowable values for characteristics of the data, such as the number of items, may even render them unusable for some small-scale assessments. Other measures designed to assess dimensionality do not come with such limitations and, as such, may perform better under conditions that do not lend themselves to evaluation via statistics that rely on asymptotic theory. The current work aimed to evaluate the performance of one such metric, the standardized generalized dimensionality discrepancy measure (SGDDM; Levy & Svetina, 2011; Levy, Xu, Yel, & Svetina, 2012), under both large- and small-scale testing conditions. A Monte Carlo study was conducted to compare the performance of DIMTEST and the SGDDM statistic in terms of evaluating assumptions of unidimensionality in item response data under a variety of conditions, with an emphasis on the examination of these procedures in small-scale assessments. Similar to previous research, increases in either test length or sample size resulted in increased power. The DIMTEST procedure appeared to be a conservative test of the null hypothesis of unidimensionality. The SGDDM statistic exhibited rejection rates near the nominal rate of .05 under unidimensional conditions, though the reliability of these results may have been less than optimal due to high sampling variability resulting from a relatively limited number of replications.
Power values were at or near 1.0 for many of the multidimensional conditions. It was only when the sample size was reduced to N = 100 that the two approaches diverged in performance. Results suggested that both procedures may be appropriate for sample sizes as low as N = 250 and tests as short as J = 12 (SGDDM) or J = 19 (DIMTEST). When used as a diagnostic tool, SGDDM may be appropriate with as few as N = 100 cases combined with J = 12 items. The study was somewhat limited in that it did not include any complex factorial designs, nor were the strength of item discrimination parameters or correlation between factors manipulated. It is recommended that further research be conducted with the inclusion of these factors, as well as an increase in the number of replications when using the SGDDM procedure.
Dissertation/Thesis
M.A. Educational Psychology 2013
APA, Harvard, Vancouver, ISO, and other styles
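The Monte Carlo logic behind checking rejection rates against the nominal alpha can be sketched generically. The "test" below is a plain one-sample t-test standing in for SGDDM or DIMTEST, and the sample size and replication count are arbitrary; this shows only the rejection-rate bookkeeping, not the dimensionality statistics themselves.

```python
import numpy as np
from scipy import stats

# Estimate a test's Type I error rate by repeatedly generating data
# under the null hypothesis and counting rejections at alpha = .05.
rng = np.random.default_rng(42)
alpha, n, reps = 0.05, 100, 2000

rejections = 0
for _ in range(reps):
    x = rng.normal(0.0, 1.0, n)        # data generated under the null
    _, p = stats.ttest_1samp(x, 0.0)   # stand-in for the statistic under study
    rejections += p < alpha

print(rejections / reps)  # should be close to the nominal .05
```

Power under a violated null is estimated the same way, by generating data under an alternative instead; the study's observation about sampling variability corresponds to `reps` being too small for a tight estimate.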
28

Lin, Wei Chiu, and 林維秋. "An Assessment of Intention Models in the Overseas Traveling Domain for the Senior Group–also with Comparisons with Other Age Groups." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/02171853469165734623.

Full text
Abstract:
Master's thesis
Nan Kai University of Technology
Graduate Institute of Welfare Technology and Service Management
Academic year 98 (2009)
The purpose of this study is to explore how the overseas traveling intention model differs among young adults aged 25-49, the pre-senior group aged 50-64, and the senior group aged 65 and above. Based on the related literature, three models, the Theory of Reasoned Action (TRA), the Theory of Planned Behavior (TPB), and the Theory of Self-Regulation (TSR), were adopted to test their applicability and their ability to predict overseas traveling intention. Between October 20 and November 30, 2009, 2,400 questionnaires were distributed across 16 cities in Taiwan by mail and by on-site surveys. A total of 2,230 questionnaires were returned, a response rate of 92.9%; of these, 1,922 were valid, a valid response rate of 86.2%. This study reached the analytical results listed below: (1) Subjective norm has a medium positive and significant effect on overseas traveling intention for the 25-49 and 50-64 age groups. (2) For the group aged 65 and above, perceived behavioral control, which reaches medium positive significance, most strongly influences overseas traveling intention, followed by subjective norm, which is close to medium positive significance. (3) All three models fitted the data from the three age groups well, and all were cross-validated as stable across different samples. Furthermore, the selection analysis revealed that TRA is consistently the best model for predicting overseas traveling intention. (4) Based on these analytical results, specific suggestions are proposed for the reference of government, the tourism industry, and academics.
APA, Harvard, Vancouver, ISO, and other styles
29

Blum-evitts, Shemariah. "Designing a Foodshed Assessment Model: Guidance for Local and Regional Planners in Understanding Local Farm Capacity in Comparison to Local Food Needs." 2009. https://scholarworks.umass.edu/theses/288.

Full text
Abstract:
This thesis explores how to conduct a regional foodshed assessment and provides guidance to local and regional planners on the use of foodshed assessments. A foodshed is the geographic origin of a food supply. Before the 1800s, foodsheds were predominantly local, confined to the city or neighboring countryside. Today most urban areas are supported by a global foodshed. While the global foodshed can present many benefits, it also creates tremendous externalities. In an attempt to address these concerns, promotion of alternative local foodsheds has re-emerged. A foodshed assessment serves as a planning tool for land use planners, as well as for local food advocates, offering an understanding of land use implications that is not often carefully considered. By determining the food needs of a region's population, the land base needed to support that population can then be identified. In this way, planners can have a stronger basis for promoting working farmland preservation measures and strengthening the local foodshed. This thesis compares the approaches of five previous foodshed assessments and presents a model for conducting an assessment on a regional level. The model is then applied to the Pioneer Valley of Western Massachusetts to determine the extent to which agricultural production in the Pioneer Valley fulfills the food consumption needs of the region's population. The assessment also compares the amount of current working farmland to open land available for farming, and the extent of farmland necessary to meet regional food demand for various diet types.
APA, Harvard, Vancouver, ISO, and other styles
