Theses on the topic "Educational statistics – data processing"



See the top 50 theses (master's or doctoral dissertations) for research on the topic "Educational statistics – data processing".

Next to every source in the reference list there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract (summary) of the work online, if it is present in the metadata.

Browse theses from many scientific fields and compile an accurate bibliography.

1

Hill, Rachelle Phelps. "A Case Study of the Impact of the Middle School Data Coach on Teacher Use of Educational Test Data to Change Instruction". Thesis, University of North Texas, 2010. https://digital.library.unt.edu/ark:/67531/metadc33164/.

Full text
Abstract:
With the advent of No Child Left Behind (NCLB) legislation in 2002 and its attendant increases in accountability pressure, many districts and schools currently embrace data analysis as an essential part of the instructional decision-making process. In their attempts to overcome low achievement on state-mandated tests, some districts have begun employing data coaches. The study reported here, set in three middle schools in a northeast Texas school district, assessed the influence of the campus data coach on middle school mathematics teachers' use of analyzed data to make instructional decisions. It also examined the extent to which the data coach/teacher relationship resolved teacher concerns about data-driven decision making. Phenomenological interviews with data coaches were guided by Seidman's (2006) three-series interview. Measurement of teacher use of data to make decisions was based on the concerns-based adoption model's levels-of-use interview protocol, stages-of-concern questionnaire, and innovation configuration map. By the end of one school year, two of the three teachers had never used data to make instructional decisions, although both non-users had moved closer toward employing the innovation in their classrooms. Data indicated all teachers were aware of the innovation, but all three ended the study with high personal concerns, signifying that the minimal efforts made by the data coaches to resolve concerns were not successful. This study's small sample gave the research paradigm of data-based decision making an in-depth glimpse into the process of implementing data-based instructional decision making and the data coach position on three middle school campuses in one large northeast Texas district.
APA, Harvard, Vancouver, ISO, and other citation styles
2

Zhang, Zhidong 1957. "Cognitive assessment in a computer-based coaching environment in higher education : diagnostic assessment of development of knowledge and problem-solving skill in statistics". Thesis, McGill University, 2007. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=102853.

Full text
Abstract:
Diagnostic cognitive assessment (DCA) was explored using Bayesian networks and evidence-centred design (ECD) in a statistics learning domain (ANOVA). The assessment environment simulates problem-solving activities that occurred in a web-based statistics learning environment. The assessment model is composed of assessment constructs and evidence models. Assessment constructs correspond to components of knowledge and procedural skill in a cognitive domain model and are represented as explanatory variables in the assessment model. Evidence variables represent specific aspects of a student's performance on assessment problems. Bayesian networks are used to connect the explanatory variables to the evidence variables. These links enable the network to propagate evidential information to the explanatory variables in the assessment model. The purpose of DCA is to infer the cognitive components of knowledge and skill that have been mastered by a student. These inferences are realized probabilistically, using the Bayesian network to estimate the likelihood that a student has mastered specific components of knowledge or skill based on observations of features of the student's performance of an assessment task.
The objective of this study was to develop a Bayesian assessment model that implements DCA in a specific domain of statistics, and evaluate it in relation to its potential to achieve the objectives of DCA. This study applied a method for model development to the ANOVA score model domain to attain the objectives of the study. The results documented: (a) the process of model development in a specific domain; (b) the properties of the Bayesian assessment model; (c) the performance of the network in tracing students' progress towards mastery by using the model to successfully update the posterior probabilities; (d) the use of estimates of log odds ratios of likelihood of mastery as a measure of "progress toward mastery;" (e) the robustness of diagnostic inferences based on the network; and (f) the use of the Bayesian assessment model for diagnostic assessment with a sample of 20 students who completed the assessment tasks. The results indicated that the Bayesian assessment network provided valid diagnostic information about specific cognitive components, and was able to track development towards achieving mastery of learning goals.
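The posterior-updating step described in this abstract can be illustrated with a toy two-node sketch: a latent mastery variable whose probability is revised by Bayes' rule as answers are observed, with the log odds serving as the "progress toward mastery" score. The conditional probabilities, answer sequence, and function names below are illustrative assumptions, not values from the thesis.

```python
import math

def posterior_mastery(prior, p_correct_if_mastered, p_correct_if_not, observed_correct):
    """One Bayes'-rule update of P(mastery) after observing a single answer."""
    if observed_correct:
        like_m, like_n = p_correct_if_mastered, p_correct_if_not
    else:
        like_m, like_n = 1 - p_correct_if_mastered, 1 - p_correct_if_not
    numerator = prior * like_m
    return numerator / (numerator + (1 - prior) * like_n)

def log_odds(p):
    """Log odds of mastery, used as a 'progress toward mastery' measure."""
    return math.log(p / (1 - p))

p = 0.5  # uninformative prior on mastery
for correct in [True, True, False, True]:  # a hypothetical answer sequence
    p = posterior_mastery(p, 0.8, 0.3, correct)
```

In a full evidence-centred design the same propagation runs over a Bayesian network with many linked construct and evidence variables; here a single conditional table stands in for the evidence model.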
3

Mercier, Julien 1974. "Help seeking and use of tutor scaffolding by dyads learning with a computer tutor in statistics". Thesis, McGill University, 2004. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=85191.

Full text
Abstract:
Research on tutoring has shown that the student's interaction with the tutor determines the learning outcomes. In human tutoring, responsibility for the interaction is shared between the tutor and the student. In the case of a computer coach such as the McGill Statistics Tutor, control of the interaction is put entirely in the hands of the learners. Learners' ability to interact with the system productively therefore represents a critical factor affecting the learning outcomes. This help-seeking ability (Nelson-LeGall, 1981) has not been well researched from a cognitive science point of view in the context of computer-supported learning (Aleven et al., 2003). The aims of the present work were to elaborate and test a cognitive model of help seeking and to examine the prevalence of help seeking in a problem-based computer-supported learning situation, as well as individual differences in help seeking and the effect of progression through a sequence of tasks.
Participants were 18 graduate students from a faculty of Education of a Canadian university. The seven-hour experiment involved working in pairs to solve a very challenging statistics problem for which students did not have sufficient background. A computer coach based on human tutoring, the McGill Statistics Tutor, was available to provide help with every aspect of the task.
Data consisted of two complementary sources. The main source was the dialogue between the participants as they worked on the statistics problem using the computer coach. The students' use of the computer coach and solutions to the tasks were also integrated into the database.
Data analysis consisted of statistical analyses using log-linear models. Conditional probability graphs were also constructed from the data.
The results were consistent with the help seeking model. Individual differences were found in terms of emphasis on certain help seeking activities. Effects of the progression in the sequence of tasks were also found. The quality of the solutions students elaborated corresponded to specific profiles of help seeking. The structure of help seeking episodes was established and corresponded to the model. These results have implications for the design of computer coaches and instructional situations.
4

Thayne, Jeffrey L. "Making statistics matter: Self-data as a possible means to improve statistics learning". Thesis, Utah State University, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10250713.

Full text
Abstract:

Research has demonstrated that well into their undergraduate and even graduate education, learners often struggle to understand basic statistical concepts, fail to see their relevance in their personal and professional lives, and often treat them as little more than mere mathematics exercises. Undergraduate learners often see statistical concepts as means to passing exams, completing required courses, and moving on with their degree, and not as instruments of inquiry that can illuminate their world in new and useful ways.

This study explored ways to help learners in an undergraduate learning context treat statistical inquiry as mattering in a practical research context, by inviting them to ask questions about and analyze large, real, messy datasets that they had collected about their own personal lives (i.e., self-data). This study examined the conditions under which such an intervention might (and might not) successfully lead to a greater sense of the relevance of statistics to undergraduate learners. The goal was to place learners in a context where their relationship with data analysis could more closely mimic that of disciplinary professionals than that of students with homework; that is, where they are illuminating something about their world that concerns them for reasons beyond the limited concerns of the classroom.

The study revealed five themes in the experiences of learners working with self-data that highlight contexts in which data analysis can be made to matter to learners (and how self-data can make that more likely): learners must be able to form expectations of the data, whether based on their own experiences or external benchmarks; the data should have variation to account for; the learners should treat the ups and downs of the data as more or less preferable in some way; the data should address or relate to ongoing projects or concerns of the learner; and finally, learners should be able to investigate quantitative or qualitative covariates of their data. In addition, narrative analysis revealed that learners using self-data treated data analysis not as a mere classroom exercise but as an exercise in inquiry, with an invested engagement that mimicked (in some ways) that of a disciplinary professional.

5

Tao, Yufei. "Indexing and query processing of spatio-temporal data /". View Abstract or Full-Text, 2002. http://library.ust.hk/cgi/db/thesis.pl?COMP%202002%20TAO.

Full text
Abstract:
Thesis (Ph. D.)--Hong Kong University of Science and Technology, 2002.
Includes bibliographical references (leaves 208-215). Also available in electronic version. Access restricted to campus users.
6

Andersson-Sunna, Josefin. "Large Scale Privacy-Centric Data Collection, Processing, and Presentation". Thesis, Luleå tekniska universitet, Institutionen för system- och rymdteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-84930.

Full text
Abstract:
It has become an important part of business development to collect statistical data from online sources. Information about users and how they interact with an online source can help improve the user experience and increase sales of products. Collecting data about users has many benefits for the business owner, but it also raises privacy issues, since more and more information about users is spread over the internet. Tools that collect statistical data from online sources exist, but using such tools gives away control over the collected data. If a business implements its own analytics system, it is easier to make it more privacy-centric, and control over the collected data is retained. This thesis examines which techniques are most suitable for a system whose purpose is to collect, store, process, and present large-scale privacy-centric data. Research was conducted on which technique to use for collecting data and how to keep track of unique users in a privacy-centric way, as well as on which database can handle many write requests and store large-scale data. A prototype was implemented based on this research, in which JavaScript tagging is used to collect data from several online sources and cookies are used to keep track of unique users. Cassandra was chosen as the database for the prototype because of its high scalability and speed on write requests. Two versions of the processing of raw data into statistical reports were implemented, to evaluate whether the data should be preprocessed or whether the reports could be created when the user asks for them. To evaluate the techniques used in the prototype, load tests were made; the results showed that a bottleneck was reached after 45 seconds at a workload of 600 write requests per second. The tests also showed that the prototype managed to keep its performance at a workload of 500 write requests per second for one hour, during which it completed 1,799,953 requests.
Latency tests of processing raw data into statistical reports were also made, to evaluate whether the data should be preprocessed or processed when the user asks for the report. The results showed that it took around 30 seconds to process 1,200,000 rows of data from the database, which is too long for a user to wait for a report. Investigating which part of the processing increased the latency the most showed that it was the retrieval of data from the database: it took around 25 seconds to retrieve the data and only around 5 seconds to process it into statistical reports. The tests showed that Cassandra is slow when retrieving many rows of data but fast when writing data, which is more important in this prototype.
7

Wong, Ka-yan, and 王嘉欣. "Positioning patterns from multidimensional data and its applications in meteorology". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2008. http://hub.hku.hk/bib/B39558630.

Full text
8

Fathi, Salmi Meisam. "Processing Big Data in Main Memory and on GPU". The Ohio State University, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=osu1451992820.

Full text
9

Gwaze, Arnold Rumosa. "A Cox proportional hazard model for mid-point imputed interval censored data". Thesis, University of Fort Hare, 2011. http://hdl.handle.net/10353/385.

Full text
Abstract:
There has been increasing interest in survival analysis with interval-censored data, where the event of interest (such as infection with a disease) is not observed exactly but is only known to have happened between two examination times. However, because research has focused largely on right-censored data, many statistical tests and techniques are available for right-censoring, and methods for interval-censored data are not as abundant. In this study, right-censoring methods are used to fit a proportional hazards model to interval-censored data. The interval-censored observations were transformed using mid-point imputation, a method which assumes that an event occurs at the midpoint of its recorded interval. The results gave conservative regression estimates, but a comparison with the conventional methods showed that the estimates were not significantly different. However, the censoring mechanism and interval lengths should be given serious consideration before deciding to use mid-point imputation on interval-censored data.
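The transformation this abstract describes is simple enough to sketch: each interval-censored observation (L, R] is replaced by its midpoint, after which the imputed times can be fed to standard right-censoring machinery such as a Cox proportional hazards fit from a survival library. The intervals below are made-up illustration data, not data from the study.

```python
def midpoint_impute(intervals):
    """Replace each (left, right) examination-time pair by its midpoint."""
    return [(left + right) / 2 for left, right in intervals]

# e.g. an event known only to occur between two clinic visits
intervals = [(0, 4), (2, 6), (5, 9)]
event_times = midpoint_impute(intervals)  # [2.0, 4.0, 7.0]
```

The imputed times are then treated as exactly observed event times, which is what makes the abstract's caution about interval lengths important: wide intervals make the midpoint a coarse guess.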
10

Tsoi, Kit-hon, and 徐傑漢. "Aspects of the statistics of condensation polymer networks". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2007. http://hub.hku.hk/bib/B38985433.

Full text
11

Siluyele, Ian John. "Power studies of multivariate two-sample tests of comparison". Thesis, University of the Western Cape, 2007. http://etd.uwc.ac.za/index.php?module=etd&action=viewtitle&id=gen8Srv25Nme4_6355_1255091702.

Full text
Abstract:

The multivariate two-sample tests provide a means to test the match between two multivariate distributions. Although many tests exist in the literature, relatively little is known about the relative power of these procedures. The studies reported in the thesis contrast the effectiveness, in terms of power, of seven such tests in a Monte Carlo study. The relative power of the tests was investigated against location, scale, and correlation alternatives.
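A Monte Carlo power study of the kind reported here can be sketched in a few lines. For brevity the sketch below is univariate and uses a simple mean-difference test under a location alternative; the sample sizes, replication count, and critical value are all illustrative assumptions, not settings from the thesis.

```python
import random
import statistics

def rejects(x, y, z_crit=1.96):
    """Simple two-sample z-style test on the difference of means."""
    se = (statistics.variance(x) / len(x) + statistics.variance(y) / len(y)) ** 0.5
    return abs(statistics.mean(x) - statistics.mean(y)) / se > z_crit

def power(shift, n=30, reps=500, seed=1):
    """Estimated power against a location alternative of size `shift`."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        x = [rng.gauss(0.0, 1.0) for _ in range(n)]
        y = [rng.gauss(shift, 1.0) for _ in range(n)]
        hits += rejects(x, y)
    return hits / reps
```

Power is simply the proportion of simulated datasets on which the test rejects; the thesis applies the same scheme to seven multivariate tests and to scale and correlation alternatives as well.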

12

Ramesh, Maganti V. "Magnetic stripe reader used to collect computer laboratory statistics". Virtual Press, 1990. http://liblink.bsu.edu/uhtbin/catkey/722464.

Full text
Abstract:
This thesis is concerned with interfacing a magnetic stripe reader with an AT&T PC 6300 with a 20 MB hard disk, and with collecting laboratory usage statistics. Laboratory usage statistics include the name and social security number of the student, along with other necessary details. This system replaces all manual modes of entering data, checks for typographical errors, renames the file containing a particular day's data to a file that has the current day's date as its filename, and keeps track of the number of students for a particular day. This procedure ensures the security of laboratory equipment and can be modified for each computer laboratory on campus. The program results indicate an acceleration of data entry, favorable student response, and an increase in the accuracy of the data recorded.
Department of Computer Science
13

Harris, Jeff R. "Processing and integration of geochemical data for mineral exploration: Application of statistics, geostatistics and GIS technology". Thesis, University of Ottawa (Canada), 2002. http://hdl.handle.net/10393/6421.

Full text
Abstract:
Geographic Information Systems (GIS), used in concert with statistical and geostatistical software, provide the geologist with a powerful tool for processing, visualizing and analysing geoscience data for mineral exploration applications. This thesis focuses on different methods for analysing, visualizing and integrating geochemical data sampled from various media (rock, till, soil, humus) with other types of geoscience data. Different methods are investigated for defining geochemical anomalies and for separating anomalies due to mineralization from those due to lithologic or surficial factors (i.e. true from false anomalies). With respect to lithogeochemical data, this includes methods to distinguish between altered and un-altered samples, normalization methods for separating lithologic effects from mineralization effects, and various statistical and visual methods for distinguishing anomalous geochemical concentrations from background. With respect to surficial geochemical data, methods for identifying bedrock signatures and scavenging effects are presented. In addition, a new algorithm, the dispersal train identification algorithm (DTIA), is presented, which helps to identify and characterize anisotropies in till data due to glacial dispersion and, more specifically, identifies potential dispersal trains using a number of statistical parameters. The issue of interpolation of geochemical data is addressed, and methods for determining whether geochemical data should or should not be interpolated are presented. New methods for visualizing geochemical data using red-green-blue (RGB) ternary displays are illustrated. Finally, techniques for integrating geochemical data with other geoscience data to produce mineral prospectivity maps are demonstrated. Both data- and knowledge-driven GIS modeling methodologies are used (and compared) for producing prospectivity maps.
New ways of preparing geochemical data for input to modeling are demonstrated, with the aim of getting the most out of the data for mineral exploration purposes. Processing geochemical data by sub-populations, either by geographic unit (i.e., lithology) or by geochemical classification and alteration style, was useful for better identification of geochemical anomalies with respect to background and for assessing varying alteration styles. Normal probability plots of geochemical concentrations based on spatial (lithologic) divisions and principal component analysis (PCA) were found to be particularly useful for identifying geochemical anomalies and for identifying associations between major oxide elements that in turn reflect different alteration styles. (Abstract shortened by UMI.)
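The sub-population idea above, computing anomaly thresholds within each lithologic unit rather than over the whole survey, can be sketched as follows. The units, concentration values, and the mean-plus-two-standard-deviations threshold rule are illustrative assumptions, not the thesis's actual procedure.

```python
import statistics
from collections import defaultdict

def thresholds_by_unit(samples):
    """samples: iterable of (lithology, concentration) pairs.
    Returns a per-unit anomaly threshold of mean + 2 standard deviations."""
    groups = defaultdict(list)
    for unit, value in samples:
        groups[unit].append(value)
    return {unit: statistics.mean(v) + 2 * statistics.stdev(v)
            for unit, v in groups.items()}

background = [("granite", 5.0), ("granite", 6.0), ("granite", 7.0),
              ("shale", 20.0), ("shale", 21.0), ("shale", 22.0)]
cuts = thresholds_by_unit(background)  # {"granite": 8.0, "shale": 23.0}
```

A granite sample of 15 ppm is anomalous against its own unit's background (cut 8.0), yet against a threshold pooled over both units it would be hidden by the higher shale values, which is the masking effect that sub-population processing avoids.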
14

Brotherton, Jason Alan. "Enriching everyday activities through the automated capture and access of live experiences : eClass: building, observing and understanding the impact of capture and access in an educational domain". Diss., Georgia Institute of Technology, 2001. http://hdl.handle.net/1853/8143.

Full text
15

Shahidi, Reza. "Methods and applications of irregular sampling and scattered data interpolation of digital images /". Internet access available to MUN users only, 2003. http://collections.mun.ca/u?/theses,161233.

Full text
16

Begum, Munni. "Estimating posterior expectation of distributions belonging to exponential and non exponential families". Virtual Press, 2001. http://liblink.bsu.edu/uhtbin/catkey/1244863.

Full text
Abstract:
The Bayesian principle is conceptually simple and intuitively plausible to carry out, but its numerical implementation is not always straightforward. Most of the time, posterior distributions are given in terms of complicated analytical functions and are known only up to a multiplicative constant. Hence it becomes computationally difficult to obtain the marginal densities and the moments of the posterior distributions in closed form. In the present study, the leading methods, both analytical and numerical, for implementing Bayesian inference have been explored. In particular, the non-iterative Monte Carlo method known as importance sampling has been applied to approximate the posterior expectations of the lognormal and Cauchy distributions, belonging to the exponential family and the non-exponential family of distributions respectively. Sample values from these distributions have been simulated through computer programming. Calculations are done mostly in the C++ programming language and Mathematica.
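The importance-sampling scheme described above can be sketched in pure Python: draws from a tractable proposal q are reweighted by w = p/q, where the target p needs to be known only up to its multiplicative constant. Here a standard normal (unnormalized) stands in for a posterior, and the proposal, sample size, and seed are illustrative choices rather than the thesis's settings.

```python
import math
import random

def unnormalized_target(x):
    """Target density up to a constant: a standard normal posterior stand-in."""
    return math.exp(-0.5 * x * x)

def proposal_pdf(x, scale=2.0):
    """A wider, fully normalized normal proposal density."""
    return math.exp(-0.5 * (x / scale) ** 2) / (scale * math.sqrt(2 * math.pi))

def posterior_mean(n=20000, seed=7):
    """Self-normalized importance-sampling estimate of the posterior expectation."""
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, 2.0) for _ in range(n)]
    ws = [unnormalized_target(x) / proposal_pdf(x) for x in xs]
    return sum(w * x for w, x in zip(ws, xs)) / sum(ws)
```

Dividing by the sum of the weights cancels the unknown normalizing constant, which is exactly what makes the method attractive when a posterior is known only up to proportionality.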
Department of Mathematical Sciences
17

Ethington, Corinna A. "The robustness of LISREL estimates in structural equation models with categorical data". Diss., Virginia Polytechnic Institute and State University, 1985. http://hdl.handle.net/10919/54504.

Full text
Abstract:
This study was an examination of the effect of type of correlation matrix on the robustness of LISREL maximum likelihood and unweighted least squares structural parameter estimates for models with categorical manifest variables. Two types of correlation matrices were analyzed: one containing Pearson product-moment correlations and one containing tetrachoric, polyserial, and product-moment correlations as appropriate. Using continuous variables generated according to the equations defining the population model, three cases were considered by dichotomizing some of the variables with varying degrees of skewness. When Pearson product-moment correlations were used to estimate associations involving dichotomous variables, the structural parameter estimates were biased when skewness was present in the dichotomous variables. Moreover, the degree of bias was consistent for both the maximum likelihood and unweighted least squares estimates. The standard errors of the estimates were found to be inflated, making significance tests unreliable. The analysis of mixed matrices produced average estimates that more closely approximated the model parameters, except in the case where the dichotomous variables were skewed in opposite directions. However, since goodness-of-fit statistics and standard errors are not available in LISREL when tetrachoric and polyserial correlations are used, the unbiased estimates are not of practical significance. Until alternative computer programs are available that employ distribution-free estimation procedures that consider the skewness and kurtosis of the variables, researchers are ill-advised to employ LISREL in the estimation of structural equation models containing skewed categorical manifest variables.
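The attenuation at the heart of this study is easy to reproduce: a Pearson correlation computed after dichotomizing one variable at a skewed cut point shrinks relative to the correlation between the continuous variables. The simulation settings below are illustrative, not taken from the dissertation.

```python
import random

def pearson(x, y):
    """Pearson product-moment correlation."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

rng = random.Random(42)
x = [rng.gauss(0, 1) for _ in range(5000)]
y = [0.6 * xi + 0.8 * rng.gauss(0, 1) for xi in x]  # true correlation 0.6
r_continuous = pearson(x, y)
cut = sorted(y)[int(0.9 * len(y))]                   # skewed split: top 10% vs rest
y_dichotomous = [1.0 if yi > cut else 0.0 for yi in y]
r_dichotomized = pearson(x, y_dichotomous)           # attenuated point-biserial r
```

Tetrachoric and polyserial correlations are the standard corrections for this attenuation, which is why the study's mixed-matrix analyses recovered less biased structural estimates.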
Ph. D.
18

Tse, Kai-keung, and 謝啓強. "A survey of Hong Kong manufacturer's satisfaction on using microcomputers". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1989. http://hub.hku.hk/bib/B31264505.

Full text
19

Osama, Muhammad. "Machine learning for spatially varying data". Licentiate thesis, Uppsala universitet, Avdelningen för systemteknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-429234.

Full text
Abstract:
Many physical quantities around us vary across space or space-time. An example of a spatial quantity is the temperature across Sweden on a given day; as an example of a spatio-temporal quantity, we observe the counts of corona virus cases across the globe. Spatial and spatio-temporal data enable opportunities to answer many important questions: for example, what will the weather be like tomorrow, or where is the risk of occurrence of a disease highest in the next few days? Answering such questions requires formulating and learning statistical models. One of the challenges with spatial and spatio-temporal data is that the size of the data can be extremely large, which makes learning a model computationally costly. There are several means of overcoming this problem through matrix manipulations and approximations. In paper I, we propose a solution to this problem where the model is learned in a streaming fashion, i.e., as the data arrives point by point. This also allows for efficient updating of the learned model based on newly arriving data, which is very pertinent to spatio-temporal data. Another interesting problem in the spatial context is to study the causal effect that an exposure variable has on a response variable. For instance, policy makers might be interested in knowing whether increasing the number of police in a district has the desired effect of reducing crime there. The challenge here is that of spatial confounding. A spatial map of the number of police plotted against a spatial map of the number of crimes in different districts might show a clear association between these two quantities. However, there might be a third, unobserved confounding variable that makes both quantities small and large together. In paper II, we propose a solution for estimating causal effects in the presence of such a confounding variable. Another common type of spatial data is point or event data, i.e., the occurrence of events across space.
The event could, for example, be a reported disease or crime, and one may be interested in predicting the counts of the event in a given region. A fundamental challenge here is to quantify the uncertainty in the predicted counts in a robust manner. In paper III, we propose a regularized criterion for learning a predictive model of counts of events across spatial regions. The regularization ensures tighter prediction intervals around the predicted counts, with valid coverage irrespective of the degree of model misspecification.
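The streaming estimation idea of paper I, updating a learned model point by point as data arrives rather than refitting on the full dataset, can be illustrated with the simplest possible streaming statistics: a running mean and variance (Welford's algorithm). The class below is an illustrative reduction, not the paper's actual model.

```python
class StreamingStats:
    """Running mean and sample variance, updated one observation at a time."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self._m2 = 0.0  # sum of squared deviations from the running mean

    def update(self, x):
        """Incorporate one newly arriving observation (Welford's algorithm)."""
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self._m2 += delta * (x - self.mean)

    @property
    def variance(self):
        return self._m2 / (self.n - 1) if self.n > 1 else 0.0

s = StreamingStats()
for obs in [2.0, 4.0, 6.0]:  # data arriving point by point
    s.update(obs)
# s.mean == 4.0, s.variance == 4.0
```

Each arriving observation costs O(1) time and memory regardless of how much data has already been seen, which is what makes streaming updates attractive for large spatio-temporal datasets.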
20

Sternelöv, Gustav. "Analysis of forklift data – A process for decimating data and analyzing fork positioning functions". Thesis, Linköpings universitet, Statistik och maskininlärning, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-139213.

Full text
Abstract:
This thesis investigates the possibilities and effects of reducing CAN data collected from forklifts. The purpose of reducing the data was to make it possible to export and manage data for multiple forklifts over a relatively long period of time. To that end, an autoregressive filter was implemented for filtering and decimating the data. A further aim of the decimation was to generate a data set that could be used for analyzing lift sequences, and in particular the usage of fork adjustment functions during lift sequences. The findings in the report are that an AR(18) model works well for filtering and decimating the data. Information losses are unavoidable but kept at a relatively low level, and the size of the data becomes manageable. Each row in the decimated data is labeled as belonging or not belonging to a lift sequence, given a manually specified definition of the lift-sequence event. From the lift sequences, information about the lift, such as the number of usages of each fork adjustment function, load weight, and fork height, is gathered. The analysis of the lift sequences showed that the lift/lower function is used 4.75 times per lift sequence on average and the reach function 3.23 times on average. For the side shift the mean is 0.35 per lift sequence, and for the tilt the mean is 0.10. Moreover, the struggling time was found to be about 17% of the total lift-sequence time on average. The proportion of the lift that is struggling time was also shown to differ between drivers, with the lowest mean proportion being 7% and the highest 30%.
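The filter-then-decimate step can be sketched as follows. The thesis fits an AR(18) model for this purpose; here a plain moving-average low-pass filter stands in, followed by keeping every k-th sample, and the window length and decimation factor are illustrative choices.

```python
def decimate(signal, k=4, window=3):
    """Low-pass filter with a centred moving average, then keep every k-th sample."""
    pad = window // 2
    # extend the edges so the centred window is defined everywhere
    padded = signal[:1] * pad + list(signal) + signal[-1:] * pad
    smoothed = [sum(padded[i:i + window]) / window for i in range(len(signal))]
    return smoothed[::k]

raw = [1.0, 1.0, 5.0, 1.0, 1.0, 1.0, 1.0, 1.0]  # CAN-like samples with a spike
reduced = decimate(raw, k=4)                     # 8 samples reduced to 2
```

Filtering before downsampling is what keeps the information loss low: without the low-pass step, decimation would alias short transients instead of attenuating them.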
APA, Harvard, Vancouver, ISO and other styles
21

Wyatt, Timothy Robert. "Development and evaluation of an educational software tool for geotechnical engineering". Diss., Georgia Institute of Technology, 2000. http://hdl.handle.net/1853/20225.

Full text
APA, Harvard, Vancouver, ISO and other styles
22

Jenkins-Todd, Derone I. (Derone Ilene). "Determination of Author Characteristics and Content of Educational Computing Articles in Community/Junior College Serials Literature, 1977-1991". Thesis, University of North Texas, 1993. https://digital.library.unt.edu/ark:/67531/metadc278735/.

Full text
Abstract (summary):
The study was undertaken: (a) to categorize the contents of educational computing articles using a taxonomy developed by Knezek, Rachlin, and Scannell (1988), (b) to examine the trends in educational computing subject matter addressed in community/junior college journals between 1977 and 1991, and (c) to identify and analyze specific characteristics of contributing authors and their employing institutions which might explain writing and publication biases.
APA, Harvard, Vancouver, ISO and other styles
23

Park, Chang Yun. "Predicting deterministic execution times of real-time programs /". Thesis, Connect to this title online; UW restricted, 1992. http://hdl.handle.net/1773/6978.

Full text
APA, Harvard, Vancouver, ISO and other styles
24

Bui, Michael. "Path finding on a spherical self-organizing map using distance transformations". Thesis, The University of Sydney, 2008. http://hdl.handle.net/2123/9290.

Full text
Abstract (summary):
Spatialization methods create visualizations that allow users to analyze high-dimensional data in an intuitive manner and facilitate the extraction of meaningful information. Just as geographic maps are simplified representations of geographic spaces, these visualizations are essentially maps of abstract data spaces that are created through dimensionality reduction. While we are familiar with geographic maps for path planning/finding applications, research into using maps of high-dimensional spaces for such purposes has been largely ignored. However, literature has shown that it is possible to use these maps to track temporal and state changes within a high-dimensional space. A popular dimensionality reduction method that produces a mapping for these purposes is the Self-Organizing Map. By using its topology preserving capabilities with a colour-based visualization method known as the U-Matrix, state transitions can be visualized as trajectories on the resulting mapping. Through these trajectories, one can gather information on the transition path between two points in the original high-dimensional state space. This raises the interesting question of whether or not the Self-Organizing Map can be used to discover the transition path between two points in an n-dimensional space. In this thesis, we use a spherically structured Self-Organizing Map called the Geodesic Self-Organizing Map for dimensionality reduction and the creation of a topological mapping that approximates the n-dimensional space. We first present an intuitive method for a user to navigate the surface of the Geodesic SOM. A new application of the distance transformation algorithm is then proposed to compute the path between two points on the surface of the SOM, which corresponds to two points in the data space. Discussions will then follow on how this application could be improved using some form of surface shape analysis.
The new approach presented in this thesis would then be evaluated by analyzing the results of using the Geodesic SOM for manifold embedding and by carrying out data analyses using carbon dioxide emissions data.
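The core idea of a distance transformation for path finding on a map surface can be sketched as follows. This is an illustrative sketch only, on a toy square grid standing in for the Geodesic SOM's spherical lattice: a breadth-first distance transform is computed from the target node, and the path is recovered by walking downhill on those distances.

```python
from collections import deque

def distance_transform(neighbors, target):
    """Breadth-first distance of every node from `target` (unweighted graph)."""
    dist = {target: 0}
    queue = deque([target])
    while queue:
        node = queue.popleft()
        for nb in neighbors[node]:
            if nb not in dist:
                dist[nb] = dist[node] + 1
                queue.append(nb)
    return dist

def shortest_path(neighbors, start, target):
    """Walk downhill on the distance transform from start to target."""
    dist = distance_transform(neighbors, target)
    path = [start]
    while path[-1] != target:
        # Some neighbor always lies one step closer to the target.
        path.append(min(neighbors[path[-1]], key=dist.get))
    return path

# Toy 4x4 grid standing in for the SOM lattice (a sphere in the thesis).
def grid_neighbors(n):
    nbrs = {}
    for r in range(n):
        for c in range(n):
            nbrs[(r, c)] = [(r + dr, c + dc)
                            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                            if 0 <= r + dr < n and 0 <= c + dc < n]
    return nbrs

path = shortest_path(grid_neighbors(4), (0, 0), (3, 3))
print(len(path) - 1)  # 6 steps: the Manhattan distance on this grid
```

On the thesis's spherical SOM the same two-phase scheme applies, with the geodesic neighbourhood of each map node replacing the four-connected grid here.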
APA, Harvard, Vancouver, ISO and other styles
25

Moser, Robert B. Computer Science & Engineering Faculty of Engineering UNSW. "A methodology for the design of educational computer adventure games". Awarded by: University of New South Wales. Computer Science and Engineering, 2000. http://handle.unsw.edu.au/1959.4/18613.

Full text
Abstract (summary):
This work undertakes a systematic study of various elements from differing fields which apply to the construction of computer-aided instructional systems. Drawing upon these works, the potential for instruction in computer adventure games is recognised, and previous work in the area analysed with respect to the theoretical findings. Based both on this theory and the germane advice of practicing game designers, a methodology for the design of educational computer adventure games is laid out in detail. The method described is then used to construct a sample game with basic programming skills as the pedagogical content, and this sample game is tested and the results examined. An informed approach to the design of computer-assisted instruction must begin with an understanding of how people acquire and store new information or skills. Cognitive psychology provides a number of conflicting models of the human information processing system, but these differing theories have a common basis which can be exploited in an attempt to make material more accessible. Instructional design describes a methodology for the analysis of pedagogical goals and demonstrates methods of learning support which can and should be incorporated into the new setting. In this field also is a judgement of different media, including computers, and their ability to provide the necessary elements of learning. By understanding the strengths and weaknesses of the medium the limits of what is possible within it can be catered to, and its failings augmented with supplemental materials. Both educational psychology and instructional design indicate benefits to learning from a correctly motivated learner, and the theory of engagement is therefore also scrutinised for elements helpful to the educational designer. The convergence of the knowledge gleaned from these various fields leads to one possible match to the desired criteria for computer-mediated instruction; the computerised fantasy adventure game. 
This being the case, other work in the field is examined for relevance, and it is found that a detailed methodology for the construction of such games does not exist. Existing material is combined with the aforementioned theoretical work and a survey of what is known about practical game design to create such a framework. It is proposed that through its use the systematic inclusion of educational content in an engaging environment will be facilitated. The hypothesis is examined, and an action research approach found to be called for. As such, the proposed methodology is used to create a sample game, and the process of its design used to inform the proposed methodology. The final form is described in detail, and the process of its application to the sample game elucidated. A prototype of the game is used with a number of test subjects to evaluate the game's level of success at both engagement and the imparting of content material.
APA, Harvard, Vancouver, ISO and other styles
26

Bettesworth, Leanne Rae. "Administrators' use of data to guide decision-making /". view abstract or download file of text, 2006. http://proquest.umi.com/pqdweb?did=1192187491&sid=1&Fmt=2&clientId=11238&RQT=309&VName=PQD.

Full text
Abstract (summary):
Thesis (Ph. D.)--University of Oregon, 2006.
Typescript. Includes vita and abstract. "This study builds on an emerging body of research literature that cites the importance of data driven decision-making in creating more effective schools ... The purpose of this study is to determine if participation in training sessions that teach pre-service administrators how to use statistics significantly increases their ability and efficacy in using data for decision making ... Findings from this study will inform training, instruction, and practical applications in data analysis and data based decision-making in the Initial Administrative Licensure (IAL) program at the University of Oregon and similar leadership training and preparation programs"--Introd. Includes bibliographical references (leaves 154-160). Also available for download via the World Wide Web; free to University of Oregon users.
APA, Harvard, Vancouver, ISO and other styles
27

Beck, Michael Joseph. "Educational software that requires no training to use". CSUSB ScholarWorks, 1997. https://scholarworks.lib.csusb.edu/etd-project/1182.

Full text
Abstract (summary):
The goal of this project is to create a piece of educational software that most anyone can use without prior instruction. The intended audience is secondary level students and up. The content of the software is in the form of a data bank on vertebrates and invertebrates of the Caribbean ocean.
APA, Harvard, Vancouver, ISO and other styles
28

Majeke, Lunga. "Preliminary investigation into estimating eye disease incidence rate from age specific prevalence data". Thesis, University of Fort Hare, 2011. http://hdl.handle.net/10353/464.

Full text
Abstract (summary):
This study presents a methodology for estimating the incidence rate from the age-specific prevalence data of three different eye diseases. We consider both situations where the mortality may differ from one person to another, with and without the disease. The method used was developed by Marvin J. Podgor for estimating incidence rates from prevalence data. It applies logistic regression to obtain the smoothed prevalence rates that help in obtaining the incidence rate. The study concluded that the use of logistic regression can produce a meaningful model, and that the incidence rates of these diseases were not affected by the assumption of differential mortality.
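The logistic-smoothing step can be sketched numerically. This is a minimal sketch of the idea, not Podgor's data or exact derivation: under the simplifying assumptions of an irreversible disease and non-differential mortality, the incidence rate implied by a prevalence curve p(a) is p'(a)/(1 - p(a)), which for a logistic curve p(a) = σ(b0 + b1·a) reduces to b1·p(a). The survey counts and parameter values below are synthetic and illustrative.

```python
import numpy as np

def fit_logistic(age, cases, n):
    """Newton-Raphson fit of age-specific prevalence ~ logistic(b0 + b1*age)."""
    X = np.column_stack([np.ones_like(age, dtype=float), age])
    beta = np.zeros(2)
    for _ in range(25):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = n * p * (1 - p)                  # binomial IRLS weights
        grad = X.T @ (cases - n * p)
        hess = X.T @ (X * W[:, None])
        beta += np.linalg.solve(hess, grad)
    return beta

def incidence_rate(age, beta):
    """lambda(a) = p'(a) / (1 - p(a)) = b1 * p(a) for a logistic p."""
    p = 1.0 / (1.0 + np.exp(-(beta[0] + beta[1] * age)))
    return beta[1] * p

# Synthetic age-specific prevalence survey: 500 people examined per group.
age = np.array([45.0, 55.0, 65.0, 75.0, 85.0])
n = np.array([500, 500, 500, 500, 500])
true_p = 1.0 / (1.0 + np.exp(-(-6.0 + 0.07 * age)))
cases = np.round(n * true_p).astype(int)     # observed with disease

beta = fit_logistic(age, cases, n)
print(np.round(beta, 2))                     # close to (-6.0, 0.07)
print(incidence_rate(age, beta))             # incidence rises with age
```

The fitted curve smooths the raw prevalence proportions, and the implied incidence increases with age because both b1 and p(a) are positive and p(a) grows.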
APA, Harvard, Vancouver, ISO and other styles
29

Offei, Felix. "Denoising Tandem Mass Spectrometry Data". Digital Commons @ East Tennessee State University, 2017. https://dc.etsu.edu/etd/3218.

Full text
Abstract (summary):
Protein identification using tandem mass spectrometry (MS/MS) has proven to be an effective way to identify proteins in a biological sample. An observed spectrum is constructed from the data produced by the tandem mass spectrometer. A protein can be identified if the observed spectrum aligns with the theoretical spectrum. However, data generated by the tandem mass spectrometer are affected by errors thus making protein identification challenging in the field of proteomics. Some of these errors include wrong calibration of the instrument, instrument distortion and noise. In this thesis, we present a pre-processing method, which focuses on the removal of noisy data with the hope of aiding in better identification of proteins. We employ the method of binning to reduce the number of noise peaks in the data without sacrificing the alignment of the observed spectrum with the theoretical spectrum. In some cases, the alignment of the two spectra improved.
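The binning idea can be sketched generically: divide the m/z axis into fixed-width bins and keep only the most intense peak per bin, on the premise that noise peaks rarely dominate their bin. The bin width and toy spectrum below are assumptions for illustration, not the thesis's actual parameters or data.

```python
import numpy as np

def denoise_by_binning(mz, intensity, bin_width=1.0):
    """Keep only the most intense peak in each fixed-width m/z bin."""
    mz = np.asarray(mz, dtype=float)
    intensity = np.asarray(intensity, dtype=float)
    bins = np.floor(mz / bin_width).astype(int)
    keep = []
    for b in np.unique(bins):
        idx = np.where(bins == b)[0]
        keep.append(idx[np.argmax(intensity[idx])])  # per-bin maximum
    keep = np.array(sorted(keep))
    return mz[keep], intensity[keep]

# Toy spectrum: three strong fragment peaks plus low-intensity noise peaks.
mz = [100.1, 100.4, 200.2, 200.3, 300.5, 300.9]
inten = [50.0, 3.0, 80.0, 5.0, 2.0, 60.0]
clean_mz, clean_int = denoise_by_binning(mz, inten)
print(clean_mz)   # the three signal peaks survive
```

Because the surviving peaks keep their original m/z values, the observed spectrum can still be aligned against the theoretical spectrum after denoising, which is the property the abstract emphasizes.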
APA, Harvard, Vancouver, ISO and other styles
30

Ghassemi, Ali. "Nonparametric geostatistical estimation of soil physical properties". Thesis, McGill University, 1987. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=63904.

Full text
APA, Harvard, Vancouver, ISO and other styles
31

Merlo, Stefania. "Contextualising intra-site spatial analysis : the role of three-dimensional GIS modelling in understanding excavation data". Thesis, University of Cambridge, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.609386.

Full text
APA, Harvard, Vancouver, ISO and other styles
32

Lo, Kin-keung, e 羅建強. "An investigation of computer assisted testing for civil engineering students in a Hong Kong technical institute". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1988. http://hub.hku.hk/bib/B38627000.

Full text
APA, Harvard, Vancouver, ISO and other styles
33

Alvarado, Mantecon Jesus Gerardo. "Towards the Automatic Classification of Student Answers to Open-ended Questions". Thesis, Université d'Ottawa / University of Ottawa, 2019. http://hdl.handle.net/10393/39093.

Full text
Abstract (summary):
One of the main research challenges nowadays in the context of Massive Open Online Courses (MOOCs) is how to automate the evaluation of text-based assessments effectively. Text-based assessments, such as essay writing, have been shown to be better indicators of a higher level of understanding than machine-scored assessments (e.g. multiple-choice questions). Nonetheless, due to the rapid growth of MOOCs, text-based evaluation has become a difficult task for human markers, creating the need for automated systems for grading. In this thesis, we focus on the automated short answer grading task (ASAG), which automatically assesses natural language answers to open-ended questions into correct and incorrect classes. We propose an ensemble supervised machine learning approach that relies on two types of classifiers: a response-based classifier, which centers around feature extraction from available responses, and a reference-based classifier, which considers the relationships between responses, model answers and questions. For each classifier, we explored a set of features based on words and entities. For the response-based classifier, we tested and compared 5 features: traditional n-gram models, entity URIs (Uniform Resource Identifiers) and entity mentions both extracted using a semantic annotation API, entity mention embeddings based on GloVe, and entity URI embeddings extracted from Wikipedia. For the reference-based classifier, we explored fourteen features: cosine similarity between sentence embeddings from student answers and model answers, number of overlapping elements (words, entity URIs, entity mentions) between student answers and model answers or question text, Jaccard similarity coefficient between student answers and model answers or question text (based on words, entity URIs or entity mentions), and a sentence embedding representation.
We evaluated our classifiers on three datasets, two of which belong to the SemEval ASAG competition (Dzikovska et al., 2013). Our results show that, in general, reference-based features perform much better than response-based features in terms of accuracy and macro average f1-score. Within the reference-based approach, we observe that the use of S6 embedding representation, which considers question text, student and model answer, generated the best performing models. Nonetheless, their combination with other similarity features helped build more accurate classifiers. As for response-based classifiers, models based on traditional n-gram features remained the best models. Finally, we combined our best reference-based and response-based classifiers using an ensemble learning model. Our ensemble classifiers combining both approaches achieved the best results for one of the evaluation datasets, but underperformed on the remaining two. We also compared the best two classifiers with some of the main state-of-the-art results on the SemEval competition. Our final embedded meta-classifier outperformed the top-ranking result on the SemEval Beetle dataset and our top classifier on SemEval SciEntBank, trained on reference-based features, obtained the 2nd position. In conclusion, the reference-based approach, powered mainly by sentence level embeddings and other similarity features, proved to generate the most efficient models in two out of three datasets and the ensemble model was the best on the SemEval Beetle dataset.
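Two of the reference-based similarity features named in this abstract, the Jaccard coefficient over tokens and cosine similarity over embedding vectors, can be sketched as follows. The sentences and vectors are made-up examples; real sentence embeddings would come from a trained model.

```python
import math

def jaccard(tokens_a, tokens_b):
    """Jaccard coefficient between two token sets: |A & B| / |A | B|."""
    a, b = set(tokens_a), set(tokens_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def cosine(vec_a, vec_b):
    """Cosine similarity between two dense vectors (e.g. sentence embeddings)."""
    dot = sum(x * y for x, y in zip(vec_a, vec_b))
    norm = (math.sqrt(sum(x * x for x in vec_a))
            * math.sqrt(sum(y * y for y in vec_b)))
    return dot / norm if norm else 0.0

# Hypothetical model answer vs. student answer.
model = "the current flows because the switch is closed".split()
student = "current flows since the switch is closed".split()
print(round(jaccard(student, model), 3))  # 0.75
```

Each such score becomes one numeric feature fed to the supervised classifier, alongside the word- and entity-based overlap counts the abstract lists.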
APA, Harvard, Vancouver, ISO and other styles
34

Bodily, Robert Gordon. "Designing, Developing, and Implementing Real-Time Learning Analytics Student Dashboards". BYU ScholarsArchive, 2018. https://scholarsarchive.byu.edu/etd/7258.

Full text
Abstract (summary):
This document is a multiple-article format dissertation that discusses the iterative design, development, and evaluation processes necessary to create high quality learning analytics dashboard systems. With the growth of online and blended learning environments, the amount of data that researchers and practitioners collect from learning experiences has also grown. The field of learning analytics is concerned with using this data to improve teaching and learning. Many learning analytics systems focus on instructors or administrators, but these tools fail to involve students in the data-driven decision-making process. Providing feedback to students and involving students in this decision-making process can increase intrinsic motivation and help students succeed in online and blended environments. To support online and blended teaching and learning, the focus of this document is student-facing learning analytics dashboards. The first article in this dissertation is a literature review on student-facing learning analytics reporting systems. This includes any system that tracks learning analytics data and reports it directly to students. The second article in this dissertation is a design and development research article that used a practice-centered approach to iteratively design and develop a real-time student-facing dashboard. The third article in this dissertation is a design-based research article focused on improving student use of learning analytics dashboard tools.
APA, Harvard, Vancouver, ISO and other styles
35

Hechter, Trudie. "A comparison of support vector machines and traditional techniques for statistical regression and classification". Thesis, Stellenbosch : Stellenbosch University, 2004. http://hdl.handle.net/10019.1/49810.

Full text
Abstract (summary):
Thesis (MComm)--Stellenbosch University, 2004.
ENGLISH ABSTRACT: Since its introduction in Boser et al. (1992), the support vector machine has become a popular tool in a variety of machine learning applications. More recently, the support vector machine has also been receiving increasing attention in the statistical community as a tool for classification and regression. In this thesis support vector machines are compared to more traditional techniques for statistical classification and regression. The techniques are applied to data from a life assurance environment for a binary classification problem and a regression problem. In the classification case the problem is the prediction of policy lapses using a variety of input variables, while in the regression case the goal is to estimate the income of clients from these variables. The performance of the support vector machine is compared to that of discriminant analysis and classification trees in the case of classification, and to that of multiple linear regression and regression trees in regression, and it is found that support vector machines generally perform well compared to the traditional techniques.
AFRIKAANSE OPSOMMING: Sedert die bekendstelling van die ondersteuningspuntalgoritme in Boser et al. (1992), het dit 'n populêre tegniek in 'n verskeidenheid masjienleerteorie applikasies geword. Meer onlangs het die ondersteuningspuntalgoritme ook meer aandag in die statistiese gemeenskap begin geniet as 'n tegniek vir klassifikasie en regressie. In hierdie tesis word ondersteuningspuntalgoritmes vergelyk met meer tradisionele tegnieke vir statistiese klassifikasie en regressie. Die tegnieke word toegepas op data uit 'n lewensversekeringomgewing vir 'n binêre klassifikasie probleem sowel as 'n regressie probleem. In die klassifikasiegeval is die probleem die voorspelling van polisvervallings deur 'n verskeidenheid invoer veranderlikes te gebruik, terwyl in die regressiegeval gepoog word om die inkomste van kliënte met behulp van hierdie veranderlikes te voorspel. Die resultate van die ondersteuningspuntalgoritme word met dié van diskriminant analise en klassifikasiebome vergelyk in die klassifikasiegeval, en met veelvoudige linêere regressie en regressiebome in die regressiegeval. Die gevolgtrekking is dat ondersteuningspuntalgoritmes oor die algemeen goed vaar in vergelyking met die tradisionele tegnieke.
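A comparison of this kind can be sketched with scikit-learn (an assumption; the abstract does not name the software used), here on synthetic data standing in for the proprietary life-assurance data, for the binary classification case:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic binary-classification data in place of the policy-lapse data.
X, y = make_classification(n_samples=600, n_features=10, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=0)

# Fit both models on the same split and compare held-out accuracy.
accuracy = {}
for name, model in [("support vector machine", SVC(kernel="rbf")),
                    ("classification tree", DecisionTreeClassifier(random_state=0))]:
    accuracy[name] = model.fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name}: {accuracy[name]:.3f}")
```

The thesis's regression comparison follows the same pattern with support vector regression against multiple linear regression and regression trees, scored by prediction error instead of accuracy.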
APA, Harvard, Vancouver, ISO and other styles
36

Cannon, Paul C. "Extending the Information Partition Function: Modeling Interaction Effects in Highly Multivariate, Discrete Data". BYU ScholarsArchive, 2007. https://scholarsarchive.byu.edu/etd/1234.

Full text
Abstract (summary):
Because of the huge amounts of data made available by the technology boom in the late twentieth century, new methods are required to turn data into usable information. Much of this data is categorical in nature, which makes estimation difficult in highly multivariate settings. In this thesis we review various multivariate statistical methods, discuss various statistical methods of natural language processing (NLP), and discuss a general class of models described by Erosheva (2002) called generalized mixed membership models. We then propose extensions of the information partition function (IPF) derived by Engler (2002), Oliphant (2003), and Tolley (2006) that will allow modeling of discrete, highly multivariate data in linear models. We report results of the modified IPF model on the World Health Organization's Survey on Global Aging (SAGE).
APA, Harvard, Vancouver, ISO and other styles
37

Kettermann, Anna. "Estimation of Standardized Mortality Ratio in Geographic Epidemiology". Fogler Library, University of Maine, 2004. http://www.library.umaine.edu/theses/pdf/KettermanA2004.pdf.

Full text
APA, Harvard, Vancouver, ISO and other styles
38

Rashotte, Angela L. "Resistance to technology integration in elementary teaching by the technologically proficient classroom teacher". Thesis, McGill University, 2004. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=83144.

Full text
Abstract (summary):
The Quebec Ministry of Education has implemented curriculum reforms that emphasize the integration of information technology into classroom teaching practices. Despite these efforts, however, many teachers appear to resist using computers in their classrooms. Some of these resistors are technologically literate! The purpose of this qualitative study is to better understand the reluctance of the technologically-literate teachers (with two to three years of experience) to integrate technology into their teaching practices.
The six teachers participating in this study completed questionnaires and were individually interviewed using an open-ended approach. The data were then analyzed using the Constant Comparative Method. The results showed that although the participants were using computers in their classrooms, they were not actually integrating technology as stipulated by the curriculum reforms. This was attributed to a number of factors, including personal limitations, job stability, lack of resources and funds, time, training, and curriculum issues.
APA, Harvard, Vancouver, ISO and other styles
39

Daffue, Ruan Albert. "Applying patient-admission predictive algorithms in the South African healthcare system". Thesis, Stellenbosch : Stellenbosch University, 2013. http://hdl.handle.net/10019.1/79897.

Full text
Abstract (summary):
Thesis (MScEng)--Stellenbosch University, 2013.
ENGLISH ABSTRACT: Predictive analytics in healthcare has become one of the major focus areas in healthcare delivery worldwide. Due to the massive amount of healthcare data being captured, healthcare providers and health insurers are investing in predictive analytics and its enabling technologies to provide valuable insight into a large variety of healthcare outcomes. One of the latest developments in the field of healthcare predictive modelling (PM) was the launch of the Heritage Health Prize; a competition that challenges individuals from across the world to develop a predictive model that successfully identifies the patients at risk of admission to hospital from a given patient population. The patient-admission predictive algorithm (PAPA) is aimed at reducing the number of unnecessary hospitalisations that needlessly constrain healthcare service delivery worldwide. The aim of the research presented is to determine the feasibility and value of applying PAPAs in the South African healthcare system as part of a preventive care intervention strategy. A preventive care intervention strategy is a term used to describe an out-patient hospital service, aimed at providing preventive care in an effort to avoid unnecessary hospitalisations from occurring. The thesis utilises quantitative and qualitative techniques. This included a review of the current and historic PM applications in healthcare to determine the major expected shortfalls and barriers to implementation of PAPAs, as well as the institutional and operational requirements of these predictive algorithms. The literature study is concluded with a review of the current state of affairs in the South African healthcare system to, firstly, articulate the need for PAPAs and, secondly, to determine whether the public and private sectors provide a suitable platform for implementation (evaluated based on the operational and institutional requirements of PAPAs). 
Furthermore, a methodology to measure and analyse the potential value-add of a PAPA care intervention strategy was designed and developed. The methodology required a survey of the industry leaders in the private healthcare sector of South Africa to identify, firstly, the current performance foci and, secondly, the factors that compromise the performance of these organisations to deliver high quality, resource-effective care. A quantitative model was developed and applied to an industry leader in the private healthcare sector of South Africa, in order to gauge the resultant impact of a PAPA care intervention strategy on healthcare provider performance. Lastly, in an effort to ensure the seamless implementation and operation of PAPAs, an implementation framework was developed to address the strategic, tactical, and operational challenges of applying predictive analytics and preventive care strategies similar to PAPAs. The research found that the application of PAPAs in the public healthcare sector of South Africa is infeasible. The private healthcare sector, however, was considered a suitable platform to implement PAPAs, as this sector satisfies the institutional and operational requirements of PAPAs. The value-add model found that a PAPA intervention strategy will add significant value to the performance of healthcare providers in the private healthcare sector of South Africa. Noteworthy improvements are expected in the ability of healthcare provider’s to coordinate patient care, patient-practitioner relationships, inventory service levels, and staffing level efficiency and effectiveness. A slight decrease in the financial operating margin, however, was documented. The value-add methodology and implementation support framework provides a suitable platform for future researchers to explore the collaboration of preventive care and PM in an effort to improve healthcare resource management in hospitals. 
In conclusion, patient-admission predictive algorithms provide improved evidence-based decision making for preventive care intervention strategies. An efficient and effective preventive care intervention strategy improves healthcare provider performance and, therefore, adds significant value to these organisations. With the proper planning and implementation support, the application of PAPA care intervention strategies will change the way healthcare is delivered worldwide.
AFRIKAANSE OPSOMMING: Vooruitskattingsanalises in gesondheidsorg het ontwikkel in een van die mees belangrike fokusareas in die lewering van kwaliteit gesondheidsorg in ontwikkelde lande. Gesondheidsorgverskaffers en lewensversekeraars belê in vooruitskattingsanalise en ooreenstemmende tegnologieë om groot hoeveelhede gesondheidsorg pasiënt-data vas te lê, wat waardevolle insigte bied ten opsigte van ʼn groot verskeidenheid van gesondheidsorg-uitkomstes. Een van die nuutste ontwikkelinge in die veld van gesondheidsorg vooruitskattingsanalises, was die bekendstelling van die “Heritage Health Prize”, 'n kompetisie wat individue regoor die wêreld uitdaag om 'n vooruitskattingsalgoritme te ontwikkel wat pasiënte identifiseer wat hoogs waarskynlik gehospitaliseer gaan word in die volgende jaar en as bron-intensief beskou word as gevolg van die beraamde tyd wat hierdie individue in die hospitaal sal deurbring. Die pasiënt-toelating vooruitskattingsalgoritme (PTVA) het ten doel om onnodige hospitaliserings te identifiseer en te voorkom tem einde verbeterde hulpbronbestuur in gesondheidsorg wêreldwyd te bewerkstellig. Die doel van die hierdie projek is om die uitvoerbaarheid en waarde van die toepassing van PTVAs, as 'n voorkomende sorg intervensiestrategie, in die Suid-Afrikaanse gesondheidsorgstelsel te bepaal. 'n Voorkomende sorg intervensiestrategie poog om onnodige hospitaliserings te verhoed deur die nodige sorgmaatreëls te verskaf aan hoë-riskio pasiënte, sonder om hierdie individue noodwendig te hospitaliseer. Die tesis maak gebruik van kwantitatiewe en kwalitatiewe tegnieke. Dit sluit in 'n hersiening van die huidige en historiese vooruitskattings modelle in die gesondheidsorgsektor om die verwagte struikelblokke in die implementering van PTVAs te identifiseer, asook die institusionele en operasionele vereistes van hierdie vooruitskattingsalgoritmes te bepaal. 
Die literatuurstudie word afgesluit met 'n oorsig van die huidige stand van sake in die Suid-Afrikaanse gesondheidsorgstelsel om, eerstens, die behoefte vir PTVAs te identifiseer en, tweedens, om te bepaal of die openbare en private sektore 'n geskikte platform vir implementering bied (gebaseer op die operasionele en institusionele vereistes van PTVAs). Verder word 'n metodologie ontwerp en ontwikkel om die potensiële waarde-toevoeging van 'n PTVA sorg intervensiestrategie te bepaal. Die metode vereis 'n steekproef van die industrieleiers in die private gesondheidsorgsektor van Suid-Afrika om die volgende te identifiseer: die huidige hoë-prioriteit sleutel prestasie aanwysers (SPAs), en die faktore wat die prestasie van hierdie organisasies komprimeer om hoë gehalte, hulpbron-effektiewe sorg te lewer. 'n Kwantitatiewe model is ontwikkel en toegepas op een industrieleier in die private Stellenbosch gesondheidsorgsektor van Suid-Afrika, om die gevolglike impak van 'n PTVA sorg intervensiestrategie op prestasieverbetering te meet. Ten slotte, in 'n poging om te verseker dat die implementering en werking van PTVAs glad verloop, is 'n implementeringsraamwerk ontwikkel om die strategiese, taktiese en operasionele uitdagings aan te spreek in die toepassing van vooruitskattings analises en voorkomende sorg strategieë soortgelyk aan PTVAs. Die navorsing het bevind dat die toepassing van PTVAS in die openbare gesondheidsorgsektor van Suid-Afrika nie lewensvatbaar is nie. Die private gesondheidsorgsektor word egter beskou as 'n geskikte platform om PTVAs te implementeer, weens die bevrediging van die institusionele en operasionele vereistes van PTVAs. Die waarde-toevoegings model het bevind dat 'n PTVA intervensiestrategie beduidende waarde kan toevoeg tot die prestasieverbetering van gesondheidsorgverskaffers in die private gesondheidsorgsektor van Suid-Afrika. 
Die grootste verbetering word in die volgende SPAs verwag; sorg koördinasie, dokter-pasiënt verhoudings, voorraad diensvlakke, en personeel doeltreffendheid en effektiwiteit. 'n Effense afname in die finansiële bedryfsmarge word egter gedokumenteer. 'n Implementering-ondersteuningsraamwerk is ontwikkel in 'n poging om die sleutel strategiese, taktiese en operasionele faktore in die implementering en uitvoering van 'n PTVA sorg intervensiestrategie uit te lig. Die waarde-toevoegings metodologie en implementering ondersteuning raamwerk bied 'n geskikte platform vir toekomstige navorsers om die rol van vooruitskattings modelle in voorkomende sorg te ondersoek, in 'n poging om hulpbronbestuur in hospitale te verbeter. Ten slotte, PTVAs verbeter bewysgebaseerde besluitneming vir voorkomende sorg intervensiestrategieë. 'n Doeltreffende en effektiewe voorkomende sorg intervensiestrategie voeg aansienlike waarde tot die algehele prestasieverbetering van gesondheidsorgverskaffers. Met behoorlike beplanning en ondersteuning met implementering, sal PTVA sorg intervensiestrategieë die manier waarop gesondheidsorg gelewer word, wêreldwyd verander.
APA, Harvard, Vancouver, ISO, and other styles
40

Chin, Christine Hui Li. "The effects of computer-based tests on the achievement, anxiety and attitudes of grade 10 science students". Thesis, University of British Columbia, 1990. http://hdl.handle.net/2429/29484.

Full text
Abstract (summary):
The purpose of this study was to compare the achievement and test anxiety level of students taking a conventional paper-and-pencil science test comprising multiple-choice questions, and a computer-based version of the same test. The study assessed the equivalence of the computer-based and paper-and-pencil tests in terms of achievement scores and item characteristics, explored the relationship between computer anxiety and previous computer experience, and investigated the affective impact of computerized testing on the students. A 2 X 2 (mode of test administration by gender) factorial design was used. A sample of 54 male and 51 female Grade 10 students participated in the study. Subjects were blocked by gender and their scores on a previous school-based science exam. They were then randomly assigned to take either the computer-based test or the paper-and-pencil test, both versions of which were identical in length, item content and sequence. Three days before the test, all students were given the "Attitude questionnaire" which included pre-measures of test and computer anxiety. Immediately after taking the test, students in the computer-based group completed the "Survey of attitudes towards testing by computers" questionnaire which assessed their previous computer experience, their test anxiety and computer anxiety level while taking the test, and their reactions towards computer-based testing. Students in the paper-and-pencil test group answered the "Survey of attitudes towards testing" questionnaire which measured their test anxiety level while they were taking the paper-and-pencil test. The results indicate that the mean achievement score on the science test was significantly higher for the group taking the computer-based test. No significant difference in mean scores between sexes was observed; there was also no interaction effect between mode of test administration and gender. 
The test anxiety level was not significantly different between the groups taking the two versions of the test. A significant relationship existed between students' prior computer experience and their computer anxiety before taking the test. However, there was no significant relationship between previous computer experience and the computer anxiety evoked as a result of taking the test on the computer. Hence, the change in computer anxiety due to taking the test was not explained by computer experience. Of the students who took the computer-based test, 71.2 % said that if given a choice, they would prefer to take the test on a computer. Students indicated that they found the test easier, more convenient to answer because they did not have to write, erase mistakes or fill in bubbles on a scannable sheet, and faster to take when compared to a paper-and-pencil test. Negative responses to the computer-based test included the difficulty involved in reviewing and changing answers, having to type and use a keyboard, fear of the computer making mistakes, and a feeling of uneasiness because the medium of test presentation was unconventional. Students taking the computer-based test were more willing to guess on an item, and tended to avoid the option "I don't know." It is concluded that the computer-based and the paper-and-pencil tests were not equivalent in terms of achievement scores. Modifications in the way test items are presented on a computer-based test may change the strategies with which students approach the items. Extraneous variables incidental to the computer administration such as the inclination to guess on a question, the ease of getting cues from other questions, differences in test-taking flexibility, familiarity with computers, and attitudes towards computers may change the test-taking behaviour to the extent that a student's performance on a computer-based test and paper-and-pencil test may not be the same. 
Also, if the tasks involved in taking a test on a computer are kept simple enough, prior computer experience has little impact on the anxiety evoked in a student taking the test, and even test-takers with minimal computer experience will not be disadvantaged by having to use an unfamiliar machine.
Education, Faculty of
Curriculum and Pedagogy (EDCP), Department of
Graduate
APA, Harvard, Vancouver, ISO, and other styles
41

Slater, Alan. "How do school managers view and use data to help improve student achievement at their school?" Thesis, University of Oxford, 2014. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.711732.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Hobbs, Madison. "Automating an Engine to Extract Educational Priorities for Workforce City Innovation". Scholarship @ Claremont, 2019. https://scholarship.claremont.edu/scripps_theses/1388.

Full text
Abstract (summary):
This thesis is grounded in my work done through the Harvey Mudd College Clinic Program as Project Manager of the PilotCity Clinic Team. PilotCity is a startup whose mission is to transform small to mid-sized cities into centers of innovation by introducing employer partnerships and work-based learning to high school classrooms. The team was tasked with developing software and algorithms to automate PilotCity's programming and to extract educational insights from unstructured data sources like websites, syllabi, resumes, and more. The team helped engineer a web application to expand and facilitate PilotCity's usership, designed a recommender system to automate the process of matching employers to high school classrooms, and packaged a topic modeling module to extract educational priorities from more complex data such as syllabi, course handbooks, or other educational text data. Finally, the team explored automatically generating supplementary course resources using insights from topic models. This thesis will detail the team's process from beginning to final deliverables including the methods, implementation, results, challenges, future directions, and impact of the project.
APA, Harvard, Vancouver, ISO, and other styles
43

Beyers, Ronald Noel. "Selecting educational computer software and evaluating its use, with special reference to biology education". Thesis, Rhodes University, 1992. http://hdl.handle.net/10962/d1003649.

Full text
Abstract (summary):
In the field of Biology there is a reasonable amount of software available for educational use, but in the researcher's experience there are few teachers who take the computer into the classroom/laboratory. Teachers will happily make use of video machines and tape recorders, but a computer is a piece of apparatus they are not prepared to use in the classroom/laboratory. This thesis is an attempt to devise an educational package, consisting of a Selection Form and an Evaluation Form, which teachers can use to select and evaluate educational software in the field of Biology. The forms were designed specifically for teachers to use in preparing a computer lesson. The evaluation package also provides the teacher with a means of identifying whether the lesson has achieved its objectives, and may provide the teacher with feedback about the lesson. The data are gathered by means of a questionnaire which the pupils complete. It would appear that teachers are uncertain about purchasing software for their subject from the many catalogues available. The evaluation package implemented in this research can be regarded as the beginnings of a database for the accumulation of information to assist teachers in deciding which software to select. Evidence is provided in this thesis for the practical application of the Selection and Evaluation Forms, using Biology software.
APA, Harvard, Vancouver, ISO, and other styles
44

(6630578), Yellamraju Tarun. "n-TARP: A Random Projection based Method for Supervised and Unsupervised Machine Learning in High-dimensions with Application to Educational Data Analysis". Thesis, 2019.

Find full text
Abstract (summary):
Analyzing the structure of a dataset is a challenging problem in high-dimensions as the volume of the space increases at an exponential rate and typically, data becomes sparse in this high-dimensional space. This poses a significant challenge to machine learning methods which rely on exploiting structures underlying data to make meaningful inferences. This dissertation proposes the n-TARP method as a building block for high-dimensional data analysis, in both supervised and unsupervised scenarios.

The basic element, n-TARP, consists of a random projection framework to transform high-dimensional data to one-dimensional data in a manner that yields point separations in the projected space. The point separation can be tuned to reflect classes in supervised scenarios and clusters in unsupervised scenarios. The n-TARP method finds linear separations in high-dimensional data. This basic unit can be used repeatedly to find a variety of structures. It can be arranged in a hierarchical structure like a tree, which increases the model complexity, flexibility and discriminating power. Feature space extensions combined with n-TARP can also be used to investigate non-linear separations in high-dimensional data.

The application of n-TARP to both supervised and unsupervised problems is investigated in this dissertation. In the supervised scenario, a sequence of n-TARP based classifiers with increasing complexity is considered. The point separations are measured by classification metrics like accuracy, Gini impurity or entropy. The performance of these classifiers on image classification tasks is studied. This study provides an interesting insight into the working of classification methods. The sequence of n-TARP classifiers yields benchmark curves that put in context the accuracy and complexity of other classification methods for a given dataset. The benchmark curves are parameterized by classification error and computational cost to define a benchmarking plane. This framework splits this plane into regions of "positive-gain" and "negative-gain" which provide context for the performance and effectiveness of other classification methods. The asymptotes of benchmark curves are shown to be optimal (i.e. at Bayes Error) in some cases (Theorem 2.5.2).

In the unsupervised scenario, the n-TARP method highlights the existence of many different clustering structures in a dataset. However, not all structures present are statistically meaningful. This issue is amplified when the dataset is small, as random events may yield sample sets that exhibit separations that are not present in the distribution of the data. Thus, statistical validation is an important step in data analysis, especially in high-dimensions. However, in order to statistically validate results, often an exponentially increasing number of data samples are required as the dimensions increase. The proposed n-TARP method circumvents this challenge by evaluating statistical significance in the one-dimensional space of data projections. The n-TARP framework also results in several different statistically valid instances of point separation into clusters, as opposed to a unique "best" separation, which leads to a distribution of clusters induced by the random projection process.

The distributions of clusters resulting from n-TARP are studied. This dissertation focuses on small sample high-dimensional problems. A large number of distinct clusters are found, which are statistically validated. The distribution of clusters is studied as the dimensionality of the problem evolves through the extension of the feature space using monomial terms of increasing degree in the original features, which corresponds to investigating non-linear point separations in the projection space.

A statistical framework is introduced to detect patterns of dependence between the clusters formed with the features (predictors) and a chosen outcome (response) in the data that is not used by the clustering method. This framework is designed to detect the existence of a relationship between the predictors and response. This framework can also serve as an alternative cluster validation tool.

The concepts and methods developed in this dissertation are applied to a real world data analysis problem in Engineering Education. Specifically, engineering students' Habits of Mind are analyzed. The data at hand is qualitative, in the form of text, equations and figures. To use the n-TARP based analysis method, the source data must be transformed into quantitative data (vectors). This is done by modeling it as a random process based on the theoretical framework defined by a rubric. Since the number of students is small, this problem falls into the small sample high-dimensions scenario. The n-TARP clustering method is used to find groups within this data in a statistically valid manner. The resulting clusters are analyzed in the context of education to determine what is represented by the identified clusters. The dependence of student performance indicators like the course grade on the clusters formed with n-TARP are studied in the pattern dependence framework, and the observed effect is statistically validated. The data obtained suggests the presence of a large variety of different patterns of Habits of Mind among students, many of which are associated with significant grade differences. In particular, the course grade is found to be dependent on at least two Habits of Mind: "computation and estimation" and "values and attitudes."
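The basic n-TARP unit described above (draw random directions, keep the one-dimensional projection with the clearest point separation) can be sketched in a few lines of Python. This is an illustrative reconstruction, not the author's code: the gap-based separation score, the number of directions, and the toy data are simplifications chosen for brevity, standing in for the thesis's statistically validated separation criterion.

```python
import math
import random

def n_tarp_projection(X, n_proj=100, seed=0):
    """Among n_proj random unit directions, keep the 1-D projection of X
    whose sorted values contain the widest normalized gap -- a crude
    point-separation score standing in for the thesis's criterion."""
    rng = random.Random(seed)
    d = len(X[0])
    best_score, best_w, best_proj = -1.0, None, None
    for _ in range(n_proj):
        # Random direction, normalized to unit length.
        w = [rng.gauss(0.0, 1.0) for _ in range(d)]
        norm = math.sqrt(sum(v * v for v in w))
        w = [v / norm for v in w]
        # Project every point onto w and sort the 1-D values.
        proj = sorted(sum(wi * xi for wi, xi in zip(w, x)) for x in X)
        spread = proj[-1] - proj[0]
        if spread == 0.0:
            continue
        gap = max(b - a for a, b in zip(proj, proj[1:]))
        score = gap / spread  # fraction of the range taken by the widest gap
        if score > best_score:
            best_score, best_w, best_proj = score, w, proj
    return best_score, best_w, best_proj

# Two well-separated 20-dimensional Gaussian blobs.
rng = random.Random(1)
X = [[rng.gauss(0.0, 1.0) for _ in range(20)] for _ in range(30)] \
  + [[rng.gauss(6.0, 1.0) for _ in range(20)] for _ in range(30)]
score, w, proj = n_tarp_projection(X)
```

A hierarchical or feature-space-extended variant, as in the dissertation, would reuse this unit recursively on the subsets (or on monomial expansions of the features) that it produces.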
APA, Harvard, Vancouver, ISO, and other styles
45

"Survey error modelling and benchmarking with monthly-quarterly data". 2004. http://library.cuhk.edu.hk/record=b5892185.

Full text
Abstract (summary):
Shea Hon-Wai.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2004.
Includes bibliographical references (leaves 51-52).
Abstracts in English and Chinese.
Chapter 1 --- Introduction --- p.1
Chapter 2 --- Review of benchmarking methods --- p.6
Chapter 2.1 --- Denton method --- p.7
Chapter 2.2 --- Regression method --- p.9
Chapter 2.3 --- Signal extraction method --- p.11
Chapter 3 --- Survey error modelling by using benchmarks --- p.14
Chapter 4 --- A simulation study on benchmarking methods --- p.25
Chapter 4.1 --- Model assumptions --- p.25
Chapter 4.2 --- Simulation procedures --- p.27
Chapter 4.3 --- Simulation results --- p.29
Chapter 5 --- A simulation study on signal extraction with a nonparametric approach --- p.35
Chapter 5.1 --- Introduction of the nonparametric method --- p.35
Chapter 5.2 --- Simulation results --- p.38
Chapter 6 --- Example: An application to the Danish unemployment series --- p.42
Chapter 7 --- Conclusion --- p.49
Reference --- p.51
APA, Harvard, Vancouver, ISO, and other styles
46

Zhang, Jianzhou. "Analyzing Hierarchical Data with the DINA-HC Approach". Thesis, 2015. https://doi.org/10.7916/D8PG1R67.

Full text
Abstract (summary):
Cognitive Diagnostic Models (CDMs) are a class of models developed in order to diagnose the cognitive attributes of examinees. They have received increasing attention in recent years because of the need for more specific attribute- and item-related information. A particular cognitive diagnostic model, namely, the hierarchical deterministic, input, noisy ‘and’ gate model with convergent attribute hierarchy (DINA-HC), is proposed to handle situations when the attributes have a convergent hierarchy. Su (2013) first introduced the model as the deterministic, input, noisy ‘and’ gate with hierarchy (DINA-H) and retrofitted the Trends in International Mathematics and Science Study (TIMSS) data utilizing this model with linear and unstructured hierarchies. Leighton, Gierl, and Hunka (1999) and Kuhn (2001) introduced four forms of hierarchical structures (linear, convergent, divergent, and unstructured) by assuming interrelated competencies of the cognitive skills. Specifically, the convergent hierarchy is one of the four hierarchies (Leighton, Gierl & Hunka, 2004), and it is used to describe attributes that have a convergent structure. One of the features of this model is that it can incorporate the hierarchical structures of the cognitive skills in the model estimation process (Su, 2013). The advantage of the DINA-HC over the deterministic, input, noisy ‘and’ gate (DINA) model (Junker & Sijtsma, 2001) is that it reduces the number of parameters as well as the latent classes by imposing the particular attribute hierarchy. This model follows the specification of the DINA except that it pre-specifies the attribute profiles by utilizing the convergent attribute hierarchies. Only certain attribute patterns are allowed, depending on the particular convergent hierarchy. Properties of the DINA-HC and DINA are examined and compared through a simulation study and an empirical study.
Specifically, the attribute profile pattern classification accuracy, the model and item fit are compared between the DINA-HC and DINA under different conditions when the attributes have convergent hierarchies. This study indicates that the DINA-HC provides better model fit, less biased parameter estimates and higher attribute profile classification accuracy than the DINA when the attributes have a convergent hierarchy. The sample size, the number of attributes, and the test length have been shown to have an effect on the parameter estimates. The DINA model has better model fit than the DINA-HC when the attributes are not dependent on each other.
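To make concrete how a convergent attribute hierarchy shrinks the latent space relative to the plain DINA model, the sketch below enumerates the admissible attribute profiles for a toy convergent hierarchy and evaluates the standard DINA response probability. The hierarchy, Q-matrix row, and slip/guess values are invented for illustration and are not taken from the thesis.

```python
from itertools import product

def profiles_with_hierarchy(n_attr, prereqs):
    """All attribute profiles consistent with a hierarchy: attribute a
    may be mastered only if every prerequisite in prereqs[a] is too."""
    keep = []
    for alpha in product((0, 1), repeat=n_attr):
        ok = all(not alpha[a] or all(alpha[p] for p in prereqs.get(a, ()))
                 for a in range(n_attr))
        if ok:
            keep.append(alpha)
    return keep

def dina_prob(alpha, q_row, slip, guess):
    """DINA response probability: eta = 1 iff the examinee masters
    every attribute the item requires (per its Q-matrix row)."""
    eta = all(a >= q for a, q in zip(alpha, q_row))
    return (1.0 - slip) if eta else guess

# Toy convergent hierarchy on four attributes:
# A0 -> A1, A0 -> A2, and both A1 and A2 -> A3.
prereqs = {1: (0,), 2: (0,), 3: (1, 2)}
profiles = profiles_with_hierarchy(4, prereqs)
# 6 admissible profiles instead of the unrestricted 2**4 = 16.
```

The reduction from 16 latent classes to 6 is exactly the kind of parameter saving the abstract attributes to imposing the hierarchy.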
APA, Harvard, Vancouver, ISO, and other styles
47

Fan, Huiyuan. "Sequential frameworks for statistics-based value function representation in approximate dynamic programming". 2008. http://hdl.handle.net/10106/1099.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Meng, Lu. "Spectral Filtering for Spatio-temporal Dynamics and Multivariate Forecasts". Thesis, 2016. https://doi.org/10.7916/D80Z7385.

Full text
Abstract (summary):
Due to the increasing availability of massive spatio-temporal data sets, modeling high-dimensional data becomes quite challenging. A large number of research questions are rooted in identifying the underlying dynamics in such spatio-temporal data. For many applications, the science suggests that the intrinsic dynamics are smooth and of low dimension. To reduce the variance of estimates and increase computational tractability, dimension reduction is also quite necessary in the modeling procedure. In this dissertation, we propose a spectral filtering approach for dimension reduction and forecast amelioration, and apply it to multiple applications. We show the effectiveness of dimension reduction via our method and also illustrate its power for prediction in both simulation and real data examples. The resultant lower-dimensional principal component series has a diagonal spectral density at each frequency whose diagonal elements are in descending order, which is not well motivated and can be hard to interpret. Therefore we propose a phase-based filtering method to create principal component series with interpretable dynamics in the time domain. Our method is based on an approach of structural decomposition and phase-aligned construction in the frequency domain, identifying lower-rank dynamics and its components embedded in a high-dimensional spatio-temporal system. In both our simulated examples and real data applications, we illustrate that the proposed method is able to separate and identify meaningful lower-rank movements. Benefiting from the zero-coherence property of the principal component series, we subsequently develop a predictive model for high-dimensional forecasting via lower-rank dynamics. Our modeling approach reduces the multivariate modeling task to multiple univariate modeling tasks and is flexible in combining with regularization techniques to obtain more stable estimates and improve interpretability.
The simulation results and real data analysis show that our model achieves superior forecast performance compared to the class of autoregressive models.
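The frequency-domain building block underlying this kind of spectral filtering (eigendecomposing a smoothed cross-spectral matrix, frequency by frequency) can be sketched for the bivariate case, where a 2x2 Hermitian matrix has closed-form eigenvalues. This is a generic illustration of frequency-domain principal components, not the dissertation's estimator; the smoothing span and the test signal are arbitrary choices made for the example.

```python
import cmath
import math
import random

def dft_coef(x, k):
    """k-th DFT coefficient of a real-valued series x."""
    n = len(x)
    return sum(v * cmath.exp(-2j * math.pi * k * t / n)
               for t, v in enumerate(x))

def smoothed_cross_spectrum(x, y, k, half_width=2):
    """Cross-spectral estimate at frequency index k: the cross-periodogram
    averaged over 2*half_width + 1 neighbouring Fourier frequencies.
    (Without smoothing, the single-taper estimate is rank one.)"""
    n = len(x)
    total = 0.0 + 0.0j
    for j in range(k - half_width, k + half_width + 1):
        total += dft_coef(x, j) * dft_coef(y, j).conjugate() / n
    return total / (2 * half_width + 1)

def leading_spectral_share(x, y, k):
    """Fraction of power at frequency k captured by the first principal
    component of the 2x2 smoothed cross-spectral matrix, using the
    closed-form eigenvalues of a 2x2 Hermitian matrix."""
    a = smoothed_cross_spectrum(x, x, k).real
    b = smoothed_cross_spectrum(y, y, k).real
    c = smoothed_cross_spectrum(x, y, k)
    half = math.sqrt(((a - b) / 2.0) ** 2 + abs(c) ** 2)
    lam1 = (a + b) / 2.0 + half
    return lam1 / (a + b)

# Two noisy observations of one latent sinusoid: a rank-one dynamic is
# shared across the pair, so the first component should dominate.
n = 64
rng = random.Random(0)
s = [math.sin(2.0 * math.pi * 4.0 * t / n) for t in range(n)]
x = [v + 0.1 * rng.gauss(0.0, 1.0) for v in s]
y = [v + 0.1 * rng.gauss(0.0, 1.0) for v in s]
share = leading_spectral_share(x, y, 4)
```

A full implementation would sweep all frequencies and all series, which is where the descending-eigenvalue ordering criticized above arises.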
APA, Harvard, Vancouver, ISO, and other styles
49

"Power computation for multiple comparisons with a control in directional-mixed families". 2010. http://library.cuhk.edu.hk/record=b5894313.

Full text
Abstract (summary):
Lau, Sin Yi.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2010.
Includes bibliographical references (leaves 64-66).
Abstracts in English and Chinese.
Chapter 1 --- Introduction --- p.1
Chapter 1.1 --- Multiple Comparison Procedures --- p.1
Chapter 1.2 --- Multiple Comparisons with a control --- p.2
Chapter 1.3 --- Multiple Comparisons with a control in directional- mixed families --- p.5
Chapter 1.4 --- Examples --- p.8
Chapter 1.5 --- Thesis Objectives --- p.10
Chapter 2 --- Evaluation of Power --- p.12
Chapter 2.1 --- Definition and the Use of Power --- p.12
Chapter 2.2 --- Computational Details --- p.13
Chapter 2.3 --- All-pairs Power --- p.13
Chapter 2.4 --- Any-pair Power --- p.15
Chapter 2.5 --- Average Power --- p.16
Chapter 2.6 --- Algorithm --- p.16
Chapter 2.7 --- Results --- p.19
Chapter 2.7.1 --- All-pairs Power --- p.20
Chapter 2.7.2 --- Any-pair Power --- p.23
Chapter 2.7.3 --- Average Power --- p.26
Chapter 3 --- Sample Size Determination --- p.29
Chapter 3.1 --- The required sample size for a pre-assigned all-pairs power --- p.31
Chapter 3.2 --- The required sample size for a pre-assigned any-pair power --- p.41
Chapter 3.3 --- The required sample size for a pre-assigned average power --- p.51
Chapter 4 --- An Illustrative Example --- p.61
Chapter 5 --- Conclusions --- p.63
References --- p.64
APA, Harvard, Vancouver, ISO, and other styles
50

"Power computation for multiple comparisons with a control procedures in two-way designs". 2005. http://library.cuhk.edu.hk/record=b5892698.

Full text
Abstract (summary):
Cheung Ching Man.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2005.
Includes bibliographical references (leaves 64-65).
Abstracts in English and Chinese.
Acknowledgement --- p.i
Abstract --- p.ii
Chapter 1 --- Introduction --- p.1
Chapter 1.1 --- Multiple Comparison Procedures --- p.1
Chapter 1.2 --- Multiple Comparisons with a control --- p.2
Chapter 1.3 --- Multiple Comparisons with a control in two-way designs --- p.5
Chapter 1.4 --- Example --- p.12
Chapter 1.5 --- Thesis Objectives --- p.13
Chapter 2 --- Evaluation of Power (Homogeneous Variance) --- p.14
Chapter 2.1 --- Definition and the use of power --- p.14
Chapter 2.2 --- Setup and Notations --- p.15
Chapter 2.3 --- Evaluation of power --- p.16
Chapter 2.4 --- Computational Details --- p.19
Chapter 2.4.1 --- Algorithm --- p.19
Chapter 2.4.2 --- Results --- p.20
Chapter 2.5 --- Numerical Example --- p.39
Chapter 3 --- Evaluation of Power (Heterogeneous Variances) --- p.42
Chapter 3.1 --- Setup and Notations --- p.42
Chapter 3.2 --- Evaluation of power --- p.43
Chapter 3.3 --- Results --- p.45
Chapter 3.3.1 --- All-pairs Power --- p.46
Chapter 3.3.2 --- Any-pair Power --- p.53
Chapter 3.4 --- Numerical Example --- p.60
Chapter 4 --- Conclusions --- p.63
References --- p.64
APA, Harvard, Vancouver, ISO, and other styles