Dissertations / Theses on the topic 'Information response theory'

To see the other types of publications on this topic, follow the link: Information response theory.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Information response theory.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

McEwen, Peter A. "Trellis coding for partial response channels /." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 1999. http://wwwlib.umi.com/cr/ucsd/fullcit?p9968170.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Li, Caihong Rosina. "ASSESSING THE MODEL FIT OF MULTIDIMENSIONAL ITEM RESPONSE THEORY MODELS WITH POLYTOMOUS RESPONSES USING LIMITED-INFORMATION STATISTICS." UKnowledge, 2019. https://uknowledge.uky.edu/edsc_etds/45.

Full text
Abstract:
Under item response theory, three types of limited-information goodness-of-fit (GOF) test statistics – M2, Mord, and C2 – have been proposed to assess model-data fit when data are sparse. However, the evaluation of the performance of these GOF statistics under multidimensional item response theory (MIRT) models with polytomous data is limited. The current study showed that M2 and C2 were well calibrated under true model conditions and were powerful under misspecified model conditions. Mord was not well calibrated when the number of response categories was more than three. RMSEA2 and RMSEAC2 are good tools for evaluating approximate fit. The second study aimed to evaluate the psychometric properties of the Religious Commitment Inventory-10 (RCI-10; Worthington et al., 2003) within the IRT framework and to estimate C2 and its RMSEA to assess global model fit. Results showed that the RCI-10 was best represented by a bifactor model. The RCI-10 could be scored as unidimensional notwithstanding the presence of multidimensionality, whereas a two-factor correlational solution should not be used. Study two also showed that religious commitment was a risk factor for intimate partner violence, whereas spirituality was a protective factor against it. More alcohol use was related to more abusive behaviors. Implications of the two studies are discussed.
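
As an illustration of how an RMSEA value can be derived from a limited-information statistic such as M2 or C2, here is a minimal Python sketch. It assumes the statistic is asymptotically chi-square distributed with known degrees of freedom; the statistic value, degrees of freedom, and sample size below are hypothetical, not taken from the dissertation.

```python
import math

def rmsea_from_limited_info_stat(stat: float, df: int, n: int) -> float:
    """Approximate RMSEA from a limited-information fit statistic (e.g. M2 or C2)
    that is asymptotically chi-square distributed with `df` degrees of freedom,
    computed on a sample of size `n`."""
    return math.sqrt(max(0.0, (stat - df) / (df * (n - 1))))

# Hypothetical values: an M2-type statistic of 182.4 on 120 degrees of freedom
# estimated from 1,000 respondents.
print(round(rmsea_from_limited_info_stat(182.4, 120, 1000), 3))
```
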
APA, Harvard, Vancouver, ISO, and other styles
3

Jonas, Katherine Grace. "Potential test information for multidimensional tests." Diss., University of Iowa, 2017. https://ir.uiowa.edu/etd/5787.

Full text
Abstract:
Test selection in psychological assessment is guided, both explicitly and implicitly, by how informative tests are with regard to a trait of interest. Most existing formulations of test information are sensitive to subpopulation variation, with the result that test information will vary from sample to sample. Recently, measures of test information have been developed that quantify the potential informativeness of a test. These indices are defined by the properties of the test, as distinct from the properties of the sample or examinee. As yet, however, measures of potential information have been developed only for unidimensional tests. In practice, psychological tests are often multidimensional. Furthermore, multidimensional tests are often used to estimate one specific trait among many. This study develops measures of potential test information for multidimensional tests, as well as measures of marginal potential test information (test information with regard to one trait within a multidimensional test). In Study 1, the performance of the metrics was tested in data simulated from unidimensional, first-order multidimensional, second-order, and bifactor models. In Study 2, measures of marginal and multidimensional potential test information were applied to a set of neuropsychological data collected as part of Rush University's Memory and Aging Project. In simulated data, marginal and multidimensional potential test information were sensitive to the changing dimensionality of the test. In the observed neuropsychological data, five traits were identified. Verbal abilities were most closely correlated with probable dementia. Both indices of marginal potential test information identified the Mini Mental Status Exam as the best measure of that trait. More broadly, greater marginal potential test information calculated with regard to verbal abilities was associated with greater criterion validity. These measures allow for the direct comparison of two multidimensional tests that assess the same trait, facilitating test selection and improving the precision and validity of psychological assessment.
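
For readers unfamiliar with test information, the following minimal Python sketch computes standard Fisher item and test information under a unidimensional two-parameter logistic (2PL) model; it does not reproduce the potential-information indices developed in the dissertation, and the item parameters are hypothetical.

```python
import numpy as np

def p_2pl(theta, a, b):
    """Probability of a correct response under the two-parameter logistic model."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a single 2PL item at ability theta: a^2 * P * (1 - P)."""
    p = p_2pl(theta, a, b)
    return a**2 * p * (1.0 - p)

def test_information(theta, a_params, b_params):
    """Test information is the sum of item information over all items."""
    return sum(item_information(theta, a, b) for a, b in zip(a_params, b_params))

# Hypothetical 3-item test evaluated over a grid of ability values.
thetas = np.linspace(-3, 3, 7)
a_params, b_params = [1.2, 0.8, 1.5], [-1.0, 0.0, 1.0]
print([round(test_information(t, a_params, b_params), 2) for t in thetas])
```
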
APA, Harvard, Vancouver, ISO, and other styles
4

Swain, P. "A computer based method for analysing the phase response of a digital magnetic recording system." Thesis, Cranfield University, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.234497.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Rusch, Thomas, Paul Benjamin Lowry, Patrick Mair, and Horst Treiblmaier. "Breaking Free from the Limitations of Classical Test Theory: Developing and Measuring Information Systems Scales Using Item Response Theory." Elsevier, 2017. http://dx.doi.org/10.1016/j.im.2016.06.005.

Full text
Abstract:
Information systems (IS) research frequently uses survey data to measure the interplay between technological systems and human beings. Researchers have developed sophisticated procedures to build and validate multi-item scales that measure latent constructs. The vast majority of IS studies use classical test theory (CTT), but this approach suffers from three major theoretical shortcomings: (1) it assumes a linear relationship between the latent variable and observed scores, which rarely represents the empirical reality of behavioral constructs; (2) the true score either cannot be estimated directly or can be estimated only under assumptions that are difficult to meet; and (3) parameters such as reliability, discrimination, location, or factor loadings depend on the sample being used. To address these issues, we present item response theory (IRT) as a collection of viable alternatives for measuring continuous latent variables by means of categorical indicators (i.e., measurement variables). IRT offers several advantages: (1) it assumes nonlinear relationships; (2) it allows more appropriate estimation of the true score; (3) it can estimate item parameters independently of the sample being used; (4) it allows the researcher to select items that are in accordance with a desired model; and (5) it applies and generalizes concepts such as reliability and internal consistency, and thus allows researchers to derive more information about the measurement process. We use a CTT approach as well as Rasch models (a special class of IRT models) to demonstrate how a scale for measuring hedonic aspects of websites is developed under both approaches. The results illustrate how IRT can be successfully applied in IS research and provide better scale results than CTT. We conclude by explaining the most appropriate circumstances for applying IRT, as well as the limitations of IRT.
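
A minimal sketch of the nonlinearity argument: under the Rasch model the endorsement probability is a logistic function of the latent trait, so the expected raw score is a nonlinear (ogive-shaped) function of the trait rather than the linear relationship assumed by CTT. The item locations below are hypothetical and unrelated to the hedonic-website scale in the paper.

```python
import numpy as np

def rasch_prob(theta, b):
    """Rasch model: probability of endorsing an item with location b at trait level theta."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

# Hypothetical item locations for a five-item scale.
item_locations = np.array([-1.5, -0.5, 0.0, 0.5, 1.5])

for theta in (-2.0, 0.0, 2.0):
    probs = rasch_prob(theta, item_locations)
    expected_sum_score = probs.sum()   # CTT-style expected raw score
    print(theta, np.round(probs, 2), round(expected_sum_score, 2))
```
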
APA, Harvard, Vancouver, ISO, and other styles
6

De, Aguinaga José Guillermo. "Uncertainty Assessment of Hydrogeological Models Based on Information Theory." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2011. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-71814.

Full text
Abstract:
There is a great deal of uncertainty in hydrogeological modeling. Overparameterized models increase uncertainty since the information from the observations is distributed across all of the parameters. The present study proposes a new option to reduce this uncertainty. A way to achieve this goal is to select a model which provides good performance with as few calibrated parameters as possible (a parsimonious model) and to calibrate it using many sources of information. Akaike's Information Criterion (AIC), proposed by Hirotugu Akaike in 1973, is a statistical criterion based on information theory which allows us to select a parsimonious model. AIC formulates the problem of parsimonious model selection as an optimization problem across a set of proposed conceptual models. The AIC assessment is relatively new in groundwater modeling, and applying it with different sources of observations presents a challenge. In this dissertation, important findings in the application of AIC in hydrogeological modeling using different sources of observations are discussed. AIC is tested on groundwater models using three sets of synthetic data: hydraulic pressure, horizontal hydraulic conductivity, and tracer concentration. In the present study, the impact of the following factors is analyzed: number of observations, types of observations, and order of calibrated parameters. These analyses reveal not only that the number of observations determines how complex a model can be, but also that their diversity allows for further complexity in the parsimonious model. However, a truly parsimonious model was only achieved when the order of calibrated parameters was properly considered. This means that parameters which provide larger improvements in model fit should be considered first. The approach to obtain a parsimonious model applying AIC with different types of information was successfully applied to an unbiased lysimeter model using two different types of real data: evapotranspiration and seepage water. With this additional independent model assessment it was possible to underpin the general validity of this AIC approach.
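
A minimal sketch of AIC-based model selection, assuming a least-squares fit with Gaussian errors so that AIC reduces to n·ln(RSS/n) + 2k up to a constant; the candidate model names, residual sums of squares, and parameter counts are hypothetical, not results from the dissertation.

```python
import numpy as np

def aic_least_squares(rss: float, n_obs: int, k_params: int) -> float:
    """AIC for a least-squares fit with Gaussian errors:
    AIC = n * ln(RSS / n) + 2k (additive constants dropped)."""
    return n_obs * np.log(rss / n_obs) + 2 * k_params

# Hypothetical candidate groundwater models:
# (name, residual sum of squares, number of calibrated parameters)
candidates = [("homogeneous", 42.0, 2), ("two-zone", 30.5, 5), ("fully distributed", 28.9, 20)]
n_obs = 60

scores = {name: aic_least_squares(rss, n_obs, k) for name, rss, k in candidates}
best = min(scores, key=scores.get)   # lowest AIC = most parsimonious adequate model
print(scores, "->", best)
```
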
Hydrogeologische Modellierung ist von erheblicher Unsicherheit geprägt. Überparametrisierte Modelle erhöhen die Unsicherheit, da gemessene Informationen auf alle Parameter verteilt sind. Die vorliegende Arbeit schlägt einen neuen Ansatz vor, um diese Unsicherheit zu reduzieren. Eine Möglichkeit, um dieses Ziel zu erreichen, besteht darin, ein Modell auszuwählen, das ein gutes Ergebnis mit möglichst wenigen Parametern liefert („parsimonious model“), und es zu kalibrieren, indem viele Informationsquellen genutzt werden. Das 1973 von Hirotugu Akaike vorgeschlagene Informationskriterium, bekannt als Akaike-Informationskriterium (engl. Akaike’s Information Criterion; AIC), ist ein statistisches Wahrscheinlichkeitskriterium basierend auf der Informationstheorie, welches die Auswahl eines Modells mit möglichst wenigen Parametern erlaubt. AIC formuliert das Problem der Entscheidung für ein gering parametrisiertes Modell als ein modellübergreifendes Optimierungsproblem. Die Anwendung von AIC in der Grundwassermodellierung ist relativ neu und stellt eine Herausforderung in der Anwendung verschiedener Messquellen dar. In der vorliegenden Dissertation werden maßgebliche Forschungsergebnisse in der Anwendung des AIC in hydrogeologischer Modellierung unter Anwendung unterschiedlicher Messquellen diskutiert. AIC wird an Grundwassermodellen getestet, bei denen drei synthetische Datensätze angewendet werden: Wasserstand, horizontale hydraulische Leitfähigkeit und Tracer-Konzentration. Die vorliegende Arbeit analysiert den Einfluss folgender Faktoren: Anzahl der Messungen, Arten der Messungen und Reihenfolge der kalibrierten Parameter. Diese Analysen machen nicht nur deutlich, dass die Anzahl der gemessenen Parameter die Komplexität eines Modells bestimmt, sondern auch, dass seine Diversität weitere Komplexität für gering parametrisierte Modelle erlaubt. Allerdings konnte ein solches Modell nur erreicht werden, wenn eine bestimmte Reihenfolge der kalibrierten Parameter berücksichtigt wurde. Folglich sollten zuerst jene Parameter in Betracht gezogen werden, die deutliche Verbesserungen in der Modellanpassung liefern. Der Ansatz, ein gering parametrisiertes Modell durch die Anwendung des AIC mit unterschiedlichen Informationsarten zu erhalten, wurde erfolgreich auf einen Lysimeterstandort übertragen. Dabei wurden zwei unterschiedliche reale Messwertarten genutzt: Evapotranspiration und Sickerwasser. Mit Hilfe dieser weiteren, unabhängigen Modellbewertung konnte die Gültigkeit dieses AIC-Ansatzes gezeigt werden
APA, Harvard, Vancouver, ISO, and other styles
7

Mair, Patrick, and Horst Treiblmaier. "Partial Credit Models for Scale Construction in Hedonic Information Systems." Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 2008. http://epub.wu.ac.at/1614/1/document.pdf.

Full text
Abstract:
Information Systems (IS) research frequently uses survey data to measure the interplay between technological systems and human beings. Researchers have developed sophisticated procedures to build and validate multi-item scales that measure real-world phenomena (latent constructs). Most studies use so-called classical test theory (CTT), which suffers from several shortcomings. We first compare CTT to Item Response Theory (IRT) and subsequently apply a Rasch model approach to measure hedonic aspects of websites. The results not only show which attributes are best suited for scaling hedonic information systems, but also introduce IRT as a viable substitute that overcomes several shortcomings of CTT. (author's abstract)
Series: Research Report Series / Department of Statistics and Mathematics
APA, Harvard, Vancouver, ISO, and other styles
8

Morrison, Jeffrey Glenn. "The effects of display and response codes on information processing in an identification task." Diss., Georgia Institute of Technology, 1992. http://hdl.handle.net/1853/30531.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Parker, Heidi M. "The effect of negative sponsor information and team response on identification levels and consumer attitudes." Columbus, Ohio : Ohio State University, 2007. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1180025349.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Wang, Wenjia. "Item Response Theory in the Neurodegenerative Disease Data Analysis." Thesis, Bordeaux, 2017. http://www.theses.fr/2017BORD0624/document.

Full text
Abstract:
Les maladies neurodégénératives, telles que la maladie d'Alzheimer (AD) et Charcot Marie Tooth (CMT), sont des maladies complexes. Leurs mécanismes pathologiques ne sont toujours pas bien compris et les progrès dans la recherche et le développement de nouvelles thérapies potentielles modifiant la maladie sont lents. Les données catégorielles, comme les échelles de notation et les données sur les études d'association génomique (GWAS), sont largement utilisées dans les maladies neurodégénératives dans le diagnostic, la prédiction et le suivi de la progression. Il est important de comprendre et d'interpréter ces données correctement si nous voulons améliorer la recherche sur les maladies neurodégénératives. Le but de cette thèse est d'utiliser la théorie psychométrique moderne: théorie de la réponse d’item pour analyser ces données catégoriques afin de mieux comprendre les maladies neurodégénératives et de faciliter la recherche de médicaments correspondante. Tout d'abord, nous avons appliqué l'analyse de Rasch afin d'évaluer la validité du score de neuropathie Charcot-Marie-Tooth (CMTNS), un critère important d'évaluation principal pour les essais cliniques de la maladie de CMT. Nous avons ensuite adapté le modèle Rasch à l'analyse des associations génétiques pour identifier les gènes associés à la maladie d'Alzheimer. Cette méthode résume les génotypes catégoriques de plusieurs marqueurs génétiques tels que les polymorphisme nucléotidique (SNPs) en un seul score génétique. Enfin, nous avons calculé l'information mutuelle basée sur la théorie de réponse d’item pour sélectionner les items sensibles dans ADAS-cog, une mesure de fonctionnement cognitif la plus utilisées dans les études de la maladie d'Alzheimer, afin de mieux évaluer le progrès de la maladie
Neurodegenerative diseases, such as Alzheimer's disease (AD) and Charcot-Marie-Tooth (CMT), are complex diseases. Their pathological mechanisms are still not well understood, and progress in the research and development of new potential disease-modifying therapies is slow. Categorical data such as rating scales and Genome-Wide Association Study (GWAS) data are widely used in neurodegenerative disease diagnosis, prediction, and progression monitoring. It is important to understand and interpret these data correctly if we want to improve research on these diseases. The purpose of this thesis is to use modern psychometric Item Response Theory to analyze these categorical data in order to better understand neurodegenerative diseases and facilitate the corresponding drug research. First, we applied Rasch analysis in order to assess the validity of the Charcot-Marie-Tooth Neuropathy Score (CMTNS), a main endpoint for CMT disease clinical trials. We then adapted the Rasch model to the analysis of genetic associations and used it to identify genes associated with Alzheimer's disease by summarizing the categorical genotypes of several genetic markers, such as Single Nucleotide Polymorphisms (SNPs), into one genetic score. Finally, to select sensitive items in one of the most widely used psychometric tests for Alzheimer's disease, we calculated the mutual information based on the item response model to evaluate the sensitivity of each item of the ADAS-cog scale.
APA, Harvard, Vancouver, ISO, and other styles
11

Mansour, Tony, and Majdi Murtaja. "Applied estimation theory on power cable as transmission line." Thesis, Linnéuniversitetet, Institutionen för fysik och elektroteknik (IFE), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-46583.

Full text
Abstract:
This thesis presents how to estimate the length of a power cable using the Maximum Likelihood Estimation (MLE) technique in Matlab. The model of the power cable is evaluated in the time domain with additive white Gaussian noise. Statistics are used to evaluate the performance of the estimator by repeating the experiment for a large number of samples, where the random additive noise is generated for each sample. The estimated sample variance is compared to the theoretical Cramér-Rao Lower Bound (CRLB) for unbiased estimators. At the end of the thesis, numerical results are presented that show when the resulting sample variance is close to the CRLB, and hence when the performance of the estimator is more accurate.
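
The thesis estimates cable length from a time-domain signal model; the sketch below illustrates the same MLE-versus-CRLB workflow on a deliberately simpler problem (estimating a constant level in additive white Gaussian noise, where the MLE is the sample mean and the CRLB is σ²/N). All parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
A_true, sigma, N, trials = 2.0, 0.5, 100, 10_000

estimates = np.empty(trials)
for t in range(trials):
    x = A_true + sigma * rng.standard_normal(N)   # signal in additive white Gaussian noise
    estimates[t] = x.mean()                       # MLE of a constant level is the sample mean

sample_var = estimates.var(ddof=1)
crlb = sigma**2 / N                               # Cramér-Rao lower bound for unbiased estimators
print(f"sample variance = {sample_var:.5f}, CRLB = {crlb:.5f}")
```
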
APA, Harvard, Vancouver, ISO, and other styles
12

Mulongo, Benedith. "Analyzing music genre classification using item response theory : A case study of the GTZAN data." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-282828.

Full text
Abstract:
Machine learning models are usually evaluated with metrics such as accuracy, the confusion matrix, and recall. However, these metrics give no information regarding the ability of the models or the difficulty of the test data used to evaluate them. Without a reliable measure of the difficulty of test data, it is harder to obtain an accurate evaluation of the performance of the models. Some of these limitations of the widely used metrics in current machine learning evaluation methods may be overcome by using item response theory. Item response theory is a psychometric framework used to simultaneously estimate the ability of the examinee and the difficulty of the test. This framework enables the comparison of test items by means of their difficulty and the comparison of examinees by means of their respective ability. This thesis focuses on applications of item response theory in machine learning, and specifically on applications of item response theory for evaluating music genre classification. In particular, a case study analysis of the GTZAN music data is made. The GTZAN dataset is composed of 1000 audio tracks with a duration of 30 seconds each. The GTZAN dataset has been documented to possess many flaws, such as the fact that many artists appear frequently within the same genre category. Therefore, a control study of the GTZAN data is made, where one group of classifiers is trained with randomly stratified data and a second group is trained with data where the training and the test data do not share songs from a common artist. The response patterns from those two groups of classifiers are analyzed using item response theory.
Maskininlärningsmodeller utvärderas vanligtvis med mätvärden så som noggrannhet, förväxlingsmatris, återkallelse. Dessa mätvärden ger varken information om kvaliteten på modellerna eller svårighetsgraden för de testdata som används för att utvärdera modellerna. Utan denna information kan ingen tillförlitlig jämförelse göras mellan olika test data och/eller olika modeller. Vissa av de tidigare nämnda begränsningarna avseende de aktuella utvärderingsmetoderna i maskininlärning kan övervinnas genom att applicera Item respons teori. Item respons teori är ett psykometriskt ramverk som används för att samtidigt uppskatta förmågan av en student och svårighetsgraden för en test. Detta ramverk möjliggör jämförelse av test data med hjälp av dess svårighetsgrad och jämförelse av studenterna med hjälp av deras respektive förmåga. Denna uppsats fokuserar på möjliga tillämpningar av item response teori i maskininlärning och specifikt dess tillämpningar för klassificering av musikgenren. Närmare bestämt görs en fallstudieanalys av GTZAN-musikdata. GTZAN har dokumenterats innehålla många brister, till exempel det faktum att många artister ofta förekommer i samma genre kategori. Därför görs en kontrollstudie av GTZAN-data, där en grupp klassificerare tränas med slumpmässigt stratifierade data och den andra gruppen tränas med data där träningsdata och testdata inte har låtar från en gemensam artist. Svarmönstren från dessa två grupper av klassificerare analyseras med hjälp av item response teori.
APA, Harvard, Vancouver, ISO, and other styles
13

Yoon, Young-Beol. "A Comparative Analysis of Two Forms of Gyeonggi English Communicative Ability Test Based on Classical Test Theory and Item Response Theory." BYU ScholarsArchive, 2012. https://scholarsarchive.byu.edu/etd/3153.

Full text
Abstract:
This study is an empirical analysis of the 2009 and 2010 forms of the Gyeonggi English Communicative Ability Test (GECAT) based on the responses of 2,307 students to the 2009 GECAT and 2,907 students to the 2010 GECAT. The GECAT is an English proficiency examination sponsored by the Gyeonggi Provincial Office of Education (GOE) in South Korea. This multiple-choice test has been administered annually at the end of each school year to high school students since 2004 as a measure of the students' ability to communicate in English. From 2004 until 2009, the test included 80 multiple-choice items, but in 2010 the length of the test was decreased to only 50 items. The purpose of this study was to compare the psychometric properties of the 80-item 2009 form of the test with those of the shorter 50-item form, using both Classical Test Theory item analysis statistics and parameter estimates obtained from 3-PL Item Response Theory. Cronbach's alpha coefficient for both forms was estimated to be .92, indicating that the overall reliability of the scores obtained from the two different test forms was essentially equivalent. For most of the six linguistic subdomains, the average classical item difficulty indexes were very similar across the two forms. The average classical item discrimination indexes were also quite similar for the 80-item 2009 test and the 50-item 2010 test. However, 13 of the 2009 items and 3 of the 2010 items had point-biserial correlations with either negative or lower than acceptable positive values. A distracter analysis was conducted for each of these items with less than acceptable discriminating power as a basis for revising them. Total information functions of the six subdomain tests (speaking, listening, reading, writing, vocabulary, and grammar) showed that most of the test information functions of the 2009 GECAT peaked at ability levels of around 0.9 < θ < 1.5, while those of the 2010 GECAT peaked at ability levels of around 0.0 < θ < 0.6. Recommendations for improving the GECAT and conducting future research are included.
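
For reference, the classical item statistics mentioned in the abstract (item difficulty as proportion correct, corrected point-biserial discrimination, and Cronbach's alpha) can be computed as in the following minimal sketch; the 6×4 response matrix is hypothetical.

```python
import numpy as np

def item_difficulty(responses):
    """Proportion of examinees answering each item correctly (classical p-value)."""
    return responses.mean(axis=0)

def point_biserial(responses):
    """Correlation of each item with the total score on the remaining items."""
    total = responses.sum(axis=1)
    return np.array([np.corrcoef(responses[:, j], total - responses[:, j])[0, 1]
                     for j in range(responses.shape[1])])

def cronbach_alpha(responses):
    """Cronbach's alpha: (k/(k-1)) * (1 - sum of item variances / variance of total score)."""
    k = responses.shape[1]
    item_vars = responses.var(axis=0, ddof=1)
    total_var = responses.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical scored responses: 6 examinees x 4 items (1 = correct, 0 = incorrect).
X = np.array([[1, 1, 0, 1], [1, 0, 0, 0], [1, 1, 1, 1],
              [0, 0, 0, 0], [1, 1, 1, 0], [1, 1, 0, 1]])
print(item_difficulty(X), np.round(point_biserial(X), 2), round(cronbach_alpha(X), 2))
```
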
APA, Harvard, Vancouver, ISO, and other styles
14

ZHUANG, Mengzhou. "Buyer beware : consumer response to manipulations of online product reviews." Digital Commons @ Lingnan University, 2014. https://commons.ln.edu.hk/cds_etd/9.

Full text
Abstract:
Online product reviews have become an important and influential source of information for consumers. Firms often manipulate online product reviews to influence consumer perceptions about the product, making this a research topic in urgent need of theory development and empirical investigation. In this thesis, we examine how consumers perceive and respond to three commonly used manipulation tactics. First, an exploratory pre-study via in-depth interviews with online shoppers indicates that consumers are commonly aware of online review manipulations as well as of ways to detect them. In the first study, a survey was used to investigate the three popular manipulation tactics in terms of ethicality and deceptiveness. Respondents rated hiding/deleting unfavorable messages as the most deceptive and unethical, followed by anonymously adding positive messages, and then offering incentives for posting favorable messages. In Study 2, a simulated field experiment, we introduce persuasion knowledge to further examine the negative influence of review manipulations on consumers' attitudes. The results suggest that review manipulation increases suspicion of manipulation but hardly reduces purchase intention for focal products. We also find that consumers' persuasion knowledge enhances suspicion of manipulation, but lessens the negative impact of suspicion on purchase intention. The third study uses secondary data from a branded e-retailer and its third-party website to cross-validate the effect of manipulations on product sales. The results confirm our hypotheses that review manipulations are effective in promoting sales; however, this influence decreases over time. This research contributes to the online marketing literature by augmenting Information Manipulation Theory and the Persuasion Knowledge Model to examine deceptive persuasion in the online context and its impact on consumer behavior. Furthermore, we also contribute to the literature on online WOM by empirically examining the influence of review manipulations on sales. Our findings provide valuable insights to practitioners and policy makers on the pitfalls of online manipulation activities and the need to ensure the healthy development of e-commerce.
APA, Harvard, Vancouver, ISO, and other styles
15

Dorfman, Vladimir. "Detection and coding techniques for partial response channels /." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2003. http://wwwlib.umi.com/cr/ucsd/fullcit?p3094619.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Laurey, Paul. "The integration of perceptual and response information in the formation of an event file representation of the organism-environment /." view abstract or download file of text, 2003. http://wwwlib.umi.com/cr/uoregon/fullcit?p3102174.

Full text
Abstract:
Thesis (Ph. D.)--University of Oregon, 2003.
Typescript. Includes vita and abstract. Includes bibliographical references (leaves 86-88). Also available for download via the World Wide Web; free to University of Oregon users.
APA, Harvard, Vancouver, ISO, and other styles
17

Calanni, Fraccone Giorgio M. "Bayesian networks for uncertainty estimation in the response of dynamic structures." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/24714.

Full text
Abstract:
Thesis (Ph.D.)--Aerospace Engineering, Georgia Institute of Technology, 2009.
Committee Chair: Dr. Vitali Volovoi; Committee Co-Chair: Dr. Massimo Ruzzene; Committee Member: Dr. Andrew Makeev; Committee Member: Dr. Dewey Hodges; Committee Member: Dr. Peter Cento
APA, Harvard, Vancouver, ISO, and other styles
18

White, Joanne Isobel. "Information management and animal welfare in crisis: The role of collaborative technologies and cooperative work in emergency response." Thesis, University of Colorado at Boulder, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3704841.

Full text
Abstract:

When making decisions about what to do in a disaster, people consider the welfare of their animals. Most people consider their pets to be "part of the family." There are more than 144 million pet dogs and cats in homes around the US, and Colorado is home to a $3 billion livestock industry. In emergency response, supporting the human-animal bond is one important way we can assist people in making good decisions about evacuation, and improve their ability to recover after the emergency period is over. There is an opportunity to leverage social computing tools to support the information needs of people concerned with animals in disasters. This research uses three major studies to examine the information management and cooperative work done around animals in this domain: First, an online study of the response of animal advocates in the 2012 Hurricane Sandy event; second, a study bridging the online and offline response of equine experts following the 2013 Colorado floods; and third, an extended 22-month ethnographic study of the work done at animal evacuation sites, beginning with on-the-ground participant observation at two fairground evacuation sites during the Black Forest Fire in Southern Colorado in 2013, and including the design of two information support tools. The research provides lessons about how information online, information offline, and the bridging of information in those arenas both supports and limits the potential for innovation in addressing the unusual and emergent ill-structured problems that are hallmarks of disaster response. The role of expertise as a vital resource in emergency response, and recommendations for policy improvements that appreciate the conscious inclusion of spontaneous volunteers are two contributions from this work.

APA, Harvard, Vancouver, ISO, and other styles
19

Svenkeson, Adam. "How Cooperative Systems Respond to External Forces." Thesis, University of North Texas, 2014. https://digital.library.unt.edu/ark:/67531/metadc500014/.

Full text
Abstract:
Cooperative interactions permeate through nature, bringing about emergent behavior and complexity. Using a simple cooperative model, I illustrate the mean field dynamics that occur at the critical point of a second order phase transition in the framework of Langevin equations. Through this formalism I discuss the response, both linear and nonlinear, to external forces. Emphasis is placed on how information is transferred from one individual to another in order to facilitate the collective response of the cooperative network to a localized perturbation. The results are relevant to a wide variety of systems, ranging from nematic liquid crystals, to flocks and swarms, social groups, and neural networks.
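
A minimal sketch of the kind of dynamics discussed here, assuming an overdamped Langevin equation at a mean-field critical point (where the linear restoring term vanishes) driven by a small constant external force, integrated with the Euler-Maruyama method; the time step, noise strength, and force are hypothetical, and the sketch does not reproduce the networked cooperative model of the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, steps, noise_strength, force = 1e-3, 200_000, 0.05, 0.02

x = 0.0
trajectory = np.empty(steps)
for i in range(steps):
    # Overdamped Langevin dynamics at a mean-field critical point:
    # the linear restoring term vanishes, leaving dx/dt = -x^3 + F + noise.
    drift = -x**3 + force
    x += drift * dt + np.sqrt(2 * noise_strength * dt) * rng.standard_normal()
    trajectory[i] = x

# Stationary mean response to the external force F (second half of the run).
print(round(trajectory[steps // 2:].mean(), 3))
```
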
APA, Harvard, Vancouver, ISO, and other styles
20

Ko, Yu. "A National Survey on Prescribers' Knowledge of and Their Source of Drug-Drug Interaction Information-An Application of Item Response Theory." Diss., The University of Arizona, 2006. http://hdl.handle.net/10150/193703.

Full text
Abstract:
OBJECTIVES: (1) To assess prescribers' ability to recognize clinically significant DDIs, (2) to examine demographic and practice factors that may be associated with prescribers' DDI knowledge, and (3) to evaluate prescribers' perceived usefulness of various DDI information sources. METHODS: This study used a questionnaire mailed to a national sample of prescribers based on their past history of DDI prescribing, which was determined using data from a pharmacy benefit manager covering over 50 million lives. The survey questionnaire included 14 drug-drug pairs that tested prescribers' ability to recognize clinically important DDIs and five 5-point Likert scale-type questions that assessed prescribers' perceived usefulness of DDI information provided by various sources. Demographic and practice characteristics were collected as well. Rasch analysis was used to evaluate the knowledge and usefulness questions. RESULTS: Completed questionnaires were obtained from 950 prescribers (overall response rate: 7.9%). The number of drug pairs correctly classified by the prescribers ranged from zero to thirteen, with a mean of 6 pairs (42.7%). The percentage of prescribers who correctly classified specific drug pairs ranged from 18.2% for warfarin-cimetidine to 81.2% for acetaminophen with codeine-amoxicillin. Half of the drug-pair questions were answered "not sure" by over one-third of the respondents, and two of these pairs were contraindicated. Rasch analysis of the knowledge and usefulness questions revealed satisfactory model-data fit and person reliabilities of 0.72 and 0.61, respectively. A multiple regression analysis revealed that specialists were less likely to correctly identify interactions than prescribers who were generalists. Other important predictors of DDI knowledge included the experience of seeing harm caused by DDIs and the extent to which the risk of DDIs affected the prescribers' drug selection. ANOVA with the post-hoc Scheffe test indicated that prescribers considered DDI information provided by "other" sources to be more useful than that provided by a computerized alert system. CONCLUSIONS: This study suggests that prescribers' DDI knowledge may be inadequate. The study found that for the drug interactions evaluated, generalists performed better than specialists. In addition, this study presents an application of IRT analysis to knowledge and attitude measurement in health science research.
APA, Harvard, Vancouver, ISO, and other styles
21

Engers, Emma. "Video tutorials and Quick Response codes to assist Mathematical Literacy students in a non-classroom environment: An Activity Theory approach." Master's thesis, University of Cape Town, 2017. http://hdl.handle.net/11427/25265.

Full text
Abstract:
This study investigated the effectiveness of video tutorials, accessed via Quick Response codes, on Grade 10 Mathematical Literacy students' ability to complete their homework. Students often struggle to complete their Mathematical Literacy homework. To assist them outside of the classroom, an intervention involving video tutorials that explained specific sections of work and how to go about solving problems was devised. Students could access the relevant tutorials on a mobile device by scanning barcodes provided on the worksheets. The effectiveness of the intervention was assessed both quantitatively and qualitatively, through analysis of the participating students' homework submissions and interviews with the students after the intervention had ended. Use was made of the YouTube Analytics view-count feature to observe how many times the videos had been watched. Feedback forms, focus group interviews and questionnaires were also used to obtain additional data. Unfortunately, the students did not make as much use of the intervention as had been anticipated, and this, together with the very small sample, meant that no meaningful conclusions could be drawn. The students who had made use of the intervention claimed that the tutorials had helped them in their understanding of the relevant concepts, as well as with the completion of their homework. This would indicate that the intervention was potentially beneficial. I have recommended that future research be undertaken in this regard. When trying to understand why so little use was made of the intervention, it became apparent that many of the weaker students were unaware of their limitations in Mathematical Literacy, and therefore did not feel the need to access the available resources offered by the intervention. This is a serious obstacle to implementing such an intervention, and possible solutions are considered.
APA, Harvard, Vancouver, ISO, and other styles
22

Swartz, Horn Rebecca. "CT3 as an Index of Knowledge Domain Structure: Distributions for Order Analysis and Information Hierarchies." Thesis, University of North Texas, 2002. https://digital.library.unt.edu/ark:/67531/metadc3306/.

Full text
Abstract:
The problem with which this study is concerned is articulating all possible CT3 and KR21 reliability measures for every case of a 5x5 binary matrix (32,996,500 possible matrices). The study has three purposes. The first purpose is to calculate CT3 for every matrix and compare the results to the proposed optimum range of .3 to .5. The second purpose is to compare the results from the calculation of KR21 and CT3 reliability measures. The third purpose is to calculate CT3 and KR21 on every strand of a class test whose item set has been reduced using the difficulty strata identified by Order Analysis. The study was conducted by writing a computer program to articulate all possible 5 x 5 matrices. The program also calculated CT3 and KR21 reliability measures for each matrix. The nonparametric technique of Order Analysis was applied to two sections of test items to stratify the items into difficulty levels. The difficulty levels were used to reduce the item set from 22 to 9 items. All possible strands or chains of these items were identified so that both reliability measures (CT3 and KR21) could be calculated. One major finding of this study indicates that .3 to .5 is a desirable range for CT3 (cumulative p=.86 to p=.98) if cumulative frequencies are measured. A second major finding is that the KR21 reliability measure produced an invalid result more than half the time. The last major finding is that CT3, rescaled to range between 0 and 1, supports De Vellis' guidelines for reliability measures. The major conclusion is that CT3 is a better measure of reliability since it considers both inter- and intra-item variances.
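
A minimal sketch of the enumeration-plus-reliability idea, assuming the usual KR-21 formula based on the mean and variance of total scores; CT3 is not reproduced here, and the example matrices are hypothetical. Note that KR-21 is undefined when the total-score variance is zero, which is one way an invalid result can arise.

```python
import numpy as np
from itertools import product

def kr21(matrix):
    """KR-21 for a persons-by-items 0/1 matrix:
    (k/(k-1)) * (1 - M*(k - M) / (k * var_of_total_scores)).
    Returns None when the total-score variance is zero (the statistic is undefined)."""
    k = matrix.shape[1]
    totals = matrix.sum(axis=1)
    var = totals.var(ddof=1)
    if var == 0:
        return None
    m = totals.mean()
    return (k / (k - 1)) * (1 - m * (k - m) / (k * var))

def all_5x5_matrices():
    """Generator over every 0/1 assignment to the 25 cells of a 5x5 matrix -- exhaustive and slow."""
    for bits in product((0, 1), repeat=25):
        yield np.array(bits).reshape(5, 5)

# Two hypothetical 5x5 score matrices rather than the full enumeration.
example = np.array([[1,1,1,0,0],[1,1,0,0,0],[1,1,1,1,0],[1,0,0,0,0],[1,1,1,1,1]])
print(kr21(example), kr21(np.ones((5, 5), dtype=int)))
```
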
APA, Harvard, Vancouver, ISO, and other styles
23

Jiang, Wei. "A study of the relationship between forest distribution and environmental variables using information theory: A regional-scale model for predicting forest response to global warming." Thesis, University of Ottawa (Canada), 1996. http://hdl.handle.net/10393/9810.

Full text
Abstract:
Many studies on forest or vegetation response to global warming have been done using gap models or empirical models. Thus far, there is no good regional model that allows predicting forest change at an intermediate scale. In this study, we have developed a model of this type, called the Knowledge Base Forest Model (KBFM), using an information analytical tool (P scEGASE) based on information theory. Using this model and data from the Canadian Climate Centre general circulation model, we could predict the future distribution of forest types in the research area: the Province of Manitoba. The study shows that the KBFM may well be used to predict the future regional distribution of forest types. Its main advantages are: (1) environmental variables used as predictors can be qualitative (e.g. soil texture) as well as quantitative (e.g. temperature); (2) the KBFM provides the possibility of accounting for the role of soil factors in the forest response to global warming; (3) the KBFM can predict forest type distribution using various climatic scenarios; (4) the KBFM can predict forest type distribution in greater detail than empirical models.
APA, Harvard, Vancouver, ISO, and other styles
24

Yapar, Taner. "A Study Of The Predictive Validity Of The Baskent University English Proficiency Exam Through The Use Of The Two-parameter Irt Model&." Master's thesis, METU, 2003. http://etd.lib.metu.edu.tr/upload/1217629/index.pdf.

Full text
Abstract:
The purpose of this study is to analyze the predictive power of the ability estimates obtained through the two-parameter IRT model on the English Proficiency Exam administered at Başkent University in September 2001 (BUSPE 2001). As prerequisite analyses, the fit of the one- and two-parameter IRT models was investigated. The data used for this study were the test data of all 727 students who took BUSPE 2001 and the departmental English course grades of the passing students. At the first stage, whether the assumptions of IRT were met was investigated. Next, the observed and theoretical distributions of the test data were compared using chi-square statistics. After that, the invariance of ability estimates across different sets of items and the invariance of item parameters across different groups of students were examined. At the second stage, the predictive validity of BUSPE 2001 and its subtests was analyzed by using both classical test scores and ability estimates of the better-fitting IRT model. The findings revealed that the test met the assumptions of unidimensionality, local independence and nonspeededness, but the assumption of equal discrimination indices was not met. Whether the assumption of minimal guessing was met remained unclear. The chi-square statistics indicated that only the two-parameter model fitted the test data. The ability estimates were found to be invariant across different item sets, and the item parameters were found to be invariant across different groups of students. The IRT-estimated predictive validity exceeded the predictive validity calculated through classical total scores, both for the whole test and for its subtests. The reading subtest was the best predictor of future performance in departmental English courses among all subtests.
APA, Harvard, Vancouver, ISO, and other styles
25

Lee, Philseok. "Investigating Parameter Recovery and Item Information for Triplet Multidimensional Forced Choice Measure: An Application of the GGUM-RANK Model." Scholar Commons, 2016. http://scholarcommons.usf.edu/etd/6298.

Full text
Abstract:
To control various response biases and rater errors in noncognitive assessment, multidimensional forced choice (MFC) measures have been proposed as an alternative to single-statement Likert-type scales. Historically, MFC measures have been criticized because conventional scoring methods can lead to ipsativity problems that render scores unsuitable for inter-individual comparisons. However, with the recent advent of classical test theory and item response theory scoring methods that yield normative information, MFC measures are surging in popularity and becoming important components of personnel and educational assessment systems. This dissertation presents developments concerning a GGUM-based MFC model, henceforth referred to as the GGUM-RANK. Markov chain Monte Carlo (MCMC) algorithms were developed to estimate GGUM-RANK statement and person parameters directly from MFC rank responses, and the efficacy of the new estimation algorithm was examined through computer simulations and an empirical construct validity investigation. Recently derived GGUM-RANK item information functions and information indices were also used to evaluate overall item and test quality for the empirical study and to give insights into differences in scoring accuracy between two-alternative (pairwise preference) and three-alternative (triplet) MFC measures for future work. This presentation concludes with a discussion of the research findings and potential applications in workforce and educational settings.
APA, Harvard, Vancouver, ISO, and other styles
26

Bodine, Andrew James. "A Monte Carlo Investigation of Fit Statistic Behavior in Measurement Models Assessed Using Limited- and Full-Information Estimation." The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1433412282.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Ersson, Lucas. "Facilitating More Frequent Updates: Towards Evergreen : A Case Study of an Enterprise Software Vendor’s Response to the Emerging DevOps Trend, Drawing on Neo-Institutional Theory." Thesis, Linköpings universitet, Informatik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-155785.

Full text
Abstract:
Over the last couple of years, the trend within the software industry has been to release smaller software updates more frequently, to overcome challenges and increase flexibility, in order to align with the swiftly changing industry environment. As an effect, we now see companies moving over to capitalizing on subscriptions and incremental releases instead of charging for upgrades. By utilizing neo-institutional theory and Oliver's (1991) strategic response theory, an enterprise systems vendor's response to the emerging DevOps trend can be determined.
APA, Harvard, Vancouver, ISO, and other styles
28

Salem, Joseph A. Jr. "The Development and Validation of All Four TRAILS (Tool for Real-Time Assessment of Information Literacy Skills) Tests for K-12 Students." Kent State University / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=kent1415382839.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Elijah, Daniel. "Neural encoding by bursts of spikes." Thesis, University of Manchester, 2014. https://www.research.manchester.ac.uk/portal/en/theses/neural-encoding-by-bursts-of-spikes(56f4cf97-3887-4e89-bc0d-8db183ce9ce1).html.

Full text
Abstract:
Neurons can respond to input by firing isolated action potentials, or spikes. Sequences of spikes have been linked to the encoding of neuron input. However, many neurons also fire bursts: mechanistically distinct responses consisting of brief high-frequency spike firing. Bursts form separate response symbols but historically have not been thought to encode input. However, recent experimental evidence suggests that bursts can encode input in parallel with tonic spikes. The recognition of bursts as distinct encoding symbols raises important questions; these form the basic aims of this thesis: (1) What inputs do bursts encode? (2) Does burst structure provide extra information about different inputs? (3) Is burst coding robust against the presence of noise, an inherent property of all neural systems? (4) What mechanisms are responsible for burst input encoding? (5) How does burst coding manifest in in-vivo neurons? To answer these questions, bursting is studied using a combination of neuron models and in-vivo hippocampal neuron recordings. Models ranged from neuron-specific cell models to models belonging to three fundamentally different burst dynamic classes (unspecific to any neural region). These classes are defined using concepts from non-linear system theory. Together, analysing these model types alongside in-vivo recordings provides both a specific and a general analysis of burst encoding. For neuron-specific and unspecific models, a number of model types expressing different levels of biological realism are analysed. For the study of thalamic encoding, two models containing either a single simplified burst-generating current or multiple currents are used. For models simulating three burst dynamic classes, three further models of different biological complexity are used. The bursts generated by models and real neurons were analysed by assessing the input they encode, using methods such as information theory and reverse correlation. Modelled bursts were also analysed for their resilience to simulated neural noise. In all cases, inputs evoking bursts and tonic spikes were distinct. The structure of burst-evoking input depended on burst dynamic class rather than the biological complexity of models. Different n-spike bursts encoded different inputs that, if read by downstream cells, could discriminate complex input structure. In the thalamus, this n-spike burst code explains informative responses that were not due to tonic spikes. In-vivo hippocampal neurons and a pyramidal cell model both use the n-spike code to mark different LFP features. This n-spike burst code may therefore be a general feature of bursting relevant to both model and in-vivo neurons. Bursts can also encode input corrupted by neural noise, often outperforming the encoding of single spikes. Both burst timing and internal structure are informative even when driven by strongly noise-corrupted input. Also, bursts induce input-dependent spike correlations that remain informative despite strong added noise. As a result, bursts endow their constituent spikes with extra information that would be lost if tonic spikes were considered the only informative responses.
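
A minimal sketch of the information-theoretic part of this analysis, assuming a plug-in estimate of the discrete mutual information between a stimulus label and burst size (spikes per burst); the trial data are hypothetical and the sketch ignores the bias corrections a real analysis would need.

```python
import numpy as np

def mutual_information(stimuli, responses):
    """Plug-in estimate of I(S;R) in bits from paired discrete observations."""
    s_vals, r_vals = np.unique(stimuli), np.unique(responses)
    joint = np.zeros((len(s_vals), len(r_vals)))
    for s, r in zip(stimuli, responses):
        joint[np.where(s_vals == s)[0][0], np.where(r_vals == r)[0][0]] += 1
    joint /= joint.sum()
    ps, pr = joint.sum(axis=1), joint.sum(axis=0)
    mi = 0.0
    for i in range(len(s_vals)):
        for j in range(len(r_vals)):
            if joint[i, j] > 0:
                mi += joint[i, j] * np.log2(joint[i, j] / (ps[i] * pr[j]))
    return mi

# Hypothetical trials: stimulus class (0 or 1) and burst size (spikes per burst).
stimuli = np.array([0, 0, 0, 0, 1, 1, 1, 1])
bursts  = np.array([2, 2, 3, 2, 4, 5, 4, 4])
print(round(mutual_information(stimuli, bursts), 3))
```
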
APA, Harvard, Vancouver, ISO, and other styles
30

Kopf, Julia. "Model-based recursive partitioning meets item response theory." Diss., Ludwig-Maximilians-Universität München, 2013. http://nbn-resolving.de/urn:nbn:de:bvb:19-164348.

Full text
Abstract:
The aim of this thesis is to develop new statistical methods for the evaluation of assumptions that are crucial for reliably assessing group differences in complex studies in the field of psychological and educational testing. The framework of item response theory (IRT) includes a variety of psychometric models for scaling latent traits, such as the widely used Rasch model. The Rasch model ensures objective measures and fair comparisons between groups of subjects. However, this important property holds only if the underlying assumptions are met. One essential assumption is the invariance property. Its violation is extensively discussed in the literature and termed differential item functioning (DIF). This thesis focuses on the methodology of DIF detection. Existing methods for DIF detection are briefly discussed, and new statistical methods for DIF detection are introduced together with new anchor methods. The methods introduced in this thesis allow items with and without DIF to be classified more accurately and, thus, improve the evaluation of the invariance assumption in the Rasch model. This thesis thereby contributes to the construction of objective and fair tests in psychological and educational testing.
APA, Harvard, Vancouver, ISO, and other styles
31

McBride, Freda D. H. "Memory Bias in the Use of Accounting Information: An Examination of Affective Responses and Retrieval of Information in Accounting Decision Making." Diss., Virginia Tech, 1998. http://hdl.handle.net/10919/30551.

Full text
Abstract:
This dissertation is based on the Kida-Smith (1995) model of "The encoding and retrievability of numerical data." It is concerned with the variable conditions under which a positive affective response (i.e., a decision or opinion that results in a positive valence) to previously viewed accounting information may and may not influence current decision-making. An affective response to accounting numbers may adversely influence decisions made based on those numbers. Prior research has found that individuals recall information that is consistent with prior decisions more readily than they recall inconsistent information. Research has also shown that current judgements are biased toward prior decisions or judgements. These biases may cause current decisions to be suboptimal or dysfunctional. Two 2x2 experiments were conducted to examine four hypotheses. These hypotheses concerned (1) the influence of an affective response on an investment decision when the differences between two sets of accounting numbers are small and when the differences are large, (2) the influence of an affective response on the recall of numerical data, (3) the influence of time on the recall of numerical data given an affective response, and (4) the influence of an affective response on an investment decision when the level of cognitive processing at the time the affective response is produced is low and when the level of processing is high. The first experiment used graduate students in an accounting course to investigate the influence of differences between numerical amounts on decision making. It also investigated the influence of time between encoding and retrieval on the recall of numerical amounts. The second experiment used accounting practitioners to investigate the influence of differences between numerical amounts on decision making, and to examine the influence of different levels of cognitive processing at the time of encoding on decision making. Results indicate that an affective response does produce suboptimal decisions. In the case of accounting practitioners, however, the influence of the affective response is mitigated when the magnitude of the difference between the accounting numbers previously viewed and those undergoing current examination is large rather than small. The affective response did not significantly influence the recall of numerical amounts. There was no significant change in the influence of the affective response on recalled amounts with increased time between encoding and retrieval. Also, there were no significant changes in decision-making with increased processing at the time of encoding.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
32

Simonovic, Nicolle. "Effects of Construal Framing on Responses to Ambiguous Health Information." Kent State University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=kent1594927308547261.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Daukšaitė, Gabrielė. "Informatikos pagrindų konceptualizavimas naudojant uždavinius." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2014. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2011~D_20140701_164559-17014.

Full text
Abstract:
Magistro darbe tyrinėjama, kaip Lietuvos ir kai kurių užsienio valstybių bendrojo lavinimo mokyklose yra mokoma informatikos, aiškinamasi, koks požiūris į šią mokomąja discipliną, kurie veiksniai tai įtakoja. Tyrimui pasirinktas įdomesnis kelias – naudojamasi informatikos ir kompiuterinio lavinimosi varžybomis „Bebras“, kurios vyksta daugiau kaip dešimtyje valstybių. Palyginti 2008–2010 metais Lietuvoje vykusių „Bebro“ varžybų užduočių rinkiniai pagal įvairius informatikos konceptus. Pasinaudojus 2010 metais Lietuvos „Bebro“ varžybose dalyvavusių mokinių rezultatų duomenimis bei pritaikius atitinkamus matematinius užduočių vertinimo modelius, buvo įvertinta užduočių rinkinio informacinė funkcija, kuri leidžia parinkti tinkamiausias užduotis atitinkamam mokinių žinių lygiui. Mokinių informatikos žinių lygis neatsiejamas nuo informatikos pagrindų, kurie formuojasi laikui bėgant, kai mokinys gauna tinkamą informaciją ne tik per informatikos ar informacinių technologijų pamokas, bet ir kai mokytojai informacines ir komunikacines priemones taiko per kitų dalykų pamokas. Darbe apskaičiuoti užduočių sunkumo koeficientai, kurie palyginti su užduočių sunkumo lygiais, kuriuos priskyrė uždavinių sudarytojai ar vertintojai. Taip pat nustatyti užduočių skiriamosios gebos indeksai, kurie nustato, kiek gerai užduotis atskiria geresnius mokinių darbus nuo blogesnių tikrinamo dalyko atžvilgiu. Tyrimo rezultatai svarbūs tiek mokytojams, kurie turi įtakos mokinių informatikos pagrindų... [toliau žr. visą tekstą]
In this master's thesis, the computer science curricula in the compulsory schools of Lithuania and some foreign countries are reviewed. Data from the "Beaver" information technology contest, which is organized in more than ten countries, were selected as a more attractive way to implement this study. Comparisons of the task sets in the Lithuanian "Beaver" competitions of 2008–2010 according to informatics concepts are presented. In this thesis, the information function of a task set was assessed using data on pupils' results, obtained from the Lithuanian "Beaver" competition in 2010. The information function allows choosing the tasks best suited to a given ability level of pupils. Pupils' level of ability in computer science is inseparable from the informatics fundamentals, which form over time when pupils receive the right information not only during computer science or information technology lessons but also when their teachers apply information and communication technologies in lessons on other subjects. The difficulty parameters of the tasks are calculated, as well as the discrimination parameters, which describe how well an item can differentiate between examinees with abilities below the item location and those with abilities above it. The results of this study are important for teachers, who influence the formation of pupils' informatics fundamentals, as well as for experts and creators of competition tasks, because for them it is important that the right and purposeful introduction to computer... [to full text]
APA, Harvard, Vancouver, ISO, and other styles
34

Rogers, Christian. "A Study of Student Engagement with Media in Online Training." University of Toledo / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1364393833.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Mealing, Richard Andrew. "Dynamic opponent modelling in two-player games." Thesis, University of Manchester, 2015. https://www.research.manchester.ac.uk/portal/en/theses/dynamic-opponent-modelling-in-twoplayer-games(def6e187-156e-4bc9-9c56-9a896b9f2a42).html.

Full text
Abstract:
This thesis investigates decision-making in two-player imperfect information games against opponents whose actions can affect our rewards, and whose strategies may be based on memories of interaction, or may be changing, or both. The focus is on modelling these dynamic opponents, and using the models to learn high-reward strategies. The main contributions of this work are: 1. An approach to learn high-reward strategies in small simultaneous-move games against these opponents. This is done by using a model of the opponent learnt from sequence prediction, with (possibly discounted) rewards learnt from reinforcement learning, to look ahead using explicit tree search. Empirical results show that this gains higher average rewards per game than state-of-the-art reinforcement learning agents in three simultaneous-move games. They also show that several sequence prediction methods model these opponents effectively, supporting the idea of borrowing such methods from areas such as data compression and string matching; 2. An online expectation-maximisation algorithm that infers an agent's hidden information based on its behaviour in imperfect information games; 3. An approach to learn high-reward strategies in medium-size sequential-move poker games against these opponents. This is done by using a model of the opponent learnt from sequence prediction, which needs its hidden information (inferred by the online expectation-maximisation algorithm), to train a state-of-the-art no-regret learning algorithm by simulating games between the algorithm and the model. Empirical results show that this improves the no-regret learning algorithm's rewards when playing against popular and state-of-the-art algorithms in two simplified poker games; 4. A demonstration that several change detection methods can effectively model changing categorical distributions, with experimental results comparing their accuracies to empirical distributions. These results also show that their models can be used to outperform state-of-the-art reinforcement learning agents in two simultaneous-move games. This supports the idea of modelling changing opponent strategies with change detection methods; 5. Experimental results on the self-play convergence of the empirical distributions of play of sequence prediction and change detection methods to mixed-strategy Nash equilibria. The results show that they converge faster, and in more cases for change detection, than fictitious play.
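Contribution 5 compares the empirical play frequencies of sequence prediction and change detection methods with fictitious play. As a generic illustration of classical fictitious play (not the algorithms developed in the thesis), the sketch below has two players repeatedly best-respond to the empirical distribution of the opponent's past actions in matching pennies; the empirical frequencies approach the mixed-strategy Nash equilibrium (0.5, 0.5):

# Matching pennies: the row player wants the actions to match, the column player wants a mismatch.
ROW_PAYOFF = [[1, -1], [-1, 1]]   # indexed [own action][opponent action]
COL_PAYOFF = [[-1, 1], [1, -1]]

def best_response(payoff, opponent_counts):
    """Best pure action against the empirical distribution of the opponent's past play."""
    total = sum(opponent_counts)
    expected = [sum(payoff[a][o] * opponent_counts[o] / total for o in range(2)) for a in range(2)]
    return max(range(2), key=lambda a: expected[a])

row_counts, col_counts = [1, 1], [1, 1]   # uniform pseudo-counts as priors
for _ in range(10000):
    r = best_response(ROW_PAYOFF, col_counts)
    c = best_response(COL_PAYOFF, row_counts)
    row_counts[r] += 1
    col_counts[c] += 1

print([x / sum(row_counts) for x in row_counts])  # both lists approach [0.5, 0.5]
print([x / sum(col_counts) for x in col_counts])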
APA, Harvard, Vancouver, ISO, and other styles
36

Jatobá, Victor Miranda Gonçalves. "Uma abordagem personalizada no processo de seleção de itens em Testes Adaptativos Computadorizados." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/100/100131/tde-27012019-110739/.

Full text
Abstract:
Computerized Adaptive Testing (CAT) based on Item Response Theory allows more accurate assessments with fewer questions than the classic paper-based test. Nonetheless, building a CAT involves some key questions that, when addressed properly, can further improve the accuracy and efficiency of estimating examinees' abilities. One of the main questions concerns the choice of the Item Selection Rule (ISR). The classic CAT makes exclusive use of a single ISR. However, these rules have different strengths depending on the examinee's ability level and on the stage of the test. Thus, the objective of this work is to reduce the length of dichotomous tests (which consider only correct and incorrect answers) administered through a classic single-ISR CAT, without significant loss of accuracy in the estimation of examinees' abilities. For this purpose, we create the ALICAT approach, which personalizes the item selection process in a CAT by considering the use of more than one ISR. To apply this approach, we first analyze the performance of different ISRs. A case study on the Mathematics and its Technologies test of the 2012 ENEM shows that the Kullback-Leibler information with a posterior distribution (KLP) rule estimates examinees' abilities better than the Fisher information (F), Kullback-Leibler information (KL), Maximum Likelihood Weighted Information (MLWI), and Maximum Posterior Weighted Information (MPWI) rules. Previous results in the literature show that a CAT using KLP was able to reduce this test by 46.6% from its full size of 45 items with no significant loss of accuracy in estimating the examinees' abilities. In this work, we observe that the F and MLWI rules performed better in the early stages of the CAT for estimating examinees with extreme negative and positive ability levels, respectively. Using these selection rules together, the ALICAT approach reduced the same test by 53.3%.
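The Fisher information rule (F) named in the abstract selects, at every step, the unadministered item that is most informative at the current ability estimate; KLP and the other rules replace this criterion with Kullback-Leibler based quantities. A minimal sketch of the F rule under an assumed 2PL item bank (an illustration of the general idea, not the ALICAT implementation):

import math

def item_information(theta, a, b):
    """Fisher information of a 2PL item at the current ability estimate theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def select_next_item(theta_hat, item_bank, administered):
    """Greedy Fisher-information rule: pick the most informative unused item."""
    candidates = [i for i in range(len(item_bank)) if i not in administered]
    return max(candidates, key=lambda i: item_information(theta_hat, *item_bank[i]))

item_bank = [(1.3, -1.0), (0.9, 0.0), (1.7, 0.5), (1.1, 1.2)]  # made-up (a, b) pairs
print(select_next_item(theta_hat=0.3, item_bank=item_bank, administered={0}))  # -> 2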
APA, Harvard, Vancouver, ISO, and other styles
37

Campos, Simone Silva. "O jogo e os jogos: o jogo da leitura, o jogo de xadrez e a sanidade mental em A defesa Lujin, de Vladimir Nabokov." Universidade do Estado do Rio de Janeiro, 2014. http://www.bdtd.uerj.br/tde_busca/arquivo.php?codArquivo=6936.

Full text
Abstract:
Fundação de Amparo à Pesquisa do Estado do Rio de Janeiro
In Vladimir Nabokov's novel The Luzhin Defense, published in Russian in 1930, the text beckons the reader to adopt mental processes similar to those of a chess player and of a schizophrenic person, both traits of the novel's title character. This character sees himself both as player and piece in an ongoing game of chess; his expectations and predicaments are traced in parallel to the reader's own as he or she navigates the text. Nabokov's preface to the 1964 English edition is taken as an indication that he tries to shape both an implicit reader and an implicit author. In order to analyze the elements of the text and the degrees of mental abstraction involved, we refer to Wolfgang Iser's reader-response theory and to many ideas of the psychiatrist and ethnologist Gregory Bateson, such as the double bind, with special regard to the map vs. territory and play vs. game distinctions. A double double bind is built within the reader-text interplay as follows: 1) the reader is invited to feel empathy for Luzhin's predicament and to regard him at once as sane and insane; and 2) the reader is posited as a pseudo-transcendental instance unable to communicate with its nether instance (Luzhin), in such a way that it brews a feeling of anxiety directly relatable to his or her engagement in the work of fiction, reproducing, in a way, Luzhin's madness. Luzhin's synesthesia is identified as one of the elements of the text able to recreate the chess-playing experience even for readers who are not fond of the game. The connection between Luzhin's fictional schizophrenia and Bateson's views on alcoholism is also analyzed.
APA, Harvard, Vancouver, ISO, and other styles
38

Jordán, Prunera Jaume Magí. "Non-Cooperative Games for Self-Interested Planning Agents." Doctoral thesis, Universitat Politècnica de València, 2017. http://hdl.handle.net/10251/90417.

Full text
Abstract:
Multi-Agent Planning (MAP) is a topic of growing interest that deals with the problem of automated planning in domains where multiple agents plan and act together in a shared environment. In most cases, agents in MAP are cooperative (altruistic) and work together towards a collaborative solution. However, when rational self-interested agents are involved in a MAP task, the ultimate objective is to find a joint plan that accomplishes the agents' local tasks while satisfying their private interests. Among the MAP scenarios that involve self-interested agents, non-cooperative MAP refers to problems where non-strictly competitive agents feature common and conflicting interests. In this setting, conflicts arise when self-interested agents put their plans together and the resulting combination renders some of the plans non-executable, which implies a utility loss for the affected agents. Each participant wishes to execute its plan as it was conceived, but congestion issues and conflicts among the actions of the different plans compel agents to find a coordinated stable solution. Non-cooperative MAP tasks are tackled through non-cooperative games, which aim at finding a stable (equilibrium) joint plan that ensures the agents' plans are executable (by addressing planning conflicts) while accounting for their private interests as much as possible. Although this paradigm reflects many real-life problems, there is a lack of computational approaches to non-cooperative MAP in the literature. This PhD thesis pursues the application of non-cooperative games to solve non-cooperative MAP tasks that feature rational self-interested agents. Each agent calculates a plan that attains its individual planning task, and subsequently, the participants try to execute their plans in a shared environment. We tackle non-cooperative MAP from a twofold perspective. On the one hand, we focus on agents' satisfaction by studying desirable properties of stable solutions, such as optimality and fairness. On the other hand, we look for a combination of MAP and game-theoretic techniques capable of efficiently computing stable joint plans while minimizing the computational complexity of this combined task. Additionally, we consider planning conflicts and congestion issues in the agents' utility functions, which results in a more realistic approach. To the best of our knowledge, this PhD thesis opens up a new research line in non-cooperative MAP and establishes the basic principles to attain the problem of synthesizing stable joint plans for self-interested planning agents through the combination of game theory and automated planning.
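The notion of a stable (equilibrium) joint plan can be illustrated with a toy example: each agent has two candidate plans, the utility table already reflects any conflicts or congestion, and a profile is stable if no agent gains by unilaterally switching to another of its own plans. The names and numbers below are invented for illustration and are not the thesis's method:

PLANS = {"A": ["a1", "a2"], "B": ["b1", "b2"]}

# UTILITY[(plan of A, plan of B)] = (utility for A, utility for B)
UTILITY = {
    ("a1", "b1"): (3, 3),
    ("a1", "b2"): (1, 4),
    ("a2", "b1"): (4, 1),
    ("a2", "b2"): (2, 2),
}

def is_stable(profile):
    """True if no agent can improve its utility by unilaterally changing its plan."""
    for idx, agent in enumerate(["A", "B"]):
        current = UTILITY[profile][idx]
        for alternative in PLANS[agent]:
            deviated = list(profile)
            deviated[idx] = alternative
            if UTILITY[tuple(deviated)][idx] > current:
                return False
    return True

print([p for p in UTILITY if is_stable(p)])  # -> [('a2', 'b2')] in this toy game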
Jordán Prunera, JM. (2017). Non-Cooperative Games for Self-Interested Planning Agents [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/90417
APA, Harvard, Vancouver, ISO, and other styles
39

Hultman, Alexandra, and Erik Häggström. "Reklameffekter av storytelling för olika produkttyper : En kvantitativ studie av hur storytelling påverkar reklameffektivitet." Thesis, Linköpings universitet, Företagsekonomi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-151392.

Full text
Abstract:
Storytelling has been identified as an effective marketing method, and many companies have mastered the art of telling stories. Storytelling is used to create an emotional response in the intended audience, and if companies can do this successfully, it can be a way of differentiating themselves. The method has long been emphasized as the key to success, but is this the whole truth? We therefore want to answer the question: how does storytelling affect advertising effectiveness? The aim of the thesis was to investigate, describe and discuss storytelling and its effects as a marketing method. The results of this study show that storytelling is more effective than traditional product-based advertising. However, there are differences in its effect across product categories. According to the study, storytelling works better when involvement with the product is low and when its purchase has a transformative or emotion-enhancing motive. The results of the study lead to an increased understanding of when storytelling is effective as a marketing method.
APA, Harvard, Vancouver, ISO, and other styles
40

Chen, Carla Chia-Ming. "Bayesian methodology for genetics of complex diseases." Thesis, Queensland University of Technology, 2010. https://eprints.qut.edu.au/43357/1/Carla_Chen_Thesis.pdf.

Full text
Abstract:
Genetic research of complex diseases is a challenging, but exciting, area of research. Early development of this research was limited until the completion of the Human Genome and HapMap projects, along with the reduction in the cost of genotyping, which paved the way for understanding the genetic composition of complex diseases. In this thesis, we focus on statistical methods for two aspects of genetic research: phenotype definition for diseases with complex etiology, and methods for identifying potentially associated Single Nucleotide Polymorphisms (SNPs) and SNP-SNP interactions. With regard to phenotype definition for diseases with complex etiology, we first investigated the effects of different statistical phenotyping approaches on the subsequent analysis. In light of the findings, and the difficulties in validating the estimated phenotype, we proposed two different methods for reconciling phenotypes of different models, using Bayesian model averaging as a coherent mechanism for accounting for model uncertainty. In the second part of the thesis, the focus turns to methods for identifying associated SNPs and SNP interactions. We review the use of Bayesian logistic regression with variable selection for SNP identification and extend the model to detect interaction effects in population-based case-control studies. In this part of the study, we also develop a machine learning algorithm to cope with large-scale data analysis, namely modified Logic Regression with Genetic Program (MLR-GEP), which is then compared with the Bayesian model, Random Forests and other variants of logic regression.
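As a generic illustration of the kind of model the second part refers to (not the thesis's exact specification), a logistic regression for a case-control phenotype with two SNPs and their interaction can be written as

\operatorname{logit}\,\Pr(Y_i = 1) = \beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + \beta_{12}\, x_{i1} x_{i2},

where x_{i1} and x_{i2} are genotype codings for the two SNPs; Bayesian variable selection then places priors on the coefficients and infers which of them are plausibly non-zero.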
APA, Harvard, Vancouver, ISO, and other styles
41

Wu, Jheng-Fong, and 吳政峯. "The Influence of Information Richness on Consumer WOM Response with Social Presence Theory." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/79524081561034889471.

Full text
Abstract:
Master's thesis
National Taipei University of Technology
Graduate Institute of Commerce Automation and Management
99
With the rapid development of the Internet, electronic word of mouth has taken on more forms online. Previous research on the impact of e-WOM has measured it mainly through word-of-mouth volume and positive versus negative valence. This study draws on social presence theory (Short et al., 1976) together with the word-of-mouth characteristics proposed by Sweeney et al. (2008), including message vividness, and includes social presence as a mediating variable in order to verify how different word-of-mouth message characteristics influence word-of-mouth response through social presence. Using an experimental design with online restaurant reviews (hot pot restaurants) as the example, the results show that message characteristics significantly influence positive word-of-mouth response intentions, that weak messages have a significant negative impact, and that social presence indeed mediates the relationship between word-of-mouth message characteristics and word-of-mouth response. The results can provide online businesses with marketing strategy and management recommendations for taking advantage of the power of online word of mouth to increase sales.
APA, Harvard, Vancouver, ISO, and other styles
42

Chandran, Aneesh. "Investigation of the use of infinite impulse response filters to construct linear block codes." Thesis, 2016. http://hdl.handle.net/10539/22669.

Full text
Abstract:
A dissertation submitted in fulfilment of the requirements for the degree of Master of Science in Information Engineering, School of Electrical and Information Engineering, August 2016
The work presented extends and contributes to research in error-control coding and information theory. The work focuses on the construction of block codes using an IIR filter structure. Although previous work in this area used FIR filter structures for error detection, they were inherently used in conjunction with other error-control codes; there has not been an investigation into using IIR filter structures to create codewords, let alone to justify their validity. In the research presented, linear block codes are created using IIR filters, and their error-correcting capabilities are investigated. The construction of short codes that achieve the Griesmer bound is shown. The potential to construct long codes is discussed, and it is shown how the construction is constrained by high computational complexity. The G-matrices for these codes are also obtained from a computer search, which shows that they do not have a Quasi-Cyclic structure, and these codewords have been tested to show that they are not cyclic. Further analysis has shown that IIR filter structures implement truncated cyclic codes, which are shown to be implementable using an FIR filter. The research also shows that the codewords created from IIR filter structures are valid, by decoding them using an existing iterative soft-decision decoder. This represents a unique and valuable contribution to the field of error-control coding and information theory.
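As a hedged illustration of the filter view of block-code encoding (a standard textbook construction, not the specific IIR construction investigated in the dissertation), the sketch below encodes the (7,4) cyclic Hamming code systematically via polynomial division over GF(2), the same linear feedback structure that a shift-register divider circuit implements:

def gf2_remainder(dividend_bits, generator_bits):
    """Remainder of dividend(x) / generator(x) over GF(2), bits given MSB first."""
    remainder = list(dividend_bits)
    g = list(generator_bits)
    for i in range(len(remainder) - len(g) + 1):
        if remainder[i] == 1:                     # leading term present -> XOR in g(x)
            for j, gbit in enumerate(g):
                remainder[i + j] ^= gbit
    return remainder[-(len(g) - 1):]              # last deg(g) bits are the remainder

def encode_cyclic(message_bits, generator_bits=(1, 0, 1, 1)):  # g(x) = x^3 + x + 1
    """Systematic codeword: message followed by the parity (CRC-style) bits."""
    padded = list(message_bits) + [0] * (len(generator_bits) - 1)
    return list(message_bits) + gf2_remainder(padded, generator_bits)

print(encode_cyclic([1, 0, 0, 1]))  # -> [1, 0, 0, 1, 1, 1, 0]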
APA, Harvard, Vancouver, ISO, and other styles
43

De, Aguinaga José Guillermo. "Uncertainty Assessment of Hydrogeological Models Based on Information Theory." Doctoral thesis, 2010. https://tud.qucosa.de/id/qucosa%3A25657.

Full text
Abstract:
There is a great deal of uncertainty in hydrogeological modeling. Overparametrized models increase uncertainty, since the information from the observations is distributed across all of the parameters. The present study proposes a new option to reduce this uncertainty. A way to achieve this goal is to select a model which provides good performance with as few calibrated parameters as possible (a parsimonious model) and to calibrate it using many sources of information. Akaike's Information Criterion (AIC), proposed by Hirotugu Akaike in 1973, is a statistical criterion based on information theory which allows us to select a parsimonious model. AIC formulates the problem of parsimonious model selection as an optimization problem across a set of proposed conceptual models. The AIC assessment is relatively new in groundwater modeling, and applying it with different sources of observations presents a challenge. In this dissertation, important findings on the application of AIC in hydrogeological modeling using different sources of observations are discussed. AIC is tested on groundwater models using three sets of synthetic data: hydraulic pressure, horizontal hydraulic conductivity, and tracer concentration. In the present study, the impact of the following factors is analyzed: the number of observations, the types of observations, and the order of the calibrated parameters. These analyses reveal not only that the number of observations determines how complex a model can be, but also that their diversity allows for further complexity in the parsimonious model. However, a truly parsimonious model was only achieved when the order of the calibrated parameters was properly considered; this means that parameters which provide bigger improvements in model fit should be considered first. The approach of obtaining a parsimonious model by applying AIC with different types of information was successfully applied to an unbiased lysimeter model using two different types of real data: evapotranspiration and seepage water. With this additional independent model assessment it was possible to underpin the general validity of this AIC approach.
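For reference, Akaike's criterion in its commonly stated form (the dissertation may use a variant such as the small-sample correction AICc) is

\mathrm{AIC} = 2k - 2\ln\hat{L}, \qquad \mathrm{AIC}_c = \mathrm{AIC} + \frac{2k(k+1)}{n - k - 1},

where k is the number of calibrated parameters, \hat{L} is the maximized likelihood, and n is the number of observations; among the candidate conceptual models, the one with the smallest AIC is preferred as the most parsimonious adequate model.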
APA, Harvard, Vancouver, ISO, and other styles
44

Watson, James Alexander. "The application of sense-making theory to advertising : an exploratory case study." Thesis, 2003. http://hdl.handle.net/10019.1/16471.

Full text
Abstract:
Thesis (MPhil)--University of Stellenbosch, 2003.
ENGLISH ABSTRACT: The purpose of the study was to investigate the controlled transfer of meaning that could be facilitated by the application of knowledge of sense-making theory. An object of communication, an advertisement, was consciously constructed on the basis of sense-making principles. An application of knowledge of sense making was then employed to assess the reception of the advertisement by a selected sample of respondents. The decision to select advertising as the choice of medium for the study stemmed from the increasing levels of criticism directed at this form of communication as a result of its frequent failure to deliver intended benefits for its sponsors. The intended benefits relate to the transfer of meaning that would prompt recipients of advertising messages to take an action that would be of value to the advertiser. More specific criticisms have centred on the failure of a growing number of advertising messages to deliver meaningful benefits as a result of their lack of relevance for the intended recipients of these communications. A call for a shift in mind-set away from traditional linear models currently employed to facilitate the design of advertising messages has prompted a growing recognition of the need to employ a more empathetic approach that would facilitate a positive interaction between an advertiser and a target audience. The emergence of what has been termed experiential marketing communications has advocated a view that advertising communications can promote stronger allegiances between organisations and their customers by the inclusion of meaningful sensory associations for recipients. This view, together with the insights revealed by those working in the field of sense making, suggested that the incorporation of sense-making theory could well accommodate the paradigm shift that has been called for in the design of advertising communications. The views and insights outlined above prompted the development of an advertisement that sought to incorporate sense-making theory into its construction. The requirement to allow for the transfer of intended meaning in the advertisement was facilitated by incorporating frames and cues, the design of which sought to assist in the resolution of equivocality and enable respondents to bridge cognitive gaps. The investigation took the form of an exploratory case study. The advertisement, constructed on the basis of sense-making theory, represented the control element of the study. In-depth interviews were conducted amongst grade 12 learners selected on the basis of their matching the target audience for which the advertisement had been designed. The semi-structured nature of the interviews followed a format that allowed for a comparison to be made between the intended input of meaning and the decoding of responses relating to the advertisement. Results indicated that there was a transfer of intended meanings incorporated into the advertisement as indicated in the decoded responses of respondents. These positive findings tend to indicate that a conscious application of sense-making theory to the construction of advertising messages could enhance their effectiveness.
APA, Harvard, Vancouver, ISO, and other styles
45

Sinha, Shameek. "Essays in direct marketing : understanding response behavior and implementation of targeting strategies." Thesis, 2011. http://hdl.handle.net/2152/ETD-UT-2011-05-2799.

Full text
Abstract:
In direct marketing, understanding the response behavior of consumers to marketing initiatives is a prerequisite for marketers before implementing targeting strategies to reach potential as well as existing consumers in the future. Consumer response can be in terms of the incidence or timing of purchases, the category or brand choice of purchases made, as well as the volume or purchase amounts in each category. Direct marketers seek to explore how past consumer response behavior, as well as their targeting actions, affects current response patterns. However, considerable heterogeneity is also prevalent in consumer responses, and the possible sources of this heterogeneity need to be investigated. With knowledge of consumer response and the corresponding heterogeneity, direct marketers can devise targeting strategies to attract potential new consumers as well as retain existing consumers. In the first essay of my dissertation (Chapter 2), I model the response behavior of donors in non-profit charity fund-raising in terms of the timing and volume of their donations. I show that past donations (both their incidence and volume) and solicitation for alternative causes by non-profits matter in donor responses, and that the heterogeneity in donation behavior can be explained in terms of individual- and community-level donor characteristics. I also provide a heuristic approach to target new donors by using a classification scheme for donors in terms of the frequency and amount of donations and then characterizing each donor portfolio with the corresponding donor characteristics. In the second essay (Chapter 3), I propose a more structural approach to the targeting of customers by direct marketers in the context of customized retail couponing. First I model customer purchases in a retail setting where brand choice decisions in a product category depend on pricing, in-store promotions, coupon targeting, and the face values of those coupons. Then, using a utility function specification for the retailer which implements a trade-off between net revenue (revenue minus coupon face value) and information gain, I propose a Bayesian decision-theoretic approach to determine optimal customized coupon face values. The optimization algorithm is sequential, where past as well as future customer responses affect targeted coupon face values, and the direct marketer tries to determine the trade-off through natural experimentation.
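The retailer's trade-off between net revenue and information gain can be sketched as choosing, for each customer, the coupon face value that maximizes a weighted sum of the two terms. The response and information-gain functions below are illustrative stand-ins with made-up numbers, not the estimated model from the essay:

import math

PRICE = 5.0                               # assumed shelf price of the focal brand
CANDIDATE_FACES = [0.0, 0.5, 1.0, 1.5, 2.0]
TRADE_OFF = 0.8                           # assumed weight on information gain

def purchase_prob(face_value):
    """Toy logistic purchase response to the coupon's face value."""
    return 1.0 / (1.0 + math.exp(1.0 - 1.5 * face_value))

def information_gain(face_value):
    """Toy proxy: outcome entropy, largest where the purchase outcome is most uncertain."""
    p = purchase_prob(face_value)
    return -(p * math.log(p) + (1.0 - p) * math.log(1.0 - p))

def expected_utility(face_value):
    net_revenue = purchase_prob(face_value) * (PRICE - face_value)
    return net_revenue + TRADE_OFF * information_gain(face_value)

best = max(CANDIDATE_FACES, key=expected_utility)
print(best, round(expected_utility(best), 3))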
APA, Harvard, Vancouver, ISO, and other styles
46

Nyre-Yu, Megan M. "Determining System Requirements for Human-Machine Integration in Cyber Security Incident Response." Thesis, 2019.

Find full text
Abstract:
In 2019, cyber security is considered one of the most significant threats to the global economy and national security. Top U.S. agencies have acknowledged this fact, and provided direction regarding strategic priorities and future initiatives within the domain. However, there is still a lack of basic understanding of factors that impact complexity, scope, and effectiveness of cyber defense efforts. Computer security incident response is the short-term process of detecting, identifying, mitigating, and resolving a potential security threat to a network. These activities are typically conducted in computer security incident response teams (CSIRTs) comprised of human analysts that are organized into hierarchical tiers and work closely with many different computational tools and programs. Despite the fact that CSIRTs often provide the first line of defense to a network, there is currently a substantial global skills shortage of analysts to fill open positions. Research and development efforts from educational and technological perspectives have been independently ineffective at addressing this shortage due to time lags in meeting demand and associated costs. This dissertation explored how to combine the two approaches by considering how human-centered research can inform development of computational solutions toward augmenting human analyst capabilities. The larger goal of combining these approaches is to effectively complement human expertise with technological capability to alleviate pressures from the skills shortage.

Insights and design recommendations for hybrid systems to advance the current state of security automation were developed through three studies. The first study was an ethnographic field study which focused on collecting and analyzing contextual data from three diverse CSIRTs from different sectors; the scope extended beyond individual incident response tasks to include aspects of organization and information sharing within teams. Analysis revealed larger design implications regarding collaboration and coordination in different team environments, as well as considerations about usefulness and adoption of automation. The second study was a cognitive task analysis with CSIR experts with diverse backgrounds; the interviews focused on expertise requirements for information sharing tasks in CSIRTs. Outputs utilized a dimensional expertise construct to identify and prioritize potential expertise areas for augmentation with automated tools and features. Study 3 included a market analysis of current automation platforms based on the expertise areas identified in Study 2, and used Systems Engineering methodologies to develop concepts and functional architectures for future system (and feature) development.

Findings of all three studies support future directions for hybrid automation development in CSIR by identifying social and organizational factors beyond traditional tool design in security that supports human-systems integration. Additionally, this dissertation delivered functional considerations for automated technology that can augment human capabilities in incident response; these functions support better information sharing between humans and between humans and technological systems. By pursuing human-systems integration in CSIR, research can help alleviate the skills shortage by identifying where automation can dynamically assist with information sharing and expertise development. Future research can expand upon the expertise framework developed for CSIR and extend the application of proposed augmenting functions in other domains.
APA, Harvard, Vancouver, ISO, and other styles
47

Cranley, Lisa Anne. "A Grounded Theory of Intensive Care Nurses’ Experiences and Responses to Uncertainty." Thesis, 2009. http://hdl.handle.net/1807/17749.

Full text
Abstract:
The purpose of this study was to develop a theory to explain how nurses experience and respond to uncertainty arising from patient care-related situations and the influence of uncertainty on their information behaviour. Strauss and Corbin’s (1998) grounded theory approach guided the study. Semi-structured face-to-face interviews were conducted with 14 staff nurses working in an adult medical-surgical intensive care unit (MSICU) at one of two participating hospitals. The grounded theory recognizing and responding to uncertainty was developed from constant comparison analysis of transcribed interview data. The theory explicates recognizing, managing, and learning from uncertainty in patient care-related situations. Recognizing uncertainty involved a complex recursive process of assessing, reflecting, questioning and/or predicting, occurring concomitantly with facing uncertain aspects of patient care situations. Together, antecedent conditions and the process of recognizing uncertainty shaped the experience of uncertainty. Two main responses to uncertainty were physiological/affective responses and strategies used to manage uncertainty. Resolved uncertainty, unresolved uncertainty, and learning from uncertainty experiences were three consequences of managing uncertainty. The ten main categories of antecedent, actions and interactions, and consequences that comprised the theory were interrelated and connected through temporal and causal statements of relationship. Nurse, patient, and contextual factors were linked through patterns of conditions and intervening relational statements. Together, these conceptual relationships formed an explanatory theory of how MSICU nurses experienced and responded to uncertainty in their practice. This theory provides understanding of how nurses think through, act and interact in patient situations for which they are uncertain, and provides insight into the nature of the processes involved in recognizing and responding to uncertainty. Study implications for practice, nursing education, and further theory development and research are discussed.
APA, Harvard, Vancouver, ISO, and other styles
48

Hanekom, Janette. "A conceptual integrated theoretical model for online consumer behaviour." Thesis, 2013. http://hdl.handle.net/10500/11984.

Full text
Abstract:
The study addresses the limited and fragmented approaches of consumer behaviour studies in the existing literature and a lack of comprehensive integrated theoretical models of online consumer behaviour. The aim of the study is to propose a conceptual integrated theoretical model for online consumer behaviour which suggests a deviation from the existing purchasing approaches to consumer behaviour - hence a move towards an understanding of consumer behaviour in terms of two new approaches, namely the web-based communication exposure and internal psychological behavioural processes approaches, is proposed. The study addresses two main research problems, namely that inadequate knowledge and information exist on online consumers’ behavioural processes, especially their internal psychological behavioural processes during their exposure to web-based communication messages and their progression through the complete web-based communication experience; and that there is no conceptual integrated theoretical model for online consumer behaviour in the literature. This study, firstly, allows for systematic theoretical exploration, description, interpretation and integration of existing literature and theory on offline and online consumer behaviour including the following: theoretical perspectives and approaches; determinants; decision making; consumer information processing and response; and theoretical foundations. This systematic theoretical exploration and description of consumer behaviour literature and theory commences with the contextualisation and proposal of a new definition, perspective and theoretical approaches to online consumer behaviour; the discussion and analysis of the theory of the determinants of consumer behaviour; the discussion and analysis of decision-making theory; the proposition of a new online information decision-making perspective and model; the discussion and analysis of consumer information-processing and response theory and models; the discussion and analysis of the theoretical foundations of consumer behaviour; and the identification of theoretical criteria for online consumer behaviour. Secondly, the study develops a conceptual integrated theoretical model for online consumer behaviour, thereby theoretically grounding online consumer behavioural processes in the context of internal psychological behavioural processes and exposure to web-based communication messages. It is hence posited that the study provides a more precise understanding of online consumers’ complicated internal cognitive and psychological behavioural processes in their interactive search for and experience of online web-based communication and information, which can be seen as a major contribution to the field of study.
Communication Science
D. Litt. et Phil. (Communication)
APA, Harvard, Vancouver, ISO, and other styles
49

Bierling, David. "Participants and Information Outcomes in Planning Organizations." Thesis, 2012. http://hdl.handle.net/1969.1/ETD-TAMU-2012-08-11708.

Full text
Abstract:
This research presents empirical evidence and interpretation about the effects of planning participants and contextual factors on information selection in public organizations. The study addresses important research questions and gaps in the literature about the applicability of planning theory to practice, about the effects of planning participants and participant diversity on information selection, and about community and organizational factors that influence information selection in the planning process. The research informs emergency planning, practice, and guidance, as well as planning theory and practice in general. The research sample consists of survey data from 183 local emergency planning committees (LEPCs) about their conduct of hazardous materials commodity flow studies (HMCFS), along with data from other secondary sources. HMCFS projects collect information about hazardous materials (HazMat) transport that can be used in a wide range of local emergency planning and community planning applications. This study takes the perspective that socio-cultural frameworks, such as organizational norms and values, influence the information behaviors of planning participants. Controlling for organizational and community factors, the participation of community planners in HMCFS projects has a significant positive effect on the selection of communicative information sources. Participation of HazMat responders in HMCFS projects does not have a significant negative effect on the selection of communicative information sources. The diversity of HMCFS participants has a significant positive effect on information selection diversity. Other organizational and community factors, such as vicarious experience, 'know-how' and direct experience, financial resources, and knowledge/perception of hazards and risks, are also important influences on information selection behavior. Results of this study are applicable to planning entities that are likely to use planning information: proactive LEPCs, planning agencies, and planning consortiums. The results are also applicable to community planners in local planning agencies, emergency responders in local emergency response agencies, and public planning organizations in general. In addition to providing evidence about the applicability of communicative rationality in planning practice, this research suggests that institutional/contextual, bounded, instrumental, and political rationalities may also influence the conduct of planning projects. Four corresponding prescriptive recommendations are made for planning theory and practice.
APA, Harvard, Vancouver, ISO, and other styles
50

Berber, Fatih. "High-Performance Persistent Identification for Research Data Management." Doctoral thesis, 2018. http://hdl.handle.net/11858/00-1735-0000-002E-E4AA-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles