Journal articles on the topic 'Measurement error model, Rasch model, Latent variable'

Consult the top 50 journal articles for your research on the topic 'Measurement error model, Rasch model, Latent variable.'


1. Maier, Kimberly S. "A Rasch Hierarchical Measurement Model." Journal of Educational and Behavioral Statistics 26, no. 3 (September 2001): 307–30. http://dx.doi.org/10.3102/10769986026003307.

Abstract:
In this article, a hierarchical measurement model is developed that enables researchers to measure a latent trait variable and model the error variance corresponding to multiple levels. The Rasch hierarchical measurement model (HMM) results when a Rasch IRT model and a one-way ANOVA with random effects are combined (Bryk & Raudenbush, 1992; Goldstein, 1987; Rasch, 1960). This model is appropriate for modeling dichotomous response strings nested within a contextual level. Examples of this type of structure include responses from students nested within schools and multiple response strings nested within people. Model parameter estimates of the Rasch HMM were obtained using the Bayesian data analysis methods of Gibbs sampling and the Metropolis-Hastings algorithm (Gelfand, Hills, Racine-Poon, & Smith, 1990; Hastings, 1970; Metropolis, Rosenbluth, Rosenbluth, Teller, & Teller, 1953). The model is illustrated with two simulated data sets and data from the Sloan Study of Youth and Social Development. The results are discussed and parameter estimates for the simulated data sets are compared to parameter estimates obtained using a two-step estimation approach.
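The Rasch IRT component referenced above models the probability of a correct dichotomous response as a logistic function of the gap between person ability and item difficulty. A minimal sketch of that response function (illustrative only, not the authors' estimation code):

```python
import math

def rasch_prob(theta: float, b: float) -> float:
    """P(X=1 | theta, b) under the dichotomous Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# A person whose ability equals the item difficulty answers correctly
# with probability 0.5; higher ability raises that probability.
p_equal = rasch_prob(0.0, 0.0)
p_able = rasch_prob(2.0, 0.0)
```

In the hierarchical extension, the ability parameter theta would itself be given a random-effects structure across the contextual level (schools, occasions).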
2. Baghaei, Purya, and Mona Tabatabaee Yazdi. "The Logic of Latent Variable Analysis as Validity Evidence in Psychological Measurement." Open Psychology Journal 9, no. 1 (December 30, 2016): 168–75. http://dx.doi.org/10.2174/1874350101609010168.

Abstract:
Background: Validity is the most important characteristic of tests, and social science researchers generally agree that the trustworthiness of any substantive research depends on the validity of the instruments employed to gather the data. Objective: It is common practice among psychologists and educationalists to provide validity evidence for their instruments by fitting a latent trait model such as exploratory or confirmatory factor analysis or the Rasch model. However, there has been little discussion of the rationale behind model fitting and its use as validity evidence. The purpose of this paper is to answer the question: why does the fit of data to a latent trait model count as validity evidence for a test? Method: To answer this question, latent trait theory and the validity concept as delineated by Borsboom and his colleagues in a number of publications between 2003 and 2013 are reviewed. Results: Validating psychological tests using latent trait models rests on the assumption of conditional independence. If this assumption holds, there is a 'common cause' underlying the covariation among the test items, which, one hopes, is the intended construct. Conclusion: Providing validity evidence by fitting latent trait models is logistically easy and straightforward. However, it is of paramount importance that researchers appreciate what they do and imply about their measures when they demonstrate that their data fit a model. This helps them avoid unforeseen pitfalls and draw logical conclusions.
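The 'common cause' argument can be made concrete with a one-factor model: under conditional (local) independence, all covariation among items is carried by their loadings on the single latent variable. A small numerical sketch with made-up loadings and error variances:

```python
import numpy as np

# Hypothetical loadings of four items on one latent trait, plus
# uncorrelated error variances (the local independence assumption).
lam = np.array([0.8, 0.7, 0.6, 0.5])
theta_err = np.diag([0.36, 0.51, 0.64, 0.75])

# Model-implied covariance: the 'common cause' produces ALL of the
# item covariation; each off-diagonal entry is lam_i * lam_j.
sigma = np.outer(lam, lam) + theta_err
```

If observed item covariances match this structure, the fit is taken as evidence that one latent variable underlies the responses, which is exactly the logic the paper examines.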
3. Grilli, Leonardo, and Roberta Varriale. "Specifying Measurement Error Correlations in Latent Growth Curve Models With Multiple Indicators." Methodology 10, no. 4 (January 1, 2014): 117–25. http://dx.doi.org/10.1027/1614-2241/a000082.

Abstract:
In this tutorial paper we focus on a multi-item Latent Growth Curve model for modeling change across time of a latent variable measured by multiple items at different occasions: in the structural part the latent variable grows according to a random slope linear model, whereas in the measurement part the latent variable is measured at each occasion by a conventional factor model with time-invariant loadings. The specification of a multi-item Latent Growth Curve model involves several interrelated choices: indeed, the features of the structural part, such as the functional form of the growth, are linked to the features of the measurement part, such as the correlation structure across time of measurement errors. In the paper, we give guidelines on the specification of the variance-covariance structure of measurement errors. In particular, we investigate the empirical implications of different specification strategies through an analysis of student ratings collected in four academic years about courses of the University of Florence. In the application we compare three correlation structures (independence, lag-1, and compound symmetry), illustrating the differences in terms of substantive assumptions, model fit, and interpretability of the results.
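The three error structures compared in the paper (independence, lag-1, compound symmetry) can be written down directly as correlation matrices over the measurement occasions. The values below are illustrative, not estimates from the Florence student-ratings data:

```python
import numpy as np

T = 4  # measurement occasions (four academic years in the application)

# Independence: measurement errors are uncorrelated across time.
indep = np.eye(T)

# Lag-1: only errors at adjacent occasions correlate (illustrative rho).
rho1 = 0.3
lag1 = np.eye(T) + rho1 * (np.eye(T, k=1) + np.eye(T, k=-1))

# Compound symmetry: one common correlation at every lag.
rho = 0.3
cs = np.full((T, T), rho) + (1.0 - rho) * np.eye(T)
```

The substantive difference is visible in the matrices: lag-1 lets error dependence die out with distance in time, while compound symmetry holds it constant.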
4. Bourke, Mary, Linda Wallace, Marlene Greskamp, and Lucy Tormoehlen. "Improving Objective Measurement in Nursing Research: Rasch Model Analysis and Diagnostics of the Nursing Students' Clinical Stress Scale." Journal of Nursing Measurement 23, no. 1 (2015): 1E–15E. http://dx.doi.org/10.1891/1061-3749.23.1.1.

Abstract:
Background and Purpose: The purpose of this study is to use Rasch model diagnostics and analysis to understand the functioning of the survey items (questions) of the Nursing Students' Clinical Stress Scale, a rating scale instrument developed by Whang (2002). Methods: The rating scale instrument, originally written in Korean, was translated into English and administered to a convenience sample of all junior (46) and senior (64) students at a large Midwest university. Results and Conclusions: Rasch model analysis provided empirical evidence that the survey items measured the latent variable, stress. Diagnostic results indicated the need for improved category labeling. Clinical Relevance: It is imperative that nursing educators evaluate and facilitate inter- and intraprofessional relationships between students and staff/faculty as well as understand the student experience.
5. Huang, Xianzheng, and Joshua M. Tebbs. "On Latent-Variable Model Misspecification in Structural Measurement Error Models for Binary Response." Biometrics 65, no. 3 (September 29, 2008): 710–18. http://dx.doi.org/10.1111/j.1541-0420.2008.01128.x.

6. Leus, Olga, and Anatoly Maslak. "Measurement and Analysis of Teachers' Professional Performance." SOCIETY. INTEGRATION. EDUCATION. Proceedings of the International Scientific Conference 2 (May 25, 2018): 308–19. http://dx.doi.org/10.17770/sie2018vol1.3097.

Abstract:
The relevance of the measurement and analysis of teachers’ professional performance is based on the fact that teachers largely determine the quality of education at schools. The measurement of the latent variable "teacher’s professional performance" is done within the framework of the theory of latent variables based on the Rasch model. It was shown that the set of indicators has a high differentiating ability. The results of the measurement of this latent variable are used to compare the quality of professional activities of teachers of mathematics, history, Russian language, and literature as well as primary school teachers. No statistically significant differences were found between the professional performance of teachers of mathematics, history, and primary school teachers. The quality of professional activity of teachers of Russian language and literature is lower. The results of the measurement of teachers’ professional performance were used for comparison of schools. As one would expect, the highest quality of professional performance of teachers is in high schools and lowest in the primary schools; secondary schools occupy an intermediate position. Teachers’ professional performance is defined operationally, using a set of indicators. This set of indicators can be adjusted to clarify the content of the latent variable "teacher’s professional performance".
7. Doyle, Patrick J., William D. Hula, Malcolm R. McNeil, Joseph M. Mikolic, and Christine Matthews. "An Application of Rasch Analysis to the Measurement of Communicative Functioning." Journal of Speech, Language, and Hearing Research 48, no. 6 (December 2005): 1412–28. http://dx.doi.org/10.1044/1092-4388(2005/098).

Abstract:
Purpose: The purposes of this investigation were to examine the construct dimensionality and range of ability effectively measured by 28 assessment items obtained from 3 different patient-reported scales of communicative functioning, and to provide a demonstration of how the Rasch approach to measurement may contribute to the definition of latent constructs and the development of instruments to measure them. Method: Item responses obtained from 421 stroke survivors with and without communication disorders were examined using the Rasch partial credit model. The dimensionality of the item pool was evaluated by (a) examining correlations of Rasch person ability scores obtained separately from each of the 3 scales, (b) iteratively excluding items exceeding mean square model fit criteria, and (c) using principal-components analysis of Rasch model residuals. The range of ability effectively measured by the item pool was examined by comparing item difficulty and category threshold calibrations to the distribution of person ability scores and by plotting the modeled standard error of person ability estimates as a function of person ability level. Results: The results indicate that most assessment items fit a unidimensional measurement model, with the notable exception of items relating to the use of written communication. The results also suggest that the range of ability that could be reliably measured by the current item pool was restricted relative to the range of ability observed in the patient sample. Conclusions: It is concluded that (a) a mature understanding of communicative functioning as a measurement construct will require further research, (b) patients with stroke-related communication disorders will be better served by the development of instruments measuring a wider range of communicative functioning ability, and (c) the theoretical and methodological tools provided by the Rasch family of measurement models may be productively applied to these efforts.
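The Rasch partial credit model used in this study assigns each ordered response category a probability built from cumulative step logits. A minimal sketch with made-up step parameters (not the calibrations from the study):

```python
import math

def pcm_probs(theta, thresholds):
    """Category probabilities under the Rasch partial credit model.

    thresholds: step parameters tau_1..tau_m for categories 0..m.
    P(X=k) is proportional to exp(sum_{j<=k}(theta - tau_j)),
    with the empty sum equal to 0 for category 0.
    """
    logits = [0.0]
    for tau in thresholds:
        logits.append(logits[-1] + (theta - tau))
    exps = [math.exp(l) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Three ordered categories; a person midway between the two step
# parameters is most likely to land in the middle category.
probs = pcm_probs(0.0, [-1.0, 1.0])
```

Category threshold calibrations like those compared to the person distribution in the paper are exactly these tau parameters, estimated from data.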
8. Bazan, Bartolo. "A Rasch-Validation Study of a Novel Speaking Span Task." Shiken 24, no. 1 (June 1, 2020): 1–21. http://dx.doi.org/10.37546/jaltsig.teval24.1-1.

Abstract:
Working memory refers to the capacity to temporarily retain a limited amount of information that is available for manipulation by higher-order cognitive processes. Several assessment instruments, such as the speaking span task, have been associated with the measurement of working memory span. However, despite the widespread use of the speaking span task, no study, to the best of my knowledge, has attempted to validate it using Rasch Measurement Theory. Rasch analysis can potentially shed light on the dimensionality of a complex construct such as working memory as well as examine whether a collection of items is working together to construct a coherent and reliable measure of a targeted population. This pilot study reports a Rasch analysis of a novel speaking span task, which was administered individually to 31 Japanese junior high school students and scored using a newly developed scoring system. Two separate analyses were conducted on the task: an analysis of the individual items using the Rasch dichotomous model and an analysis of the super items (sets) using the partial credit model. The results indicate that the task measures a coherent unidimensional latent variable and is thus a useful tool for measuring the construct. Moreover, Rasch analysis was shown to be a suitable method for evaluating working memory tests.
9. Smith, Bradley C., and William Spaniel. "Introducing ν-CLEAR: a latent variable approach to measuring nuclear proficiency." Conflict Management and Peace Science 37, no. 2 (January 10, 2018): 232–56. http://dx.doi.org/10.1177/0738894217741619.

Abstract:
The causes and consequences of nuclear proficiency are central to important questions in international relations. At present, researchers tend to use observable characteristics as a proxy. However, aggregation is a problem: existing measures implicitly assume that each indicator is equally informative and that measurement error is not a concern. We overcome these issues by applying a statistical measurement model to directly estimate nuclear proficiency from observed indicators. The resulting estimates form a new dataset on nuclear proficiency which we call ν-CLEAR. We demonstrate that these estimates are consistent with known patterns of nuclear proficiency while also uncovering more nuance than existing measures. Additionally, we demonstrate how scholars can use these estimates to account for measurement error by revisiting existing results with our measure.
10. Flaherty, Brian P., and Yusuke Shono. "Many Classes, Restricted Measurement (MACREM) Models for Improved Measurement of Activities of Daily Living." Journal of Survey Statistics and Methodology 9, no. 2 (March 1, 2021): 231–56. http://dx.doi.org/10.1093/jssam/smaa047.

Abstract:
Scientists use latent class (LC) models to identify subgroups in heterogeneous data. LC models reduce an item set to a latent variable and estimate measurement error. Researchers typically use unrestricted LC models, which have many measurement estimates, yet scientific interest primarily concerns the classes. We present highly restricted LC measurement models as an alternate method of operationalization. MACREM (Many Classes, Restricted Measurement) models have a larger number of LCs than a typical unrestricted model, but many fewer measurement estimates. Goals of this approach include producing more interpretable classes and better measurement error estimates. Parameter constraints accomplish this structuring. We present unrestricted and MACREM model results using data on activities of daily living (ADLs) from a national survey (N = 3,485). We compare a four-class unrestricted model with a fourteen-class MACREM model. The four-class unrestricted model approximates a dimension of functional limitation. The fourteen-class model includes unordered classes at lower levels of limitation, but ordered classes at higher levels of limitation. In contrast to the four-class model, all measurement error rates are reasonably small in the fourteen-class model. The four-class model fits the data better, but the fourteen-class model is more parsimonious (forty-three versus twenty-five parameters). Three covariates reveal specific associations with MACREM classes. In multinomial logistic regression models with a no limitation class as the reference class, past 12-month diabetes only distinguishes low limitation classes that include cutting one's own toenails as a limitation. It does not distinguish low limitation classes characterized by other common limitations. Past 12-month asthma and current disability status perform similarly, but for heavy housework and walking limitation classes, respectively. These limitation-specific covariate associations are not apparent in the unrestricted model analyses. Identifying such connections could provide useful information to advance theory and intervention efforts.
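The latent class machinery described above can be sketched in a few lines: each class has a weight and class-conditional item probabilities, and under local independence the marginal probability of any response pattern is a mixture over classes. The numbers below are invented for illustration, not the ADL estimates:

```python
import numpy as np

# Illustrative 2-class model for 3 binary ADL items:
# class weights and class-conditional endorsement probabilities.
weights = np.array([0.7, 0.3])
item_probs = np.array([[0.1, 0.2, 0.15],   # "no limitation" class
                       [0.8, 0.9, 0.70]])  # "limited" class

def pattern_prob(pattern, weights, item_probs):
    """Marginal probability of a 0/1 response pattern under local independence."""
    cond = np.prod(np.where(pattern, item_probs, 1 - item_probs), axis=1)
    return float(weights @ cond)

p_all_limited = pattern_prob(np.array([1, 1, 1]), weights, item_probs)
```

A MACREM model constrains the item_probs table (shared, fixed, or structured error rates) so that adding classes does not multiply the number of free measurement parameters.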
11. Ramli, Mohd Zakwan, Marlinda Abdul Malek, Mohamad Zaki Muda, Zulkhairi Abd Talib, Nor Syahirah Azman, Nur Farah Syazana Mohamad Fu’ad, Mohd Hafiz Zawawi, and Herda Yati Katman. "A Review of Structural Equation Model for Construction Delay Study." International Journal of Engineering & Technology 7, no. 4.35 (November 30, 2018): 299. http://dx.doi.org/10.14419/ijet.v7i4.35.22750.

Abstract:
Structural Equation Modelling (SEM) has been widely used in the social sciences but far less in construction engineering and management, especially in the study of construction delay. SEM is a second-generation multivariate analysis technique with advanced features compared with first-generation tools. First-generation techniques suffer from restrictive assumptions: measurement error is neglected, only observed variables are allowed, and only simple models can be handled, among other limitations. In construction delay studies, comprehensive and complex analyses involving hidden variables must be considered to obtain precise results. Therefore, the main objective of this paper is to review the importance of applying SEM to construction delay studies. Various papers from construction delay and construction management research were reviewed to assess the suitability of SEM for construction delay study. The review reveals that SEM can include latent variables in the analysis model, treat measurement error as an integral part of the model, and simultaneously analyse theory and measurement in a structural model, none of which is possible with first-generation techniques. This review shows that SEM can be an appropriate analysis tool for construction delay study.
12. Isa, Mona, Mazlan Abu Bakar, Mohamad Sufian Hasim, Mohd Khairul Anuar, Ibrahim Sipan, and Mohd Zali Mohd Nor. "Data quality control for survey instrument of office investors in rationalising green office building investment in Kuala Lumpur by the application of Rasch analysis." Facilities 35, no. 11/12 (August 8, 2017): 638–57. http://dx.doi.org/10.1108/f-06-2016-0067.

Abstract:
Purpose: The purpose of this paper is to verify the quality of a survey instrument of office investors in rationalising green office building investment in the city of Kuala Lumpur using the Rasch measurement method. It investigates whether the quality of the data obtained from the survey is statistically acceptable and aims to ensure that the scales used are based on the same measurement model and fit the Rasch model. Design/methodology/approach: To achieve this objective, a questionnaire survey consisting of six sections was developed. Some 394 questionnaires were distributed and 106 responses were received from office investors who own and lease office buildings in the Federal Territory of Kuala Lumpur. The survey data were entered into Rasch software, and the analysis considers three main parameters: point-measure correlation 0.32 < x < 0.8; outfit mean square 0.5 < y < 1.5; and outfit z-standard −2 < z < 2. It also provides a separable, independent measurement instrument for the parameters of the research object. Findings: The analysis performed using the Rasch model confirmed that all items in the questionnaire construct were statistically reliable and valid. The Rasch analysis comprises the summary statistics; item unidimensionality, providing interval measures of item endorsements and fit statistics on the persons involved; and items flagged for further investigation. Unidimensionality, as a prerequisite to Rasch analysis, indicates whether the items lie on the same latent variable intended by the instrument. The results were also supported by a Cronbach's alpha of 0.91, showing excellent reliability for person data. The summary statistics, unidimensionality, and item fit analysis were excellent. Originality/value: This paper introduces the Rasch analysis as a new dimension and technique for examining the reliability and validity of instrument data.
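The outfit mean square criterion cited above (0.5 < y < 1.5) is the average squared standardized residual between observed responses and Rasch-modeled probabilities. A sketch of the computation (illustrative, not the authors' software):

```python
def outfit_mnsq(responses, probs):
    """Outfit mean square: mean squared standardized residual.

    responses: observed 0/1 scores; probs: Rasch-modeled P(X=1).
    Values near 1.0 indicate fit; this paper's criterion is 0.5-1.5.
    """
    z2 = [(x - p) ** 2 / (p * (1.0 - p)) for x, p in zip(responses, probs)]
    return sum(z2) / len(z2)

# Responses exactly as 'expected' at p = 0.5 give an outfit of 1.0;
# a highly surprising response (success when p = 0.1) inflates it.
fit_ok = outfit_mnsq([1, 0, 1, 0], [0.5, 0.5, 0.5, 0.5])
fit_bad = outfit_mnsq([1], [0.1])
```

Because each residual is divided by its variance, outfit is sensitive to unexpected responses on items far from a person's level, which is why it is paired with the point-measure correlation and z-standard checks.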
13. Cassese, Alberto, Michele Guindani, and Marina Vannucci. "A Bayesian Integrative Model for Genetical Genomics with Spatially Informed Variable Selection." Cancer Informatics 13s2 (January 2014): CIN.S13784. http://dx.doi.org/10.4137/cin.s13784.

Abstract:
We consider a Bayesian hierarchical model for the integration of gene expression levels with comparative genomic hybridization (CGH) array measurements collected on the same subjects. The approach defines a measurement error model that relates the gene expression levels to latent copy number states. In turn, the latent states are related to the observed surrogate CGH measurements via a hidden Markov model. The model further incorporates variable selection with a spatial prior based on a probit link that exploits dependencies across adjacent DNA segments. Posterior inference is carried out via Markov chain Monte Carlo stochastic search techniques. We study the performance of the model in simulations and show better results than those achieved with recently proposed alternative priors. We also show an application to data from a genomic study on lung squamous cell carcinoma, where we identify potential candidates of associations between copy number variants and the transcriptional activity of target genes. Gene ontology (GO) analyses of our findings reveal enrichments in genes that code for proteins involved in cancer. Our model also identifies a number of potential candidate biomarkers for further experimental validation.
14. Marquardt, Kyle L. "How and how much does expert error matter? Implications for quantitative peace research." Journal of Peace Research 57, no. 6 (November 2020): 692–700. http://dx.doi.org/10.1177/0022343320959121.

Abstract:
Expert-coded datasets provide scholars with otherwise unavailable data on important concepts. However, expert coders vary in their reliability and scale perception, potentially resulting in substantial measurement error. These concerns are acute in expert coding of key concepts for peace research. Here I examine (1) the implications of these concerns for applied statistical analyses, and (2) the degree to which different modeling strategies ameliorate them. Specifically, I simulate expert-coded country-year data with different forms of error and then regress civil conflict onset on these data, using five different modeling strategies. Three of these strategies involve regressing conflict onset on point estimate aggregations of the simulated data: the mean and median over expert codings, and the posterior median from a latent variable model. The remaining two strategies incorporate measurement error from the latent variable model into the regression process by using multiple imputation and a structural equation model. Analyses indicate that expert-coded data are relatively robust: across simulations, almost all modeling strategies yield regression results roughly in line with the assumed true relationship between the expert-coded concept and outcome. However, the introduction of measurement error to expert-coded data generally results in attenuation of the estimated relationship between the concept and conflict onset. The level of attenuation varies across modeling strategies: a structural equation model is the most consistently robust estimation technique, while the median over expert codings and multiple imputation are the least robust.
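The attenuation pattern reported above is the classical errors-in-variables result: random measurement error in a regressor shrinks the estimated slope toward zero by the reliability ratio. A quick illustrative computation (generic textbook formula, not the paper's simulation code):

```python
def attenuation_factor(var_true: float, var_error: float) -> float:
    """Reliability ratio: the factor by which classical measurement
    error in a regressor shrinks the estimated slope toward zero."""
    return var_true / (var_true + var_error)

# Illustrative values: with error variance a quarter of the true
# variance, a true slope of 0.5 is expected to attenuate to 0.4.
true_slope = 0.5
observed_slope = true_slope * attenuation_factor(1.0, 0.25)
```

Strategies such as multiple imputation or a structural equation model counter this by carrying the latent-variable model's measurement uncertainty into the regression rather than conditioning on an error-laden point estimate.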
15. Huang, Xianzheng. "An improved test of latent-variable model misspecification in structural measurement error models for group testing data." Statistics in Medicine 28, no. 26 (November 20, 2009): 3316–27. http://dx.doi.org/10.1002/sim.3698.

16. Seo, Hyojeong, Todd D. Little, Karrie A. Shogren, and Kyle M. Lang. "On the benefits of latent variable modeling for norming scales." International Journal of Behavioral Development 40, no. 4 (June 30, 2015): 373–84. http://dx.doi.org/10.1177/0165025415591230.

Abstract:
Structural equation modeling (SEM) is a powerful and flexible analytic tool to model latent constructs and their relations with observed variables and other constructs. SEM applications offer advantages over classical models in dealing with statistical assumptions and in adjusting for measurement error. So far, however, SEM has not been fully used to develop norms of assessments in educational or psychological fields. In this article, we highlighted the norming process of the Supports Intensity Scale – Children’s Version (SIS-C) within the SEM framework, using a recently developed method of identification (i.e., effects-coding method) that estimates latent means and variances in the metric of the observed indicators. The SIS-C norming process involved (a) creating parcels, (b) estimating latent means and standard deviations, (c) computing T scores using obtained latent means and standard deviations, and (d) reporting percentile ranks.
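Step (c) of the norming process, converting a score to the T metric using the latent mean and standard deviation, is a simple rescaling to mean 50 and SD 10. A sketch with made-up latent parameters (not the SIS-C norms):

```python
def t_score(raw: float, latent_mean: float, latent_sd: float) -> float:
    """T score: standardize against the latent mean/SD, rescale to
    mean 50 and SD 10 (the conventional T metric)."""
    return 50.0 + 10.0 * (raw - latent_mean) / latent_sd

# A score one latent SD above the latent mean maps to T = 60.
t = t_score(1.5, 0.5, 1.0)
```

The SEM advantage highlighted in the abstract is that latent_mean and latent_sd come from the measurement-error-adjusted model (via effects-coding identification) rather than from raw observed totals.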
17. Fariss, Christopher J., Therese Anders, Jonathan N. Markowitz, and Miriam Barnum. "New Estimates of Over 500 Years of Historic GDP and Population Data." Journal of Conflict Resolution 66, no. 3 (February 17, 2022): 553–91. http://dx.doi.org/10.1177/00220027211054432.

Abstract:
Gross domestic product (GDP), GDP per capita, and population are central to the study of politics and economics broadly, and conflict processes in particular. Despite the prominence of these variables in empirical research, existing data lack historical coverage and are assumed to be measured without error. We develop a latent variable modeling framework that expands data coverage (1500 AD–2018 AD) and, by making use of multiple indicators for each variable, provides a principled framework to estimate uncertainty for values for all country-year variables relative to one another. Expanded temporal coverage of estimates provides new insights about the relationship between development and democracy, conflict, repression, and health. We also demonstrate how to incorporate uncertainty in observational models. Results show that the relationship between repression and development is weaker than models that do not incorporate uncertainty suggest. Future extensions of the latent variable model can address other forms of systematic measurement error with new data, new measurement theory, or both.
18. Maslak, Anatoliy, O. Leus, and V. Titarenko. "Assessing General Educational Institutions' Performance Against the Linear Quality Measurement Scale." Standards and Monitoring in Education 3, no. 3 (June 17, 2015): 9–16. http://dx.doi.org/10.12737/11836.

Abstract:
The paper describes the procedure for measuring the performance of general educational institutions using a linear quality scale. Such assessments are essential for monitoring and comparing the performance of general education schools. A questionnaire was developed to carry out operational measurement of schools' performance as a set of indicators. The questionnaire is considered a tool for measuring the quality of schools' performance against the linear scale and is helpful for monitoring and comparative analysis of schools. The analyses of the questionnaire quality and of the schools' performance assessment are carried out within the framework of latent variable theory, based on the Rasch model. The indicators that best differentiate between schools with poor and high performance levels are discussed, as well as the indicators considered most and least adequate for the assessment model. Characteristic curves of these indicators are provided. The positions of the assessed schools and of the indicators are shown on the same linear scale of 'school performance quality'. The results obtained are used to compare the quality of various schools' performance. As shown, the difference between lyceums and general education schools on the latent variable 'performance quality' is wider than the difference between secondary and general education schools.
19. Hsieh, Chueh-An, and Alexander von Eye. "The best of both worlds: a joint modeling approach for the assessment of change across repeated measurements." International Journal of Psychological Research 3, no. 1 (June 30, 2010): 176–207. http://dx.doi.org/10.21500/20112084.862.

Abstract:
The usefulness of Bayesian methods in estimating complex statistical models is undeniable. From a Bayesian standpoint, this paper aims to demonstrate the capacity of Bayesian methods and propose a comprehensive model combining both a measurement model (e.g., an item response model, IRM) and a structural model (e.g., a latent variable model, LVM). That is, through the incorporation of the probit link and Bayesian estimation, the item response model can be introduced naturally into a latent variable model. The utility of this IRM-LVM comprehensive framework is investigated with a real data example and promising results are obtained, in which the data drawn from part of the British Social Attitudes Panel Survey 1983-1986 reveal the attitude toward abortion of a representative sample of adults aged 18 or older living in Great Britain. The application of IRMs to responses gathered from repeated assessments allows us to take the characteristics of both item responses and measurement error into consideration in the analysis of individual developmental trajectories, and helps resolve some difficult modeling issues commonly encountered in developmental research, such as small sample sizes, multiple discretely scaled items, many repeated assessments, and attrition over time.
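The probit link mentioned above maps the latent scale to a response probability through the standard normal CDF, which is what lets the item response model slot into the latent variable model. A minimal sketch of that link (illustrative only):

```python
import math

def probit_irt_prob(theta: float, b: float) -> float:
    """P(X=1) under a probit-link item response model: Phi(theta - b),
    where Phi is the standard normal CDF, written via math.erf."""
    return 0.5 * (1.0 + math.erf((theta - b) / math.sqrt(2.0)))

# At theta == b the response probability is exactly 0.5, as in the
# logistic Rasch model; the probit curve is just slightly steeper.
p_mid = probit_irt_prob(0.0, 0.0)
```

In the joint IRM-LVM framework, theta is not a fixed number but a person-and-occasion latent variable whose trajectory the structural model describes.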
20. Cao, Jing, S. Lynne Stokes, and Song Zhang. "A Bayesian Approach to Ranking and Rater Evaluation." Journal of Educational and Behavioral Statistics 35, no. 2 (April 2010): 194–214. http://dx.doi.org/10.3102/1076998609353116.

Abstract:
We develop a Bayesian hierarchical model for the analysis of ordinal data from multirater ranking studies. The model for a rater’s score includes four latent factors: one is a latent item trait determining the true order of items and the other three are the rater’s performance characteristics, including bias, discrimination, and measurement error in the ratings. The proposed approach aims at three goals. First, three Bayesian estimators are introduced to estimate the ranks of items. They all show a substantial improvement over the widely used score sums by using the information on the variable skill of the raters. Second, rater performance can be compared based on rater bias, discrimination, and measurement error. Third, a simulation-based decision-theoretic approach is described to determine the number of raters to employ. A simulation study and an analysis based on a grant review data set are presented.
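The four latent factors named in the abstract can be sketched as a simple generative model for a single rater's score: item trait plus the rater's bias, discrimination, and measurement error. This is an illustrative parameterization, not the authors' exact specification:

```python
import random

def rater_score(item_trait, bias, discrimination, error_sd, rng):
    """One rater's observed score: bias + discrimination * trait + noise."""
    return bias + discrimination * item_trait + rng.gauss(0.0, error_sd)

rng = random.Random(42)
# Same true item trait, two hypothetical raters: one harsh and noisy,
# one well-calibrated with little measurement error.
harsh = rater_score(1.0, -0.5, 0.8, 0.5, rng)
calibrated = rater_score(1.0, 0.0, 1.0, 0.1, rng)
```

Because skilled raters (high discrimination, low error) carry more information about the true item order, weighting raters by these estimated characteristics is what lets the Bayesian estimators beat plain score sums.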
APA, Harvard, Vancouver, ISO, and other styles
21

Scalise, Kathleen. "Hybrid Measurement Models for Technology-Enhanced Assessments Through mIRT-bayes." International Journal of Statistics and Probability 6, no. 3 (May 14, 2017): 168. http://dx.doi.org/10.5539/ijsp.v6n3p168.

Full text
Abstract:
Technology-enhanced assessments (TEAs) are rapidly emerging in educational measurement. In contexts such as simulation and gaming, a common challenge is handling complex streams of information, for which new statistical innovations are needed that can provide high quality proficiency estimates for the psychometrics of complex TEAs. Often in educational assessments with formal measurement models, latent variable models such as item response theory (IRT) are used to generate proficiency estimates from evidence elicited. Such robust techniques have become a foundation of educational assessment, when models fit. Another less common approach to compile evidence is through Bayesian networks, which represent a set of random variables and their conditional dependencies via a directed acyclic graph. Network approaches can be much more flexibly designed for complex assessment tasks and are often preferred by task developers, for technology-enhanced settings. However, the Bayesian network-based statistical models often are difficult to validate and to gauge the stability and accuracy, since the models make assumptions regarding conditional dependencies that are difficult to test. Here a new measurement model family, mIRT-bayes, is proposed to gain advantages of both latent variable models and network techniques combined through hybridization. Specifically, the technique described here embeds small Bayesian networks within an overarching multidimensional IRT model (mIRT), preserving the flexibility for task design while retaining the robust statistical properties of latent variable methods. Applied to simulation-based data from Harvard's Virtual Performance Assessments (VPA), the results of the new model show acceptable fit for the overarching mIRT model, along with reduction of the standard error of measurement through the embedded Bayesian networks, compared to use of mIRT alone. Overall for respondents, a finer grain-size of inference is made possible without additional testing time or scoring resources, showing promise for this family of new hybrid models.
APA, Harvard, Vancouver, ISO, and other styles
22

AGGEN, STEVEN H., MICHAEL C. NEALE, and KENNETH S. KENDLER. "DSM criteria for major depression: evaluating symptom patterns using latent-trait item response models." Psychological Medicine 35, no. 4 (December 2, 2004): 475–87. http://dx.doi.org/10.1017/s0033291704003563.

Full text
Abstract:
Background. Expert committees of clinicians have chosen diagnostic criteria for psychiatric disorders with little guidance from measurement theory or modern psychometric methods. The DSM-III-R criteria for major depression (MD) are examined to determine the degree to which latent trait item response models can extract additional useful information.Method. The dimensionality and measurement properties of the 9 DSM-III-R criteria plus duration are evaluated using dichotomous factor analysis and the Rasch and 2 parameter logistic item response models. Quantitative liability scales are compared with a binary DSM-III-R diagnostic algorithm variable to determine the ramifications of using each approach.Results. Factor and item response model results indicated the 10 MD criteria defined a reasonably coherent unidimensional scale of liability. However, person risk measurement was not optimal. Criteria thresholds were unevenly spaced leaving scale regions poorly measured. Criteria varied in discriminating levels of risk. Compared to a binary MD diagnosis, item response model (IRM) liability scales performed far better in (i) elucidating the relationship between MD symptoms and liability, (ii) predicting the personality trait of neuroticism and future depressive episodes and (iii) more precisely estimating heritability parameters.Conclusions. Criteria for MD largely defined a single dimension of disease liability although the quality of person risk measurement was less clear. The quantitative item response scales were statistically superior in predicting relevant outcomes and estimating twin model parameters. Item response models that treat symptoms as ordered indicators of risk rather than as counts towards a diagnostic threshold more fully exploit the information available in symptom endorsement data patterns.
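The two-parameter logistic (2PL) model named in this abstract gives the probability of endorsing a criterion as a function of liability; the Rasch model is the special case with all discriminations fixed at the same value. A generic sketch:

```python
import math

def p_endorse(theta, a, b):
    """2PL item response model: P(endorse) = 1 / (1 + exp(-a * (theta - b))).
    theta: person liability; b: criterion threshold; a: discrimination.
    Setting a = 1 for every criterion yields the Rasch model."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))
```

The unevenly spaced thresholds the authors report correspond to the `b` values: where adjacent thresholds leave a wide gap, liability in that region is poorly measured.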
APA, Harvard, Vancouver, ISO, and other styles
23

Grilli, Leonardo, and Carla Rampichini. "The Role of Sample Cluster Means in Multilevel Models." Methodology 7, no. 4 (August 1, 2011): 121–33. http://dx.doi.org/10.1027/1614-2241/a000030.

Full text
Abstract:
The paper explores some issues related to endogeneity in multilevel models, focusing on the case where the random effects are correlated with a level 1 covariate in a linear random intercept model. We consider two basic specifications, without and with the sample cluster mean. It is generally acknowledged that the omission of the cluster mean may cause omitted-variable bias. However, it is often neglected that the inclusion of the sample cluster mean in place of the population cluster mean entails a measurement error that yields biased estimators for both the slopes and the variance components. In particular, the contextual effect is attenuated, while the level 2 variance is inflated. We derive explicit formulae for measurement error biases that allow us to implement simple post-estimation corrections based on the reliability of the covariate. In the first part of the paper, the issue is tackled in a standard framework where the population cluster mean is treated as a latent variable. Later we consider a different framework arising when sampling from clusters of finite size, where the latent variable methods may have a poor performance, and we show how to effectively modify the measurement error correction. The theoretical analysis is supplemented with a simulation study and a discussion of the implications for effectiveness evaluation.
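The post-estimation correction described here rests on the classical reliability of the sample cluster mean, λ = τ² / (τ² + σ²/n), where τ² is the level 2 variance, σ² the level 1 variance, and n the cluster sample size; dividing the attenuated contextual estimate by λ undoes the attenuation. A sketch of these standard formulas (notation assumed, not taken verbatim from the paper):

```python
def cluster_mean_reliability(tau2, sigma2, n):
    """Reliability of the sample cluster mean of a level-1 covariate:
    between-cluster variance over total variance of the sample mean."""
    return tau2 / (tau2 + sigma2 / n)

def deattenuate(estimate, reliability):
    """Correct a contextual-effect estimate attenuated by measurement error."""
    return estimate / reliability
```

As n grows, the reliability approaches 1 and the correction vanishes, which is why the problem is most acute with small clusters.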
APA, Harvard, Vancouver, ISO, and other styles
24

Olivera-Aguilar, Margarita, Samuel H. Rikoon, Oscar Gonzalez, Yasemin Kisbu-Sakarya, and David P. MacKinnon. "Bias, Type I Error Rates, and Statistical Power of a Latent Mediation Model in the Presence of Violations of Invariance." Educational and Psychological Measurement 78, no. 3 (January 6, 2017): 460–81. http://dx.doi.org/10.1177/0013164416684169.

Full text
Abstract:
When testing a statistical mediation model, it is assumed that factorial measurement invariance holds for the mediating construct across levels of the independent variable X. The consequences of failing to address the violations of measurement invariance in mediation models are largely unknown. The purpose of the present study was to systematically examine the impact of mediator noninvariance on the Type I error rates, statistical power, and relative bias in parameter estimates of the mediated effect in the single mediator model. The results of a large simulation study indicated that, in general, the mediated effect was robust to violations of invariance in loadings. In contrast, most conditions with violations of intercept invariance exhibited severely positively biased mediated effects, Type I error rates above acceptable levels, and statistical power larger than in the invariant conditions. The implications of these results are discussed and recommendations are offered.
APA, Harvard, Vancouver, ISO, and other styles
25

Pankowska, Paulina, Bart F. M. Bakker, Daniel L. Oberski, and Dimitris Pavlopoulos. "How Linkage Error Affects Hidden Markov Model Estimates: A Sensitivity Analysis." Journal of Survey Statistics and Methodology 8, no. 3 (May 29, 2019): 483–512. http://dx.doi.org/10.1093/jssam/smz011.

Full text
Abstract:
Hidden Markov models (HMMs) are increasingly used to estimate and correct for classification error in categorical, longitudinal data, without the need for a “gold standard,” error-free data source. To accomplish this, HMMs require multiple observations over time on a single indicator and assume that the errors in these indicators are conditionally independent. Unfortunately, this “local independence” assumption is often unrealistic, untestable, and a source of serious bias. Linking independent data sources can solve this problem by making the local independence assumption plausible across sources, while potentially allowing for local dependence within sources. However, record linkage introduces a new problem: the records may be erroneously linked or incorrectly not linked. In this paper, we investigate the effects of linkage error on HMM estimates of transitions between employment contract types. Our data come from linking a labor force survey to administrative employer records; this linkage yields two indicators per time point that are plausibly conditionally independent. Our results indicate that both false-negative and false-positive linkage error turn out to be problematic primarily if the error is large and highly correlated with the dependent variable. Moreover, under certain conditions, false-positive linkage error (mislinkage) in fact acts as another source of misclassification that the HMM can absorb into its error-rate estimates, leaving the latent transition estimates unbiased. In these cases, measurement error modeling already accounts for linkage error. Our results also indicate where these conditions break down and more complex methods would be needed.
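The HMMs discussed in this abstract attach a misclassification (emission) matrix to each indicator; the likelihood of an observed sequence then follows from the standard forward recursion. A generic two-state sketch with made-up matrices, not the paper's fitted model:

```python
def forward_likelihood(obs, pi, A, B):
    """Likelihood of an observed sequence under an HMM.
    pi: initial state probabilities; A[r][s]: transition prob r -> s;
    B[s][o]: prob that true state s is recorded (possibly misclassified) as o."""
    n = len(pi)
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]  # initial step
    for o in obs[1:]:
        alpha = [sum(alpha[r] * A[r][s] for r in range(n)) * B[s][o]
                 for s in range(n)]  # propagate and absorb next observation
    return sum(alpha)
```

The point the authors make about mislinkage can be seen directly: an observed change that is impossible under the true transition structure has zero likelihood with error-free measurement, but positive likelihood once the emission matrix allows misclassification.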
APA, Harvard, Vancouver, ISO, and other styles
26

Köhler, Carmen, Johannes Hartig, and Alexander Naumann. "Detecting Instruction Effects—Deciding Between Covariance Analytical and Change-Score Approach." Educational Psychology Review 33, no. 3 (January 27, 2021): 1191–211. http://dx.doi.org/10.1007/s10648-020-09590-6.

Full text
Abstract:
The article focuses on estimating effects in nonrandomized studies with two outcome measurement occasions and one predictor variable. Given such a design, the analysis approach can be to include the measurement at the previous time point as a predictor in the regression model (ANCOVA), or to predict the change-score of the outcome variable (CHANGE). Researchers demonstrated that both approaches can result in different conclusions regarding the reported effect. Current recommendations on when to apply which approach are, in part, contradictory. In addition, they lack direct reference to the educational and instructional research contexts, since they do not consider latent variable models in which variables are measured without measurement error. This contribution assists researchers in making decisions regarding their analysis model. Using an underlying hypothetical data-generating model, we identify for which kind of data-generating scenario (i.e., under which assumptions) the defined true effect equals the estimated regression coefficients of the ANCOVA and the CHANGE approach. We give empirical examples from instructional research and discuss which approach is more appropriate, respectively.
APA, Harvard, Vancouver, ISO, and other styles
27

Kato, Kengo, Yuya Sasaki, and Takuya Ura. "Robust inference in deconvolution." Quantitative Economics 12, no. 1 (2021): 109–42. http://dx.doi.org/10.3982/qe1643.

Full text
Abstract:
Kotlarski's identity has been widely used in applied economic research based on repeated‐measurement or panel models with latent variables. However, how to conduct inference for these models has been an open question for two decades. This paper addresses this open problem by constructing a novel confidence band for the density function of a latent variable in repeated measurement error model. The confidence band builds on our finding that we can rewrite Kotlarski's identity as a system of linear moment restrictions. Our approach is robust in that we do not require the completeness. The confidence band controls the asymptotic size uniformly over a class of data generating processes, and it is consistent against all fixed alternatives. Simulation studies support our theoretical results.
APA, Harvard, Vancouver, ISO, and other styles
29

Becker, Mark P., and Ilsoon Yang. "7. Latent Class Marginal Models for Cross-Classifications of Counts." Sociological Methodology 28, no. 1 (August 1998): 293–325. http://dx.doi.org/10.1111/0081-1750.00050.

Full text
Abstract:
The standard latent class model is a finite mixture of indirectly observed multinomial distributions, each of which is assumed to exhibit statistical independence. Latent class analysis has been applied in a wide variety of research contexts, including studies of mobility, educational attainment, agreement, and diagnostic accuracy, and as measurement error models in social research. One of the attractive features of the latent class model in these settings is that the parameters defining the individual multinomials are readily interpretable marginal probabilities, conditional on the unobserved latent variable(s), that are often of substantive interest. There are, however, settings where the local-independence axiom is not supported, and hence it is useful to consider some form of local dependence. In this paper we consider a family of models defined in terms of finite mixtures of multinomial models where the multinomials are parameterized in terms of a set of models for the univariate marginal distributions and for marginal associations. Local dependence is introduced through the models for marginal associations, and the standard latent class model obtains as a special case. Three examples are analyzed with the models to illustrate their utility in analyzing complex cross-classifications.
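Under the standard latent class model described here, the probability of a binary response pattern is a mixture over classes of locally independent item probabilities. A minimal sketch of that baseline model (the paper's marginal-association extension for local dependence is not shown):

```python
def latent_class_prob(pattern, class_probs, item_probs):
    """P(pattern) = sum_c pi_c * prod_j p_cj^y_j * (1 - p_cj)^(1 - y_j),
    for binary items assumed independent within each latent class.
    class_probs: mixing weights pi_c; item_probs[c][j]: P(item j = 1 | class c)."""
    total = 0.0
    for pc, probs in zip(class_probs, item_probs):
        lik = pc
        for y, p in zip(pattern, probs):
            lik *= p if y == 1 else 1.0 - p
        total += lik
    return total
```

Because each class distribution and the mixing weights are proper probabilities, the pattern probabilities sum to one over all response patterns.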
APA, Harvard, Vancouver, ISO, and other styles
30

Steyer, Rolf. "Analyzing Individual and Average Causal Effects via Structural Equation Models." Methodology 1, no. 1 (January 2005): 39–54. http://dx.doi.org/10.1027/1614-1881.1.1.39.

Full text
Abstract:
Although both individual and average causal effects are defined in Rubin's approach to causality, in this tradition almost all papers center around learning about the average causal effects. Almost no efforts deal with developing designs and models to learn about individual effects. This paper takes a first step in this direction. In the first and general part, Rubin's concepts of individual and average causal effects are extended, replacing Rubin's deterministic potential-outcome variables by the stochastic expected-outcome variables. Based on this extension, in the second and main part specific designs, assumptions and models are introduced which allow identification of (1) the variance of the individual causal effects, (2) the regression of the individual causal effects on the true scores of the pretests, (3) the regression of the individual causal effects on other explanatory variables, and (4) the individual causal effects themselves. Although random assignment of the observational unit to one of the treatment conditions is useful and yields stronger results, much can be achieved with a nonequivalent control group. The simplest design requires two pretests measuring a pretest latent trait that can be interpreted as the expected outcome under control, and two posttests measuring a posttest latent trait: the expected outcome under treatment. The difference between these two latent trait variables is the individual-causal-effect variable, provided some assumptions can be made. These assumptions - which rule out alternative explanations in the Campbellian tradition - imply a single-trait model (a one-factor model) for the untreated control condition in which no treatment takes place, except for change due to measurement error. These assumptions define a testable model. More complex designs and models require four occasions of measurement, two pretest occasions and two posttest occasions. The no-change model for the untreated control condition is then a single-trait-multistate model allowing for measurement error and occasion-specific effects.
APA, Harvard, Vancouver, ISO, and other styles
31

Adams, Raymond J., Mark Wilson, and Margaret Wu. "Multilevel Item Response Models: An Approach to Errors in Variables Regression." Journal of Educational and Behavioral Statistics 22, no. 1 (March 1997): 47–76. http://dx.doi.org/10.3102/10769986022001047.

Full text
Abstract:
In this article we show how certain analytic problems that arise when one attempts to use latent variables as outcomes in regression analyses can be addressed by taking a multilevel perspective on item response modeling. Under a multilevel, or hierarchical, perspective we cast the item response model as a within-student model and the student population distribution as a between-student model. Taking this perspective leads naturally to an extension of the student population model to include a range of student-level variables, and it invites the possibility of further extending the models to additional levels so that multilevel models can be applied with latent outcome variables. In the two-level case, the model that we employ is formally equivalent to the plausible value procedures that are used as part of the National Assessment of Educational Progress (NAEP), but we present the method for a different class of measurement models, and we use a simultaneous estimation method rather than two-step estimation. In our application of the models to the appropriate treatment of measurement error in the dependent variable of a between-student regression, we also illustrate the adequacy of some approximate procedures that are used in NAEP.
APA, Harvard, Vancouver, ISO, and other styles
32

Xue, Xiaonan, Maja Oktay, Sumanta Goswami, and Mimi Y. Kim. "A method to compare the performance of two molecular diagnostic tools in the absence of a gold standard." Statistical Methods in Medical Research 28, no. 2 (August 17, 2017): 419–31. http://dx.doi.org/10.1177/0962280217726804.

Full text
Abstract:
The paper is motivated by the problem of comparing the accuracy of two molecular tests in detecting genetic mutations in tumor samples when there is no gold standard test. Commonly used sequencing methods require a large number of tumor cells in the tumor sample and the proportion of tumor cells with mutation positivity to be above a threshold level whereas new tests aim to reduce the requirement for number of tumor cells and the threshold level. A new latent class model is proposed to compare these two tests in which a random variable is used to represent the unobserved proportion of mutation positivity so that these two tests are conditionally dependent; furthermore, an independent random variable is included to address measurement error associated with the reading from each test, while existing latent class models often assume conditional independence and do not allow measurement error. In addition, methods for calculating the sample size for a study that is sufficiently powered to compare the accuracy of two molecular tests are proposed and compared. The proposed methods are then applied to a study which aims to compare two molecular tests for detecting EGFR mutations in lung cancer patients.
APA, Harvard, Vancouver, ISO, and other styles
33

E. Berzofsky, Marcus, and Paul P. Biemer. "Time Varying Grouping Variables in Markov Latent Class Analysis: Some Problems and Solutions." International Journal of Statistics and Probability 7, no. 5 (August 3, 2018): 28. http://dx.doi.org/10.5539/ijsp.v7n5p28.

Full text
Abstract:
Markov latent class analysis (MLCA) is a modeling technique for panel or longitudinal data that can be used to estimate the classification error rates (e.g., false positive and false negative rates for dichotomous items) for discrete outcomes with categorical predictors when gold-standard measurements are not available. Because panel surveys collect data at multiple time points, the grouping variables in the model may either be time varying or time invariant (static). Time varying grouping variables may be more correlated with either the latent construct or the measurement errors because they are measured simultaneously with the construct during the measurement process. However, they generate a large number of model parameters that can cause problems with data sparseness, model diagnostic validity, and model convergence. In this paper we investigate whether more parsimonious grouping variables that either summarize the variation of the time varying grouping variable or assume a structure that lacks memory of previous values of the grouping variables can be used instead, without sacrificing model fit or validity. We propose a simple diagnostic approach for comparing the validity of models that use time-invariant summary variables with their time-varying counterparts. To illustrate the methodology, this approach is applied to data from the National Crime Victimization Survey (NCVS) where greater parsimony and a reduction in data sparseness were achieved with no appreciable loss in model validity for the outcome variables considered. The approach is generalized for application to essentially any MLCA using time varying group variables and its advantages and disadvantages are discussed.
APA, Harvard, Vancouver, ISO, and other styles
34

Bergantino, Angela S., Mauro Capurso, Thijs Dekker, and Stephane Hess. "Allowing for Heterogeneity in the Consideration of Airport Access Modes: The Case of Bari Airport." Transportation Research Record: Journal of the Transportation Research Board 2673, no. 8 (February 7, 2019): 50–61. http://dx.doi.org/10.1177/0361198118825126.

Full text
Abstract:
Mode choice models traditionally assume that all objectively available alternatives are considered. This might not always be a reasonable assumption, even when the number of alternatives is limited. Consideration of alternatives, like many other aspects of the decision-making process, cannot be observed by the analyst, and can only be imperfectly measured. As part of a stated choice survey aimed at unveiling air passengers’ preferences for access modes to Bari International Airport in Italy, we collected a wide set of indicators that either directly or indirectly measure respondents’ consideration of the public transport alternatives. In our access mode choice model, consideration of public transport services was treated as a latent variable, and entered the utility function for this mode through a “discounting” factor. The proposed integrated choice and latent variable approach allows the analyst not only to overcome potential endogeneity and measurement error issues associated with the indicators, but also makes the model suitable for forecasting. As a result of accounting for consideration effects, we observed an improvement in fit that also held in a validation sample; moreover, the effects of policy changes aimed at improving the modal share of public transport were considerably reduced.
APA, Harvard, Vancouver, ISO, and other styles
35

Williams, Larry J., Ernest H. O’Boyle, and Jia (Joya) Yu. "Condition 9 and 10 Tests of Model Confirmation: A Review of James, Mulaik, and Brett (1982) and Contemporary Alternatives." Organizational Research Methods 23, no. 1 (October 26, 2017): 6–29. http://dx.doi.org/10.1177/1094428117736137.

Full text
Abstract:
Structural equation modeling (SEM) serves as one of the most important advances in the social sciences in the past 40 years. Through a combination of factor analysis and path analysis, SEM allows organizational researchers to test causal models while accounting for random and nonrandom (bias) measurement error. SEM is now one of the most commonly used analytic techniques and its modern day ubiquity can be traced in large part to a series of intellectual contributions by Larry James. The current article focuses on the seminal work, James, Mulaik, and Brett (1982), and the unique contribution of the “conditions” required for appropriate confirmatory inference with the path and latent variable models. We discuss the importance of James et al.’s Condition 9 and 10 tests, systematically review 14 years of studies using SEM in leading management journals and reanalyze results based on new techniques that extend James et al. (1982), and conclude with suggestions for improved Condition 9 and 10 assessments.
APA, Harvard, Vancouver, ISO, and other styles
36

TEACHMAN, JAY D. "Families as Natural Experiments." Journal of Family Issues 16, no. 5 (September 1995): 519–37. http://dx.doi.org/10.1177/019251395016005002.

Full text
Abstract:
In this article, the author argues that data on siblings provide a way to account for the impact of unmeasured, omitted variables on relationships of interest. This is possible because families form a sort of natural experiment. Family members are likely to have many shared experiences, as well as a common genetic heritage, but relationships between variables defined as differences between family members cannot be attributed to these shared family characteristics. Although fixed- and random-effects models are discussed as one means to make use of information on siblings from the same family, the author proposes a latent-variable structural equation approach to the problem. This model provides estimates of both within- and between-family relationships, and it accounts for the impact of measurement error.
APA, Harvard, Vancouver, ISO, and other styles
37

Paap, Muirne C. S., Karel A. Kroeze, Cees A. W. Glas, Caroline B. Terwee, Job van der Palen, and Bernard P. Veldkamp. "Measuring Patient-Reported Outcomes Adaptively: Multidimensionality Matters!" Applied Psychological Measurement 42, no. 5 (October 24, 2017): 327–42. http://dx.doi.org/10.1177/0146621617733954.

Full text
Abstract:
As there is currently a marked increase in the use of both unidimensional (UCAT) and multidimensional computerized adaptive testing (MCAT) in psychological and health measurement, the main aim of the present study is to assess the incremental value of using MCAT rather than separate UCATs for each dimension. Simulations are based on empirical data that could be considered typical for health measurement: a large number of dimensions (4), strong correlations among dimensions (.77-.87), and polytomously scored response data. Both variable- ( SE < .316, SE < .387) and fixed-length conditions (total test length of 12, 20, or 32 items) are studied. The item parameters and variance–covariance matrix Φ are estimated with the multidimensional graded response model (GRM). Outcome variables include computerized adaptive test (CAT) length, root mean square error (RMSE), and bias. Both simulated and empirical latent trait distributions are used to sample vectors of true scores. MCATs were generally more efficient (in terms of test length) and more accurate (in terms of RMSE) than their UCAT counterparts. Absolute average bias was highest for variable-length UCATs with termination rule SE < .387. Test length of variable-length MCATs was on average 20% to 25% shorter than test length across separate UCATs. This study showed that there are clear advantages of using MCAT rather than UCAT in a setting typical for health measurement.
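The graded response model (GRM) used for the polytomous items above defines category probabilities as differences of adjacent cumulative logistic curves. A unidimensional sketch (the study itself fits the multidimensional version with a variance-covariance matrix across traits):

```python
import math

def grm_category_probs(theta, a, thresholds):
    """Category probabilities under Samejima's graded response model.
    thresholds must be increasing; returns len(thresholds) + 1 probabilities,
    one per ordered response category."""
    cum = [1.0]  # P(X >= 0) is always 1
    cum += [1.0 / (1.0 + math.exp(-a * (theta - b))) for b in thresholds]
    cum.append(0.0)  # P(X >= K + 1) is always 0
    return [cum[k] - cum[k + 1] for k in range(len(cum) - 1)]
```

In a CAT, these category probabilities feed the item information used to pick the next item and to decide when the SE-based stopping rule (e.g., SE < .316) is met.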
APA, Harvard, Vancouver, ISO, and other styles
38

Mohiuddin, Hossain, Md Musfiqur Rahman Bhuiya, Shaila Jamal, and Zhi Chen. "Exploring the Choice of Bicycling and Walking in Rajshahi, Bangladesh: An Application of Integrated Choice and Latent Variable (ICLV) Models." Sustainability 14, no. 22 (November 9, 2022): 14784. http://dx.doi.org/10.3390/su142214784.

Full text
Abstract:
Bangladesh has emphasized active transportation in its transportation policies and has encouraged its population, especially the youth and students, towards bicycling. However, there is a scarcity of studies that have examined the factors important to the choice of active transportation that can be referenced to support the initiative. To address this research gap, in this study, we explore the influence of sociodemographics and latent perceptions of a built environment on the choice to walk and bicycle among students and nonstudents in Rajshahi, Bangladesh. In Rajshahi, we conducted a household survey between July and August, 2017. We used a modeling framework that integrated choice and latent variable (ICLV) models to effectively incorporate the latent perception variables in the choice model, addressing measurement error and endogeneity bias. Our models show that students are influenced by perceptions of safety from crime, while nonstudents are influenced by their perceptions of the walkability of a built environment when choosing a bicycle for commuting trips. For recreational bicycle trips, students are more concerned about the perceptions of road safety, whereas nonstudents are concerned about safety from crime. We find that road safety perception significantly and positively influences walking behavior among nonstudents. Structural equation models of the latent perception variables show that females are more likely to provide lower perceptions of neighborhood walkability, road safety, and safety from crime. Regarding active transportation decisions, overall, we find there is a difference between student and nonstudent groups and also within these groups. The findings of this study can assist in developing a sustainable active transportation system by addressing the needs of different segments of the population. In this study, we also provide recommendations regarding promoting active transportation in Rajshahi.
APA, Harvard, Vancouver, ISO, and other styles
39

Forsberg, Jakob, Per Munk Nielsen, Søren Balling Engelsen, and Klavs Martin Sørensen. "On-Line Real-Time Monitoring of a Rapid Enzymatic Oil Degumming Process: A Feasibility Study Using Free-Run Near-Infrared Spectroscopy." Foods 10, no. 10 (October 5, 2021): 2368. http://dx.doi.org/10.3390/foods10102368.

Full text
Abstract:
Enzymatic degumming is a well established process in vegetable oil refinement, resulting in higher oil yield and a more stable downstream processing compared to traditional degumming methods using acid and water. During the reaction, phospholipids in the oil are hydrolyzed to free fatty acids and lyso-phospholipids. The process is typically monitored by off-line laboratory measurements of the free fatty acid content in the oil, and there is a demand for an automated on-line monitoring strategy to increase both yield and understanding of the process dynamics. This paper investigates the option of using Near-Infrared spectroscopy (NIRS) to monitor the enzymatic degumming reaction. A new method for balancing spectral noise and keeping the chemical information in the spectra obtained from a rapid changing chemical process is suggested. The effect of a varying measurement averaging window width (0 to 300 s), preprocessing method and variable selection algorithm is evaluated, aiming to obtain the most accurate and robust calibration model for prediction of the free fatty acid content (% (w/w)). The optimal Partial Least Squares (PLS) model includes eight wavelength variables, as found by rPLS (recursive PLS) calibration, and yields an RMSECV (Root Mean Square Error of Cross Validation) of 0.05% (w/w) free fatty acid using five latent variables.
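The RMSECV reported above is the root mean square of cross-validated prediction errors; a minimal sketch of the metric itself (the PLS fitting and variable selection are omitted):

```python
import math

def rmsecv(y_true, y_pred):
    """Root Mean Square Error of Cross Validation: RMSE over the
    cross-validated predictions paired with their reference values."""
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)
```

An RMSECV of 0.05% (w/w), as in the paper, means the cross-validated free fatty acid predictions deviate from the reference measurements by about 0.05 percentage points on average.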
APA, Harvard, Vancouver, ISO, and other styles
40

Abdullah, Nurzulaikha, Yee Cheng Kueh, Garry Kuan, Mung Seong Wong, Fatan Hamamah Yahaya, Nor Aslina Abd Samat, Khairil Khuzaini Zulkifli, and Yeong Yeh Lee. "Development and validation of the Health Promoting Behaviour for Bloating (HPB-Bloat) scale." PeerJ 9 (June 4, 2021): e11444. http://dx.doi.org/10.7717/peerj.11444.

Full text
Abstract:
Background Health management strategies may help patients with abdominal bloating (AB), but there are currently no tools that measure behaviour and awareness. This study aimed to validate and verify the dimensionality of the newly developed Health Promoting Behaviour for Bloating (HPB-Bloat) scale. Methods Based on previous literature, expert input, and in-depth interviews, we generated new items for the HPB-Bloat. Its content validity was assessed by experts and pre-tested with 30 individuals with AB. Construct validity and dimensionality were first determined using exploratory factor analysis (EFA) with Promax rotation, and then using confirmatory factor analysis (CFA). Results During the development stage, 35 items were generated for the HPB-Bloat and were retained following content validity assessment and pre-testing. One hundred and fifty-two participants (mean age of 31.27 years, 68.3% female) and 323 participants (mean age of 27.69 years, 59.4% male) completed the scale for the EFA and CFA, respectively. Using EFA, we identified 20 items divided into five factors: diet (five items), health awareness (four items), physical activity (three items), stress management (four items), and treatment (four items). The total variance explained by the EFA model was 56.7%. The Cronbach alpha values of the five factors ranged between 0.52 and 0.81. In the CFA model, one problematic latent variable (treatment) was identified and three items were removed. In the final measurement model, four factors and 17 items fit the data well based on several fit indices (root mean square error of approximation (RMSEA) = 0.044 and standardized root mean squared residual (SRMR) = 0.052). The composite reliability of all factors in the final measurement model was above 0.60, indicating acceptable construct reliability. Conclusion The newly developed HPB-Bloat scale is valid and reliable for assessing awareness of health-promoting behaviours among patients with AB. Further validation is needed across different languages and populations.
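For readers unfamiliar with the Cronbach alpha values reported above, the coefficient can be computed directly from item responses. The sketch below uses hypothetical data, not data from the cited study.

```python
from statistics import pvariance

# Toy responses: 4 items rated 1-5 by 6 respondents (hypothetical data).
items = [
    [4, 5, 3, 4, 2, 5],
    [3, 5, 3, 4, 2, 4],
    [4, 4, 2, 5, 3, 5],
    [3, 4, 3, 4, 2, 4],
]

def cronbach_alpha(item_scores):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = len(item_scores)
    totals = [sum(resp) for resp in zip(*item_scores)]  # per-respondent sums
    item_var = sum(pvariance(scores) for scores in item_scores)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

alpha = cronbach_alpha(items)
print(round(alpha, 3))
```

Because the toy items covary strongly, alpha is high here; the 0.52 reported for one HPB-Bloat factor would correspond to much weaker inter-item covariance.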
APA, Harvard, Vancouver, ISO, and other styles
41

Naragon-Gainey, K., and D. Watson. "The anxiety disorders and suicidal ideation: accounting for co-morbidity via underlying personality traits." Psychological Medicine 41, no. 7 (November 8, 2010): 1437–47. http://dx.doi.org/10.1017/s0033291710002096.

Full text
Abstract:
Background The anxiety disorders are robust correlates/predictors of suicidal ideation, but it is unclear whether (a) the anxiety disorders are specifically associated with suicidal ideation or (b) the association is due to co-morbidity with depression and other disorders. One means of modeling co-morbidity is through the personality traits neuroticism/negative emotionality (N/NE) and extraversion/positive emotionality (E/PE), which account for substantial shared variance among the internalizing disorders. The current study examines the association between the internalizing disorders and suicidal ideation, after controlling for co-morbidity via N/NE and E/PE. Method The sample consisted of 327 psychiatric out-patients. Multiple self-report and interview measures were collected for internalizing disorders [depression, generalized anxiety disorder (GAD), post-traumatic stress disorder (PTSD), social anxiety, panic and specific phobia] and suicidal ideation, as well as self-report measures for N/NE and E/PE. A model was hypothesized in which each disorder and suicidal ideation was regressed on N/NE, and depression and social anxiety were regressed on E/PE. Structural equation modeling (SEM) was used to examine the unique association of suicidality with each disorder, beyond shared variance with N/NE and E/PE. Results The hypothesized model was an acceptable fit to the data. Although zero-order analyses indicated that suicidal ideation was moderately to strongly correlated with all of the disorders, only depression and PTSD remained significantly associated with suicidal ideation in the SEM analyses. Conclusions In a latent variable model that accounts for measurement error and a broad source of co-morbidity, only depression and PTSD were uniquely associated with suicidal ideation; panic, GAD, social anxiety and specific phobia were not.
APA, Harvard, Vancouver, ISO, and other styles
42

Bishop, Nicholas. "Estimating Sample Size Required to Establish an Association Between Walnut Intake and Cognitive Change in Older Adults: An Application of Monte Carlo Power Analysis." Current Developments in Nutrition 4, Supplement_2 (May 29, 2020): 10. http://dx.doi.org/10.1093/cdn/nzaa040_010.

Full text
Abstract:
Objectives Observational studies support a cross-sectional association between walnut intake and cognitive function among older adults, but few of these studies identify walnut intake as a predictor of cognitive change. This project estimates the sample size required to establish a statistically significant association between walnut intake and cognitive change in an observational study using Monte Carlo power analysis. Methods Initial observations were drawn from the 2012, 2014, and 2016 Health and Retirement Study (HRS) and the 2013 Health Care and Nutrition Study (HCNS; age ≥ 65, n = 3632). Global cognitive function was measured using the Telephone Interview for Cognitive Status (TICS), and two operationalizations of walnut intake were investigated (none, low intake (.01–.08 servings/day), and moderate intake (> .08 servings/day); no intake vs. any intake). Latent growth models adjusting for covariates and complex sample design were used to estimate age-based developmental trajectories of TICS scores as a function of walnut intake. Parameter estimates from these models were used as starting values in Monte Carlo simulation models replicated for sample sizes from 1000 to 50,000. Results Model estimation required around 1200 hours of processing time. When measured as a trichotomous variable, the observed association between walnut intake and cognitive change was weak (for moderate intake, b = −0.030, SE = .03) and would require at least 42,000 observations to reduce the standard error to a level where 80% or more of random samples would identify the effect as statistically significant (P < .05). When measured as a dichotomous variable, the observed effect was small (b = −0.013, SE = 0.025) and required a sample size of at least 39,000 observations to achieve power above .80.
Conclusions Given that the HRS and HCNS are nationally representative studies, the population size from which an adequate sample would need to be drawn to identify walnut intake as a significant predictor of cognitive decline would exceed the number of adults age 65 and older currently living in the US. Rather than increasing the sample size of observational studies, researchers should apply quasi-experimental methods and detailed measurement of walnut intake to establish an association between walnut intake and cognitive change. Funding Sources This research was funded by the California Walnut Commission.
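The Monte Carlo power logic described above, estimating power as the share of simulated samples in which an effect of fixed size reaches significance, can be sketched as follows. This is a deliberately simplified one-sample z-test stand-in, not the authors' latent growth models; the effect size of 0.03 SD units merely echoes the order of magnitude reported above.

```python
import math
import random

random.seed(1)

def simulated_power(effect, sd, n, reps=200):
    """Fraction of simulated samples of size n in which a two-sided
    one-sample z-test (known sd) detects `effect` at the 5% level."""
    z_crit = 1.96
    hits = 0
    for _ in range(reps):
        sample = [random.gauss(effect, sd) for _ in range(n)]
        mean = sum(sample) / n
        se = sd / math.sqrt(n)
        if abs(mean / se) > z_crit:
            hits += 1
    return hits / reps

# A weak effect needs a very large n to reach the conventional 80% power.
for n in (500, 5000, 20000):
    print(n, simulated_power(effect=0.03, sd=1.0, n=n))
```

The pattern reproduces the paper's qualitative point: for an effect this small, samples in the tens of thousands are needed before most random samples would flag it as significant.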
APA, Harvard, Vancouver, ISO, and other styles
43

Skrondal, Anders, and Sophia Rabe-Hesketh. "The Role of Conditional Likelihoods in Latent Variable Modeling." Psychometrika, January 10, 2022. http://dx.doi.org/10.1007/s11336-021-09816-8.

Full text
Abstract:
In psychometrics, the canonical use of conditional likelihoods is for the Rasch model in measurement. Whilst not disputing the utility of conditional likelihoods in measurement, we examine a broader class of problems in psychometrics that can be addressed via conditional likelihoods. Specifically, we consider cluster-level endogeneity, where the standard assumption that observed explanatory variables are independent of latent variables is violated. Here, “cluster” refers to the entity characterized by latent variables or random effects, such as individuals in measurement models or schools in multilevel models, and “unit” refers to the elementary entity, such as an item in measurement. Cluster-level endogeneity problems can arise in a number of settings, including unobserved confounding of causal effects, measurement error, retrospective sampling, informative cluster sizes, missing data, and heteroskedasticity. Severely inconsistent estimation can result if these challenges are ignored.
APA, Harvard, Vancouver, ISO, and other styles
44

Fischer, Luise, Theresa Rohm, Claus H. Carstensen, and Timo Gnambs. "Linking of Rasch-Scaled Tests: Consequences of Limited Item Pools and Model Misfit." Frontiers in Psychology 12 (July 6, 2021). http://dx.doi.org/10.3389/fpsyg.2021.633896.

Full text
Abstract:
In the context of item response theory (IRT), linking the scales of two measurement points is a prerequisite for examining change in competence over time. In educational large-scale assessments, non-identical test forms sharing a number of anchor items are frequently scaled and linked using two- or three-parameter item response models. However, if item pools are limited and/or sample sizes are small to medium, the sparser Rasch model is a suitable alternative with regard to the precision of parameter estimation. As the Rasch model implies stricter assumptions about the response process, a violation of these assumptions may manifest as model misfit in the form of item discrimination parameters empirically deviating from their fixed value of one. The present simulation study investigated the performance of four IRT linking methods—fixed parameter calibration, mean/mean linking, weighted mean/mean linking, and concurrent calibration—applied to Rasch-scaled data with a small item pool. Moreover, the number of anchor items required in the absence/presence of moderate model misfit was investigated in small to medium sample sizes. Effects on the link outcome were operationalized as bias, relative bias, and root mean square error of the estimated sample mean and variance of the latent variable. In this limited context, concurrent calibration had substantial convergence issues, while the other methods resulted in an overall satisfactory and similar parameter recovery—even in the presence of moderate model misfit. Our findings suggest that in case of model misfit, the share of anchor items should exceed the 20% currently proposed in the literature. Future studies should further investigate the effects of anchor item composition regarding unbalanced model misfit.
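Of the four linking methods compared above, mean/mean linking is simple enough to sketch directly: under the Rasch model the two forms' scales differ only by a constant shift, which is estimated from the anchor items. The difficulty values below are made up for illustration.

```python
# Anchor-item difficulties as estimated on two separately scaled Rasch forms.
ref_anchor = {"a1": -0.50, "a2": 0.10, "a3": 0.90}   # reference-form scale
new_anchor = {"a1": -0.20, "a2": 0.45, "a3": 1.15}   # new-form scale
new_unique = {"u1": 0.60, "u2": -1.00}                # new-form-only items

# Mean/mean linking constant: difference of the anchor-item means.
shift = (sum(ref_anchor.values()) / len(ref_anchor)
         - sum(new_anchor.values()) / len(new_anchor))

# Place every new-form item on the reference scale.
linked = {item: b + shift for item, b in {**new_anchor, **new_unique}.items()}
print(f"linking constant: {shift:+.3f}")
for item, b in sorted(linked.items()):
    print(item, round(b, 3))
```

After the shift, the new form's unique items are expressed on the reference scale; the study's question is how well this recovery holds up with few anchors, small samples, and discrimination-related misfit.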
APA, Harvard, Vancouver, ISO, and other styles
45

Monticone, Marco, Andrea Giordano, and Franco Franchignoni. "Scale Shortening and Decrease in Measurement Precision: Analysis of the Pain Self-Efficacy Questionnaire and Its Short Forms in an Italian-Speaking Population With Neck Pain Disorders." Physical Therapy 101, no. 6 (February 1, 2021). http://dx.doi.org/10.1093/ptj/pzab039.

Full text
Abstract:
Objective Short (2- and 4-item) forms of the Pain Self-Efficacy Questionnaire (PSEQ) have been proposed, but their measurement precision at the individual level is unclear. The purpose of this study was to analyze the Rasch psychometric characteristics of the PSEQ and its 3 short forms (one 4-item and two 2-item versions) in an Italian-speaking population with neck pain (NP) disorders and to compare their measurement precision at the individual level through calculation of the test information function (TIF). Methods A secondary analysis of data from a prospective single-group observational study was conducted. In 161 consecutive participants (mean age = 45 years [SD = 14]; 104 women) with NP disorders, a Rasch analysis was performed on each version of the PSEQ (full scale plus 3 short forms), and the TIF was calculated to examine the degree of measurement precision in estimating person ability over the whole measured construct (pain self-efficacy). Results In all versions of the PSEQ, the rating scale fulfilled the category functioning criteria, and all items showed an adequate fit to the Rasch model. The TIF showed a bell-shaped distribution of information, with acceptable measurement precision (standard error <0.5) for persons with a wide range of ability; conversely, measurement precision was unacceptably low in each short form (particularly the two 2-item versions). Conclusions The results confirm and expand reports on the sound psychometric characteristics of the PSEQ, showing for the first time, to our knowledge, its conditional precision in estimating pain self-efficacy measures in Italian individuals with NP disorders. The study cautions against use of the 3 PSEQ short forms for individual-level clinical decision-making. Impact Short scales are popular in rehabilitation settings largely because they can save assessment time and related costs.
The psychometric characteristics of the 10-item PSEQ were confirmed and extended, including its precision in estimating individual pain self-efficacy at different levels of this latent variable. On the other hand, the low measurement precision of the 3 PSEQ short forms cautions against their use for individual judgments.
APA, Harvard, Vancouver, ISO, and other styles
46

Baharudin, Harun, Zunita Mohamad Maskor, and Mohd Effendi Ewan Mohd Matore. "The raters’ differences in Arabic writing rubrics through the Many-Facet Rasch measurement model." Frontiers in Psychology 13 (December 16, 2022). http://dx.doi.org/10.3389/fpsyg.2022.988272.

Full text
Abstract:
Writing assessment relies heavily on scoring the quality of a subject's ideas. This creates a faceted measurement structure involving rubrics, tasks, and raters. Nevertheless, most studies have not considered differences among raters systematically. This study examines raters' differences in relation to the reliability and validity of writing rubrics, using the Many-Facet Rasch measurement model (MFRM) to model these differences. A set of standards for evaluating the quality of rating in writing assessment was examined. Rating quality was tested within four writing domains from an analytic rubric using a scale of one to three. The writing domains explored were vocabulary, grammar, language use, and organization; the data were obtained from 15 Arabic essays gathered from religious secondary school students under the supervision of the Malaysia Ministry of Education. Five practicing raters were selected to evaluate all the essays. As a result, (a) raters vary considerably along the leniency-severity dimension, so rater variation ought to be modeled; (b) combining findings across raters reduces uncertainty in the scores, thereby reducing the measurement error that could lower criterion validity with an external variable; and (c) MFRM adjustments effectively increased the correlations of the scores obtained from partial and full data. The predominant finding is that rating quality varies across analytic rubric domains. This also shows that MFRM is an effective way to model rater differences and to evaluate the validity and reliability of writing rubrics.
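A minimal sketch of the idea behind the MFRM's rater facet, using made-up ratings and a linear stand-in for the logit-scale model: in a fully crossed design, each rater's severity can be estimated as the deviation of that rater's mean from the grand mean, and scores adjusted accordingly.

```python
# Fully crossed toy design: 5 raters each score the same 5 essays
# (domain scores summed). The simulated raters differ only by a
# constant severity shift, so adjustment should equalize them exactly.
ratings = {
    "R1": [8, 10, 7, 9, 11],
    "R2": [6,  8, 5, 7,  9],   # consistently severe (R1 scores minus 2)
    "R3": [9, 11, 8, 10, 12],  # consistently lenient (R1 scores plus 1)
    "R4": [7,  9, 6, 8, 10],
    "R5": [8, 10, 7, 9, 11],
}

grand = sum(sum(v) for v in ratings.values()) / sum(len(v) for v in ratings.values())
severity = {r: sum(v) / len(v) - grand for r, v in ratings.items()}
adjusted = {r: [s - severity[r] for s in v] for r, v in ratings.items()}

print({r: round(sev, 2) for r, sev in severity.items()})
```

After adjustment, every rater assigns each essay the same score, because only a constant shift separated them; real MFRM performs the analogous adjustment on the logit scale and also flags raters whose misfit cannot be absorbed by a severity parameter.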
APA, Harvard, Vancouver, ISO, and other styles
47

Guastadisegni, Lucia, Silvia Cagnone, Irini Moustaki, and Vassilis Vasdekis. "Use of the Lagrange Multiplier Test for Assessing Measurement Invariance Under Model Misspecification." Educational and Psychological Measurement, June 2, 2021, 001316442110203. http://dx.doi.org/10.1177/00131644211020355.

Full text
Abstract:
This article studies the Type I error rates, false positive rates, and power of four versions of the Lagrange multiplier test for detecting measurement noninvariance in item response theory (IRT) models for binary data under model misspecification. The tests considered are the Lagrange multiplier test computed with the Hessian and cross-product approaches, the generalized Lagrange multiplier test, and the generalized jackknife score test. The two model misspecifications are local dependence among items and a nonnormal distribution of the latent variable. The power of the tests is computed in two ways: empirically, through Monte Carlo simulation, and asymptotically, using the asymptotic distribution of each test under the alternative hypothesis. The performance of these tests is evaluated by means of a simulation study. The results highlight that under mild model misspecification all tests perform well, while under strong model misspecification their performance deteriorates, especially the false positive rates under local dependence and the power for small sample sizes under misspecification of the latent variable distribution. In general, the Lagrange multiplier test computed with the Hessian approach and the generalized Lagrange multiplier test perform better in terms of false positive rates, while the Lagrange multiplier test computed with the cross-product approach has the highest power for small sample sizes. The asymptotic power turns out to be a good alternative to the classic empirical power because it is less time-consuming. The Lagrange tests studied here have also been applied to a real data set.
APA, Harvard, Vancouver, ISO, and other styles
48

Laghaie, Arash, and Thomas Otter. "EXPRESS: Measuring Evidence for Mediation in the Presence of Measurement Error." Journal of Marketing Research, January 7, 2023, 002224372311518. http://dx.doi.org/10.1177/00222437231151873.

Full text
Abstract:
Mediation analysis empirically investigates the process underlying the effect of an experimental manipulation on a dependent variable of interest. In the simplest mediation setting, the experimental treatment can affect the dependent variable through the mediator (indirect effect) and/or directly (direct effect). However, what appears to be an indirect effect in standard mediation analysis may reflect a data-generating process without mediation, including the possibility of a reversed causal ordering of the measured variables, regardless of the statistical properties of the estimate. To overcome this indeterminacy where possible, we turn the following insight into an operational procedure: a statistically reliable total effect, combined with strong evidence for conditional independence of treatment and outcome given the mediator, is unequivocal evidence for mediation as the underlying causal model. This is particularly helpful when theory is insufficient to definitively order the measured variables causally, or when the dependent variable is measured before what is believed to be the mediator. Our procedure combines Bayes factors as principled measures of the degree of support for conditional independence with latent variable modeling to account for measurement error and discretization in a fully Bayesian framework. Re-analyzing a set of published mediation studies, we illustrate how our approach facilitates stronger conclusions.
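The authors' key insight, that a reliable total effect plus conditional independence of treatment and outcome given the mediator supports mediation, can be illustrated with a simulated fully mediated process. This uses synthetic data, and a first-order partial correlation stands in for the paper's Bayes-factor and latent-variable machinery.

```python
import math
import random

random.seed(0)

n = 5000
T = [random.choice([0, 1]) for _ in range(n)]        # randomized treatment
M = [t + random.gauss(0.0, 1.0) for t in T]          # mediator caused by T
Y = [m + random.gauss(0.0, 1.0) for m in M]          # outcome caused only by M

def corr(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

r_ty, r_tm, r_my = corr(T, Y), corr(T, M), corr(M, Y)
# Partial correlation of T and Y given M: near zero under full mediation.
partial = (r_ty - r_tm * r_my) / math.sqrt((1 - r_tm ** 2) * (1 - r_my ** 2))
print(round(r_ty, 3), round(partial, 3))
```

Here the total association between treatment and outcome is clearly nonzero, while conditioning on the mediator removes it, which is exactly the evidential pattern the paper formalizes with Bayes factors.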
APA, Harvard, Vancouver, ISO, and other styles
49

Finch, W. Holmes, Brian F. French, and Alicia Hazelwood. "A Comparison of Confirmatory Factor Analysis and Network Models for Measurement Invariance Assessment When Indicator Residuals are Correlated." Applied Psychological Measurement, January 14, 2023, 014662162311517. http://dx.doi.org/10.1177/01466216231151700.

Full text
Abstract:
Social science research is heavily dependent on the use of standardized assessments of a variety of phenomena, such as mood, executive functioning, and cognitive ability. An important assumption when using these instruments is that they perform similarly for all members of the population. When this assumption is violated, the validity evidence of the scores is called into question. The standard approach for assessing the factorial invariance of measures across population subgroups is multiple-group confirmatory factor analysis (MGCFA). CFA models typically, but not always, assume that once the latent structure of the model is accounted for, the residual terms for the observed indicators are uncorrelated (local independence). Commonly, correlated residuals are introduced after a baseline model shows inadequate fit, and modification indices are then inspected to remedy the fit. An alternative procedure for fitting latent variable models that may be useful when local independence does not hold is based on network models. In particular, the residual network model (RNM) offers promise for fitting latent variable models in the absence of local independence via an alternative search procedure. This simulation study compared the performance of MGCFA and RNM for measurement invariance assessment when local independence is violated and residual covariances are themselves not invariant. Results revealed that RNM had better Type I error control and higher power than MGCFA when local independence was absent. Implications of the results for statistical practice are discussed.
APA, Harvard, Vancouver, ISO, and other styles
50

Bywater, Tracey, Abigail Dunn, Charlotte Endacott, Karen Smith, Paul A. Tiffin, Matthew Price, and Sarah Blower. "The Measurement Properties and Acceptability of a New Parent–Infant Bonding Tool (‘Me and My Baby’) for Use in United Kingdom Universal Healthcare Settings: A Psychometric, Cross-Sectional Study." Frontiers in Psychology 13 (February 14, 2022). http://dx.doi.org/10.3389/fpsyg.2022.804885.

Full text
Abstract:
Introduction The National Institute for Health and Care Excellence (NICE) guidelines acknowledge the importance of the parent–infant relationship for child development but highlight the need for further research to establish reliable tools for assessment, particularly for parents of children under 1 year. This study explores the acceptability and psychometric properties of a co-developed tool, ‘Me and My Baby’ (MaMB). Study design A cross-sectional design was applied. The MaMB was administered universally (in two sites) with mothers during routine 6–8-week Health Visitor contacts. The sample comprised 467 mothers (434 MaMB completers and 33 ‘non-completers’). Dimensionality of instrument responses was evaluated via exploratory and confirmatory ordinal factor analyses. Item response modeling was conducted via a Rasch calibration to evaluate how the tool conformed to principles of ‘fundamental measurement’. Tool acceptability was evaluated via completion rates and by comparing completers' and non-completers' demographic differences in age, parity, ethnicity, and English as an additional language. Free-text comments were summarized. Data sharing agreements and data management were compliant with the General Data Protection Regulation and University of York data management policies. Results High completion rates suggested the MaMB was acceptable. Psychometric analyses showed the response data to be an excellent fit to a unidimensional confirmatory factor analytic model. All items loaded statistically significantly and substantially (>0.4) on a single underlying factor (latent variable). The item response modeling showed that most MaMB items fitted the Rasch model. (Rasch) item reliability was high (0.94), yet the test yielded little information on each respondent, as highlighted by the relatively low ‘person separation index’ of 0.1. Conclusion and next steps The MaMB reliably measures a single construct, likely to be infant bonding.
However, further validation work is needed, preferably with ‘enriched population samples’ that include higher-need/risk families. The MaMB tool may benefit from reduced response categories (from four to three) and some modest item wording amendments. Following further validation and reliability appraisal, the MaMB may ultimately be used with fathers/other primary caregivers and be potentially useful in research, universal health settings as part of a referral pathway, and clinical practice, to identify dyads in need of additional support/interventions.
APA, Harvard, Vancouver, ISO, and other styles