Journal articles on the topic 'Rasch model'

Consult the top 50 journal articles for your research on the topic 'Rasch model.'


1

Otiv, Sunil. "The Rasch Model." Plastic and Reconstructive Surgery 131, no. 2 (2013): 283e–286e. http://dx.doi.org/10.1097/prs.0b013e318278d5ac.

2

van der Linden, Wim J. "Applying the Rasch Model." International Journal of Testing 1, no. 3-4 (2001): 319–26. http://dx.doi.org/10.1080/15305058.2001.9669478.

3

Snyder, Scott, and Robert Sheehan. "The Rasch Measurement Model." Journal of Early Intervention 16, no. 1 (1992): 87–95. http://dx.doi.org/10.1177/105381519201600108.

4

Wang, Wen-Chung, and Mark Wilson. "The Rasch Testlet Model." Applied Psychological Measurement 29, no. 2 (2005): 126–49. http://dx.doi.org/10.1177/0146621604271053.

5

van der Linden, Wim. "Applying the Rasch Model." International Journal of Testing 1, no. 3 (2001): 319–26. http://dx.doi.org/10.1207/s15327574ijt013&4_10.

6

Robitzsch, Alexander. "Regularized Mixture Rasch Model." Information 13, no. 11 (2022): 534. http://dx.doi.org/10.3390/info13110534.

Abstract:
The mixture Rasch model is a popular mixture model for analyzing multivariate binary data. The drawback of this model is that the number of estimated parameters substantially increases with an increasing number of latent classes, which, in turn, hinders the interpretability of model parameters. This article proposes regularized estimation of the mixture Rasch model that imposes some sparsity structure on class-specific item difficulties. We illustrate the feasibility of the proposed modeling approach by means of one simulation study and two simulated case studies.
7

Sen, Sedat, Allan S. Cohen, and Seock-Ho Kim. "Model Selection for Multilevel Mixture Rasch Models." Applied Psychological Measurement 43, no. 4 (2018): 272–89. http://dx.doi.org/10.1177/0146621618779990.

Abstract:
Mixture item response theory (MixIRT) models can be used to model heterogeneity among individuals from different subpopulations, but they do not account for the multilevel structure that is common in educational and psychological data. Multilevel extensions of MixIRT models have been proposed to address this shortcoming, and their successful application depends in part on detecting the best-fitting model. In this study, the performance of four information criteria (the Akaike information criterion [AIC], Bayesian information criterion [BIC], consistent Akaike information criterion [CAIC], and sample-size adjusted Bayesian information criterion [SABIC]) was compared for model selection with a two-level mixture Rasch model in the context of a real data example and a simulation study. Level 1 consisted of students and Level 2 consisted of schools. The simulation study examined the criteria under different sample sizes: both total sample size (number of students) and Level 2 sample size (number of schools) were used in calculating the information criteria. Results indicated that CAIC and BIC detected the true (i.e., generating) model better than the other indices, and that criteria based on total sample size yielded more accurate detections than those based on the Level 2 sample size.
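The four information criteria compared in this study are standard functions of a fitted model's maximized log-likelihood, its number of free parameters, and the sample size. A minimal sketch (the log-likelihood values, parameter counts, and sample sizes below are hypothetical):

```python
import math

def information_criteria(log_lik, n_params, n_obs):
    """Standard information criteria from a maximized log-likelihood.
    Smaller values indicate a better trade-off of fit and complexity."""
    aic = -2 * log_lik + 2 * n_params
    bic = -2 * log_lik + n_params * math.log(n_obs)
    caic = -2 * log_lik + n_params * (math.log(n_obs) + 1)
    sabic = -2 * log_lik + n_params * math.log((n_obs + 2) / 24)
    return {"AIC": aic, "BIC": bic, "CAIC": caic, "SABIC": sabic}

# Hypothetical fits of two- and three-class mixture Rasch models:
two_class = information_criteria(log_lik=-5210.4, n_params=41, n_obs=1000)
three_class = information_criteria(log_lik=-5198.7, n_params=62, n_obs=1000)
# CAIC and BIC penalize the extra class-specific parameters more heavily
# than AIC, so they are more conservative about adding latent classes.
```

Whether `n_obs` should count students (Level 1) or schools (Level 2) is exactly the question the simulation study addresses.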
8

DeMars, Christine E. "Multilevel Rasch Modeling: Does Misfit to the Rasch Model Impact the Regression Model?" Journal of Experimental Education 88, no. 4 (2019): 605–19. http://dx.doi.org/10.1080/00220973.2019.1610859.

9

Roskam, Edward E., and Paul G. W. Jansen. "Conditions for Rasch-dichotomizability of the unidimensional polytomous Rasch model." Psychometrika 54, no. 2 (1989): 317–32. http://dx.doi.org/10.1007/bf02294523.

10

Weitzman, R. A. "The Rasch Model Plus Guessing." Educational and Psychological Measurement 56, no. 5 (1996): 779–90. http://dx.doi.org/10.1177/0013164496056005005.

11

Maier, Kimberly S. "A Rasch Hierarchical Measurement Model." Journal of Educational and Behavioral Statistics 26, no. 3 (2001): 307–30. http://dx.doi.org/10.3102/10769986026003307.

Abstract:
In this article, a hierarchical measurement model is developed that enables researchers to measure a latent trait variable and model the error variance corresponding to multiple levels. The Rasch hierarchical measurement model (HMM) results when a Rasch IRT model and a one-way ANOVA with random effects are combined (Bryk & Raudenbush, 1992; Goldstein, 1987; Rasch, 1960). This model is appropriate for modeling dichotomous response strings nested within a contextual level. Examples of this type of structure include responses from students nested within schools and multiple response strings nested within people. Model parameter estimates of the Rasch HMM were obtained using the Bayesian data analysis methods of Gibbs sampling and the Metropolis-Hastings algorithm (Gelfand, Hills, Racine-Poon, & Smith, 1990; Hastings, 1970; Metropolis, Rosenbluth, Rosenbluth, Teller, & Teller, 1953). The model is illustrated with two simulated data sets and data from the Sloan Study of Youth and Social Development. The results are discussed and parameter estimates for the simulated data sets are compared to parameter estimates obtained using a two-step estimation approach.
12

Andrich, David. "Controversy and the Rasch Model." Medical Care 42, Supplement (2004): I-7. http://dx.doi.org/10.1097/01.mlr.0000103528.48582.7c.

13

Holster, Trevor A., and J. Lake. "Guessing and the Rasch Model." Language Assessment Quarterly 13, no. 2 (2016): 124–41. http://dx.doi.org/10.1080/15434303.2016.1160096.

14

Kelderman, Henk. "Common Item Equating Using the Loglinear Rasch Model." Journal of Educational Statistics 13, no. 4 (1988): 319–36. http://dx.doi.org/10.3102/10769986013004319.

Abstract:
A method is proposed to equate different sets of items administered to different groups of individuals using the Rasch model. A Rasch equating model is formulated that describes one common Rasch scale in different groups with different but overlapping sets of items. The item parameters can then be estimated simultaneously, avoiding different parameter estimates of common items in different groups. The model can be tested globally to test the hypothesis of one common Rasch scale, and the goodness of link can be tested. The method is based on the quasi-loglinear Rasch model.
15

Robitzsch, Alexander. "Relating the One-Parameter Logistic Diagnostic Classification Model to the Rasch Model and One-Parameter Logistic Mixed, Partial, and Probabilistic Membership Diagnostic Classification Models." Foundations 3, no. 3 (2023): 621–33. http://dx.doi.org/10.3390/foundations3030037.

Abstract:
Diagnostic classification models (DCMs) are statistical models with discrete latent variables (so-called skills) to analyze multiple binary variables (i.e., items). The one-parameter logistic diagnostic classification model (1PLDCM) is a DCM with one skill and shares desirable measurement properties with the Rasch model. This article shows that the 1PLDCM is indeed a latent class Rasch model. Furthermore, the relationship of the 1PLDCM to extensions of the DCM to mixed, partial, and probabilistic memberships is treated. It is argued that the partial and probabilistic membership models are also equivalent to the Rasch model. The fit of the different models was empirically investigated using six datasets. It turned out for these datasets that the 1PLDCM always had a worse fit than the Rasch model and mixed and partial membership extensions of the DCM.
16

Nitta, Hideo, and Takuya Aiba. "An Alternative Learning Gain Based on the Rasch Model." Physics Educator 1, no. 1 (2019): 1950005. http://dx.doi.org/10.1142/s2661339519500057.

Abstract:
Using the Rasch model for the pretest–posttest analysis, a learning gain, the “Rasch gain”, is introduced as the simple difference of the estimated ability parameter for students. It is shown that, although the Rasch gain strongly correlates with the normalized learning gain introduced by Hake, the Rasch gain has advantages over the Hake gain as a scientific measure.
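The two gain measures contrasted in this abstract are both one-liners: Hake's gain normalizes the raw-score improvement, while the Rasch gain is a difference of ability estimates in logits. A sketch with hypothetical pretest/posttest values:

```python
def hake_gain(pre, post, max_score=100.0):
    """Hake's normalized gain: fraction of the available raw-score
    improvement that was actually realized."""
    return (post - pre) / (max_score - pre)

def rasch_gain(theta_pre, theta_post):
    """Rasch gain: simple difference of estimated ability parameters."""
    return theta_post - theta_pre

# Hypothetical student: raw score 40% -> 70%, ability -0.5 -> 0.75 logits.
print(hake_gain(40, 70))       # → 0.5
print(rasch_gain(-0.5, 0.75))  # → 1.25
```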
17

de Jong, John H. A. L. "Le Modele De Rasch." Taaltoetsen 31 (January 1, 1988): 57–70. http://dx.doi.org/10.1075/ttwia.31.07jon.

Abstract:
This paper provides an elementary introduction to the one parameter psychometric model known as the Rasch model. It explains the basic principles underlying the model and the concepts of unidimensionality, local stochastic independence, and additivity in non-mathematical terms. The requirements of measurement procedures, the measurement of latent traits, the control on model fit, and the definition of a trait are discussed. It is argued that the Rasch model is particularly appropriate to understand the mutual dependence of test reliability and validity. Examples from foreign language listening comprehension tests are used to illustrate the application of the model to a test validation procedure.
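As a concrete anchor for the one-parameter model introduced here: under the dichotomous Rasch model the probability of a correct response depends only on the difference between person ability and item difficulty, both expressed in logits.

```python
import math

def rasch_probability(theta, b):
    """Dichotomous Rasch model:
    P(X = 1 | theta, b) = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# When ability equals difficulty the probability is exactly one half,
# and it rises monotonically as ability exceeds difficulty.
print(rasch_probability(0.0, 0.0))  # → 0.5
print(rasch_probability(2.0, 0.0) > rasch_probability(1.0, 0.0))  # → True
```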
18

Khotimah, Khotimah, Tb Sofwan Hadi, and Indri Lestari. "Application of The Rasch Model in Research Publications: A Bibliometric Analysis." Plusminus: Jurnal Pendidikan Matematika 4, no. 2 (2024): 229–40. https://doi.org/10.31980/plusminus.v4i2.1466.

Abstract:
The Rasch model is an ideal technique for consistently assessing responses and improving data analysis. It is a useful instrument for measuring achievement in educational and psychological tests, but it is important to identify the model's potential limitations and study them in future research. A bibliometric analysis was used to determine research trends in the use of the Rasch model. An extensive literature search was conducted in the Scopus database on February 19, 2023, using the keyword "Rasch model". The findings suggest that the Rasch model is a widely accepted and useful testing tool that may be more effective than alternative methods. Although the Rasch model is useful for calibrating psychological tests, it is important to address the problems that may occur when using it. It is therefore essential to evaluate results carefully and understand the model's limitations when using psychological tests.
19

Tarigan, Elsa Febrina, Suriati Nilmarito, Khairani Islamiyah, Ayi Darmana, and Retno Dwi Suyanti. "Analisis Instrumen Tes Menggunakan Rasch Model dan Software SPSS 22.0." Jurnal Inovasi Pendidikan Kimia 16, no. 2 (2022): 92–96. http://dx.doi.org/10.15294/jipk.v16i2.30530.

Abstract:
This study aims to compare the validity and reliability of a test instrument as assessed with the Rasch model and with SPSS 22.0. It is a qualitative, descriptive study in which a 40-item test with five options (a, b, c, d, and e) was administered to 40 students of Universitas Negeri Medan. The data were analyzed through a classical test theory approach with SPSS version 22.0 and through the Rasch model with Winsteps. Both analyses showed good validity accuracy, with only small differences in the item validity results. The SPSS 22.0 analysis identified 20 valid and 20 invalid items, whereas the Rasch model with Winsteps identified 17 valid and 23 invalid items. In terms of data accuracy, validation analysis with the Rasch model in Winsteps was more accurate than with SPSS 22.0, and the validation results are what differentiate the two reliability estimates: the reliability test yielded 0.828 (high category) with SPSS 22.0 and 0.69 (moderate category) with the Rasch model.
20

Basran, Afiqah, and Denis Lajium. "APPLICATION OF RASCH MODEL IN TESTING FORCE CONCEPT INVENTORY." International Journal of Modern Education 2, no. 6 (2020): 14–27. http://dx.doi.org/10.35631/ijmoe.26003.

Abstract:
Inventori Konsep Daya is an instrument adapted from the Force Concept Inventory (FCI). It consists of 30 diagnostic items on the concept of force and motion and is widely used in physics education. However, the validity of the Bahasa Malaysia version has not been well studied, so it is unclear whether its items function properly. Based on previous research, one of the major issues often raised about the FCI is the instrument's reliability when administered to different groups. Studies conducted in this country typically rely on reliability analysis under classical test theory, which has several known weaknesses. The purpose of this study is therefore to apply the Rasch model, under item response theory, to analyze the items of Inventori Konsep Daya. Several analyses were selected to determine the validity of the items and the instrument. The study will be conducted on three levels of students involved in learning force and motion concepts: 300 samples will be taken from school students, elementary or matriculation students, and undergraduate students who have studied this topic. The data will be analyzed using Winsteps software. The results showed that Inventori Konsep Daya is a good instrument, with high reliability and separation indices, positive polarity values for every item, and fit to the Rasch model, although the instrument was quite difficult for the respondents in this study. This study is important in providing information to other researchers who will use the FCI in their studies, and its findings can be compared with previous studies to draw more accurate conclusions.
21

Widhiarso, Wahyu. "APLIKASI MODEL RASCH CAMPURAN DALAM MENGEVALUASI PENGUKURAN HARGA DIRI." Jurnal Penelitian dan Evaluasi Pendidikan 17, no. 1 (2013): 172–87. http://dx.doi.org/10.21831/pep.v17i1.1367.

Abstract:
This study aimed to explore the existence of groups of respondents that cause item parameter estimates under Rasch modeling to be non-invariant across all respondents. The mixed Rasch model, which combines the Rasch model with latent class analysis, was employed. Using self-esteem measurement data, the full sample of 2,987 respondents could be categorized into three classes based on their response patterns on the scale. Estimating item parameters within each class using the partial credit model showed that the three classes had different item parameters. Two classes fit the model reasonably well, while the third did not, because its respondents answered the scale in a unique way. The proportion of respondents with unique responses is relatively small (12.5%), so they do not substantially interfere with item parameter estimation across the full item set. Keywords: mixed Rasch model, item parameters, respondent classes.
22

Zhan, Peida, Wen-Chung Wang, Lijun Wang, and Xiaomin Li. "The Multidimensional Testlet-Effect Rasch Model." Acta Psychologica Sinica 46, no. 8 (2014): 1208. http://dx.doi.org/10.3724/sp.j.1041.2014.01208.

23

Praekanata, I. Wayan Indra, Kadek Suranata, I. Ketut Gading, and Luh Putu Sri Lestari. "Analisis Politomi Rasch Model Skala PTSD." Jurnal Konseling dan Pendidikan 12, no. 4 (2024): 12–24. https://doi.org/10.29210/1119800.

Abstract:
Post-Traumatic Stress Disorder (PTSD) is a significant mental health issue, particularly in Indonesia with its complex cultural diversity. This study aims to assess the validity of the Indonesian Version DSM-V PTSD scale using an Item Response Theory (IRT) approach through the Rasch model. The research method involved 70 respondents who experienced trauma, measured using a Likert scale consisting of 20 items. Data were collected from an online questionnaire and analysed using the Rasch Polytomous model. The results indicate that the PTSD scale has good reliability, with a Person Reliability value of 0.865. The analysis revealed that several items have varying difficulty levels, suggesting the need for adjustments to reflect local experiences. The Partial Credit Model (PCM) was found to be more suitable than the Rating Scale Model (RSM), with likelihood ratio test results showing significant differences (χ² = 155, p < .001). This study provides new insights into the importance of validating psychological measurement tools in diverse cultural contexts and contributes to the development of better mental health policies in Indonesia.
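The likelihood-ratio test reported here compares the nested Rating Scale and Partial Credit models via their maximized log-likelihoods; a minimal sketch (the log-likelihood values below are hypothetical, not the study's):

```python
def lr_statistic(ll_restricted, ll_general):
    """Likelihood-ratio statistic for nested models, e.g. RSM (restricted)
    vs. PCM (general). Under H0 it is approximately chi-square distributed
    with df equal to the difference in free parameters."""
    return 2.0 * (ll_general - ll_restricted)

# Hypothetical log-likelihoods: the more general PCM fits better.
print(lr_statistic(-950.0, -940.0))  # → 20.0
```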
24

Smith, Richard M. "Person Fit in the Rasch Model." Educational and Psychological Measurement 46, no. 2 (1986): 359–72. http://dx.doi.org/10.1177/001316448604600210.

25

Müller, Hans. "A Rasch model for continuous ratings." Psychometrika 52, no. 2 (1987): 165–81. http://dx.doi.org/10.1007/bf02294232.

26

Andersen, Erling B. "Residual analysis in the polytomous Rasch model." Psychometrika 60, no. 3 (1995): 375–93. http://dx.doi.org/10.1007/bf02294382.

27

Glas, C. A. W. "The Rasch Model and Multistage Testing." Journal of Educational Statistics 13, no. 1 (1988): 45–52. http://dx.doi.org/10.3102/10769986013001045.

Abstract:
This paper concerns the problem of estimating the item parameters of latent trait models in a multistage testing design. It is shown that using the Rasch model and conditional maximum likelihood estimates does not lead to solvable estimation equations. It is also shown that marginal maximum likelihood estimation, which assumes a sample of subjects from a population with a specified distribution of ability, will lead to solvable estimation equations, both in the Rasch model and in the Birnbaum model.
28

Glas, C. A. W. "The Rasch Model and Multistage Testing." Journal of Educational Statistics 13, no. 1 (1988): 45. http://dx.doi.org/10.2307/1164950.

29

Avlund, Kirsten, Svend Kreiner, and Kirsten Schultz-Larsen. "Construct validation and the Rasch model." Scandinavian Journal of Social Medicine 21, no. 4 (1993): 233–44. http://dx.doi.org/10.1177/140349489302100403.

30

Graßhoff, Ulrike, Heinz Holling, and Rainer Schwabe. "Optimal Designs for the Rasch Model." Psychometrika 77, no. 4 (2012): 710–23. http://dx.doi.org/10.1007/s11336-012-9276-2.

31

Triana, Apri. "Analisis Model Rasch pada Tes Diagnostik Kesulitan Belajar Matematika Siswa SMA." Jurnal Tarbiyah dan Ilmu Keguruan Borneo 5, no. 1 (2024): 119–32. http://dx.doi.org/10.21093/jtikborneo.v5i3.7135.

Abstract:
This study aims to determine the characteristics of a diagnostic test of mathematics learning difficulties using Rasch model analysis. The data analyzed were 323 response sets from grade XII science students in mathematics. The results showed that the diagnostic test items fit the Rasch model analysis; of the 323 students who took the test, 298 response sets fit the model and could be analyzed. Rasch analysis has a single parameter, item difficulty, and the item difficulties fitting both the Rasch model and the characteristics of a diagnostic test were those of medium level, from 0.30 to 0.80. Twenty-six items matched the characteristics of a diagnostic test and the parameters of the analysis model, while four items had to be set aside from the diagnostic test set. The reliability coefficient of the diagnostic test was 0.84, meaning the test is highly reliable for measuring the mathematics ability of grade XII science students. The test set is not suitable for test takers with ability below -3 or above 3.
32

Baghaei, Purya, and Philipp Doebler. "Introduction to the Rasch Poisson Counts Model: An R Tutorial." Psychological Reports 122, no. 5 (2018): 1967–94. http://dx.doi.org/10.1177/0033294118797577.

Abstract:
The Rasch Poisson Counts Model is the oldest Rasch model, developed by the Danish mathematician Georg Rasch in 1952. Nevertheless, the model has had limited applications in psychoeducational assessment. With the rise of neurocognitive and psychomotor testing, there is more room for new applications of the model where other item response theory models cannot be applied. In this paper, we give a general introduction to the Rasch Poisson Counts Model and then, using data from an attention test, walk the reader through how to use the "lme4" package in R to estimate the model and interpret the outputs.
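In the Rasch Poisson Counts Model introduced above, the count for person p on item i follows a Poisson distribution whose rate is the product of person ability and item easiness. A small illustration (the parameter values are hypothetical; the tutorial itself estimates the model with lme4 in R):

```python
import math

def rpcm_pmf(k, theta, epsilon):
    """Rasch Poisson Counts Model: P(X = k) for a count k, where the
    Poisson rate is lambda = theta * epsilon (ability times easiness)."""
    lam = theta * epsilon
    return math.exp(-lam) * lam ** k / math.factorial(k)

# Hypothetical attention-test item: theta = 4.0, epsilon = 0.5,
# so the expected count is lambda = 2.0.
probs = [rpcm_pmf(k, 4.0, 0.5) for k in range(20)]
print(sum(probs))  # ≈ 1.0: the probabilities over counts sum to one
```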
33

Komboz, Basil, Carolin Strobl, and Achim Zeileis. "Tree-Based Global Model Tests for Polytomous Rasch Models." Educational and Psychological Measurement 78, no. 1 (2016): 128–66. http://dx.doi.org/10.1177/0013164416664394.

Abstract:
Psychometric measurement models are only valid if measurement invariance holds between test takers of different groups. Global model tests, such as the well-established likelihood ratio (LR) test, are sensitive to violations of measurement invariance, such as differential item functioning and differential step functioning. However, these traditional approaches are only applicable when comparing previously specified reference and focal groups, such as males and females. Here, we propose a new framework for global model tests for polytomous Rasch models based on a model-based recursive partitioning algorithm. With this approach, a priori specification of reference and focal groups is no longer necessary, because they are automatically detected in a data-driven way. The statistical background of the new framework is introduced along with an instructive example. A series of simulation studies illustrates and compares its statistical properties to the well-established LR test. While both the LR test and the new framework are sensitive to differential item functioning and differential step functioning and respect a given significance level regardless of true differences in the ability distributions, the new data-driven approach is more powerful when the group structure is not known a priori—as will usually be the case in practical applications. The usage and interpretation of the new method are illustrated in an empirical application example. A software implementation is freely available in the R system for statistical computing.
34

Preinerstorfer, David, and Anton K. Formann. "Parameter recovery and model selection in mixed Rasch models." British Journal of Mathematical and Statistical Psychology 65, no. 2 (2011): 251–62. http://dx.doi.org/10.1111/j.2044-8317.2011.02020.x.

35

Herwin, Herwin, Andi Tenriawaru, and Abdoulaye Fane. "Math elementary school exam analysis based on the Rasch model." Jurnal Prima Edukasia 7, no. 2 (2019): 106–13. http://dx.doi.org/10.21831/jpe.v7i2.24450.

Abstract:
This study aims to analyze the quality of a mathematics examination in elementary schools using the Rasch model. This is descriptive quantitative research. The subjects of the study were all 40 items of the school mathematics examination in SDN Region III of Donri Donri Subdistrict, Soppeng Regency, together with the answer sheets of 125 examinees. Data were collected through documentation, which yielded the set of questions, the answers, and the list of examinees' names. The data were analyzed using the Rasch model. The results showed that, of the 40 items on the mathematics exam, 33 items (82.5%) were in the good category, while the other seven items (17.5%) were in the poor category. The results also indicate a test information value of 13.8 at an ability scale of -1.5, with a measurement error of 0.26.
36

Khair, Muhammad Dhiyaul, and Sukaesi Marianti. "Individual ability on high-stakes test: Choosing cumulative score or rasch for scoring model." Jurnal Penelitian dan Evaluasi Pendidikan 28, no. 1 (2024): 79–93. https://doi.org/10.21831/pep.v28i1.71661.

Abstract:
In a test, a method is required to estimate an individual's ability based on their responses. Typically, this is done by summing the correct responses or calculating a cumulative score. An alternative method is the Rasch model. This study aims to determine whether an individual's position, based on cumulative score estimates, remains unchanged or changes when compared with ability estimates using Rasch on dichotomous responses. The study uses open-source data from the 2018 Program for International Student Assessment (PISA) by the Organization for Economic Co-operation and Development (OECD) and involves 317 Indonesian students. Ability analysis will be conducted on Math and Reading aspects using cumulative scores and Rasch with dichotomous responses. The study will employ data analysis techniques such as Rasch, paired samples t-test, and descriptive statistical analysis. The cumulative score and Rasch results will be tested using a paired samples t-test, and a comparison of the cumulative score and Rasch estimation results will be carried out using descriptive statistical analysis. The study results indicate that there are differences in individual positions based on ability estimates using cumulative score and Rasch. These differences are caused by variations in scores. Therefore, even if two individuals have the same cumulative score, they may have different Rasch estimates.
37

Wright, BD, JM Linacre, RM Smith, AW Heinemann, and CV Granger. "FIM measurement properties and Rasch model details." Journal of Rehabilitation Medicine 29, no. 4 (1997): 267–72. http://dx.doi.org/10.2340/165019771997267272.

Abstract:
To summarize, we take issue with the criticisms of Dickson & Köhler for two main reasons: 1. Rasch analysis provides a model from which to approach the analysis of the FIM, an ordinal scale, as an interval scale. The existence of examples of items or individuals which do not fit the model does not disprove the overall efficacy of the model; and 2. the principal components analysis of FIM motor items as presented by Dickson & Köhler tends to undermine rather than support their argument. Their own analyses produce a single major factor explaining between 58.5 and 67.1% of the variance, depending upon the sample, with secondary factors explaining much less variance. Finally, analysis of item response, or latent trait, is a powerful method for understanding the meaning of a measure. However, it presumes that item scores are accurate. Another concern is that Dickson & Köhler do not address the issue of reliability of scoring the FIM items on which they report, a critical point in comparing results. The Uniform Data System for Medical Rehabilitation (UDSMR) expends extensive effort in the training of clinicians of subscribing facilities to score items accurately. This is followed up with a credentialing process. Phase 1 involves the testing of individual clinicians who are submitting data to determine if they have achieved mastery over the use of the FIM instrument. Phase 2 involves examining the data for outlying values. When Dickson & Köhler investigate more carefully the application of the Rasch model to their FIM data, they will discover that the results presented in their paper support rather than contradict their application of the Rasch model! This paper is typical of supposed refutations of Rasch model applications. Dickson & Köhler will find that idiosyncrasies in their data and misunderstandings of the Rasch model are the only basis for a claim to have disproven the relevance of the model to FIM data.
The Rasch model is a mathematical theorem (like Pythagoras') and so cannot be disproven by empirical data once it has been deduced on theoretical grounds. Sometimes empirical data are not suitable for construction of a measure. When this happens, the routine fit statistics indicate the unsuitable segments of the data. Most FIM data do conform closely enough to the Rasch model to support generalizable linear measures. Science can advance!
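The "routine fit statistics" the abstract refers to can be sketched briefly. The following snippet is illustrative only and is not taken from the paper: it shows the dichotomous Rasch response probability and an unweighted (outfit) mean-square of the kind used to flag data segments that do not conform to the model.

```python
import math

def rasch_prob(theta, b):
    """Dichotomous Rasch model: probability of an affirmative response
    for a person of ability theta on an item of difficulty b (logits)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def outfit_msq(responses, theta, difficulties):
    """Unweighted mean-square fit: the average squared standardized
    residual across items. Values near 1.0 indicate model-conforming data."""
    z2 = []
    for x, b in zip(responses, difficulties):
        p = rasch_prob(theta, b)
        z2.append((x - p) ** 2 / (p * (1.0 - p)))
    return sum(z2) / len(z2)
```

For a person at theta = 0 answering two items of difficulty 0 with one success and one failure, the outfit mean-square is exactly 1.0, the model-conforming baseline.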
APA, Harvard, Vancouver, ISO, and other styles
38

Agust, Satria. "How Does Rasch Model Reveal Dishonesty between Coastal Students and Easy Grammar Test?" Jurnal Iqra' : Kajian Ilmu Pendidikan 4, no. 2 (2019): 214–30. http://dx.doi.org/10.25217/ji.v4i2.531.

Full text
Abstract:
Academic dishonesty can occur with the support of technological devices, and it can also be prevented with the help of technology through applications such as the Rasch Model, which provides detailed information on the analyzed data and can trace academic dishonesty such as cheating. The aims of this research are to (1) analyze whether the grammar test items are difficult or easy using the Rasch Model, (2) determine the percentage of students assumed to have cheated, broken down by origin and sex, and (3) expose their response patterns on the multiple-choice grammar test through Rasch Model analysis. The researcher hypothesized that academic dishonesty, i.e. cheating, occurred among students from both rural and urban areas of Riau Archipelago Province. The results of this research were: (1) based on the students' responses analyzed by the Rasch Model, the grammar test targeted medium ability; (2) the Rasch Model revealed that 5.71%, or 4 of 70 students, were identified as cheating while working on the grammar test, namely two female students from a rural area and two male students from an urban area; and (3) the Rasch Model revealed that their responses did not represent their ability. The Rasch Model helped the researcher expose cheating on exams; practitioners now need methods, approaches, strategies, techniques, and media to prevent it in the future.
Keywords: Rasch Model, Wright Maps, Grammar Test.
APA, Harvard, Vancouver, ISO, and other styles
39

Paek, Insu, and Mark Wilson. "Formulating the Rasch Differential Item Functioning Model Under the Marginal Maximum Likelihood Estimation Context and Its Comparison With Mantel–Haenszel Procedure in Short Test and Small Sample Conditions." Educational and Psychological Measurement 71, no. 6 (2011): 1023–46. http://dx.doi.org/10.1177/0013164411400734.

Full text
Abstract:
This study elaborates the Rasch differential item functioning (DIF) model formulation under the marginal maximum likelihood estimation context. Also, the Rasch DIF model performance was examined and compared with the Mantel–Haenszel (MH) procedure in small sample and short test length conditions through simulations. The theoretically known relationship of the DIF estimators between the Rasch DIF model and the MH procedure was confirmed. In general, the MH method showed a conservative tendency for DIF detection rates compared with the Rasch DIF model approach. When there is DIF, the z test (when the standard error of the DIF estimator is estimated properly) and the likelihood ratio test in the Rasch DIF model approach showed higher DIF detection rates than the MH chi-square test for sample sizes of 100 to 300 per group and test lengths ranging from 4 to 39. In addition, this study discusses proposed Rasch DIF classification rules that accommodate statistical inference on the direction of DIF.
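As a rough illustration of the Mantel–Haenszel side of this comparison (not code from the study; the table layout is an assumption), the MH procedure pools 2×2 tables of right/wrong counts across raw-score strata into a common log-odds ratio, which under the Rasch model approximates the between-group difference in item difficulty:

```python
import math

def mh_log_odds(strata):
    """Mantel-Haenszel common log-odds ratio across score strata.
    Each stratum is (a, b, c, d): reference right, reference wrong,
    focal right, focal wrong. A value near 0 suggests no DIF."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return math.log(num / den)
```

A balanced stratum such as (10, 10, 10, 10) yields 0 (no DIF); the conventional ETS delta scale rescales this statistic by a factor of -2.35.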
APA, Harvard, Vancouver, ISO, and other styles
40

Jaya, Petrus Redy Partus, Beata Palmin, and Theresia Alviani Sum. "Pengujian Instrumen Akreditasi PAUD dengan Model Rasch." Jurnal Obsesi : Jurnal Pendidikan Anak Usia Dini 8, no. 4 (2024): 687–98. http://dx.doi.org/10.31004/obsesi.v8i4.5953.

Full text
Abstract:
The quality of Early Childhood Education (PAUD) is strongly influenced by valid and reliable accreditation instruments that can provide an objective assessment of the cognitive stimulation delivered by teachers. This study aims to evaluate and critique the validity and reliability of the PAUD accreditation visitation instrument using the 1-Parameter Logistic (1PL) Rasch Model. The study used a quantitative approach involving 124 PAUD institutions and 47 assessors in 2023. Data were collected through assessments of teacher performance in stimulating the cognitive development of young children. Data were analyzed with the Rasch Model, evaluating item fit to the model and the spread of item difficulty. The results show that most items in the instrument fit the Rasch model and exhibit an adequate range of difficulty. Several items were identified as needing revision to improve the quality of the instrument. These findings confirm the validity and reliability of the instrument and yield recommendations for improving PAUD accreditation in Indonesia. The implications include improving the consistency and accuracy of assessment through instrument refinement and assessor training.
APA, Harvard, Vancouver, ISO, and other styles
41

Zafrullah, Sa'adatul Ulwiyah, and Nofriyandi. "RASCH MODEL ANALYSIS ON MATHEMATICS TEST INSTRUMENTS: BIBLIOSHINY (1983-2023)." Mathematics Research and Education Journal 7, no. 2 (2023): 1–13. http://dx.doi.org/10.25299/mrej.2023.vol7(2).14550.

Full text
Abstract:
The Rasch Model is a very useful tool for testing the quality of a measurement instrument, including mathematics tests. The Rasch model is part of item response theory and can place item and person estimates on a single distribution map. This research analyzes research trends regarding Rasch models in mathematics instruments, using bibliometric analysis to understand the current state of research in this field. The analysis shows that publications on Rasch models in mathematics instruments increased rapidly from 1983 to 2023, with a total of 173 articles. Universiti Kebangsaan Malaysia holds the highest affiliation count, with 13 articles. The Journal of Applied Measurement is the most prolific outlet, with 10 articles and 113 citations. Printy (2008) and Clements et al. (2008) are the most-cited sources, with 139 citations. Azrilah Abdul Aziz, Clelia Cascella, Chiara Giberti, and Azami Zaharim are the authors with the largest number of articles on Rasch models in mathematics instruments, with 4 articles. Emerging themes in this analysis are mathematics education, engineering education, validity, reliability, gender, and differential item functioning, keywords that future researchers can consider.
APA, Harvard, Vancouver, ISO, and other styles
42

Lee, Seungbak, Hyo-Jun Yun, Minsoo Jeon, and Minsoo Kang. "Validating Athletes’ Subjective Performance Scale: A Rasch Model Analysis." IJASS(International Journal of Applied Sports Sciences) 35, no. 2 (2023): 238–50. http://dx.doi.org/10.24985/ijass.2023.35.2.238.

Full text
Abstract:
This study aimed to validate the athletes’ subjective performance scale (ASPS) and examine its optimal categorization with Korean university athletes using the Rasch model. A six-item ASPS with ten response categories was administered to 201 Korean university athletes participating in team sport events. The Rasch measurement program, Winsteps (version 4.6.2.1), was used to perform Rasch analysis. The results showed that the model was a good fit for the data. The Wright-Andrich map indicated ceiling and floor effects, as ASPS items were unable to measure individuals with logits beyond 3 or below -2.5. Furthermore, the reliability of item separation and person separation demonstrated acceptable confidence. Lastly, the findings indicated that the ASPS, which utilized a 10-category rating scale, was problematic due to disordered thresholds. The exploratory analysis revealed that both six- and seven-category rating scales appeared to comply with the effective classification criteria, but further research is needed for confirmatory analysis. Previous research has explored the relationships between psychometric factors and subjective performance; however, this study offers valuable insights into optimal categorization and introduces an innovative approach to measuring athletes’ subjective performance. To assess subjective sport performance satisfaction, the authors propose employing a six-category rating scale, which this study found to be reliable and valid in relation to the construct.
APA, Harvard, Vancouver, ISO, and other styles
43

Carvalho, Lucas de Francisco, Ricardo Primi, and Gregory J. Meyer. "Application of the Rasch model in measuring personality disorders." Trends in Psychiatry and Psychotherapy 34, no. 2 (2012): 101–9. http://dx.doi.org/10.1590/s2237-60892012000200009.

Full text
Abstract:
OBJECTIVE: To describe item and person parameters obtained with the Rasch model, one of the item response theory models, in the assessment of personality disorders based on Millon's theory. METHOD: A total of 350 people participated in the study. Age ranged from 18 to 67 years (mean ± standard deviation = 27.02±10.13), and 71.7% of the participants (n = 251) were female. Of the 350 individuals, 21.1% (n = 74) answered affirmatively about being under psychiatric treatment and taking psychiatric medications. The Personality Disorders Dimensional Inventory (PDDI), an instrument designed to assess personality disorders according to Millon's theory, was applied to all participants. Data were analyzed using the Rasch model. RESULTS: Overall, analysis with the Rasch model revealed that the PDDI has adequate psychometric properties for the assessment of personality disorders. CONCLUSION: Among the contributions of item response theory models for clinical instruments, the Rasch person-item map deserves to be highlighted as a successful attempt to improve the understanding of clinical scores obtained in response to particular test items.
APA, Harvard, Vancouver, ISO, and other styles
44

De Leeuw, Jan, and Norman Verhelst. "Maximum Likelihood Estimation in Generalized Rasch Models." Journal of Educational Statistics 11, no. 3 (1986): 183–96. http://dx.doi.org/10.3102/10769986011003183.

Full text
Abstract:
We review various models and techniques that have been proposed for item analysis according to the ideas of Rasch. A general model is proposed that unifies them, and maximum likelihood procedures are discussed for this general model. We show that unconditional maximum likelihood estimation in the functional Rasch model, as proposed by Wright and Haberman, is an important special case. Conditional maximum likelihood estimation, as proposed by Rasch and Andersen, is another important special case. Both procedures are related to marginal maximum likelihood estimation in the structural Rasch model, which has been studied by Sanathanan, Andersen, Tjur, Thissen, and others. Our theoretical results lead to suggestions for alternative computational algorithms.
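The conditional approach mentioned above rests on the fact that, given a person's raw score, the probability of a response pattern no longer involves the person parameter. A minimal sketch (illustrative only, with item "easiness" defined as eps_i = exp(-b_i)) uses the elementary symmetric functions of the easiness parameters:

```python
def esf(eps):
    """Elementary symmetric functions gamma_0..gamma_k of the item
    easiness parameters, built with the standard summation recursion."""
    g = [1.0]
    for e in eps:
        g = [(g[r] if r < len(g) else 0.0)
             + (e * g[r - 1] if r > 0 else 0.0)
             for r in range(len(g) + 1)]
    return g

def cond_prob(pattern, eps):
    """Probability of a 0/1 response pattern conditional on its raw
    score; the person parameter cancels, which is the basis of CML."""
    num = 1.0
    for x, e in zip(pattern, eps):
        num *= e ** x
    return num / esf(eps)[sum(pattern)]
```

For two equally easy items, each of the two patterns with raw score 1 has conditional probability 0.5, regardless of the person's ability.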
APA, Harvard, Vancouver, ISO, and other styles
45

Priyani, Tanti, and Bowo Sugiharto. "Analysis of biology midterm exam items using a comparison of the classical theory test and the Rasch model." JPBI (Jurnal Pendidikan Biologi Indonesia) 10, no. 3 (2024): 939–58. http://dx.doi.org/10.22219/jpbi.v10i3.34345.

Full text
Abstract:
In biology learning, test instruments are essential for assessing students' understanding of complex concepts. A test instrument is a crucial factor in learning evaluation; however, careful analysis of such instruments remains rare in practice. This descriptive quantitative study analyzes the quality of test items using the classical approach, in terms of validity, reliability, difficulty index, discrimination power, and distractor effectiveness, alongside Rasch model analysis. The data consist of 30 multiple-choice questions from a biology midterm exam administered to 40 students. Classical test data were analyzed with Microsoft Excel, and Rasch model analysis used Winsteps software. The validity results from both approaches show 14 valid questions and 16 invalid ones. The reliability scores are 0.619 (adequate) for the classical approach's Cronbach's Alpha, 0.85 (good) for Rasch item reliability, and 0.65 (weak) for Rasch person reliability. Classical test theory and the Rasch model both categorize item difficulty into four levels. The classical approach produces five categories for item discrimination, while the Rasch model identifies three item groups based on the item separation index (H=3.45) and two groups based on respondent ability (H=1.96). Distractor effectiveness shows 93.3% functional distractors under the classical test and 80% under the Rasch model. The Rasch model offers greater precision in measuring student ability and detecting bias. Both models should be integrated for comprehensive item analysis. Future tests should focus on improving invalid items and the quality of distractors.
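Two of the classical statistics compared above are simple to compute. A hedged sketch (not the authors' spreadsheet; function names are assumed) of the difficulty index and point-biserial discrimination:

```python
def difficulty_index(item):
    """Classical difficulty index: proportion of correct (1) responses."""
    return sum(item) / len(item)

def point_biserial(item, totals):
    """Discrimination as the Pearson correlation between a 0/1 item
    score and each student's total test score."""
    n = len(item)
    mx, my = sum(item) / n, sum(totals) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(item, totals)) / n
    sx = (sum((x - mx) ** 2 for x in item) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in totals) / n) ** 0.5
    return cov / (sx * sy)
```

An item answered correctly by exactly the higher-scoring half of the class has difficulty 0.5 and a point-biserial of 1.0, the idealized case of a perfectly discriminating item.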
APA, Harvard, Vancouver, ISO, and other styles
46

Permatasari, Nila. "Analisis kualitas instrumen penilaian materi keanekaragaman hayati melalui tes klasik dan Rasch model." Bio-Pedagogi 14, no. 1 (2025): 10. https://doi.org/10.20961/bio-pedagogi.v14i1.88470.

Full text
Abstract:
Learning evaluation functions as an indicator. Assessment is an activity that involves interpreting measurement data according to certain criteria or rules. An effective assessment instrument must meet requirements including validity, reliability, difficulty level, discriminating power, and distractor effectiveness. This study aims to compare the results of assessment-instrument analysis between classical test theory and the Rasch model on an assessment of Biodiversity material. The study was conducted in grade VII at a junior high school in Sukoharjo. This is a quantitative descriptive study with random sampling. Data were analyzed using Microsoft Excel and Winsteps. The validity analysis through classical test theory categorized the instrument as fair/moderate quality (5 valid items), while Rasch model analysis indicated good item quality (6 valid items). Item reliability was 0.458 (moderate) under classical test theory; the Rasch model yielded weak person reliability (0.34) and good item reliability (0.86). The difficulty level, according to both classical test theory and the Rasch model, was distributed into three groups. The discriminating power under classical test theory comprised 5 items of adequate value, 4 of high value, and 1 of very high value; under the Rasch model, four item groups could be identified, while persons formed only one group. For distractor effectiveness, both approaches showed mostly good quality, with a small number of non-functioning distractors. The Rasch model approach is considered better because it is more objective, easier to interpret, flexible, and statistically robust.
APA, Harvard, Vancouver, ISO, and other styles
47

Mursidi, Andi, and Soeharto Soeharto. "AN INTRODUCTION: EVALUATION OF QUALITY ASSURANCE FOR HIGHER EDUCATIONAL INSTITUTIONS USING RASCH MODEL." JETL (Journal Of Education, Teaching and Learning) 1, no. 1 (2017): 1. http://dx.doi.org/10.26737/jetl.v1i1.25.

Full text
Abstract:
This is a descriptive qualitative study of quality assurance evaluation. The research aims to introduce Rasch model analysis for evaluating higher education institutions against quality assurance standards that have been developed to evaluate each member of the institution, including instructors and staff. An instrument was developed and administered to provide a raw data sample for practical Rasch model analysis in this research. The first part of this paper explains the definitions of quality assurance and Rasch model analysis. The second part introduces Rasch model analysis of the sample data. The third part gives a brief summary of the results and the important findings of the quality assurance evaluation. Analyzing quality assurance evaluation data with the Rasch model will help higher educational institutions improve and develop their quality assurance and become better institutions.
APA, Harvard, Vancouver, ISO, and other styles
48

Strobl, Carolin, Julia Kopf, and Achim Zeileis. "Rasch Trees: A New Method for Detecting Differential Item Functioning in the Rasch Model." Psychometrika 80, no. 2 (2013): 289–316. http://dx.doi.org/10.1007/s11336-013-9388-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Iseppi, Luca, Marcella Rizzo, Enrico Gori, Federico Nassivera, Ivana Bassi, and Alessandro Scuderi. "Rasch Model for Assessing Propensity to Entomophagy." Sustainability 13, no. 8 (2021): 4346. http://dx.doi.org/10.3390/su13084346.

Full text
Abstract:
The Food and Agriculture Organization of the United Nations supports the production of edible insects as a promising and sustainable source of nutrients to meet the increasing demand for animal-derived products from the growing world population. Although insects are part of the diet of more than two billion people worldwide, the practice of eating insects (entomophagy) raises challenging questions for Western countries where this is not a customary practice. The research applied Rasch models and showed that, in the case of hunger or need, 70.8% of the sample declared that they would be willing to eat insects. The willingness to habitually consume and pay for insect food is very low, but the percentages are higher than the share of respondents who had actually tasted insects. This demonstrates that a communication effort is needed to overcome psychological and cultural barriers. Only in this way will it be possible to increase the propensity to consume insects.
APA, Harvard, Vancouver, ISO, and other styles
50

Anandita, Rina, and Lukman Cahyadi. "Aplikasi Model Rasch dalam Mengukur Komitmen Dosen." Jurnal Manajemen dan Supervisi Pendidikan 4, no. 3 (2020): 220–31. http://dx.doi.org/10.17977/um025v4i32020p220.

Full text
APA, Harvard, Vancouver, ISO, and other styles