UX Survey Report
Results reported for a UX survey from an e-learning system.
Published in Psychological Test and Assessment Modeling, 2019
In this study, we show that success rates in identifying DIF items are higher when the anchor set is made up of highly discriminating items. We also show that DIF items are more easily detected when they have high discrimination and at least moderate difficulty (given a correctly specified anchor). These findings reveal a relationship between item characteristics and DIF that has previously been ignored, and overlooking it could lead test designers and DIF researchers to make erroneous recommendations about DIF detection.
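As a rough illustration of how a single studied item can be flagged for DIF against a set of anchor items, here is a minimal Mantel-Haenszel sketch. This is not the paper's exact procedure, and the data below are synthetic; it only shows the basic anchor-then-test logic.

```python
import numpy as np

def mantel_haenszel_or(item, match, group):
    """Mantel-Haenszel common odds ratio for one studied item.

    item  : 0/1 responses to the studied item
    match : matching variable (e.g., score on the anchor items)
    group : 0 = reference group, 1 = focal group
    An odds ratio far from 1 flags the item for possible uniform DIF.
    """
    num = den = 0.0
    for s in np.unique(match):
        m = match == s
        a = np.sum((group[m] == 0) & (item[m] == 1))  # reference, correct
        b = np.sum((group[m] == 0) & (item[m] == 0))  # reference, incorrect
        c = np.sum((group[m] == 1) & (item[m] == 1))  # focal, correct
        d = np.sum((group[m] == 1) & (item[m] == 0))  # focal, incorrect
        n = a + b + c + d
        if n > 0:
            num += a * d / n
            den += b * c / n
    return num / den if den > 0 else np.nan

# Synthetic data: a 10-item anchor plus one studied item that is
# one logit harder for the focal group (uniform DIF).
rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)
theta = rng.normal(0.0, 1.0, n)
anchor = (rng.random((n, 10)) < 1 / (1 + np.exp(-theta[:, None]))).astype(int)
p_item = 1 / (1 + np.exp(-(theta - 1.0 * group)))
item = (rng.random(n) < p_item).astype(int)
or_mh = mantel_haenszel_or(item, anchor.sum(axis=1), group)
```

With a one-logit DIF effect against the focal group, the odds ratio comes out well above 1, flagging the studied item.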
Download here
Published in Journal of Educational Measurement, 2020
Undesirable response behaviors, such as speeding and inattentive responding, have long plagued educational and psychological assessments. With advances in response time modeling, the challenge of handling such responses must be addressed. In this article, we introduce a robust approach for estimating a respondent's working speed under the log-normal model by down-weighting aberrant responses.
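A rough sketch of what down-weighting can look like under the log-normal response-time model. This is illustrative only, not the article's exact estimator; item parameters are treated as known, and the Huber weighting scheme is an assumption of this sketch.

```python
import numpy as np

def robust_speed(log_times, beta, alpha, c=1.345, iters=25):
    """Robust estimate of a respondent's speed tau under the log-normal
    response-time model: log T_j ~ N(beta_j - tau, 1 / alpha_j**2).

    Responses with large standardized residuals (e.g., rapid guesses)
    receive Huber weights below 1, so they pull the estimate less.
    beta (time intensity) and alpha (time discrimination) are treated
    as known item parameters.
    """
    tau = np.mean(beta - log_times)                # non-robust start
    for _ in range(iters):
        z = alpha * (log_times - (beta - tau))     # standardized residuals
        w = c / np.maximum(np.abs(z), c)           # Huber weights in (0, 1]
        wa = w * alpha**2
        tau = np.sum(wa * (beta - log_times)) / np.sum(wa)
    return tau

# Synthetic respondent: 30 items, true speed 0.5, three rapid guesses.
rng = np.random.default_rng(1)
beta, alpha = np.zeros(30), np.ones(30)
log_times = beta - 0.5 + rng.normal(0.0, 0.3, 30)
log_times[:3] = -5.0                               # aberrant, far too fast
tau_naive = np.mean(beta - log_times)
tau_robust = robust_speed(log_times, beta, alpha)
```

The three rapid guesses inflate the naive speed estimate, while the down-weighted estimate stays closer to the true value of 0.5.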
Published:
This study employs a two-step procedure that first selects an anchor set and then tests the items outside the anchor for DIF (M-IT/M-PA; Wang & Shih, 2010). Results show that accuracy rates in selecting an anchor were generally higher with multiple imputation than with full-information maximum likelihood.
Published:
In this study, we show that, in general, a DIF item with low discrimination is more likely to be mistakenly selected into the anchor set; and even when such an item is tested for DIF, its DIF effect is difficult to detect.
Published:
In this study, we show that the lack of a universal DIF effect size measure makes interpreting the power of DIF detection difficult, and we develop a set of criteria for a desirable DIF effect size measure.
Published:
In this study, we propose applying change-point analysis within a hierarchical framework for responses and response times. By leveraging information from both item responses and response times, we demonstrate the approach on a real assessment dataset.
Published:
Building upon previous change-point analysis methods, joint modeling of item responses and response times is applied to detect various types of aberrant response behavior and to estimate the point of change in behavior.
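A stripped-down version of the underlying idea, using only a mean shift in log response times (the article's joint model also brings in item responses; this sketch is purely illustrative, with toy data):

```python
import numpy as np

def mean_shift_change_point(x):
    """Scan every split point of a sequence (e.g., log response times in
    administration order) and return the split maximizing a standardized
    difference in segment means. A large statistic at split k suggests
    the respondent's behavior changed after item k.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    best_k, best_stat = 1, -np.inf
    for k in range(1, n):
        stat = abs(x[:k].mean() - x[k:].mean()) * np.sqrt(k * (n - k) / n)
        if stat > best_stat:
            best_k, best_stat = k, stat
    return best_k, best_stat

# Toy sequence: normal pacing for 20 items, then abrupt speeding.
x = np.concatenate([np.full(20, 2.0), np.full(20, 0.2)])
k, stat = mean_shift_change_point(x)
```

On this toy sequence, the scan locates the change after the 20th item; in practice the statistic would be compared against a critical value before declaring a change.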
Published:
Using survey data from a high school sample, we explore item-level response times, response sequences, and response styles using process and meta data, and build on previously recommended methods for detecting careless response behavior.
Undergraduate course, University 1, Department, 2014
This is a description of a teaching experience. You can use markdown like any other post.
Workshop, University 1, Department, 2015
This is a description of a teaching experience. You can use markdown like any other post.