Empirical Ensemble Equating Under the NEAT Design Inspired by Machine Learning Ideology

Authors

  • Zhehan Jiang
  • Yuting Han
  • Jihong Zhang
  • Lingling Xu
  • Dexin Shi
  • Haiying Liang
  • Jinying Ouyang

Abstract

This study proposes an empirical ensemble equating (3E) approach that collectively selects, adopts, weighs, and combines outputs from different sources to take advantage of the strengths of different equating techniques in various score intervals. The ensemble idea was demonstrated and tailored to equating under the Non-Equivalent groups with Anchor Test (NEAT) design. A simulation study based on several published settings was conducted. Three outcome measures – average bias, its absolute value, and root mean square difference – were used to evaluate the selected methods’ performance. The 3E approach outperformed its counterparts under most of the given conditions. Cautions for using the proposed approach, such as tuning the weights and specifying the plausible scenarios, are also addressed.
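The core ensemble idea in the abstract – weighting and combining the outputs of several equating methods, with weights that vary by score interval – can be sketched as follows. This is a minimal illustration, not the authors' implementation: the two equating functions and the interval weights below are hypothetical placeholders standing in for actual NEAT equating methods (e.g., chained linear and equipercentile equating).

```python
import numpy as np

# Hypothetical stand-ins for two equating methods' conversion
# functions; real NEAT equating functions would replace these.
def equate_linear(x):
    return 1.02 * x + 0.5

def equate_equipercentile(x):
    return x + np.sin(x / 10.0)

def ensemble_equate(scores, methods, weights_by_interval, edges):
    """Combine method outputs with interval-specific weights.

    weights_by_interval[i] is a weight vector over `methods` (summing
    to 1) for raw scores falling in [edges[i], edges[i+1]).
    """
    # One column of equated scores per candidate method: shape (n, k).
    outputs = np.column_stack([m(scores) for m in methods])
    # Map each raw score to its score interval.
    idx = np.clip(np.searchsorted(edges, scores, side="right") - 1,
                  0, len(weights_by_interval) - 1)
    # Pick that interval's weight vector for each score: shape (n, k).
    w = np.asarray(weights_by_interval, dtype=float)[idx]
    # Weighted combination of the candidate methods' outputs.
    return (w * outputs).sum(axis=1)

# Example: favor the first method in the low-score interval and the
# second in the high-score interval (weights chosen arbitrarily here).
scores = np.array([5.0, 25.0])
edges = np.array([0.0, 15.0, 40.0])
combined = ensemble_equate(scores,
                           [equate_linear, equate_equipercentile],
                           [[0.7, 0.3], [0.3, 0.7]], edges)
```

In practice, as the abstract cautions, the weights would need to be tuned (e.g., against a criterion equating or via the reported outcome measures) rather than fixed a priori.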