https://meth.psychopen.eu/index.php/meth/issue/feed Methodology 2024-09-30T01:59:04-07:00 Tamás Rudas, Isabel Benítez editors@meth.psychopen.eu Open Journal Systems <h1>Methodology. <span class="font-weight-normal">European Journal of Research Methods for the Behavioral and Social Sciences</span></h1> <h2 class="mt-0">A platform for interdisciplinary exchange of methodological research and applications — <em>Free of charge for authors and readers</em></h2> <hr> <p><strong>Methodology</strong> is the official journal of the <a class="primary" href="http://www.eam-online.org/" target="_blank" rel="noopener">European Association of Methodology (EAM)</a>, a union of methodologists working in different areas of the social and behavioral sciences (e.g., psychology, sociology, economics, educational and political sciences). The journal provides a platform for interdisciplinary exchange of methodological research and applications in these fields, including new methodological approaches, review articles, software information, and instructional papers that can be used in teaching. Three main disciplines are covered: data analysis, research methodology, and psychometrics. The articles published in the journal are accessible not only to methodologists but also to more applied researchers in the various disciplines.</p> <p><strong>Since 2020</strong>, <em>Methodology</em> has been published as an <em>open-access journal</em> in cooperation with the <a href="https://www.psychopen.eu">PsychOpen GOLD</a> portal of the <a href="https://leibniz-psychology.org">Leibniz Institute for Psychology (ZPID)</a>. Both access to published articles for readers and the submission, review, and publication of contributions for authors are <strong>free of charge</strong>!</p> <p><strong>Articles published before 2020</strong> (Vol. 1-15) are accessible via the <a href="https://econtent.hogrefe.com/loi/med">journal archive of <em>Methodology's</em> former publisher</a> (Hogrefe). <em>Methodology</em> is the successor of the two journals <em>Metodologia de las Ciencias del Comportamiento</em> and <a href="https://www.psycharchives.org/en/browse/?q=dc.identifier.issn%3A1432-8534"><em>Methods of Psychological Research-Online</em> (MPR-Online)</a>.</p>
https://meth.psychopen.eu/index.php/meth/article/view/12503 Partitioning Dichotomous Items Using Mokken Scale Analysis, Exploratory Graph Analysis and Parallel Analysis: A Monte Carlo Simulation 2024-09-30T01:59:03-07:00 Gomaa Said Mohamed Abdelhamid gsm00@fayoum.edu.eg María Dolores Hidalgo gsm00@fayoum.edu.eg Brian F. French gsm00@fayoum.edu.eg Juana Gómez-Benito gsm00@fayoum.edu.eg <p>Estimating the number of latent factors underlying a set of dichotomous items is a major challenge in social and behavioral research. Mokken scale analysis (MSA) and exploratory graph analysis (EGA) are approaches for partitioning measures consisting of dichotomous items. In this study we perform simulation-based comparisons of two EGA methods (EGA with the graphical least absolute shrinkage and selection operator [GLASSO]; EGAtmfg with the triangulated maximally filtered graph algorithm), two MSA methods (AISP: automated item selection procedure; GA: genetic algorithm), and two widely used factor-analytic techniques (parallel analysis with principal component analysis [PApc] and parallel analysis with principal axis factoring [PApaf]) for partitioning dichotomous items. Performance of the six methods differed significantly according to the data structure.
AISP and PApc had the highest accuracy and lowest bias for unidimensional structures. Moreover, AISP demonstrated the lowest rate of misclassification of items. For multidimensional structures, EGA with GLASSO estimation and PApaf yielded the highest accuracy and lowest bias, followed by EGAtmfg. In addition, both EGA techniques exhibited the lowest rate of misclassification of items to factors. In summary, EGA and EGAtmfg showed performance comparable to the highly accurate traditional method, parallel analysis. These findings offer guidance on selecting methods for dimensionality analysis with dichotomous indicators to optimize accuracy in factor identification.</p> 2024-09-30T00:00:00-07:00 Copyright (c) 2024 Gomaa Said Mohamed Abdelhamid, María Dolores Hidalgo, Brian F. French, Juana Gómez-Benito
https://meth.psychopen.eu/index.php/meth/article/view/14823 A General Framework for Modeling Missing Data Due to Item Selection With Item Response Theory 2024-09-30T01:59:03-07:00 Paul A. Jewsbury pjewsbury@ets.org Ru Lu pjewsbury@ets.org Peter W. van Rijn pjewsbury@ets.org <p>In educational testing, the items that examinees receive may be selected for a variety of reasons, resulting in missing data for items that were not selected. Item selection is internal when it is based on prior performance on the test, such as in adaptive testing designs or for branching items. Item selection is external when it is based on an auxiliary variable collected independently of performance on the test, such as education level in a targeted testing design or geographical location in a nonequivalent anchor test equating design. This paper describes the implications of this distinction for Item Response Theory (IRT) estimation, drawing upon missing-data theory (e.g., Mislevy & Sheehan, 1989, https://doi.org/10.1007/BF02296402; Rubin, 1976, https://doi.org/10.1093/biomet/63.3.581) and selection theory (Meredith, 1993, https://doi.org/10.1007/BF02294825). Through mathematical analyses and simulations, we demonstrate that this internal versus external item selection framework provides a general guide for applying missing-data and selection theory to choose a valid analysis model for datasets with missing data.</p> 2024-09-30T00:00:00-07:00 Copyright (c) 2024 Paul A. Jewsbury, Ru Lu, Peter W. van Rijn
https://meth.psychopen.eu/index.php/meth/article/view/11721 Post-Hoc Tests in One-Way ANOVA: The Case for Normal Distribution 2024-06-28T05:40:50-07:00 Joel Juarros-Basterretxea joeljuarros@unizar.es Gema Aonso-Diego joeljuarros@unizar.es Álvaro Postigo joeljuarros@unizar.es Pelayo Montes-Álvarez joeljuarros@unizar.es Álvaro Menéndez-Aller joeljuarros@unizar.es Eduardo García-Cueto joeljuarros@unizar.es <p>When a one-way ANOVA is statistically significant, a multiple comparison problem arises; hence, post-hoc tests are needed to elucidate between which groups significant differences are found. Different post-hoc tests have been proposed for different situations regarding heteroscedasticity and group sample sizes. This study aims to compare the Type I error (α) rate of 10 post-hoc tests in four different conditions based on heteroscedasticity and the balance of between-group sample sizes. A Monte Carlo simulation study was carried out on a total of 28 data sets, with 10,000 resamples in each, distributed across the four conditions. One-way ANOVA tests and post-hoc tests were conducted to estimate the α rate at a 95% confidence level. The percentage of times the null hypothesis was falsely rejected is used to compare the tests.
In three of the four conditions, results varied considerably across sample sizes. However, the best post-hoc test in the second condition (heteroscedastic and balanced groups) did not depend on sample size. In some cases, inappropriate post-hoc tests were more accurate. Homoscedasticity and the balance of between-group sample sizes should be considered for appropriate post-hoc test selection.</p> 2024-06-28T00:00:00-07:00 Copyright (c) 2024 Joel Juarros-Basterretxea, Gema Aonso-Diego, Álvaro Postigo, Pelayo Montes-Álvarez, Álvaro Menéndez-Aller, Eduardo García-Cueto
https://meth.psychopen.eu/index.php/meth/article/view/12943 Modelling the Effect of Instructional Support on Logarithmic-Transformed Response Time: An Exploratory Study 2024-06-28T05:40:51-07:00 Luis Alberto Pinos Ullauri wim.vandennoortgate@kuleuven.be Wim Van Den Noortgate wim.vandennoortgate@kuleuven.be Dries Debeer wim.vandennoortgate@kuleuven.be <p>Instructional support can be implemented in learning environments to pseudo-modify the difficulty or time intensity of items presented to persons. This support can affect both the response accuracy of persons on items and the time persons require to complete items. This study proposes a framework to model response time in learning environments as a function of instructional support. Moreover, it explores the effect of instructional support on response time in assembly task training using Virtual Reality. Three models are fitted with real-life data collected by a project that involves both industry and academic partners from Belgium. A Bayesian approach is followed to implement the models, where the Bayes factor is used to select the best-fitting model.</p> 2024-06-28T00:00:00-07:00 Copyright (c) 2024 Luis Alberto Pinos Ullauri, Wim Van Den Noortgate, Dries Debeer
https://meth.psychopen.eu/index.php/meth/article/view/11523 Comparison of Lasso and Stepwise Regression in Psychological Data 2024-06-28T05:40:51-07:00 Di Jody Zhou jodzhou@ucdavis.edu Rajpreet Chahal jodzhou@ucdavis.edu Ian H. Gotlib jodzhou@ucdavis.edu Siwei Liu jodzhou@ucdavis.edu <p>Identifying significant predictors of behavioral outcomes is of great interest in many psychological studies. Lasso regression, as an alternative to stepwise regression for variable selection, has started gaining traction among psychologists. Yet, further investigation is valuable to fully understand its performance across various psychological data conditions. Using a Monte Carlo simulation and an empirical demonstration, we compared Lasso regression to stepwise regression in typical psychological datasets varying in sample size, number of predictors, sparsity, and signal-to-noise ratio. We found that: (1) Lasso regression was more accurate in within-sample selection and yielded more consistent out-of-sample prediction accuracy than stepwise regression; (2) Lasso with a harsher shrinkage parameter was more accurate, parsimonious, and robust to sampling variability than the prediction-optimizing Lasso. Finally, we conclude with cautionary notes and practical recommendations on the application of Lasso regression.</p> 2024-06-28T00:00:00-07:00 Copyright (c) 2024 Di Jody Zhou, Rajpreet Chahal, Ian H. Gotlib, Siwei Liu
https://meth.psychopen.eu/index.php/meth/article/view/12877 Metric Invariance in Exploratory Graph Analysis via Permutation Testing 2024-06-28T05:40:51-07:00 Laura Jamison lj5yn@virginia.edu Alexander P. Christensen lj5yn@virginia.edu Hudson F. Golino lj5yn@virginia.edu
<p>Establishing measurement invariance (MI) is crucial for the validity and comparability of psychological measurements across different groups. If MI is violated, mean differences among groups could be due to the measurement rather than to differences in the latent variable. Recent research has highlighted the prevalence of inaccurate MI models in studies, often influenced by the software used. Additionally, unequal group sample sizes, noninvariant referent indicators, and reliance on data-driven methods reduce the power of traditional SEM methods. Network psychometrics lacks methods for comparing network structures that are conceptually similar to MI testing. We propose a more conceptually consistent method within the Exploratory Graph Analysis (EGA) framework using network loadings, which are analogous to factor loadings. Our simulation study demonstrates that this method offers comparable or improved power relative to SEM MI testing, especially in scenarios with smaller or unequal sample sizes and lower noninvariance effect sizes.</p> 2024-06-28T00:00:00-07:00 Copyright (c) 2024 Laura Jamison, Alexander P. Christensen, Hudson F. Golino
https://meth.psychopen.eu/index.php/meth/article/view/10449 A General Framework for Planning the Number of Items/Subjects for Evaluating Cronbach’s Alpha: Integration of Hypothesis Testing and Confidence Intervals 2024-03-22T07:28:45-07:00 Wei-Ming Luh luhwei@mail.ncku.edu.tw <p>Cronbach’s alpha, widely used for measuring reliability, is often estimated in studies whose sample sizes are too small to provide sufficient statistical power or precise estimation. To address this challenge and to incorporate considerations of both confidence intervals and cost-effectiveness into statistical inference, our study introduces a novel framework. This framework aims to determine the optimal configuration of measurements and subjects for Cronbach’s alpha by integrating hypothesis testing and confidence intervals. We have developed two R Shiny apps capable of considering up to nine probabilities, which encompass width, validity, and/or rejection events. These apps facilitate obtaining the required number of measurements/subjects, either by minimizing overall cost for a desired probability or by maximizing probability for a predefined cost.</p> 2024-03-22T00:00:00-07:00 Copyright (c) 2024 Wei-Ming Luh
https://meth.psychopen.eu/index.php/meth/article/view/11235 The Prediction-Explanation Fallacy: A Pervasive Problem in Scientific Applications of Machine Learning 2024-03-22T07:28:45-07:00 Marco Del Giudice marco.delgiudice@units.it <p>I highlight a problem that has become ubiquitous in scientific applications of machine learning and can lead to seriously distorted inferences. I call it the Prediction-Explanation Fallacy. The fallacy occurs when researchers use prediction-optimized models for explanatory purposes, without considering the relevant tradeoffs. This is a problem for at least two reasons. First, prediction-optimized models are often deliberately biased and unrealistic in order to prevent overfitting. In other cases, they have an exceedingly complex structure that is hard or impossible to interpret. Second, different predictive models trained on the same or similar data can be biased in different ways, so that they may predict equally well but suggest conflicting explanations.
Here I introduce the tradeoffs between prediction and explanation in a non-technical fashion, present illustrative examples from neuroscience, and end by discussing some mitigating factors and methods that can be used to limit the problem.</p> 2024-03-22T00:00:00-07:00 Copyright (c) 2024 Marco Del Giudice
https://meth.psychopen.eu/index.php/meth/article/view/12271 A Quantile Shift Approach to Main Effects and Interactions in a 2-by-2 Design 2024-03-22T07:28:46-07:00 Rand R. Wilcox rwilcox@usc.edu Guillaume A. Rousselet rwilcox@usc.edu <p>When comparing two independent groups, shift functions are techniques that compare multiple quantiles rather than a single measure of location, the goal being to get a more detailed understanding of how the distributions differ. Various versions have been proposed and studied. This paper deals with extensions of these methods to main effects and interactions in a between-by-between, 2-by-2 design. Two approaches are studied: one that compares the deciles of the distributions, and one that has a certain connection to the Wilcoxon–Mann–Whitney method. There are many quantile estimators, but for reasons summarized in the paper, the focus is on the Harrell–Davis quantile estimator used in conjunction with a percentile bootstrap method. Included are results comparing two methods aimed at controlling the probability of one or more Type I errors.</p> 2024-03-22T00:00:00-07:00 Copyright (c) 2024 Rand R. Wilcox, Guillaume A. Rousselet
https://meth.psychopen.eu/index.php/meth/article/view/12467 The Vuong-Lo-Mendell-Rubin Test for Latent Class and Latent Profile Analysis: A Note on the Different Implementations in Mplus and LatentGOLD 2024-03-22T07:28:46-07:00 Jeroen K. Vermunt j.k.vermunt@uvt.nl <p>Mplus and LatentGOLD implement the Vuong-Lo-Mendell-Rubin test (comparing models with K and K + 1 latent classes) in slightly different manners. While LatentGOLD uses the formulae from Vuong (1989; https://doi.org/10.2307/1912557), Mplus replaces the standard parameter variance-covariance matrix by its robust version. Our small simulation study shows why such a seemingly small difference may sometimes yield rather different results. The main finding is that the Mplus approximation of the distribution of the likelihood-ratio statistic is much more data dependent than the LatentGOLD one. This data dependency is stronger when the true model serves as the null hypothesis (H0) with K classes than when it serves as the alternative hypothesis (H1) with K + 1 classes, and it is also stronger for low class separation than for high class separation. Another important finding is that neither of the two implementations yields uniformly distributed p-values under the correct null hypothesis, indicating that this test is not the best model selection tool in mixture modeling.</p> 2024-03-22T00:00:00-07:00 Copyright (c) 2024 Jeroen K. Vermunt
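<p>To make the p-value behavior described in the note above concrete, the following minimal sketch (in Python with scikit-learn, neither of which is used in the article; Mplus and LatentGOLD are the software actually compared) simulates data from a two-class latent profile model, refits models with K and K + 1 classes, and collects naive chi-square p-values for the likelihood-ratio statistic. The sample size, class means, degrees of freedom, and number of replications are illustrative assumptions, and the chi-square reference is deliberately the naive one rather than the Vuong adjustment; the point is only how one might check empirically whether such p-values are uniformly distributed under a correct K-class null.</p>
<pre><code>
# Illustrative sketch (not the Mplus or LatentGOLD implementation, not the VLMR statistic):
# simulate K-class latent profile data, fit K and K+1 classes, inspect naive LRT p-values.
import numpy as np
from scipy.stats import chi2
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
n, p, k_true = 500, 4, 2                       # assumed sample size, indicators, true classes
means = np.array([[-0.5] * p, [0.5] * p])      # hypothetical class means

def simulate():
    z = rng.integers(0, k_true, size=n)        # class membership
    return means[z] + rng.normal(size=(n, p))  # diagonal-covariance LPA data

def total_loglik(model, x):
    return model.score(x) * len(x)             # score() is the mean log-likelihood per case

pvals = []
for _ in range(200):                           # small number of replications for speed
    x = simulate()
    ll_k = total_loglik(GaussianMixture(k_true, covariance_type="diag",
                                        n_init=5, random_state=0).fit(x), x)
    ll_k1 = total_loglik(GaussianMixture(k_true + 1, covariance_type="diag",
                                         n_init=5, random_state=0).fit(x), x)
    stat = 2.0 * (ll_k1 - ll_k)
    df = 2 * p + 1                             # naive df: extra means, variances, one weight
    pvals.append(chi2.sf(max(stat, 0.0), df))  # naive chi-square reference, not VLMR

# Under a correct K-class null, uniform p-values would give quantiles near 0.05, 0.25, ...
print(np.round(np.quantile(pvals, [0.05, 0.25, 0.5, 0.75, 0.95]), 3))
</code></pre>
<p>Because the regularity conditions for the standard chi-square reference fail when comparing K with K + 1 mixture components, these naive p-values are expected to depart from uniformity, which is the motivation for corrected procedures such as the Vuong-Lo-Mendell-Rubin test examined in the article above.</p>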