https://meth.psychopen.eu/index.php/meth/issue/feed Methodology 2024-03-22T07:28:47-07:00 Katrijn van Deun, Isabel Benítez editors@meth.psychopen.eu Open Journal Systems <h1 class="font-weight-bold" style="width: 75%;"><span style="color: #0e0b2a; font-size: x-large;">Methodology. <span class="font-weight-normal">European Journal of Research Methods for the Behavioral and Social Sciences</span></span></h1> <h2 class="font-weight-bold" style="width: 75%; line-height: 1.5em;">A platform for interdisciplinary exchange of methodological research and applications</h2> <h2 class="font-weight-bold"><em>Free of charge for authors and readers</em></h2> <hr style="height: 2px; border-width: 0; color: gray; background-color: gray;"> <p><em><strong>Methodology</strong>&nbsp;</em>is the official organ of the&nbsp;<a class="primary" href="http://www.eam-online.org/" target="_blank" rel="noopener">European Association of Methodology (EAM)</a>, a union of methodologists working in different areas of the social and behavioral sciences (e.g., psychology, sociology, economics, educational and political sciences). The journal provides a platform for the interdisciplinary exchange of methodological research and applications across these fields, including new methodological approaches, review articles, software information, and instructional papers that can be used in teaching. Three main disciplines are covered: data analysis, research methodology, and psychometrics. The articles published in the journal are accessible not only to methodologists but also to applied researchers in the various disciplines.</p> <p><strong>Since 2020</strong>, <em>Methodology</em> has been published as an <em>open-access journal</em> in cooperation with the <a href="https://www.psychopen.eu">PsychOpen GOLD</a> portal of the <a href="https://leibniz-psychology.org">Leibniz Institute for Psychology (ZPID)</a>. Both access to published articles for readers and the submission, review, and publication of contributions for authors are <strong>free of charge</strong>!</p> <p><strong>Articles published before 2020</strong> (Vol. 1-15) are accessible via the <a href="https://econtent.hogrefe.com/loi/med">journal archive of <em>Methodology's</em> former publisher</a> (Hogrefe).&nbsp;<em>Methodology&nbsp;</em>is the successor of the two journals <em>Metodologia de las Ciencias del Comportamiento</em> and <a href="https://www.psycharchives.org/en/browse/?q=dc.identifier.issn%3A1432-8534"><em>Methods of Psychological Research-Online</em> (MPR-Online)</a>.</p>
https://meth.psychopen.eu/index.php/meth/article/view/10449 A General Framework for Planning the Number of Items/Subjects for Evaluating Cronbach’s Alpha: Integration of Hypothesis Testing and Confidence Intervals 2024-03-22T07:28:45-07:00 Wei-Ming Luh luhwei@mail.ncku.edu.tw <p>Cronbach’s alpha, widely used for measuring reliability, is estimated from sample data, and studies often lack the sample sizes needed for adequate statistical power or precise estimation. To address this challenge, and to incorporate both confidence intervals and cost-effectiveness into statistical inference, our study introduces a novel framework. This framework determines the optimal configuration of measurements and subjects for Cronbach’s alpha by integrating hypothesis testing and confidence intervals. We have developed two R Shiny apps capable of considering up to nine probabilities, encompassing width, validity, and/or rejection events. The apps yield the required number of measurements/subjects either by minimizing the overall cost for a desired probability or by maximizing the probability for a predefined cost.</p> 2024-03-22T00:00:00-07:00 Copyright (c) 2024 Wei-Ming Luh
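<p>As a rough illustration of the ingredients such planning builds on (not the authors’ apps or procedure), the R sketch below computes a Feldt-type, F-based confidence interval for alpha and searches for the smallest number of subjects whose interval width stays within a target, given a planning value for alpha and a fixed number of items. The helper names <code>alpha_ci</code> and <code>plan_n</code> are hypothetical, and the search uses a fixed planning value rather than the width/validity/rejection probabilities the paper integrates.</p>
<pre><code># Minimal sketch in R, assuming a Feldt-type F interval for alpha.
# Not the authors' Shiny apps; alpha_ci and plan_n are made-up helpers.
alpha_ci <- function(alpha_hat, n, k, level = 0.95) {
  a   <- 1 - level
  df1 <- n - 1              # subjects minus 1
  df2 <- (n - 1) * (k - 1)  # (subjects - 1) * (items - 1)
  c(lower = 1 - (1 - alpha_hat) * qf(1 - a / 2, df1, df2),
    upper = 1 - (1 - alpha_hat) * qf(a / 2, df1, df2))
}

# Smallest n whose interval width does not exceed w at the assumed
# (planning) value of alpha, for k items.
plan_n <- function(alpha0, k, w, level = 0.95) {
  for (n in 5:10000) {
    if (diff(alpha_ci(alpha0, n, k, level)) <= w) return(n)
  }
  NA
}
plan_n(alpha0 = 0.80, k = 10, w = 0.10)  # minimal n for a width-0.10 interval
</code></pre>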
https://meth.psychopen.eu/index.php/meth/article/view/11235 The Prediction-Explanation Fallacy: A Pervasive Problem in Scientific Applications of Machine Learning 2024-03-22T07:28:45-07:00 Marco Del Giudice marco.delgiudice@units.it <p>I highlight a problem that has become ubiquitous in scientific applications of machine learning and can lead to seriously distorted inferences. I call it the Prediction-Explanation Fallacy. The fallacy occurs when researchers use prediction-optimized models for explanatory purposes without considering the relevant tradeoffs. This is a problem for at least two reasons. First, prediction-optimized models are often deliberately biased and unrealistic in order to prevent overfitting. In other cases, they have an exceedingly complex structure that is hard or impossible to interpret. Second, different predictive models trained on the same or similar data can be biased in different ways, so that they may predict equally well yet suggest conflicting explanations. Here I introduce the tradeoffs between prediction and explanation in a non-technical fashion, present illustrative examples from neuroscience, and end by discussing some mitigating factors and methods that can be used to limit the problem.</p> 2024-03-22T00:00:00-07:00 Copyright (c) 2024 Marco Del Giudice
https://meth.psychopen.eu/index.php/meth/article/view/12271 A Quantile Shift Approach to Main Effects and Interactions in a 2-by-2 Design 2024-03-22T07:28:46-07:00 Rand R. Wilcox rwilcox@usc.edu Guillaume A. Rousselet rwilcox@usc.edu <p>When comparing two independent groups, shift functions compare multiple quantiles rather than a single measure of location, the goal being a more detailed understanding of how the distributions differ. Various versions have been proposed and studied. This paper extends these methods to main effects and interactions in a between-by-between, 2-by-2 design. Two approaches are studied: one that compares the deciles of the distributions, and one that has a certain connection to the Wilcoxon–Mann–Whitney method. There are many quantile estimators, but for reasons summarized in the paper, the focus is on the Harrell–Davis quantile estimator used in conjunction with a percentile bootstrap method. Included are results comparing two methods aimed at controlling the probability of one or more Type I errors.</p> 2024-03-22T00:00:00-07:00 Copyright (c) 2024 Rand R. Wilcox, Guillaume A. Rousselet
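<p>As a two-group sketch of the two core ingredients (not the paper’s 2-by-2 procedures, which add main effects, interactions, and control over the probability of one or more Type I errors), the R code below implements the Harrell–Davis estimator from its definition as a beta-weighted sum of order statistics and builds a percentile bootstrap confidence interval for the group difference at a single quantile; the function name <code>hd</code> is illustrative.</p>
<pre><code># Harrell-Davis estimate of the q-th quantile: a weighted sum of the
# order statistics, with weights from a Beta((n+1)q, (n+1)(1-q)) cdf.
hd <- function(x, q) {
  n <- length(x)
  i <- seq_len(n)
  w <- pbeta(i / n, (n + 1) * q, (n + 1) * (1 - q)) -
       pbeta((i - 1) / n, (n + 1) * q, (n + 1) * (1 - q))
  sum(w * sort(x))
}

# Percentile bootstrap CI for the shift between two groups at q = 0.5;
# a decile shift function repeats this for q = 0.1, ..., 0.9.
set.seed(42)
x <- rnorm(50)             # group 1 (simulated data)
y <- rnorm(50, mean = 0.5) # group 2 (simulated data)
q <- 0.5
B <- 2000
d <- replicate(B, hd(sample(x, replace = TRUE), q) -
                  hd(sample(y, replace = TRUE), q))
quantile(d, c(0.025, 0.975))  # percentile bootstrap confidence interval
</code></pre>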
https://meth.psychopen.eu/index.php/meth/article/view/12467 The Vuong-Lo-Mendell-Rubin Test for Latent Class and Latent Profile Analysis: A Note on the Different Implementations in Mplus and LatentGOLD 2024-03-22T07:28:46-07:00 Jeroen K. Vermunt j.k.vermunt@uvt.nl <p>Mplus and LatentGOLD implement the Vuong-Lo-Mendell-Rubin test (comparing models with K and K + 1 latent classes) in slightly different manners. While LatentGOLD uses the formulae from Vuong (1989; https://doi.org/10.2307/1912557), Mplus replaces the standard parameter variance-covariance matrix by its robust version. Our small simulation study shows why such a seemingly small difference can sometimes yield rather different results. The main finding is that the Mplus approximation of the distribution of the likelihood-ratio statistic is much more data dependent than the LatentGOLD one. This data dependency is stronger when the true model serves as the null hypothesis (H0) with K classes than when it serves as the alternative hypothesis (H1) with K + 1 classes, and it is also stronger for low class separation than for high class separation. Another important finding is that neither of the two implementations yields uniformly distributed p-values under a correct null hypothesis, indicating that this test is not the best model selection tool in mixture modeling.</p> 2024-03-22T00:00:00-07:00 Copyright (c) 2024 Jeroen K. Vermunt
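<p>Purely as an illustration of the distinction the note turns on (shown here for an ordinary logistic regression, not a latent class model, and not the VLMR test itself), the R sketch below contrasts the standard, model-based parameter covariance matrix with its robust “sandwich” counterpart; it assumes the <code>sandwich</code> package is installed.</p>
<pre><code># Standard vs. robust parameter variance-covariance matrix: the swap
# that distinguishes the two VLMR implementations, illustrated on a glm.
library(sandwich)  # assumed to be installed; provides vcovHC()

fit <- glm(am ~ hp + wt, family = binomial, data = mtcars)
round(vcov(fit), 3)                  # standard, model-based covariance
round(vcovHC(fit, type = "HC0"), 3)  # robust sandwich covariance
# When the model is correctly specified the two matrices agree
# asymptotically; under misspecification they can differ, which is one
# reason the two implementations of the test can behave differently.
</code></pre>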