<h1>Methodology. <span class="font-weight-normal">European Journal of Research Methods for the Behavioral and Social Sciences</span></h1> <h2 class="mt-0">A platform for interdisciplinary exchange of methodological research and applications — <em>Free of charge for authors and readers</em></h2> <p><a href="https://meth.psychopen.eu/index.php/meth">https://meth.psychopen.eu/index.php/meth</a></p> <hr> <p><strong>Methodology</strong> is the official journal of the <a class="primary" href="http://www.eam-online.org/" target="_blank" rel="noopener">European Association of Methodology (EAM)</a>, a union of methodologists working in different areas of the social and behavioral sciences (e.g., psychology, sociology, economics, educational and political sciences). The journal provides a platform for the interdisciplinary exchange of methodological research and applications across these fields, including new methodological approaches, review articles, software information, and instructional papers that can be used in teaching. Three main disciplines are covered: data analysis, research methodology, and psychometrics. Articles published in the journal are accessible not only to methodologists but also to more applied researchers in the various disciplines.</p> <p><strong>Since 2020</strong>, <em>Methodology</em> has been published as an <em>open-access journal</em> in cooperation with the <a href="https://www.psychopen.eu">PsychOpen GOLD</a> portal of the <a href="https://leibniz-psychology.org">Leibniz Institute for Psychology (ZPID)</a>. Both access to published articles for readers and the submission, review, and publication of contributions for authors are <strong>free of charge</strong>.</p> <p><strong>Articles published before 2020</strong> (Vols. 1-15) are accessible via the <a href="https://econtent.hogrefe.com/loi/med">journal archive of <em>Methodology's</em> former publisher</a> (Hogrefe). <em>Methodology</em> is the successor of the two journals <em>Metodologia de las Ciencias del Comportamiento</em> and <a href="https://www.psycharchives.org/en/browse/?q=dc.identifier.issn%3A1432-8534"><em>Methods of Psychological Research-Online</em> (MPR-Online)</a>.</p> <hr> <p>Authors who publish with <em>Methodology</em> agree to the following terms:</p> <p><a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" rel="noopener"><img class="float-left mr-3" src="https://i.creativecommons.org/l/by/4.0/88x31.png" alt="Creative Commons License"></a> Articles are published under the <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" rel="noopener">Creative Commons Attribution 4.0 International License</a> (CC BY 4.0). Under the CC BY license, authors retain ownership of the copyright for their article but grant others permission to use the content of publications in <em>Methodology</em>, in whole or in part, provided that the original work is properly cited. Users (redistributors) of <em>Methodology</em> are required to cite the original source, including the authors' names, <em>Methodology</em> as the initial source of publication, the year of publication, the volume number, and the DOI (if available). Authors may publish the manuscript in any other journal or medium, but any such subsequent publication must include a notice that the manuscript was initially published by <em>Methodology</em>.</p> <p>Authors grant <em>Methodology</em> the right of first publication.
Although authors remain the copyright owners, they grant the journal the irrevocable, nonexclusive rights to publish, reproduce, publicly distribute and display, and transmit their article or portions thereof in any manner.</p> <p>Contact: <a href="mailto:editors@meth.psychopen.eu">editors@meth.psychopen.eu</a> (Editorial Office, Methodology) · <a href="mailto:support@meth.psychopen.eu">support@meth.psychopen.eu</a> (PsychOpen Technical Support)</p> <hr> <h2><a href="https://meth.psychopen.eu/index.php/meth/article/view/10591">Extending the Reach of the Common Cause Design Using Meta-Analytic Methods: Applications and Issues</a></h2> <p>Most meta-analytic methods examine effects across a collection of primary studies. We introduce an application of meta-analytic techniques to estimate effects and homogeneity within a single primary study consisting of multiple pretest-intervention-posttest units. This novel assessment was used to validate the recently created “Common Cause” (CC) design. In each case, we established the CC design by eliminating control groups from randomized studies, thereby deconstructing each experiment. This deconstruction enabled us to compare difference-in-difference results in randomized designs with a control group to pretest-posttest differences in a CC design without a control group. Meta-analytic results for multiple OXO (pretest-intervention-posttest) effects from the CC designs were compared to meta-analytic effects from multiple randomized studies. This within-study-comparison logic and the associated analyses produced consistent similarity between CC and validating-study results when directions of findings and patterns of statistical significance were considered. We provide plausible explanations for varying CC effect-size estimates, describe strengths and limitations, and address future research directions.</p> <p>Christopher G. Thompson, William H. Yeaton, Gertrudes Velasquez, Kitchka Petrova, Betsy J. Becker · Copyright (c) 2024 the authors · <a href="https://creativecommons.org/licenses/by/4.0">CC BY 4.0</a> · Published: 23 December 2024</p>
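<p>A minimal sketch of the kind of pooling the abstract describes, assuming the R package <code>metafor</code> and invented placeholder effects (this is not the authors' code or data): standardized pretest-posttest effects from several OXO units within one study can be combined, and their homogeneity examined, with a random-effects model.</p>
<pre><code># Illustrative only: pool standardized pre-post effects from five
# hypothetical OXO units and examine their homogeneity.
library(metafor)

d <- c(0.42, 0.31, 0.55, 0.28, 0.47)        # invented effect sizes
v <- c(0.021, 0.018, 0.025, 0.019, 0.022)   # invented sampling variances

res <- rma(yi = d, vi = v, method = "REML") # random-effects pooling
summary(res)  # pooled estimate, CI, and Q / I^2 homogeneity statistics
</code></pre>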
<hr> <h2><a href="https://meth.psychopen.eu/index.php/meth/article/view/11399">Minimum Required Sample Size for Modelling Daily Cyclic Patterns in Ecological Momentary Assessment Data</a></h2> <p>Cyclical patterns in ecological momentary assessment (EMA) data on emotions have remained relatively under-researched. Addressing such patterns can help to better understand emotion dynamics across time and contexts. However, no general rules of thumb are readily available for psychological researchers to determine the sample size required to measure cyclical patterns in emotions. This study therefore estimates the minimum required sample sizes, in terms of the number of subjects and the number of measurements per subject, to obtain a power of 80% given a certain underlying cyclical pattern, based on input parameter values derived from an empirical EMA dataset. Estimated minimum required sample sizes ranged from 50 subjects and 10 measurements per subject for accurately detecting cyclical patterns of large magnitude, to 60 subjects and 30 measurements per subject for cyclical patterns of small magnitude. The resulting rules of thumb for sample sizes are discussed together with a number of considerations to keep in mind.</p> <p>Robin van de Maat, Johan Lataster, Peter Verboon · Copyright (c) 2024 the authors · <a href="https://creativecommons.org/licenses/by/4.0">CC BY 4.0</a> · Published: 23 December 2024</p> <hr> <h2><a href="https://meth.psychopen.eu/index.php/meth/article/view/13549">A Framework for Planning Sample Sizes Regarding Prediction Intervals of the Normal Mean Using R Shiny Apps</a></h2> <p>Replication is a core principle of research, and the recent recognition of the importance of constructing prediction intervals for precise replications highlights the need for robust sample-size planning methodologies. However, methodological and technical complexities often hinder researchers from efficiently accomplishing this task. This study addresses the challenge by developing five R Shiny apps specifically tailored to determining sample sizes for prediction intervals for the mean of the normal distribution. Two measures of precision, absolute and relative widths, are considered. Additionally, the apps accommodate unequal sampling-unit costs and find sample-size allocations that achieve optimal results by exhaustive search. Simulation results validate the proposed methodology, demonstrating favorable coverage rates. Two illustrative examples of one-sample and two-sample problems showcase these apps’ versatility and user-friendly nature, providing researchers with a valid and straightforward approach for systematically planning sample sizes.</p> <p>Wei-Ming Luh, Jiin-Huarng Guo · Copyright (c) 2024 the authors · <a href="https://creativecommons.org/licenses/by/4.0">CC BY 4.0</a> · Published: 23 December 2024</p>
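<p>For orientation, a textbook version of the interval the apps plan for, assuming normality (a sketch, not the authors' implementation): with an original sample of size n (mean xbar, standard deviation s) and a planned replication of size m, a two-sided 100(1 − α)% prediction interval for the replication mean is xbar ± t<sub>1−α/2, n−1</sub> · s · √(1/n + 1/m).</p>
<pre><code># Prediction interval for the mean of a future replication sample,
# assuming normality; all inputs below are made-up summary values.
pi_mean <- function(xbar, s, n, m, alpha = 0.05) {
  half <- qt(1 - alpha / 2, df = n - 1) * s * sqrt(1 / n + 1 / m)
  c(lower = xbar - half, upper = xbar + half)
}

pi_mean(xbar = 100, s = 15, n = 30, m = 30)
</code></pre>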
<hr> <h2><a href="https://meth.psychopen.eu/index.php/meth/article/view/13839">Maintaining Data Quality When Using Social Media for Recruitment: Risks, Rewards, and Steps Forward</a></h2> <p>Social media is increasingly used to recruit participants for research studies and has been shown to be an effective means of recruitment in terms of cost, time, and accessibility. However, researchers often struggle with the challenges of using social media for recruitment, as minimal guidance is available. Without careful consideration of the risks to data quality when using social media as a recruitment tool, the overall results of studies can be compromised. This paper provides three hypothetical scenarios based in part on the real-world experiences of researchers using social media-based recruitment (SMR) methods. The scenarios serve as a discussion and learning opportunity for researchers to identify data-quality issues with SMR and to consider how such issues can be mitigated. Inexperience with SMR can lead to severe flaws in data collection, which can be avoided early in the study process with appropriate measures in place. Researchers need to proactively educate themselves and take measures to avoid common pitfalls associated with SMR in order to achieve robust data quality and research integrity.</p> <p>Marissa P. Bartmess, Tamu Abreu · Copyright (c) 2024 the authors · <a href="https://creativecommons.org/licenses/by/4.0">CC BY 4.0</a> · Published: 23 December 2024</p> <hr> <h2><a href="https://meth.psychopen.eu/index.php/meth/article/view/14579">How Large Must an Associational Mean Difference Be to Support a Causal Effect?</a></h2> <p>An observational study might support a causal claim if the association found cannot be explained by bias due to unconsidered confounders. This bias depends on how strongly the common predisposition, a summary of unconsidered confounders, is related to the factor and to the outcome. For a positive effect to be supported, the product of these two relations must be smaller than the left boundary of the confidence interval for, e.g., a standardised mean difference (d). We suggest means to derive heuristics for how large this product must be to serve as a confirmatory threshold. We also provide non-technical, visual means for expressing researchers’ assumptions about the two relations, to assess whether a finding on d is explainable by omitted confounders. The ViSe tool, available as an R package and Shiny application, allows users to choose between various effect sizes and to apply the approach to their own data or to published summary results.</p> <p>Michael Höfler, Ekaterina Pronizius, Erin Buchanan · Copyright (c) 2024 the authors · <a href="https://creativecommons.org/licenses/by/4.0">CC BY 4.0</a> · Published: 23 December 2024</p>
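<p>The decision rule stated in the abstract can be sketched directly (an illustration under invented numbers and a standard large-sample variance formula for d, not the ViSe package itself): compute the lower confidence limit of d from summary statistics and check whether the assumed product of the two confounder relations stays below it.</p>
<pre><code># Hedged illustration of the rule "bias product &lt; lower CI limit of d".
d  <- 0.40                 # observed standardized mean difference (invented)
n1 <- 120; n2 <- 120       # group sizes (invented)
se_d  <- sqrt((n1 + n2) / (n1 * n2) + d^2 / (2 * (n1 + n2)))
lower <- d - qnorm(0.975) * se_d      # lower 95% confidence limit

r_factor  <- 0.3           # assumed relation: predisposition -> factor
r_outcome <- 0.4           # assumed relation: predisposition -> outcome
r_factor * r_outcome < lower   # TRUE here: effect survives this scenario
</code></pre>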
<hr> <h2><a href="https://meth.psychopen.eu/index.php/meth/article/view/12503">Partitioning Dichotomous Items Using Mokken Scale Analysis, Exploratory Graph Analysis and Parallel Analysis: A Monte Carlo Simulation</a></h2> <p>Estimating the number of latent factors underlying a set of dichotomous items is a major challenge in social and behavioral research. Mokken scale analysis (MSA) and exploratory graph analysis (EGA) are approaches for partitioning measures consisting of dichotomous items. In this study we perform simulation-based comparisons of two EGA methods (EGA with the graphical least absolute shrinkage and selection operator, GLASSO; EGAtmfg with the triangulated maximally filtered graph algorithm), two MSA methods (AISP: automated item selection procedure; GA: genetic algorithm), and two widely used factor-analytic techniques (parallel analysis with principal component analysis, PApc, and parallel analysis with principal axis factoring, PApaf) for partitioning dichotomous items. The performance of the six methods differed significantly according to the data structure. AISP and PApc had the highest accuracy and lowest bias for unidimensional structures; moreover, AISP demonstrated the lowest rate of misclassification of items. For multidimensional structures, EGA with GLASSO estimation and PApaf yielded the highest accuracy and lowest bias, followed by EGAtmfg. In addition, both EGA techniques exhibited the lowest rates of misclassification of items to factors. In summary, EGA and EGAtmfg showed performance comparable to that of the highly accurate traditional method, parallel analysis. These findings offer guidance on selecting methods for dimensionality analysis with dichotomous indicators to optimize accuracy in factor identification.</p> <p>Gomaa Said Mohamed Abdelhamid, María Dolores Hidalgo, Brian F. French, Juana Gómez-Benito · Copyright (c) 2024 the authors · <a href="https://creativecommons.org/licenses/by/4.0">CC BY 4.0</a> · Published: 30 September 2024</p> <hr> <h2><a href="https://meth.psychopen.eu/index.php/meth/article/view/14823">A General Framework for Modeling Missing Data Due to Item Selection With Item Response Theory</a></h2> <p>In educational testing, the items that examinees receive may be selected for a variety of reasons, resulting in missing data for the items that were not selected. Item selection is internal when it is based on prior performance on the test, as in adaptive testing designs or for branching items. Item selection is external when it is based on an auxiliary variable collected independently of performance on the test, such as education level in a targeted testing design or geographical location in a nonequivalent anchor test equating design. This paper describes the implications of this distinction for Item Response Theory (IRT) estimation, drawing upon missing-data theory (e.g., Mislevy & Sheehan, 1989, https://doi.org/10.1007/BF02296402; Rubin, 1976, https://doi.org/10.1093/biomet/63.3.581) and selection theory (Meredith, 1993, https://doi.org/10.1007/BF02294825). Through mathematical analyses and simulations, we demonstrate that this internal-versus-external item selection framework provides a general guide for applying missing-data and selection theory to choose a valid analysis model for datasets with missing data.</p> <p>Paul A. Jewsbury, Ru Lu, Peter W. van Rijn · Copyright (c) 2024 the authors · <a href="https://creativecommons.org/licenses/by/4.0">CC BY 4.0</a> · Published: 30 September 2024</p>
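<p>As a small illustration of why the distinction matters (a sketch under our own assumptions, not the authors' simulations): when selection is internal, missingness depends only on observed responses, so standard marginal maximum likelihood IRT estimation of the incomplete response matrix remains valid. The R package <code>mirt</code>, for example, treats NA responses this way; the routing rule below is invented.</p>
<pre><code># Internal item selection: item 5 is administered only to examinees
# who answered item 1 correctly, so missingness depends only on an
# observed response and a routine 2PL fit remains valid.
library(mirt)

dat <- expand.table(LSAT7)        # 1000 x 5 dichotomous example items
dat[dat[, 1] == 0, 5] <- NA       # simulate the internal routing rule

fit <- mirt(dat, model = 1, itemtype = "2PL")  # marginal ML handles NA
coef(fit, simplify = TRUE)$items               # item parameter estimates
</code></pre>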
<hr> <h2><a href="https://meth.psychopen.eu/index.php/meth/article/view/11721">Post-Hoc Tests in One-Way ANOVA: The Case for Normal Distribution</a></h2> <p>When a one-way ANOVA is statistically significant, a multiple-comparison problem arises, and post-hoc tests are needed to elucidate between which groups significant differences are found. Different post-hoc tests have been proposed for different situations with respect to heteroscedasticity and group sample sizes. This study compares the Type I error (α) rates of 10 post-hoc tests in four conditions defined by heteroscedasticity and the balance of between-group sample sizes. A Monte Carlo simulation study was carried out on a total of 28 data sets, with 10,000 resamples in each, distributed across the four conditions. One-way ANOVA and post-hoc tests were conducted to estimate the α rate at a 95% confidence level; the percentage of times the null hypothesis was falsely rejected was used to compare the tests. In three of the four conditions, results varied considerably with sample size. However, the best post-hoc test in the second condition (heteroscedastic with balanced groups) did not depend on sample size. In some cases, inappropriate post-hoc tests were more accurate. Homoscedasticity and the balance of between-group sample sizes should be considered when selecting an appropriate post-hoc test.</p> <p>Joel Juarros-Basterretxea, Gema Aonso-Diego, Álvaro Postigo, Pelayo Montes-Álvarez, Álvaro Menéndez-Aller, Eduardo García-Cueto · Copyright (c) 2024 the authors · <a href="https://creativecommons.org/licenses/by/4.0">CC BY 4.0</a> · Published: 28 June 2024</p> <hr> <h2><a href="https://meth.psychopen.eu/index.php/meth/article/view/12943">Modelling the Effect of Instructional Support on Logarithmic-Transformed Response Time: An Exploratory Study</a></h2> <p>Instructional support can be implemented in learning environments to pseudo-modify the difficulty or time intensity of the items presented to persons. Such support can affect both the response accuracy of persons on items and the time persons require to complete items. This study proposes a framework for modelling response time in learning environments as a function of instructional support. Moreover, it explores the effect of instructional support on response time in assembly-task training using virtual reality. Three models are fitted to real-life data collected by a project involving both industry and academic partners from Belgium. A Bayesian approach is followed to implement the models, with the Bayes factor used to select the best-fitting model.</p> <p>Luis Alberto Pinos Ullauri, Wim Van Den Noortgate, Dries Debeer · Copyright (c) 2024 the authors · <a href="https://creativecommons.org/licenses/by/4.0">CC BY 4.0</a> · Published: 28 June 2024</p> <hr> <h2><a href="https://meth.psychopen.eu/index.php/meth/article/view/11523">Comparison of Lasso and Stepwise Regression in Psychological Data</a></h2> <p>Identifying significant predictors of behavioral outcomes is of great interest in many psychological studies. Lasso regression, as an alternative to stepwise regression for variable selection, has started gaining traction among psychologists. Yet further investigation is needed to fully understand its performance across the data conditions typical of psychology. Using a Monte Carlo simulation and an empirical demonstration, we compared Lasso regression to stepwise regression in typical psychological datasets varying in sample size, number of predictors, sparsity, and signal-to-noise ratio. We found that (1) Lasso regression was more accurate in within-sample selection and yielded more consistent out-of-sample prediction accuracy than stepwise regression, and (2) Lasso with a harsher shrinkage parameter was more accurate, more parsimonious, and more robust to sampling variability than the prediction-optimizing Lasso. We conclude with cautionary notes and practical recommendations on the application of Lasso regression.</p> <p>Di Jody Zhou, Rajpreet Chahal, Ian H. Gotlib, Siwei Liu · Copyright (c) 2024 the authors · <a href="https://creativecommons.org/licenses/by/4.0">CC BY 4.0</a> · Published: 28 June 2024</p>
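<p>A compact sketch of the comparison in the last abstract, on simulated sparse data (an assumed setup, not the authors' code): Lasso via cross-validated <code>glmnet</code>, with the harsher <code>lambda.1se</code> shrinkage mirroring the paper's second finding, against classic stepwise selection with <code>step()</code>.</p>
<pre><code># Simulate n = 200 observations, p = 20 predictors, 3 true signals.
library(glmnet)
set.seed(42)
n <- 200; p <- 20
X <- matrix(rnorm(n * p), n, p)
beta <- c(0.5, -0.4, 0.3, rep(0, p - 3))   # sparse true coefficients
y <- drop(X %*% beta) + rnorm(n)

cv <- cv.glmnet(X, y, alpha = 1)       # alpha = 1 is the Lasso
coef(cv, s = "lambda.1se")             # harsher shrinkage than lambda.min

step_fit <- step(lm(y ~ ., data = data.frame(y = y, X)),
                 direction = "both", trace = 0)
names(coef(step_fit))                  # predictors retained by stepwise
</code></pre>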