Methodology: European Journal of Research Methods for the Behavioral and Social Sciences
A platform for interdisciplinary exchange of methodological research and applications, free of charge for authors and readers.
Editorial contact: editors@meth.psychopen.eu

Methodology is the official journal of the European Association of Methodology (EAM, http://www.eam-online.org/), a union of methodologists working in different areas of the social and behavioral sciences (e.g., psychology, sociology, economics, educational and political sciences). The journal provides a platform for interdisciplinary exchange of methodological research and applications in these fields, including new methodological approaches, review articles, software information, and instructional papers that can be used in teaching. Three main disciplines are covered: data analysis, research methodology, and psychometrics. The articles published in the journal are accessible not only to methodologists but also to more applied researchers in the various disciplines.

Since 2020, Methodology has been published as an open-access journal in cooperation with the PsychOpen GOLD portal (https://www.psychopen.eu) of the Leibniz Institute for Psychology (ZPID, https://leibniz-psychology.org). Both access to published articles for readers and the submission, review, and publication of contributions by authors are free of charge.

Articles published before 2020 (Vols. 1-15) are accessible via the journal archive of Methodology's former publisher, Hogrefe (https://econtent.hogrefe.com/loi/med). Methodology is the successor to the two journals Metodologia de las Ciencias del Comportamiento and Methods of Psychological Research-Online (MPR-Online, https://www.psycharchives.org/en/browse/?q=dc.identifier.issn%3A1432-8534).


Extending the Reach of the Common Cause Design Using Meta-Analytic Methods: Applications and Issues
Christopher G. Thompson, William H. Yeaton, Gertrudes Velasquez, Kitchka Petrova, Betsy J. Becker
https://meth.psychopen.eu/index.php/meth/article/view/10591 (published 2024-12-23)

Most meta-analytic methods examine effects across a collection of primary studies. We introduce an application of meta-analytic techniques to estimate effects and homogeneity within a single primary study consisting of multiple pretest-intervention-posttest units. This novel assessment was used to validate the recently created "Common Cause" (CC) design. In each case, we established the CC design by eliminating control groups from randomized studies, thereby deconstructing each experiment. This deconstruction enabled us to compare difference-in-difference results in randomized designs with a control group to pretest-posttest differences in a CC design without a control group. Meta-analytic results for the multiple OXO effects from the CC designs were compared to meta-analytic effects from the corresponding randomized studies. This within-study-comparison logic and the associated analyses showed consistent similarity between CC and validating-study results with respect to the direction of findings and patterns of statistical significance. We provide plausible explanations for varying CC effect-size estimates, describe strengths and limitations, and address future research directions.

Copyright (c) 2024 Christopher G. Thompson, William H. Yeaton, Gertrudes Velasquez, Kitchka Petrova, Betsy J. Becker
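Editor's note: the core computation described in this abstract can be illustrated in a few lines of R. The sketch below is not the authors' code; the data, the assumed pretest-posttest correlation, and the choice of simple fixed-effect pooling with Cochran's Q are illustrative assumptions only.

```r
# Minimal sketch (not the authors' code): pool pretest-posttest (OXO)
# standardized mean changes across units of a single primary study and test
# their homogeneity with Cochran's Q. The variance formula is the common
# approximation for a standardized mean change given an assumed pretest-
# posttest correlation r; all numbers below are made up.

units <- data.frame(
  n      = c(28, 35, 31, 40),          # sample size per OXO unit
  m_pre  = c(10.2, 11.0,  9.8, 10.5),  # pretest means
  m_post = c(12.1, 12.4, 11.5, 11.9),  # posttest means
  sd_pre = c( 3.1,  2.8,  3.4,  3.0),  # pretest SDs (standardizer)
  r      = c( 0.6,  0.6,  0.6,  0.6)   # assumed pre-post correlation
)

d <- with(units, (m_post - m_pre) / sd_pre)         # effect per unit
v <- with(units, 2 * (1 - r) / n + d^2 / (2 * n))   # approximate variance

w      <- 1 / v                                     # inverse-variance weights
d_pool <- sum(w * d) / sum(w)                       # fixed-effect estimate
se     <- sqrt(1 / sum(w))
ci     <- d_pool + c(-1, 1) * qnorm(0.975) * se

Q   <- sum(w * (d - d_pool)^2)                      # homogeneity statistic
p_Q <- pchisq(Q, df = length(d) - 1, lower.tail = FALSE)

round(c(d_pool = d_pool, ci_low = ci[1], ci_up = ci[2], Q = Q, p_Q = p_Q), 3)
```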
Minimum Required Sample Size for Modelling Daily Cyclic Patterns in Ecological Momentary Assessment Data
Robin van de Maat, Johan Lataster, Peter Verboon
https://meth.psychopen.eu/index.php/meth/article/view/11399 (published 2024-12-23)

Cyclical patterns in ecological momentary assessment (EMA) data on emotions have remained relatively under-researched. Addressing such patterns can help to better understand emotion dynamics across time and contexts. However, no general rules of thumb are readily available for psychological researchers to determine the sample size required to measure cyclical patterns in emotions. This study therefore estimates the minimum required sample sizes (in terms of the number of subjects and the number of measurements per time period) needed to obtain a power of 80% for a given underlying cyclical pattern, based on input parameter values derived from an empirical EMA dataset. Estimated minimum required sample sizes ranged from 50 subjects with 10 measurements per subject for accurately detecting cyclical patterns of large magnitude, to 60 subjects with 30 measurements per subject for cyclical patterns of small magnitude. The resulting rules of thumb for sample sizes are discussed with a number of considerations in mind.

Copyright (c) 2024 Robin van de Maat, Johan Lataster, Peter Verboon


A Framework for Planning Sample Sizes Regarding Prediction Intervals of the Normal Mean Using R Shiny Apps
Wei-Ming Luh, Jiin-Huarng Guo
https://meth.psychopen.eu/index.php/meth/article/view/13549 (published 2024-12-23)

Replication is a core principle of research, and the recent recognition of the importance of constructing prediction intervals for precise replications highlights the need for robust sample-size planning methodologies. However, methodological and technical complexities often hinder researchers from efficiently achieving this task. This study addresses this challenge by developing five R Shiny apps specifically tailored to determine sample sizes for prediction intervals for the mean of a normal distribution. Two measures of precision, absolute and relative width, are considered. Additionally, the apps take unequal sampling-unit costs into account and allocate sample sizes to achieve optimal results by exhaustive search. Simulation results validate the proposed methodology, demonstrating favorable coverage rates. Two illustrative examples of one-sample and two-sample problems showcase the apps' versatility and user-friendly nature, providing researchers with a valid and straightforward approach for systematically planning sample sizes.

Copyright (c) 2024 Wei-Ming Luh, Jiin-Huarng Guo
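Editor's note: as a concrete illustration of the quantity these apps plan for, the R sketch below computes a prediction interval for the mean of a future normal sample and runs a naive exhaustive search over candidate replication sizes. It is not the apps' code; the interval formula is the standard one for a future sample mean, and all numbers (original-study mean, SD, sample sizes, precision target) are made up.

```r
# Minimal sketch (not the apps' code): a prediction interval for the mean of
# a future normal sample, given an original sample, and a brute-force search
# for the smallest replication size whose half-width meets an absolute
# precision target. All inputs are made-up values.

pi_mean <- function(m1, s1, n1, n2, conf = 0.95) {
  # half-width of the prediction interval for the mean of a future sample
  half <- qt(1 - (1 - conf) / 2, df = n1 - 1) * s1 * sqrt(1 / n1 + 1 / n2)
  c(lower = m1 - half, upper = m1 + half, half_width = half)
}

# Original study: mean 100, SD 15, n = 40; planned replication with n = 60
pi_mean(m1 = 100, s1 = 15, n1 = 40, n2 = 60)

# Smallest replication size whose half-width is at most 6 raw-score units
# (absolute-width criterion), found by exhaustive search over a grid
n2_grid <- 2:500
widths  <- sapply(n2_grid, function(n2) pi_mean(100, 15, 40, n2)["half_width"])
n2_grid[which(widths <= 6)[1]]
```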
Maintaining Data Quality When Using Social Media for Recruitment: Risks, Rewards, and Steps Forward
Marissa P. Bartmess, Tamu Abreu
https://meth.psychopen.eu/index.php/meth/article/view/13839 (published 2024-12-23)

Social media is increasingly used to recruit participants for research studies and has been shown to be an effective means of recruitment in terms of cost, time, and accessibility. However, researchers often struggle with the challenges of using social media for recruitment, as minimal guidance is available. Without careful consideration of the risks to data quality when using social media as a recruitment tool, the overall results of studies can be compromised. This paper provides three hypothetical scenarios based in part on the real-world experiences of researchers using social media-based recruitment (SMR) methods. The scenarios serve as a discussion and learning opportunity for researchers to identify data quality issues with SMR and to consider how those issues can be mitigated. Inexperience with SMR can lead to severe flaws in data collection, which can be avoided early in the study process with appropriate measures in place. Researchers need to proactively educate themselves and take measures to avoid common pitfalls associated with SMR in order to achieve robust data quality and research integrity.

Copyright (c) 2024 Marissa P. Bartmess, Tamu Abreu


How Large Must an Associational Mean Difference Be to Support a Causal Effect?
Michael Höfler, Ekaterina Pronizius, Erin Buchanan
https://meth.psychopen.eu/index.php/meth/article/view/14579 (published 2024-12-23)

An observational study might support a causal claim if the association found cannot be explained by bias due to unconsidered confounders. This bias depends on how strongly the common predisposition, a summary of the unconsidered confounders, is related to the factor and to the outcome. For a positive effect to be supported, the product of these two relations must be smaller than the lower limit of the confidence interval for, e.g., a standardised mean difference (d). We suggest means to derive heuristics for how large this product must be to serve as a confirmatory threshold. We also provide non-technical, visual means for researchers to express their assumptions about the two relations and assess whether a finding on d is explainable by omitted confounders. The ViSe tool, available as an R package and Shiny application, allows users to choose among various effect sizes and apply the approach to their own data or to published summary results.

Copyright (c) 2024 Michael Höfler, Ekaterina Pronizius, Erin Buchanan
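Editor's note: the comparison described in this abstract can be worked through by hand. The R sketch below is not the ViSe package's interface; it simply contrasts the lower confidence limit of an observed d with the product of two researcher-assumed relations, and every value in it (the observed d, the group sizes, and the assumed relations of the common predisposition to factor and outcome) is hypothetical.

```r
# Minimal by-hand sketch of the logic described above (NOT the ViSe package's
# interface). Compare the lower confidence limit of an observed standardized
# mean difference d with the product of the assumed relations of the common
# predisposition to the factor and to the outcome. All values are hypothetical.

d  <- 0.45                     # observed standardized mean difference
n1 <- 120; n2 <- 110           # group sizes
se <- sqrt((n1 + n2) / (n1 * n2) + d^2 / (2 * (n1 + n2)))
ci_lower <- d - qnorm(0.975) * se

rel_factor  <- 0.30            # assumed relation: predisposition -> factor
rel_outcome <- 0.40            # assumed relation: predisposition -> outcome
bias_bound  <- rel_factor * rel_outcome

list(ci_lower  = round(ci_lower, 3),
     bias      = bias_bound,
     supported = bias_bound < ci_lower)   # TRUE: effect not explained away
```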
Partitioning Dichotomous Items Using Mokken Scale Analysis, Exploratory Graph Analysis and Parallel Analysis: A Monte Carlo Simulation
Gomaa Said Mohamed Abdelhamid, María Dolores Hidalgo, Brian F. French, Juana Gómez-Benito
https://meth.psychopen.eu/index.php/meth/article/view/12503 (published 2024-09-30)

Estimating the number of latent factors underlying a set of dichotomous items is a major challenge in social and behavioral research. Mokken scale analysis (MSA) and exploratory graph analysis (EGA) are approaches for partitioning measures consisting of dichotomous items. In this study we perform simulation-based comparisons of two EGA methods (EGA with the graphical least absolute shrinkage and selection operator, GLASSO; EGAtmfg, with the triangulated maximally filtered graph algorithm), two MSA methods (AISP, the automated item selection procedure; GA, a genetic algorithm), and two widely used factor-analytic techniques (parallel analysis with principal component analysis, PApc, and parallel analysis with principal axis factoring, PApaf) for partitioning dichotomous items. The performance of the six methods differed markedly according to the data structure. AISP and PApc had the highest accuracy and lowest bias for unidimensional structures. Moreover, AISP demonstrated the lowest rate of misclassification of items. For multidimensional structures, EGA with GLASSO estimation and PApaf yielded the highest accuracy and lowest bias, followed by EGAtmfg. In addition, both EGA techniques exhibited the lowest rates of misclassification of items to factors. In summary, EGA and EGAtmfg showed performance comparable to the highly accurate traditional method, parallel analysis. These findings offer guidance on selecting methods for dimensionality analysis with dichotomous indicators to optimize accuracy in factor identification.

Copyright (c) 2024 Gomaa Said Mohamed Abdelhamid, María Dolores Hidalgo, Brian F. French, Juana Gómez-Benito


A General Framework for Modeling Missing Data Due to Item Selection With Item Response Theory
Paul A. Jewsbury, Ru Lu, Peter W. van Rijn
https://meth.psychopen.eu/index.php/meth/article/view/14823 (published 2024-09-30)

In educational testing, the items that examinees receive may be selected for a variety of reasons, resulting in missing data for the items that were not selected. Item selection is internal when it is based on prior performance on the test, as in adaptive testing designs or with branching items. Item selection is external when it is based on an auxiliary variable collected independently of performance on the test, such as education level in a targeted testing design or geographical location in a nonequivalent anchor test equating design. This paper describes the implications of this distinction for Item Response Theory (IRT) estimation, drawing upon missing-data theory (e.g., Mislevy & Sheehan, 1989, https://doi.org/10.1007/BF02296402; Rubin, 1976, https://doi.org/10.1093/biomet/63.3.581) and selection theory (Meredith, 1993, https://doi.org/10.1007/BF02294825). Through mathematical analyses and simulations, we demonstrate that this internal versus external item selection framework provides a general guide for applying missing-data and selection theory to choose a valid analysis model for datasets with missing data.

Copyright (c) 2024 Paul A. Jewsbury, Ru Lu, Peter W. van Rijn
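Editor's note: to make the internal/external distinction concrete, the R sketch below (assuming the mirt package) simulates a simple external-selection scenario: a background variable correlated with ability, rather than test performance itself, routes each examinee to one of two overlapping booklets, and a unidimensional 2PL is fit to the resulting incomplete response matrix. The design, parameter values, and booklet split are invented for illustration and are not taken from the article.

```r
# Illustrative sketch (assumes the 'mirt' package): external item selection.
# A background variable correlated with ability -- but not determined by the
# test responses -- routes each examinee to one of two overlapping booklets,
# so the unadministered items are missing. A unidimensional 2PL is then fit
# to the incomplete response matrix. Design and parameter values are invented.

library(mirt)
set.seed(1)

n_person <- 1000
n_item   <- 10
theta <- rnorm(n_person)                       # latent ability
a <- runif(n_item, 0.8, 1.6)                   # discriminations
b <- seq(-1.5, 1.5, length.out = n_item)       # difficulties

p    <- plogis(outer(theta, seq_len(n_item),
                     function(th, j) a[j] * (th - b[j])))
resp <- matrix(rbinom(length(p), 1, p), n_person, n_item)

# External selection: e.g., an education-level indicator, related to ability
# but collected independently of the test responses
booklet <- rbinom(n_person, 1, plogis(0.8 * theta))
resp[booklet == 0, 8:10] <- NA                 # booklet A: items 1-7
resp[booklet == 1, 1:3]  <- NA                 # booklet B: items 4-10

fit <- mirt(as.data.frame(resp), model = 1, itemtype = "2PL", verbose = FALSE)
coef(fit, simplify = TRUE)$items               # recovered item parameters
```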
Post-Hoc Tests in One-Way ANOVA: The Case for Normal Distribution
Joel Juarros-Basterretxea, Gema Aonso-Diego, Álvaro Postigo, Pelayo Montes-Álvarez, Álvaro Menéndez-Aller, Eduardo García-Cueto
https://meth.psychopen.eu/index.php/meth/article/view/11721 (published 2024-06-28)

When a one-way ANOVA is statistically significant, a multiple-comparison problem arises, and post-hoc tests are needed to elucidate between which groups significant differences are found. Different post-hoc tests have been proposed for different situations with respect to heteroscedasticity and group sample sizes. This study aims to compare the Type I error (α) rates of 10 post-hoc tests under four conditions defined by heteroscedasticity and the balance of between-group sample sizes. A Monte Carlo simulation study was carried out on a total of 28 data sets, with 10,000 resamples in each, distributed across the four conditions. One-way ANOVA tests and post-hoc tests were conducted to estimate the α rate at a 95% confidence level. The percentage of times the null hypothesis was falsely rejected was used to compare the tests. Three of the four conditions showed considerable variability across sample sizes. However, the best post-hoc test in the second condition (heteroscedastic and balanced groups) did not depend on sample size. In some cases, inappropriate post-hoc tests were more accurate. Homoscedasticity and the balance of between-group sample sizes should be considered for appropriate post-hoc test selection.

Copyright (c) 2024 Joel Juarros-Basterretxea, Gema Aonso-Diego, Álvaro Postigo, Pelayo Montes-Álvarez, Álvaro Menéndez-Aller, Eduardo García-Cueto


Modelling the Effect of Instructional Support on Logarithmic-Transformed Response Time: An Exploratory Study
Luis Alberto Pinos Ullauri, Wim Van Den Noortgate, Dries Debeer
https://meth.psychopen.eu/index.php/meth/article/view/12943 (published 2024-06-28)

Instructional support can be implemented in learning environments to pseudo-modify the difficulty or time intensity of the items presented to persons. Such support can affect both the response accuracy of persons on items and the time persons require to complete them. This study proposes a framework for modelling response time in learning environments as a function of instructional support. Moreover, it explores the effect of instructional support on response time in assembly-task training using Virtual Reality. Three models are fitted to real-life data collected by a project involving both industry and academic partners from Belgium. A Bayesian approach is followed to implement the models, and the Bayes factor is used to select the best-fitting model.

Copyright (c) 2024 Luis Alberto Pinos Ullauri, Wim Van Den Noortgate, Dries Debeer
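Editor's note: the basic structure of such a model can be illustrated with a simplified, non-Bayesian stand-in. The R sketch below (assuming the lme4 package) regresses log response time on a support indicator with crossed person and item random effects; it is not the authors' model, and all simulated values are invented.

```r
# Simplified sketch (assumes the 'lme4' package): log-transformed response
# time regressed on an instructional-support indicator with crossed person
# and item random effects -- a frequentist stand-in for the Bayesian
# lognormal response-time models the abstract refers to. All simulated
# values are invented.

library(lme4)
set.seed(42)

n_person <- 80; n_item <- 12
dat <- expand.grid(person = factor(1:n_person), item = factor(1:n_item))
dat$support <- rbinom(nrow(dat), 1, 0.5)       # support offered on this trial?

speed     <- rnorm(n_person, 0, 0.3)           # person speed (higher = faster)
intensity <- rnorm(n_item, 3.5, 0.4)           # item time intensity (log scale)
beta_sup  <- -0.25                             # support shortens log time

dat$log_rt <- intensity[dat$item] - speed[dat$person] +
  beta_sup * dat$support + rnorm(nrow(dat), 0, 0.3)

fit <- lmer(log_rt ~ support + (1 | person) + (1 | item), data = dat)
summary(fit)$coefficients                      # fixed effect of support
```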
Comparison of Lasso and Stepwise Regression in Psychological Data
Di Jody Zhou, Rajpreet Chahal, Ian H. Gotlib, Siwei Liu
https://meth.psychopen.eu/index.php/meth/article/view/11523 (published 2024-06-28)

Identifying significant predictors of behavioral outcomes is of great interest in many psychological studies. Lasso regression, as an alternative to stepwise regression for variable selection, has started gaining traction among psychologists. Yet further investigation is needed to fully understand its performance across various psychological data conditions. Using a Monte Carlo simulation and an empirical demonstration, we compared Lasso regression to stepwise regression in typical psychological datasets varying in sample size, number of predictors, sparsity, and signal-to-noise ratio. We found that (1) Lasso regression was more accurate in within-sample selection and yielded more consistent out-of-sample prediction accuracy than stepwise regression, and (2) Lasso with a harsher shrinkage parameter was more accurate, more parsimonious, and more robust to sampling variability than the prediction-optimizing Lasso. We conclude with cautionary notes and practical recommendations on the application of Lasso regression.

Copyright (c) 2024 Di Jody Zhou, Rajpreet Chahal, Ian H. Gotlib, Siwei Liu
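Editor's note: the comparison in this abstract maps onto a short R example. The sketch below (assuming the glmnet package) contrasts AIC-based stepwise selection with Lasso under two cross-validated penalties; it is not the authors' simulation design, and all settings are made up.

```r
# Illustrative sketch (assumes the 'glmnet' package): stepwise selection vs.
# Lasso on simulated data where only 3 of 30 candidate predictors carry
# signal. lambda.min is the prediction-optimizing penalty; lambda.1se is the
# harsher penalty mentioned above. All simulation settings are made up.

library(glmnet)
set.seed(7)

n <- 200; p <- 30
X <- matrix(rnorm(n * p), n, p, dimnames = list(NULL, paste0("x", 1:p)))
beta <- c(0.5, -0.4, 0.3, rep(0, p - 3))       # sparse truth
y <- drop(X %*% beta + rnorm(n))
dat <- data.frame(y, X)

# Stepwise regression (AIC-based, both directions, starting from the null model)
upper    <- reformulate(colnames(X), response = "y")
step_fit <- step(lm(y ~ 1, data = dat), scope = upper,
                 direction = "both", trace = 0)
names(coef(step_fit))[-1]                      # predictors retained by stepwise

# Lasso with cross-validated penalty
cv_fit <- cv.glmnet(X, y, alpha = 1)
selected <- function(s) {
  b <- as.matrix(coef(cv_fit, s = s))
  setdiff(rownames(b)[b != 0], "(Intercept)")
}
selected("lambda.min")                         # prediction-optimizing Lasso
selected("lambda.1se")                         # harsher shrinkage, sparser model
```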