Single-case designs facilitate the study of intervention effects at the level of the individual by collecting repeated observations on each individual in each of at least two conditions (e.g., baseline and treatment). The most common type of single-case design is the multiple-baseline design (Shadish & Sullivan, 2011), in which multiple individuals are studied concurrently in a baseline phase followed by a treatment phase, with the restriction that the start of intervention is temporally staggered across the individuals so that their baselines have different lengths. Because single-case designs allow researchers to study individuals from populations with low prevalence rates, they have been widely employed in disciplines such as special education, school psychology, applied behavior analysis, and rehabilitation counseling.
The application of statistical models to single-case data requires researchers to make assumptions about the errors in the statistical model. It is difficult to assume independence, because the unmeasured factors that give rise to the errors may impact multiple adjacent observations, such as when a child's illness impacts their behavior several days in a row. When errors that are closer in time are more similar than errors further apart in time, there is serial dependency, or autocorrelation, among the errors. Several autoregressive and moving average structures have been considered for the dependency among errors in single-case regression models, including unstructured, compound symmetry, banded Toeplitz (moving average), first-order autoregressive (AR[1]), and independent (σ²I) structures (Wolfinger, 1993). The most commonly assumed correlated error structure for single-case experimental design (SCED) data is the AR(1) model (i.e., $e_t = \rho e_{t-1} + a_t$; Pustejovsky, Hedges, & Shadish, 2014).
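To make the AR(1) process concrete, the following minimal Python sketch simulates a short series of level-1 errors from $e_t = \rho e_{t-1} + a_t$. The function name, seed, and parameter values are illustrative; the white-noise SD is scaled so that the marginal SD of the errors equals a target value, which is one common convention and an assumption here rather than a detail stated in the text.

```python
import numpy as np

def simulate_ar1_errors(n, rho, sigma, rng):
    """Simulate n level-1 errors from e_t = rho * e_{t-1} + a_t.

    sigma is taken as the marginal (stationary) SD of e_t, so the
    white-noise SD is sigma * sqrt(1 - rho**2).
    """
    sigma_a = sigma * np.sqrt(1.0 - rho ** 2)
    e = np.empty(n)
    e[0] = rng.normal(0.0, sigma)  # draw e_0 from the stationary distribution
    for t in range(1, n):
        e[t] = rho * e[t - 1] + rng.normal(0.0, sigma_a)
    return e

rng = np.random.default_rng(1)
errors = simulate_ar1_errors(n=20, rho=0.2, sigma=1.0, rng=rng)
```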
If positive autocorrelation is ignored, the regression coefficients of single-level models are unbiased, but the standard errors of the regression coefficients are underestimated (Neter, Wasserman, & Kutner, 1990), which inflates Type I error rates for significance tests of the treatment effect (Toothaker, Banz, Noble, Camp, & Davis, 1983). These negative effects motivate single-level models that estimate and adjust for autocorrelation (e.g., McKnight, McKean, & Huitema, 2000; Maggin et al., 2011), but with short series it is difficult to disentangle autocorrelation from trends, and estimating both can compound short-series estimation bias (Ferron, 2002). To obtain negligible bias in the estimate of autocorrelation, over 100 observations would be needed with a traditional estimator, or at least 20 observations with a jackknife estimator (Huitema & McKean, 1994). This poses a challenge for single-level models of single-case studies, which typically have baselines of fewer than 20 observations.
Multilevel Modeling in Single-Case Studies
When single-case studies involve multiple cases, as in the multiple-baseline design, multilevel modeling (MLM) has been suggested as a method for analyzing the data from the multiple cases. MLM allows possible dependency of the errors to be taken into account (Baek et al., 2014; Shadish, Rindskopf, Hedges, & Sullivan, 2013; Van den Noortgate & Onghena, 2003). The performance of the MLM approach for analyzing single-case data has been examined in several simulation studies: the fixed effects are generally unbiased, but the variance components show small-sample bias (Ferron, Bell, Hess, Rendina-Gobioff, & Hibbard, 2009; Ferron, Farmer, & Owens, 2010; Moeyaert, Ugille, Ferron, Beretvas, & Van den Noortgate, 2013; Petit-Bois, Baek, Van den Noortgate, Beretvas, & Ferron, 2016).
Equations (1) and (2) below define a two-level model for single-case studies.
Level-1 equation:
$$y_{ij} = \beta_{0j} + \beta_{1j}\,\mathrm{Phase}_{ij} + \beta_{2j}\,\mathrm{Time}_{ij} + \beta_{3j}\,(\mathrm{Phase}_{ij} \times \mathrm{Time}'_{ij}) + e_{ij} \tag{1}$$
Level-2 equation:
$$\begin{aligned}
\beta_{0j} &= \theta_{00} + u_{0j}\\
\beta_{1j} &= \theta_{10} + u_{1j}\\
\beta_{2j} &= \theta_{20} + u_{2j}\\
\beta_{3j} &= \theta_{30} + u_{3j}
\end{aligned} \tag{2}$$
where yij is the observed value (outcome) at the ith observation for the jth case; Phaseij is a dichotomous variable indicating the phase in which the observation occurred, with 0 indicating the baseline phase and 1 indicating the treatment phase; Timeij is an indicator of time coded such that 0 corresponds to the first observation of the study; and Time′ij is time recoded such that 0 corresponds to the first intervention observation (Huitema & McKean, 2000). Thus, β0j is the baseline intercept for the jth case at the beginning of the baseline phase, β1j is the difference between the baseline level and the treatment level (shift in level) for the jth case at the beginning of treatment, β2j is the baseline slope for the jth case, and β3j is the change in slope that occurs with treatment. Finally, eij is the residual indicating within-case variation (level-1 errors) and is assumed to be multivariate normally distributed, N(0, Σe). The covariance structure Σe of the errors can take any of the forms mentioned earlier. In the second-level equation, θ00 is the average baseline intercept, θ10 is the average shift in level indexed at the time of the first treatment observation, θ20 is the average baseline slope, and θ30 is the average shift in slope. The level-2 errors u0j, u1j, u2j, and u3j are assumed to be multivariate normally distributed, N(0, Σu).
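To show how Equations (1) and (2) generate an observed series, the sketch below assembles one case's data in Python. All numeric values (series length, fixed effects, level-2 SDs) are hypothetical, and the level-2 errors are drawn independently for simplicity; this is a sketch of the model's structure, not the study's generating code.

```python
import numpy as np

rng = np.random.default_rng(2)
n_obs, n_base = 20, 10                      # series and baseline lengths (illustrative)
theta = np.array([27.0, 25.0, 0.3, -0.3])   # theta_00, theta_10, theta_20, theta_30 (hypothetical)
sd_u = np.array([0.7, 0.7, 0.22, 0.22])     # SDs of u_0j..u_3j, drawn independently here

phase = (np.arange(n_obs) >= n_base).astype(float)    # 0 = baseline, 1 = treatment
time = np.arange(n_obs, dtype=float)                  # 0 at the first observation
time_tx = np.where(phase == 1.0, time - n_base, 0.0)  # 0 at the first treatment observation

beta = theta + rng.normal(0.0, sd_u)                  # Equation (2): case-specific coefficients

# Equation (1): AR(1) level-1 errors with rho = .2 and marginal SD = 1
rho, sigma = 0.2, 1.0
e = np.empty(n_obs)
e[0] = rng.normal(0.0, sigma)
for t in range(1, n_obs):
    e[t] = rho * e[t - 1] + rng.normal(0.0, sigma * np.sqrt(1 - rho ** 2))

y = beta[0] + beta[1] * phase + beta[2] * time + beta[3] * phase * time_tx + e
```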
Assumption of Between Case Homogeneity in Level-1 Error Structures
The effects of ignoring autocorrelation, and of other misspecifications of the level-1 error structure, have been examined for MLM analyses of single-case data. Researchers found that ignoring autocorrelation does not bias the fixed effect estimates, but inferences about the fixed effects can be inaccurate because the corresponding standard errors are underestimated, and the estimates of the variance parameters become more biased (Ferron et al., 2009; Owens & Ferron, 2012; Petit-Bois et al., 2016).
Although MLM allows autocorrelation among level-1 errors to be taken into account in single-case data analyses, the approach typically carries a critical assumption: that the level-1 error structure is the same for all cases. Specifically, it is assumed that (a) the autocorrelation is the same for all cases and (b) the level-1 error variance is the same for all cases. Previous applications of MLM to single-case research, as well as research on misspecification of level-1 error structures, have often assumed the autocorrelation and level-1 error variance to be equal across cases (e.g., Ferron et al., 2010; Petit-Bois et al., 2016; Van den Noortgate & Onghena, 2003).
However, it is possible that the level-1 error structure is not homogeneous across cases. For example, the behavior of one child with an emotional/behavioral disorder may vary more substantially from day to day, because of intermittent problems in the home or lapses in medication provision, than that of another child with the same disorder. Findings from previous single-case studies support the possibility that level-1 error covariance matrices vary across cases (Baek & Ferron, 2013; Baek, Petit-Bois, Van den Noortgate, Beretvas, & Ferron, 2016). For example, Baek and Ferron (2013) found relatively large differences in estimates of autocorrelation and level-1 error variances when they allowed the level-1 error structure to vary across cases in real data sets from four previously published single-case studies (i.e., Dufrene et al., 2010; Ingersoll & Lalonde, 2010; Koegel, Singh, & Koegel, 2010; Oddo et al., 2010), and for some studies the fit indices favored a model with separate estimates of the autocorrelation and the level-1 error variances. Baek et al. (2016) also found that allowing the level-1 error structure to differ for one of the participants in a single-case study led to estimated individual trajectories that were more consistent with the visually plotted data and improved the model fit.
Because in some studies the cases appear to have different variances, while commonly adopted models assume homogeneity, it is critical to examine the consequences of modeling (and of not modeling) the heterogeneity. Thus, the purpose of this study is to extend MLM for single-case designs to allow between case variation in the level-1 error structure, and to identify the consequences of different methods of modeling the level-1 error structure.
Modeling Between Case Variation in Level-1 Error Structures
The two-level model that allows between case variation in the level-1 error structure in single-case designs can still be represented by Equations (1) and (2). When modeling between case variation in the level-1 error structure, the covariance structure Σe is assumed to have parameters that vary from one case (j) to the next, which can be re-expressed as Σej. More specifically, if an AR(1) structure is chosen, then the autocorrelation and level-1 error variance are estimated separately for each case.
Figure 1 illustrates the results of different ways of modeling the level-1 covariance structure Σe. Assume single-case data with three cases, with the AR(1) structure assumed for the covariance structure Σe. In scenario (1), the covariance structure Σe is assumed to be constant across cases; thus, the autocorrelation (ρ = .2) and the variance of the level-1 errors (σ² = 30) are estimated once, and these values are the same for all three cases. In scenario (2), the proposed approach, the covariance structure Σej is allowed to vary across cases, and the autocorrelation and the level-1 error variance are estimated separately for each case. Thus, each case has a unique autocorrelation and level-1 error variance.
Figure 1
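The contrast between the two scenarios can be expressed directly as covariance matrices, using the AR(1) form Cov(ei, ek) = σ²ρ^|i−k|. In the sketch below, scenario 1 uses the values given above (ρ = .2, σ² = 30), while the case-specific values in scenario 2 are hypothetical placeholders.

```python
import numpy as np

def ar1_cov(n, rho, sigma2):
    """AR(1) covariance matrix: Cov(e_i, e_k) = sigma2 * rho ** |i - k|."""
    idx = np.arange(n)
    return sigma2 * rho ** np.abs(idx[:, None] - idx[None, :])

n = 5  # illustrative series length

# Scenario 1: a single Sigma_e shared by all three cases
shared = ar1_cov(n, rho=0.2, sigma2=30.0)
scenario1 = [shared, shared, shared]

# Scenario 2: case-specific Sigma_ej (hypothetical per-case values)
scenario2 = [ar1_cov(n, r, s2) for r, s2 in [(0.10, 22.0), (0.20, 30.0), (0.35, 41.0)]]
```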
Bayesian Method of Estimating Heterogeneous Level-1 Error Structure
Restricted maximum likelihood (REML) estimation is the most commonly used method for analyzing multilevel models and has been implemented in several software procedures, making it easily accessible. However, REML may encounter estimation problems, such as non-convergence, when analyzing complex multilevel models of SCED data. The Bayesian approach provides a feasible option in computationally intensive scenarios through the use of Markov chain Monte Carlo (MCMC) procedures (e.g., Browne, 2008; Gilks, Richardson, & Spiegelhalter, 1996), which have the potential to resolve the non-convergence issues of REML estimation. Complex multilevel models of SCED data that failed to converge under REML have been estimated successfully with the Bayesian approach (Baek, Petit-Bois, & Ferron, 2013; Baek, 2015).
Bayesian inference is the process of fitting a probability model to the observed data and summarizing the uncertainty of the model parameters with probability distributions (Gelman, 2002). Bayesian estimation thus offers practical advantages because it takes into account the uncertainty in estimating both the fixed effects and the variance components (Gelman, Carlin, Stern, & Rubin, 2004). In addition to these computational and practical benefits, recent studies have indicated that the Bayesian approach has potential benefits for estimating effect sizes, analyzing nonlinear data, and estimating autocorrelation (Baek et al., 2019; Moeyaert, Rindskopf, Onghena, & Van den Noortgate, 2017; Rindskopf, 2014a; Rindskopf, 2014b; Shadish et al., 2013; Swaminathan, Rogers, & Horner, 2014). Baek et al. (2019) examined the impact of REML and Bayesian estimation on average treatment effect inferences in SCED studies using multilevel modeling and found that Bayesian estimation results were comparable to REML results. Moreover, Moeyaert et al. (2017) found that Bayesian estimation can lead to more precise variance component estimation than ML estimation under certain conditions. Thus, the current study was conducted using Bayesian estimation.
Method
A Monte Carlo simulation study was conducted to examine the performance of the proposed Bayesian analysis of single-case design data, which allows between case variation in the level-1 error variances and autocorrelation. Specifically, two-level models in which the level-1 error structures were modeled in different ways (i.e., not modeling vs. modeling between case variation) were compared in terms of the accuracy of the parameter estimates under Bayesian estimation. Multiple-baseline single-case data were generated from Equations (1) and (2) using the interactive matrix language procedure in SAS version 9.3. The true level-1 error structure was AR(1) and was either homogeneous, moderately heterogeneous, or severely heterogeneous across cases. The AR(1) error structure was selected as the true level-1 error structure because it is the most commonly applied correlated level-1 error structure in analyses of single-case data, and it is a relatively simple autocorrelated error structure (Ferron et al., 2009; Matyas & Greenwood, 1996). For the homogeneous conditions, level-1 errors were generated for each case with a level-1 error standard deviation (SD; σe) of 1.0 and an autocorrelation (ρ) of .2. The autocorrelation value of .2 was selected based on a review of the single-case design literature (Shadish & Sullivan, 2011). For the heterogeneous conditions, case-specific autocorrelation values (ρj) were generated from a normal distribution (moderately heterogeneous: M = 0.2, SD = 0.1; severely heterogeneous: M = 0.2, SD = 0.2), whereas the case-specific level-1 error SDs (σej) were generated from a uniform distribution (moderately heterogeneous: M = 1, SD = 0.17, lower and upper limits = 0.7 and 1.3; severely heterogeneous: M = 1, SD = 0.35, lower and upper limits = 0.4 and 1.6). The level-1 error SDs were generated such that the largest level-1 error variance could be as much as 3.5 times (moderate) or as much as 16 times (severe) the smallest level-1 error variance. These values were motivated by the analyses of real single-case design data sets presented by Baek and Ferron (2013) and Baek et al. (2016). The level-2 errors u0j, u1j, u2j, and u3j were generated independently from normal distributions with means of 0 and variances of either 0.5, 0.5, 0.05, and 0.05, or 2, 2, 0.2, and 0.2, respectively.
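The generation of case-specific level-1 parameters under the three conditions can be sketched as follows; the seed and the number of cases are arbitrary, and the distributions mirror those just described.

```python
import numpy as np

rng = np.random.default_rng(3)
n_cases = 4

# Homogeneous: every case shares sigma_e = 1.0 and rho = .2
sigma_hom = np.full(n_cases, 1.0)
rho_hom = np.full(n_cases, 0.2)

# Moderately heterogeneous: rho_j ~ N(.2, .1), sigma_j ~ U(0.7, 1.3)
rho_mod = rng.normal(0.2, 0.1, n_cases)
sigma_mod = rng.uniform(0.7, 1.3, n_cases)

# Severely heterogeneous: rho_j ~ N(.2, .2), sigma_j ~ U(0.4, 1.6)
rho_sev = rng.normal(0.2, 0.2, n_cases)
sigma_sev = rng.uniform(0.4, 1.6, n_cases)

# Largest-to-smallest possible variance ratios:
# (1.3 / 0.7) ** 2 ~ 3.5 (moderate) and (1.6 / 0.4) ** 2 = 16 (severe)
```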
The simulation conditions varied in series length (10 or 20) and number of cases (4 or 8) to cover the sample sizes typical of single-case studies. Conditions also varied in the level-2 error variance (more or less between case variability) and in the method used to analyze the data: either allowing the level-1 error structure to vary across cases (Model 2, the proposed model) or holding the level-1 error variance and autocorrelation constant across cases (Model 1). For each condition, 1,000 data sets were simulated, and parameter relative bias, RMSE, and credible interval (CI) coverage and width were estimated; a sketch of these performance measures follows this paragraph. The two models (Model 1 and Model 2) were used to analyze each generated data set in OpenBUGS. Although various software programs can run Bayesian estimation, including SAS, OpenBUGS was selected for several advantages, including the flexibility to handle the complexity of the models and possible computational efficiency from its Gibbs sampling method (Spiegelhalter, Thomas, Best, & Lunn, 2014).
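The four performance measures can be computed from the replications' point estimates and 95% credible intervals as in the sketch below; the helper name and the toy inputs are made up for illustration.

```python
import numpy as np

def performance(estimates, lowers, uppers, true_value):
    """Relative bias, RMSE, CI coverage, and CI width across replications.

    Relative bias divides by true_value, so it assumes a nonzero true value.
    """
    est = np.asarray(estimates, dtype=float)
    lo = np.asarray(lowers, dtype=float)
    hi = np.asarray(uppers, dtype=float)
    rel_bias = (est.mean() - true_value) / true_value
    rmse = np.sqrt(np.mean((est - true_value) ** 2))
    coverage = np.mean((lo <= true_value) & (true_value <= hi))
    width = np.mean(hi - lo)
    return rel_bias, rmse, coverage, width

# Toy example with a true shift in level of 25:
print(performance([24.8, 25.3, 25.1], [22.0, 22.4, 22.3], [27.5, 28.1, 27.8], 25.0))
```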
Prior Distributions for the Parameters
In Bayesian estimation, various priors can be chosen for each parameter. A noninformative prior distribution is considered a reasonable choice of objective prior because noninformative distributions let the data speak for themselves, so that posterior inferences are unaffected by external information (Berger, 2006; Gelman, 2002; Gelman et al., 2004; Goldstein, 2006; Jeffreys, 1961; Morris, 1983). For multilevel models, reasonable noninformative prior distributions have been developed for the fixed effect parameters (i.e., θ00, θ10, θ20, θ30), including the noninformative normal distribution (Gelman, 2006; Gelman et al., 2004). In general, noninformative normal distributions are constructed with a very large variance (e.g., 1000²) so that posterior inference is not influenced by the particular value chosen. Thus, in this study, all fixed effect parameters were given noninformative normal priors (M = 0, Var = 1000²).
Unlike priors for the fixed effect parameters, priors for the random effect parameters have been more difficult to construct. The choice of noninformative prior distribution for the level-2 error variances (i.e., σ²u0, σ²u1, σ²u2, σ²u3) and the level-1 error variance (σ²e) can have a more substantial impact on inferences, especially when j (the number of units at the higher level) is small (Gelman, 2002; Gelman, 2006).
Various noninformative and weakly informative prior distributions have been suggested for the variance parameters in multilevel models, including uniform, inverse-gamma family, inverse-Wishart, half-Cauchy, and half-t distributions (Baek et al., 2019; Berger & Strawderman, 1996; Daniels & Kass, 1999; Gelman, 2006; Moeyaert et al., 2017). Moeyaert et al. (2017) constructed various weakly informative priors for the variance components of SCED data and found biased and less precise variance estimates when the number of cases was small; increasing the number of cases yielded more precise estimates. Similarly, Gelman (2006) found that when the uniform distribution (a noninformative prior) is placed on the standard deviation (SD) rather than the variance of the level-2 errors, it generally performs well, as long as the number of higher-level units is large enough to ensure a proper posterior density. Baek et al. (2019) examined the impact of estimation methods in multilevel SCED models using noninformative priors for the variance parameters, and their results were similar to those of the study that constructed weakly informative priors. Based on these suggestions, the priors for the SDs of the level-2 errors (σu) and the level-1 errors (σe) were set to uniform distributions (lower limit = 0, upper limit = 100) in this study. Since all of the level-2 and level-1 error SD values were generated to be less than 2, an upper limit of 100 is large enough to have minimal impact on the posterior distribution.
Similarly, a noninformative prior for ρ following a normal distribution with M = 0 and SD = 1000 was assigned. However, because ρ is a correlation parameter, the prior for ρ was restricted to the stationary region, so that its range falls between -1 and 1 (Gamerman & Lopes, 2006).
For the proposed model, the priors for the level-1 error SD that varies across cases (σej) and the autocorrelation that varies across cases (ρj) were constructed as follows:
$$\sigma_{e_j} \sim U(L_\sigma,\, U_\sigma), \qquad \rho_j \sim N(\mu_\rho,\, \sigma_\rho^2)\, I(-1 < \rho_j < 1) \tag{3}$$
The prior for σej was assumed to follow a uniform distribution with lower limit Lσ and upper limit Uσ, and Lσ and Uσ were themselves given uniform hyperpriors. The prior for ρj was assumed to follow a normal distribution with mean μρ and variance σ²ρ, restricted to the range between -1 and 1. The hyperpriors for μρ and σρ were a normal distribution and a uniform distribution, respectively.
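As a prior-predictive illustration of this hierarchy, the following Python sketch draws case-specific values consistent with Equation (3). The hyperprior bounds and the seed are placeholders, not necessarily the study's exact settings, and the truncation of ρj to (-1, 1) is implemented with a truncated normal.

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(4)
n_cases = 4

# Hyperprior draws (bounds here are hypothetical placeholders)
L = rng.uniform(0.0, 2.0)        # L_sigma ~ Uniform
U = rng.uniform(L, 10.0)         # U_sigma ~ Uniform, kept above L_sigma
mu_rho = rng.normal(0.0, 1.0)    # mu_rho ~ Normal
sd_rho = rng.uniform(0.01, 1.0)  # sigma_rho ~ Uniform (bounded away from 0 for stability)

# Equation (3): case-specific level-1 parameters
sigma_j = rng.uniform(L, U, n_cases)                      # sigma_ej ~ U(L_sigma, U_sigma)
a, b = (-1.0 - mu_rho) / sd_rho, (1.0 - mu_rho) / sd_rho  # standardized truncation points
rho_j = truncnorm.rvs(a, b, loc=mu_rho, scale=sd_rho,
                      size=n_cases, random_state=rng)     # rho_j restricted to (-1, 1)
```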
Results
One data set per condition of the design factors (24 conditions) was first generated to test convergence and to make decisions about the number of iterations and the burn-in period. Based on these test results, it was decided to use a burn-in of 2,000 iterations and to run an additional 500,000 iterations, retaining 50,000 of the 500,000 draws after thinning to form the posterior distribution for the main analyses. Various diagnostic criteria were used to monitor convergence, including trace plots, history plots, kernel density plots, and Brooks-Gelman-Rubin (BGR) plots for the simulated data sets using two different MCMC chains. No signs of non-convergence were found.
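The BGR diagnostic is based on the potential scale reduction factor, which compares between-chain and within-chain variability. A simplified NumPy sketch of the basic statistic is shown below; the function name is ours, and OpenBUGS's plotted version is computed over iteration windows rather than once.

```python
import numpy as np

def psrf(chains):
    """Potential scale reduction factor (R-hat) for equal-length chains.

    chains: array-like of shape (m, n) -- m chains of n retained draws.
    Values near 1 (commonly below 1.1) are consistent with convergence.
    """
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    B = n * chains.mean(axis=1).var(ddof=1)   # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()     # within-chain variance
    var_hat = (n - 1) / n * W + B / n         # pooled posterior variance estimate
    return np.sqrt(var_hat / W)
```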
Fixed Effects
The relative bias values for the treatment effects (shift in level and shift in slope) were compared across the two models. The average relative bias values were minimal, and the RMSE values were comparable for both models. The average relative bias values were smaller than 5% for both treatment parameters (Model 1: shift in level = .001, shift in slope = .017; Model 2: shift in level < .001, shift in slope = .015), which is considered minimal according to Hoogland and Boomsma's (1998) criteria. The average RMSE value for the shift in level was 0.68 for both models, and the average RMSE value for the shift in slope was 0.23 for Model 1 and 0.22 for Model 2 (the proposed model). In addition, the average credible interval (CI) coverage and width values for the treatment effects were similar across the two models. The CI coverage exceeded .95 for both models; the average CI width for the shift in level was 4.96 for Model 1 and 4.95 for Model 2, and the average CI width for the shift in slope was 1.68 for both models.
Variance Components
The average relative bias values of the variance components for the two models are provided in Table 1, along with values for RMSE, CI coverage, and CI width. Consistent with previous findings for SCED data analyzed with MLM using both REML and Bayesian estimation, the estimates of the variance components were biased. Although the biases were often comparable across models, the average relative bias for the level-1 error SD was noticeably smaller for Model 2 (.06) than for Model 1 (.10). To further examine bias, RMSE, CI coverage, and CI width, ANOVAs were run to estimate the proportion of variance in these outcomes associated with each factor in the simulation study (i.e., η² values) and to identify factors with a medium effect of .06 or larger. The interaction between the true level-1 error structure and the type of model accounted for substantial variation in the average bias (η² = .10) and the average RMSE (η² = .16) of the level-1 error SD (Figure 2). Specifically, for Model 1, mean bias and RMSE increased consistently and substantially as the true level-1 error structure shifted from homogeneous to moderately and then severely heterogeneous, whereas for Model 2, changes in the true level-1 error structure led to smaller changes in mean bias and RMSE.
Table 1
Average Relative Bias, RMSE, CI Coverage, and CI Width of the Variance Component Estimates

| Outcome | Level-2 error SD (shift in level), Model 1 | Level-2 error SD (shift in level), Model 2 | Level-2 error SD (shift in slope), Model 1 | Level-2 error SD (shift in slope), Model 2 | Level-1 error SD, Model 1 | Level-1 error SD, Model 2 | Autocorrelation, Model 1 | Autocorrelation, Model 2 |
|---|---|---|---|---|---|---|---|---|
| Bias | 0.89 | 0.88 | 0.99 | 0.97 | 0.10 | 0.06 | -0.60 | -0.48 |
| RMSE | 1.23 | 1.21 | 0.42 | 0.41 | 0.22 | 0.18 | 0.26 | 0.25 |
| CI coverage | 0.97 | 0.97 | 0.97 | 0.98 | 0.85 | 0.97 | 0.81 | 0.94 |
| CI width | 6.43 | 6.44 | 1.62 | 2.16 | 0.47 | 0.81 | 0.74 | 1.10 |

Note. Model 2 is the proposed model.
Figure 2
The average CI coverage values of the level-2 error SDs for the treatment effects were above the nominal value (.95) for both models. The interaction between the type of model and the true level-1 error structure showed that the way the level-1 error structure was modeled had a substantial impact on the average CI coverage of the level-1 error SD and the autocorrelation (η² = .19 and .07, respectively). As illustrated in Figure 3, when the true level-1 error structure was homogeneous, CI coverage was above the nominal level for both models. However, when the true level-1 error structure was heterogeneous, CI coverage remained close to the nominal level (.95) for Model 2, while it decreased substantially below the nominal level for Model 1. Model 2, however, had larger CI widths than Model 1.
Figure 3
A Follow-Up Study
In the main study, data with a heterogeneous level-1 error structure had been generated such that every case had a unique value of the level-1 error SD and autocorrelation within a specified range. However, it is conceivable that in a real data set the values are not evenly spread out; instead, one or two cases may have substantially larger variance than the others (Baek et al., 2016). Therefore, it seemed worthwhile to expand the simulation to include such a condition. Thus, a follow-up study was conducted in which data were generated so that one case differed substantially from the others in the level-1 error variance (16 times larger) and the autocorrelation (either half [.2] or twice [.4] that of the other cases). The follow-up study comprised 8 conditions formed from the three factors shown in Table 2; a data-generation sketch follows the table.
Table 2
Design Factors for the Follow-Up Study

| Factor | Levels |
|---|---|
| Autocorrelation | .2 and .4, or .4 and .2 |
| Number of cases and series length per case | 4 and 10, or 8 and 20 |
| Method of modeling the level-1 error structure | Not modeling between case variation (Model 1) or modeling between case variation (Model 2) |
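A minimal sketch of the deviant-case generation scheme is shown below; the number of cases and which case is deviant are arbitrary choices for illustration.

```python
import numpy as np

n_cases = 4

# All but the last case share sigma^2 = 1 and rho = .2; the deviant case has
# 16 times the variance and twice the autocorrelation (.4). The mirrored
# condition instead gives the other cases rho = .4 and the deviant case rho = .2.
sigma2 = np.full(n_cases, 1.0)
rho = np.full(n_cases, 0.2)
sigma2[-1] = 16.0 * sigma2[0]
rho[-1] = 2.0 * rho[0]
```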
The results for all of the simulated conditions for the fixed treatment effects and variance components are provided in Table 3. Unlike the main study, the follow-up study found that the method of modeling the level-1 error structure had an impact on the estimates of both the fixed treatment effects and the variance components. Although bias estimates for the fixed effects were minimal regardless of the model used, the RMSE values for the fixed treatment effects were generally smaller for the proposed model. The modeling method also affected the estimates of the level-2 error SDs: the proposed model showed generally smaller average bias and smaller RMSE values than Model 1. As in the main study, the proposed model generally provided better estimates of the level-1 error SD and the autocorrelation.
Table 3
Bias, RMSE, CI Coverage, and CI Width for the Follow-Up Study

| Parameter | Bias, Model 1 | Bias, Model 2 | RMSE, Model 1 | RMSE, Model 2 | CI coverage, Model 1 | CI coverage, Model 2 | CI width, Model 1 | CI width, Model 2 |
|---|---|---|---|---|---|---|---|---|
| Fixed treatment effects | | | | | | | | |
| Shift in level | < -0.01 | < -0.01 | 1.01 | 0.73 | > 0.99 | 0.99 | 7.11 | 5.80 |
| Shift in slope | 0.02 | 0.01 | 0.39 | 0.26 | 0.98 | 0.97 | 3.10 | 2.47 |
| Variance components | | | | | | | | |
| Level-2 error SD: shift in level | 2.31 | 1.80 | 2.12 | 1.55 | 0.93 | 0.98 | 9.08 | 7.70 |
| Level-2 error SD: shift in slope | 3.74 | 2.67 | 1.09 | 0.74 | 0.91 | 0.98 | 1.68 | 3.42 |
| Level-1 error SD | 0.60 | 0.10 | 1.24 | 0.51 | 0.03 | 0.93 | 0.85 | 1.53 |
| Autocorrelation | -1.06 | -0.87 | 0.41 | 0.33 | 0.50 | 0.84 | 0.67 | 1.10 |

Note. Model 2 is the proposed model.
An Empirical Example of the Proposed Model
A published single-case study (Resetar, Noell, & Pellegrin, 2006) was selected to obtain data for an illustrative analysis. Resetar and colleagues (2006) conducted a multiple-baseline design across five first-grade students who were reading below grade level. The purpose of their study was to investigate the effect of parent-delivered reading tutoring on children's reading fluency, measured as words read correctly per minute (WCPM).
The raw data were extracted from the graphs presented in the article using DataThief III (Tummers, 2006). Model 1 and Model 2 (the proposed model) were then specified using Equations (1) and (2), with OpenBUGS and the same priors as in the simulation study. Ten thousand iterations were discarded as burn-in, and 700,000 iterations for Model 1 and 800,000 iterations for the proposed model were run to form the posterior distributions. The previously mentioned diagnostic criteria showed no signs of non-convergence.
The results for Model 1 and Model 2 are compared in Table 4 below. Consistent with the simulation studies, which indicated that both the fixed effects and the variance components could differ across the two models, the results from the empirical data differed across models. First, the treatment effect estimates were somewhat different: the shift in level (θ10) was 25.38 for Model 1 and 22.76 for the proposed model, and the change in trend (θ30) was -0.31 for Model 1 and -0.69 for the proposed model. The variance components also differed across the two models. The level-2 error SDs were higher for the proposed model than for Model 1: the SD for the shift in level was 11.73 for Model 1 and 14.85 for the proposed model, and the SD for the difference in trends (shift in slope) was 1.00 for Model 1 and 1.07 for the proposed model. The average level-1 error SD was slightly higher for the proposed model, 13.51 for Model 1 versus 14.00 for the proposed model; in the proposed model, the level-1 error SD varied across cases, ranging from 10.66 to 17.23. The estimated average autocorrelation (ρ) was 0.34 for Model 1 and 0.29 for the proposed model; in the proposed model, the autocorrelation varied across cases, ranging from 0.13 to 0.36.
Table 4
Parameter Estimates for Model 1 and Model 2 (Proposed Model) for the Empirical Example

| Parameter | Model 1 | Model 2 |
|---|---|---|
| Variance components | | |
| Level-2 error SD: Intercept | 14.24 | 12.88 |
| Level-2 error SD: Phase (shift in level) | 11.73 | 14.85 |
| Level-2 error SD: Time (baseline slope) | 0.85 | 1.08 |
| Level-2 error SD: Interaction (shift in slope) | 1.00 | 1.07 |
| Level-1 error SD (average) | 13.51 | 14.00 |
| AR(1) autocorrelation (average) | 0.34 | 0.29 |
| Level-1 error SD, Person 1 | - | 10.66 |
| AR(1), Person 1 | - | 0.13 |
| Level-1 error SD, Person 2 | - | 11.56 |
| AR(1), Person 2 | - | 0.36 |
| Level-1 error SD, Person 3 | - | 13.07 |
| AR(1), Person 3 | - | 0.30 |
| Level-1 error SD, Person 4 | - | 11.35 |
| AR(1), Person 4 | - | 0.32 |
| Level-1 error SD, Person 5 | - | 17.23 |
| AR(1), Person 5 | - | 0.34 |
| Fixed effects | | |
| Intercept (θ00) | 27.59 | 24.78 |
| Phase, shift in level (θ10) | 25.38 | 22.76 |
| Time, baseline slope (θ20) | 0.31 | 0.79 |
| Interaction, shift in slope (θ30) | -0.31 | -0.69 |
Discussion and Recommendations
This study developed a method for estimating between case heterogeneity in the level-1 error variances and autocorrelation, and it provides insight into how different approaches to modeling the level-1 error covariance matrices impact statistical inferences. The results indicate that the method of modeling the level-1 error structure can affect the variance components and, in some cases, both the fixed effects and the variance components. This finding distinguishes the present study from previous work investigating other misspecifications of the level-1 error structure: previous studies found the fixed effects to be generally robust to such misspecifications, whereas this study found that misspecifying the level-1 error structure can affect the fixed effects when one or more cases have substantially different variability than the others. In addition, this study suggests that the accuracy and precision of the variance components can be improved by modeling between case variation in the level-1 error structure, and that the benefit of doing so increases as the degree of heterogeneity in the data increases.
The results of this study lead to various implications for applied single-case researchers as well as for methodologists. Single-case researchers can feel comfortable interpreting the overall treatment effects when their data show no more than a moderate degree of between case variation in the within-case variance, regardless of whether between case heterogeneity has been explicitly modeled. However, when the heterogeneity is more severe, and particularly when one case has variance very different from the others (as in the follow-up study), researchers are encouraged to model the variation in the level-1 error variances. Doing so can lead to less error in the estimate of the average treatment effect and less bias in the variance estimates. To facilitate modeling between case variation in the level-1 error variances, this study developed a Bayesian model and corresponding OpenBUGS code, which applied researchers can use in their own research (see Supplementary Materials).
This study also provides implications for methodologists who study the use of multilevel modeling for single-case data analyses. Although this study offers some initial guidance about the use of multilevel modeling for single-case data when there are differences in the within-case variation, the Monte Carlo study was limited by the conditions that were examined. The conditions were chosen based on a review of the single-case literature and of applied studies that used two-level models to analyze single-case data, but only some of the possible options were included. More simulation work is needed to document the impact of various modeling options across a broader range of heterogeneous data conditions, including conditions involving more complex dependent error structures than AR(1). In addition, it is not clear to what degree biases may be reduced through alternative priors, or may be less pronounced in contexts where simpler models without trends are appropriate. By contrasting conditions and models with and without trends, along with estimation incorporating a wider range of prior distributions (in particular, more informative priors for the random components), future research could further refine modeling advice.