Measurement invariance assesses whether a latent variable is measured equivalently across groups. This equivalence indicates that a measure quantitatively has the same meaning to each group and is therefore measuring the same construct in the same way across groups. Demonstrating measurement invariance is vital for the generalizability of psychometric measurement. For any measure where validity and reliability are necessary, tests for measurement invariance prior to administration across qualitatively distinct groups or time points should be conducted (Vandenberg & Lance, 2000). Many researchers claim that comparisons between cultures, administration modes, language versions, or sociodemographic groups cannot be credibly interpreted unless a measure demonstrates invariance (Borsboom, 2006). If measurement invariance is violated, score differences between groups can be the result of measurement rather than the true latent variable (Chen, 2007).
Traditionally, measurement invariance is tested using either Item Response Theory (IRT) or Structural Equation Modeling (SEM; Stark et al., 2006). Because SEM is more prevalent across psychological domains (Putnick & Bornstein, 2016), we narrow our focus to this framework. Within SEM, four consecutive tests are used to establish measurement invariance: configural (equivalence of factor structure), metric (equivalence of factor loadings), scalar (equivalence of item intercepts), and strict (equivalence of item residuals; Widaman & Reise, 1997). For a measure to be considered fully invariant, it must pass each of these tests.
The current paper presents a method to test metric invariance using network psychometrics in the Exploratory Graph Analysis (EGA) framework (Golino et al., 2020; Golino & Epskamp, 2017). First, a brief overview of the limitations associated with testing for measurement invariance in traditional psychometrics, focusing on metric invariance, is provided. Afterward, EGA is introduced and the proposed method to test metric invariance is discussed.
Measurement Invariance in Traditional Psychometrics
Factorial Invariance
Testing for factorial (measurement) invariance is conducted by comparing a more constrained model to the previous stage's less constrained model, moving from a weaker to a stronger level of invariance (e.g., configural model to metric model), using a likelihood ratio test. The constrained model sets relevant parameters (e.g., loadings) to be equal across groups, and its fit is compared to the unconstrained model in which the same parameters are estimated freely. If the constrained model does not fit significantly worse than the unconstrained model, then invariance at that level is established. This process starts by testing configural invariance.
Configural invariance (factor structure equivalence) is established by assessing the fit of a Multi-Group Confirmatory Factor Analysis model. Following R. E. Millsap (2011), let the common factor model be defined as:
$$X_{jk} = \gamma_{jk} + \sum_{m=1}^{M} \lambda_{jmk} W_m + U_j \tag{1}$$

where γjk is the latent intercept for variable j in population k, λjmk is the factor pattern loading for variable j corresponding to the M common factors in population k, Wm represents the common factor score for factor m, and Uj is the unique factor score for variable j. It is assumed that E(U) = 0 and that U is uncorrelated with W.
The unconditional mean and covariance structure for the measured variables X can then be expressed as:
$$\mu_k = \gamma_k + \Lambda_k \kappa_k \tag{2}$$

and

$$\Sigma_k = \Lambda_k \Phi_k \Lambda_k' + \Theta_k \tag{3}$$

where κk = E(W) is the vector of common factor means, Φk is the covariance matrix of the common factors, and Θk is the (diagonal) covariance matrix of the unique factors. Finally, the test for configural invariance can be defined as

$$\mu_k = \gamma_k + \Lambda_k^c \kappa_k \tag{4}$$

and

$$\Sigma_k = \Lambda_k^c \Phi_k \Lambda_k^{c\prime} + \Theta_k \tag{5}$$

for k = 1, 2, …, K, where Λkc denotes the pattern loading matrices that have the same factor structure (i.e., the same pattern of fixed and free loadings) under configural invariance.
This formula implies that each population has the same number of factors containing the same distribution of variables. If the model fits satisfactorily on all groups, then the organization of items into constructs is appropriate for all groups (Putnick & Bornstein, 2016). In other words, configural invariance is established such that the pattern of zero and nonzero loadings (fixed and free loadings) exists in all groups (Widaman & Reise, 1997). Configural invariance only demonstrates that similar, not equivalent, latent factors exist in all groups. Similar latent factors contain the same items across groups but do not necessarily imply that the groups have equivalent loadings, intercepts, or error terms. Testing for equivalent loadings is the next step.
Metric invariance (loading equivalence) can be defined as
$$\mu_k = \gamma_k + \Lambda \kappa_k \tag{6}$$

and

$$\Sigma_k = \Lambda \Phi_k \Lambda' + \Theta_k \tag{7}$$

for k = 1, 2, …, K, where Λ is a single pattern loading matrix common to all K populations. This model constrains loadings to be equivalent across groups and is then compared to the configural model (the unconstrained model). If the metric invariance model does not fit significantly worse, then each item contributes to its respective latent factor (and the overall latent construct) similarly across all groups (Putnick & Bornstein, 2016). If metric invariance is not established, then comparisons of factor variances and covariances (and subsequently scaled correlations) across groups cannot be made (Widaman & Reise, 1997). Without metric invariance, testing for scalar and strict invariance should not be conducted; however, testing for partial invariance of loadings is often appropriate.
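For illustration, this comparison can be run in {lavaan}; the sketch below is minimal, and the model syntax, data frame df, and grouping variable "group" are hypothetical placeholders:

# Minimal sketch of the configural-to-metric comparison; df and "group"
# are hypothetical
library(lavaan)

model <- "F1 =~ x1 + x2 + x3 + x4 + x5 + x6"

# Configural model: same structure, parameters free across groups
configural <- cfa(model, data = df, group = "group")

# Metric model: loadings constrained to equality across groups
metric <- cfa(model, data = df, group = "group", group.equal = "loadings")

# Likelihood ratio (chi-square difference) test of the nested models
anova(configural, metric)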
Partial Invariance
Partial invariance occurs when only a portion of a parameter set lacks invariance. For metric invariance, the goal is to determine how many loadings lack invariance in each latent factor. Opinions vary on what level or proportion of partial invariance is permissible (Putnick & Bornstein, 2016). Testing for partial invariance can be useful to provide a more fine-grained perspective on which specific item parameters are noninvariant. In the case of metric invariance, individual constraints can be selectively introduced to Λ (loadings) and tested. It’s possible that metric invariance is not found because only one item’s loading is noninvariant across groups.
If partial invariance is found, at any level, then the researcher must determine (based on substantive reasoning or empirical criteria) how to handle instances of noninvariance. Arguably, identifying specific instances of noninvariance provides more useful information than an omnibus test for invariance, which only indicates whether invariance exists across the parameters as a set but not for any one parameter specifically. Testing for partial invariance provides the same level of information as an omnibus test (whether noninvariance is present) but also indicates where, if anywhere, noninvariance exists.
It’s possible that partial invariance testing could identify noninvariance not identified by an omnibus test. This problem is well documented for omnibus tests (Raykov et al., 2013), and the potential effects of misidentifying items as invariant can be consequential. Prior research has indicated that conducting individual local tests can lead to a more accurate evaluation of noninvariance (Jung & Yoon, 2016; Raykov et al., 2020; Stark et al., 2006). Therefore, examining local tests rather than relying on overall global testing provides more detailed information and lowers the risk of Type II errors.
There are several methods available to test partial invariance. Some methods use referent indicators or assume a specific indicator is invariant a priori, which presents many issues. These issues and some potential solutions are discussed in the next section; however, due to the issues associated with the selection of a referent, our study focuses on methods that test partial invariance that do not require the selection of a single referent indicator. Instead, we consider the following methods: factor-ratio test (Rensvold & Cheung, 1998), a data-driven, sequential application of the modification index proposed by Yoon and Millsap (2007), and a method using a multiple testing procedure proposed by Raykov et al. (2013).
The factor-ratio test assesses partial invariance by comparing a fully unconstrained model to versions of a constrained model. Multiple constrained models are defined using all possible combinations of referent variables, choosing one of the remaining variables to test for invariance in each. A simulation study conducted by French and Finch (2008) found that this method controls false positive rates well across data conditions and can successfully identify invariant items even when noninvariant items are present in the same factor. This procedure is computationally demanding, however, as it investigates all possible combinations of referent indicators.
Yoon and Millsap (2007) proposed a data-driven method that sequentially evaluates modification indices. Within a fully constrained metric model, the factor variance of only one group is fixed to one, the factor variance of the other group is estimated freely, and the factor loadings of both groups are constrained. Modification indices are evaluated to estimate the change in χ² when each fixed parameter is freed. If an invariance constraint shows a significant modification index, then that parameter is relaxed. The process continues until all modification indices are non-significant. Their simulation study found that this method controls false positive rates very well but primarily in "ideal" data conditions (large sample sizes, greater differences in loadings, low cross-loadings). A limitation of this approach is that model misspecifications can lead to artificial inflation of Type I error rates (Kim & Yoon, 2011; Whittaker, 2012), especially as model modifications are made throughout the testing process (Yoon & Millsap, 2007).
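In {lavaan}, score tests provide one way to approximate this sequential evaluation; the sketch below reuses the metric model from the sketch above and is an approximation rather than Yoon and Millsap's exact implementation:

# Score tests for each across-group equality constraint in the
# loadings-constrained model; a significant constraint would be freed and
# the model re-estimated, repeating until none are significant
lavTestScore(metric)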
Finally, Raykov et al. (2013) introduced a multiple comparison method which uses the Benjamini-Hochberg procedure (BH-procedure; Benjamini & Hochberg, 1995) to control the Type I error rate introduced by multiple comparisons. The method compares two models using χ² difference testing. One model (the baseline model) is a fully constrained model; the other model frees one set of parameters (e.g., loadings) across groups. The two models are compared, and this process is repeated for all parameters, so the number of tests conducted is equal to the number of variables. Zhang and Yang (2022) found that this method maintains high rates of power to detect noninvariance across varying data conditions (sample size, degree of noninvariance, proportion of noninvariance, and location of noninvariance). Although this method circumvents the choice of a referent indicator, the use of a fully constrained baseline model (i.e., including any model with constrained noninvariant items) could negatively impact accuracy (Benjamini & Hochberg, 1995). Given the cumbersome nature of the factor-ratio test and the model misspecification limitations of the data-driven approach proposed by Yoon and Millsap (2007), we chose to focus on the multiple comparison method of Raykov et al. (2013) in this study.
Problems With Traditional Testing
χ² goodness-of-fit statistics are commonly used across all four measurement invariance tests, including tests of partial invariance. Putnick and Bornstein (2016) examined model fit alternatives such as the Root Mean Square Error of Approximation (RMSEA), Standardized Root Mean-square Residual (SRMR), Comparative Fit Index (CFI), and Tucker-Lewis Index (TLI), finding that the choice of criterion could impact the discovery rate of invariant indicators. Importantly, model fit indices can further be impacted by disparate sample sizes across groups (Chen, 2007; Kaplan & George, 1995).
Another concern is that each stage requires certain decisions to be made about the model specification which, if made incorrectly, can have unanticipated consequences. A referent indicator used in partial metric invariance testing, for example, is assumed to be invariant, an assumption which, if violated, can adversely impact model interpretation (Johnson et al., 2009). Using these traditional methods, this assumption of invariance is not frequently tested, most likely due to the complicated nature of the methods available to test it (Finch & French, 2008). It is often unknown which items are invariant a priori. Procedures have been developed to identify which items are noninvariant prior to selecting a referent indicator (Cheung & Lau, 2012; Cheung & Rensvold, 1999; Rensvold & Cheung, 2001). These tests, however, can be quite complicated from both a conceptual and an implementation standpoint, with varying evidence of their statistical power (French & Finch, 2006; Jung & Yoon, 2016).
Lack of reporting or proper specification has been found in one out of four studies employing measurement invariance tests (Schroeders & Gnambs, 2020). After analyzing the components of each study, the researchers found that the most influential predictor of model misspecification was the software used, concluding that a dearth of statistical training among psychologists was the cause. R. Millsap and Olivera-Aguilar (2012) similarly pointed out that both the skill and experience levels of researchers have strong impacts on the effectiveness of testing measurement invariance.
In light of these findings, our study proposes a method that does not require intensive model specifications and model comparisons, with all model parameters tested without the need to introduce further testing or adjustments. Additionally, this method is straightforward to implement in the popular R statistical software (R Core Team, 2020). The primary goal of the current work is to provide a method to test metric invariance in the EGA framework.
Exploratory Graph Analysis
Network psychometric methods are an alternative to latent variable modeling. Networks represent variables as nodes (circles) and their relationships (e.g., partial correlations) as edges (lines). Because the relationships between variables are not known a priori, they must be estimated. There are many methods to estimate a network with the graphical least absolute shrinkage and selection operator (GLASSO; Epskamp & Fried, 2018; Friedman et al., 2008) being one of the most common.
A key feature of network models is that each node is (usually) not connected to all other nodes (known as sparsity). Often, some nodes are more densely connected to each other relative to other nodes in the network. These sets of connected nodes are often referred to as communities, which are consistent with latent factors when data are generated from a factor model (Golino & Epskamp, 2017). Community detection algorithms are a common, data-driven way to identify communities in networks (Fortunato, 2010). The combination of the GLASSO with the Walktrap community detection algorithm (Pons & Latapy, 2006) has been labeled as Exploratory Graph Analysis (Golino et al., 2020; EGA; Golino & Epskamp, 2017) in the network psychometrics literature.
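As a minimal illustration, EGA can be estimated with the {EGAnet} package; the 25 bfi personality items from {psych} serve purely as example input:

# Load packages
library(EGAnet)
library(psych)

# Estimate EGA (GLASSO network + Walktrap communities)
ega <- EGA(data = na.omit(bfi[, 1:25]))

# Community (dimension) membership for each item and number of communities
ega$wc
ega$n.dim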
Across the broader field of network psychometrics, several methods have been developed to identify differences in network structure (Van Borkulo et al., 2022; Williams et al., 2020) and sub-groups (Danaher et al., 2014; Haslbeck & Bork, 2022; Jones et al., 2020). Although these methods aim to identify differences between networks, they all tend to treat the networks as unidimensional—that is, these methods do not account for the community structure of the network. Therefore, unless the construct is assumed to be unidimensional, the detected differences are unlikely to parallel traditional measurement invariance procedures. Establishing community structure and a metric consistent with factor loadings is key for developing such a comparison method.
Recent work has demonstrated that a node's strength, or the absolute sum of a node's connections to other nodes, is related to confirmatory factor analysis (CFA) loadings (Hallquist et al., 2021). Hallquist et al. (2021) found, however, that the strength of a node is comprised of both dominant and cross-loadings. To circumvent this issue, Christensen and Golino (2021b) proposed a measure called network loadings that splits a node's strength based on the dimensions identified by EGA. In their simulation, they found that this measure was consistent with factor loadings when data were generated by a factor model. The development of network loadings opened the door for broader measurement evaluation within network psychometrics, such as item selection, weighted between-person scores, and hierarchical dimensionality assessment (Christensen & Golino, 2021b; Jiménez et al., 2023).
The goal of this study is to leverage these network loadings to establish a method within the network psychometric framework to test for measurement invariance. The extent of measurement invariance within the network psychometrics framework only includes configural and metric invariance because latent variables are not estimated using networks. Consequently, intercepts and residual variances are not feasible because there are no latent factors created. Therefore, measurement invariance using network psychometrics, like network loadings, is a heuristic for configural and metric invariance in latent factors rather than a direct equivalent.
Present Research
Configural Invariance
Before introducing the proposed method to test metric invariance, configural invariance must be established first. Configural invariance in the EGA framework exists when the same nodes have been partitioned into the same communities for all groups. This task can be initially tested in a cursory way by estimating EGA separately for each group and comparing their structures. Even if the initial structure as defined by EGA indicates configural invariance, further testing should be conducted to minimize any effects of sampling variability. In other words, additional testing should be conducted to test if items are consistently organized into the same communities or if the number of communities and their structure fluctuates.
Bootstrap EGA (bootEGA; Christensen & Golino, 2021a) produces a sampling distribution of EGA results that can be used to evaluate the stability of the identified structure. One statistic, called structural consistency, assesses the proportion of bootstraps in which the exact same structure as the initial EGA was recovered. If the groups are pooled together into one sample, higher structural consistency indicates that it is more likely for this structure to be representative of the population structure for all groups. Lower structural consistency indicates that configural noninvariance may be present. Additionally, if varying the number of samples drawn in bootEGA (or even which specific samples are drawn) shows structural variation, configural noninvariance may be present.
Structural consistency can be further broken down to assess the stability of items (the proportion of bootstraps in which an item was assigned to the same dimension as in the original EGA). Items showing a stability below 0.70 are considered to be unstable (Christensen & Golino, 2021a). If there are distinct groups in a sample, then it is expected that each resample will have a different proportion of cases from each group. Therefore, if each group has a different configuration of assignment of nodes to communities, this lack of configural invariance will appear as items showing instability in community assignment. To reach configural invariance, items displaying instability should be removed.
To test for configural invariance in the EGA framework, a straightforward approach is to conduct bootEGA on the entire sample and remove items with stability below 0.70. Without these items, bootEGA can be re-applied to identify any further items contributing to instability. This process should be repeated until a consistent common structure (i.e., all item stabilities at or above 0.70) can be identified across all groups within a sample. Importantly, this approach does not allow for partial configural invariance.
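A sketch of this configural step with {EGAnet} is shown below; accessor and argument names follow recent package versions and may differ:

# Bootstrap EGA to generate a sampling distribution of structures
boot <- bootEGA(data = na.omit(bfi[, 1:25]), iter = 500, seed = 1)

# Proportion of bootstraps in which each item replicated its original dimension
stability <- itemStability(boot)

# Items with stability below 0.70 would be removed and bootEGA re-applied
# until all remaining items reach the threshold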
Metric Invariance in the EGA Framework
Once configural invariance is established, metric invariance can be tested. The proposed method tests the equivalence of network loadings across groups via permutation testing. Permutation testing has many advantages over traditional hypothesis testing approaches. Permutation tests make no parametric assumptions about populations, making them more flexible and robust to parametric deviations (Chihara & Hesterberg, 2022). Further, permutation tests can be applied to any test statistic, providing flexibility to adapt the model to any hypothesis or statistic (Chihara & Hesterberg, 2022; Ludbrook & Dudley, 1998). To elaborate on our procedure, we first must define network loadings.
Let W represent a symmetric v × v network made up of edge weights (e.g., partial correlations) where v is the number of variables. Node strength is then defined as
$$S_i = \sum_{j=1}^{v} |w_{ij}| \tag{8}$$

where |wij| is the absolute weight between nodes i and j, and Si is node i's strength, or the sum of the absolute weights between node i and all other nodes. Node strength can then be split between the communities identified by EGA:
$$S_i = \sum_{c=1}^{C} \ell_{ic} \tag{9}$$
where ℓic is the sum of the edge weights in community c that are connected to node i (i.e., node i’s loading for community c), and C is the number of estimated communities.
This formulation computes the absolute sum of a node’s connections to each community resulting in within (assigned) and between (non-assigned) community strengths. In other words, a node’s strength is divided into its connections to each community in the network. Equation (9) can be standardized using the following formula:
$$\aleph_{ic} = \frac{\ell_{ic}}{\sqrt{\sum_{j \in c} \ell_{jc}}} \tag{10}$$

where the denominator is equal to the square root of the sum of all the weights for the nodes in community c.
Standardized loadings, ℵ, are absolute weights and, as is done in factor analysis, the signs are added after the loadings are computed (Comrey & Lee, 2013). However, unlike factor analysis, the number of communities is extracted from the network's structure before the network loadings are computed. Additionally, variables have already been assigned to a community rather than being assigned to the community for which they have the highest loading (as is done in factor analysis). Due to a network's sparsity (i.e., lack of edges between some nodes), it is possible for a node to have a network loading of zero for some communities because it does not have any connections to the nodes in those communities.
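A base R sketch of Equations 8 through 10 is given below; the packaged implementation is the net.loads() function in {EGAnet}, and here W is assumed to be a symmetric weight matrix with a zero diagonal and wc a community membership vector:

# Compute node strengths and standardized network loadings (Equations 8-10)
network_loadings <- function(W, wc) {
  # Equation 8: node strength (absolute sum of each node's edge weights)
  S <- rowSums(abs(W))
  communities <- sort(unique(wc))
  # Equation 9: split each node's strength across communities
  L <- sapply(communities,
              function(cc) rowSums(abs(W[, wc == cc, drop = FALSE])))
  # Equation 10: standardize by the square root of each community's
  # total weight among its own nodes
  denom <- sapply(seq_along(communities),
                  function(k) sqrt(sum(L[wc == communities[k], k])))
  list(strength = S, loadings = sweep(L, 2, denom, "/"))
}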
To test the equivalence of network loadings across groups, we propose applying a permutation test, which works as follows. The original o × k data, D (where o is the sample size and k is the number of variables), is split by grouping variable G into two groups, G1 and G2, to form two new datasets, D1 and D2, respectively. EGA is performed separately on D1 and D2. In order for further testing to occur, the community structure identified by EGA must be identical for both D1 and D2 (i.e., configural invariance). Once established, corresponding k × c network loading matrices, ℵ1 and ℵ2, are computed, where c is the number of communities. The difference between the two matrices is then computed,
$$\Delta\aleph = \aleph_1 - \aleph_2 \tag{11}$$

to form a k × c matrix, Δℵ, which contains the difference for each network loading. Only the differences between assigned community loadings are retained, representing a vector of assigned loading differences, τ. To form a null distribution for each loading difference to be compared against, the grouping variable G is permuted to become G(p), and the original data D is split by G(p) to form two new datasets, D1(p) and D2(p), thereby removing the original relationship between item responses and group membership. This process is repeated P times, p = 1, 2, …, P, creating P pairs of new datasets.
EGA is performed on each pair of permuted datasets, D1(p) and D2(p), network loadings are computed, and the difference between the assigned network loadings for each item is calculated to create a vector representing the null distribution for each item as follows:
$$\Delta\aleph^{(p)} = \aleph_1^{(p)} - \aleph_2^{(p)} \tag{12}$$

where Δℵ(p) represents a k × c matrix of differences between the permuted loading matrices ℵ1(p) and ℵ2(p). From Δℵ(p), only the assigned community loadings are retained, forming τ(p). For each variable, these P differences are placed in ascending order, forming a null distribution of the difference in network loadings under a random relationship between group assignment and network loading. The final step is to compare each test statistic to its respective null distribution at α = .05. p-values for item invariance were calculated as follows:
$$p_i = \frac{1}{P} \sum_{p=1}^{P} I\left(\left|\tau_i^{(p)}\right| \geq \left|\tau_i\right|\right) \tag{13}$$

where I(·) is the indicator function, equal to 1 when the permuted difference is at least as extreme as the observed difference and 0 otherwise.
This formulation yields a vector whose elements are two-sided p-values for each respective variable. If any p-value is less than .05, then full metric invariance is violated. If some, but not all, p-values are less than .05, then partial metric invariance has been found; however, as previously mentioned, there is no agreement in the literature, to our knowledge, as to what constitutes an acceptable level of partial invariance.
The method described above is specifically outlined for the comparison of two groups. Conveniently, it can be easily extended to three or more groups without sacrificing computational efficiency. Similar to the logic used when conducting multiple comparisons after an omnibus test (Maxwell et al., 2018), it stands to reason that if noninvariance were to be found using this method, then it would be found between the groups with the largest difference in loadings. Therefore, for each variable we need only identify the groups with the minimum and maximum network loadings. If these two groups are significantly different from one another, then invariance cannot be supported. In this way, the method runs the same number of tests regardless of how many groups are being assessed. If noninvariance is found for an item, follow-up tests can be conducted, should the researcher wish, to identify which specific groups differ from one another: for each variable, the minimum loading would be compared to the second highest loading; if noninvariance is again found, the minimum loading would then be compared to the third highest loading, and so on, until no significant differences are found.
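The two-group procedure can be sketched in R as follows; the packaged implementation is the invariance() function in {EGAnet} (used in the applied example below), and the sketch assumes net.loads()$std returns an items-by-communities matrix with columns named by community:

library(EGAnet)

# Each item's loading on its own (assigned) EGA community
assigned_loads <- function(data) {
  ega <- EGA(data, plot.EGA = FALSE)
  L <- abs(net.loads(ega)$std)
  sapply(seq_len(nrow(L)), function(i) L[i, as.character(ega$wc[i])])
}

# Permutation test of assigned loading differences (assumes configural
# invariance holds in each split, which the full procedure checks)
permutation_invariance <- function(data, group, P = 500, seed = 1) {
  set.seed(seed)
  g1 <- group == unique(group)[1]
  tau <- assigned_loads(data[g1, ]) - assigned_loads(data[!g1, ])  # Eq. 11
  null_diffs <- replicate(P, {
    gp <- sample(g1)  # permute the group labels
    assigned_loads(data[gp, ]) - assigned_loads(data[!gp, ])       # Eq. 12
  })
  p <- rowMeans(abs(null_diffs) >= abs(tau))  # two-sided p-values (Eq. 13)
  data.frame(difference = tau, p = p)
}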
Method
The following section outlines the methods used for each portion of the simulation study. As a benchmark for our proposed method, we compare it to the procedure presented by Raykov et al. (2013), which is outlined first. Afterward, we discuss the multiple comparison procedure (the aforementioned BH-procedure) and how it is applied in the current study. The data generation, conditions, and evaluation metrics are also described.
SEM Procedure
To test metric and partial metric invariance using SEM, we estimated two models: a configural, unconstrained model (see Equation (4)) and a constrained metric model with loadings constrained to equality across the k populations (see Equation (6)). In order to directly compare with our proposed method, we tested for partial metric invariance using three methods: Free, Fixed, and Wald. The Free method follows the procedure proposed by Raykov et al. (2013). Using the {semTools} package (Version 0.5-6; Jorgensen et al., 2022), the Fixed and Wald methods are run simultaneously with Free. Because this software is commonly used by researchers in practice, we evaluated the results of all three approaches. In all methods, an original model was chosen to be either the constrained or unconstrained model. Afterward, loadings were either fixed or freed iteratively to create a new model, which was then compared to the original model. Using these methods circumvented a common problem in many approaches to invariance testing: we did not exclude any variables from testing by fixing the loading of one variable per factor to 1. In this way, we could make direct comparisons between the SEM and proposed methods.
The Free method uses the constrained model as the original model. Iteratively, each variable j is freed in the matrix Λ to create J models. Each model is then compared to the original model using a likelihood ratio test and an assessment of CFI, for a total of J tests. The Fixed method uses the unconstrained model as the original model. Iteratively, each variable j is constrained to be equal across populations k to create J models. Each model is then compared to the original model using a likelihood ratio test and an assessment of CFI, for a total of J tests. The Wald method is similar to Free: it uses the constrained model as the original model, but rather than iteratively freeing each variable j and conducting likelihood ratio tests, it uses a multivariate Wald test. Nonetheless, multiple hypotheses are being tested. These methods do not adjust for Type I error inflation, so it is often necessary to apply a multiple comparison correction (Raykov et al., 2013).
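A hedged sketch of the {semTools} call is below; the list naming and arguments follow Version 0.5-6 and may differ across versions:

# partialInvariance() runs the Free, Fixed, and Wald tests together using
# the configural and loadings-constrained models (names are assumed here)
library(semTools)

models <- list(fit.configural = configural, fit.loadings = metric)
partialInvariance(models, type = "metric", p.adjust = "BH")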
Multiple Comparison Problem
Within both partial invariance frameworks (EGA and SEM), multiple hypotheses are tested, which can artificially inflate the Type I error rate. To adjust for this inflation, a multiple comparison procedure (MCP) can be applied (Raykov et al., 2013; Steinberg, 2001). To select which MCP to apply, it is important to consider the consequences in the trade-off of identifying (non)invariant items. Most MCPs focus on controlling the Family Wise Error Rate (FWER). FWER attempts to avoid making any Type I error and treats any single Type I error as a serious issue.
In the context of partial invariance, a Type I error would suggest that a variable is noninvariant when it is truly invariant. In most research contexts, the cost of falsely identifying an item as invariant is greater than that of falsely identifying an item as noninvariant, particularly if the construct will be used to compare across groups (Shi et al., 2019). Therefore, FWER control may suggest that more variables are invariant than there truly are, potentially leading to more costly consequences than using an uncorrected p-value. An alternative and less conservative MCP is the Benjamini-Hochberg procedure (BH-procedure; Benjamini & Hochberg, 1995), which controls the False Discovery Rate (FDR). FDR takes a more balanced approach to the multiple comparison problem by controlling the expected proportion of falsely rejected null hypotheses among all rejected null hypotheses. Formally, FDR works to control ϕ:
$$\phi = E\left[\frac{V}{V + S}\right] \tag{14}$$
where V is the number of falsely rejected null hypotheses and S is the number of correctly rejected null hypotheses out of the set of all hypotheses tested. The BH-procedure provides adequate control over false positives while showing marked improvements in power above and beyond traditional MCP methods (e.g., Tukey, Bonferroni, Scheffe; Benjamini & Hochberg, 1995).
The BH-procedure works by sorting individual p-values in ascending order and assigning them a rank. The adjusted p-value is computed using:
$$p_{(r)}^{BH} = p_{(r)} \cdot \frac{m}{r} \tag{15}$$

where m represents the total number of p-values and r represents the rank of the corresponding p-value.
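In base R, this correction is available through p.adjust(), which additionally enforces the BH-procedure's step-up monotonicity; the p-values below are illustrative:

# Apply the BH correction to a set of illustrative p-values
p_raw <- c(0.004, 0.030, 0.038, 0.210, 0.660)
p.adjust(p_raw, method = "BH")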
Raykov et al. (2013) first proposed the use of the BH-procedure to test partial invariance due to its more liberal approach, which balances the risk of falsely flagging invariant variables against the power to detect truly noninvariant ones, in contrast to FWER's focus on avoiding false positives entirely. Given that the consequences are usually more dire when noninvariant variables are not correctly detected, the BH-procedure was preferred over FWER-controlling procedures as our MCP. All p-values calculated using the BH-procedure are hereafter referred to as corrected p-values.
Data Generation
Data was generated following a common factor model, as was done by Golino et al. (2020). We begin by computing a population correlation matrix for each group, R̃G, with communalities in the diagonal,

$$\tilde{R}_G = \Lambda_G \Phi \Lambda_G' \tag{16}$$

where R̃G is the reproduced population correlation matrix for each group G, ΛG is a k × m factor loading matrix for k variables and m factors for each group G, and Phi (Φ) is the structure matrix of the latent variables (i.e., an m × m matrix of correlations among factors). The population does not contain any correlated residuals and therefore no minor factors.

Then, by inserting unities in the diagonal of R̃G, it becomes a full-rank matrix and is now the population correlation matrix RG. Each group in G is assigned an RG matrix. A Cholesky decomposition is performed on each RG:

$$R_G = U_G' U_G \tag{17}$$

If any RG is not positive semi-definite or an item's communality is greater than 0.90, then a new ΛG matrix is constructed. From this, the sample data matrix XG (continuous variables) can be computed as:

$$X_G = Z_G U_G \tag{18}$$

where ZG is a matrix of random standard normal deviates with rows equal to the sample size and columns equal to the number of variables.
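These steps translate directly into base R; the sketch below defines a hypothetical helper, generate_group(), for one group:

# Generate continuous data for one group (Equations 16-18)
generate_group <- function(lambda, phi, n) {
  R <- lambda %*% phi %*% t(lambda)  # reproduced correlation matrix (Eq. 16)
  diag(R) <- 1                       # insert unities in the diagonal
  U <- chol(R)                       # Cholesky decomposition, R = U'U (Eq. 17)
  Z <- matrix(rnorm(n * nrow(lambda)), nrow = n)  # standard normal deviates
  Z %*% U                            # sample data matrix (Eq. 18)
}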
Design
The overall design of the simulation study closely followed that of Kim and Yoon (2011) with a few modifications. A two-factor model was simulated, each factor containing six variables, similar to Kim and Yoon (2011) and Yoon and Millsap (2007). Typically, simulation studies investigating invariance methods use unidimensional models; however, we decided to simulate two factors. This approach allowed us to manipulate the interfactor correlation and investigate whether it impacted the power of the proposed method. Only one variable in one factor was simulated to have unequal dominant loadings across groups. Since our main goal was to assess each method's ability to identify noninvariant items correctly, having only one noninvariant item allows for a direct estimate of the true positive rate. Additionally, it allows us to compare the ability of each method to detect invariant items within factors both with and without noninvariant items.
For simplicity, we only simulated two groups. Factor loadings were set to be the same across factors for each respective variable (0.80, 0.70, 0.60, 0.80, 0.70, 0.60). Keeping high, static factor loadings allowed us to make sure configural invariance was not negatively impacted, particularly for data conditions with a high difference in loadings and/or a high interfactor correlation. Similar to Golino and Epskamp (2017), the correlation between factors was set to be low (0.30), medium (0.50), or high (0.70).
The loading of Variable 5 in Factor 1 (0.70) was decreased in G1 by either 0.20 (small difference) or 0.40 (large difference), as was done in Kim and Yoon (2011). Static factor loadings ensured that the magnitude of loading differences would have the same interpretation across data conditions (Yoon & Millsap, 2007). Sample sizes were either equal across groups (500 or 1000 in both G1 and G2) or different across groups (500 in G1 and 1000 in G2). This design allowed us to compare the new method's ability to detect noninvariant items in conditions where traditional methods usually struggle (i.e., disparate and/or small sample sizes). This resulted in 18 separate conditions. For each condition, 500 datasets were simulated.
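For concreteness, one condition (interfactor correlation of 0.3, small loading difference, different sample sizes) could be constructed with the hypothetical generate_group() sketch above:

# Population loadings: two factors, six items each
loads <- c(0.80, 0.70, 0.60, 0.80, 0.70, 0.60)
lambda <- matrix(0, nrow = 12, ncol = 2)
lambda[1:6, 1] <- loads
lambda[7:12, 2] <- loads
phi <- matrix(c(1, 0.3, 0.3, 1), nrow = 2)  # low interfactor correlation

# Noninvariance: decrease Variable 5's loading by 0.20 in G1
lambda_g1 <- lambda
lambda_g1[5, 1] <- lambda[5, 1] - 0.20

# Different sample sizes per group
X1 <- generate_group(lambda_g1, phi, n = 500)
X2 <- generate_group(lambda, phi, n = 1000)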
Measurement invariance was tested on each simulated dataset using both EGA in the {EGAnet} package (Version 1.1.1; Golino & Christensen, 2022) and SEM using the {lavaan} (Version 0.6.17; Rosseel, 2012) and {semTools} in R. All analyses were conducted in R and full code can be found in Jamison, Golino, and Christensen (2024).
Data Analysis
To assess the accuracy of each method's (non)invariance detection, we used confusion matrix metrics. Because loadings were only changed for one variable (Variable 5 in Factor 1) and all other variables had equivalent loadings in the population, noninvariance should only be detected for Variable 5. Therefore, a true positive (TP) occurs when the method identifies noninvariance in Variable 5, and a false positive (FP) occurs when any other variable is identified as noninvariant. A false negative (FN) occurs when the method identifies Variable 5 as invariant, and a true negative (TN) occurs when any other variable is identified as invariant. An item is considered noninvariant if its p-value is less than .05 and invariant if its p-value is greater than or equal to .05.
The following confusion matrix metrics were used to provide more specific measures of accuracy: hit rate, Sensitivity, Specificity, and F1. The {caret} package (Version 6.0.94; Kuhn, 2022) in R was used to calculate Sensitivity, Specificity, and F1. All metrics were calculated separately using both uncorrected and corrected (using the BH-procedure) p-values.
Hit rate, or (TP + TN) / (TP + TN + FP + FN), provides a straightforward, overall assessment of a method's accuracy in correctly identifying invariance or noninvariance. Sensitivity, or TP / (TP + FN), represents the proportion of true positives correctly identified by the method out of all the truly noninvariant items. Specificity, or TN / (TN + FP), represents the proportion of true negatives correctly identified by the method out of all truly invariant items. F1, or 2TP / (2TP + FP + FN), provides a similar metric to Sensitivity but places greater emphasis on identifying Variable 5 as noninvariant relative to identifying the other variables as invariant.
It's important to contextualize these measures in our current simulation. Because there is only one possible TP or FN (i.e., Variable 5), Sensitivity reduces to TP. Hit rate reduces to TN / 12 if Variable 5 is identified as invariant or (TN + 1) / 12 when Variable 5 is identified as noninvariant. Similarly, F1 reduces to zero if Variable 5 is identified as invariant or 2 / (2 + FP) when Variable 5 is identified as noninvariant. Therefore, F1 is weighted toward identifying the noninvariant variable while lowering the (relative) cost of a FP. Finally, Specificity is a pure measure of the extent to which all invariant variables are correctly identified as invariant. Within the context of our study, greater weight should be given to detecting noninvariance over invariance. Therefore, Sensitivity and F1 should be given greater weight than Specificity and hit rate.
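These metrics can be computed with {caret}; in the sketch below, p_bh is a hypothetical vector of corrected p-values for the 12 variables:

library(caret)

# Ground truth: only Variable 5 is noninvariant
truth <- factor(replace(rep("invariant", 12), 5, "noninvariant"),
                levels = c("noninvariant", "invariant"))

# Decisions from hypothetical corrected p-values
pred <- factor(ifelse(p_bh < 0.05, "noninvariant", "invariant"),
               levels = c("noninvariant", "invariant"))

cm <- confusionMatrix(pred, truth, positive = "noninvariant",
                      mode = "everything")
cm$overall["Accuracy"]                             # hit rate
cm$byClass[c("Sensitivity", "Specificity", "F1")]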
Results
We assessed method accuracy overall as well as across simulation conditions. When indicating simulation conditions, we will use the following labels: “Correlation Between Factors” indicates variation in the correlation between factors (0.3, 0.5, 0.7), “Diff” indicates the level of noninvariance (0.2 or 0.4), “N” indicates sample size (500, 1000, Different), and “p-Value” indicates the significance level where “corrected” indicates the BH-procedure adjusted p-values and “uncorrected” indicates the standard, unadjusted p-values.
Effect of MCP on p-Values
Configural invariance was recovered in 99.73% of the simulated datasets using EGA and 100% using SEM. To provide a direct, full comparison between both methods (rather than removing items showing configural noninvariance), only those datasets where configural invariance was found for EGA were retained; the others were discarded. All datasets were retained for analysis using the SEM methods. Within each method, the accuracy of the metric invariance tests was assessed. Figures 1 and 2 show the mean and 95% confidence interval of the p-values across all datasets, split by method, sample size, correlation between factors, and loading difference. A dashed line intercepts the y-axis at .05, representing the α level. The mean p-value is represented for each variable.
Figure 1
Figure 2
In both Figures 1 and 2, Variable 5 is the only variable which should be significant, that is, should show a mean p-value consistently below .05. Across both uncorrected (Figure 1) and corrected (Figure 2) p-values for all four methods, regardless of condition, the lowest mean p-value across variables is indeed Variable 5. This is in line with the manipulation used: changing the loading between groups by either 0.2 or 0.4 for only Variable 5. When the p-value is not corrected, as in Figure 1, the Free method has a lower mean p-value for all variables in Factor 1 (where noninvariance was simulated to exist), but not in Factor 2, where no noninvariance was simulated. This pattern is not present for the other methods. When the p-value is corrected (see Figure 2), the average p-value is higher than when it is uncorrected, regardless of whether an item is invariant or not. All three SEM methods have a more noticeable increase in average p-value for Variable 5 when the difference in loadings for Variable 5 is set to 0.2 and sample size is either different or 500. Under these same conditions, this same trend in EGA is only noticeable when the correlation between factors increases to 0.7.
Hit Rate
In almost all cases, corrected p-values produce a higher mean hit rate than uncorrected p-values (Figure 3). When the difference in loadings is 0.4, EGA, Fixed, and Wald all have almost perfect hit rates across all variables. In this condition, the same trend arises in Free as was seen in Figures 1 and 2: the mean hit rate is lower in general for items in Factor 1; however, its mean hit rate for Factor 2, where noninvariance is not present, is more similar to that of the other three methods.
Figure 3
When the difference in loadings is set to 0.2, all four methods show a lower mean hit rate for Variable 5 when the p-value is corrected as compared to uncorrected. This trend is most notable for Fixed, Free, and Wald when sample size is "Different" or 500, but does not appear when sample size is 1000. EGA only shows this trend when sample size is 500, and the gap gradually widens as the correlation between factors increases from 0.3 to 0.7. The magnitude of this effect is the same for Fixed, Free, and Wald regardless of the correlation between factors. This indicates that EGA's ability to correctly identify noninvariant variables is not as heavily influenced by data structure as that of Fixed, Free, and Wald. The Free method is better able to accurately identify invariant variables when noninvariant items are not present in the same factor.
Overall Metrics
Looking at the accuracy of the methods overall (not split by simulation condition), an interesting pattern appears when a correction is applied (Table 1). Both F1 and Specificity increase for all four methods, but Sensitivity decreases. Using uncorrected p-values, Sensitivity is nearly 1 for all four methods, with EGA the highest at 0.99 and Fixed the lowest at 0.96. Once the BH-procedure is applied, Sensitivity decreases for all four methods, most dramatically for Fixed and Wald, falling below 0.90. When using corrected p-values, EGA has the highest value for F1 (0.91) and is tied for the highest Specificity with Fixed and Wald at 0.99. Free has the lowest corrected values of all four methods for both F1 (0.83) and Specificity (0.97). For all four methods, Specificity increased slightly by applying the BH-procedure, while F1 values dramatically increased, going up on average by 0.14.
Table 1
| Type | Sensitivity (Uncorrected) | Sensitivity (Corrected) | F1 (Uncorrected) | F1 (Corrected) | Specificity (Uncorrected) | Specificity (Corrected) |
|---|---|---|---|---|---|---|
| EGA | 0.99 | 0.93 | 0.76 | 0.91 | 0.94 | 0.99 |
| Fixed | 0.96 | 0.88 | 0.76 | 0.88 | 0.95 | 0.99 |
| Free | 0.98 | 0.93 | 0.65 | 0.83 | 0.90 | 0.97 |
| Wald | 0.97 | 0.89 | 0.77 | 0.89 | 0.95 | 0.99 |
Sensitivity
When the difference in loadings is 0.4, all methods in all conditions have perfect Sensitivity regardless of whether or not the p-value is corrected (Figure 4). When the difference in loadings is 0.2, uncorrected p-values lead to a higher level of Sensitivity. In this condition, almost perfect Sensitivity is achieved using corrected p-values when sample size is 1000 for all methods. When the difference in loadings is set to 0.2, corrected p-values are used, and sample size is either "Different" or 500, EGA and Free perform better than Fixed and Wald. However, EGA is more heavily influenced by the increase in correlation between factors; when the correlation between factors reaches 0.7, EGA's performance falls below Free's to the same level as Fixed and Wald, though when the correlation between factors is 0.3 or 0.5, EGA outperforms Free. All in all, setting the difference in loadings to 0.4 does not affect the ability of any of the methods to identify TPs (noninvariant variables). However, when the difference is lower, correcting p-values lowers the Sensitivity of all the methods. EGA is, again, less affected by this difference and by sample size in its ability to detect TPs, except when the correlation between factors is high.
Figure 4
F1
In all conditions and across all four methods, corrected p-values produce higher F1 values than uncorrected p-values (Figure 5). When the difference between loadings is set to 0.4, EGA, Fixed, and Wald have similar (and nearly perfect) F1 values. Free, however, has lower F1 values in this condition than the other three methods, particularly when the sample size is increased to 1000. When the difference between loadings is set to 0.2 and F1 is calculated using corrected p-values, a pattern similar to that seen for Sensitivity arises. EGA outperforms the other three methods when sample size is "Different" or 500. However, EGA is more heavily influenced by the increase in correlation between factors; when the correlation between factors reaches 0.7, EGA's performance falls below Free's to the same level as Fixed and Wald. When the correlation between factors is 0.3 or 0.5, EGA outperforms Free.
Figure 5
Specificity
Across all conditions, Specificity calculated using corrected p-values is higher than that calculated using uncorrected p-values (Figure 6). All methods have consistently high and comparable levels of Specificity, except for the same trend that has been appearing for Free: when the difference in loadings increases from 0.2 to 0.4, the Specificity of the Free method decreases. Altogether, this indicates that each method is able to comparably recover TNs, or invariant items (except for the Free method in one condition).
Figure 6
Applying the Test for Metric Invariance to the BAPQ
To demonstrate a substantive application of this approach, we apply our proposed test for metric invariance to the Broad Autism Phenotype Questionnaire (BAPQ; Hurley et al., 2007). Appendix B contains the results from the application of the traditional partial invariance SEM method as implemented by the partialInvariance() function in {semTools}. Data was obtained from the Simons Foundation Powering Autism Research for Knowledge (SPARK) study of the Simons Foundation Autism Research Initiative (SFARI), a large research initiative which has collected data from over 50,000 individuals with autism and their families (Feliciano et al., 2018). The BAPQ is a 36-item questionnaire designed to assess autism-related traits in adults. Participants are asked to rate how often a statement applies to them on a 6-point Likert scale ranging from (1) Very Rarely to (6) Very Often. Items were intended to relate to one of three domains: aloofness, rigid personality, or pragmatic language.
This questionnaire was given to the parents (either mother or father) of an autistic child to assess their phenotypic level of autistic traits. We begin assessing measurement invariance between mothers and fathers by establishing configural invariance. To do so we apply EGA separately to the data on mothers and the data on fathers and compare their community structures. For this example, we are using the {EGAnet} package (Version 2.0.6; Golino & Christensen, 2024).
# Load EGAnet Package
library(EGAnet)
# Load in the Data
load("../2. Data/bapq.all.RData")
# Set mother indices
mother <- bapq.all$Parent == "Mother"
# Extract items only
items <- bapq.all[,4:39]
## Mother
ega.mother <- EGA(data = items[mother,])
## Father
ega.father <- EGA(data = items[!mother,])
Visually, we can see that the two graphs contain nonequivalent community structures (Figure 7). We can apply the invariance() function to the data, which will first identify a common structure using bootEGA(), removing items with stability less than 0.70, to establish configural invariance. After establishing configural invariance, the procedure will continue to test metric invariance.1
# Perform invariance
bapq_invariance <- invariance(
data = items, group = bapq.all$Parent,
ncores = 8, seed = 1, loading.method = "experimental"
)
Figure 7
The function will print out how many items were identified for configural invariance, for example: Configural invariance was found with 32 variables. To view which items were removed from the original 36, the following object can be accessed:
[1] "q12" "q23" "q25" "q28"
In Figure 8, the item stabilities before and after removal are plotted, with the latter accessed from the results using plot(bapq_invariance$configural.results$item_stability).
Figure 8
Evaluating each group separately, we can see that both groups have equivalent structures (Figure 9):
## Father
ega.father <- EGA(data = items[!mother, stable_names])
## Mother
ega.mother <- EGA(data = items[mother, stable_names])
Figure 9
Finally, we can print the results of invariance to see a table that breaks down the metric invariance for each item:
# Print summary
summary(bapq_invariance)
Membership Difference p p_BH sig Direction
q01 1 -0.006 0.662 0.850
q05 1 0.028 0.038 0.174 * Father > Mother
q09 1 0.022 0.208 0.428
q16 1 0.027 0.054 0.192 .
q18 1 -0.001 0.948 0.979
q27 1 0.030 0.076 0.243 .
q31 1 0.033 0.030 0.174 * Father > Mother
q36 1 -0.018 0.354 0.539
q02 2 0.017 0.296 0.515
q04 2 0.003 0.870 0.960
q07 2 -0.021 0.214 0.428
q10 2 -0.017 0.332 0.531
q11 2 0.038 0.038 0.174 * Father > Mother
q14 2 -0.004 0.810 0.926
q17 2 -0.016 0.306 0.515
q20 2 -0.059 0.004 0.064 ** Father < Mother
q21 2 -0.043 0.004 0.064 ** Father < Mother
q29 2 0.044 0.006 0.064 ** Father > Mother
q32 2 0.019 0.214 0.428
q34 2 0.024 0.152 0.428
q03 3 -0.032 0.044 0.176 * Father < Mother
q06 3 0.001 0.922 0.979
q08 3 -0.004 0.798 0.926
q13 3 0.010 0.564 0.785
q15 3 -0.017 0.264 0.497
q19 3 -0.007 0.664 0.850
q22 3 0.039 0.030 0.174 * Father > Mother
q24 3 0.022 0.182 0.428
q26 3 -0.019 0.186 0.428
q30 3 0.000 0.984 0.984
q33 3 0.005 0.708 0.871
q35 3 -0.011 0.422 0.614
----
Signif. code: 0 ’***’ 0.001 ’**’ 0.01 ’*’ 0.05 ’.’ 0.1 ’n.s.’ 1
The item text for the six items showing metric noninvariance using uncorrected p-values is displayed in Table 2. Using the corrected p-values, no items were detected as noninvariant. The noninvariant items detected with the uncorrected p-values spanned each dimension of the BAPQ. All differences between mothers and fathers corresponded with deficits in fathers relative to mothers; that is, there were larger loadings for items related to deficits or behaviors contrary to norms in the general population, or smaller loadings for items related to norms in the general population.
Table 2
Item Label | Item Description | p | pBH | Direction |
---|---|---|---|---|
q11 | I feel disconnected or ’out of sync’ in conversations with others | .048 | .256 | Father > Mother |
q20 | I speak too loudly or softly | .002 | .064 | Father < Mother |
q21 | I can tell when someone is not interested in what I’m saying | .004 | .064 | Father < Mother |
q29 | I leave long pauses in conversation | .010 | .107 | Father > Mother |
q03 | I am comfortable with unexpected changes in plans | .040 | .256 | Father < Mother |
q22 | I have a hard time dealing with changes in my routine | .026 | .208 | Father > Mother |
These results can further be visualized by using the plot() function on the output of invariance (Figure 10).
Figure 10
Discussion
Establishing measurement invariance is crucial for the use of any measure across groups in any clinical or research setting. Traditionally, SEM approaches are the most common methods for testing measurement invariance. Previous research within network psychometrics has established a handful of methods for comparing networks, but nothing comparable to SEM that accounts for multidimensionality. With the introduction of network loadings by Christensen and Golino (2021b), the space for further methodological development in network psychometrics has opened, including the configural and metric invariance methods proposed in this study.
The simulation compared the proposed metric invariance method to existing SEM methods, manipulating sample size, loading difference, and correlation between factors. Three methods in the SEM framework were used to test partial metric invariance: Fixed, Free, and Wald. For all four methods, we first tested for configural invariance, then for metric invariance, and then for partial metric invariance. A key addition to our comparisons was the inclusion of a multiple comparison procedure. Most MCPs control the Family Wise Error Rate (FWER), which is concerned with controlling the number of Type I errors made in general. Raykov et al. (2013) proposed using the Benjamini-Hochberg procedure (BH-procedure) introduced by Benjamini and Hochberg (1995) to control the False Discovery Rate (FDR) when testing partial invariance, since there is not a high level of risk in falsely identifying noninvariant items and more emphasis should be placed on correctly identifying noninvariant items than invariant items.
The results of our simulation indicate that applying the BH-procedure provided a gain in the correct identification of invariant items but not noninvariant items. This is in line with literature indicating that independent tests do not benefit from the application of an MCP (Rubin, 2024). Identifying noninvariant items was particularly challenging for the BH-procedure when the difference between loadings was small. Because smaller differences between groups will have larger p-values, there is a greater chance that detected differences will result in values at or near .05, which often end up non-significant after correction. For the corrected p-values in conditions with smaller differences, this poorer detection of noninvariant items was reflected in the hit rate.
When the difference in loadings was higher, all methods correctly identified the noninvariant item, regardless of p-value correction. But when the difference in loadings was lower, sample size and interfactor correlation differentially impacted the accuracy for the noninvariant item, and the uncorrected p-value was more accurate in these particular cases. The EGA approach was less influenced by these conditions than the three SEM methods. With a smaller sample size or different sample sizes, the EGA approach's detection of noninvariant items was also better than that of the SEM methods. As the correlation between factors increased, however, its accuracy decreased when sample size was either "Different" or 500.
These results were further corroborated by Sensitivity, or the ability to detect noninvariant items. All methods were better at identifying noninvariant items when the difference in loadings was larger. The EGA approach performed better than the SEM methods when sample size was "Different" or 500 but was negatively impacted by the increase in correlation between factors. Finally, the SEM Free method's detection of invariant items differed across the two factors: hit rate was lower for invariant items in Factor 1, where noninvariance was present, and higher for the invariant items in Factor 2. Specificity indicated that all methods performed comparably at identifying invariant variables. Of the three SEM methods, the Free method showed the lowest accuracy across all metrics, except for Sensitivity, where it showed a similar ability to correctly identify noninvariant variables.
Turning to the p-value correction, the results indicate that including a p-value correction provides a gain in the ability of each method to correctly identify invariant items, but in some instances may hinder their ability to correctly identify noninvariant items, particularly for SEM. We believe that this latter consequence is problematic. The goal is often to detect whether noninvariance exists. In most cases, applied researchers would prefer to err on the side of caution when determining whether groups are equivalent. When p-values are corrected, there is a bias toward suggesting items are invariant.
This finding raises the question of whether correcting p-values is useful to apply in the context of metric invariance or if it hinders the ability of these methods to properly identify noninvariant items. The results by variable in particular indicate that uncorrected p-values are more accurate when the difference in loadings is lower and equally as accurate when the difference in loadings is higher. These results are paralleled by Specificity where there is little to no concerning effect of falsely identifying an item as noninvariant.
Our results, however, do not discount the utility of p-value correction in testing metric invariance. Instead, we recommend in practice that noninvariant variables identified by both uncorrected and corrected p-values be evaluated. Based on the results of both p-values, the researcher can determine the consequences associated with each result, leveraging their knowledge of the literature, research context, and research questions. Another alternative is to change the α level when applying the MCP. In Appendix A we have included all results with an additional condition where the corrected p-values are assessed for significance at the α = 0.10 level. The results indicate that this approach slightly improves the accuracy of identifying noninvariant items for the EGA method but makes no impact on the SEM methods.
The use of any latent variable measure across qualitatively distinct groups should necessitate the testing of measurement invariance. Current methodology for testing measurement invariance is problematic from a conceptual and software implementation standpoint. The proposed method is easier to implement in software than the existing methods. It also shows a stronger ability to correctly identify noninvariant items in several data conditions, namely differing sample sizes across groups or lower sample sizes within groups, especially when the correlation between factors is not very high.
Additionally, the communities estimated by EGA represent latent dimensions when the data generating mechanism is a latent variable model, but EGA can still be applied when this is not the case (Golino et al., 2022; Kjellström & Golino, 2019). It is therefore both important and intriguing that, unlike existing SEM-based measurement invariance methods, the proposed method does not require a latent variable model as the data generating mechanism and could be used in other applications, such as topic modeling.
Conclusion
Ensuring the equivalence of a measure across assessment groups is vital to the validity of group comparisons. Although many methods have been proposed to improve measurement invariance testing in the SEM framework, several problems remain unresolved. The EGA approach proposed in this study performed comparably to existing SEM methods and, in several conditions, outperformed them at detecting noninvariant items. The method was then applied to a real-world dataset to demonstrate its assessment of metric invariance, identifying important differences on the BAPQ inventory between mothers and fathers of children with ASD.
Appendices
Appendix A
Overall Metrics
Table 3
| Type | Sensitivity (Uncorrected α = .05) | Sensitivity (Corrected α = .05) | Sensitivity (Corrected α = .10) | F1 (Uncorrected α = .05) | F1 (Corrected α = .05) | F1 (Corrected α = .10) | Specificity (Uncorrected α = .05) | Specificity (Corrected α = .05) | Specificity (Corrected α = .10) |
|---|---|---|---|---|---|---|---|---|---|
| EGA | 0.99 | 0.93 | 0.96 | 0.76 | 0.91 | 0.87 | 0.94 | 0.99 | 0.98 |
| Fixed | 0.96 | 0.88 | 0.91 | 0.76 | 0.88 | 0.84 | 0.95 | 0.99 | 0.98 |
| Free | 0.98 | 0.93 | 0.95 | 0.65 | 0.83 | 0.76 | 0.90 | 0.97 | 0.95 |
| Wald | 0.97 | 0.89 | 0.91 | 0.77 | 0.89 | 0.85 | 0.95 | 0.99 | 0.98 |
Figure 11. Hit Rate.
Figure 12. Sensitivity.
Figure 13. F1.
Figure 14. Specificity.
Appendix B
SEM Partial Invariance With the BAPQ
We applied a traditional SEM approach to test for partial metric invariance on the BAPQ dataset using the {lavaan} and {semTools} packages in R. We start with a three-factor CFA model with the structure outlined by Hurley et al. (2007) and Broderick et al. (2015). Table 4 shows the model fit statistics from this CFA model.
# Load libraries
library(semTools)
library(lavaan)
# Confirmatory Three Factor Model
cfa.model <- "
aloof =~ q01 + q05 + q09 + q12 + q16 + q18 + q23 + q25 + q27 + q28 + q31 + q36
pragmatic =~ q02 + q04 + q07 + q10 + q11 + q14 + q17 + q20 + q21 + q29 + q32 + q34
rigid =~ q03 + q06 + q08 + q13 + q15 + q19 + q22 + q24 + q26 + q30 + q33 + q35"
cfa_fit <- cfa(cfa.model, data = bapq.all)
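The fit statistics reported in Table 4 can be extracted from the fitted object with lavaan’s fitMeasures():
# Extract the fit statistics reported in Table 4
fitMeasures(cfa_fit, c("cfi", "tli", "rmsea"))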
Table 4
Metric | Value |
---|---|
CFI | 0.80 |
TLI | 0.79 |
RMSEA | 0.07 |
These model fit statistics do not meet the guidelines established by Hu and Bentler (1999). To see whether we can find a model that fits these data well, we next run an EFA, investigating the fit of models containing one, two, three, and four factors. Table 5 shows the model fit statistics from these EFA models.
# Obtaining item names
items <- bapq.all[,4:39]
var.names <- names(items)
# Assessing EFA from 1 to 4 factors
fit <- efa(data = bapq.all[,var.names], nfactors = 1:4)
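The fit statistics for each solution, reported in Table 5, can be inspected by summarizing the fitted object:
# Inspect fit statistics across the 1- to 4-factor solutions
summary(fit)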
Table 5
| Number of Factors | AIC | BIC | χ² | df | p-value | CFI | RMSEA |
|---|---|---|---|---|---|---|---|
1 | 592574.3 | 593051.3 | 27448.65 | 594 | 0 | 0.64 | 0.09 |
2 | 582739.1 | 583448.0 | 17543.47 | 559 | 0 | 0.77 | 0.07 |
3 | 575437.4 | 576371.4 | 10173.73 | 525 | 0 | 0.87 | 0.06 |
4 | 572586.0 | 573738.6 | 7256.29 | 492 | 0 | 0.91 | 0.05 |
These results indicate that a four-factor model fits these data best. Using this model, we assessed configural invariance. Table 6 shows the model fit statistics from this test.
# Four-Factor configural invariance model
conf <- "
f1 =~ q02 + q04 + q14 + q17 + q20 + q29 + q32
f2 =~ q03 + q06 + q08 + q13 + q15 + q19 + q22 + q24 + q26 + q30 + q33 + q35
f3 =~ q01 + q05 + q09 + q10 + q11 + q12 + q16 + q18 + q23 + q25 + q27 + q28 + q31 + q36
f4 =~ q07 + q21 + q34"
configural <- cfa(conf, data = bapq.all, std.lv = TRUE, group = "Parent")
Table 6
Metric | Value |
---|---|
CFI | 0.80 |
TLI | 0.79 |
RMSEA | 0.07 |
These model fit statistics also do not meet the guidelines established by Hu and Bentler (1999). From this model, we iteratively pruned items with the lowest factor loadings, reassessing configural invariance at each step. The best-fitting model, which increased CFI and TLI without drastically increasing RMSEA, was a two-factor model. Table 7 shows the model fit statistics from this configural invariance model.
# Two-Factor configural invariance model
conf <- "
f2 =~ q03 + q08 + q13 + q19 + q22 + q24
f3 =~ q01 + q09 + q16 + q23 + q25 + q36"
configural <- cfa(conf, data = bapq.all, std.lv = TRUE, group = "Parent")
Table 7
Metric | Value |
---|---|
CFI | 0.92 |
TLI | 0.91 |
RMSEA | 0.09 |
Note that, ideally, we would see CFI and TLI values above 0.95 and RMSEA below 0.05; however, we could not attain that fit using this modeling approach on these data. Because this is the best-fitting configural model, we continue, for demonstration purposes, with this factor structure to test for partial metric invariance.
Using this pruned two-factor model, we assessed partial metric invariance using the partialInvariance() function from {semTools}. We do this with both uncorrected and Hochberg-corrected p-values.
# Metric invariance model
weak <- "
f2 =~ q03 + q08 + q13 + q19 + q22 + q24
f3 =~ q01 + q09 + q16 + q23 + q25 + q36
f2 ~~ c(1, NA)*f2
f3 ~~ c(1, NA)*f3"
weak <- cfa(weak, data = bapq.all, group = "Parent", group.equal = "loadings")
models <- list(fit.configural = configural, fit.loadings = weak)
# Partial invariance models
pi_model <- partialInvariance(models, "metric")
pi_model.h <- partialInvariance(models, "metric", p.adjust = "hochberg")
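Before inspecting the item-level results, a chi-square difference test between the configural and loadings-constrained models provides an omnibus test of metric invariance:
# Omnibus likelihood ratio test of metric invariance
lavTestLRT(configural, weak)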
Table 8 shows the p-values for each item, split by whether the p-values were uncorrected or corrected and by which method (Free, Fixed, or Wald) was used. Items with p-values below .05 were identified as noninvariant. Note that there is little difference between corrected and uncorrected p-values except for those calculated using the Wald method. Within each testing method, at least half of the items are identified as metric noninvariant. These results, however, should not be given any substantive interpretation because configural invariance was not reliably established.
Table 8
| Label | Description | Free (Uncorrected) | Fixed (Uncorrected) | Wald (Uncorrected) | Free (Corrected) | Fixed (Corrected) | Wald (Corrected) |
|---|---|---|---|---|---|---|---|
q03 | I am comfortable with unexpected changes in plans. | 0.12 | 0.02 | 0.12 | 0.12 | 0.02 | 0.35 |
q08 | I have to warm myself up to the idea of visiting an unfamiliar place. | 0.06 | 0.02 | 0.12 | 0.06 | 0.02 | 0.35 |
q13 | I feel a strong need for sameness from day to day. | 0.03 | 0.07 | 0.00 | 0.03 | 0.07 | 0.00 |
q19 | I look forward to trying new things. | 0.06 | 0.02 | 0.10 | 0.06 | 0.02 | 0.35 |
q22 | I have a hard time dealing with changes in my routine. | 0.02 | 0.07 | 0.00 | 0.02 | 0.07 | 0.00 |
q24 | I act very set in my ways. | 0.01 | 0.00 | 0.43 | 0.01 | 0.00 | 0.43 |
q01 | I like being around other people. | 0.11 | 0.09 | 0.00 | 0.11 | 0.09 | 0.00 |
q09 | I enjoy being in social situations. | 0.00 | 0.08 | 0.00 | 0.00 | 0.08 | 0.00 |
q16 | I look forward to situations where I can meet new people. | 0.07 | 0.03 | 0.08 | 0.07 | 0.03 | 0.35 |
q23 | I am good at making small talk. | 0.00 | 0.04 | 0.01 | 0.00 | 0.04 | 0.09 |
q25 | I feel like I am really connecting with other people. | 0.05 | 0.01 | 0.28 | 0.05 | 0.01 | 0.43 |
q36 | I enjoy chatting with people. | 0.00 | 0.09 | 0.00 | 0.00 | 0.09 | 0.00 |