<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article
  PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD with MathML3 v1.2 20190208//EN" "JATS-journalpublishing1-mathml3.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ali="http://www.niso.org/schemas/ali/1.0/" article-type="research-article" dtd-version="1.2" xml:lang="en">
<front>
<journal-meta><journal-id journal-id-type="publisher-id">METH</journal-id><journal-id journal-id-type="nlm-ta">Methodology</journal-id>
<journal-title-group>
<journal-title>Methodology</journal-title><abbrev-journal-title abbrev-type="pubmed">Methodology</abbrev-journal-title>
</journal-title-group>
<issn pub-type="ppub">1614-1881</issn>
<issn pub-type="epub">1614-2241</issn>
<publisher><publisher-name>PsychOpen</publisher-name></publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">meth.16517</article-id>
<article-id pub-id-type="doi">10.5964/meth.16517</article-id>
<article-categories>
<subj-group subj-group-type="heading"><subject>Original Article</subject></subj-group>

<subj-group subj-group-type="badge">
<subject>Code</subject>
</subj-group>

</article-categories>		
<title-group>
<article-title>Evaluating the Standard Error Estimation of the Local Structural-After-Measurement (LSAM) Approach in Structural Equation Modeling</article-title>
<alt-title alt-title-type="right-running">Standard Error Estimation in LSAM</alt-title>
<alt-title specific-use="APA-reference-style" xml:lang="en">Evaluating the standard error estimation of the Local Structural-After-Measurement (LSAM) approach in structural equation modeling</alt-title>
</title-group>
<contrib-group content-type="authors">
<contrib id="author-1" contrib-type="author" corresp="yes"><contrib-id contrib-id-type="orcid" authenticated="false">https://orcid.org/0000-0001-5328-6602</contrib-id><name name-style="western"><surname>Can</surname><given-names>Seda</given-names></name><xref ref-type="corresp" rid="cor1">*</xref><xref ref-type="aff" rid="aff1">1</xref></contrib>
<contrib id="author-2" contrib-type="author"><contrib-id contrib-id-type="orcid" authenticated="false">https://orcid.org/0000-0002-4129-4477</contrib-id><name name-style="western"><surname>Rosseel</surname><given-names>Yves</given-names></name><xref ref-type="aff" rid="aff2">2</xref></contrib>
<contrib contrib-type="editor">
<name>
	<surname>Estrada</surname>
	<given-names>Eduardo</given-names>
</name>
<xref ref-type="aff" rid="aff3"/>
</contrib>
	<aff id="aff1"><label>1</label><institution content-type="dept">Department of Psychology</institution>, <institution>İzmir University of Economics</institution>, <addr-line><city>İzmir</city></addr-line>, <country country="TR">Türkiye</country></aff>
	<aff id="aff2"><label>2</label><institution content-type="dept">Department of Data Analysis</institution>, <institution>Ghent University</institution>, <addr-line><city>Ghent</city></addr-line>, <country country="BE">Belgium</country></aff>
	<aff id="aff3">Autonomous University of Madrid, Madrid, <country>Spain</country></aff>
</contrib-group>
<author-notes>	
	<corresp id="cor1">Department of Psychology, İzmir University of Economics, Sakarya Cd. No:156, 35330 Balçova/İzmir, Türkiye. <email xlink:href="seda.can@ieu.edu.tr">seda.can@ieu.edu.tr</email></corresp>
</author-notes>
<pub-date pub-type="epub"><day>18</day><month>12</month><year>2025</year></pub-date>
<pub-date pub-type="collection" publication-format="electronic"><year>2025</year></pub-date>
<volume>21</volume>
<issue>4</issue>

<fpage>249</fpage>
<lpage>285</lpage>
<history>
<date date-type="received">
<day>14</day>
<month>01</month>
<year>2025</year>
</date>
<date date-type="accepted">
<day>16</day>
<month>09</month>
<year>2025</year>
</date>
</history>
<permissions><copyright-year>2025</copyright-year><copyright-holder>Can &amp; Rosseel</copyright-holder><license license-type="open-access" specific-use="CC BY 4.0" xlink:href="https://creativecommons.org/licenses/by/4.0/"><ali:license_ref>https://creativecommons.org/licenses/by/4.0/</ali:license_ref><license-p>This is an open access article distributed under the terms of the Creative Commons Attribution 4.0 International License, CC BY 4.0, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.</license-p></license></permissions>
<abstract>
<p>Accurate estimation of standard errors (SEs) is essential in SEM as they quantify the uncertainty of parameter estimates, are fundamental to computing test statistics, and ensure robust population inferences. This study evaluated SEs within the Local Structural-After-Measurement (LSAM) framework, a two-step approach to SEM. Two simulation studies examined analytic and resampling-based SE methods under varying conditions, including normal and nonnormal data, different sample sizes, and both correct and misspecified models. The nonparametric bootstrap yielded near-unbiased SEs under nonnormality, even when models were misspecified, while the parametric bootstrap performed well under normal conditions with correct model specification. The analytic two-step method performed well under normal conditions but showed increased bias with nonnormal data and smaller samples. The robust two-step method reduced this bias in larger samples, though some underestimation remained in small-sample and misspecified conditions. To complement SE bias results, 90% coverage rates were assessed. Findings confirm LSAM’s capability for accurate SE estimation in challenging research contexts.</p>
</abstract>
<kwd-group kwd-group-type="author"><kwd>standard errors (SEs)</kwd><kwd>local structural-after-measurement (LSAM) approach</kwd><kwd>two-step estimation</kwd><kwd>nonparametric bootstrapping</kwd><kwd>parametric bootstrapping</kwd></kwd-group>

</article-meta>
</front>
<body>
	<sec sec-type="intro" id="intro"><title/>
<sec id="s1"><title>Evaluating the Standard Error Estimation of the Local Structural-After-Measurement (LSAM) Approach in Structural Equation Modeling</title>
<p id="s1.p1">Structural equation modeling (SEM) is extensively used in the social and behavioral sciences to explore relationships between latent variables (<xref ref-type="bibr" rid="ref-8">Bollen, 1989</xref>). SEM typically comprises a measurement part, which connects latent variables to observable indicators, and a structural part, which captures the hypothesized relationships among these latent variables. While parameter estimation in SEM has been well-studied across various contexts (e.g., normal vs. nonnormal data, correct vs. misspecified models, different estimators), the standard errors (SEs) of these estimates have received relatively less attention (<xref ref-type="bibr" rid="ref-21">Deng et al., 2018</xref>). Because SEM estimates parameters from samples rather than entire populations, it is essential to account for sampling variability. SEs reflect this variability, indicating the precision of parameter estimates: smaller SEs suggest greater precision, while larger SEs indicate more uncertainty. SEs also play a crucial role in computing test statistics, such as <inline-formula id="ieqn-1"><mml:math id="mml-ieqn-1"><mml:mi>z</mml:mi></mml:math></inline-formula>-scores or <inline-formula id="ieqn-2"><mml:math id="mml-ieqn-2"><mml:mi>t</mml:mi></mml:math></inline-formula>-scores, which help determine whether estimated parameters significantly differ from zero or other specified values. Accurate estimation of SEs not only strengthens the reliability of research conclusions but also ensures that population inferences are robust and well-supported (<xref ref-type="bibr" rid="ref-61">Yuan &amp; Hayashi, 2006</xref>).</p>
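<p>To make the role of SEs in test statistics concrete, the short Python sketch below (illustrative only, not code from this study) computes a Wald <italic>z</italic>-statistic and its two-sided <italic>p</italic>-value for a hypothetical parameter estimate.</p>

```python
from math import erf, sqrt

def wald_z_test(estimate, se, null_value=0.0):
    """Wald test: distance of the estimate from the null value in SE units."""
    z = (estimate - null_value) / se
    # Two-sided p-value from the standard normal CDF.
    p = 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))
    return z, p

# Hypothetical regression coefficient of 0.40 with SE 0.10 gives z = 4.0.
z, p = wald_z_test(0.40, 0.10)
```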
<p id="s1.p2">In SEM, parameters can be estimated in two ways: through the standard estimation approach, which uses a joint or “system-wide” estimation procedure, and the structural-after-measurement (SAM) approach, which divides the estimation process into two parts.<xref ref-type="fn" rid="fn-1"><sup>1</sup></xref> Recent simulation evidence by <xref ref-type="bibr" rid="ref-23">Dhaene and Rosseel (2023)</xref> indicates that the local SAM (LSAM) variant yields more accurate point estimates than joint SEM — particularly in small to moderate samples — demonstrating its efficiency and stability. While this work provides valuable insights into point estimation accuracy, no studies have investigated the behavior of SEs within this framework. This study addresses this gap by systematically evaluating SE estimation under varying conditions within the LSAM approach to SEM, focusing specifically on continuous (and complete) data.</p>
<p id="s1.p3">The rest of the paper is organized as follows. First, we discuss the standard estimation approach in SEM, highlighting key features and challenges. We then introduce the SAM approach, focusing on the local SAM variant, and review methods for SE calculation. Following this, we present the design, methodology, and results of our simulation studies. Finally, we discuss the results and offer insights into future research directions and limitations.</p></sec>
<sec id="s2"><title>The Standard Estimation Approach in SEM</title>
<p id="s2.p1">Parameter estimation in SEM involves searching for the parameter values that bring the model-implied covariance matrix as close as possible to the observed covariance matrix by minimizing a discrepancy function between the two. Various methods exist to define this discrepancy function, leading to different approaches to parameter estimation. Since the early 1970s (<xref ref-type="bibr" rid="ref-33">Jöreskog, 1973</xref>; <xref ref-type="bibr" rid="ref-36">Keesling, 1972</xref>; <xref ref-type="bibr" rid="ref-57">Wiley, 1973</xref>), maximum likelihood (ML) has become the dominant estimator in SEM. This dominance, coupled with its widespread use as the default estimator in SEM packages such as LISREL (<xref ref-type="bibr" rid="ref-34">Jöreskog &amp; Sörbom, 1996</xref>), EQS (<xref ref-type="bibr" rid="ref-7">Bentler, 2004</xref>), Mplus (<xref ref-type="bibr" rid="ref-41">Muthén &amp; Muthén, 2010</xref>), OpenMx (<xref ref-type="bibr" rid="ref-42">Neale et al., 2016</xref>), and lavaan (<xref ref-type="bibr" rid="ref-50">Rosseel, 2012</xref>), led <xref ref-type="bibr" rid="ref-49">Rosseel and Loh (2024)</xref> to refer to ML as the ‘standard’ estimation approach in the SEM framework. In this paper, we similarly refer to ML-based SEM as the standard estimation approach.</p>
<p id="s2.p2">Three key aspects characterize the standard ML estimator in SEM: First, it relies on iterative optimization procedures; second, it achieves optimal statistical properties only when all assumptions are met (i.e., independent and identically distributed observations, normal distribution, and correctly specified model) and the sample size is sufficiently large; and third, all parameters — both in the measurement and structural parts — are estimated simultaneously, a process referred to as system-wide estimation by <xref ref-type="bibr" rid="ref-9">Bollen (1996)</xref>. These aspects, however, can lead to several issues, particularly when the sample size is not sufficiently large. One significant problem is nonconvergence, where the iterative optimization procedure fails to find a solution that minimizes the discrepancy function (<xref ref-type="bibr" rid="ref-1">Anderson &amp; Gerbing, 1984</xref>; <xref ref-type="bibr" rid="ref-12">Boomsma, 1985</xref>; <xref ref-type="bibr" rid="ref-19">De Jonckere &amp; Rosseel, 2022</xref>, <xref ref-type="bibr" rid="ref-20">2023</xref>; <xref ref-type="bibr" rid="ref-44">Nevitt &amp; Hancock, 2004</xref>; <xref ref-type="bibr" rid="ref-60">Yuan &amp; Bentler, 1997</xref>). Even when solutions are found, they might be improper, with parameters estimated beyond their expected range, such as negative variances or correlations greater than one (<xref ref-type="bibr" rid="ref-17">Chen et al., 2001</xref>; <xref ref-type="bibr" rid="ref-26">Gerbing &amp; Anderson, 1987</xref>; <xref ref-type="bibr" rid="ref-55">van Driel, 1978</xref>).</p>
	<p id="s2.p3">While the simultaneous estimation of all parameters in both the measurement and structural parts is a defining characteristic of standard estimation, this system-wide approach can be effective only when both parts are correctly specified. However, <xref ref-type="bibr" rid="ref-8">Bollen (1989)</xref> points out that the assumption of a correctly specified model rarely holds, given the approximate nature of statistical models. When misspecification occurs in any part of the model — such as a missing cross-loading or error covariance — this can lead to bias across all parameters, including those in correctly specified parts (<xref ref-type="bibr" rid="ref-9">Bollen, 1996</xref>; <xref ref-type="bibr" rid="ref-35">Kaplan, 1988</xref>). In addition, <xref ref-type="bibr" rid="ref-15">Burt (1973</xref>, <xref ref-type="bibr" rid="ref-16">1976)</xref> identified interpretational confounding as another issue associated with system-wide estimation. In this context, factor loadings for a latent variable, which are expected to remain consistent across models, may vary depending on the structural model applied. This variation can cause the empirical meaning of the latent variable to deviate from its intended meaning, potentially leading to ambiguous inferences across the models. It should be noted that this concern is specific to system-wide estimators such as ML, where the measurement and structural components are estimated jointly, making parameter estimates in the measurement model susceptible to model misspecification in the structural part. Despite these drawbacks, the ML estimator remains powerful under optimal conditions — such as large sample sizes, normally distributed data, and a correctly specified model — and is widely used in applied settings.</p>
<sec id="s3"><title>Local Structural-After-Measurement (LSAM) Approach</title>
<p id="s3.p1">Challenges associated with system-wide estimation in SEM have led researchers to revisit earlier methodologies within the SEM framework. One such approach, introduced by <xref ref-type="bibr" rid="ref-16">Burt (1976)</xref>, involved a two-stage estimation procedure. To address these challenges, this method fits the measurement models independently before estimating the remaining parameters of the full model. Burt’s work laid the foundation for multi-stage procedures, which were later expanded upon by scholars such as <xref ref-type="bibr" rid="ref-31">Hunter and Gerbing (1982)</xref> and <xref ref-type="bibr" rid="ref-38">Lance et al. (1988)</xref>. More recently, <xref ref-type="bibr" rid="ref-49">Rosseel and Loh (2024)</xref> coined the term ‘Structural-after-measurement’ (SAM) to describe this two-stage or two-step approach.</p>
<p id="s3.p2">The key advantage of SAM lies in its ability to separate the estimation of measurement models from structural models, thereby preventing one from influencing the other. Building on this rationale, recent advancements include <xref ref-type="bibr" rid="ref-3">Bakk and Kuha (2018)</xref>’s multi-step approach for latent class models, <xref ref-type="bibr" rid="ref-37">Kuha and Bakk (2023)</xref>’s application to latent trait/item response theory models, and <xref ref-type="bibr" rid="ref-39">Levy (2023)</xref>’s Bayesian multi-step approach. Of these methods, the local SAM (LSAM) approach proposed by <xref ref-type="bibr" rid="ref-49">Rosseel and Loh (2024)</xref> stands out as a notable approach within the SEM framework, expanding on previous work in bias-corrected factor score regression (<xref ref-type="bibr" rid="ref-18">Croon, 2002</xref>; <xref ref-type="bibr" rid="ref-22">Devlieger et al., 2016</xref>; <xref ref-type="bibr" rid="ref-56">Wall &amp; Amemiya, 2000</xref>). For a comprehensive discussion of related approaches and their relevance within the SAM framework, see <xref ref-type="bibr" rid="ref-49">Rosseel and Loh (2024)</xref>.</p>
	<p id="s3.p3">LSAM operates in two steps: (1) estimating parameters of the measurement model, and (2) estimating parameters of the structural model. For <inline-formula id="ieqn-3"><mml:math id="mml-ieqn-3"><mml:mi>m</mml:mi></mml:math></inline-formula> latent factors (<inline-formula id="ieqn-4"><mml:math id="mml-ieqn-4"><mml:mi mathvariant="bold-italic">η</mml:mi></mml:math></inline-formula>) measured via <inline-formula id="ieqn-5"><mml:math id="mml-ieqn-5"><mml:mi>p</mml:mi></mml:math></inline-formula> observed variables, LSAM uses sample summary statistics (the sample mean vector, <inline-formula id="ieqn-6"><mml:math id="mml-ieqn-6"><mml:mrow><mml:mover><mml:mi mathvariant="bold-italic">y</mml:mi><mml:mo mathvariant="bold" stretchy="false">¯</mml:mo></mml:mover></mml:mrow></mml:math></inline-formula>, and the sample variance-covariance matrix, <inline-formula id="ieqn-7"><mml:math id="mml-ieqn-7"><mml:mi mathvariant="bold-italic">S</mml:mi></mml:math></inline-formula>), along with estimated measurement parameters, to estimate the mean vector <inline-formula id="ieqn-8"><mml:math id="mml-ieqn-8"><mml:mrow><mml:mi mathvariant="normal">E</mml:mi></mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mi mathvariant="bold-italic">η</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> and the variance-covariance matrix <inline-formula id="ieqn-9"><mml:math id="mml-ieqn-9"><mml:mrow><mml:mi mathvariant="normal">V</mml:mi><mml:mi mathvariant="normal">a</mml:mi><mml:mi mathvariant="normal">r</mml:mi></mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mi mathvariant="bold-italic">η</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> of the latent variables. 
In Step 1, LSAM estimates the free elements of the measurement model, including intercepts (<inline-formula id="ieqn-10"><mml:math id="mml-ieqn-10"><mml:mi mathvariant="bold-italic">ν</mml:mi></mml:math></inline-formula>), factor loadings (<inline-formula id="ieqn-11"><mml:math id="mml-ieqn-11"><mml:mi mathvariant="bold">Λ</mml:mi></mml:math></inline-formula>), and residual variances (<inline-formula id="ieqn-12"><mml:math id="mml-ieqn-12"><mml:mi mathvariant="bold">Θ</mml:mi></mml:math></inline-formula>). To ensure proper mapping from observed to latent variables, LSAM employs a mapping matrix (<inline-formula id="ieqn-13"><mml:math id="mml-ieqn-13"><mml:mi mathvariant="bold-italic">M</mml:mi></mml:math></inline-formula>) that satisfies <inline-formula id="ieqn-14"><mml:math id="mml-ieqn-14"><mml:mrow><mml:mi mathvariant="bold-italic">M</mml:mi><mml:mi mathvariant="bold">Λ</mml:mi></mml:mrow><mml:mo>=</mml:mo><mml:msub><mml:mi mathvariant="bold-italic">I</mml:mi><mml:mi>m</mml:mi></mml:msub></mml:math></inline-formula>, where <inline-formula id="ieqn-15"><mml:math id="mml-ieqn-15"><mml:msub><mml:mi mathvariant="bold-italic">I</mml:mi><mml:mi>m</mml:mi></mml:msub></mml:math></inline-formula> is the identity matrix. Using the ML discrepancy function, <inline-formula id="ieqn-16"><mml:math id="mml-ieqn-16"><mml:mi mathvariant="bold-italic">M</mml:mi></mml:math></inline-formula> is computed as follows<xref ref-type="fn" rid="fn-2"><sup>2</sup></xref><sup>,</sup><xref ref-type="fn" rid="fn-3"><sup>3</sup></xref>:
<disp-formula id="eqn-1"><label>1</label><mml:math id="mml-eqn-1" display="block"><mml:mi mathvariant="bold-italic">M</mml:mi><mml:mo>=</mml:mo><mml:msup><mml:mrow><mml:mo>(</mml:mo><mml:msup><mml:mi mathvariant="bold">Λ</mml:mi><mml:mi mathvariant="normal">⊤</mml:mi></mml:msup><mml:msup><mml:mi mathvariant="bold">Θ</mml:mi><mml:mrow><mml:mo>−</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mi mathvariant="bold">Λ</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mrow><mml:mo>−</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:msup><mml:mi mathvariant="bold">Λ</mml:mi><mml:mi mathvariant="normal">⊤</mml:mi></mml:msup><mml:msup><mml:mi mathvariant="bold">Θ</mml:mi><mml:mrow><mml:mo>−</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mo>.</mml:mo></mml:math></disp-formula></p>
<p id="s3.p4">The estimated mapping matrix (<inline-formula id="ieqn-17"><mml:math id="mml-ieqn-17"><mml:mrow><mml:mover><mml:mi mathvariant="bold-italic">M</mml:mi><mml:mo stretchy="false">^</mml:mo></mml:mover></mml:mrow></mml:math></inline-formula>) is used to compute the latent variable central moments:
<disp-formula id="eqn-2"><label>2</label><mml:math id="mml-eqn-2" display="block"><mml:mrow><mml:mover><mml:mi>E</mml:mi><mml:mo stretchy="false">^</mml:mo></mml:mover></mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mi mathvariant="bold-italic">η</mml:mi><mml:mo stretchy="false">)</mml:mo><mml:mo>=</mml:mo><mml:mrow><mml:mover><mml:mi mathvariant="bold-italic">M</mml:mi><mml:mo stretchy="false">^</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mover><mml:mi mathvariant="bold-italic">y</mml:mi><mml:mo mathvariant="bold" stretchy="false">¯</mml:mo></mml:mover></mml:mrow><mml:mo>−</mml:mo><mml:mrow><mml:mover><mml:mi mathvariant="bold-italic">ν</mml:mi><mml:mo stretchy="false">^</mml:mo></mml:mover></mml:mrow><mml:mo>]</mml:mo></mml:mrow></mml:math></disp-formula>
<disp-formula id="eqn-3"><label>3</label><mml:math id="mml-eqn-3" display="block"><mml:mrow><mml:mover><mml:mtext>Var</mml:mtext><mml:mo>^</mml:mo></mml:mover></mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mi mathvariant="bold-italic">η</mml:mi><mml:mo stretchy="false">)</mml:mo><mml:mo>=</mml:mo><mml:mrow><mml:mover><mml:mi mathvariant="bold-italic">M</mml:mi><mml:mo stretchy="false">^</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mo>[</mml:mo><mml:mi mathvariant="bold-italic">S</mml:mi><mml:mo>−</mml:mo><mml:mrow><mml:mover><mml:mi mathvariant="bold">Θ</mml:mi><mml:mo stretchy="false">^</mml:mo></mml:mover></mml:mrow><mml:mo>]</mml:mo></mml:mrow><mml:msup><mml:mrow><mml:mover><mml:mi mathvariant="bold-italic">M</mml:mi><mml:mo stretchy="false">^</mml:mo></mml:mover></mml:mrow><mml:mi mathvariant="normal">⊤</mml:mi></mml:msup><mml:mo>.</mml:mo></mml:math></disp-formula></p>
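<p>Equations 1–3 can be verified numerically. The following Python/NumPy sketch uses made-up measurement parameters (two latent factors, three indicators each; none of these values come from the paper): it builds the mapping matrix, checks the property that the mapping matrix times the loading matrix equals the identity, and recovers the latent moments from summary statistics constructed from a known latent covariance matrix.</p>

```python
import numpy as np

# Hypothetical measurement parameters for two latent factors with three
# indicators each (illustrative values, not taken from the paper).
Lam = np.array([[0.8, 0.0],
                [0.7, 0.0],
                [0.9, 0.0],
                [0.0, 0.6],
                [0.0, 0.8],
                [0.0, 0.7]])
Theta = np.diag([0.36, 0.51, 0.19, 0.64, 0.36, 0.51])
nu = np.zeros(6)

# Equation 1: M = (Lam' Theta^-1 Lam)^-1 Lam' Theta^-1
Ti = np.linalg.inv(Theta)
M = np.linalg.inv(Lam.T @ Ti @ Lam) @ Lam.T @ Ti
assert np.allclose(M @ Lam, np.eye(2))  # mapping property M Lam = I_m

# Summary statistics built from a known latent covariance Phi, so the
# recovery in Equation 3 can be checked exactly.
Phi = np.array([[1.0, 0.3],
                [0.3, 1.0]])
S = Lam @ Phi @ Lam.T + Theta
ybar = np.array([0.1, 0.0, 0.2, -0.1, 0.0, 0.1])

E_eta = M @ (ybar - nu)           # Equation 2: latent means
Var_eta = M @ (S - Theta) @ M.T   # Equation 3: recovers Phi here
```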
<p id="s3.p5">In Step 2, LSAM uses <inline-formula id="ieqn-18"><mml:math id="mml-ieqn-18"><mml:mrow><mml:mover><mml:mi>E</mml:mi><mml:mo stretchy="false">^</mml:mo></mml:mover></mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mi mathvariant="bold-italic">η</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> and <inline-formula id="ieqn-19"><mml:math id="mml-ieqn-19"><mml:mrow><mml:mover><mml:mtext>Var</mml:mtext><mml:mo>^</mml:mo></mml:mover></mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mi mathvariant="bold-italic">η</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> to estimate the structural model parameters, including intercepts (<inline-formula id="ieqn-20"><mml:math id="mml-ieqn-20"><mml:mi mathvariant="bold-italic">a</mml:mi></mml:math></inline-formula>), regression coefficients (<inline-formula id="ieqn-21"><mml:math id="mml-ieqn-21"><mml:mi mathvariant="bold-italic">β</mml:mi></mml:math></inline-formula>), and the variance-covariance matrix of the disturbances (<inline-formula id="ieqn-22"><mml:math id="mml-ieqn-22"><mml:mi mathvariant="bold">Ψ</mml:mi></mml:math></inline-formula>). The choice of estimator in this step depends on the model’s complexity: Ordinary Least Squares (OLS) is appropriate for recursive models, Two-Stage Least Squares (2SLS) is suitable for non-recursive models, and Maximum Likelihood (ML) or Generalized Least Squares (GLS) can be employed for more complex path models.</p>
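<p>In the simplest recursive case, a single latent regression of <italic>η</italic><sub>2</sub> on <italic>η</italic><sub>1</sub>, Step 2 reduces to OLS applied directly to the Step-1 latent moments. A minimal Python sketch with hypothetical Step-1 output (illustrative values only):</p>

```python
import numpy as np

# Hypothetical Step-1 output: latent means and latent covariance matrix
# for eta_1 (predictor) and eta_2 (outcome). Illustrative values only.
E_eta = np.array([0.0, 0.5])
Var_eta = np.array([[1.0, 0.4],
                    [0.4, 1.2]])

# OLS on the latent moments (appropriate for this recursive model):
beta = Var_eta[0, 1] / Var_eta[0, 0]           # slope: 0.4
a = E_eta[1] - beta * E_eta[0]                 # intercept: 0.5
psi = Var_eta[1, 1] - beta**2 * Var_eta[0, 0]  # disturbance variance: 1.04
```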
<p id="s3.p6">The LSAM approach offers several distinct advantages over standard estimation in SEM (<xref ref-type="bibr" rid="ref-49">Rosseel &amp; Loh, 2024</xref>). One key benefit is that splitting the measurement and structural parts allows for the implementation of more complex models (e.g., multi-group mixture models; <xref ref-type="bibr" rid="ref-46">Perez Alonso et al., 2024</xref>) that cannot be accommodated in a joint estimation framework. A second advantage is that LSAM avoids the interpretational confounding mentioned earlier, whereby factor loadings can vary depending on the structural model applied. Another notable advantage is the flexibility of LSAM in allowing different estimators to be used in the measurement and structural parts. For instance, noniterative estimators can be employed for the measurement part, which are often faster and more stable (<xref ref-type="bibr" rid="ref-23">Dhaene &amp; Rosseel, 2023</xref>). Overall, these advantages highlight LSAM as a flexible and efficient estimation strategy, particularly for more complex models that pose challenges for standard estimation approaches.</p>
<sec id="s4"><title>Different Approaches to Compute SEs</title>
<p id="s4.p1">Various methods exist for computing SEs in standard SEM with continuous data. The process of obtaining so-called “standard” (i.e., non-robust) SEs first involves calculating the unit information matrix, which can be either the observed or the expected information matrix (<xref ref-type="bibr" rid="ref-52">Savalei, 2010</xref>). Within SEM, these two matrices are typically not equivalent (<xref ref-type="bibr" rid="ref-61">Yuan &amp; Hayashi, 2006</xref>), and most SEM software, including <monospace>lavaan</monospace> (<xref ref-type="bibr" rid="ref-50">Rosseel, 2012</xref>), defaults to the expected information matrix for SE calculation. This information matrix is then inverted and divided by the sample size (<italic>N</italic>) to derive the variance-covariance matrix of the model parameters. Finally, SEs are obtained by taking the square root of the diagonal elements of this matrix.</p>
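<p>The invert, divide by <italic>N</italic>, square-root recipe described above can be sketched in a few lines of Python/NumPy; the unit information matrix below is made up for illustration, whereas in practice it comes from the fitted model.</p>

```python
import numpy as np

def se_from_information(unit_info, n):
    """Invert the unit information matrix, divide by the sample size,
    and take the square root of the diagonal to obtain SEs."""
    acov = np.linalg.inv(unit_info) / n  # asymptotic covariance matrix
    return np.sqrt(np.diag(acov))

# Illustrative unit information matrix for two parameters (made up).
info = np.array([[4.0, 1.0],
                 [1.0, 2.0]])
se = se_from_information(info, n=100)
```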
<p id="s4.p2">It is widely acknowledged that when large-sample theory is used to derive analytic expressions for SEs, their performance can suffer in many practical settings due to violations of the underlying assumptions (<xref ref-type="bibr" rid="ref-61">Yuan &amp; Hayashi, 2006</xref>). Robust SEs are often recommended within the ML framework to protect against deviations from normality and correct structural specification (<xref ref-type="bibr" rid="ref-2">Arminger &amp; Schoenberg, 1989</xref>; <xref ref-type="bibr" rid="ref-51">Satorra &amp; Bentler, 1994</xref>; <xref ref-type="bibr" rid="ref-53">Savalei &amp; Rosseel, 2022</xref>). Research on SE performance in SEM is limited, focusing primarily on nonnormality and model misspecifications within the joint SEM approach (<xref ref-type="bibr" rid="ref-40">Maydeu-Olivares, 2017</xref>; <xref ref-type="bibr" rid="ref-44">Nevitt &amp; Hancock, 2004</xref>; <xref ref-type="bibr" rid="ref-60">Yuan &amp; Bentler, 1997</xref>). However, the performance of SEs in the SAM approach, despite its advantages over standard SEM, remains unexplored.</p>
<p id="s4.p3">Unlike the standard SEM approach, LSAM separates the estimation of SEs into components related to the measurement and structural parts. SEs for the measurement part can be computed using standard approaches. For the structural part, however, the two-step procedure introduces an additional source of variability. Specifically, the SEs for the structural model must account for the uncertainty carried over from the measurement model estimation. Failing to consider this source of uncertainty results in biased SE estimates (<xref ref-type="bibr" rid="ref-4">Bakk et al., 2017</xref>). Using the analytic procedure below, a joint information matrix is computed for all parameters in the full model, arranged as a partitioned matrix so that the first rows and columns correspond to the parameters of the measurement part:
<disp-formula id="eqn-4"><label>4</label><mml:math id="mml-eqn-4" display="block"><mml:mi mathvariant="bold-italic">I</mml:mi><mml:mo>=</mml:mo><mml:mrow><mml:mo>[</mml:mo><mml:mtable columnspacing="1em" rowspacing="4pt" columnalign="center center center center center center center center center center center center center center center"><mml:mtr><mml:mtd><mml:msub><mml:mi mathvariant="bold-italic">I</mml:mi><mml:mrow><mml:mn>11</mml:mn></mml:mrow></mml:msub></mml:mtd><mml:mtd><mml:msub><mml:mi mathvariant="bold-italic">I</mml:mi><mml:mrow><mml:mn>12</mml:mn></mml:mrow></mml:msub></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:msub><mml:mi mathvariant="bold-italic">I</mml:mi><mml:mrow><mml:mn>21</mml:mn></mml:mrow></mml:msub></mml:mtd><mml:mtd><mml:msub><mml:mi mathvariant="bold-italic">I</mml:mi><mml:mrow><mml:mn>22</mml:mn></mml:mrow></mml:msub></mml:mtd></mml:mtr></mml:mtable><mml:mo>]</mml:mo></mml:mrow><mml:mo>.</mml:mo></mml:math></disp-formula></p>
<p id="s4.p4">In this matrix, the subscript “1” refers to the measurement part of the model, while “2” denotes the structural part. The two-step corrected variance-covariance matrix for the structural parameters (<inline-formula id="ieqn-23"><mml:math id="mml-ieqn-23"><mml:msub><mml:mi mathvariant="bold">Σ</mml:mi><mml:mrow><mml:mn>2</mml:mn><mml:mo stretchy="false">(</mml:mo><mml:mn>1</mml:mn><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:msub></mml:math></inline-formula>) is then computed as follows:
<disp-formula id="eqn-5"><label>5</label><mml:math id="mml-eqn-5" display="block"><mml:msub><mml:mi mathvariant="bold">Σ</mml:mi><mml:mrow><mml:mn>2</mml:mn><mml:mo stretchy="false">(</mml:mo><mml:mn>1</mml:mn><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msubsup><mml:mi mathvariant="bold-italic">I</mml:mi><mml:mrow><mml:mn>22</mml:mn></mml:mrow><mml:mrow><mml:mo>−</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msubsup><mml:mo>+</mml:mo><mml:msubsup><mml:mi mathvariant="bold-italic">I</mml:mi><mml:mrow><mml:mn>22</mml:mn></mml:mrow><mml:mrow><mml:mo>−</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msubsup><mml:msub><mml:mi mathvariant="bold-italic">I</mml:mi><mml:mrow><mml:mn>21</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mi mathvariant="bold">Σ</mml:mi><mml:mrow><mml:mn>11</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mi mathvariant="bold-italic">I</mml:mi><mml:mrow><mml:mn>12</mml:mn></mml:mrow></mml:msub><mml:msubsup><mml:mi mathvariant="bold-italic">I</mml:mi><mml:mrow><mml:mn>22</mml:mn></mml:mrow><mml:mrow><mml:mo>−</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msubsup><mml:mo>,</mml:mo></mml:math></disp-formula>where <inline-formula id="ieqn-24"><mml:math id="mml-ieqn-24"><mml:msub><mml:mi mathvariant="bold">Σ</mml:mi><mml:mrow><mml:mn>11</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula> represents the variance-covariance matrix derived in Step 1. This procedure follows the method outlined in Equation (17) from <xref ref-type="bibr" rid="ref-4">Bakk et al. (2017)</xref>, building on the work of <xref ref-type="bibr" rid="ref-27">Gong and Samaniego (1981)</xref> and refined further by <xref ref-type="bibr" rid="ref-45">Parke (1986)</xref>. For further details on obtaining these two-step corrected SEs, we refer the reader to Appendix D in <xref ref-type="bibr" rid="ref-49">Rosseel and Loh (2024)</xref>.</p>
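<p>Equation 5 can be illustrated numerically. In the Python/NumPy sketch below, the partitioned information blocks are arbitrary illustrative values; the point is that the correction term, which propagates Step-1 uncertainty, can only inflate the naive covariance obtained from the structural block alone.</p>

```python
import numpy as np

# Illustrative partitioned unit information matrix (made-up values):
# block 1 = measurement parameters, block 2 = structural parameters.
I11 = np.array([[5.0, 0.5], [0.5, 4.0]])
I12 = np.array([[0.3, 0.1], [0.2, 0.4]])
I21 = I12.T  # the joint information matrix is symmetric
I22 = np.array([[3.0, 0.2], [0.2, 2.0]])

Sigma11 = np.linalg.inv(I11)  # Step-1 covariance of measurement parameters
I22_inv = np.linalg.inv(I22)

# Equation 5: naive covariance plus the Step-1 uncertainty correction.
Sigma_2_1 = I22_inv + I22_inv @ I21 @ Sigma11 @ I12 @ I22_inv

# Corrected variances are never smaller than the naive ones.
assert np.all(np.diag(Sigma_2_1) >= np.diag(I22_inv))
```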
	<p id="s4.p5">In the presence of nonnormality, it is critical to use robust standard errors to avoid biased inference. While two-step corrected SEs account for uncertainty carried over from the measurement model, they still rely on the assumptions of correct model specification and multivariate normality. To address this limitation, <xref ref-type="bibr" rid="ref-59">Yuan and Chan (2002)</xref> proposed a robust version of the two-step correction. For technical details, see their Equations (4a) and (4b). A brief description of how ‘robust’ two-step standard errors are computed in the <monospace>sam()</monospace> function in lavaan (version 0.6-20 or higher) is included in the <xref ref-type="app" rid="app01">Appendix</xref>.</p>
<p id="s4.p6">The two-step corrected SEs adjust only for the additional variability introduced by separate estimation of the measurement model and are appropriate under normality but not robust to distributional violations. In our study, we implemented both versions to compare their performance under normal and nonnormal conditions within the LSAM framework.</p>
<p id="s4.p7">An alternative to analytic expressions for SEs is the resampling approach. A widely used method is bootstrapping (<xref ref-type="bibr" rid="ref-24">Efron, 1979</xref>; <xref ref-type="bibr" rid="ref-25">Efron &amp; Tibshirani, 1993</xref>), where new samples are generated either by resampling with replacement from the original data (i.e., nonparametric bootstrapping) or, in the parametric bootstrapping case, by assuming a specific distribution (e.g., a multivariate normal distribution). For both methods, parameter estimates are calculated for each bootstrap sample, and the standard deviation across all samples is used to approximate the SE for each model parameter.</p>
<p id="s4.p8">In SEM, bootstrapping has been widely adopted for obtaining accurate SEs (<xref ref-type="bibr" rid="ref-10">Bollen &amp; Stine, 1990</xref>, <xref ref-type="bibr" rid="ref-11">1992</xref>; <xref ref-type="bibr" rid="ref-13">Boomsma, 1986</xref>; <xref ref-type="bibr" rid="ref-29">Hancock &amp; Liu, 2012</xref>; <xref ref-type="bibr" rid="ref-32">Ievers-Landis et al., 2011</xref>; <xref ref-type="bibr" rid="ref-43">Nevitt &amp; Hancock, 2001</xref>). <xref ref-type="bibr" rid="ref-13">Boomsma (1986)</xref> showed that bootstrap SEs in covariance structure analysis tend to be larger than ML SEs under skewed data conditions. Subsequent studies expanded bootstrapping to estimate SEs for standardized coefficients, as well as direct, indirect, and total effects (<xref ref-type="bibr" rid="ref-10">Bollen &amp; Stine, 1990</xref>; <xref ref-type="bibr" rid="ref-54">Stine, 1989</xref>). Empirical research further validated its effectiveness in real data settings (<xref ref-type="bibr" rid="ref-62">Yung &amp; Bentler, 1996</xref>). <xref ref-type="bibr" rid="ref-43">Nevitt and Hancock (2001)</xref> highlighted the advantages of bootstrapping over ML under nonnormality, showing that bootstrap SEs performed better in terms of bias and variability for sample sizes <inline-formula id="ieqn-25"><mml:math id="mml-ieqn-25"><mml:mi>n</mml:mi><mml:mo>≥</mml:mo><mml:mn>200</mml:mn></mml:math></inline-formula>. <xref ref-type="bibr" rid="ref-61">Yuan and Hayashi (2006)</xref> demonstrated that in addition to robust SEs, bootstrap SEs remained consistent under model misspecifications, unlike non-robust SEs derived from information matrices, which proved unreliable when assumptions of normality and correct structural specification were violated.</p>
<p id="s4.p9">While prior studies have primarily focused on nonparametric bootstrapping, we incorporate both parametric and nonparametric methods in our SE estimation to evaluate their relative performance. Parametric bootstrapping, by assuming a specified distribution, has the potential to provide more stable and accurate SE estimates, particularly in small sample sizes where empirical data may fall short in capturing the sampling distribution (<xref ref-type="bibr" rid="ref-30">Hesterberg, 2015</xref>).</p>
	<p id="s4.p10">In contrast to nonparametric bootstrapping, which resamples the original dataset with replacement, parametric bootstrapping simulates new datasets from a fully specified model. In SEM, this typically involves generating data from the model-implied covariance matrix and mean vector, assuming a multivariate normal distribution. Parameter estimates are obtained from the original sample, and multiple bootstrap datasets are drawn from a <inline-formula id="ieqn-26"><mml:math id="mml-ieqn-26"><mml:mi>p</mml:mi></mml:math></inline-formula>-dimensional multivariate normal distribution with mean vector <inline-formula id="ieqn-27"><mml:math id="mml-ieqn-27"><mml:mi mathvariant="bold-italic">μ</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi mathvariant="bold-italic">θ</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> and covariance matrix <inline-formula id="ieqn-28"><mml:math id="mml-ieqn-28"><mml:mi mathvariant="bold">Σ</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi mathvariant="bold-italic">θ</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula>, where <inline-formula id="ieqn-29"><mml:math id="mml-ieqn-29"><mml:mi mathvariant="bold-italic">θ</mml:mi></mml:math></inline-formula> are the fitted model parameters.</p>
<p id="s4.p11">The parametric approach offers several advantages when model assumptions are approximately valid. In particular, generating resamples from the model-implied distribution — rather than relying on the empirical distribution — can reduce sampling variability and yield more accurate standard error estimates. These advantages are especially relevant in small sample contexts, where the empirical distribution may inadequately represent the underlying population structure (<xref ref-type="bibr" rid="ref-30">Hesterberg, 2015</xref>).</p>
	<p id="s4.p12">To our knowledge, no studies have investigated the estimation of SEs within the LSAM approach, nor have they explored both nonparametric and parametric bootstrapping in the LSAM framework. Therefore, this study aims to assess the performance of both analytic and resampling-based SEs in the LSAM approach. We consider correctly specified and misspecified models across varying sample sizes, using both normal and nonnormal distributions. Additionally, standard and robust SEs from system-wide ML were included, with the aim of illustrating the potential extent of bias that may arise when the standard approach is used under the conditions of our simulation design.</p></sec></sec>
<sec sec-type="method" id="s5"><title>Method</title>
	<p id="s5.p1">To evaluate SE estimation under varying conditions, two simulation studies were conducted, differing primarily in the models employed. In <xref ref-type="sec" rid="s6_2">Study 1</xref>, data were generated from a simple structural equation model in which a latent variable <inline-formula id="ieqn-30"><mml:math id="mml-ieqn-30"><mml:msub><mml:mi>f</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:math></inline-formula> predicts another latent variable <inline-formula id="ieqn-31"><mml:math id="mml-ieqn-31"><mml:msub><mml:mi>f</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:math></inline-formula>, as illustrated in <xref ref-type="fig" rid="fig-1">Figure 1</xref>. Each latent variable was measured by three continuous indicators. The model and population values were based on those described by <xref ref-type="bibr" rid="ref-48">Rosseel and Devlieger (2018)</xref>. In <xref ref-type="sec" rid="s6_5">Study 2</xref>, the model retained the two latent factors (<inline-formula id="ieqn-32"><mml:math id="mml-ieqn-32"><mml:msub><mml:mi>f</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:math></inline-formula> and <inline-formula id="ieqn-33"><mml:math id="mml-ieqn-33"><mml:msub><mml:mi>f</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:math></inline-formula>) with their three continuous indicators from <xref ref-type="sec" rid="s6_2">Study 1</xref>. To increase complexity and better reflect real-world SEM applications, the model was expanded to include two exogenous observed variables (<italic>X</italic> and <italic>Y</italic>) and one endogenous observed variable (<italic>Z</italic>), thereby incorporating relationships among observed and latent variables. 
The second model is illustrated in <xref ref-type="fig" rid="fig-2">Figure 2</xref>, with population values chosen so that the explained variance in <italic>Z</italic> was 40%.</p><fig id="fig-1" position="anchor" orientation="portrait"><label>Figure 1</label><caption><title>The Model and Unstandardized Population Values Used in the Simulations for Study 1</title><p><italic>Note</italic>. Residual covariances (dashed double-headed arrows) are included in the population model but omitted in the analysis model under the misspecified condition. For scaling purposes, the first factor loading of each latent variable is fixed to 1 (denoted by 1* in the figure).</p></caption><graphic mimetype="image" mime-subtype="png" xlink:href="meth.16517-f1.png" position="anchor" orientation="portrait"/></fig><fig id="fig-2" position="anchor" orientation="portrait"><label>Figure 2</label><caption><title>The Model and Unstandardized Population Values Used in the Simulations for Study 2</title><p><italic>Note</italic>. For scaling purposes, the first factor loading of each latent variable is fixed to 1 (denoted by 1* in the figure).</p></caption><graphic mimetype="image" mime-subtype="png" xlink:href="meth.16517-f2.png" position="anchor" orientation="portrait"/></fig>
	<p id="s5.p2">A few characteristics were common to both studies: (i) the methods being evaluated, (ii) the outcome measures of interest (i.e., SE and coverage rate), and (iii) manipulations of sample size. However, normality and misspecification were varied differently in each study. In <xref ref-type="sec" rid="s6_2">Study 1</xref>, the misspecification condition was introduced by omitting two residual covariances between the second and third indicators within each latent variable from the analysis model, which were specified as <inline-formula id="ieqn-34"><mml:math id="mml-ieqn-34"><mml:mn>0.40</mml:mn></mml:math></inline-formula> in the population model. In contrast, <xref ref-type="sec" rid="s6_5">Study 2</xref> introduced misspecification in the structural part by removing the path from <inline-formula id="ieqn-35"><mml:math id="mml-ieqn-35"><mml:msub><mml:mi>f</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:math></inline-formula> to <inline-formula id="ieqn-36"><mml:math id="mml-ieqn-36"><mml:msub><mml:mi>f</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:math></inline-formula>. For normality, <xref ref-type="sec" rid="s6_2">Study 1</xref> focused on nonnormal latent scores with skewness of <inline-formula id="ieqn-37"><mml:math id="mml-ieqn-37"><mml:mo> - </mml:mo><mml:mn>2</mml:mn></mml:math></inline-formula> and excess kurtosis of <inline-formula id="ieqn-38"><mml:math id="mml-ieqn-38"><mml:mn>8</mml:mn></mml:math></inline-formula>, while <xref ref-type="sec" rid="s6_5">Study 2</xref> extended nonnormality to include exogenous variables, disturbances, and residuals.</p><?figure fig-1?>
	<p id="s5.p3">In <xref ref-type="sec" rid="s6_5">Study 2</xref>, nonnormal exogenous variables were generated with skewness of <inline-formula id="ieqn-39"><mml:math id="mml-ieqn-39"><mml:mo> - </mml:mo><mml:mn>2</mml:mn></mml:math></inline-formula> and excess kurtosis of <inline-formula id="ieqn-40"><mml:math id="mml-ieqn-40"><mml:mn>8</mml:mn></mml:math></inline-formula>, consistent with <xref ref-type="sec" rid="s6_2">Study 1</xref>. Nonnormal disturbances (<inline-formula id="ieqn-41"><mml:math id="mml-ieqn-41"><mml:msub><mml:mi>ζ</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:math></inline-formula> and <inline-formula id="ieqn-42"><mml:math id="mml-ieqn-42"><mml:msub><mml:mi>ζ</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:math></inline-formula>) were generated using centered exponential distributions, with rate <inline-formula id="ieqn-43"><mml:math id="mml-ieqn-43"><mml:mn>1</mml:mn></mml:math></inline-formula> and variances set to <inline-formula id="ieqn-44"><mml:math id="mml-ieqn-44"><mml:mn>0.91</mml:mn></mml:math></inline-formula> and <inline-formula id="ieqn-45"><mml:math id="mml-ieqn-45"><mml:mn>0.71</mml:mn></mml:math></inline-formula>, respectively. For the normally distributed data, disturbances were drawn from normal distributions with the same variances. These disturbances were added to the latent variables (<inline-formula id="ieqn-46"><mml:math id="mml-ieqn-46"><mml:msub><mml:mi>f</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:math></inline-formula> and <inline-formula id="ieqn-47"><mml:math id="mml-ieqn-47"><mml:msub><mml:mi>f</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:math></inline-formula>) to introduce variability according to the specified distribution type. 
Similarly, measurement errors (<inline-formula id="ieqn-48"><mml:math id="mml-ieqn-48"><mml:mi mathvariant="bold-italic">ϵ</mml:mi></mml:math></inline-formula>) were generated either from multivariate normal distributions (for normal conditions) or from centered exponential distributions (for nonnormal conditions), scaled to match the diagonal elements of the specified measurement error covariance matrix (<inline-formula id="ieqn-49"><mml:math id="mml-ieqn-49"><mml:mi mathvariant="bold">Θ</mml:mi></mml:math></inline-formula>). Under nonnormal conditions, residuals were generated independently for each indicator, resulting in uncorrelated errors. The centering of exponential distributions was accomplished by subtracting <inline-formula id="ieqn-50"><mml:math id="mml-ieqn-50"><mml:mn>1</mml:mn><mml:mrow><mml:mo>/</mml:mo></mml:mrow><mml:mi>λ</mml:mi></mml:math></inline-formula> from each draw, where <inline-formula id="ieqn-51"><mml:math id="mml-ieqn-51"><mml:mn>1</mml:mn><mml:mrow><mml:mo>/</mml:mo></mml:mrow><mml:mi>λ</mml:mi></mml:math></inline-formula> corresponds to the mean of an exponential distribution with <inline-formula id="ieqn-52"><mml:math id="mml-ieqn-52"><mml:mi>λ</mml:mi></mml:math></inline-formula> as the specified rate parameter. The exogenous variables <italic>X</italic> and <italic>Y</italic> were generated with a target population correlation of 0.4, as specified in the population covariance matrix <inline-formula id="ieqn-53"><mml:math id="mml-ieqn-53"><mml:mi mathvariant="bold">Φ</mml:mi></mml:math></inline-formula>. Thus, the second study moved the misspecification from the measurement to the structural part and enhanced nonnormality, expanding it from latent scores to three layers of nonnormality.</p><?figure fig-2?>
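The centering-and-scaling scheme described above can be sketched as follows (a Python stand-in for the study's R code; the variances 0.91 and 0.71 are the disturbance variances given above):

```python
import numpy as np

rng = np.random.default_rng(42)

def centered_exponential(n, target_var, rate=1.0):
    """Mean-zero, positively skewed errors from a centered exponential.

    Draws Exp(rate) values, subtracts the mean 1/rate, and rescales so
    that the variance equals target_var (illustrative stand-in for the
    study's R data-generation functions)."""
    draws = rng.exponential(scale=1.0 / rate, size=n)
    centered = draws - 1.0 / rate            # subtract the mean 1/lambda
    # Exp(rate) has variance 1/rate**2; rescale to the target variance
    return centered * np.sqrt(target_var) * rate

# Disturbances with the Study 2 variances (0.91 and 0.71)
zeta1 = centered_exponential(100_000, target_var=0.91)
zeta2 = centered_exponential(100_000, target_var=0.71)
```

The resulting errors have mean zero and the requested variances while retaining the strong positive skew of the exponential distribution.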
<p id="s5.p4">For the standard estimation approach, standard SEs were computed using the conventional ML estimator (<monospace>se = "standard"</monospace> in <monospace>lavaan</monospace>). Robust SEs were obtained using the sandwich estimator (<monospace>se = "sandwich"</monospace>), which corresponds to the MLR method of <xref ref-type="bibr" rid="ref-58">Yuan and Bentler (2000)</xref>. In the LSAM approach, SEs were computed using (a) two-step standard errors, (b) robust two-step standard errors following the nonnormality correction of <xref ref-type="bibr" rid="ref-59">Yuan and Chan (2002)</xref>, (c) nonparametric bootstrap, and (d) parametric bootstrap. The bootstrap methods were implemented via the <monospace>sam()</monospace> function in the <monospace>lavaan</monospace> package. For nonparametric bootstrapping, we set <monospace>se = "bootstrap"</monospace> and specified <monospace>bootstrap.type = "ordinary"</monospace>, which resamples with replacement from the empirical data; for parametric bootstrapping, we set <monospace>se = "bootstrap"</monospace> with <monospace>bootstrap.type = "parametric"</monospace>, generating datasets from the model-implied multivariate normal distribution defined by the estimated parameter values. For both procedures, 1000 bootstrap resamples were drawn per dataset. The standard deviation of the bootstrap parameter estimates was used as the estimated SE.</p>
<p id="s5.p5">To evaluate the accuracy of SE estimates, we computed both empirical and model-based SEs. Empirical SEs were calculated as the standard deviation of point estimates across replications: 10,000 replications were used for non-resampling methods (e.g., standard and two-step approaches), and 1,000 for resampling-based methods (i.e., nonparametric and parametric bootstrap). Model-based SEs were obtained by averaging the SE estimates provided by each method across these replications. Bias was assessed by calculating the ratio of the model-based SE to the empirical SE for each method, with a ratio of <inline-formula id="ieqn-54"><mml:math id="mml-ieqn-54"><mml:mn>1</mml:mn></mml:math></inline-formula> indicating unbiased SE estimates, and ratios greater or less than <inline-formula id="ieqn-55"><mml:math id="mml-ieqn-55"><mml:mn>1</mml:mn></mml:math></inline-formula> reflecting over- or underestimation, respectively. To provide a more comprehensive assessment of standard error accuracy, we also calculated coverage rates for each parameter of interest based on confidence intervals obtained from each estimation method. Coverage was defined as the proportion of replications in which the model-based 90% confidence interval contained the corresponding population value.</p>
<p id="s5.p6">To evaluate the effect of sample size on the performance of the estimation methods, five sample sizes (<inline-formula id="ieqn-56"><mml:math id="mml-ieqn-56"><mml:mn>50</mml:mn><mml:mo>,</mml:mo><mml:mn>100</mml:mn><mml:mo>,</mml:mo><mml:mn>200</mml:mn><mml:mo>,</mml:mo><mml:mn>500</mml:mn><mml:mo>,</mml:mo></mml:math></inline-formula> and <inline-formula id="ieqn-57"><mml:math id="mml-ieqn-57"><mml:mn>1000</mml:mn></mml:math></inline-formula>) were selected to represent a range from small to large samples. This range allowed for a comprehensive assessment of SE accuracy across varying data sizes.</p>
	<p id="s5.p7">All simulations were conducted in <monospace>R</monospace> (<xref ref-type="bibr" rid="ref-47">R Core Team, 2024</xref>) using the <monospace>lavaan</monospace> package (Version 0.6–16; <xref ref-type="bibr" rid="ref-50">Rosseel, 2012</xref>). Nonnormal data for latent scores were generated using the <monospace>rIG</monospace> function from the <monospace>covsim</monospace> package (Version 1.0.0; <xref ref-type="bibr" rid="ref-28">Grønneberg et al., 2022</xref>). Data generation was performed using custom <monospace>R</monospace> functions designed to simulate datasets based on specified SEM models and population parameters. The option <monospace>bounds = TRUE</monospace> (<xref ref-type="bibr" rid="ref-19">De Jonckere &amp; Rosseel, 2022</xref>) was incorporated within these custom <monospace>R</monospace> functions when using the <monospace>sem()</monospace> function, ensuring improved convergence and enabling a fair evaluation across all conditions. The full <monospace>R</monospace> code, including simulation details and population values, is available in our OSF repository via <xref ref-type="bibr" rid="r16.5">Can and Rosseel (2025)</xref>.</p></sec>
<sec sec-type="results" id="s6"><title>Results</title>
<sec id="s6_1"><title>Convergence</title>
	<p id="s6.ss6_1.p1">No non-convergent solutions were observed in either <xref ref-type="sec" rid="s6_2">Study 1</xref> or <xref ref-type="sec" rid="s6_5">Study 2</xref>: the local SAM approach consistently achieved convergence across all iterations. While convergence failures are often expected for smaller sample sizes in joint SEM ML, the application of bounds successfully mitigated this issue. As a result, no convergence problems were encountered for the SEM approach across all conditions.</p></sec>
<sec id="s6_2"><title>Study 1</title></sec>
<sec id="s6_3"><title>Standard Error Bias</title>
<p id="s6.ss6_3.p1"><xref ref-type="fig" rid="fig-3">Figure 3</xref> and <xref ref-type="table" rid="table-1">Table 1</xref> present results for SE bias across various sample sizes and estimation methods. Results are shown under both correctly specified and misspecified models, with normal and nonnormal data. We first compare the LSAM methods across conditions and then examine the standard and robust SEs of the SEM approach, to illustrate the extent of potential SE bias associated with standard estimation.</p><fig id="fig-3" position="anchor" orientation="portrait"><label>Figure 3</label><caption><title>Bias in SEs Across Various Sample Sizes and SE Methods Under Different Conditions in Study 1</title><p id="s6.ss6_3.p2"><italic>Note</italic>. The dashed horizontal line indicates the unbiased reference value of 1.0; the dotted lines indicate the 1.1 and 0.9 thresholds.</p></caption><graphic mimetype="image" mime-subtype="png" xlink:href="meth.16517-f3.png" position="anchor" orientation="portrait"/></fig>
<table-wrap id="table-1" position="anchor" orientation="landscape">
<label>Table 1</label><caption><title>Bias Values for SE Methods Across Sample Sizes and Conditions in Study 1</title></caption>
<table style="compact-2"><colgroup>
<col width="20%" align="left"/>
<col width=""/>
<col width=""/>
<col width=""/>
<col width=""/>
<col width=""/>
<col width=""/>
<col width=""/></colgroup>
<thead>
<tr>
<th/>
<th/>
<th colspan="2" align="center">SEM Approach<hr/></th>
<th colspan="4" align="center">LSAM Approach<hr/></th>
</tr>
<tr>
<th valign="bottom">Condition</th>
<th valign="bottom">Sample Size</th>
<th>Standard</th>
<th>Robust</th>
<th>Two-step standard</th>
<th>Two-step robust</th>
<th>Nonparametric</th>
<th>Parametric</th>
</tr>
</thead>
<tbody>	
<tr>
<td>Normal/Correct</td>
<td>50</td>
<th align="char" char=".">0.81</th>
<th align="char" char=".">1.18</th>
<td align="char" char=".">1.05</td>
<td align="char" char=".">1.04</td>
<td align="char" char=".">1.06</td>
<td align="char" char=".">0.95</td>
</tr>
<tr>
<td/>
<td>100</td>
<th align="char" char=".">0.88</th>
<td align="char" char=".">1.01</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">1.00</td>
<td align="char" char=".">1.07</td>
<td align="char" char=".">1.01</td>
</tr>
<tr>
<td/>
<td>200</td>
<td align="char" char=".">0.96</td>
<td align="char" char=".">1.02</td>
<td align="char" char=".">1.01</td>
<td align="char" char=".">1.02</td>
<td align="char" char=".">1.07</td>
<td align="char" char=".">0.99</td>
</tr>
<tr>
<td/>
<td>500</td>
<td align="char" char=".">0.98</td>
<td align="char" char=".">1.00</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">1.01</td>
<td align="char" char=".">0.97</td>
<td align="char" char=".">0.98</td>
</tr>
<tr>
<td/>
<td>1000</td>
<td align="char" char=".">0.98</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">1.01</td>
<td align="char" char=".">1.04</td>
<td align="char" char=".">1.01</td>
</tr>
<tr style="grey-border-top">
<td>Normal/Misspecified</td>
<td>50</td>
<th align="char" char=".">0.82</th>
<th align="char" char=".">1.14</th>
<td align="char" char=".">1.03</td>
<td align="char" char=".">0.98</td>
<td align="char" char=".">1.09</td>
<td align="char" char=".">1.02</td>
</tr>
<tr>
<td/>
<td>100</td>
<th align="char" char=".">0.87</th>
<td align="char" char=".">1.00</td>
<td align="char" char=".">0.96</td>
<td align="char" char=".">0.95</td>
<td align="char" char=".">1.07</td>
<td align="char" char=".">0.99</td>
</tr>
<tr>
<td/>
<td>200</td>
<td align="char" char=".">0.94</td>
<td align="char" char=".">1.01</td>
<td align="char" char=".">0.98</td>
<td align="char" char=".">0.97</td>
<td align="char" char=".">1.03</td>
<td align="char" char=".">0.99</td>
</tr>
<tr>
<td/>
<td>500</td>
<td align="char" char=".">0.95</td>
<td align="char" char=".">1.00</td>
<td align="char" char=".">0.97</td>
<td align="char" char=".">0.96</td>
<td align="char" char=".">1.00</td>
<td align="char" char=".">0.95</td>
</tr>
<tr>
<td/>
<td>1000</td>
<td align="char" char=".">0.95</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">0.96</td>
<td align="char" char=".">0.95</td>
<td align="char" char=".">1.03</td>
<td align="char" char=".">0.98</td>
</tr>
	<tr style="grey-border-top">
<td>Nonnormal/Correct</td>
<td>50</td>
<th align="char" char=".">0.67</th>
<td align="char" char=".">1.07</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">1.06</td>
<td align="char" char=".">1.02</td>
<td align="char" char=".">0.91</td>
</tr>
<tr>
<td/>
<td>100</td>
<th align="char" char=".">0.79</th>
<td align="char" char=".">1.08</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.96</td>
<td align="char" char=".">0.98</td>
<th align="char" char=".">0.89</th>
</tr>
<tr>
<td/>
<td>200</td>
<th align="char" char=".">0.81</th>
<td align="char" char=".">0.95</td>
<th align="char" char=".">0.86</th>
<td align="char" char=".">0.96</td>
<td align="char" char=".">0.97</td>
<td align="char" char=".">0.90</td>
</tr>
<tr>
<td/>
<td>500</td>
<th align="char" char=".">0.85</th>
<td align="char" char=".">0.98</td>
<th align="char" char=".">0.87</th>
<td align="char" char=".">1.01</td>
<td align="char" char=".">0.98</td>
<th align="char" char=".">0.87</th>
</tr>
<tr>
<td/>
<td>1000</td>
<th align="char" char=".">0.86</th>
<td align="char" char=".">0.99</td>
<th align="char" char=".">0.86</th>
<td align="char" char=".">1.02</td>
<td align="char" char=".">1.00</td>
<th align="char" char=".">0.88</th>
</tr>
	<tr style="grey-border-top">
<td>Nonnormal/Misspecified</td>
<td>50</td>
<th align="char" char=".">0.66</th>
<td align="char" char=".">1.06</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">1.00</td>
<td align="char" char=".">1.00</td>
<td align="char" char=".">0.96</td>
</tr>
<tr>
<td/>
<td>100</td>
<th align="char" char=".">0.75</th>
<td align="char" char=".">1.01</td>
<th align="char" char=".">0.85</th>
<th align="char" char=".">0.87</th>
<td align="char" char=".">0.97</td>
<th align="char" char=".">0.87</th>
</tr>
<tr>
<td/>
<td>200</td>
<th align="char" char=".">0.77</th>
<td align="char" char=".">0.95</td>
<th align="char" char=".">0.82</th>
<th align="char" char=".">0.86</th>
<td align="char" char=".">1.06</td>
<th align="char" char=".">0.83</th>
</tr>
<tr>
<td/>
<td>500</td>
<th align="char" char=".">0.81</th>
<td align="char" char=".">0.98</td>
<th align="char" char=".">0.83</th>
<th align="char" char=".">0.88</th>
<td align="char" char=".">1.02</td>
<th align="char" char=".">0.87</th>
</tr>
<tr>
<td/>
<td>1000</td>
<th align="char" char=".">0.80</th>
<td align="char" char=".">0.97</td>
<th align="char" char=".">0.81</th>
<th align="char" char=".">0.87</th>
<td align="char" char=".">0.96</td>
<th align="char" char=".">0.82</th>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>Note</italic>. Standard error bias values less than 0.90 or greater than 1.10 are bolded to indicate deviation from the unbiased value of 1.00.</p>
</table-wrap-foot>
</table-wrap>
<p id="s6.ss6_3.p3">The analytic two-step method for LSAM (“SAM Two-step”) produced SE estimates close to the unbiased ratio of 1 under normal conditions, irrespective of model specification. Under nonnormal conditions, however, it exhibited increasing bias, with underestimation reaching up to 19% (bias = 0.81), particularly in misspecified models. A similar pattern was observed for the robust LSAM (“SAM Robust”) under normal conditions. However, under the nonnormal/correct condition, it produced nearly unbiased SE values as the sample size increased. In contrast, under the nonnormal/misspecified condition, it tended to underestimate SEs (e.g., bias = 0.87 at <inline-formula id="ieqn-58"><mml:math id="mml-ieqn-58"><mml:mi>N</mml:mi><mml:mo>=</mml:mo><mml:mn>100</mml:mn></mml:math></inline-formula>). The nonparametric bootstrap method (“SAM Nonparametric”) demonstrated strong performance, especially under nonnormal conditions, yielding nearly unbiased SEs (e.g., bias = 1.00 at <inline-formula id="ieqn-59"><mml:math id="mml-ieqn-59"><mml:mi>N</mml:mi><mml:mo>=</mml:mo><mml:mn>50</mml:mn></mml:math></inline-formula> in the nonnormal/misspecified condition). Under normal conditions, it slightly overestimated SEs in smaller samples, with bias decreasing from 9% (bias = 1.09) to 3% (bias = 1.03) as <italic>N</italic> increased. The parametric bootstrap method (“SAM Parametric”) provided accurate SE estimates under normal conditions, with minimal bias ranging from 5% underestimation (bias = 0.95) to near-unbiased values (bias = 1.01). However, under nonnormal conditions, it showed greater underestimation, with bias ranging from 4% to 18% (bias = 0.96–0.82), especially under model misspecification.</p>
<p id="s6.ss6_3.p4">For the standard SEM estimation approach, the classic method based on the expected information matrix (“SEM Standard”) consistently underestimated SEs across all conditions. The largest underestimations occurred under nonnormality and misspecification, with bias ranging from 20% to 34% (bias = 0.80–0.66). Under normal conditions, bias improved with larger sample sizes, ranging from 19% underestimation (bias = 0.81) at <inline-formula id="ieqn-60"><mml:math id="mml-ieqn-60"><mml:mi>N</mml:mi><mml:mo>=</mml:mo><mml:mn>50</mml:mn></mml:math></inline-formula> to 2% (bias = 0.98) at <inline-formula id="ieqn-61"><mml:math id="mml-ieqn-61"><mml:mi>N</mml:mi><mml:mo>=</mml:mo><mml:mn>1000</mml:mn></mml:math></inline-formula>. The robust SEM method (“SEM Robust”) exhibited overestimation of SEs in small samples under normality (e.g., bias = 1.18 at <inline-formula id="ieqn-62"><mml:math id="mml-ieqn-62"><mml:mi>N</mml:mi><mml:mo>=</mml:mo><mml:mn>50</mml:mn></mml:math></inline-formula>), with accuracy improving as sample size increased (bias = 0.99 at <inline-formula id="ieqn-63"><mml:math id="mml-ieqn-63"><mml:mi>N</mml:mi><mml:mo>=</mml:mo><mml:mn>1000</mml:mn></mml:math></inline-formula>). In nonnormal conditions, it outperformed SEM Standard, producing SE bias values generally closer to 1.</p></sec>
<sec id="s6_4"><title>Coverage Rates</title>
<p id="s6.ss6_4.p1"><xref ref-type="table" rid="table-2">Table 2</xref> presents results for coverage rates and bias of point estimates across various sample sizes and estimation methods.</p>
<table-wrap id="table-2" position="anchor" orientation="landscape">
<label>Table 2</label><caption><title>Coverage Rates and Bias of Point Estimates for SE Methods Across Sample Sizes and Conditions in Study 1</title></caption>
<table style="compact-2"><colgroup>
<col width="13%" align="left"/>
<col width=""/>
<col width=""/>
<col width=""/>
<col width=""/>
<col width=""/>
<col width=""/>
<col width=""/>
<col width=""/>
<col width=""/>
<col width=""/>
<col width=""/>
<col width=""/>
<col width=""/>		
</colgroup>
<thead>
<tr>
<th/>
<th/>
<th colspan="4" align="center" valign="bottom">SEM Approach<hr/></th>
<th colspan="8" align="center" valign="bottom">LSAM Approach<hr/></th>
</tr>
<tr>
<th/>
<th/>
<th colspan="2" align="center">Standard<hr/></th>
<th colspan="2" align="center">Robust<hr/></th>
<th colspan="2" align="center">Two-Step Standard<hr/></th>
<th colspan="2" align="center">Two-Step Robust<hr/></th>
<th colspan="2" align="center">Nonparametric<hr/></th>
<th colspan="2" align="center">Parametric<hr/></th>
</tr>
<tr>
<th valign="bottom">Condition</th>
<th valign="bottom">Sample Size</th>
<th valign="bottom">Coverage</th>
<th valign="bottom">Bias</th>
<th valign="bottom">Coverage</th>
<th valign="bottom">Bias</th>
<th valign="bottom">Coverage</th>
<th valign="bottom">Bias</th>
<th valign="bottom">Coverage</th>
<th valign="bottom">Bias</th>
<th valign="bottom">Coverage</th>
<th valign="bottom">Bias</th>
<th valign="bottom">Coverage</th>
<th valign="bottom">Bias</th>
</tr>
</thead>
<tbody>		
<tr>
<td>Normal/Correct</td>
<td>50</td>
<td align="char" char=".">0.84</td>
<td align="char" char=".">0.02</td>
<td align="char" char=".">0.88</td>
<td align="char" char=".">0.02</td>
<td align="char" char=".">0.87</td>
<td align="char" char=".">−0.03</td>
<td align="char" char=".">0.86</td>
<td align="char" char=".">−0.03</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">−0.03</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">−0.02</td>
</tr>
<tr>
<td/>
<td>100</td>
<td align="char" char=".">0.86</td>
<td align="char" char=".">0.01</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.01</td>
<td align="char" char=".">0.88</td>
<td align="char" char=".">−0.02</td>
<td align="char" char=".">0.87</td>
<td align="char" char=".">−0.02</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">−0.02</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">−0.02</td>
</tr>
<tr>
<td/>
<td>200</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.00</td>
<td align="char" char=".">0.91</td>
<td align="char" char=".">0.00</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">−0.01</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">−0.01</td>
<td align="char" char=".">0.91</td>
<td align="char" char=".">−0.01</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">−0.01</td>
</tr>
<tr>
<td/>
<td>500</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.00</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.00</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.00</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.00</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.00</td>
<td align="char" char=".">0.88</td>
<td align="char" char=".">−0.01</td>
</tr>
<tr>
<td/>
<td>1000</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.00</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.00</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.00</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.00</td>
<td align="char" char=".">0.91</td>
<td align="char" char=".">0.00</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.00</td>
</tr>
<tr style="grey-border-top">
<td>Normal/Misspecified</td>
<td>50</td>
<td align="char" char="."><bold>0.72</bold></td>
<td align="char" char=".">−0.08</td>
<td align="char" char="."><bold>0.78</bold></td>
<td align="char" char=".">−0.08</td>
<td align="char" char="."><bold>0.71</bold></td>
<td align="char" char=".">−0.11</td>
<td align="char" char="."><bold>0.69</bold></td>
<td align="char" char=".">−0.11</td>
<td align="char" char="."><bold>0.77</bold></td>
<td align="char" char=".">−0.11</td>
<td align="char" char="."><bold>0.77</bold></td>
<td align="char" char=".">−0.11</td>
</tr>
<tr>
<td/>
<td>100</td>
<td align="char" char="."><bold>0.67</bold></td>
<td align="char" char=".">−0.09</td>
<td align="char" char="."><bold>0.72</bold></td>
<td align="char" char=".">−0.09</td>
<td align="char" char="."><bold>0.64</bold></td>
<td align="char" char=".">−0.11</td>
<td align="char" char="."><bold>0.63</bold></td>
<td align="char" char=".">−0.11</td>
<td align="char" char="."><bold>0.68</bold></td>
<td align="char" char=".">−0.11</td>
<td align="char" char="."><bold>0.65</bold></td>
<td align="char" char=".">−0.11</td>
</tr>
<tr>
<td/>
<td>200</td>
<td align="char" char="."><bold>0.59</bold></td>
<td align="char" char=".">−0.10</td>
<td align="char" char="."><bold>0.62</bold></td>
<td align="char" char=".">−0.10</td>
<td align="char" char="."><bold>0.55</bold></td>
<td align="char" char=".">−0.11</td>
<td align="char" char="."><bold>0.54</bold></td>
<td align="char" char=".">−0.11</td>
<td align="char" char="."><bold>0.56</bold></td>
<td align="char" char=".">−0.11</td>
<td align="char" char="."><bold>0.55</bold></td>
<td align="char" char=".">−0.11</td>
</tr>
<tr>
<td/>
<td>500</td>
<td align="char" char="."><bold>0.36</bold></td>
<td align="char" char=".">−0.10</td>
<td align="char" char="."><bold>0.39</bold></td>
<td align="char" char=".">−0.10</td>
<td align="char" char="."><bold>0.33</bold></td>
<td align="char" char=".">−0.10</td>
<td align="char" char="."><bold>0.32</bold></td>
<td align="char" char=".">−0.10</td>
<td align="char" char="."><bold>0.35</bold></td>
<td align="char" char=".">−0.10</td>
<td align="char" char="."><bold>0.34</bold></td>
<td align="char" char=".">−0.11</td>
</tr>
<tr>
<td/>
<td>1000</td>
<td align="char" char="."><bold>0.15</bold></td>
<td align="char" char=".">−0.10</td>
<td align="char" char="."><bold>0.16</bold></td>
<td align="char" char=".">−0.10</td>
<td align="char" char="."><bold>0.12</bold></td>
<td align="char" char=".">−0.10</td>
<td align="char" char="."><bold>0.12</bold></td>
<td align="char" char=".">−0.10</td>
<td align="char" char="."><bold>0.14</bold></td>
<td align="char" char=".">−0.10</td>
<td align="char" char="."><bold>0.11</bold></td>
<td align="char" char=".">−0.10</td>
</tr>
	<tr style="grey-border-top">
<td>Nonnormal/Correct</td>
<td>50</td>
<td align="char" char="."><bold>0.79</bold></td>
<td align="char" char=".">0.02</td>
<td align="char" char=".">0.83</td>
<td align="char" char=".">0.02</td>
<td align="char" char=".">0.84</td>
<td align="char" char=".">−0.01</td>
<td align="char" char=".">0.82</td>
<td align="char" char=".">−0.01</td>
<td align="char" char=".">0.91</td>
<td align="char" char=".">0.00</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">−0.01</td>
</tr>
<tr>
<td/>
<td>100</td>
<td align="char" char=".">0.82</td>
<td align="char" char=".">0.02</td>
<td align="char" char=".">0.87</td>
<td align="char" char=".">0.02</td>
<td align="char" char=".">0.84</td>
<td align="char" char=".">−0.01</td>
<td align="char" char=".">0.85</td>
<td align="char" char=".">−0.01</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.00</td>
<td align="char" char=".">0.84</td>
<td align="char" char=".">−0.02</td>
</tr>
<tr>
<td/>
<td>200</td>
<td align="char" char=".">0.83</td>
<td align="char" char=".">0.01</td>
<td align="char" char=".">0.88</td>
<td align="char" char=".">0.01</td>
<td align="char" char=".">0.84</td>
<td align="char" char=".">0.00</td>
<td align="char" char=".">0.87</td>
<td align="char" char=".">0.00</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.00</td>
<td align="char" char=".">0.87</td>
<td align="char" char=".">0.00</td>
</tr>
<tr>
<td/>
<td>500</td>
<td align="char" char=".">0.84</td>
<td align="char" char=".">0.01</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.01</td>
<td align="char" char=".">0.85</td>
<td align="char" char=".">0.00</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.00</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.00</td>
<td align="char" char=".">0.84</td>
<td align="char" char=".">0.00</td>
</tr>
<tr>
<td/>
<td>1000</td>
<td align="char" char=".">0.84</td>
<td align="char" char=".">0.00</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.00</td>
<td align="char" char=".">0.84</td>
<td align="char" char=".">0.00</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.00</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.00</td>
<td align="char" char=".">0.84</td>
<td align="char" char=".">0.00</td>
</tr>
	<tr style="grey-border-top">
<td>Nonnormal/Misspecified</td>
<td>50</td>
<td align="char" char="."><bold>0.70</bold></td>
<td align="char" char=".">−0.04</td>
<td align="char" char="."><bold>0.76</bold></td>
<td align="char" char=".">−0.04</td>
<td align="char" char="."><bold>0.67</bold></td>
<td align="char" char=".">−0.09</td>
<td align="char" char="."><bold>0.66</bold></td>
<td align="char" char=".">−0.09</td>
<td align="char" char="."><bold>0.77</bold></td>
<td align="char" char=".">−0.09</td>
<td align="char" char="."><bold>0.74</bold></td>
<td align="char" char=".">−0.09</td>
</tr>
<tr>
<td/>
<td>100</td>
<td align="char" char="."><bold>0.65</bold></td>
<td align="char" char=".">−0.07</td>
<td align="char" char="."><bold>0.71</bold></td>
<td align="char" char=".">−0.07</td>
<td align="char" char="."><bold>0.62</bold></td>
<td align="char" char=".">−0.10</td>
<td align="char" char="."><bold>0.62</bold></td>
<td align="char" char=".">−0.10</td>
<td align="char" char="."><bold>0.68</bold></td>
<td align="char" char=".">−0.09</td>
<td align="char" char="."><bold>0.58</bold></td>
<td align="char" char=".">−0.11</td>
</tr>
<tr>
<td/>
<td>200</td>
<td align="char" char="."><bold>0.56</bold></td>
<td align="char" char=".">−0.09</td>
<td align="char" char="."><bold>0.64</bold></td>
<td align="char" char=".">−0.09</td>
<td align="char" char="."><bold>0.53</bold></td>
<td align="char" char=".">−0.10</td>
<td align="char" char="."><bold>0.55</bold></td>
<td align="char" char=".">−0.10</td>
<td align="char" char="."><bold>0.61</bold></td>
<td align="char" char=".">−0.10</td>
<td align="char" char="."><bold>0.54</bold></td>
<td align="char" char=".">−0.10</td>
</tr>
<tr>
<td/>
<td>500</td>
<td align="char" char="."><bold>0.37</bold></td>
<td align="char" char=".">−0.10</td>
<td align="char" char="."><bold>0.47</bold></td>
<td align="char" char=".">−0.10</td>
<td align="char" char="."><bold>0.34</bold></td>
<td align="char" char=".">−0.10</td>
<td align="char" char="."><bold>0.38</bold></td>
<td align="char" char=".">−0.10</td>
<td align="char" char="."><bold>0.42</bold></td>
<td align="char" char=".">−0.10</td>
<td align="char" char="."><bold>0.38</bold></td>
<td align="char" char=".">−0.10</td>
</tr>
<tr>
<td/>
<td>1000</td>
<td align="char" char="."><bold>0.19</bold></td>
<td align="char" char=".">−0.10</td>
<td align="char" char="."><bold>0.27</bold></td>
<td align="char" char=".">−0.10</td>
<td align="char" char="."><bold>0.17</bold></td>
<td align="char" char=".">−0.10</td>
<td align="char" char="."><bold>0.20</bold></td>
<td align="char" char=".">−0.10</td>
<td align="char" char="."><bold>0.26</bold></td>
<td align="char" char=".">−0.10</td>
<td align="char" char="."><bold>0.17</bold></td>
<td align="char" char=".">−0.10</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>Note</italic>. Bolded values indicate coverage rates of 0.80 or lower.</p>
</table-wrap-foot>
</table-wrap>
<p id="s6.ss6_4.p2">Coverage rates were generally close to 0.90 under correctly specified models, especially as sample size increased. For example, under the Normal/Correct condition, SAM Two-step yielded a coverage rate that increased from 0.87 at <inline-formula id="ieqn-64"><mml:math id="mml-ieqn-64"><mml:mi>N</mml:mi><mml:mo>=</mml:mo><mml:mn>50</mml:mn></mml:math></inline-formula> to 0.90 at <inline-formula id="ieqn-65"><mml:math id="mml-ieqn-65"><mml:mi>N</mml:mi><mml:mo>=</mml:mo><mml:mn>1000</mml:mn></mml:math></inline-formula>. SAM Two-step and SAM Robust consistently achieved coverage rates close to 0.90 across distributional conditions. SAM Nonparametric also showed similar performance under correct model specification and normal data. SAM Parametric yielded slightly improved coverage in small samples compared to SAM Two-step and SAM Robust, particularly in nonnormal/correct conditions, and converged to values near 0.90 with larger sample sizes. Although all LSAM methods demonstrated coverage rates close to 0.90 under correct model specification, SAM Parametric tended to yield slightly more accurate coverage in normal data conditions, whereas SAM Nonparametric demonstrated slightly higher coverage in nonnormal data conditions.</p>
<p id="s6.ss6_4.p3">In contrast, under misspecified models, coverage rates decreased substantially for all methods, with the methods performing similarly to one another at each sample size and coverage declining further as sample size increased. Among LSAM methods, bootstrap approaches showed slightly better coverage at smaller sample sizes in the normal/misspecified condition (e.g., coverage = 0.77 at <inline-formula id="ieqn-66"><mml:math id="mml-ieqn-66"><mml:mi>N</mml:mi><mml:mo>=</mml:mo><mml:mn>50</mml:mn></mml:math></inline-formula> for SAM Nonparametric). SEM Robust followed a similar trend to the LSAM methods and consistently outperformed SEM Standard across all misspecified conditions.</p>
<p id="s6.ss6_4.p4">Coverage rates under model misspecification can be interpreted in light of point estimate bias. When the true parameter value was 0.25, point estimates consistently underestimated it by approximately 0.10 across estimation methods, a 40% relative underestimation. Although the LSAM approach produced reasonably accurate SE estimates, the resulting confidence intervals were centered on these biased estimates. Even when SEs appropriately reflected the variability of the estimates, the intervals may have failed to include the true parameter value because their location, rather than their width, was off. In contrast, under correctly specified models, point estimates were nearly unbiased across all methods, and coverage rates consistently approached 0.90. These results suggest that low coverage under misspecification is largely driven by point estimate bias rather than SE bias.</p></sec>
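The location-shift account above can be illustrated with a small Monte Carlo sketch. The values here are illustrative only: the fixed bias of −0.10 and the 90% intervals mirror the pattern reported above, while the sampling SD of 1.0 and the replication count are arbitrary assumptions, not quantities from the simulation design.

```python
import math
import random

def coverage(n, true_val=0.25, bias=-0.10, sd=1.0, reps=2000, z=1.645, seed=1):
    """Monte Carlo coverage of a 90% CI whose center carries a fixed bias.

    The SE (sd / sqrt(n)) is deliberately accurate; only the interval's
    center is shifted, mimicking accurate SEs around a biased estimate.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        se = sd / math.sqrt(n)                # accurate standard error
        est = rng.gauss(true_val + bias, se)  # systematically shifted estimate
        if est - z * se <= true_val <= est + z * se:
            hits += 1
    return hits / reps

# With bias = 0 coverage stays near the nominal 0.90; with a fixed bias,
# coverage erodes as n grows because the interval narrows around the
# wrong center while the SE itself remains correct.
```

Because the bias is fixed while the interval width shrinks at rate 1/sqrt(n), larger samples make the shifted intervals miss the true value more often, reproducing the decline in coverage with sample size seen under misspecification.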
<sec id="s6_5"><title>Study 2</title></sec>
<sec id="s6_6"><title>Standard Error Bias</title>
<p id="s6.ss6_6.p1"><xref ref-type="table" rid="table-3">Table 3</xref> presents SE bias for regression coefficients under correctly specified models with normal and nonnormal data across various sample sizes and estimation methods.</p>
<table-wrap id="table-3" position="anchor" orientation="landscape">
<label>Table 3</label><caption><title>Bias Values for SE Methods Across Sample Sizes and Conditions for Correctly Specified Models in Study 2</title></caption>
<table style="compact-1"><colgroup>
<col width="13%" align="left"/>
<col width=""/>
<col width=""/>
<col width=""/>
<col width=""/>
<col width=""/>
<col width=""/>
<col width=""/>
<col width=""/>
<col width=""/>
<col width=""/>
<col width=""/>
</colgroup>
<thead>
<tr>
<th/>
<th/>
<th colspan="5">Normal/Correct<hr/></th>
<th colspan="5">Nonnormal/Correct<hr/></th>
</tr>
<tr>
<th valign="bottom">SE Method</th>
	<th valign="bottom">Sample Size</th>
	<th valign="bottom"><inline-formula id="ieqn-67"><mml:math id="mml-ieqn-67"><mml:msub><mml:mi>f</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>∼</mml:mo><mml:mi>X</mml:mi></mml:math></inline-formula></th>
	<th valign="bottom"><inline-formula id="ieqn-68"><mml:math id="mml-ieqn-68"><mml:msub><mml:mi>f</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>∼</mml:mo><mml:mi>Y</mml:mi></mml:math></inline-formula></th>
	<th valign="bottom"><inline-formula id="ieqn-69"><mml:math id="mml-ieqn-69"><mml:msub><mml:mi>f</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>∼</mml:mo><mml:msub><mml:mi>f</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:math></inline-formula></th>
	<th valign="bottom"><inline-formula id="ieqn-70"><mml:math id="mml-ieqn-70"><mml:mi>Z</mml:mi><mml:mo>∼</mml:mo><mml:msub><mml:mi>f</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:math></inline-formula></th>
	<th valign="bottom"><inline-formula id="ieqn-71"><mml:math id="mml-ieqn-71"><mml:mi>Z</mml:mi><mml:mo>∼</mml:mo><mml:msub><mml:mi>f</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:math></inline-formula></th>
	<th valign="bottom"><inline-formula id="ieqn-72"><mml:math id="mml-ieqn-72"><mml:msub><mml:mi>f</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>∼</mml:mo><mml:mi>X</mml:mi></mml:math></inline-formula></th>
	<th valign="bottom"><inline-formula id="ieqn-73"><mml:math id="mml-ieqn-73"><mml:msub><mml:mi>f</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>∼</mml:mo><mml:mi>Y</mml:mi></mml:math></inline-formula></th>
	<th valign="bottom"><inline-formula id="ieqn-74"><mml:math id="mml-ieqn-74"><mml:msub><mml:mi>f</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>∼</mml:mo><mml:msub><mml:mi>f</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:math></inline-formula></th>
	<th valign="bottom"><inline-formula id="ieqn-75"><mml:math id="mml-ieqn-75"><mml:mi>Z</mml:mi><mml:mo>∼</mml:mo><mml:msub><mml:mi>f</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:math></inline-formula></th>
	<th valign="bottom"><inline-formula id="ieqn-76"><mml:math id="mml-ieqn-76"><mml:mi>Z</mml:mi><mml:mo>∼</mml:mo><mml:msub><mml:mi>f</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:math></inline-formula></th>
</tr>
</thead>
<tbody>
<tr style="background-lightblue; white-border-top; white-border-bottom">
<th align="left" colspan="12">LSAM Approach</th>
</tr>
<tr>
<td>Two-step standard</td>
<td>50</td>
<td align="char" char=".">0.96</td>
<td align="char" char=".">0.95</td>
<td align="char" char=".">0.96</td>
<td align="char" char=".">0.95</td>
<td align="char" char=".">0.96</td>
<td align="char" char=".">0.95</td>
<td align="char" char=".">0.93</td>
<td align="char" char=".">0.92</td>
<td align="char" char=".">0.95</td>
<td align="char" char=".">0.95</td>
</tr>
<tr>
<td/>
<td>100</td>
<td align="char" char=".">0.97</td>
<td align="char" char=".">0.97</td>
<td align="char" char=".">0.97</td>
<td align="char" char=".">0.97</td>
<td align="char" char=".">0.98</td>
<td align="char" char=".">0.96</td>
<td align="char" char=".">0.97</td>
<td align="char" char=".">0.96</td>
<td align="char" char=".">0.97</td>
<td align="char" char=".">0.98</td>
</tr>
<tr>
<td/>
<td>200</td>
<td align="char" char=".">1.00</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">0.98</td>
<td align="char" char=".">0.98</td>
<td align="char" char=".">1.00</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">0.98</td>
<td align="char" char=".">0.98</td>
<td align="char" char=".">0.98</td>
<td align="char" char=".">0.98</td>
</tr>
<tr>
<td/>
<td>500</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">1.00</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">0.98</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">1.00</td>
<td align="char" char=".">1.00</td>
</tr>
<tr>
<td/>
<td>1000</td>
<td align="char" char=".">1.00</td>
<td align="char" char=".">1.00</td>
<td align="char" char=".">1.00</td>
<td align="char" char=".">1.00</td>
<td align="char" char=".">1.01</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">1.00</td>
</tr>
<tr style="grey-border-top">
<td>Two-step robust</td>
<td>50</td>
<td align="char" char=".">0.97</td>
<td align="char" char=".">0.94</td>
<td align="char" char=".">0.96</td>
<td align="char" char=".">0.96</td>
<td align="char" char=".">0.94</td>
<td align="char" char=".">0.91</td>
<td align="char" char="."><bold>0.87</bold></td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.93</td>
<td align="char" char=".">0.91</td>
</tr>
<tr>
<td/>
<td>100</td>
<td align="char" char=".">0.97</td>
<td align="char" char=".">0.96</td>
<td align="char" char=".">0.96</td>
<td align="char" char=".">0.97</td>
<td align="char" char=".">0.96</td>
<td align="char" char=".">0.92</td>
<td align="char" char=".">0.93</td>
<td align="char" char=".">0.94</td>
<td align="char" char=".">0.96</td>
<td align="char" char=".">0.95</td>
</tr>
<tr>
<td/>
<td>200</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">0.98</td>
<td align="char" char=".">0.98</td>
<td align="char" char=".">0.98</td>
<td align="char" char=".">0.97</td>
<td align="char" char=".">0.96</td>
<td align="char" char=".">0.96</td>
<td align="char" char=".">0.97</td>
<td align="char" char=".">0.97</td>
<td align="char" char=".">0.95</td>
</tr>
<tr>
<td/>
<td>500</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">1.00</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">0.97</td>
<td align="char" char=".">0.98</td>
<td align="char" char=".">0.97</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">0.98</td>
</tr>
<tr>
<td/>
<td>1000</td>
<td align="char" char=".">1.00</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">1.00</td>
<td align="char" char=".">1.00</td>
<td align="char" char=".">0.98</td>
<td align="char" char=".">0.98</td>
<td align="char" char=".">0.98</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">0.98</td>
<td align="char" char=".">0.98</td>
</tr>
<tr style="grey-border-top">
<td>Nonparametric</td>
<td>50</td>
<td align="char" char=".">0.98</td>
<td align="char" char=".">1.01</td>
<td align="char" char=".">0.97</td>
<td align="char" char=".">1.06</td>
<td align="char" char=".">1.02</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">1.02</td>
<td align="char" char=".">1.02</td>
<td align="char" char=".">1.09</td>
<td align="char" char=".">1.01</td>
</tr>
<tr>
<td/>
<td>100</td>
<td align="char" char=".">1.03</td>
<td align="char" char=".">1.04</td>
<td align="char" char=".">1.02</td>
<td align="char" char=".">1.02</td>
<td align="char" char=".">1.00</td>
<td align="char" char=".">0.98</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">1.02</td>
<td align="char" char=".">0.98</td>
<td align="char" char=".">0.96</td>
</tr>
<tr>
<td/>
<td>200</td>
<td align="char" char=".">0.98</td>
<td align="char" char=".">1.02</td>
<td align="char" char=".">0.98</td>
<td align="char" char=".">1.02</td>
<td align="char" char=".">0.98</td>
<td align="char" char=".">0.96</td>
<td align="char" char=".">1.07</td>
<td align="char" char=".">0.96</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">1.03</td>
</tr>
<tr>
<td/>
<td>500</td>
<td align="char" char=".">1.02</td>
<td align="char" char=".">1.03</td>
<td align="char" char=".">1.00</td>
<td align="char" char=".">1.02</td>
<td align="char" char=".">1.03</td>
<td align="char" char=".">1.01</td>
<td align="char" char=".">0.97</td>
<td align="char" char=".">0.97</td>
<td align="char" char=".">1.01</td>
<td align="char" char=".">0.99</td>
</tr>
<tr>
<td/>
<td>1000</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">0.98</td>
<td align="char" char=".">1.00</td>
<td align="char" char=".">1.00</td>
<td align="char" char=".">1.01</td>
<td align="char" char=".">1.01</td>
<td align="char" char=".">0.97</td>
<td align="char" char=".">1.02</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">1.01</td>
</tr>
<tr style="grey-border-top">
<td>Parametric</td>
<td>50</td>
<td align="char" char=".">0.94</td>
<td align="char" char=".">0.98</td>
<td align="char" char=".">1.00</td>
<td align="char" char=".">1.00</td>
<td align="char" char=".">1.03</td>
<td align="char" char=".">0.96</td>
<td align="char" char=".">0.97</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">1.05</td>
<td align="char" char=".">1.00</td>
</tr>
<tr>
<td/>
<td>100</td>
<td align="char" char=".">1.01</td>
<td align="char" char=".">1.01</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">1.02</td>
<td align="char" char=".">0.96</td>
<td align="char" char=".">0.97</td>
<td align="char" char=".">0.98</td>
<td align="char" char=".">0.97</td>
<td align="char" char=".">1.00</td>
</tr>
<tr>
<td/>
<td>200</td>
<td align="char" char=".">0.97</td>
<td align="char" char=".">0.97</td>
<td align="char" char=".">1.01</td>
<td align="char" char=".">1.03</td>
<td align="char" char=".">1.00</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">1.00</td>
<td align="char" char=".">1.00</td>
<td align="char" char=".">1.03</td>
<td align="char" char=".">1.01</td>
</tr>
<tr>
<td/>
<td>500</td>
<td align="char" char=".">1.00</td>
<td align="char" char=".">0.98</td>
<td align="char" char=".">1.01</td>
<td align="char" char=".">1.00</td>
<td align="char" char=".">0.98</td>
<td align="char" char=".">0.97</td>
<td align="char" char=".">1.01</td>
<td align="char" char=".">1.01</td>
<td align="char" char=".">0.98</td>
<td align="char" char=".">0.97</td>
</tr>
<tr style="grey-border-bottom">
<td/>
<td>1000</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">0.96</td>
<td align="char" char=".">0.95</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">1.02</td>
<td align="char" char=".">1.01</td>
<td align="char" char=".">0.97</td>
<td align="char" char=".">1.00</td>
<td align="char" char=".">1.01</td>
<td align="char" char=".">1.03</td>
</tr>
<tr style="background-lightblue; white-border-top; white-border-bottom">
<th align="left"  colspan="12">SEM Approach</th><?pagebreak-before?>
</tr>
<tr>
<td>Standard</td>
<td>50</td>
<td align="char" char=".">1.04</td>
<td align="char" char=".">0.96</td>
<td align="char" char=".">0.94</td>
<td align="char" char=".">0.94</td>
<td align="char" char=".">0.94</td>
<td align="char" char="."><bold>1.26</bold></td>
<td align="char" char="."><bold>1.29</bold></td>
<td align="char" char=".">0.91</td>
<td align="char" char=".">0.95</td>
<td align="char" char=".">0.96</td>
</tr>
<tr>
<td/>
<td>100</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">0.98</td>
<td align="char" char=".">0.96</td>
<td align="char" char=".">0.96</td>
<td align="char" char=".">0.97</td>
<td align="char" char=".">0.98</td>
<td align="char" char=".">0.97</td>
<td align="char" char=".">0.96</td>
<td align="char" char=".">0.97</td>
<td align="char" char=".">0.97</td>
</tr>
<tr>
<td/>
<td>200</td>
<td align="char" char=".">1.01</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">0.98</td>
<td align="char" char=".">0.98</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">0.97</td>
<td align="char" char=".">0.98</td>
<td align="char" char=".">0.98</td>
</tr>
<tr>
<td/>
<td>500</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">1.00</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">1.01</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">1.00</td>
<td align="char" char=".">1.00</td>
</tr>
<tr>
<td/>
<td>1000</td>
<td align="char" char=".">1.00</td>
<td align="char" char=".">1.00</td>
<td align="char" char=".">1.00</td>
<td align="char" char=".">1.00</td>
<td align="char" char=".">1.01</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">1.00</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">1.00</td>
</tr>
<tr style="grey-border-top">
<td>Robust</td>
<td>50</td>
<td align="char" char=".">0.95</td>
<td align="char" char=".">0.94</td>
<td align="char" char=".">0.96</td>
<td align="char" char=".">0.94</td>
<td align="char" char=".">0.94</td>
<td align="char" char=".">0.90</td>
<td align="char" char="."><bold>0.87</bold></td>
<td align="char" char="."><bold>0.89</bold></td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
</tr>
<tr>
<td/>
<td>100</td>
<td align="char" char=".">0.96</td>
<td align="char" char=".">0.97</td>
<td align="char" char=".">0.96</td>
<td align="char" char=".">0.96</td>
<td align="char" char=".">0.97</td>
<td align="char" char=".">0.92</td>
<td align="char" char=".">0.93</td>
<td align="char" char=".">0.94</td>
<td align="char" char=".">0.95</td>
<td align="char" char=".">0.95</td>
</tr>
<tr>
<td/>
<td>200</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">0.98</td>
<td align="char" char=".">0.98</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">0.96</td>
<td align="char" char=".">0.96</td>
<td align="char" char=".">0.96</td>
<td align="char" char=".">0.96</td>
<td align="char" char=".">0.96</td>
</tr>
<tr>
<td/>
<td>500</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">1.00</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">0.98</td>
<td align="char" char=".">0.97</td>
<td align="char" char=".">0.98</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">0.99</td>
</tr>
<tr>
<td/>
<td>1000</td>
<td align="char" char=".">1.00</td>
<td align="char" char=".">1.00</td>
<td align="char" char=".">1.00</td>
<td align="char" char=".">1.00</td>
<td align="char" char=".">1.01</td>
<td align="char" char=".">0.98</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">0.98</td>
<td align="char" char=".">0.98</td>
<td align="char" char=".">0.99</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>Note</italic>. Standard error bias values less than 0.90 or greater than 1.10 are bolded to indicate deviation from the unbiased value of 1.00.</p>
</table-wrap-foot>
</table-wrap>
<p id="s6.ss6_6.p2">Under normal data, all LSAM methods produced bias values close to 1 across regression coefficients and sample sizes. Bias values for SAM Two-step and SAM Robust tended to slightly underestimate SEs in small samples (e.g., bias = 0.94–0.97 at <inline-formula id="ieqn-87"><mml:math id="mml-ieqn-87"><mml:mi>N</mml:mi><mml:mo>=</mml:mo><mml:mn>50</mml:mn></mml:math></inline-formula>), converging to near-unbiased values as sample size increased. SAM Nonparametric and SAM Parametric were generally close to 1 (ranging from 0.94 to 1.06).</p>
<p id="s6.ss6_6.p3">Under the nonnormal condition, SAM Nonparametric delivered near-unbiased SE estimates across regression coefficients. SAM Two-step and SAM Parametric exhibited slightly more variability, with bias levels depending on the specific parameter and sample size. SAM Robust showed the greatest variability among the LSAM methods, particularly at smaller sample sizes (e.g., for <inline-formula id="ieqn-88"><mml:math id="mml-ieqn-88"><mml:msub><mml:mi>f</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>∼</mml:mo><mml:mi>X</mml:mi></mml:math></inline-formula> and <inline-formula id="ieqn-89"><mml:math id="mml-ieqn-89"><mml:msub><mml:mi>f</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>∼</mml:mo><mml:mi>Y</mml:mi></mml:math></inline-formula>) (see <xref ref-type="fig" rid="fig-4">Figure 4</xref>).</p><fig id="fig-4" position="anchor" orientation="portrait"><label>Figure 4</label><caption><title>Bias in SEs Across Structural Coefficients for Sample Sizes and SE Methods Under Nonnormally Distributed Data With a Correct Model in Study 2</title></caption><graphic mimetype="image" mime-subtype="png" xlink:href="meth.16517-f4.png" position="anchor" orientation="portrait"/></fig>
<p id="s6.ss6_6.p4">For SEM methods, SEM Standard showed minor underestimation in small samples under normality (e.g., 0.94 at <inline-formula id="ieqn-90"><mml:math id="mml-ieqn-90"><mml:mi>N</mml:mi><mml:mo>=</mml:mo><mml:mn>50</mml:mn></mml:math></inline-formula>) but converged toward unbiased estimates at larger sample sizes. SEM Robust maintained stable SE bias across all coefficients and sample sizes, with estimates showing approximately 6% underestimation to 1% overestimation (bias values ranging from 0.94 to 1.01). Under nonnormal data, SEM Robust outperformed SEM Standard, which showed more variability, particularly for <inline-formula id="ieqn-91"><mml:math id="mml-ieqn-91"><mml:msub><mml:mi>f</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>∼</mml:mo><mml:mi>X</mml:mi></mml:math></inline-formula> and <inline-formula id="ieqn-92"><mml:math id="mml-ieqn-92"><mml:msub><mml:mi>f</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>∼</mml:mo><mml:mi>Y</mml:mi></mml:math></inline-formula> in small samples.</p>
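The SE bias values in these tables treat 1.00 as unbiased (per the table notes). Assuming the conventional ratio definition, the mean estimated SE divided by the empirical standard deviation of the point estimates across replications, the quantity can be computed as in this minimal sketch (the function name and inputs are illustrative, not from the article's code):

```python
import statistics

def se_bias_ratio(estimates, estimated_ses):
    """Relative SE bias: mean estimated SE over the empirical SD of estimates.

    1.00 indicates unbiased SEs; values below 1.00 indicate underestimation
    (e.g., 0.81 corresponds to 19% underestimation of sampling variability).
    """
    return statistics.mean(estimated_ses) / statistics.stdev(estimates)
```

Under this definition, a method whose reported SEs average 0.09 while the estimates actually vary with SD 0.10 receives a ratio of 0.90, matching the bolding threshold used in the tables.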
<p id="s6.ss6_6.p5">For misspecified models (see <?A3B2 "tbl4",5,"anchor"?><xref ref-type="table" rid="table-4">Table 4</xref>), all LSAM methods produced values close to 1 for the <inline-formula id="ieqn-93"><mml:math id="mml-ieqn-93"><mml:msub><mml:mi>f</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>∼</mml:mo><mml:mi>X</mml:mi></mml:math></inline-formula> and <inline-formula id="ieqn-94"><mml:math id="mml-ieqn-94"><mml:msub><mml:mi>f</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>∼</mml:mo><mml:mi>Y</mml:mi></mml:math></inline-formula> regression coefficients under normally distributed data. However, differences emerged for regression coefficients involving the endogenous observed variable <italic>Z</italic>. For <inline-formula id="ieqn-95"><mml:math id="mml-ieqn-95"><mml:mi>Z</mml:mi><mml:mo>∼</mml:mo><mml:msub><mml:mi>f</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:math></inline-formula> and <inline-formula id="ieqn-96"><mml:math id="mml-ieqn-96"><mml:mi>Z</mml:mi><mml:mo>∼</mml:mo><mml:msub><mml:mi>f</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:math></inline-formula>, SAM Nonparametric performed best, producing the most accurate SE estimates across sample sizes. In contrast, all other LSAM methods displayed underestimation ranging from 10% to 19% (SE bias values from 0.90 down to 0.81). SEM Standard exhibited larger bias, particularly at smaller sample sizes for the <inline-formula id="ieqn-97"><mml:math id="mml-ieqn-97"><mml:mi>Z</mml:mi><mml:mo>∼</mml:mo><mml:msub><mml:mi>f</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:math></inline-formula> coefficient. SEM Robust outperformed SEM Standard, yielding lower bias across all regression coefficients and sample sizes.</p>
<table-wrap id="table-4" position="anchor" orientation="landscape">
<label>Table 4</label><caption><title>Bias Values for SE Methods Across Sample Sizes and Conditions for Misspecified Models in Study 2</title></caption>
<table style="compact-1"><colgroup>
<col width="" align="left"/>
<col width=""/>
<col width=""/>
<col width=""/>
<col width=""/>
<col width=""/>
<col width=""/>
<col width=""/>
<col width=""/>
<col width=""/>
</colgroup>
<thead>
<tr>
<th/>
<th/>	
<th colspan="4" align="center">Normal/Misspecified<hr/></th>
<th colspan="4" align="center">Nonnormal/Misspecified<hr/></th>
</tr>
<tr>
	<th valign="bottom">SE Method</th>
	<th valign="bottom">Sample Size</th>
	<th valign="bottom"><inline-formula id="ieqn-98"><mml:math id="mml-ieqn-98"><mml:msub><mml:mi>f</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>∼</mml:mo><mml:mi>X</mml:mi></mml:math></inline-formula></th>
	<th valign="bottom"><inline-formula id="ieqn-99"><mml:math id="mml-ieqn-99"><mml:msub><mml:mi>f</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>∼</mml:mo><mml:mi>Y</mml:mi></mml:math></inline-formula></th>
	<th valign="bottom"><inline-formula id="ieqn-100"><mml:math id="mml-ieqn-100"><mml:mi>Z</mml:mi><mml:mo>∼</mml:mo><mml:msub><mml:mi>f</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:math></inline-formula></th>
	<th valign="bottom"><inline-formula id="ieqn-101"><mml:math id="mml-ieqn-101"><mml:mi>Z</mml:mi><mml:mo>∼</mml:mo><mml:msub><mml:mi>f</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:math></inline-formula></th>
	<th valign="bottom"><inline-formula id="ieqn-102"><mml:math id="mml-ieqn-102"><mml:msub><mml:mi>f</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>∼</mml:mo><mml:mi>X</mml:mi></mml:math></inline-formula></th>
	<th valign="bottom"><inline-formula id="ieqn-103"><mml:math id="mml-ieqn-103"><mml:msub><mml:mi>f</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>∼</mml:mo><mml:mi>Y</mml:mi></mml:math></inline-formula></th>
	<th valign="bottom"><inline-formula id="ieqn-104"><mml:math id="mml-ieqn-104"><mml:mi>Z</mml:mi><mml:mo>∼</mml:mo><mml:msub><mml:mi>f</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:math></inline-formula></th>
	<th valign="bottom"><inline-formula id="ieqn-105"><mml:math id="mml-ieqn-105"><mml:mi>Z</mml:mi><mml:mo>∼</mml:mo><mml:msub><mml:mi>f</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:math></inline-formula></th>
</tr>
</thead>
<tbody>
<tr style="background-lightblue; white-border-top; white-border-bottom">
<th align="left" colspan="10">LSAM Approach</th>
</tr>
<tr>
<td>Two-step standard</td>
<td>50</td>
<td align="char" char=".">0.97</td>
<td align="char" char=".">0.96</td>
<td align="char" char="."><bold>0.83</bold></td>
<td align="char" char="."><bold>0.85</bold></td>
<td align="char" char=".">0.96</td>
<td align="char" char=".">0.93</td>
<td align="char" char="."><bold>0.82</bold></td>
<td align="char" char="."><bold>0.83</bold></td>
</tr>
<tr>
<td/>
<td>100</td>
<td align="char" char=".">0.97</td>
<td align="char" char=".">0.98</td>
<td align="char" char="."><bold>0.85</bold></td>
<td align="char" char="."><bold>0.87</bold></td>
<td align="char" char=".">0.96</td>
<td align="char" char=".">0.96</td>
<td align="char" char="."><bold>0.85</bold></td>
<td align="char" char="."><bold>0.86</bold></td>
</tr>
<tr>
<td/>
<td>200</td>
<td align="char" char=".">1.00</td>
<td align="char" char=".">0.99</td>
<td align="char" char="."><bold>0.87</bold></td>
<td align="char" char="."><bold>0.88</bold></td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">0.98</td>
<td align="char" char="."><bold>0.86</bold></td>
<td align="char" char="."><bold>0.86</bold></td>
</tr>
<tr>
<td/>
<td>500</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">1.00</td>
<td align="char" char="."><bold>0.87</bold></td>
<td align="char" char="."><bold>0.88</bold></td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">0.98</td>
<td align="char" char="."><bold>0.88</bold></td>
<td align="char" char="."><bold>0.88</bold></td>
</tr>
<tr>
<td/>
<td>1000</td>
<td align="char" char=".">1.00</td>
<td align="char" char=".">0.99</td>
<td align="char" char="."><bold>0.88</bold></td>
<td align="char" char="."><bold>0.89</bold></td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">0.99</td>
<td align="char" char="."><bold>0.87</bold></td>
<td align="char" char="."><bold>0.88</bold></td>
</tr>
<tr style="grey-border-top">
<td>Two-step robust</td>
<td>50</td>
<td align="char" char=".">0.97</td>
<td align="char" char=".">0.95</td>
<td align="char" char="."><bold>0.83</bold></td>
<td align="char" char="."><bold>0.81</bold></td>
<td align="char" char=".">0.91</td>
<td align="char" char="."><bold>0.89</bold></td>
<td align="char" char="."><bold>0.79</bold></td>
<td align="char" char="."><bold>0.77</bold></td>
</tr>
<tr>
<td/>
<td>100</td>
<td align="char" char=".">0.97</td>
<td align="char" char=".">0.97</td>
<td align="char" char="."><bold>0.84</bold></td>
<td align="char" char="."><bold>0.83</bold></td>
<td align="char" char=".">0.93</td>
<td align="char" char=".">0.93</td>
<td align="char" char="."><bold>0.82</bold></td>
<td align="char" char="."><bold>0.81</bold></td>
</tr>
<tr>
<td/>
<td>200</td>
<td align="char" char=".">1.00</td>
<td align="char" char=".">0.98</td>
<td align="char" char="."><bold>0.85</bold></td>
<td align="char" char="."><bold>0.84</bold></td>
<td align="char" char=".">0.96</td>
<td align="char" char=".">0.97</td>
<td align="char" char="."><bold>0.84</bold></td>
<td align="char" char="."><bold>0.82</bold></td>
</tr>
<tr>
<td/>
<td>500</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">1.00</td>
<td align="char" char="."><bold>0.86</bold></td>
<td align="char" char="."><bold>0.84</bold></td>
<td align="char" char=".">0.98</td>
<td align="char" char=".">0.97</td>
<td align="char" char="."><bold>0.86</bold></td>
<td align="char" char="."><bold>0.85</bold></td>
</tr>
<tr>
<td/>
<td>1000</td>
<td align="char" char=".">1.00</td>
<td align="char" char=".">0.99</td>
<td align="char" char="."><bold>0.87</bold></td>
<td align="char" char="."><bold>0.85</bold></td>
<td align="char" char=".">0.98</td>
<td align="char" char=".">0.99</td>
<td align="char" char="."><bold>0.85</bold></td>
<td align="char" char="."><bold>0.85</bold></td>
</tr>
<tr style="grey-border-top">
<td>Nonparametric</td>
<td>50</td>
<td align="char" char=".">0.95</td>
<td align="char" char=".">0.96</td>
<td align="char" char=".">1.05</td>
<td align="char" char=".">1.02</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">1.01</td>
<td align="char" char=".">1.09</td>
<td align="char" char=".">1.01</td>
</tr>
<tr>
<td/>
<td>100</td>
<td align="char" char=".">0.97</td>
<td align="char" char=".">1.01</td>
<td align="char" char=".">1.02</td>
<td align="char" char=".">1.01</td>
<td align="char" char=".">0.98</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">0.98</td>
<td align="char" char=".">0.96</td>
</tr>
<tr>
<td/>
<td>200</td>
<td align="char" char=".">1.02</td>
<td align="char" char=".">1.00</td>
<td align="char" char=".">0.97</td>
<td align="char" char=".">0.98</td>
<td align="char" char=".">0.96</td>
<td align="char" char=".">1.03</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">1.03</td>
</tr>
<tr>
<td/>
<td>500</td>
<td align="char" char=".">0.98</td>
<td align="char" char=".">1.00</td>
<td align="char" char=".">1.04</td>
<td align="char" char=".">1.03</td>
<td align="char" char=".">1.01</td>
<td align="char" char=".">0.97</td>
<td align="char" char=".">1.01</td>
<td align="char" char=".">0.99</td>
</tr>
<tr>
<td/>
<td>1000</td>
<td align="char" char=".">0.98</td>
<td align="char" char=".">1.01</td>
<td align="char" char=".">1.00</td>
<td align="char" char=".">1.02</td>
<td align="char" char=".">1.01</td>
<td align="char" char=".">0.94</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">1.01</td>
</tr>
<tr style="grey-border-top">
<td>Parametric</td>
<td>50</td>
<td align="char" char=".">0.97</td>
<td align="char" char=".">0.99</td>
<td align="char" char="."><bold>0.89</bold></td>
<td align="char" char="."><bold>0.88</bold></td>
<td align="char" char=".">0.96</td>
<td align="char" char=".">0.95</td>
<td align="char" char=".">0.91</td>
<td align="char" char="."><bold>0.87</bold></td>
</tr>
<tr>
<td/>
<td>100</td>
<td align="char" char=".">1.02</td>
<td align="char" char=".">0.99</td>
<td align="char" char="."><bold>0.86</bold></td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.96</td>
<td align="char" char=".">0.94</td>
<td align="char" char="."><bold>0.84</bold></td>
<td align="char" char="."><bold>0.88</bold></td>
</tr>
<tr>
<td/>
<td>200</td>
<td align="char" char=".">1.01</td>
<td align="char" char=".">0.96</td>
<td align="char" char=".">0.90</td>
<td align="char" char="."><bold>0.87</bold></td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">0.91</td>
<td align="char" char=".">0.90</td>
</tr>
<tr>
<td/>
<td>500</td>
<td align="char" char=".">0.98</td>
<td align="char" char=".">1.01</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.97</td>
<td align="char" char=".">0.97</td>
<td align="char" char="."><bold>0.87</bold></td>
<td align="char" char="."><bold>0.86</bold></td>
</tr>
<tr style="grey-border-bottom">
<td/>
<td>1000</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">1.02</td>
<td align="char" char="."><bold>0.88</bold></td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">1.01</td>
<td align="char" char=".">0.96</td>
<td align="char" char="."><bold>0.89</bold></td>
<td align="char" char=".">0.91</td>
</tr>
<tr style="background-lightblue; white-border-top; white-border-bottom">
<th align="left" colspan="10">SEM Approach</th><?pagebreak-before?>
</tr>
<tr>
<td>Standard</td>
<td>50</td>
<td align="char" char=".">1.04</td>
<td align="char" char=".">0.98</td>
<td align="char" char="."><bold>0.86</bold></td>
<td align="char" char="."><bold>0.87</bold></td>
<td align="char" char="."><bold>1.25</bold></td>
<td align="char" char="."><bold>1.31</bold></td>
<td align="char" char="."><bold>0.87</bold></td>
<td align="char" char="."><bold>0.88</bold></td>
</tr>
<tr>
<td/>
<td>100</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">0.99</td>
<td align="char" char="."><bold>0.89</bold></td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.98</td>
<td align="char" char=".">0.96</td>
<td align="char" char="."><bold>0.89</bold></td>
<td align="char" char="."><bold>0.89</bold></td>
</tr>
<tr>
<td/>
<td>200</td>
<td align="char" char=".">1.00</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.92</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">0.98</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
</tr>
<tr>
<td/>
<td>500</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">1.00</td>
<td align="char" char=".">0.91</td>
<td align="char" char=".">0.91</td>
<td align="char" char=".">1.01</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">0.92</td>
<td align="char" char=".">0.92</td>
</tr>
<tr>
<td/>
<td>1000</td>
<td align="char" char=".">1.00</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">0.92</td>
<td align="char" char=".">0.93</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">1.00</td>
<td align="char" char=".">0.91</td>
<td align="char" char=".">0.92</td>
</tr>
<tr style="grey-border-top">
<td>Robust</td>
<td>50</td>
<td align="char" char=".">0.95</td>
<td align="char" char=".">0.94</td>
<td align="char" char=".">0.94</td>
<td align="char" char=".">0.94</td>
<td align="char" char="."><bold>0.89</bold></td>
<td align="char" char="."><bold>0.88</bold></td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
</tr>
<tr>
<td/>
<td>100</td>
<td align="char" char=".">0.96</td>
<td align="char" char=".">0.97</td>
<td align="char" char=".">0.96</td>
<td align="char" char=".">0.98</td>
<td align="char" char=".">0.91</td>
<td align="char" char=".">0.93</td>
<td align="char" char=".">0.95</td>
<td align="char" char=".">0.95</td>
</tr>
<tr>
<td/>
<td>200</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">0.98</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">0.96</td>
<td align="char" char=".">0.96</td>
<td align="char" char=".">0.96</td>
<td align="char" char=".">0.96</td>
</tr>
<tr>
<td/>
<td>500</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">1.00</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">0.98</td>
<td align="char" char=".">0.97</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">0.99</td>
</tr>
<tr>
<td/>
<td>1000</td>
<td align="char" char=".">1.00</td>
<td align="char" char=".">0.99</td>
<td align="char" char=".">1.00</td>
<td align="char" char=".">1.01</td>
<td align="char" char=".">0.98</td>
<td align="char" char=".">1.00</td>
<td align="char" char=".">0.98</td>
<td align="char" char=".">0.99</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>Note</italic>. Standard error bias values less than 0.90 or greater than 1.10 are bolded to indicate deviation from the unbiased value of 1.00.</p>
</table-wrap-foot>
</table-wrap>
<p id="s6.ss6_6.p6">Under nonnormal data with the misspecified model (see <xref ref-type="fig" rid="fig-5">Figure 5</xref>), SAM Nonparametric delivered nearly unbiased SEs for <inline-formula id="ieqn-114"><mml:math id="mml-ieqn-114"><mml:msub><mml:mi>f</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>∼</mml:mo><mml:mi>X</mml:mi></mml:math></inline-formula> and <inline-formula id="ieqn-115"><mml:math id="mml-ieqn-115"><mml:msub><mml:mi>f</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>∼</mml:mo><mml:mi>Y</mml:mi></mml:math></inline-formula>, followed closely by SAM Parametric. SAM Two-step showed higher bias in smaller sample sizes, with underestimation around 7%. SAM Robust displayed slightly more bias in smaller samples for these coefficients (e.g., 0.91 and 0.89 at <inline-formula id="ieqn-116"><mml:math id="mml-ieqn-116"><mml:mi>N</mml:mi><mml:mo>=</mml:mo><mml:mn>50</mml:mn></mml:math></inline-formula>), but approached values near 1 as sample size increased (e.g., 0.98 and 0.99 at <inline-formula id="ieqn-117"><mml:math id="mml-ieqn-117"><mml:mi>N</mml:mi><mml:mo>=</mml:mo><mml:mn>1000</mml:mn></mml:math></inline-formula>). SEM Standard exhibited the largest biases for smaller sample sizes, with 25% and 31% overestimation for <inline-formula id="ieqn-118"><mml:math id="mml-ieqn-118"><mml:msub><mml:mi>f</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>∼</mml:mo><mml:mi>X</mml:mi></mml:math></inline-formula> and <inline-formula id="ieqn-119"><mml:math id="mml-ieqn-119"><mml:msub><mml:mi>f</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>∼</mml:mo><mml:mi>Y</mml:mi></mml:math></inline-formula>, respectively. SEM Robust performed better but still slightly underestimated these coefficients by about 10% in the smallest sample size. 
Bias values for both standard and robust SEs decreased with increasing sample sizes.</p><fig id="fig-5" position="anchor" orientation="portrait"><label>Figure 5</label><caption><title>Bias in SEs Across Structural Coefficients for Sample Sizes and SE Methods Under Nonnormally Distributed Data With a Misspecified Model in Study 2</title></caption><graphic mimetype="image" mime-subtype="png" xlink:href="meth.16517-f5.png" position="anchor" orientation="portrait"/></fig>
<p id="s6.ss6_6.p7">Among LSAM methods, for regression coefficients involving <italic>Z</italic>, SAM Nonparametric again performed best, providing accurate SE estimates. SEM Standard and SEM Robust both exhibited underestimation for <inline-formula id="ieqn-120"><mml:math id="mml-ieqn-120"><mml:mi>Z</mml:mi><mml:mo>∼</mml:mo><mml:msub><mml:mi>f</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:math></inline-formula> in the smallest sample size, with underestimation of 13% and 10%, respectively, but the values improved as sample size increased.</p></sec><?table table-4?><?figure fig-5?>
<sec id="s6_7"><title>Coverage Rates</title>
<p id="s6.ss6_7.p1"><xref ref-type="table" rid="table-5">Table 5</xref> presents results for coverage rates in correctly specified models across various sample sizes and estimation methods.</p>
<table-wrap id="table-5" position="anchor" orientation="landscape">
<label>Table 5</label><caption><title>Coverage Rates for SE Methods Across Sample Sizes and Conditions for Correctly Specified Models in Study 2</title></caption>
<table style="compact-1">
<col width="13%" align="left"/>
<col width=""/>
<col width=""/>
<col width=""/>
<col width=""/>
<col width=""/>
<col width=""/>
<col width=""/>
<col width=""/>
<col width=""/>
<col width=""/>
<col width=""/>

<thead>
<tr>
<th/>
<th/>
<th colspan="5" align="center">Normal/Correct<hr/></th>
<th colspan="5" align="center">Nonnormal/Correct<hr/></th>
</tr>
<tr>
<th valign="bottom">SE Method</th>
<th valign="bottom">Sample Size</th>
<th valign="bottom"><inline-formula id="ieqn-121"><mml:math id="mml-ieqn-121"><mml:msub><mml:mi>f</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>∼</mml:mo><mml:mi>X</mml:mi></mml:math></inline-formula></th>
<th valign="bottom"><inline-formula id="ieqn-122"><mml:math id="mml-ieqn-122"><mml:msub><mml:mi>f</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>∼</mml:mo><mml:mi>Y</mml:mi></mml:math></inline-formula></th>
<th valign="bottom"><inline-formula id="ieqn-123"><mml:math id="mml-ieqn-123"><mml:msub><mml:mi>f</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>∼</mml:mo><mml:msub><mml:mi>f</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:math></inline-formula></th>
<th valign="bottom"><inline-formula id="ieqn-124"><mml:math id="mml-ieqn-124"><mml:mi>Z</mml:mi><mml:mo>∼</mml:mo><mml:msub><mml:mi>f</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:math></inline-formula></th>
<th valign="bottom"><inline-formula id="ieqn-125"><mml:math id="mml-ieqn-125"><mml:mi>Z</mml:mi><mml:mo>∼</mml:mo><mml:msub><mml:mi>f</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:math></inline-formula></th>
<th valign="bottom"><inline-formula id="ieqn-126"><mml:math id="mml-ieqn-126"><mml:msub><mml:mi>f</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>∼</mml:mo><mml:mi>X</mml:mi></mml:math></inline-formula></th>
<th valign="bottom"><inline-formula id="ieqn-127"><mml:math id="mml-ieqn-127"><mml:msub><mml:mi>f</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>∼</mml:mo><mml:mi>Y</mml:mi></mml:math></inline-formula></th>
<th valign="bottom"><inline-formula id="ieqn-128"><mml:math id="mml-ieqn-128"><mml:msub><mml:mi>f</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>∼</mml:mo><mml:msub><mml:mi>f</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:math></inline-formula></th>
<th valign="bottom"><inline-formula id="ieqn-129"><mml:math id="mml-ieqn-129"><mml:mi>Z</mml:mi><mml:mo>∼</mml:mo><mml:msub><mml:mi>f</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:math></inline-formula></th>
<th valign="bottom"><inline-formula id="ieqn-130"><mml:math id="mml-ieqn-130"><mml:mi>Z</mml:mi><mml:mo>∼</mml:mo><mml:msub><mml:mi>f</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:math></inline-formula></th>
</tr>
</thead>
<tbody>	
<tr style="background-lightblue; white-border-top; white-border-bottom">
<th align="left" colspan="12">LSAM Approach</th>
</tr>	
<tr>
<td>Two-step standard</td>
<td>50</td>
<td align="char" char=".">0.88</td>
<td align="char" char=".">0.88</td>
<td align="char" char=".">0.88</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.88</td>
<td align="char" char=".">0.88</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.89</td>
</tr>
<tr>
<td/>
<td>100</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
</tr>
<tr>
<td/>
<td>200</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.89</td>
</tr>
<tr>
<td/>
<td>500</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
</tr>
<tr>
<td/>
<td>1000</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
</tr>
<tr style="grey-border-top">
<td>Two-step robust</td>
<td>50</td>
<td align="char" char=".">0.88</td>
<td align="char" char=".">0.87</td>
<td align="char" char=".">0.87</td>
<td align="char" char=".">0.88</td>
<td align="char" char=".">0.87</td>
<td align="char" char=".">0.86</td>
<td align="char" char=".">0.84</td>
<td align="char" char=".">0.86</td>
<td align="char" char=".">0.87</td>
<td align="char" char=".">0.86</td>
</tr>
<tr>
<td/>
<td>100</td>
<td align="char" char=".">0.88</td>
<td align="char" char=".">0.88</td>
<td align="char" char=".">0.88</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.88</td>
<td align="char" char=".">0.87</td>
<td align="char" char=".">0.87</td>
<td align="char" char=".">0.87</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.88</td>
</tr>
<tr>
<td/>
<td>200</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.88</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.88</td>
</tr>
<tr>
<td/>
<td>500</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.89</td>
</tr>
<tr>
<td/>
<td>1000</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.89</td>
</tr>
<tr style="grey-border-top">
<td>Nonparametric</td>
<td>50</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.88</td>
<td align="char" char=".">0.91</td>
<td align="char" char=".">0.91</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.92</td>
<td align="char" char=".">0.91</td>
<td align="char" char=".">0.93</td>
<td align="char" char=".">0.90</td>
</tr>
<tr>
<td/>
<td>100</td>
<td align="char" char=".">0.91</td>
<td align="char" char=".">0.91</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.91</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.91</td>
<td align="char" char=".">0.91</td>
<td align="char" char=".">0.88</td>
</tr>
<tr>
<td/>
<td>200</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.92</td>
<td align="char" char=".">0.88</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.92</td>
<td align="char" char=".">0.88</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.90</td>
</tr>
<tr>
<td/>
<td>500</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.91</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.91</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
</tr>
<tr>
<td/>
<td>1000</td>
<td align="char" char=".">0.91</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.91</td>
<td align="char" char=".">0.91</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.91</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
</tr>
<tr style="grey-border-top">
<td>Parametric</td>
<td>50</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.91</td>
<td align="char" char=".">0.91</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.93</td>
<td align="char" char=".">0.92</td>
</tr>
<tr>
<td/>
<td>100</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.91</td>
<td align="char" char=".">0.92</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.91</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.88</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.91</td>
</tr>
<tr>
<td/>
<td>200</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.91</td>
<td align="char" char=".">0.91</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.92</td>
<td align="char" char=".">0.90</td>
</tr>
<tr>
<td/>
<td>500</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.88</td>
</tr>
<tr style="grey-border-bottom">
<td/>
<td>1000</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.88</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.91</td>
<td align="char" char=".">0.91</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.91</td>
</tr>
<tr style="background-lightblue; white-border-top; white-border-bottom"><?pagebreak-before?>
<th align="left" colspan="12">SEM Approach</th>
</tr>
<tr>
<td>Standard</td>
<td>50</td>
<td align="char" char=".">0.91</td>
<td align="char" char=".">0.88</td>
<td align="char" char=".">0.88</td>
<td align="char" char=".">0.88</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.96</td>
<td align="char" char=".">0.96</td>
<td align="char" char=".">0.88</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.90</td>
</tr>
<tr>
<td/>
<td>100</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
</tr>
<tr>
<td/>
<td>200</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.90</td>
</tr>
<tr>
<td/>
<td>500</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.91</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
</tr>
<tr>
<td/>
<td>1000</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
</tr>
<tr style="grey-border-top">
<td>Robust</td>
<td>50</td>
<td align="char" char=".">0.87</td>
<td align="char" char=".">0.87</td>
<td align="char" char=".">0.88</td>
<td align="char" char=".">0.87</td>
<td align="char" char=".">0.88</td>
<td align="char" char=".">0.86</td>
<td align="char" char=".">0.85</td>
<td align="char" char=".">0.86</td>
<td align="char" char=".">0.87</td>
<td align="char" char=".">0.87</td>
</tr>
<tr>
<td/>
<td>100</td>
<td align="char" char=".">0.88</td>
<td align="char" char=".">0.88</td>
<td align="char" char=".">0.88</td>
<td align="char" char=".">0.88</td>
<td align="char" char=".">0.88</td>
<td align="char" char=".">0.87</td>
<td align="char" char=".">0.87</td>
<td align="char" char=".">0.87</td>
<td align="char" char=".">0.88</td>
<td align="char" char=".">0.88</td>
</tr>
<tr>
<td/>
<td>200</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.88</td>
<td align="char" char=".">0.88</td>
<td align="char" char=".">0.88</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.88</td>
</tr>
<tr>
<td/>
<td>500</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
</tr>
<tr>
<td/>
<td>1000</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.90</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>Note</italic>. Bolded values indicate coverage rates of 0.80 or lower.</p>
</table-wrap-foot>
</table-wrap>
<p id="s6.ss6_7.p2">For correctly specified models, SAM Nonparametric and SAM Parametric consistently yielded the most accurate coverage rates, with values close to 0.90 across all regression coefficients, sample sizes, and distributional conditions. SAM Two-step also performed adequately, although slight undercoverage was observed for some coefficients at smaller sample sizes (e.g., values around 0.87–0.89 at <inline-formula id="ieqn-141"><mml:math id="mml-ieqn-141"><mml:mi>N</mml:mi><mml:mo>=</mml:mo><mml:mn>50</mml:mn></mml:math></inline-formula>). SAM Robust showed similar pattern in small samples, particularly under nonnormal conditions, with values declining to approximately 0.84–0.86 for the <inline-formula id="ieqn-142"><mml:math id="mml-ieqn-142"><mml:msub><mml:mi>f</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>∼</mml:mo><mml:mi>Y</mml:mi></mml:math></inline-formula> and <inline-formula id="ieqn-143"><mml:math id="mml-ieqn-143"><mml:msub><mml:mi>f</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>∼</mml:mo><mml:msub><mml:mi>f</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:math></inline-formula> parameters. Its performance improved as sample size increased.</p><?table table-5?>
<p id="s6.ss6_7.p3">Among the SEM approaches, SEM Standard provided acceptable coverage under normal conditions but demonstrated overcoverage for the <inline-formula id="ieqn-144"><mml:math id="mml-ieqn-144"><mml:msub><mml:mi>f</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>∼</mml:mo><mml:mi>X</mml:mi></mml:math></inline-formula> and <inline-formula id="ieqn-145"><mml:math id="mml-ieqn-145"><mml:msub><mml:mi>f</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>∼</mml:mo><mml:mi>Y</mml:mi></mml:math></inline-formula> parameters in the smallest sample size under nonnormality. SEM Robust performed similarly to SEM Standard under normal conditions, yielding more stable coverage overall. However, it exhibited slight undercoverage for the <inline-formula id="ieqn-146"><mml:math id="mml-ieqn-146"><mml:msub><mml:mi>f</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>∼</mml:mo><mml:mi>X</mml:mi></mml:math></inline-formula> and <inline-formula id="ieqn-147"><mml:math id="mml-ieqn-147"><mml:msub><mml:mi>f</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>∼</mml:mo><mml:mi>Y</mml:mi></mml:math></inline-formula> parameters (e.g., 0.86 and 0.85 at <inline-formula id="ieqn-148"><mml:math id="mml-ieqn-148"><mml:mi>N</mml:mi><mml:mo>=</mml:mo><mml:mn>50</mml:mn></mml:math></inline-formula>), which improved as sample size increased.</p>
	<p id="s6.ss6_7.p4">For misspecified models (see <xref ref-type="table" rid="table-6">Table 6</xref>), across both normal and nonnormal data, SAM Nonparametric consistently outperformed the other LSAM methods, particularly for regression coefficients involving the endogenous variable <italic>Z</italic>. For <inline-formula id="ieqn-149"><mml:math id="mml-ieqn-149"><mml:mi>Z</mml:mi><mml:mo>∼</mml:mo><mml:msub><mml:mi>f</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:math></inline-formula> and <inline-formula id="ieqn-150"><mml:math id="mml-ieqn-150"><mml:mi>Z</mml:mi><mml:mo>∼</mml:mo><mml:msub><mml:mi>f</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:math></inline-formula>, SAM Nonparametric maintained coverage rates between 0.90 and 0.92 across all sample sizes. For the exogenous coefficient <inline-formula id="ieqn-151"><mml:math id="mml-ieqn-151"><mml:msub><mml:mi>f</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>∼</mml:mo><mml:mi>X</mml:mi></mml:math></inline-formula>, all LSAM methods showed reasonably high coverage across conditions (typically ranging from 0.87 to 0.90). However, performance for <inline-formula id="ieqn-152"><mml:math id="mml-ieqn-152"><mml:msub><mml:mi>f</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>∼</mml:mo><mml:mi>Y</mml:mi></mml:math></inline-formula> was less stable, with low coverage rates observed across all estimation methods—particularly as sample size increased. These coverage issues may stem from biased point estimates rather than from inaccuracies in SE estimation, consistent with the findings from <xref ref-type="sec" rid="s6_2">Study 1</xref>.</p>
<table-wrap id="table-6" position="anchor" orientation="landscape">
<label>Table 6</label><caption><title>Coverage Rates for SE Methods Across Sample Sizes and Conditions for Misspecified Models in Study 2</title></caption>
<table style="compact-1"><colgroup>
<col width="" align="left"/>
<col width=""/>
<col width=""/>
<col width=""/>
<col width=""/>
<col width=""/>
<col width=""/>
<col width=""/>
<col width=""/>
<col width=""/>
</colgroup>
<thead>
<tr>
<th/>
<th/>
<th colspan="4" align="center">Normal/Misspecified<hr/></th>
<th colspan="4" align="center">Nonnormal/Misspecified<hr/></th>
</tr>
<tr>
	<th>Method</th>
	<th>Sample Size</th>
<th><inline-formula id="ieqn-153"><mml:math id="mml-ieqn-153"><mml:msub><mml:mi>f</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>∼</mml:mo><mml:mi>X</mml:mi></mml:math></inline-formula></th>
<th><inline-formula id="ieqn-154"><mml:math id="mml-ieqn-154"><mml:msub><mml:mi>f</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>∼</mml:mo><mml:mi>Y</mml:mi></mml:math></inline-formula></th>
<th><inline-formula id="ieqn-155"><mml:math id="mml-ieqn-155"><mml:mi>Z</mml:mi><mml:mo>∼</mml:mo><mml:msub><mml:mi>f</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:math></inline-formula></th>
<th><inline-formula id="ieqn-156"><mml:math id="mml-ieqn-156"><mml:mi>Z</mml:mi><mml:mo>∼</mml:mo><mml:msub><mml:mi>f</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:math></inline-formula></th>
<th><inline-formula id="ieqn-157"><mml:math id="mml-ieqn-157"><mml:msub><mml:mi>f</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>∼</mml:mo><mml:mi>X</mml:mi></mml:math></inline-formula></th>
<th><inline-formula id="ieqn-158"><mml:math id="mml-ieqn-158"><mml:msub><mml:mi>f</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>∼</mml:mo><mml:mi>Y</mml:mi></mml:math></inline-formula></th>
<th><inline-formula id="ieqn-159"><mml:math id="mml-ieqn-159"><mml:mi>Z</mml:mi><mml:mo>∼</mml:mo><mml:msub><mml:mi>f</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:math></inline-formula></th>
<th><inline-formula id="ieqn-160"><mml:math id="mml-ieqn-160"><mml:mi>Z</mml:mi><mml:mo>∼</mml:mo><mml:msub><mml:mi>f</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:math></inline-formula></th>
</tr>
</thead>
<tbody>	
<tr style="background-lightblue; white-border-top; white-border-bottom">
<th align="left" colspan="10">LSAM Approach</th>
</tr>		
<tr>
<td>Two-step standard</td>
<td>50</td>
<td align="char" char=".">0.88</td>
<td align="char" char=".">0.87</td>
<td align="char" char=".">0.83</td>
<td align="char" char=".">0.84</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.86</td>
<td align="char" char=".">0.83</td>
<td align="char" char=".">0.84</td>
</tr>
<tr>
<td/>
<td>100</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.86</td>
<td align="char" char=".">0.85</td>
<td align="char" char=".">0.85</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.85</td>
<td align="char" char=".">0.85</td>
<td align="char" char=".">0.85</td>
</tr>
<tr>
<td/>
<td>200</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.82</td>
<td align="char" char=".">0.84</td>
<td align="char" char=".">0.85</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.82</td>
<td align="char" char=".">0.85</td>
<td align="char" char=".">0.84</td>
</tr>
<tr>
<td/>
<td>500</td>
<td align="char" char=".">0.90</td>
<td align="char" char="."><bold>0.72</bold></td>
<td align="char" char=".">0.85</td>
<td align="char" char=".">0.85</td>
<td align="char" char=".">0.90</td>
<td align="char" char="."><bold>0.70</bold></td>
<td align="char" char=".">0.85</td>
<td align="char" char=".">0.86</td>
</tr>
<tr>
<td/>
<td>1000</td>
<td align="char" char=".">0.90</td>
<td align="char" char="."><bold>0.54</bold></td>
<td align="char" char=".">0.85</td>
<td align="char" char=".">0.86</td>
<td align="char" char=".">0.90</td>
<td align="char" char="."><bold>0.54</bold></td>
<td align="char" char=".">0.85</td>
<td align="char" char=".">0.86</td>
</tr>
<tr style="grey-border-top">
<td>Two-step robust</td>
<td>50</td>
<td align="char" char=".">0.88</td>
<td align="char" char=".">0.85</td>
<td align="char" char=".">0.82</td>
<td align="char" char=".">0.81</td>
<td align="char" char=".">0.86</td>
<td align="char" char=".">0.82</td>
<td align="char" char="."><bold>0.80</bold></td>
<td align="char" char="."><bold>0.79</bold></td>
</tr>
<tr>
<td/>
<td>100</td>
<td align="char" char=".">0.88</td>
<td align="char" char=".">0.85</td>
<td align="char" char=".">0.84</td>
<td align="char" char=".">0.82</td>
<td align="char" char=".">0.87</td>
<td align="char" char=".">0.81</td>
<td align="char" char=".">0.82</td>
<td align="char" char=".">0.82</td>
</tr>
<tr>
<td/>
<td>200</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.82</td>
<td align="char" char=".">0.84</td>
<td align="char" char=".">0.83</td>
<td align="char" char=".">0.89</td>
<td align="char" char="."><bold>0.79</bold></td>
<td align="char" char=".">0.83</td>
<td align="char" char=".">0.82</td>
</tr>
<tr>
<td/>
<td>500</td>
<td align="char" char=".">0.90</td>
<td align="char" char="."><bold>0.72</bold></td>
<td align="char" char=".">0.84</td>
<td align="char" char=".">0.83</td>
<td align="char" char=".">0.89</td>
<td align="char" char="."><bold>0.68</bold></td>
<td align="char" char=".">0.84</td>
<td align="char" char=".">0.84</td>
</tr>
<tr>
<td/>
<td>1000</td>
<td align="char" char=".">0.90</td>
<td align="char" char="."><bold>0.54</bold></td>
<td align="char" char=".">0.85</td>
<td align="char" char=".">0.84</td>
<td align="char" char=".">0.89</td>
<td align="char" char="."><bold>0.53</bold></td>
<td align="char" char=".">0.84</td>
<td align="char" char=".">0.84</td>
</tr>
<tr style="grey-border-top">
<td>Nonparametric</td>
<td>50</td>
<td align="char" char=".">0.87</td>
<td align="char" char=".">0.86</td>
<td align="char" char=".">0.92</td>
<td align="char" char=".">0.91</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.88</td>
<td align="char" char=".">0.93</td>
<td align="char" char=".">0.90</td>
</tr>
<tr>
<td/>
<td>100</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.87</td>
<td align="char" char=".">0.91</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.82</td>
<td align="char" char=".">0.91</td>
<td align="char" char=".">0.88</td>
</tr>
<tr>
<td/>
<td>200</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.82</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.88</td>
<td align="char" char=".">0.89</td>
<td align="char" char="."><bold>0.80</bold></td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.90</td>
</tr>
<tr>
<td/>
<td>500</td>
<td align="char" char=".">0.89</td>
<td align="char" char="."><bold>0.68</bold></td>
<td align="char" char=".">0.92</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
<td align="char" char="."><bold>0.68</bold></td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
</tr>
<tr>
<td/>
<td>1000</td>
<td align="char" char=".">0.89</td>
<td align="char" char="."><bold>0.56</bold></td>
<td align="char" char=".">0.91</td>
<td align="char" char=".">0.91</td>
<td align="char" char=".">0.90</td>
<td align="char" char="."><bold>0.53</bold></td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.90</td>
</tr>
<tr style="grey-border-top">
<td>Parametric</td>
<td>50</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.88</td>
<td align="char" char=".">0.85</td>
<td align="char" char=".">0.87</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.87</td>
<td align="char" char=".">0.86</td>
</tr>
<tr>
<td/>
<td>100</td>
<td align="char" char=".">0.91</td>
<td align="char" char=".">0.87</td>
<td align="char" char=".">0.85</td>
<td align="char" char=".">0.86</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.85</td>
<td align="char" char=".">0.83</td>
<td align="char" char=".">0.86</td>
</tr>
<tr>
<td/>
<td>200</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.81</td>
<td align="char" char=".">0.86</td>
<td align="char" char=".">0.85</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.83</td>
<td align="char" char=".">0.87</td>
<td align="char" char=".">0.85</td>
</tr>
<tr>
<td/>
<td>500</td>
<td align="char" char=".">0.90</td>
<td align="char" char="."><bold>0.71</bold></td>
<td align="char" char=".">0.86</td>
<td align="char" char=".">0.86</td>
<td align="char" char=".">0.89</td>
<td align="char" char="."><bold>0.71</bold></td>
<td align="char" char=".">0.85</td>
<td align="char" char=".">0.85</td>
</tr>
<tr style="grey-border-bottom">
<td/>
<td>1000</td>
<td align="char" char=".">0.90</td>
<td align="char" char="."><bold>0.53</bold></td>
<td align="char" char=".">0.85</td>
<td align="char" char=".">0.86</td>
<td align="char" char=".">0.91</td>
<td align="char" char="."><bold>0.53</bold></td>
<td align="char" char=".">0.86</td>
<td align="char" char=".">0.87</td>
</tr>
<tr style="background-lightblue; white-border-top; white-border-bottom">
<th align="left" colspan="10">SEM Approach</th><?pagebreak-before?>
</tr>
<tr>
<td>Standard</td>
<td>50</td>
<td align="char" char=".">0.91</td>
<td align="char" char=".">0.87</td>
<td align="char" char=".">0.84</td>
<td align="char" char=".">0.85</td>
<td align="char" char=".">0.96</td>
<td align="char" char=".">0.96</td>
<td align="char" char=".">0.85</td>
<td align="char" char=".">0.86</td>
</tr>
<tr>
<td/>
<td>100</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.86</td>
<td align="char" char=".">0.85</td>
<td align="char" char=".">0.86</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.85</td>
<td align="char" char=".">0.86</td>
<td align="char" char=".">0.87</td>
</tr>
<tr>
<td/>
<td>200</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.82</td>
<td align="char" char=".">0.85</td>
<td align="char" char=".">0.87</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.82</td>
<td align="char" char=".">0.84</td>
<td align="char" char=".">0.86</td>
</tr>
<tr>
<td/>
<td>500</td>
<td align="char" char=".">0.90</td>
<td align="char" char="."><bold>0.72</bold></td>
<td align="char" char=".">0.82</td>
<td align="char" char=".">0.87</td>
<td align="char" char=".">0.90</td>
<td align="char" char="."><bold>0.70</bold></td>
<td align="char" char=".">0.82</td>
<td align="char" char=".">0.87</td>
</tr>
<tr>
<td/>
<td>1000</td>
<td align="char" char=".">0.90</td>
<td align="char" char="."><bold>0.55</bold></td>
<td align="char" char="."><bold>0.78</bold></td>
<td align="char" char=".">0.87</td>
<td align="char" char=".">0.90</td>
<td align="char" char="."><bold>0.54</bold></td>
<td align="char" char="."><bold>0.78</bold></td>
<td align="char" char=".">0.87</td>
</tr>
<tr style="grey-border-top">
<td>Robust</td>
<td>50</td>
<td align="char" char=".">0.87</td>
<td align="char" char=".">0.85</td>
<td align="char" char=".">0.87</td>
<td align="char" char=".">0.88</td>
<td align="char" char=".">0.85</td>
<td align="char" char=".">0.81</td>
<td align="char" char=".">0.87</td>
<td align="char" char=".">0.87</td>
</tr>
<tr>
<td/>
<td>100</td>
<td align="char" char=".">0.88</td>
<td align="char" char=".">0.85</td>
<td align="char" char=".">0.88</td>
<td align="char" char=".">0.88</td>
<td align="char" char=".">0.87</td>
<td align="char" char="."><bold>0.80</bold></td>
<td align="char" char=".">0.88</td>
<td align="char" char=".">0.88</td>
</tr>
<tr>
<td/>
<td>200</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.82</td>
<td align="char" char=".">0.88</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.88</td>
<td align="char" char="."><bold>0.79</bold></td>
<td align="char" char=".">0.88</td>
<td align="char" char=".">0.88</td>
</tr>
<tr>
<td/>
<td>500</td>
<td align="char" char=".">0.90</td>
<td align="char" char="."><bold>0.72</bold></td>
<td align="char" char=".">0.85</td>
<td align="char" char=".">0.89</td>
<td align="char" char=".">0.89</td>
<td align="char" char="."><bold>0.68</bold></td>
<td align="char" char=".">0.87</td>
<td align="char" char=".">0.90</td>
</tr>
<tr>
<td/>
<td>1000</td>
<td align="char" char=".">0.90</td>
<td align="char" char="."><bold>0.55</bold></td>
<td align="char" char=".">0.82</td>
<td align="char" char=".">0.90</td>
<td align="char" char=".">0.89</td>
<td align="char" char="."><bold>0.54</bold></td>
<td align="char" char=".">0.83</td>
<td align="char" char=".">0.90</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>Note</italic>. Bolded values indicate coverage rates of 0.80 or lower.</p>
</table-wrap-foot>
</table-wrap>
<p>For the SEM methods, coverage for the exogenous coefficient <inline-formula id="ieqn-169"><mml:math id="mml-ieqn-169"><mml:msub><mml:mi>f</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>∼</mml:mo><mml:mi>X</mml:mi></mml:math></inline-formula> remained relatively high across conditions for both SEM Standard and SEM Robust. SEM Standard produced slightly inflated coverage in small samples under nonnormality (e.g., 0.96 at <inline-formula id="ieqn-170"><mml:math id="mml-ieqn-170"><mml:mi>N</mml:mi><mml:mo>=</mml:mo><mml:mn>50</mml:mn></mml:math></inline-formula>), but values remained close to or slightly below 0.90 as sample size increased. For coefficients involving the endogenous variable <italic>Z</italic>, both SEM methods demonstrated consistent undercoverage. For instance, coverage for <inline-formula id="ieqn-171"><mml:math id="mml-ieqn-171"><mml:mi>Z</mml:mi><mml:mo>∼</mml:mo><mml:msub><mml:mi>f</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:math></inline-formula> in SEM Standard decreased from 0.85 to 0.78 with increasing sample size. SEM Robust provided slightly better coverage for these parameters, though coverage for <inline-formula id="ieqn-172"><mml:math id="mml-ieqn-172"><mml:mi>Z</mml:mi><mml:mo>∼</mml:mo><mml:msub><mml:mi>f</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:math></inline-formula> remained low in larger samples, consistent with SEM Standard.</p></sec></sec><?table table-6?>
	<sec sec-type="discussion" id="s7"><title>Discussion</title>
<p id="s7.p1">Previous research has paid little attention to the estimation of SEs within the SAM approach. This study aimed to evaluate SEs for structural coefficients in two SEM models using the LSAM framework, which includes four distinct methods: analytic two-step, robust two-step, nonparametric bootstrap, and parametric bootstrap. The performance of these SE methods was compared under varying conditions of sample size, data distribution, and model specification. Additionally, we included standard and robust SEs derived from joint ML to assess how traditional methods performed under identical conditions.</p>
		<p id="s7.p2">With this aim, we conducted two studies. In <xref ref-type="sec" rid="s6_2">Study 1</xref>, a two-factor structural model was tested, with misspecification applied to the measurement part. In <xref ref-type="sec" rid="s6_5">Study 2</xref>, we expanded the initial model to a more complex one, better reflecting real-world applications. Additionally, misspecification was introduced within the structural part. While nonnormality was limited to latent scores in <xref ref-type="sec" rid="s6_2">Study 1</xref>, <xref ref-type="sec" rid="s6_5">Study 2</xref> extended nonnormality to exogenous variables, disturbances, and residuals. Thus, in <xref ref-type="sec" rid="s6_5">Study 2</xref>, we not only introduced misspecification in the structural part and added multiple sources of nonnormality, but also tested a more complex structural model, offering a more comprehensive assessment of LSAM’s performance for SE estimation under varying conditions.</p>
	<p id="s7.p3">Similar to the findings of <xref ref-type="bibr" rid="ref-49">Rosseel and Loh (2024)</xref> and <xref ref-type="bibr" rid="ref-23">Dhaene and Rosseel (2023)</xref>, the LSAM approach demonstrated robust performance, achieving convergence in all iterations. In the standard estimation approach, convergence issues are frequently encountered, particularly with smaller sample sizes (<xref ref-type="bibr" rid="ref-1">Anderson &amp; Gerbing, 1984</xref>; <xref ref-type="bibr" rid="ref-12">Boomsma, 1985</xref>; <xref ref-type="bibr" rid="ref-44">Nevitt &amp; Hancock, 2004</xref>; <xref ref-type="bibr" rid="ref-60">Yuan &amp; Bentler, 1997</xref>). These challenges, however, were effectively addressed by employing bounded estimation (<xref ref-type="bibr" rid="ref-19">De Jonckere &amp; Rosseel, 2022</xref>), which ensured successful convergence across all conditions, including those involving small sample sizes, for the SEM approach.</p>
<p id="s7.p4">Among SE methods, SAM Nonparametric excelled under nonnormal conditions, delivering near-unbiased SE estimates regardless of model specification. SAM Parametric consistently produced minimal bias under normal conditions across all sample sizes in correct models. The SAM Two-step method performed well under normal conditions for correctly specified models but showed greater variability under nonnormal data, especially in smaller sample sizes and misspecified models. Compared to SAM Two-step, the robust variant performed better in larger samples under nonnormality with correct models. Although prior studies primarily focused on point estimates (e.g., MSE values), our findings expand on the work of <xref ref-type="bibr" rid="ref-49">Rosseel and Loh (2024)</xref> and <xref ref-type="bibr" rid="ref-23">Dhaene and Rosseel (2023)</xref> by examining SEs within the LSAM framework. While these studies confirmed robustness for point estimates, our results demonstrate that SEs are also robust under varying conditions. In this sense, the LSAM approach is strengthened: not only are the point estimates accurate, but so are the SEs. This is important because accurate SEs are central to evaluating the significance of regression coefficients.</p>
		<p id="s7.p5">Coverage rates in <xref ref-type="sec" rid="s6_2">Study 1</xref> and <xref ref-type="sec" rid="s6_5">Study 2</xref> exhibited consistent performance patterns across estimation methods within each study. In both studies, coverage rates approached 0.90 under correctly specified models across all estimation methods. Under model misspecification, some parameters exhibited very low coverage rates for all estimation methods, but this is attributable to the point estimates for these parameters being severely biased in those settings rather than to inaccurate SEs.</p>
		<p id="s7.p6">Previous research in joint SEM has shown that nonparametric bootstrapping can estimate SEs accurately. In this study, we set out to evaluate whether similar conclusions can be drawn for the structural coefficients in our two models when adopting the LSAM approach. Our findings align with research in joint ML SEM that highlights the effectiveness of nonparametric bootstrapping for SE estimation (<xref ref-type="bibr" rid="ref-10">Bollen &amp; Stine, 1990</xref>, <xref ref-type="bibr" rid="ref-11">1992</xref>; <xref ref-type="bibr" rid="ref-43">Nevitt &amp; Hancock, 2001</xref>). For instance, <xref ref-type="bibr" rid="ref-43">Nevitt and Hancock (2001)</xref> showed that bootstrap SEs outperform ML SEs in terms of bias and variability under nonnormality, particularly for sample sizes <inline-formula id="ieqn-173"><mml:math id="mml-ieqn-173"><mml:mi>n</mml:mi><mml:mo>≥</mml:mo><mml:mn>200</mml:mn></mml:math></inline-formula>. Our study extends these conclusions to the LSAM framework, demonstrating that SAM Nonparametric maintains robust performance across varying sample sizes and model specifications, even in smaller sample conditions. Additionally, <xref ref-type="bibr" rid="ref-61">Yuan and Hayashi (2006)</xref> emphasized the consistency of bootstrap SEs under model misspecification — a finding echoed in our results, where SAM Nonparametric delivered near-unbiased estimates despite structural misspecification in <xref ref-type="sec" rid="s6_5">Study 2</xref>. By evaluating these methods across two models of increasing complexity, our results confirm and expand upon the utility of bootstrapping methods for SE estimation under both normal and nonnormal conditions within the LSAM framework.</p>
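<p>The case-resampling logic behind the nonparametric bootstrap can be sketched as follows. This is a minimal illustration in Python for the slope of a simple regression, not the lavaan/LSAM implementation used in this study; the function name and toy data are purely hypothetical.</p>

```python
import numpy as np

def nonparametric_bootstrap_se(x, y, n_boot=2000, seed=0):
    """SE of the slope of y on x via nonparametric (case) resampling.

    Each replication draws n cases (x_i, y_i) with replacement and
    re-estimates the slope; the SE is the standard deviation of the
    bootstrap estimates. No distributional assumption is made.
    """
    rng = np.random.default_rng(seed)
    n = len(x)
    slopes = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)                    # resample whole cases
        xb, yb = x[idx], y[idx]
        slopes[b] = np.cov(xb, yb, bias=True)[0, 1] / np.var(xb)  # OLS slope
    return slopes.std(ddof=1)

# Hypothetical toy data: true slope 0.5, normal noise
rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 0.5 * x + rng.normal(scale=0.8, size=200)
se = nonparametric_bootstrap_se(x, y)
```

Because the empirical distribution of the cases is resampled directly, the same scheme remains valid when the data are nonnormal, which is the property exploited by SAM Nonparametric above.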
<p id="s7.p7">Importantly, our study appears to be the first to incorporate parametric bootstrapping within the SAM framework, alongside nonparametric methods. By assuming a specified distribution, parametric bootstrapping demonstrated its potential to provide stable and accurate SE estimates, particularly in smaller sample sizes (<xref ref-type="bibr" rid="ref-30">Hesterberg, 2015</xref>) and when the assumed data distribution, such as normality, closely approximates the true distribution. Our results indicate its effectiveness under normal conditions, even in the presence of model misspecification, and its consistent performance across varying sample sizes. A notable drawback of both parametric and nonparametric bootstrapping, however, is their considerable computational cost (<xref ref-type="bibr" rid="ref-21">Deng et al., 2018</xref>).</p>
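<p>The contrast with the nonparametric scheme can be made concrete: instead of resampling observed cases, the parametric bootstrap simulates new responses from the fitted model under an assumed (here normal) error distribution. The sketch below is a hypothetical Python illustration for a simple regression slope, not the implementation evaluated in the study.</p>

```python
import numpy as np

def parametric_bootstrap_se(x, y, n_boot=2000, seed=0):
    """SE of the slope via parametric bootstrap under a normal model.

    The regression is fitted once; each replication then simulates new
    responses from the fitted model (normal errors with the estimated
    residual SD) and re-estimates the slope.
    """
    rng = np.random.default_rng(seed)
    n = len(x)
    X = np.column_stack([np.ones(n), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]               # intercept, slope
    resid_sd = np.sqrt(((y - X @ beta) ** 2).sum() / (n - 2))  # residual SD
    slopes = np.empty(n_boot)
    for b in range(n_boot):
        y_sim = X @ beta + rng.normal(scale=resid_sd, size=n)  # simulate from fit
        slopes[b] = np.linalg.lstsq(X, y_sim, rcond=None)[0][1]
    return slopes.std(ddof=1)

# Hypothetical toy data: true slope 0.5, normal noise
rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 0.5 * x + rng.normal(scale=0.8, size=200)
se = parametric_bootstrap_se(x, y)
```

When the assumed error distribution matches the data-generating process, as here, this scheme can be more stable in small samples than case resampling; when the assumption is violated, its SEs inherit the misspecification, consistent with the pattern reported above.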
		<p id="s7.p8">We acknowledge several limitations in this study. First, the findings are specific to the conditions manipulated in our simulations. Additionally, <xref ref-type="sec" rid="s6_2">Study 1</xref> examined a simple two-factor SEM, while <xref ref-type="sec" rid="s6_5">Study 2</xref> extended the model by incorporating observed exogenous and endogenous variables. Expanding the scope to include more latent variables or exploring complex models, such as latent growth models, could improve the generalizability of these findings and provide greater support to applied researchers.</p>
<p id="s7.p9">Regarding two-step SE estimation in LSAM, the current approach relies on returning to the global model to compute the joint information matrix, which is somewhat at odds with the local nature of SAM. Since a fully local alternative has not (yet) been developed, we opted for this strategy. Importantly, we extended this framework to include the robust two-step correction proposed by <xref ref-type="bibr" rid="ref-59">Yuan and Chan (2002)</xref>, allowing SEs to be adjusted for nonnormality. This still requires reliance on the global approach, and future research should focus on developing a method that eliminates the need to switch back to a global perspective.</p>
<p id="s7.p10">A particularly appealing advantage of SAM is its flexibility in expanding the range of possible estimators. By separating the estimation of the measurement model (Step 1) from the structural model (Step 2), SAM enables the use of non-iterative methods from the factor-analytic literature in the first step (see <xref ref-type="bibr" rid="ref-23">Dhaene &amp; Rosseel, 2023</xref>). Once estimates for the measurement part are obtained — either iteratively or through closed-form expressions — structural coefficients can also be estimated via closed-form expressions. While this study employed standard iterative estimators for SE estimation, future research could explore the potential advantages of non-iterative estimators. To date, no analytic method exists for obtaining SEs in non-iterative LSAM. Bootstrapping remains the only available procedure, but developing analytic approaches for SE computation would further leverage the flexibility of the LSAM approach.</p>
<p id="s7.p11">We conclude that LSAM SE methods offer significant advantages in research settings prone to smaller sample sizes, misspecification, and nonnormality, providing accurate SE estimates under these challenging conditions.</p></sec>
</body>
<back>
<ref-list><title>References</title>
	<ref id="ref-1"><mixed-citation publication-type="journal"><string-name name-style="western"><surname>Anderson</surname>, <given-names>J. C.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Gerbing</surname>, <given-names>D. W.</given-names></string-name> (<year>1984</year>). <article-title>The effect of sampling error on convergence, improper solutions, and goodness-of-fit indices for maximum likelihood confirmatory factor analysis</article-title>. <source>Psychometrika</source>, <volume>49</volume>(<issue>2</issue>), <fpage>155</fpage>–<lpage>173</lpage>. <pub-id pub-id-type="doi">10.1007/BF02294170</pub-id></mixed-citation></ref>
	<ref id="ref-2"><mixed-citation publication-type="journal"><string-name name-style="western"><surname>Arminger</surname>, <given-names>G.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Schoenberg</surname>, <given-names>R. J</given-names></string-name>. (<year>1989</year>). <article-title>Pseudo maximum likelihood estimation and a test for misspecification in mean and covariance structure models</article-title>. <source>Psychometrika</source>, <volume>54</volume>(<issue>3</issue>), <fpage>409</fpage>–<lpage>425</lpage>. <pub-id pub-id-type="doi">10.1007/BF02294626</pub-id></mixed-citation></ref>
	<ref id="ref-3"><mixed-citation publication-type="journal"><string-name name-style="western"><surname>Bakk</surname>, <given-names>Z.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Kuha</surname>, <given-names>J</given-names></string-name>. (<year>2018</year>). <article-title>Two-step estimation of models between latent classes and external variables</article-title>. <source>Psychometrika</source>, <volume>83</volume>(<issue>4</issue>), <fpage>871</fpage>–<lpage>892</lpage>. <pub-id pub-id-type="doi">10.1007/s11336-017-9592-7</pub-id></mixed-citation></ref>
	<ref id="ref-4"><mixed-citation publication-type="journal"><string-name name-style="western"><surname>Bakk</surname>, <given-names>Z.</given-names></string-name>, <string-name name-style="western"><surname>Oberski</surname>, <given-names>D. L.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Vermunt</surname>, <given-names>J. K</given-names></string-name>. (<year>2017</year>). <article-title>Relating latent class assignments to external variables: Standard errors for correct inference</article-title>. <source>Political Analysis</source>, <volume>22</volume>(<issue>4</issue>), <fpage>520</fpage>–<lpage>540</lpage>. <pub-id pub-id-type="doi">10.1093/pan/mpu003</pub-id></mixed-citation></ref>
<ref id="ref-5"><mixed-citation publication-type="journal"><string-name name-style="western"><surname>Bartlett</surname>, <given-names>M. S</given-names></string-name>. (<year>1937</year>). <article-title>The statistical conception of mental factors</article-title>. <source>British Journal of Psychology</source>, <volume>28</volume>, <fpage>97</fpage>–<lpage>104</lpage>.</mixed-citation></ref>
<ref id="ref-6"><mixed-citation publication-type="journal"><string-name name-style="western"><surname>Bartlett</surname>, <given-names>M. S</given-names></string-name>. (<year>1938</year>). <article-title>Methods of estimating mental factors</article-title>. <source>Nature</source>, <volume>141</volume>, <fpage>609</fpage>–<lpage>610</lpage>.</mixed-citation></ref>
<ref id="ref-7"><mixed-citation publication-type="book"><string-name name-style="western"><surname>Bentler</surname>, <given-names>P. M</given-names></string-name>. (<year>2004</year>). <source> <italic>EQS 6 structural equations program book</italic> [Computer software manual]</source>. <publisher-name>Multivariate Software</publisher-name>.</mixed-citation></ref>
<ref id="ref-8"><mixed-citation publication-type="book"><string-name name-style="western"><surname>Bollen</surname>, <given-names>K. A</given-names></string-name>. (<year>1989</year>). <source> <italic>Structural equations with latent variables</italic></source>. <publisher-name>John Wiley &amp; Sons</publisher-name>.</mixed-citation></ref>
	<ref id="ref-9"><mixed-citation publication-type="journal"><string-name name-style="western"><surname>Bollen</surname>, <given-names>K. A</given-names></string-name>. (<year>1996</year>). <article-title>An alternative two stage least squares (2SLS) estimator for latent variable equations</article-title>. <source>Psychometrika</source>, <volume>61</volume>(<issue>1</issue>), <fpage>109</fpage>–<lpage>121</lpage>. <pub-id pub-id-type="doi">10.1007/BF02296961</pub-id></mixed-citation></ref>
<ref id="ref-10"><mixed-citation publication-type="edited-book"><string-name name-style="western"><surname>Bollen</surname>, <given-names>K. A.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Stine</surname>, <given-names>R. A</given-names></string-name>. (<year>1990</year>). Direct and indirect effects: Classical and bootstrap estimates of variability. In C. C. Clogg (Ed.), <italic>Sociological methodology</italic> (pp. 115–140). Blackwell.</mixed-citation></ref>
	<ref id="ref-11"><mixed-citation publication-type="journal"><string-name name-style="western"><surname>Bollen</surname>, <given-names>K. A.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Stine</surname>, <given-names>R. A</given-names></string-name>. (<year>1992</year>). <article-title>Bootstrapping goodness-of-fit measures in structural equation models</article-title>. <source>Sociological Methods &amp; Research</source>, <volume>21</volume>(<issue>2</issue>), <fpage>205</fpage>–<lpage>229</lpage>. <pub-id pub-id-type="doi">10.1177/0049124192021002004</pub-id></mixed-citation></ref>
	<ref id="ref-12"><mixed-citation publication-type="journal"><string-name name-style="western"><surname>Boomsma</surname>, <given-names>A</given-names></string-name>. (<year>1985</year>). <article-title>Nonconvergence, improper solutions, and starting values in lisrel maximum likelihood estimation</article-title>. <source>Psychometrika</source>, <volume>50</volume>(<issue>2</issue>), <fpage>229</fpage>–<lpage>242</lpage>. <pub-id pub-id-type="doi">10.1007/BF02294248</pub-id></mixed-citation></ref>
<ref id="ref-13"><mixed-citation publication-type="web"><string-name name-style="western"><surname>Boomsma</surname>, <given-names>A</given-names></string-name>. (<year>1986</year>). On the use of bootstrap and jackknife in covariance structure analysis. In N. L. F. De Antoni &amp; A. Rizzi (Eds.), <italic>Compstat 1986: Proceedings in computational statistics</italic>, (pp. 205–210). Physica.</mixed-citation></ref>
	<ref id="ref-14"><mixed-citation publication-type="journal"><string-name name-style="western"><surname>Burghgraeve</surname>, <given-names>E.</given-names></string-name>, <string-name name-style="western"><surname>Neve</surname>, <given-names>J. D.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Rosseel</surname>, <given-names>Y</given-names></string-name>. (<year>2021</year>). <article-title>Estimating structural equation models using James–Stein type shrinkage estimators</article-title>. <source>Psychometrika</source>, <volume>86</volume>(<issue>2</issue>), <fpage>668</fpage>–<lpage>668</lpage>. <pub-id pub-id-type="doi">10.1007/s11336-021-09766-1</pub-id></mixed-citation></ref>
	<ref id="ref-15"><mixed-citation publication-type="journal"><string-name name-style="western"><surname>Burt</surname>, <given-names>R. S</given-names></string-name>. (<year>1973</year>). <article-title>Confirmatory factor-analytic structures and the theory construction process</article-title>. <source>Sociological Methods &amp; Research</source>, <volume>2</volume>(<issue>2</issue>), <fpage>131</fpage>–<lpage>190</lpage>. <pub-id pub-id-type="doi">10.1177/004912417300200201</pub-id></mixed-citation></ref>
	<ref id="ref-16"><mixed-citation publication-type="journal"><string-name name-style="western"><surname>Burt</surname>, <given-names>R. S</given-names></string-name>. (<year>1976</year>). <article-title>Interpretational confounding of unobserved variables in structural equation models</article-title>. <source>Sociological Methods &amp; Research</source>, <volume>5</volume>(<issue>1</issue>), <fpage>3</fpage>–<lpage>52</lpage>. <pub-id pub-id-type="doi">10.1177/004912417600500101</pub-id></mixed-citation></ref>
	<ref id="r16.5"><mixed-citation publication-type="web">Can, S., &amp; Rosseel, Y. (2025). <italic>Evaluating the standard error estimation of Local Structural-After-Measurement (LSAM) approach in structural equation modeling</italic> [OSF project page containing study code]. Open Science Framework. <ext-link ext-link-type="uri" xlink:href="https://osf.io/ygte5">https://osf.io/ygte5/overview</ext-link></mixed-citation></ref>
	<ref id="ref-17"><mixed-citation publication-type="journal"><string-name name-style="western"><surname>Chen</surname>, <given-names>F.</given-names></string-name>, <string-name name-style="western"><surname>Bollen</surname>, <given-names>K. A.</given-names></string-name>, <string-name name-style="western"><surname>Paxton</surname>, <given-names>P.</given-names></string-name>, <string-name name-style="western"><surname>Curran</surname>, <given-names>P. J.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Kirby</surname>, <given-names>J. B</given-names></string-name>. (<year>2001</year>). <article-title>Improper solutions in structural equation models: Causes, consequences, and strategies</article-title>. <source>Sociological Methods &amp; Research</source>, <volume>29</volume>(<issue>4</issue>), <fpage>468</fpage>–<lpage>508</lpage>. <pub-id pub-id-type="doi">10.1177/0049124101029004003</pub-id></mixed-citation></ref>
<ref id="ref-18"><mixed-citation publication-type="book"><string-name name-style="western"><surname>Croon</surname>, <given-names>M</given-names></string-name>. (<year>2002</year>). Using predicted latent scores in general latent structure models. In G. Marcoulides &amp; I. Moustaki (Eds.), <italic>Latent variable and latent structure models</italic>, (pp.195–223). Lawrence Erlbaum.</mixed-citation></ref>
	<ref id="ref-19"><mixed-citation publication-type="journal"><string-name name-style="western"><surname>De Jonckere</surname>, <given-names>J.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Rosseel</surname>, <given-names>Y</given-names></string-name>. (<year>2022</year>). <article-title>Using bounded estimation to avoid nonconvergence in small sample structural equation modeling</article-title>. <source>Structural Equation Modeling</source>, <volume>29</volume>(<issue>3</issue>), <fpage>412</fpage>–<lpage>427</lpage>. <pub-id pub-id-type="doi">10.1080/10705511.2021.1982716</pub-id></mixed-citation></ref>
	<ref id="ref-20"><mixed-citation publication-type="journal"><string-name name-style="western"><surname>De Jonckere</surname>, <given-names>J.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Rosseel</surname>, <given-names>Y</given-names></string-name>. (<year>2023</year>). <article-title>A model-based shrinkage target to avoid non-convergence in small sample SEM</article-title>. <source>Structural Equation Modeling</source>, <volume>30</volume>(<issue>6</issue>), <fpage>941</fpage>–<lpage>955</lpage>. <pub-id pub-id-type="doi">10.1080/10705511.2023.2171420</pub-id></mixed-citation></ref>
	<ref id="ref-21"><mixed-citation publication-type="journal"><string-name name-style="western"><surname>Deng</surname>, <given-names>L.</given-names></string-name>, <string-name name-style="western"><surname>Yang</surname>, <given-names>M.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Marcoulides</surname>, <given-names>K. M</given-names></string-name>. (<year>2018</year>). <article-title>Structural equation modeling with many variables: A systematic review of issues and developments</article-title>. <source>Frontiers in Psychology</source>, <volume>9</volume>, <elocation-id>580</elocation-id>. <pub-id pub-id-type="doi">10.3389/fpsyg.2018.00580</pub-id></mixed-citation></ref>
	<ref id="ref-22"><mixed-citation publication-type="journal"><string-name name-style="western"><surname>Devlieger</surname>, <given-names>I.</given-names></string-name>, <string-name name-style="western"><surname>Mayer</surname>, <given-names>A.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Rosseel</surname>, <given-names>Y</given-names></string-name>. (<year>2016</year>). <article-title>Hypothesis testing using factor score regression: A comparison of four methods</article-title>. <source>Educational and Psychological Measurement</source>, <volume>76</volume>(<issue>5</issue>), <fpage>741</fpage>–<lpage>770</lpage>. <pub-id pub-id-type="doi">10.1177/0013164415607618</pub-id></mixed-citation></ref>
	<ref id="ref-23"><mixed-citation publication-type="journal"><string-name name-style="western"><surname>Dhaene</surname>, <given-names>S.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Rosseel</surname>, <given-names>Y</given-names></string-name>. (<year>2023</year>). <article-title>An evaluation of non-iterative estimators in the Structural After Measurement (SAM) approach to Structural Equation Modeling (SEM)</article-title>. <source>Structural Equation Modeling</source>, <volume>30</volume>(<issue>6</issue>), <fpage>926</fpage>–<lpage>940</lpage>. <pub-id pub-id-type="doi">10.1080/10705511.2023.2220135</pub-id></mixed-citation></ref>
	<ref id="ref-24"><mixed-citation publication-type="journal"><string-name name-style="western"><surname>Efron</surname>, <given-names>B</given-names></string-name>. (<year>1979</year>). <article-title>Bootstrap methods: Another look at the jackknife</article-title>. <source>Annals of Statistics</source>, <volume>7</volume>(<issue>1</issue>, <fpage>1</fpage>–<lpage>26</lpage>. <pub-id pub-id-type="doi">10.1214/aos/117634455</pub-id></mixed-citation></ref>
<ref id="ref-25"><mixed-citation publication-type="book"><string-name name-style="western"><surname>Efron</surname>, <given-names>B.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Tibshirani</surname>, <given-names>T. J</given-names></string-name>. (<year>1993</year>). <source>An introduction to the bootstrap</source>. <publisher-name>Chapman &amp; Hall</publisher-name>.</mixed-citation></ref>
	<ref id="ref-26"><mixed-citation publication-type="journal"><string-name name-style="western"><surname>Gerbing</surname>, <given-names>D. W.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Anderson</surname>, <given-names>J. C</given-names></string-name>. (<year>1987</year>). <article-title>Improper solutions in the analysis of covariance structures: Their interpretability and a comparison of alternate respecifications</article-title>. <source>Psychometrika</source>, <volume>52</volume>(<issue>1</issue>), <fpage>99</fpage>–<lpage>111</lpage>. <pub-id pub-id-type="doi">10.1007/BF02293958</pub-id></mixed-citation></ref>
<ref id="ref-27"><mixed-citation publication-type="journal"><string-name name-style="western"><surname>Gong</surname>, <given-names>G.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Samaniego</surname>, <given-names>F. J</given-names></string-name>. (<year>1981</year>). <article-title>Pseudo maximum likelihood estimation: Theory and applications</article-title>. <source>Annals of Statistics</source>, <volume>9</volume>(<issue>4</issue>), <fpage>861</fpage>–<lpage>869</lpage>.</mixed-citation></ref>
	<ref id="ref-28"><mixed-citation publication-type="journal"><string-name name-style="western"><surname>Grønneberg</surname>, <given-names>S.</given-names></string-name>, <string-name name-style="western"><surname>Foldnes</surname>, <given-names>N.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Marcoulides</surname>, <given-names>K. M</given-names></string-name>. (<year>2022</year>). <article-title>Covsim: An R package for simulating non-normal data for structural equation models using copulas</article-title>. <source>Journal of Statistical Software</source>, <volume>102</volume>(<issue>3</issue>), <fpage>1</fpage>–<lpage>45</lpage>. <pub-id pub-id-type="doi">10.18637/jss.v102.i03</pub-id></mixed-citation></ref>
<ref id="ref-29"><mixed-citation publication-type="book"><string-name name-style="western"><surname>Hancock</surname>, <given-names>G. R.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Liu</surname>, <given-names>M</given-names></string-name>. (<year>2012</year>). Bootstrapping standard errors and data-model fit statistics in structural equation modeling. In R. H. Hoyle (Ed.), <italic>Handbook of structural equation modeling</italic>, (pp. 296–306). <publisher-name>Guilford Press</publisher-name>.</mixed-citation></ref>
	<ref id="ref-30"><mixed-citation publication-type="journal"><string-name name-style="western"><surname>Hesterberg</surname>, <given-names>T. C</given-names></string-name>. (<year>2015</year>). <article-title>What teachers should know about the bootstrap: Resampling in the undergraduate statistics curriculum</article-title>. <source>American Statistician</source>, <volume>69</volume>(<issue>4</issue>), <fpage>371</fpage>–<lpage>386</lpage>. <pub-id pub-id-type="doi">10.1080/00031305.2015.1089789</pub-id></mixed-citation></ref>
<ref id="ref-31"><mixed-citation publication-type="book"><string-name name-style="western"><surname>Hunter</surname>, <given-names>J. E.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Gerbing</surname>, <given-names>D. W</given-names></string-name>. (<year>1982</year>). Unidimensional measurement, second order factor analysis, and causal models. In B. M. Staw &amp; L. L. Cummings (Eds.), <italic>Research in organizational behavior</italic> (Vol. 4, pp. 267–320). <publisher-name>JAI Press</publisher-name>.</mixed-citation></ref>
	<ref id="ref-32"><mixed-citation publication-type="journal"><string-name name-style="western"><surname>Ievers-Landis</surname>, <given-names>C. E.</given-names></string-name>, <string-name name-style="western"><surname>Burant</surname>, <given-names>C. J.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Hazen</surname>, <given-names>R</given-names></string-name>. (<year>2011</year>). <article-title>The concept of bootstrapping of structural equation models with smaller samples: An illustration using mealtime rituals in diabetes management</article-title>. <source>Journal of Developmental and Behavioral Pediatrics</source>, <volume>32</volume>(<issue>8</issue>), <fpage>619</fpage>–<lpage>626</lpage>. <pub-id pub-id-type="doi">10.1097/DBP.0b013e31822bc74f</pub-id></mixed-citation></ref>
<ref id="ref-33"><mixed-citation publication-type="book"><string-name name-style="western"><surname>Jöreskog</surname>, <given-names>K. G</given-names></string-name>. (<year>1973</year>). A general method for estimating a linear structural equation system. In A. S. Duncan (Ed.), <italic>Structural equation models in the social sciences</italic>, (pp. 85–112). <publisher-name>Seminar Press</publisher-name>.</mixed-citation></ref>
<ref id="ref-34"><mixed-citation publication-type="book"><string-name name-style="western"><surname>Jöreskog</surname>, <given-names>K. G.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Sörbom</surname>, <given-names>D</given-names></string-name>. (<year>1996</year>). <italic>Lisrel 8: User’s reference guide</italic> [Computer software manual]. <publisher-name>Scientific Software International</publisher-name>.</mixed-citation></ref>
	<ref id="ref-35"><mixed-citation publication-type="journal"><string-name name-style="western"><surname>Kaplan</surname>, <given-names>D</given-names></string-name>. (<year>1988</year>). <article-title>The impact of specification error on the estimation, testing, and improvement of structural equation models</article-title>. <source>Multivariate Behavioral Research</source>, <volume>23</volume>(<issue>1</issue>), <fpage>69</fpage>–<lpage>86</lpage>. <pub-id pub-id-type="doi">10.1207/s15327906mbr2301_4</pub-id></mixed-citation></ref>
<ref id="ref-36"><mixed-citation publication-type="thesis"><string-name name-style="western"><surname>Keesling</surname>, <given-names>J. W</given-names></string-name>. (<year>1972</year>). <italic>Maximum likelihood approaches to causal flow analysis</italic> [Unpublished doctoral dissertation]. University of Chicago.</mixed-citation></ref>
<ref id="ref-37"><mixed-citation publication-type="journal"><string-name name-style="western"><surname>Kuha</surname>, <given-names>J.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Bakk</surname>, <given-names>Z</given-names></string-name>. (<year>2023</year>). <article-title>Two-step estimation of latent trait models</article-title>. <source><italic>arXiv preprint, arXiv:2303.16101</italic></source>.</mixed-citation></ref>
	<ref id="ref-38"><mixed-citation publication-type="journal"><string-name name-style="western"><surname>Lance</surname>, <given-names>C. E.</given-names></string-name>, <string-name name-style="western"><surname>Cornwell</surname>, <given-names>J. M.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Mulaik</surname>, <given-names>S. A</given-names></string-name>. (<year>1988</year>). <article-title>Limited information parameter estimates for latent or mixed manifest and latent variable models</article-title>. <source>Multivariate Behavioral Research</source>, <volume>23</volume>(<issue>2</issue>), <fpage>171</fpage>–<lpage>187</lpage>. <pub-id pub-id-type="doi">10.1207/s15327906mbr2302_3</pub-id></mixed-citation></ref>
	<ref id="ref-39"><mixed-citation publication-type="journal"><string-name name-style="western"><surname>Levy</surname>, <given-names>R</given-names></string-name>. (<year>2023</year>). <article-title>Precluding interpretational confounding in factor analysis with a covariate or outcome via measurement and uncertainty preserving parametric modeling</article-title>. <source>Structural Equation Modeling</source>, <volume>30</volume>(<issue>5</issue>), <fpage>719</fpage>–<lpage>736</lpage>. <pub-id pub-id-type="doi">10.1080/10705511.2022.2154214</pub-id>.</mixed-citation></ref>
	<ref id="ref-40"><mixed-citation publication-type="journal"><string-name name-style="western"><surname>Maydeu-Olivares</surname>, <given-names>A</given-names></string-name>. (<year>2017</year>). <article-title>Maximum likelihood estimation of structural equation models for continuous data: Standard errors and goodness of fit</article-title>. <source>Structural Equation Modeling</source>, <volume>24</volume>(<issue>3</issue>), <fpage>383</fpage>–<lpage>394</lpage>. <pub-id pub-id-type="doi">10.1080/10705511.2016.1269606</pub-id></mixed-citation></ref>
	<ref id="ref-41"><mixed-citation publication-type="book"><string-name name-style="western"><surname>Muthén</surname>, <given-names>L. K.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Muthén</surname>, <given-names>B. O</given-names></string-name>. (<year>2010</year>). <italic>Mplus user’s guide</italic> (6<sup>th</sup> ed.) [Computer software manual]. Muthén &amp; Muthén.</mixed-citation></ref>
	<ref id="ref-42"><mixed-citation publication-type="journal"><string-name name-style="western"><surname>Neale</surname>, <given-names>M. C.</given-names></string-name>, <string-name name-style="western"><surname>Hunter</surname>, <given-names>M. D.</given-names></string-name>, <string-name name-style="western"><surname>Pritikin</surname>, <given-names>J. N.</given-names></string-name>, <string-name name-style="western"><surname>Zahery</surname>, <given-names>M.</given-names></string-name>, <string-name name-style="western"><surname>Brick</surname>, <given-names>T. R.</given-names></string-name>, <string-name name-style="western"><surname>Kirkpatrick</surname>, <given-names>R. M.</given-names></string-name>, <string-name name-style="western"><surname>Estabrook</surname>, <given-names>R.</given-names></string-name>, <string-name name-style="western"><surname>Bates</surname>, <given-names>T. C.</given-names></string-name>, <string-name name-style="western"><surname>Maes</surname>, <given-names>H. H.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Boker</surname>, <given-names>S. M</given-names></string-name>. (<year>2016</year>). <article-title>Openmx 2.0: Extended structural equation and statistical modeling</article-title>. <source>Psychometrika</source>, <volume>81</volume>(<issue>2</issue>), <fpage>535</fpage>–<lpage>549</lpage>. <pub-id pub-id-type="doi">10.1007/s11336-014-9435-8</pub-id> </mixed-citation></ref>
	<ref id="ref-43"><mixed-citation publication-type="journal"><string-name name-style="western"><surname>Nevitt</surname>, <given-names>J.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Hancock</surname>, <given-names>G. R</given-names></string-name>. (<year>2001</year>). <article-title>Performance of bootstrapping approaches to model test statistics and parameter standard error estimation in structural equation modeling</article-title>. <source>Structural Equation Modeling</source>, <volume>8</volume>(<issue>3</issue>), <fpage>353</fpage>–<lpage>377</lpage>. <pub-id pub-id-type="doi">10.1207/S15328007SEM0803_2</pub-id></mixed-citation></ref>
	<ref id="ref-44"><mixed-citation publication-type="journal"><string-name name-style="western"><surname>Nevitt</surname>, <given-names>J.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Hancock</surname>, <given-names>G. R</given-names></string-name>. (<year>2004</year>). <article-title>Evaluating small sample approaches for model test statistics in structural equation modeling</article-title>. <source>Multivariate Behavioral Research</source>, <volume>39</volume>(<issue>3</issue>), <fpage>439</fpage>–<lpage>478</lpage>. <pub-id pub-id-type="doi">10.1207/S15327906MBR3903_3</pub-id></mixed-citation></ref>
<ref id="ref-45"><mixed-citation publication-type="journal"><string-name name-style="western"><surname>Parke</surname>, <given-names>W. R</given-names></string-name>. (<year>1986</year>). <article-title>Pseudo maximum likelihood estimation: The asymptotic distribution</article-title>. <source>Annals of Statistics</source>, <volume>14</volume>(<issue>1</issue>), <fpage>355</fpage>–<lpage>357</lpage>.</mixed-citation></ref>
	<ref id="ref-46"><mixed-citation publication-type="journal"><string-name name-style="western"><surname>Perez Alonso</surname>, <given-names>A. F.</given-names></string-name>, <string-name name-style="western"><surname>Rosseel</surname>, <given-names>Y.</given-names></string-name>, <string-name name-style="western"><surname>Vermunt</surname>, <given-names>J. K.</given-names></string-name>, &amp; <string-name name-style="western"><surname>De Roover</surname>, <given-names>K</given-names></string-name>. (<year>2024</year>). <article-title>Mixture multigroup structural equation modeling: A novel method for comparing structural relations across many groups</article-title> <comment>[Advance online publication]</comment>. <source>Psychological Methods</source>. <pub-id pub-id-type="doi">10.1037/met0000667</pub-id></mixed-citation></ref>
	<ref id="ref-47"><mixed-citation publication-type="book"> <collab>R Core Team</collab> (<year>2024</year>). <italic>R: A language and environment for statistical computing</italic> [Computer software manual]. <publisher-name>R Foundation for Statistical Computing</publisher-name>. <ext-link ext-link-type="uri" xlink:href="https://www.R-project.org/">https://www.R-project.org/</ext-link></mixed-citation></ref>
	<ref id="ref-50"><mixed-citation publication-type="journal"><string-name name-style="western"><surname>Rosseel</surname>, <given-names>Y</given-names></string-name>. (<year>2012</year>). <article-title>Lavaan: An R package for structural equation modeling</article-title>. <source>Journal of Statistical Software</source>, <volume>48</volume>, <fpage>1</fpage>–<lpage>36</lpage>. <pub-id pub-id-type="doi">10.18637/jss.v048.i02</pub-id></mixed-citation></ref>	
<ref id="ref-48"><mixed-citation publication-type="web"><string-name name-style="western"><surname>Rosseel</surname>, <given-names>Y.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Devlieger</surname>, <given-names>I</given-names></string-name>. (<year>2018</year>). <italic>Why we may not need SEM after all</italic> [Conference presentation]. SEM Working Group Meeting, Amsterdam, the Netherlands.</mixed-citation></ref>
	<ref id="ref-49"><mixed-citation publication-type="journal"><string-name name-style="western"><surname>Rosseel</surname>, <given-names>Y.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Loh</surname>, <given-names>W. W</given-names></string-name>. (<year>2024</year>). <article-title>A structural after measurement approach to structural equation modeling</article-title>. <source>Psychological Methods</source>, <volume>29</volume>(<issue>3</issue>), <fpage>561</fpage>–<lpage>588</lpage>. <pub-id pub-id-type="doi">10.1037/met0000503</pub-id></mixed-citation></ref>	
<ref id="ref-51"><mixed-citation publication-type="journal"><string-name name-style="western"><surname>Satorra</surname>, <given-names>A.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Bentler</surname>, <given-names>P. M</given-names></string-name>. (<year>1994</year>). Corrections to test statistics and standard errors in covariance structure analysis. In A. von Eye &amp; C. C. Clogg (Eds.), <italic>Latent variables analysis: Applications for developmental research</italic>, (pp. 399–419). SAGE Publications.</mixed-citation></ref>
	<ref id="ref-52"><mixed-citation publication-type="journal"><string-name name-style="western"><surname>Savalei</surname>, <given-names>V</given-names></string-name>. (<year>2010</year>). <article-title>Expected versus observed information in SEM with incomplete normal and nonnormal data</article-title>. <source>Psychological Methods</source>, <volume>15</volume>(<issue>4</issue>), <fpage>352</fpage>–<lpage>367</lpage>. <pub-id pub-id-type="doi">10.1037/a0020143</pub-id></mixed-citation></ref>
	<ref id="ref-53"><mixed-citation publication-type="journal"><string-name name-style="western"><surname>Savalei</surname>, <given-names>V.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Rosseel</surname>, <given-names>Y</given-names></string-name>. (<year>2022</year>). <article-title>Computational options for standard errors and test statistics with incomplete normal and nonnormal data in SEM</article-title>. <source>Structural Equation Modeling</source>, <volume>29</volume>(<issue>2</issue>), <fpage>163</fpage>–<lpage>181</lpage>. <pub-id pub-id-type="doi">10.1080/10705511.2021.1877548</pub-id></mixed-citation></ref>
	<ref id="ref-54"><mixed-citation publication-type="journal"><string-name name-style="western"><surname>Stine</surname>, <given-names>R. A</given-names></string-name>. (<year>1989</year>). <article-title>An introduction to bootstrap methods: Examples and ideas</article-title>. <source>Sociological Methods and Research</source>, <volume>18</volume>(<issue>2–3</issue>), <fpage>243</fpage>–<lpage>291</lpage>. <pub-id pub-id-type="doi">10.1177/0049124189018002003</pub-id></mixed-citation></ref>
	<ref id="ref-55"><mixed-citation publication-type="journal"><string-name name-style="western"><surname>van Driel</surname>, <given-names>O. P</given-names></string-name>. (<year>1978</year>). <article-title>On various causes of improper solutions in maximum likelihood factor analysis</article-title>. <source><italic>Psychometrika</italic></source>, <volume>43</volume>(<issue>2</issue>), <fpage>225</fpage>–<lpage>243</lpage>. <pub-id pub-id-type="doi">10.1007/BF02293865</pub-id></mixed-citation></ref>
	<ref id="ref-56"><mixed-citation publication-type="journal"><string-name name-style="western"><surname>Wall</surname>, <given-names>M. M.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Amemiya</surname>, <given-names>Y</given-names></string-name>. (<year>2000</year>). <article-title>Estimation for polynomial structural equation models</article-title>. <source><italic>Journal of the American Statistical Association</italic></source>, <volume>95</volume>(<issue>451</issue>), <fpage>929</fpage>–<lpage>940</lpage>. <pub-id pub-id-type="doi">10.2307/2669475</pub-id></mixed-citation></ref>
<ref id="ref-57"><mixed-citation publication-type="book"><string-name name-style="western"><surname>Wiley</surname>, <given-names>D. E</given-names></string-name> (<year>1973</year>). The identification problem for structural equation models with unmeasured variables. In A. S. Duncan (Ed.), <italic>Structural equation models in the social sciences</italic> (pp. 69–83). <publisher-name>Seminar Press</publisher-name>.</mixed-citation></ref>
	<ref id="ref-60"><mixed-citation publication-type="journal"><string-name name-style="western"><surname>Yuan</surname>, <given-names>K.-H.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Bentler</surname>, <given-names>P. M.</given-names></string-name> (<year>1997</year>). <article-title>Improving parameter tests in covariance structure analysis</article-title>. <source><italic>Computational Statistical Data Analysis</italic></source>, <volume>26</volume>(<issue>4</issue>), <fpage>177</fpage>–<lpage>198</lpage>. <pub-id pub-id-type="doi">10.1016/S0167-9473(97)00025-X</pub-id></mixed-citation></ref>
	<ref id="ref-58"><mixed-citation publication-type="journal"><string-name name-style="western"><surname>Yuan</surname>, <given-names>K.-H.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Bentler</surname>, <given-names>P. M.</given-names></string-name> (<year>2000</year>). <article-title>Three likelihood-based methods for mean and covariance structure analysis with nonnormal missing data</article-title>. <source><italic>Sociological Methodology</italic></source>, <volume>30</volume>(<issue>1</issue>), <fpage>165</fpage>–<lpage>200</lpage>. <pub-id pub-id-type="doi">10.1111/0081-1750.00078</pub-id></mixed-citation></ref>
	<ref id="ref-59"><mixed-citation publication-type="journal"><string-name name-style="western"><surname>Yuan</surname>, <given-names>K.-H.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Chan</surname>, <given-names>W.</given-names></string-name> (<year>2002</year>). <article-title>Fitting structural equation models using estimating equations: A model segregation approach</article-title>. <source><italic>British Journal of Mathematical and Statistical Psychology</italic></source>, <volume>55</volume>(<issue>1</issue>), <fpage>41</fpage>–<lpage>62</lpage>. <pub-id pub-id-type="doi">10.1348/000711002159699</pub-id></mixed-citation></ref>
	<ref id="ref-61"><mixed-citation publication-type="journal"><string-name name-style="western"><surname>Yuan</surname>, <given-names>K.-H.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Hayashi</surname>, <given-names>K.</given-names></string-name> (<year>2006</year>). <article-title>Standard errors in covariance structure models: Asymptotics versus bootstrap</article-title>. <source><italic>British Journal of Mathematical and Statistical Psychology</italic></source>, <volume>59</volume>(<issue>2</issue>), <fpage>397</fpage>–<lpage>417</lpage>. <pub-id pub-id-type="doi">10.1348/000711005X85896</pub-id></mixed-citation></ref>
<ref id="ref-62"><mixed-citation publication-type="book"><string-name name-style="western"><surname>Yung</surname>, <given-names>Y.-F.</given-names></string-name>, &amp; <string-name name-style="western"><surname>Bentler</surname>, <given-names>P. M.</given-names></string-name> (<year>1996</year>). Bootstrapping techniques in analysis of mean and covariance structures. In G. A. Marcoulides &amp; R. E. Schumacker (Eds.), <italic>Advanced structural equation modeling: Issues and techniques</italic> (pp. 195–226). <publisher-name>Lawrence Erlbaum Associates</publisher-name>.</mixed-citation></ref>
</ref-list><fn-group content-type="footnotes"><fn id="fn-1"><label>1</label>
<p>It could be argued that equation-by-equation approaches, such as the model-implied instrumental variable two-stage least squares (MIIV-2SLS) estimator (<xref ref-type="bibr" rid="ref-9">Bollen, 1996</xref>), or the James–Stein estimator (<xref ref-type="bibr" rid="ref-14">Burghgraeve et al., 2021</xref>), constitute a third approach, but they are not considered in this paper.</p></fn><fn id="fn-2"><label>2</label>
<p>Note that this expression may look familiar, as it corresponds to Bartlett’s factor score matrix used for computing factor scores (<xref ref-type="bibr" rid="ref-5">Bartlett, 1937</xref>, <xref ref-type="bibr" rid="ref-6">1938</xref>). Moreover, factor score regression (FSR) with Croon’s correction (<xref ref-type="bibr" rid="ref-18">Croon, 2002</xref>) represents a special case of LSAM, utilizing the mapping matrix from this equation.</p></fn><fn id="fn-3"><label>3</label>
<p><xref ref-type="bibr" rid="ref-49">Rosseel and Loh (2024)</xref> present alternative formulations of the mapping matrix <inline-formula id="ieqn-13a"><mml:math id="mml-ieqn-13a"><mml:mi mathvariant="bold-italic">M</mml:mi></mml:math></inline-formula> derived from different discrepancy functions (e.g., ML, GLS, ULS).</p></fn></fn-group>
<app-group>
<app id="app01"><title>Appendix</title>
<p><bold>Robust Two-Step Corrected Standard Errors</bold></p>
<p id="s8.ss8.p1">This section briefly describes how ‘robust’ two-step corrected standard errors are computed in the <monospace>sam()</monospace> function in lavaan (Version 0.6-20 or higher). The formulas are based on <xref ref-type="bibr" rid="ref-59">Yuan and Chan (2002)</xref>.</p>
	<p id="s8.ss8.p2">For simplicity, we only assume a covariance structure <inline-formula id="ieqn-174"><mml:math id="mml-ieqn-174"><mml:mi mathvariant="bold">Σ</mml:mi><mml:mo mathvariant="bold" stretchy="false">(</mml:mo><mml:mi mathvariant="bold-italic">ϑ</mml:mi><mml:mo mathvariant="bold" stretchy="false">)</mml:mo></mml:math></inline-formula> without a mean structure. We will also assume that normal theory maximum likelihood (ML) estimation is used in both steps, although the formulas can easily be adapted to other estimators, such as normal theory GLS or WLS/ADF. The sample covariance matrix is denoted by <inline-formula id="ieqn-175"><mml:math id="mml-ieqn-175"><mml:mrow><mml:mi mathvariant="bold">S</mml:mi></mml:mrow></mml:math></inline-formula>. Let <inline-formula id="ieqn-176"><mml:math id="mml-ieqn-176"><mml:mrow><mml:mi mathvariant="bold">s</mml:mi></mml:mrow><mml:mo>=</mml:mo><mml:mtext>vech</mml:mtext><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi mathvariant="bold">S</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> be the vector of nonredundant elements of <inline-formula id="ieqn-177"><mml:math id="mml-ieqn-177"><mml:mrow><mml:mi mathvariant="bold">S</mml:mi></mml:mrow></mml:math></inline-formula>, obtained by stacking the lower-triangular columns (including the diagonal) into a single vector. 
Similarly, we write <inline-formula id="ieqn-178"><mml:math id="mml-ieqn-178"><mml:mi mathvariant="bold-italic">σ</mml:mi><mml:mo>=</mml:mo><mml:mtext>vech</mml:mtext><mml:mo stretchy="false">(</mml:mo><mml:mi mathvariant="bold">Σ</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> and <inline-formula id="ieqn-179"><mml:math id="mml-ieqn-179"><mml:mi mathvariant="bold-italic">σ</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi mathvariant="bold-italic">ϑ</mml:mi><mml:mo stretchy="false">)</mml:mo><mml:mo>=</mml:mo><mml:mtext>vech</mml:mtext><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi mathvariant="bold">Σ</mml:mi><mml:mo mathvariant="bold" stretchy="false">(</mml:mo><mml:mi mathvariant="bold-italic">ϑ</mml:mi><mml:mo mathvariant="bold" stretchy="false">)</mml:mo></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula>.</p>
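For concreteness, the vech(·) operator and the duplication matrix used in this appendix can be sketched numerically. The following Python/NumPy code is an illustration only (it is not part of the lavaan implementation):

```python
import numpy as np

def vech(S):
    # Half-vectorization: the nonredundant (lower-triangular, incl. diagonal)
    # elements of a symmetric matrix, stacked column by column. For a
    # symmetric S this equals the row-major upper triangle, which NumPy
    # extracts directly.
    return S[np.triu_indices(S.shape[0])]

def duplication_matrix(p):
    # Duplication matrix D with vec(S) = D @ vech(S) for symmetric p x p S,
    # where vec() stacks full columns (column-major order).
    D = np.zeros((p * p, p * (p + 1) // 2))
    k = 0
    for j in range(p):          # walk the lower triangle column by column
        for i in range(j, p):
            E = np.zeros((p, p))
            E[i, j] = E[j, i] = 1.0
            D[:, k] = E.ravel(order="F")
            k += 1
    return D
```

For a symmetric matrix S, `duplication_matrix(p) @ vech(S)` reproduces the full column-stacked vector vec(S), which is the defining property used in the weight-matrix formulas below.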
<p id="s8.ss8.p3">In local SAM, the <inline-formula id="ieqn-180"><mml:math id="mml-ieqn-180"><mml:msub><mml:mi>T</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:math></inline-formula> parameters <inline-formula id="ieqn-181"><mml:math id="mml-ieqn-181"><mml:msub><mml:mi mathvariant="bold-italic">ϑ</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:math></inline-formula> related to the measurement part of the model are estimated in a first step (possibly in parts), while the <inline-formula id="ieqn-182"><mml:math id="mml-ieqn-182"><mml:msub><mml:mi>T</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:math></inline-formula> parameters <inline-formula id="ieqn-183"><mml:math id="mml-ieqn-183"><mml:msub><mml:mi mathvariant="bold-italic">ϑ</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:math></inline-formula> related to the structural part of the model are estimated in a second step.</p>
<p id="s8.ss8.p4">For the first step, we can write <inline-formula id="ieqn-184"><mml:math id="mml-ieqn-184"><mml:msub><mml:mrow><mml:mover><mml:mi mathvariant="bold-italic">ϑ</mml:mi><mml:mo stretchy="false">^</mml:mo></mml:mover></mml:mrow><mml:mn>1</mml:mn></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mi>h</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi mathvariant="bold">s</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> where <inline-formula id="ieqn-185"><mml:math id="mml-ieqn-185"><mml:msub><mml:mi>h</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo stretchy="false">(</mml:mo><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> is an implicit function that maps <inline-formula id="ieqn-186"><mml:math id="mml-ieqn-186"><mml:mrow><mml:mi mathvariant="bold">s</mml:mi></mml:mrow></mml:math></inline-formula> to <inline-formula id="ieqn-187"><mml:math id="mml-ieqn-187"><mml:msub><mml:mrow><mml:mover><mml:mi mathvariant="bold-italic">ϑ</mml:mi><mml:mo stretchy="false">^</mml:mo></mml:mover></mml:mrow><mml:mn>1</mml:mn></mml:msub></mml:math></inline-formula>. Let <inline-formula id="ieqn-188"><mml:math id="mml-ieqn-188"><mml:mi mathvariant="bold">Γ</mml:mi></mml:math></inline-formula> denote (<italic>N</italic> times) the asymptotic variance matrix of the sample statistics <inline-formula id="ieqn-189"><mml:math id="mml-ieqn-189"><mml:mrow><mml:mi mathvariant="bold">s</mml:mi></mml:mrow></mml:math></inline-formula>. Then, by using the Delta method, we find that an estimate of (<italic>N</italic> times) the covariance matrix of <inline-formula id="ieqn-190"><mml:math id="mml-ieqn-190"><mml:msub><mml:mrow><mml:mover><mml:mi mathvariant="bold-italic">ϑ</mml:mi><mml:mo stretchy="false">^</mml:mo></mml:mover></mml:mrow><mml:mn>1</mml:mn></mml:msub></mml:math></inline-formula> can be written as
<disp-formula id="eqn-6"><label>A1</label><mml:math id="mml-eqn-6" display="block"><mml:mtext>NACOV</mml:mtext><mml:mo stretchy="false">(</mml:mo><mml:msub><mml:mrow><mml:mover><mml:mi mathvariant="bold-italic">ϑ</mml:mi><mml:mo stretchy="false">^</mml:mo></mml:mover></mml:mrow><mml:mn>1</mml:mn></mml:msub><mml:mo stretchy="false">)</mml:mo><mml:mo>=</mml:mo><mml:mrow><mml:mi mathvariant="bold">P</mml:mi></mml:mrow><mml:mrow><mml:mover><mml:mi mathvariant="bold">Γ</mml:mi><mml:mo stretchy="false">^</mml:mo></mml:mover></mml:mrow><mml:msup><mml:mrow><mml:mi mathvariant="bold">P</mml:mi></mml:mrow><mml:mrow><mml:mi>T</mml:mi></mml:mrow></mml:msup></mml:math></disp-formula> 
where <inline-formula id="ieqn-191"><mml:math id="mml-ieqn-191"><mml:mrow><mml:mi mathvariant="bold">P</mml:mi></mml:mrow></mml:math></inline-formula> is the Jacobian of the implicit function <inline-formula id="ieqn-192"><mml:math id="mml-ieqn-192"><mml:msub><mml:mi>h</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo stretchy="false">(</mml:mo><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula>, evaluated at <inline-formula id="ieqn-193"><mml:math id="mml-ieqn-193"><mml:mrow><mml:mover><mml:mi mathvariant="bold-italic">σ</mml:mi><mml:mo stretchy="false">^</mml:mo></mml:mover></mml:mrow><mml:mo>=</mml:mo><mml:mtext>vech</mml:mtext><mml:mo stretchy="false">(</mml:mo><mml:mi mathvariant="bold">Σ</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mover><mml:msub><mml:mi mathvariant="bold-italic">ϑ</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo stretchy="false">^</mml:mo></mml:mover></mml:mrow><mml:mo stretchy="false">)</mml:mo><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula>. There are several ways to express <inline-formula id="ieqn-194"><mml:math id="mml-ieqn-194"><mml:mrow><mml:mi mathvariant="bold">P</mml:mi></mml:mrow></mml:math></inline-formula>; under the normal theory ML discrepancy function, a common expression for <inline-formula id="ieqn-195"><mml:math id="mml-ieqn-195"><mml:mrow><mml:mi mathvariant="bold">P</mml:mi></mml:mrow></mml:math></inline-formula> is as follows:
<disp-formula id="eqn-7"><label>A2</label><mml:math id="mml-eqn-7" display="block"><mml:mrow><mml:mi mathvariant="bold">P</mml:mi></mml:mrow><mml:mo>=</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:msubsup><mml:mrow><mml:mover><mml:mi mathvariant="bold-italic">σ</mml:mi><mml:mo>˙</mml:mo></mml:mover></mml:mrow><mml:mn>1</mml:mn><mml:mi>T</mml:mi></mml:msubsup><mml:msub><mml:mrow><mml:mi mathvariant="bold">W</mml:mi></mml:mrow><mml:mn>1</mml:mn></mml:msub><mml:msub><mml:mrow><mml:mover><mml:mi mathvariant="bold-italic">σ</mml:mi><mml:mo>˙</mml:mo></mml:mover></mml:mrow><mml:mn>1</mml:mn></mml:msub><mml:msup><mml:mo stretchy="false">)</mml:mo><mml:mrow><mml:mo>−</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:msubsup><mml:mrow><mml:mover><mml:mi mathvariant="bold-italic">σ</mml:mi><mml:mo>˙</mml:mo></mml:mover></mml:mrow><mml:mn>1</mml:mn><mml:mi>T</mml:mi></mml:msubsup><mml:msub><mml:mrow><mml:mi mathvariant="bold">W</mml:mi></mml:mrow><mml:mn>1</mml:mn></mml:msub></mml:math></disp-formula>
where <inline-formula id="ieqn-196"><mml:math id="mml-ieqn-196"><mml:msub><mml:mrow><mml:mover><mml:mi mathvariant="bold-italic">σ</mml:mi><mml:mo>˙</mml:mo></mml:mover></mml:mrow><mml:mn>1</mml:mn></mml:msub></mml:math></inline-formula> is the Jacobian of <inline-formula id="ieqn-197"><mml:math id="mml-ieqn-197"><mml:mi mathvariant="bold-italic">σ</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:msub><mml:mi mathvariant="bold-italic">ϑ</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula>, and <inline-formula id="ieqn-198"><mml:math id="mml-ieqn-198"><mml:msub><mml:mrow><mml:mi mathvariant="bold">W</mml:mi></mml:mrow><mml:mn>1</mml:mn></mml:msub><mml:mo>=</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mn>2</mml:mn></mml:mfrac><mml:msup><mml:mrow><mml:mi mathvariant="bold">D</mml:mi></mml:mrow><mml:mi>T</mml:mi></mml:msup><mml:mo stretchy="false">(</mml:mo><mml:mi mathvariant="bold">Σ</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:msub><mml:mi mathvariant="bold-italic">ϑ</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msup><mml:mo stretchy="false">)</mml:mo><mml:mrow><mml:mo> - </mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mo>⊗</mml:mo><mml:mi mathvariant="bold">Σ</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:msub><mml:mi mathvariant="bold-italic">ϑ</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:msup><mml:mo stretchy="false">)</mml:mo><mml:mrow><mml:mo> - </mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mo stretchy="false">)</mml:mo><mml:mrow><mml:mi mathvariant="bold">D</mml:mi></mml:mrow></mml:math></inline-formula>, and <inline-formula id="ieqn-199"><mml:math id="mml-ieqn-199"><mml:mrow><mml:mi mathvariant="bold">D</mml:mi></mml:mrow></mml:math></inline-formula> is the duplication matrix. 
To accommodate <italic>B</italic> measurement blocks, we can partition <inline-formula id="ieqn-200"><mml:math id="mml-ieqn-200"><mml:mrow><mml:mi mathvariant="bold">P</mml:mi></mml:mrow></mml:math></inline-formula> into <italic>B</italic> parts: <inline-formula id="ieqn-201"><mml:math id="mml-ieqn-201"><mml:msub><mml:mrow><mml:mi mathvariant="bold">P</mml:mi></mml:mrow><mml:mn>1</mml:mn></mml:msub></mml:math></inline-formula>, <inline-formula id="ieqn-202"><mml:math id="mml-ieqn-202"><mml:msub><mml:mrow><mml:mi mathvariant="bold">P</mml:mi></mml:mrow><mml:mn>2</mml:mn></mml:msub></mml:math></inline-formula>, …, <inline-formula id="ieqn-203"><mml:math id="mml-ieqn-203"><mml:msub><mml:mrow><mml:mi mathvariant="bold">P</mml:mi></mml:mrow><mml:mi>B</mml:mi></mml:msub></mml:math></inline-formula>. A measurement block typically only needs a subset <inline-formula id="ieqn-204"><mml:math id="mml-ieqn-204"><mml:msub><mml:mrow><mml:mi mathvariant="bold">s</mml:mi></mml:mrow><mml:mi>b</mml:mi></mml:msub></mml:math></inline-formula> of the data, and we can write <inline-formula id="ieqn-205"><mml:math id="mml-ieqn-205"><mml:msub><mml:mrow><mml:mi mathvariant="bold">s</mml:mi></mml:mrow><mml:mi>b</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mi mathvariant="bold">L</mml:mi></mml:mrow><mml:mi>b</mml:mi></mml:msub><mml:mrow><mml:mi mathvariant="bold">s</mml:mi></mml:mrow></mml:math></inline-formula>, where <inline-formula id="ieqn-206"><mml:math id="mml-ieqn-206"><mml:msub><mml:mrow><mml:mi mathvariant="bold">L</mml:mi></mml:mrow><mml:mi>b</mml:mi></mml:msub></mml:math></inline-formula> is a selection matrix. 
Similarly, a measurement block only produces estimates for a subset <inline-formula id="ieqn-207"><mml:math id="mml-ieqn-207"><mml:msub><mml:mi mathvariant="bold-italic">ϑ</mml:mi><mml:mrow><mml:mn>1</mml:mn><mml:mi>b</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> of <inline-formula id="ieqn-208"><mml:math id="mml-ieqn-208"><mml:msub><mml:mi mathvariant="bold-italic">ϑ</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:math></inline-formula>, and we can write <inline-formula id="ieqn-209"><mml:math id="mml-ieqn-209"><mml:msub><mml:mi mathvariant="bold-italic">ϑ</mml:mi><mml:mrow><mml:mn>1</mml:mn><mml:mi>b</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mi mathvariant="bold">H</mml:mi></mml:mrow><mml:mi>b</mml:mi></mml:msub><mml:msub><mml:mi mathvariant="bold-italic">ϑ</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:math></inline-formula> where <inline-formula id="ieqn-210"><mml:math id="mml-ieqn-210"><mml:msub><mml:mrow><mml:mi mathvariant="bold">H</mml:mi></mml:mrow><mml:mi>b</mml:mi></mml:msub></mml:math></inline-formula> is again a selection matrix. For each measurement block, we have
<disp-formula id="eqn-8"><label>A3</label><mml:math id="mml-eqn-8" display="block"><mml:msub><mml:mrow><mml:mi mathvariant="bold">P</mml:mi></mml:mrow><mml:mi>b</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mi mathvariant="bold">H</mml:mi></mml:mrow><mml:mi>b</mml:mi></mml:msub><mml:mo stretchy="false">(</mml:mo><mml:msubsup><mml:mrow><mml:mover><mml:mi mathvariant="bold-italic">σ</mml:mi><mml:mo>˙</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mi>b</mml:mi></mml:mrow><mml:mi>T</mml:mi></mml:msubsup><mml:msub><mml:mrow><mml:mi mathvariant="bold">W</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mi>b</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mover><mml:mi mathvariant="bold-italic">σ</mml:mi><mml:mo>˙</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mi>b</mml:mi></mml:mrow></mml:msub><mml:msup><mml:mo stretchy="false">)</mml:mo><mml:mrow><mml:mo>−</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:msubsup><mml:mrow><mml:mover><mml:mi mathvariant="bold-italic">σ</mml:mi><mml:mo>˙</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mi>b</mml:mi></mml:mrow><mml:mi>T</mml:mi></mml:msubsup><mml:msub><mml:mrow><mml:mi mathvariant="bold">W</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mi>b</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mrow><mml:mi mathvariant="bold">L</mml:mi></mml:mrow><mml:mi>b</mml:mi></mml:msub><mml:mo>.</mml:mo></mml:math></disp-formula></p>
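As a minimal numerical sketch of Equations A1 and A2 (assuming a hypothetical just-identified one-factor model with three indicators; the parameter values below are arbitrary illustrations, not taken from the simulations or from lavaan):

```python
import numpy as np

# Hypothetical model: Sigma(theta) = lambda lambda' + diag(psi), with
# theta = (lambda_1..lambda_3, psi_1..psi_3); values are illustrative only.
p = 3
theta = np.array([0.8, 0.7, 0.6, 0.36, 0.51, 0.64])

def duplication_matrix(p):
    # D maps vech(S) to vec(S) for symmetric S (column-major vec).
    D = np.zeros((p * p, p * (p + 1) // 2))
    k = 0
    for j in range(p):
        for i in range(j, p):
            E = np.zeros((p, p))
            E[i, j] = E[j, i] = 1.0
            D[:, k] = E.ravel(order="F")
            k += 1
    return D

def sigma_of(t):
    # vech of the model-implied covariance matrix Sigma(theta_1).
    lam, psi = t[:p], t[p:]
    S = np.outer(lam, lam) + np.diag(psi)
    return S[np.triu_indices(p)]

def jacobian(f, x, h=1e-6):
    # Central finite differences; exact here up to rounding, since
    # sigma_of is quadratic in the parameters.
    cols = []
    for k in range(x.size):
        e = np.zeros_like(x)
        e[k] = h
        cols.append((f(x + e) - f(x - e)) / (2.0 * h))
    return np.column_stack(cols)

D = duplication_matrix(p)
Sigma = np.outer(theta[:p], theta[:p]) + np.diag(theta[p:])
Si = np.linalg.inv(Sigma)
W1 = 0.5 * D.T @ np.kron(Si, Si) @ D                 # weight matrix in Eq. A2
sdot1 = jacobian(sigma_of, theta)                    # model Jacobian
P = np.linalg.solve(sdot1.T @ W1 @ sdot1, sdot1.T @ W1)   # Equation A2

# Normal-theory Gamma; with this choice, P Gamma P' in Equation A1 reduces
# to the familiar inverse-information covariance matrix.
Dp = np.linalg.pinv(D)
Gamma = 2.0 * Dp @ np.kron(Sigma, Sigma) @ Dp.T
nacov = P @ Gamma @ P.T                              # Equation A1
```

With a non-normal (e.g., sample-based) estimate of Γ substituted for the normal-theory version, the same two lines produce the robust first-step covariance matrix.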
<p id="s8.ss8.p5">In the second step, we estimate <inline-formula id="ieqn-211"><mml:math id="mml-ieqn-211"><mml:msub><mml:mi mathvariant="bold-italic">ϑ</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:math></inline-formula> as a function of <inline-formula id="ieqn-212"><mml:math id="mml-ieqn-212"><mml:msub><mml:mrow><mml:mover><mml:mi mathvariant="bold-italic">ϑ</mml:mi><mml:mo stretchy="false">^</mml:mo></mml:mover></mml:mrow><mml:mn>1</mml:mn></mml:msub></mml:math></inline-formula> and the data. At this point, we switch back to the global model, and we fill all the estimated parameters into the model matrices of the full model. Based on the model-implied covariance matrix <inline-formula id="ieqn-213"><mml:math id="mml-ieqn-213"><mml:mi mathvariant="bold">Σ</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:msub><mml:mi mathvariant="bold-italic">ϑ</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi mathvariant="bold-italic">ϑ</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula>, we can compute a joint <inline-formula id="ieqn-214"><mml:math id="mml-ieqn-214"><mml:mi>T</mml:mi><mml:mo>×</mml:mo><mml:mi>T</mml:mi></mml:math></inline-formula> information matrix for all the parameters in the full model. We can partition the information matrix as follows:
<disp-formula id="eqn-9"><label>A4</label><mml:math id="mml-eqn-9" display="block"><mml:mi>I</mml:mi><mml:mo>=</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mtable columnspacing="1em" rowspacing="4pt" columnalign="center center"><mml:mtr><mml:mtd><mml:msub><mml:mi>I</mml:mi><mml:mrow><mml:mn>11</mml:mn></mml:mrow></mml:msub></mml:mtd><mml:mtd><mml:msub><mml:mi>I</mml:mi><mml:mrow><mml:mn>12</mml:mn></mml:mrow></mml:msub></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:msub><mml:mi>I</mml:mi><mml:mrow><mml:mn>21</mml:mn></mml:mrow></mml:msub></mml:mtd><mml:mtd><mml:msub><mml:mi>I</mml:mi><mml:mrow><mml:mn>22</mml:mn></mml:mrow></mml:msub></mml:mtd></mml:mtr></mml:mtable><mml:mo>)</mml:mo></mml:mrow><mml:mo>,</mml:mo></mml:math></disp-formula>where the <inline-formula id="ieqn-215"><mml:math id="mml-ieqn-215"><mml:mn>1</mml:mn></mml:math></inline-formula>–index corresponds to the measurement part, and the <inline-formula id="ieqn-216"><mml:math id="mml-ieqn-216"><mml:mn>2</mml:mn></mml:math></inline-formula>–index corresponds to the structural part. The formula for this joint information matrix can be written as
<disp-formula id="ueqn-10"><mml:math id="mml-ueqn-10" display="block"><mml:mtable columnspacing="0em 2em 0em 2em 0em 2em 0em 2em 0em 2em 0em" rowspacing="3pt" columnalign="right left right left right left right left right left right left" displaystyle="true"><mml:mtr><mml:mtd/><mml:mtd><mml:mi>I</mml:mi><mml:mo>=</mml:mo><mml:msup><mml:mrow><mml:mover><mml:mi mathvariant="bold-italic">σ</mml:mi><mml:mo>˙</mml:mo></mml:mover></mml:mrow><mml:mi>T</mml:mi></mml:msup><mml:mrow><mml:mi mathvariant="bold">W</mml:mi></mml:mrow><mml:mrow><mml:mover><mml:mi mathvariant="bold-italic">σ</mml:mi><mml:mo>˙</mml:mo></mml:mover></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>where <inline-formula id="ieqn-217"><mml:math id="mml-ieqn-217"><mml:mrow><mml:mover><mml:mi mathvariant="bold-italic">σ</mml:mi><mml:mo>˙</mml:mo></mml:mover></mml:mrow></mml:math></inline-formula> is the Jacobian of <inline-formula id="ieqn-218"><mml:math id="mml-ieqn-218"><mml:mi mathvariant="bold-italic">σ</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:msub><mml:mi mathvariant="bold-italic">ϑ</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi mathvariant="bold-italic">ϑ</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula>, and <inline-formula id="ieqn-219"><mml:math id="mml-ieqn-219"><mml:mrow><mml:mi mathvariant="bold">W</mml:mi></mml:mrow><mml:mo>=</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mn>2</mml:mn></mml:mfrac><mml:msup><mml:mrow><mml:mi mathvariant="bold">D</mml:mi></mml:mrow><mml:mi>T</mml:mi></mml:msup><mml:mo stretchy="false">(</mml:mo><mml:mi mathvariant="bold">Σ</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:msub><mml:mi mathvariant="bold-italic">ϑ</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi mathvariant="bold-italic">ϑ</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:msup><mml:mo stretchy="false">)</mml:mo><mml:mrow><mml:mo> - 
</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mo>⊗</mml:mo><mml:mi mathvariant="bold">Σ</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:msub><mml:mi mathvariant="bold-italic">ϑ</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi mathvariant="bold-italic">ϑ</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:msup><mml:mo stretchy="false">)</mml:mo><mml:mrow><mml:mo> - </mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msup><mml:mo stretchy="false">)</mml:mo><mml:mrow><mml:mi mathvariant="bold">D</mml:mi></mml:mrow></mml:math></inline-formula>. Note that in general <inline-formula id="ieqn-220"><mml:math id="mml-ieqn-220"><mml:msub><mml:mrow><mml:mi mathvariant="bold">W</mml:mi></mml:mrow><mml:mn>1</mml:mn></mml:msub><mml:mo>≠</mml:mo><mml:mrow><mml:mi mathvariant="bold">W</mml:mi></mml:mrow></mml:math></inline-formula>, unless the structural model is saturated. Using this notation, let <inline-formula id="ieqn-221"><mml:math id="mml-ieqn-221"><mml:msub><mml:mi>I</mml:mi><mml:mrow><mml:mn>22</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:msubsup><mml:mrow><mml:mover><mml:mi mathvariant="bold-italic">σ</mml:mi><mml:mo>˙</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:msub><mml:mi mathvariant="bold-italic">ϑ</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow><mml:mrow><mml:mi>T</mml:mi></mml:mrow></mml:msubsup><mml:mrow><mml:mi mathvariant="bold">W</mml:mi></mml:mrow><mml:msub><mml:mrow><mml:mover><mml:mi mathvariant="bold-italic">σ</mml:mi><mml:mo>˙</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:msub><mml:mi mathvariant="bold-italic">ϑ</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow></mml:msub><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula>, <inline-formula id="ieqn-222"><mml:math id="mml-ieqn-222"><mml:msub><mml:mi>I</mml:mi><mml:mrow><mml:mn>2</mml:mn><mml:mi>f</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:msubsup><mml:mrow><mml:mover><mml:mi 
mathvariant="bold-italic">σ</mml:mi><mml:mo>˙</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:msub><mml:mi mathvariant="bold-italic">ϑ</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow><mml:mrow><mml:mi>T</mml:mi></mml:mrow></mml:msubsup><mml:mrow><mml:mi mathvariant="bold">W</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula>, <inline-formula id="ieqn-223"><mml:math id="mml-ieqn-223"><mml:msub><mml:mi>I</mml:mi><mml:mrow><mml:mi>f</mml:mi><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi mathvariant="bold">W</mml:mi></mml:mrow><mml:msub><mml:mrow><mml:mover><mml:mi mathvariant="bold-italic">σ</mml:mi><mml:mo>˙</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:msub><mml:mi mathvariant="bold-italic">ϑ</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow></mml:msub><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula>, and <inline-formula id="ieqn-224"><mml:math id="mml-ieqn-224"><mml:msub><mml:mi>I</mml:mi><mml:mrow><mml:mn>21</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:msubsup><mml:mrow><mml:mover><mml:mi mathvariant="bold-italic">σ</mml:mi><mml:mo>˙</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:msub><mml:mi mathvariant="bold-italic">ϑ</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow><mml:mrow><mml:mi>T</mml:mi></mml:mrow></mml:msubsup><mml:mrow><mml:mi mathvariant="bold">W</mml:mi></mml:mrow><mml:msub><mml:mrow><mml:mover><mml:mi mathvariant="bold-italic">σ</mml:mi><mml:mo>˙</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:msub><mml:mi mathvariant="bold-italic">ϑ</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow></mml:msub><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula>. 
The ‘robust’ corrected <inline-formula id="ieqn-225"><mml:math id="mml-ieqn-225"><mml:msub><mml:mi>T</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>×</mml:mo><mml:msub><mml:mi>T</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:math></inline-formula> variance–covariance matrix of the structural parameters (<inline-formula id="ieqn-226"><mml:math id="mml-ieqn-226"><mml:msub><mml:mi mathvariant="normal">Σ</mml:mi><mml:mrow><mml:mn>2</mml:mn><mml:mo stretchy="false">(</mml:mo><mml:mn>1</mml:mn><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:msub></mml:math></inline-formula>) can then be expressed as follows:
<disp-formula id="eqn-10"><label>A5</label><mml:math id="mml-eqn-10" display="block"><mml:msub><mml:mi mathvariant="normal">Σ</mml:mi><mml:mrow><mml:mn>2</mml:mn><mml:mo stretchy="false">(</mml:mo><mml:mn>1</mml:mn><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mi>N</mml:mi></mml:mfrac><mml:msubsup><mml:mi>I</mml:mi><mml:mrow><mml:mn>22</mml:mn></mml:mrow><mml:mrow><mml:mo>−</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msubsup><mml:mrow><mml:mo>[</mml:mo><mml:msub><mml:mi>I</mml:mi><mml:mrow><mml:mn>2</mml:mn><mml:mi>f</mml:mi></mml:mrow></mml:msub><mml:mi mathvariant="bold">Γ</mml:mi><mml:msub><mml:mi>I</mml:mi><mml:mrow><mml:mi>f</mml:mi><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>I</mml:mi><mml:mrow><mml:mn>21</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mi mathvariant="bold">P</mml:mi></mml:mrow><mml:mi mathvariant="bold">Γ</mml:mi><mml:msub><mml:mi>I</mml:mi><mml:mrow><mml:mi>f</mml:mi><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>I</mml:mi><mml:mrow><mml:mn>2</mml:mn><mml:mi>f</mml:mi></mml:mrow></mml:msub><mml:mi mathvariant="bold">Γ</mml:mi><mml:msup><mml:mrow><mml:mi mathvariant="bold">P</mml:mi></mml:mrow><mml:mrow><mml:mi>T</mml:mi></mml:mrow></mml:msup><mml:msub><mml:mi>I</mml:mi><mml:mrow><mml:mn>12</mml:mn></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>I</mml:mi><mml:mrow><mml:mn>21</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mi mathvariant="bold">P</mml:mi></mml:mrow><mml:mi mathvariant="bold">Γ</mml:mi><mml:msup><mml:mrow><mml:mi mathvariant="bold">P</mml:mi></mml:mrow><mml:mrow><mml:mi>T</mml:mi></mml:mrow></mml:msup><mml:msub><mml:mi>I</mml:mi><mml:mrow><mml:mn>12</mml:mn></mml:mrow></mml:msub><mml:mo>]</mml:mo></mml:mrow><mml:msubsup><mml:mi>I</mml:mi><mml:mrow><mml:mn>22</mml:mn></mml:mrow><mml:mrow><mml:mo>−</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msubsup><mml:mo>.</mml:mo></mml:math></disp-formula></p>
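The structure of Equation A5 can be sketched with placeholder inputs of conforming dimensions. All matrices below are illustrative stand-ins rather than output from a fitted model, and, only for brevity, the first-step weight matrix is set equal to the joint weight matrix (which, as noted above, does not hold in general):

```python
import numpy as np

rng = np.random.default_rng(1)
m, t1, t2, N = 6, 4, 2, 500       # moments, step-1 params, step-2 params, N

# Placeholder positive-definite Gamma and W, and random Jacobian blocks.
G = rng.standard_normal((m, m))
Gamma = G @ G.T                   # stand-in for (N x) the NACOV of s
B = rng.standard_normal((m, m))
W = B @ B.T                       # joint weight matrix
sd1 = rng.standard_normal((m, t1))   # Jacobian w.r.t. theta_1
sd2 = rng.standard_normal((m, t2))   # Jacobian w.r.t. theta_2
W1 = W                            # step-1 weight; equal to W only for brevity

P = np.linalg.solve(sd1.T @ W1 @ sd1, sd1.T @ W1)     # Equation A2
I22 = sd2.T @ W @ sd2
I2f = sd2.T @ W                   # 'f' blocks span all sample moments
If2 = W @ sd2
I21 = sd2.T @ W @ sd1
I12 = I21.T

bracket = (I2f @ Gamma @ If2
           + I21 @ P @ Gamma @ If2
           + I2f @ Gamma @ P.T @ I12
           + I21 @ P @ Gamma @ P.T @ I12)
I22inv = np.linalg.inv(I22)
Sigma2_1 = I22inv @ bracket @ I22inv / N              # Equation A5

# The four bracket terms collapse to M Gamma M' with M = I2f + I21 P,
# which makes the symmetry (and positive semi-definiteness) of the
# corrected covariance matrix explicit.
M = I2f + I21 @ P
```

The factorization in the final comment also clarifies why the correction terms involving P appear in pairs: they propagate the first-step sampling variability into the second-step covariance matrix.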
</app>
</app-group>
	

	
	<sec sec-type="supplementary-material" id="sp1"><title>Supplementary Materials</title>
		<table-wrap position="anchor">
			<table frame="void" style="background:#f3f3f3">
				<col width="60%" align="left"/>
				<col width="40%" align="left"/>
				<thead>
					<tr>
						<th>Type of supplementary materials</th>
						<th>Availability/Access</th>
					</tr>
				</thead>
				<tbody>
					<tr>
						<th colspan="2">Data</th>						
					</tr>
					<tr>
						<td>No study data are available.</td>
						<td>&mdash;</td>
					</tr>	
					<tr style="grey-border-top-dashed">
						<th colspan="2">Code</th>
					</tr>
					<tr>
						<td>R code, including simulation details and population values.</td>
						<td><xref ref-type="bibr" rid="r16.5">Can and Rosseel (2025)</xref></td>
					</tr>		
					<tr style="grey-border-top-dashed">
						<th colspan="2">Material</th>
					</tr>
					<tr>
						<td>No study materials are available.</td>
						<td>&mdash;</td>
					</tr>
					<tr style="grey-border-top-dashed">
						<th colspan="2">Study/Analysis preregistration</th>
					</tr>	
					<tr>
						<td>The study was not preregistered.</td>
						<td>&mdash;</td>
					</tr>
					<tr style="grey-border-top-dashed">
						<th colspan="2">Other</th>
					</tr>	
					<tr>
						<td>No other material to report.</td>
						<td>&mdash;</td>
					</tr>
				</tbody>
			</table>
		</table-wrap>		
	</sec>
			
			
	

<fn-group>
<fn fn-type="financial-disclosure"><p>The authors have no funding to report.</p></fn>
</fn-group>
<fn-group>
<fn fn-type="conflict"><p>The authors have declared that no competing interests exist.</p></fn>
</fn-group>
<ack>
	<p>The first author acknowledges the support provided by the Scientific and Technological Research Council of Turkey (TUBITAK).</p>
</ack>
</back>
</article>
