
Rapid guessing is a test-taking strategy recommended for increasing the probability of achieving a high score when a time limit prevents an examinee from responding to all items of a scale. The strategy requires responding quickly and without cognitively processing item details. Although there may be no omitted responses after participants' rapid guessing, an open question remains: do the data show unidimensionality, as is expected for data collected by a scale, or the bi-dimensionality characterizing data collected with a time limit in testing (speeded data)? To answer this question, we simulated speeded and rapid-guessing data and performed confirmatory factor analysis using one-factor and two-factor models. The results revealed that speededness was detectable despite the presence of rapid guessing. However, detection may depend on the number of response options for a given set of items.

Rapid guessing is a test-taking strategy that consists of responding to items quickly and without attempting to solve them properly (

Another possible consequence is that the irrelevant variance manifests itself as an additional factor in the latent structure of the test. Models for structural investigations (e.g., factor analysis, dimensionality assessment) mostly assume that there is only one latent source of responding that leads to systematic and relevant variation, which is captured by the latent variable included in the measurement model (

Although participants taking a test are expected to spend as much time as necessary on each item and to provide the best possible response, they may deviate from such behavior for various reasons. For example, situations where test scores will have major consequences (e.g., employment, education opportunities) may lead participants to use inappropriate test-taking strategies when completing items in order to increase the chance of reaching a high score (

The advantage promised by rapid guessing is that a random response can be correct. If there are several response options and only one is correct, the probability that the random response is correct is one divided by the number of response options. That is, if there are four options and the participant guesses at random, there is a 25% chance of a correct response, compared to a guaranteed 0% chance when not responding at all. Smaller numbers of response options are associated with larger probabilities of a correct response, and larger numbers of response options with smaller probabilities. This strategy can be an advantage for the examinee if the number of correctly completed items serves as a measure of performance.
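The arithmetic above can be sketched in a few lines. This is an illustrative sketch with an assumed helper name: with m response options and one correct option, a random response is correct with probability 1/m, whereas an omission is certainly scored as 0.

```python
# Illustrative sketch (assumed helper name, not from the article):
# a random response to an m-option item is correct with probability 1/m.

def expected_guessing_gain(n_unreached_items, n_options):
    """Expected number of additional correct responses obtained by
    guessing at random on items that would otherwise be omitted."""
    return n_unreached_items * (1.0 / n_options)

# A participant who cannot reach the last 6 of 20 items gains, on
# average, 3 points with two options but only 0.75 with eight options.
for m in (2, 4, 6, 8):
    print(m, expected_guessing_gain(6, m))
```

The smaller the number of options, the larger the expected gain, which matches the relationship stated above.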

A popular way of investigating the internal structure of a scale to support a scoring inference for validity is with confirmatory factor analysis (CFA). A common assumption in item response theory is unidimensionality, which can, in part, be demonstrated by a one-factor CFA model (

A measurement model specifies the influences that are assumed to determine the participants’ responses to a given item. A one-factor CFA model assumes one latent source of systematic responding that is reflected by latent variable ξ. The contribution of ξ to completing the i-th item is quantified by factor loading λ_i. Additionally assumed contributions are those of random influences, which are represented by δ_i without further specification (e.g., no correlated residuals). Such a model relates the i-th manifest variable to the latent variable as x_i = λ_i ξ + δ_i.

A scale is said to show structural validity if this model accounts for the item covariance matrix. However, this is not validity in general but validity restricted to major characteristics of the circumstances of data collection. One major characteristic is the time span for completing the items of the scale, as time limits in testing can alter the validity of the data (

A modified CFA model is necessary for speeded data. Its loading matrices Λ_primary and Λ_additional include the factor loadings of the primary and of the additional latent source.

There are two different types of two-factor models. The first type combines two one-factor models into a whole. The major characteristic of this type is that each manifest variable (e.g., item) loads on one latent variable only so that there are no cross-loadings (

It is the first version of the second type of model that is suitable for data originating from two latent sources that simultaneously contribute to at least a few items. More specifically, since one source can be assumed to be active in completing all items whereas the second source is only active in some items, a bifactor model is required for investigating data collected with a time limit in testing. This means that all entries of Λ_primary are either free for estimation or constrained to correspond to expected values, whereas some entries of Λ_additional are fixed to zero. These are the entries regarding items that are not influenced by processing speed, that is, items showing no omissions. The model thus combines the loading matrices Λ_primary and Λ_additional.

Free factor loadings and factor loadings fixed to correspond to expected values have been shown to perform virtually equally well in simulated data if the expected values are adapted to the assumed latent source and the number of participants selecting the strategy (the alternative being to set the entries of Λ_additional free for estimation). These types of factor loadings have different properties. Free factor loadings can accommodate all kinds of effects, so that there is hardly any impairment in model fit. This means that the factor loadings on the latent variable account for the systematic variation due to the intended latent source and, in addition, accommodate to some degree systematic variation due to other sources. In contrast, fixed factor loadings can only account for the systematic variation due to the intended latent source. If there is further systematic variation, such as that due to a method effect, this leads to model misfit. The greater probability of model misfit may be considered a downside of fixed factor loadings, but there is also an advantage: good model fit indicates that the model captures exactly what it is expected to capture and nothing else.

Values serving as factor loadings in order to capture systematic variation due to processing speed can be obtained from the cumulative normal distribution function, which is approximated by the logistic function. The cumulative normal distribution function is obtained from the normal distribution function that is assumed to characterize the density distribution of latent processing speed. Using the logistic function, the factor loading of the i-th item, λ_additional_latent_source_i, is defined as follows:

The curve printed as a solid line illustrates the assumed probabilities of a correct response if there is no time limit in testing. This curve suggests that the items are arranged according to their difficulty levels. The curve printed as a dashed line represents the assumed probabilities of a correct response originating from testing with a time limit. The assumed gradual drop-off of participants causes an increasing degree of deviation toward the end of the sequence of items.
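The logistic drop-off described above can be sketched in code. This is an illustrative sketch with an assumed parameterization (the article's exact fixation values are not reproduced here): the probability that a participant has stopped responding properly by item position i is modeled by a logistic approximation of the cumulative normal distribution of latent processing speed, with the turning point placed at item 18 as in the simulation study.

```python
import math

# Illustrative sketch (assumed parameterization): logistic approximation
# of the cumulative normal distribution of latent processing speed,
# evaluated at item positions with the turning point at item 18.

def omission_probability(item, turning_point=18.0, scale=1.0):
    """Logistic function of item position: expected proportion of
    participants who can no longer respond properly at this item."""
    return 1.0 / (1.0 + math.exp(-(item - turning_point) / scale))

# expected proportions of omissions for a 20-item scale
probs = [omission_probability(i) for i in range(1, 21)]
```

Early items yield probabilities near zero (no omissions), the turning point yields .5, and the final items approach the upper limit, matching the gradual drop-off described for the dashed curve.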

There may be consequences of rapid guessing different from leaving the not-reached items as omitted. The formal account of these consequences begins with the expected distribution of omissions. This distribution needs to be modified to take into consideration that correct and incorrect responses at random replace omissions. For this purpose, a clearly defined expected probability of a correct response at random that is independent of the difficulty level of the item is necessary. We assumed that the data were collected with items showing a multiple-choice response format in order to have a basis for such probabilities. In this case, the expected probability of a correct response at random solely depends on the number of response options. The multiple-choice response format is the most popular response format (

Since the logistic function varies between zero and one and is assumed to provide values corresponding to the expected frequency of omissions divided by the upper limit for the frequency of omissions, it can be perceived as a probability. Accordingly, in the following discussion we use probabilities for combining the description of the effect of a time limit with the description of the effect of rapid guessing. The expected probability E[Pr(X_i ∈ C_o)] refers to the response X_i to the i-th item belonging to the set of omissions C_o. To keep this section connected to the previous discussion, we start from

Next, the influence of rapid guessing needs to be quantified. Rapid guessing means that response X_i to the i-th item is assigned at random to either the set of correct responses C_c or the set of false (= incorrect) responses C_f. The expected probability depends on the number of response options. If we assume that this number is m, the expected probability of a correct response at random is 1/m.

The majority of correct responses can be assumed to originate from the primary source of responding whereas omissions turned into incorrect responses are more likely than omissions turned into correct responses at random. This suggests that the focus has to be on the incorrect responses in quantifying the effect of rapid guessing on the detection of speededness. Accordingly, the expected probability of an incorrect response due to latent processing speed in combination with rapid guessing is given by
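The combination just described can be sketched as follows (assumed function name): the probability that the i-th item shows an incorrect response due to rapid guessing is the probability of an omission under the time limit multiplied by (m − 1)/m, the probability that a random guess among m options is wrong.

```python
# Sketch (assumed name): expected probability that an omission turned
# into a random response is incorrect, for an item with omission
# probability p_omit and m response options.

def expected_incorrect_by_guessing(p_omit, n_options):
    """Pr(omission) * (m - 1) / m."""
    return p_omit * (n_options - 1) / n_options

# e.g., an item omitted by half the sample: with two options only a
# quarter of all responses become random incorrect responses, with
# eight options almost the full half does.
```

The larger m is, the closer this probability stays to the original omission probability, which is why many response options preserve the detectability of speededness.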

This Figure includes curves depicting the probability of responding correctly if participants use rapid guessing in combination with response formats including two, four, six and eight options. The curves suggest that eight, six and even four response options only cause minor deviations from the curve representing no rapid guessing, whereas two response options lead to a clearly larger deviation.

Despite the indicated impairment of the detectability of the effect of a time limit in testing, there is also positive news: there is still some chance to detect this effect. Further, the larger the number of response options, the larger is the probability of detecting it. Concerns about the effect of the number of response options lead to the hypothesis that the detectability of the effect of a time limit in testing decreases with a decreasing number of response options.

Confirmatory factor analysis attempts to estimate a model that can reproduce the covariance matrix. This involves comparing the model-implied covariance matrix with the covariance matrix observed in the data.

Mathematics offers several solutions for relating different types of data to each other. There are link transformations as part of generalized linear models (

In the model-implied covariance matrix, the latent variable ξ_speed&guessing needs to be considered besides the latent variable ξ_construct. Furthermore, there are two loading matrices, Λ_construct and Λ_speed&guessing. The factor loadings on ξ_speed&guessing correspond to λ_additional_latent_source_i (

Using this model in combination with dichotomous data requires an adaptation that is two-fold in the approach characterizing this work. First, there is an adaptation of the scale level of the data that occurs in computing probability-based covariances, which changes from binary to continuous (see the paragraph preceding the previous paragraph). We symbolize this adaptation by transformation T of the data, yielding cov_probability-based(X_i, X_j), where Pr(X_i = 1) represents the probability of a correct response in completing the i-th item. Second, variance estimates regarding ξ_construct are only necessary in combination with fixed factor loadings on this factor. Otherwise, they can be omitted since their omission does not influence model fit (

The corresponding model-implied

The correctness of the model-implied covariance matrix Σ can be evaluated by comparing it with the empirical covariance matrix S on the one hand and the product SΣ^{-1} with the identity matrix on the other hand. In the case of perfect correspondence, the trace tr(SΣ^{-1}) corresponds to the number of manifest variables. A significant result according to the χ^{2} distribution signifies model misfit (except in cases where more factors are included in the model than necessary).

The preconditions for making use of function F are continuous data and invertibility of the model-implied covariance matrix.
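As an illustration, the discrepancy function F can be sketched for the simple case of diagonal covariance matrices (a simplifying assumption for brevity; the general case uses the full matrices and matrix inversion). The standard ML form is F = ln|Σ| + tr(SΣ⁻¹) − ln|S| − p, which is zero, with tr(SΣ⁻¹) = p, when S and Σ coincide.

```python
import math

# Sketch restricted to diagonal covariance matrices (simplifying
# assumption): for diagonal S and Sigma the ML discrepancy
# F = ln|Sigma| + tr(S Sigma^-1) - ln|S| - p reduces to a sum over
# the diagonal elements.

def ml_discrepancy_diag(s_diag, sigma_diag):
    """ML discrepancy for diagonal S and Sigma given as lists of
    their diagonal elements."""
    p = len(s_diag)
    return sum(math.log(sg) + si / sg - math.log(si)
               for si, sg in zip(s_diag, sigma_diag)) - p
```

With identical matrices the function returns zero; any discrepancy between S and Σ makes it positive, which is the quantity the χ² test evaluates.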

Our approach differs from the available data-focused approaches in that it seeks to modify the model in such a way that model and data correspond to each other according to major distributional properties. This means that it makes use of the characteristic of the maximum likelihood estimation function of imposing no restriction regarding the distribution. The factor loadings are modified by multiplication with weights in such a way that the effect of splitting continuous data according to probability level (i.e., dichotomization) is compensated.

The main objective of the empirical investigation was to examine if the effect of a time limit in testing was detectable in data despite participants’ rapid guessing. The use of this guessing strategy was an important issue as its strict application would result in the complete disappearance of omissions. Complete disappearance of omissions meant that the effect of a time limit in testing was no longer apparent in descriptive statistics.

The simulated data for this investigation had to show 1) the characteristics of data originating from a time limit situation in testing leading to omissions, 2) the use of a multiple-choice response format, and 3) rapid guessing. The selected time limit was assumed to allow all participants to complete approximately two-thirds of the 20-item set before they would gradually stop responding properly. Furthermore, the data had to show the consequence of the participants’ rapid guessing. For this purpose, the simulated omissions due to the testing time limit were replaced by simulated random responses.

Data matrices composed of 500 rows and 20 columns were generated by means of three 20 × 20 relational patterns (

The continuous data were dichotomized so that the first simulated item showed a simulated probability of a correct response of .95 and the last simulated item of .50. The simulated probabilities of the simulated items in between decreased linearly. Furthermore, omissions were integrated into the data matrices using the logistic function. That is, for each simulated item (= column) the percentage of simulated participants (= rows) who were expected to be unable to respond within the available time span was determined by the logistic function. After the selection of a simulated participant, the entries for this and all following simulated items were turned into omissions. The turning point that marks the switch from increasing to decreasing steepness of the logistic function was set to item 18 (

The omissions were replaced by random data (correct responses or incorrect responses at random), as could be expected because of rapid guessing. Because of the crucial influence of the number of response options, different multiple-choice response formats were considered. Eight, six, four and two response options were selected for this study. The corresponding probabilities of a correct response at random were 1/8, 1/6, 1/4 and 1/2, respectively. They served the investigation of the hypothesis regarding the number of response options (see the end of the theoretical section). Furthermore, no replacement of omissions, that is, no rapid guessing, was also considered in order to have a comparison level. Altogether, there were 500 × 3 (source influence levels) × 5 (response option levels) matrices.
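The data-generating steps above can be sketched as follows. This is a simplified sketch with assumed names and parameters: items are generated independently here, whereas the study used relational patterns to induce the latent source, so the sketch reproduces only the dichotomization, the logistic omission process, and the replacement of omissions by random guesses.

```python
import math
import random

# Simplified sketch of the simulation design (assumed names/parameters):
# 20 items, correct-response probabilities decreasing from .95 to .50,
# logistic stopping with the turning point at item 18, omissions
# replaced by random guesses with success probability 1/m.

def simulate_participant(n_items=20, n_options=4, turning_point=18.0,
                         rng=random):
    """One simulated row: 1 = correct, 0 = incorrect response."""
    responses = []
    stopped = False
    for i in range(1, n_items + 1):
        # probability of a correct response decreases linearly .95 -> .50
        p_correct = 0.95 - 0.45 * (i - 1) / (n_items - 1)
        # logistic probability of having run out of time at this item
        p_stop = 1.0 / (1.0 + math.exp(-(i - turning_point)))
        if not stopped and rng.random() < p_stop:
            stopped = True  # this and all following items become omissions
        if stopped:
            # rapid guessing: each omission becomes a random response
            responses.append(1 if rng.random() < 1.0 / n_options else 0)
        else:
            responses.append(1 if rng.random() < p_correct else 0)
    return responses

# 500 simulated participants x 20 items, four response options
data = [simulate_participant() for _ in range(500)]
```

Setting `n_options` to 2, 4, 6 or 8 (or skipping the replacement step) yields the five response-option conditions of the study.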

The confirmatory factor models included either one or two latent variables (=factors). One of them was designed to capture systematic variation due to the primary source of responding and the other one to capture systematic variation due to the additional source that was assumed to be latent processing speed. The latent variables were not allowed to correlate with each other. Furthermore, there were 20 manifest variables. The constraints for the factor loadings on the latent variable representing the effect of the time limit in testing were obtained according

The study included the type of model (one-factor vs. two-factor models) as the main independent variable, the response format as a minor independent variable, and the level of source influence as a control variable. The dependent variable was model fit measured by CFI.

The statistical investigation was conducted using the ML-MA version of maximum likelihood estimation. The comparison of models was based on CFI differences and χ^{2} differences; a difference of .01 could be considered as substantial regarding the CFI difference, and a significant result regarding the χ^{2} difference (

The mean CFI results observed for the one-factor and two-factor CFA models are presented as bars in

CFI differences for the following probabilities of a correct response:

| Source influence level | 0^{a} | 1/8^{b} | 1/6^{b} | 1/4^{b} | 1/2^{b} |
|---|---|---|---|---|---|
| .325 | 0.086* | 0.035* | 0.027* | 0.010* | 0.001 |
| .375 | 0.061* | 0.021* | 0.016* | 0.007 | 0.001 |
| .425 | 0.045* | 0.015* | 0.010* | 0.006 | 0.001 |

^{a}Comparison level. There was no replacement of omissions. ^{b}The probability of a correct response due to chance (instead of an omission).

*

The columns of the table refer to the probability levels (numbers of response options) and the rows to the source influence levels. All differences in the first to third columns were larger than or equal to .01. These results signified a substantial improvement in model fit from the one-factor to the two-factor models for the probabilities of zero, 1/8 and 1/6, that is, for no replacement of omissions and for response formats with eight and six response options. In the fourth column there was only one substantial difference, which occurred for the lowest source influence level. None of the differences reported in the last column reached the level of statistical significance.

The χ^{2} differences for the corresponding one-factor and two-factor confirmatory factor models are included in

χ^{2} differences for the following probabilities of a correct response:

| Source influence level | 0^{a} | 1/8^{b} | 1/6^{b} | 1/4^{b} | 1/2^{b} |
|---|---|---|---|---|---|
| .325 | 32.5^{c}* | 16.7* | 13.6* | 8.0* | 2.6 |
| .375 | 33.4^{c}* | 18.7* | 15.1* | 10.0* | 2.8 |
| .425 | 33.5^{c}* | 20.9* | 17.1* | 11.3* | 3.2 |

^{a}Comparison level. There was no replacement of omissions due to rapid guessing. ^{b}The probability of a correct response due to chance (instead of an omission). ^{c}Since the two-factor model with free factor loadings on the first factor led in a large number of datasets to estimation problems in this condition, the factor loadings on this factor were fixed to one.

*

This table shows the same structure as the previous one. The χ^{2} results were in line with the CFI results, with two exceptions: the combination of the source influence level of .375 with the probability of a correct response of 1/4, and the combination of the source influence level of .425 with the same probability of a correct response. In both cases the χ^{2} difference signified a substantial difference whereas the CFI difference did not.

In sum, an effect of a time limit in testing was detected if there were no fewer than either six response options (CFI difference) or four response options (χ^{2} difference); expressed differently, if the probability of a correct response at random was not larger than .167 (or .25), with one exception.

Accurate data are the precondition for the achievement of new insights in science; the control of sources that potentially impair data is an important part of scientific research. A long-known issue regarding measurement validity is a time limit in testing (

A time limit in testing creates a special precondition for the statistical investigation of the internal structure that requires adaptation of the factor model. The special precondition is that two sources of responding need to be considered instead of only one (

Rapid guessing means a violation of this convention. Various reasons can lead to its violation including test preparation courses that advise participants to respond to all items even if there is not enough time for completing them appropriately. Following this advice leads to complete data that may be regarded as desirable because the missing data problem is avoided (

As is demonstrated in our results, there remains the possibility to capture systematic variation that is due to latent processing speed despite rapid guessing. It is not even necessary to modify the model for the investigation of speeded data because of rapid guessing and also not necessary to measure processing times (

Although the use of rapid guessing does not prevent the detection of latent speed as one source of responding, it is not without a negative consequence for structural investigations. The integration of the probability of a correct response at random into the formal representation of the expected effect of the time limit in testing suggests an impairment of the probability of detecting this effect in structural investigations. This impairment is demonstrated to depend on the number of response options. Our results support the hypothesis suggesting such impairment.

A limitation of the present study is the assumption that all participants perform rapid guessing so that omissions completely disappear. Another limitation is the assumption of independence of ability and rapid guessing. Further limitations are the consideration of a single test length, the fixed arrangement of items, the absence of omissions due to other sources, the constancy of the sample size, and the independence of the factors. Moreover, free factor loadings (

Data:

1. Compute probability-based covariances for the responses to items i and j according to the following equation (
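The exact equation for this step is given in the source the article cites; as a hedged sketch we assume the product-moment form based on probabilities of correct responses, Pr(X_i = 1, X_j = 1) − Pr(X_i = 1) Pr(X_j = 1), estimated from the binary item columns.

```python
# Hedged sketch (assumed form of the probability-based covariance):
# joint probability of two correct responses minus the product of the
# marginal probabilities, estimated from two binary item columns.

def probability_based_covariance(col_i, col_j):
    """Covariance of two binary item columns via response probabilities."""
    n = len(col_i)
    p_i = sum(col_i) / n
    p_j = sum(col_j) / n
    p_ij = sum(a * b for a, b in zip(col_i, col_j)) / n
    return p_ij - p_i * p_j
```

Applying this function to all item pairs yields the input matrix for step 11 below.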

Model:

2. Select the bifactor model of measurement for the investigation.

3. Select the number 1 as fixation for the factor loadings on the first factor, or set them free.

4. Select an item position as preliminary turning point (

5. Compute the fixations for the factor loadings on the second factor according to

6. Compute the weights according to

7. Insert the information on the number of factors, the weights and the factor loadings into the statistics software.

8. Assure that the variance parameters of factors with fixed factor loadings are set free for estimation.

9. Assure that the error variances are set free for estimation.

10. Select the maximum likelihood estimation method.

11. Select the matrix including the probability-based covariances of step 1 as input.

12. Start the program.

13. Save the fit results.

14. Repeat steps 4 to 13 with varying item positions as turning point to identify the turning point yielding the best degree of model fit (if this point is not known).

15. Compare the fit result for this turning point with the result for a one-factor model.

```
TITLE: Karl's Example
DATA:
! input: probability-based covariance matrix for the 20 items
FILE=S:\COEPrivate\frenchb\Papers\Karl_speed_2021\example_cov.txt;
nobservations = 500;
type = covariance;
VARIABLE: NAMES ARE I1 I2 I3 I4 I5 I6 I7 I8 I9 I10 I11 I12
I13 I14 I15 I16 I17 I18 I19 I20;
USEVARIABLES ARE
I1 I2 I3 I4 I5 I6 I7 I8 I9 I10 I11 I12
I13 I14 I15 I16 I17 I18 I19 I20;
Analysis:
ESTIMATOR = ML;    ! maximum likelihood estimation
MODEL:
! first factor: free loadings (I1*) for the primary source of responding
GEN BY I1* I2 I3 I4 I5 I6 I7 I8 I9 I10 I11 I12
I13 I14 I15 I16 I17 I18 I19 I20;
! second factor: loadings fixed (@) to values derived from the logistic
! function; zero for items assumed to be unaffected by the time limit
Speed by I1@0 I2@0 I3@0 I4@0 I5@0 I6@0 I7@0 I8@0 I9@.00001 I10@.00015
I11@.00041 I12@.00114 I13@.00316 I14@.00862 I15@.02309
I16@.05875 I17@.13413 I18@.24850 I19@.34373 I20@.34113;
Gen@1;             ! factor variance fixed to 1 to identify the model
Gen with Speed@0;  ! orthogonal factors
OUTPUT: STDYX;
```

The authors have no funding to report.

The authors have declared that no competing interests exist.

The authors have no additional (i.e., non-financial) support to report.