Survey research relies largely on closed questions because of their greater efficiency with respect to interviewing, coding, and analysis (Schuman & Presser, 1981). However, the debate about the advantages and disadvantages of using open questions is as old as survey research itself (for an overview, see Converse, 1984). Open questions have at least five advantages. First, they can yield examples for public outreach that reflect respondents’ own words; second, they can be used for the ex-post evaluation of closed survey questions; third, they can be used as a basis for future closed questions; fourth, they can be coded and used in explanatory statistical models; and finally, fifth, they can also serve as a motivational tool by giving respondents an opportunity to express their opinions in their own words (see Singer & Couper, 2017; Zuell, 2016).
In particular, the second and third advantages of open questions have recently gained in importance. In the context of web probing (Edgar, Murphy, & Keating, 2016; Meitinger, 2017; Meitinger, Behr, & Braun, 2019), researchers ask open follow-up questions to closed survey questions. Thus, open questions are used as a tool for both the pre- and the post-evaluation of closed survey questions. However, open questions can fulfill this evaluative function only if the answers to open questions are strongly related to the responses to the closed questions under evaluation.1
The comparison between responses to open and closed questions can be done either experimentally or with a multivariate explanatory model. To date, only a few studies have undertaken such comparisons. Two early experimental studies compared the answers to closed and open questions in face-to-face surveys (Schuman & Presser, 1981; Schuman & Scott, 1987). Using a split-ballot design, the authors compared the results of their experiments with open and closed questions about (a) the most important problem facing the United States, (b) the things respondents most preferred in a job (work values), and (c) the most important world events from 1930 until the present day. The studies showed that, although the results sometimes differed, both question formats produced reasonable and informative data of high quality. More than two decades later, Reja, Manfreda, Hlebec, and Vehovar (2003) conducted an experiment in which they compared closed and open questions in an online survey. Like Schuman and Presser (1981) and Schuman and Scott (1987), they found many similarities in the ranking of the values with respect to the most important problem that the Internet was facing at the time.
Another study evaluated the possibility of implementing open questions and closed questions simultaneously by using the two question formats to predict mental health (Friborg & Rosenvinge, 2013). The authors demonstrated that, although open questions yielded more detailed information than closed questions, they also produced more item nonresponse. Based on this finding, they concluded that the advantages and disadvantages of open questions canceled each other out. Thus, they questioned the usefulness of open questions because of their lower efficiency.
Using another approach, Bauer, Barberá, Ackermann, and Venetz (2017) investigated the validity of responses to open questions by using related closed questions. Respondents were asked open questions about the meanings of the political terms “left” and “right.” When the authors compared these answers to the answers to a closed question requesting respondents to place themselves on the left-right scale, they found that variation in respondents’ associations with “left” and “right” was systematically related to (a) their self-placement on the left-right scale and (b) background variables such as education and culture. The authors concluded that more research was needed on the interpretation of the various abstract concepts that are regularly used in survey questions.
Our study uses an innovative methodology, which includes the random imputation of missing values, to assess the relationship between responses to open and closed questions. This relationship is explored in the area of non-voting behavior, a particularly sensitive topic that is especially interesting to examine because the open question about non-voting is asked as a follow-up to a closed question about voting behavior. This resembles a question series that is typically used in web probing. The case of non-voting behavior allows us also to distinguish between intended and reported behavior, which are both regularly measured in survey practice (Sudman, Bradburn, & Schwarz, 1996). In what follows, we develop theory-based hypotheses, which we test using data from the pre- and post-election surveys conducted in the framework of the German Longitudinal Election Study 2013 (Rattinger et al., 2017).
Background and Hypotheses
At first glance, it seems obvious that there must be a strong relationship between responses to an open question and a related closed question. However, the relationship between the two questions is less obvious when—as in web probing—an open question is used as a follow-up question to better understand the answer to a closed question. In this case, it may be possible that the open answer reflects the reasons for the answer to the closed question. Alternatively, however, the response to the open question may merely be an ex-post legitimation of the answer to the closed question, where the answer to the closed question was given in an automatic processing mode without careful reasoning. In this case, a respondent has to construct an ex-post reason when answering the open question in order to fulfill the expectation that all actions are reasoned. As a result, the answer to the open question does not yield any further information about the true motivation behind the closed answer, because the main purpose of this answer was impression management (e.g., to give the false impression that the response to the closed question was well thought-out). From the perspective of the cognitive response process (Tourangeau, Rips, & Rasinski, 2000), ex-post legitimations are a way of hiding shortcuts in the response process and making the response appear to be well thought-out. In this sense, a respondent who "satisfices" (by expending less energy and settling for a merely satisfactory answer) is difficult to distinguish from a respondent who "optimizes" by going through all steps of the cognitive response process thoroughly in order to give the most accurate answer possible (Krosnick, 1991).
In order to distinguish between reasons for a behavior and ex-post legitimation, it is necessary to go beyond the pair of related open and closed questions and to use additional related variables that help the researcher to discriminate whether or not the responses to an open question report the substantive reasons for the behavior in question. Following this idea, voting versus non-voting seems to be a particularly useful example. Voting behavior is a well-established field of research in which many studies have been conducted, and it has been shown that intended and reported voting behavior can be predicted by theoretically well-founded explanatory variables (e.g., Verba & Nie, 1987).2 The example of voting behavior also seems to be useful because voting is widely considered to be a deliberate decision, and democracies are built on that premise. Thus, it can be assumed that the voting behavior has a high likelihood of being intentional and well-reasoned.
Important explanatory factors in the models employed in these studies are “satisfaction with democracy,” “voting norm,” “political interest,” and “party evaluation” (see Verba & Nie, 1987). In election studies (e.g., the American National Election Study), these key factors are usually measured by closed questions; they should also be reflected by answers to open questions if these answers are substantive reasons rather than legitimations. In what follows, we refer to the responses to these determinants of voting as “established factors.” As established factors are known to have a strong relationship to voting behavior, they should be closely related to corresponding substantive reasons. In contrast, if the responses to an open question are predominantly legitimations, then this strong relationship to a corresponding established factor should be absent. This leads to the following two hypotheses:
H1: If a respondent answers an open question by giving the substantive reason for the behavior in question, then the reason given should correspond to his or her answer to a substantively related closed question.
H2: The predictive power of an established factor should be stronger for respondents whose answers to the related open question correspond to their answers to the closed question than for respondents whose answers are inconsistent.
Behavioral questions can be asked in two ways in surveys: first, as a question requesting a report about a behavior that has already been executed; second, as a question about an intended behavior that will, or might be, executed (see Sudman et al., 1996). Numerous studies have shown that answers to questions about executed behavior are more precise than answers to questions about intended behavior (e.g., Diekmann & Preisendörfer, 1998; LaPiere, 1934; Schahn & Holzer, 1990). With respect to non-voting, we expect that questions about reported (non-voting) behavior in post-election surveys will be easier to answer than questions about intended voting behavior in pre-election surveys because, before an election, a respondent has to imagine a future situation, whereas (shortly) after election day respondents can retrieve behavioral information from memory more easily (Tourangeau et al., 2000). If the cognitive task is more difficult and requires greater cognitive effort, respondents are more likely to reduce their response burden by taking shortcuts in the response process (Krosnick, 1991). Applying this to open questions on non-voting behavior, we expect more legitimations and fewer substantive reasons in the case of intended voting behavior than in the case of recalled voting behavior. This leads to the following hypotheses:
H3: The consistency between responses to an open question and responses to a related closed question is stronger for reported behavior than for intended behavior.
H4: The predictive power of a closed question is stronger for reported behavior than for intended behavior.
Based on the differences between intended and realized behavior (Sudman et al., 1996), we further expect substantial differences between the answers given to open questions in pre-election and in post-election surveys. With respect to reasons for (and legitimations of) a behavioral choice, the responses to open questions can be based either on general characteristics of the situation and on external circumstances (related to external entities and events outside a respondent’s self) or on specific characteristics of a respondent and on internal circumstances (related to a respondent’s internal attitudes and values). Because intention formation is a more general task than recalling a realized behavior, we expect that intended behavior is more likely to be attributed to external circumstances and reported behavior is more likely to be attributed to internal circumstances. This leads to the following hypothesis:
H5: In the case of a question about an intended behavior, respondents are more likely to attribute their choice to external reasons; in the case of a question about a reported behavior, respondents are more likely to attribute their choice to internal reasons.
Data and Methods

For our analyses, we used pre- and post-election cross-sectional surveys that collected data on the 2013 German federal election, which was held on September 22, 2013. The surveys were conducted in the framework of the German Longitudinal Election Study (GLES); the data and documentation are publicly available for scientific use (Rattinger et al., 2017). The sample for each survey was drawn using a stratified three-stage random sampling technique: In the first stage, 306 primary sampling units (PSUs) were selected. These PSUs were the starting points for a random route procedure in the second stage, in which interviewers selected the target households. In the third stage, one adult aged 16 or older in each household was invited to take part in the interview. In both surveys, the data collection mode was face-to-face interviewing using computer-assisted personal interviews (CAPI).
The pre-election survey was fielded between July 29 and September 21, 2013. A response rate of 32.1% (RR6; see The American Association for Public Opinion Research [AAPOR], 2016) resulted in a sample size of 2,001 respondents. The post-election survey was fielded between September 23 and December 23, 2013. A response rate of 27.6% (RR6; see AAPOR, 2016) resulted in a sample size of 1,906 respondents. In the pre-election survey, 12.6% of respondents (N = 226) reported that they would not vote in the election; in the post-election survey, 15.2% of respondents (N = 289) reported that they had not voted. The actual proportion of non-voters in the German federal election was 28.5% (Federal Returning Officer, 2013). To calculate the percentages of voters and non-voters, we applied a weight that accounted for region, household size, sex, age, and education, because we aimed to obtain representative estimates of these proportions. The remaining analyses were conducted with unweighted data because they investigated internal consistency.
A closed question about the probability that the respondent would vote (pre-election survey) or about the respondent's voting recall (post-election survey) was, if applicable, immediately followed by an open question asking why the respondent was not going to vote or had not voted.3 In the pre-election survey, the wording of this question was: “And why will you probably not vote? Please tell me the most important reason.” In the post-election survey, non-voters were asked: “And why did you not vote? Please tell me the most important reason.” In both surveys, answers were coded using a classification scheme comprising 38 categories of reasons that was developed for the GLES study.4 We conducted a content analysis in which we grouped these categories into the following five main categories (see Table 1 for the distributions of these five categories of reasons in the pre-election and post-election surveys):
Table 1. Reasons Given for Intended (Pre-Election) and Reported (Post-Election) Non-Voting

| Variable | Intended % | nᵃ | Reported % | nᵃ | Exp. | Diff. | z | p | Conf. |
|---|---|---|---|---|---|---|---|---|---|
| Political system (ER) | 43.9 | 101 | 45.0 | 129 | + | -0.2 | -0.24 | .814 | No |
| Egotism of parties (ER) | 30.4 | 70 | 10.5 | 30 | + | 20.5 | 5.72 | < .001 | Yes |
| Political interest (IR) | 13.0 | 30 | 13.6 | 39 | - | -0.3 | -0.18 | .856 | No |
| Specific circumstances (IR) | 10.0 | 23 | 26.5 | 76 | - | -16.3 | -4.73 | < .001 | Yes |

Note. Source: German Longitudinal Election Study 2013: Pre- and Post-Election Cross-Section (Rattinger et al., 2017). Exp. = expectation; Diff. = difference (in %); Conf. = confirmation of expectation; ER = external reason; IR = internal reason. The z- and p-values test H0: %Intended = %Reported. Some respondents gave more than one reason. The p-values are based on two-tailed tests.
ᵃN is based on answers of 198 respondents for intended non-voting and on answers of 250 respondents for reported non-voting.
Political System

This category covered dissatisfaction with the political system (e.g., “dissatisfied with the political system,” “politicians are incompetent”), as well as low political involvement and lack of influence (e.g., “My vote has no influence.” “My party has no chance.”). When asked about their intention not to vote, 43.9% of the respondents gave a reason in this category; when asked why they had not voted, 45.0% of the respondents gave such a reason.
Political Interest

This category covered low political interest and knowledge (e.g., “not interested in politics”). When asked about their intention not to vote, 13.0% of the respondents gave a reason in this category; when asked why they had not voted, such a reason was given by 13.6% of the respondents.
Egotism in Politics
This category covered egotism on the part of politicians and parties (e.g., “politicians care only about themselves,” “empty campaign promises”). When asked why they did not intend to vote, 30.4% of the respondents gave a reason in this category; when asked why they had not voted, such a reason was cited by 10.5% of the respondents.
Specific Circumstances

This category covered circumstances on Election Day (e.g., “sick,” “no time”). When asked why they did not intend to vote, 10.0% of the respondents gave a reason in this category; when asked why they had not voted, such a reason was cited by 26.5% of the respondents.
Other Reasons

This category covered all reasons that were not covered by the other four categories (e.g., “Voting is against my religious beliefs.”). When asked why they did not intend to vote, 2.6% of the respondents gave a reason that did not fit into one of the four main categories; when asked why they had not voted, 4.5% of the respondents gave such a reason.
The open questions asked only for the most important reason for not voting. However, if respondents reported more than one reason, a maximum of three reasons were coded. The complete classification scheme used for our analysis can be found in Silber et al. (2020), in the Supplementary Materials. For each of the five categories, a dummy variable was generated that indicated whether or not a respondent gave a reason in that category (1 = reason given, 0 = no reason given).
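This coding step can be sketched in a few lines. The snippet below is an illustrative reconstruction, not the GLES coding script; the category labels are our shorthand for the five main categories, and the fine-grained 38-code scheme is assumed to have already been mapped onto them.

```python
# Illustrative sketch (not the GLES code): turn the up-to-three coded reasons
# per respondent into one dummy variable per main category
# (1 = reason given, 0 = no reason given). Labels are hypothetical shorthand.

CATEGORIES = [
    "political_system",        # external reason (ER)
    "egotism",                 # external reason (ER)
    "political_interest",      # internal reason (IR)
    "specific_circumstances",  # internal reason (IR)
    "other",
]

def reason_dummies(coded_reasons):
    """coded_reasons: up to three main-category labels (possibly empty)."""
    given = set(coded_reasons)
    return {category: int(category in given) for category in CATEGORIES}

# A respondent who named reasons in two categories:
row = reason_dummies(["political_system", "egotism"])
```

Because each respondent can contribute up to three reasons, the dummies are not mutually exclusive; their percentages in Table 1 therefore need not sum to 100.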
Internal and External Reasons
The categories “specific circumstances” and “political interest” were classified as internal reasons because they relate to respondents’ internal attitudes and values, whereas the categories “egotism in politics” and “political system” were classified as external reasons because they relate to external entities and events outside the respondent's self. As our research explored non-voting behavior, all items were coded in such a way that higher values indicated a negative attitude toward voting. Although the open and the closed questions did not match perfectly, the topics were very similar.
Both questionnaires included several questions on political knowledge, attitudes, and behavior, as well as sociodemographic questions. Of these questions, we selected four closed questions (items) to compare the answers to the open questions with established factors predicting voting. All four items were asked identically in the pre-election and the post-election surveys.
Satisfaction With Democracy
The dataset included the following question on satisfaction with democracy: “How satisfied or dissatisfied are you with democracy in general in Germany?” (Response categories: very satisfied, satisfied, neither satisfied nor dissatisfied, dissatisfied, very dissatisfied).
Voting Norm

The item on the voting norm was as follows: “In a democracy, it is the duty of every citizen to vote regularly.” (Response categories: strongly agree, slightly agree, neither agree nor disagree, slightly disagree, strongly disagree).
Political Interest

Respondents were asked the following question on political interest: “How interested are you in politics in general?” (Response categories: very interested, interested, moderately interested, slightly interested, not interested).
Egotism of Parties
The questionnaire included the following item on the egotism of political parties: “Parties are only interested in votes, not in the opinions of the voters.” (Response categories: strongly agree, slightly agree, neither agree nor disagree, slightly disagree, strongly disagree).
Item nonresponse was very low—under 8% for the open questions and under 2% for the closed questions. For instance, nonresponse (i.e., “don’t know” and “refusal”) for the closed question on voting participation was 1.1% in the pre-election survey and 0.2% in the post-election survey. Nonresponse for the open questions on non-voting was 4.1% in the pre-election survey and 7.6% for the post-election survey.
Comparability of Open and Closed Questions
The categories of the open questions and the closed questions were selected in an iterative process to be as comparable as possible. Specifically, the category “political system” was linked to the closed questions “satisfaction with democracy” and “voting norm.” The category “political interest” was linked to the closed question “political interest,” and the category “egotism in politics” was linked to the closed question “egotism of parties.”
Analysis Strategy

We employed the following analysis strategies to test our five research hypotheses. The first and third hypotheses were tested by comparing group means on the four closed questions: For each of the four questions, we compared the respondents who gave the corresponding reason when answering the open question to the respondents who did not give such a reason. The dependent variables were "satisfaction with democracy," "voting norm," "political interest," and "egotism of parties." We expected to find relatively small mean differences because we compared only non-voters, who had a lower variance than the full sample on these four attitudinal questions.
The second and fourth research hypotheses were tested using a classic behavioral voting model (see Verba & Nie, 1987). As only non-voters were asked the open questions on non-voting, we could not directly investigate the link between voting behavior and the answers to these open questions. We solved this problem by randomly imputing values of the open questions for the voters, who were not asked these questions, so that the marginal distributions of the imputed open questions were equal between voters and non-voters. Consequently, the bivariate correlation between voting participation and the open questions on non-voting becomes zero. Nevertheless, if the responses given to the open questions by the non-voters are substantive reasons for not voting, it can be expected that the relationship between the established factors and voting participation is higher in the subgroup in which a specific reason was given than in the subgroup in which it was not. Because this expectation applies only to reasons actually given, and not to randomly imputed reasons, the difference may be underestimated. Therefore, our approach must be considered a conservative test of these two hypotheses.
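The imputation step can be illustrated as follows. This is a minimal sketch under the assumption that each reason dummy is imputed for voters by an independent Bernoulli draw with probability equal to the reason's share among non-voters; that makes the imputed marginal distribution match the observed one and drives the expected association with voting participation to zero. Function and variable names are ours, not from the GLES materials.

```python
import random

# Sketch of the imputation idea: voters were never asked the open question,
# so each reason dummy is imputed for them by drawing from the marginal
# distribution observed among non-voters. In expectation, this makes the
# bivariate association between the dummy and voting participation zero.

random.seed(2013)  # arbitrary seed for reproducibility of the toy example

def impute_reason(voted, reason, p_reason_nonvoters):
    """Return the observed dummy for non-voters, a random draw for voters."""
    if voted:
        return int(random.random() < p_reason_nonvoters)
    return reason

# Toy data: 30% of non-voters gave the reason in question.
nonvoter_reasons = [1] * 30 + [0] * 70
p = sum(nonvoter_reasons) / len(nonvoter_reasons)

voters = [impute_reason(True, None, p) for _ in range(1000)]
share_voters = sum(voters) / len(voters)  # close to p by construction
```

Because the imputed values carry no information about voting, any surviving difference in explanatory power between the "reason given" and "no reason given" subgroups must come from the non-voters' actual answers, which is why the test is conservative.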
In order to test the fifth research hypothesis about the reasons for not voting that were given before and after the election, we compared the answers to the open question in the pre-election survey with those to the open question in the post-election survey. The significance of each percentage difference was tested by using the z-statistic.
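For percentage differences between two independent groups, the pooled two-proportion z-statistic is the standard computation; the sketch below shows it on toy numbers. We cannot confirm every detail of the authors' implementation (e.g., whether a continuity correction was applied), so treat this as the generic formula rather than an exact replication of the Table 1 values.

```python
from math import sqrt

# Pooled two-proportion z-test for H0: %group1 = %group2, the standard way
# to test a percentage difference between two independent samples.

def two_prop_z(x1, n1, x2, n2):
    """z-statistic for the difference between two independent proportions.

    x1, x2: counts giving the reason; n1, n2: group sizes.
    """
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Toy example: 20% vs. 10% with 200 cases per group.
z = two_prop_z(40, 200, 20, 200)  # ≈ 2.80, significant at p < .01 (two-tailed)
```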
Results

Our first hypothesis postulated a strong relationship between the established factors of voting behavior and the answers to the open questions. When comparing the means of the four factors “democracy,” “voting norm,” “political interest,” and “egotism” among respondents who gave a reason in that specific category to the open question and respondents who did not give such a reason, we found that only one of the four mean differences was significant (p < .001) and in the expected direction for the pre-election study, whereas all four mean differences were significant and in the expected direction for the post-election study (p < .01; see Table 2).5 This result not only confirmed our first hypothesis about the relationship between answers to the open and closed questions, but also confirmed our third hypothesis, which postulated a stronger relationship after the election than before it. Notably, “democracy” showed the smallest mean difference before the election, which suggests that the reasons in this category may have been given as ex-post justifications for not voting.
Table 2. Mean Differences in Established Factors by Whether the Corresponding Reason Was Given

| Variable | M (no reason given)ᵃ | n | M (reason given) | n | Diff. | Exp. | Test statistic | p | Conf. |
|---|---|---|---|---|---|---|---|---|---|
| Voting norm | 3.12 | 141 | 3.81 | 129 | -.69 | - | -4.33 | < .001 | Yes |
| Pol. interest | 3.90 | 231 | 4.46 | 39 | -.56 | - | -3.54 | < .001 | Yes |

Note. Source: German Longitudinal Election Study 2013: Pre- and Post-Election Cross-Section (Rattinger et al., 2017). Exp. = expectation; Diff. = difference in means; Conf. = confirmation of expectation. The test statistics and p-values test H0: M is larger when the reason was given. Higher values reflect a negative opinion (e.g., a negative opinion about democracy; recoded where necessary). The p-values are based on two-tailed tests.
ᵃAll closed questions had five response categories (coded 1 to 5). The items were coded in such a way that a higher value indicated a negative attitude toward the target issue.
Building on the relationship between answers to open questions and established factors of voting, the second and fourth hypotheses postulated that the prediction of voting behavior by established factors is more accurate if a corresponding reason is given in response to the open questions than if no such reason is given. In all eight comparisons, McFadden’s pseudo-R2 was indeed higher when a corresponding reason was given to an open question (“Model B”) than when no such reason was given (“Model A”). When comparing the odds ratios of the items that corresponded to the reasons in the open answers, five of the eight effects were significantly higher in Model B (p < .05; see Table 3), which partly confirmed the second hypothesis. Again, “democracy” did not show a significant effect, which further supports the assumption that reasons in the “democracy” category given in response to the open question on non-voting behavior were given more as ex-post justifications than as substantive reasons for not voting. In line with our fourth hypothesis, we found significantly stronger differences for three of the four factors in the post-election survey, compared to two of the four factors in the pre-election survey. This finding further confirms our assumption that the behavioral predictions are more accurate overall after the election than before the election.
Table 3. Logistic Regression Models Predicting Voting Participation, With and Without the Corresponding Open-Question Reason

| Variable | Model 0 (all): e^β | p | Pseudo-R² | Model A (no reason given): e^β | p | Pseudo-R² | Model B (reason given): e^β | p | Pseudo-R² | Exp. | z | p | Conf. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Pre-election survey | | | | | | | | | | | | | |
| Voting norm | 2.34 | < .001 | .405 | 2.18 | < .001 | .352 | 2.58 | < .001 | .469 | - | -1.19 | .275 | No |
| Pol. interest | 2.77 | < .001 | .405 | 2.53 | < .001 | .395 | 5.89 | < .001 | .511 | - | -2.05 | .041 | Yes |
| Egotism | 1.87 | < .001 | .405 | 1.60 | .001 | .357 | 3.25 | < .001 | .527 | - | -2.31 | .021 | Yes |
| Post-election survey | | | | | | | | | | | | | |
| Voting norm | 1.97 | < .001 | .324 | 1.72 | < .001 | .195 | 2.59 | < .001 | .525 | - | -3.16 | .002 | Yes |
| Pol. interest | 2.50 | < .001 | .324 | 2.29 | < .001 | .292 | 5.21 | < .001 | .574 | - | -2.24 | .025 | Yes |
| Egotism | 1.55 | < .001 | .324 | 1.45 | < .001 | .298 | 4.06 | < .001 | .636 | - | -2.31 | .021 | Yes |

Note. Source: German Longitudinal Election Study 2013: Pre- and Post-Election Cross-Section (Rattinger et al., 2017). Exp. = expectation; Conf. = confirmation of expectation; Pseudo-R² = McFadden’s pseudo-R². The z- and p-values in the final columns test H0: e^β is larger in Model B. Coefficients and model fits are based on logistic regression models (for the full regression models including sample sizes, see Silber et al., 2020, in the Supplementary Materials). The p-values are based on two-tailed tests.
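McFadden's pseudo-R2, the fit measure used in these model comparisons, relates the log-likelihood of a fitted logistic model to that of an intercept-only null model. The sketch below computes it for a toy outcome with hypothetical fitted probabilities; it is meant only to make the measure concrete, not to reproduce the GLES models.

```python
from math import log

# McFadden's pseudo-R2 compares the log-likelihood of a fitted logistic
# model (ll_model) to that of an intercept-only null model (ll_null):
#     R2_McFadden = 1 - ll_model / ll_null
# Toy sketch with hypothetical fitted probabilities, not the GLES models.

def log_likelihood(y, p_hat):
    """Bernoulli log-likelihood of outcomes y under predicted probabilities."""
    return sum(yi * log(pi) + (1 - yi) * log(1 - pi)
               for yi, pi in zip(y, p_hat))

def mcfadden_r2(y, p_hat):
    p_null = sum(y) / len(y)                      # intercept-only prediction
    ll_null = log_likelihood(y, [p_null] * len(y))
    ll_model = log_likelihood(y, p_hat)
    return 1 - ll_model / ll_null

# Toy data: the "model" assigns higher probabilities to actual voters.
y     = [1, 1, 1, 1, 0, 0, 0, 0]
p_hat = [0.9, 0.8, 0.8, 0.7, 0.2, 0.2, 0.3, 0.1]
r2 = mcfadden_r2(y, p_hat)  # between 0 and 1; larger means better fit
```

A higher pseudo-R2 in Model B than in Model A, as in Table 3, indicates that the established factors predict voting participation better in the subgroup that gave the corresponding reason.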
The fifth hypothesis postulated that respondents give more external reasons before an election and more internal reasons after an election. Table 1 shows the results for the pre-election and post-election surveys with respect to the internal reasons “political interest” and “specific circumstances” and the external reasons “political system” and “egotism of parties.” The comparison between the percentages of respondents who gave internal and external reasons showed that respondents did, in fact, report more external reasons before the election (“egotism,” z = 5.72, p < .001) and more internal reasons after the election (“specific circumstances,” z = -4.73, p < .001; see Table 1). This result confirms our fifth hypothesis. It is notable that the differences between external and internal reasons before and after the election were driven solely by “egotism of parties,” on the one hand, and “specific circumstances,” on the other, which suggests that respondents tended to attribute their behavior to the egotism of others before the election and to personal circumstances such as health and time constraints after the election.
The strong relationship between the established factor “egotism” and the answers to the open question before the election (see Table 2) further supports our fifth hypothesis that external reasons are especially involved in the justification of non-voting decisions when respondents are asked before the election; the strong relationship between “political interest” and the answers to the open question after the election (see Table 2) supports the hypothesis that internal reasons are especially involved in the justification of non-voting decisions after the election.
Discussion

The present study supports the notion that respondents do, in fact, give substantive reasons when answering an open question on non-voting behavior. The quality of the answers was evaluated by testing five hypotheses. Results showed, first, that the reasons given to the open questions had strong links (63% significant) to corresponding established factors of voting behavior (e.g., “political interest” and “voting norm”). Second, these links were stronger after the election (100% significant) than before the election (25% significant). Third, the answers to the open questions strengthened the relationship between established factors and voting behavior (63% significant). This was true for three of the four factors (i.e., “political interest,” “voting norm,” and “egotism”). Only the factor “democracy,” which also had the weakest relationship to the closed questions, did not have this predictive capability. This finding suggests that this factor was used more as an ex-post justification than as a substantive reason for not voting. Fourth, the predictions of voting behavior were again more accurate after the election (75% significant) than before the election (50% significant). And, finally, fifth, respondents gave significantly more external reasons (i.e., “egotism of parties”) before the election and significantly more internal reasons (i.e., “specific circumstances” such as “I did not have time,” or “I was sick.”) after the election.
The findings of this study are in line with those of Schuman and Presser’s (1981) split-ballot design experiments on the relationship between open and closed questions, which also showed that, although not identical, the outcomes were often quite comparable. The divergent finding on the factor “democracy” is conspicuous because it was the only factor that did not have a strong relationship to the related closed questions. We see two possible explanations for this: First, the concept of “democracy” appears to have been relatively vague for the majority of the respondents. This explanation corresponds to the finding of Bauer et al. (2017) that the meaning and interpretation of the political concepts “left” and “right” varied across respondents, which led the authors to conclude that this may also be the case for other abstract concepts used in survey questions. Second, the respondents may have given a reason in the “democracy” category as an ex-post legitimation of their answer to the closed question on voting participation.
Our research studied non-voting, a sensitive behavior that was reported by only about 15% of the respondents in the pre- and post-election surveys. Future studies could replicate our approach using a sensitive behavior that is reported by a larger subgroup of respondents (e.g., substance use (Johnson, 2014), discrimination (Petzold & Wolbring, 2019), and many other topics; see Tourangeau & Yan, 2007).
Another limitation was that we could use only cross-sectional data. Future studies could investigate the same research question using a longitudinal study design. Although the cross-sectional nature of our data does not limit our conclusion regarding open questions in general, it does affect, to a certain extent, our conclusion with respect to the comparison of the pre-election and post-election surveys.
Our study does not allow us to verify the reported reasons at the respondent level. Future studies could use a mixed-methods design that includes qualitative methodology to obtain more in-depth knowledge on this issue. For instance, such a study design could combine a standardized interview with cognitive interviews in order to verify and understand the answers of respondents (see Reeve et al., 2011; Hadler, Neuert, Lenzner, & Menold, 2018). Other interesting additions would be to include a validation of turnout using official voting records (e.g., Ansolabehere & Hersh, 2012) or to incorporate an experimental design such as a factorial experiment (see Petzold & Wolbring, 2019) or an experiment on question wording (see Henriques, Silva, Severo, Fraga, & Ramos, 2019).
Within our study, we could not validate whether the responses to the open questions about non-voting reflect the causal mechanism. Even though voting behavior is considered to be deliberate behavior, so that respondents are likely to be aware of the reasons behind their behavior, respondents still may not give substantive reasons for their voting behavior when asked directly. Our study addressed this limitation in two ways. First, the observed differences between reported and intended voting behavior may suggest that respondents give more substantive reasons when asked about reported behavior, which could be seen as evidence that at least some reasons are based on substantive motivations for the voting behavior. Second, in the regression models, we compared respondents who gave a specific reason with respondents who did not give this reason. The higher explanatory power within the group of respondents who gave that reason may again suggest that at least some reasons are based on substantive motivations for the voting behavior.
A further shortcoming of our study was that only one example of a behavioral open question was examined. Thus, the findings of our study can only be seen as a small piece of evidence that contributes to the comparison of open and closed questions in surveys. Future studies could replicate our approach in other countries or with cross-national datasets in order to investigate the generalizability of our findings. It would also be interesting to explore differences regarding attitudinal, behavioral, and factual questions, as well as regarding the sensitivity of the questions. Only when more cumulative evidence along these lines has been collected can reliable conclusions about open questions in general be drawn.
Open questions have well-known advantages, for example, that respondents are not influenced during the cognitive response process by specified response categories and are not obliged to select a category that does not completely match their response. Moreover, open questions increase the chance of obtaining new insights into the target field of research. In addition to these advantages, our study shows that the answers to open questions about behavior are (at least partly) based on substantive reasons, are strongly linked to the answers to related closed questions, and can be used in explanatory models to predict related behavior. It therefore furnishes evidence in support of approaches such as web probing (e.g., Meitinger et al., 2019) that use open questions for the pre- and post-evaluation of closed survey questions.