What Can We Learn From Open Questions in Surveys? A Case Study on Non-Voting Reported in the 2013 German Longitudinal Election Study

Henning Silber*a, Cornelia Zuella, Steffen-M. Kuehnelb

Methodology, 2020, Vol. 16(1), 41–58, https://doi.org/10.5964/meth.2801

Received: 2018-02-17. Accepted: 2019-11-17. Published (VoR): 2020-04-06.

*Corresponding author at: GESIS – Leibniz Institute for the Social Sciences, Department of Survey Design and Methodology, B2 1, 68159 Mannheim, Germany. E-mail: henning.silber@gesis.org

This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Open survey questions are often used to evaluate closed questions. However, they can fulfil this function only if there is a strong link between answers to open questions and answers to related closed questions. Using reasons for non-voting reported in the German Longitudinal Election Study 2013, we investigated this link by examining whether the reported reasons for non-voting may be substantive reasons or ex-post legitimations. We tested five theoretically derived hypotheses about respondents who gave, or did not give, a specific reason. Results showed that (a) answers to open questions were indeed related to answers to closed questions and could be used in explanatory turnout models to predict voting behavior, and (b) the relationship between answers to open and closed questions and the predictive power of reasons given in response to the open questions were stronger in the post-election survey (reported behavior) than in the pre-election survey (intended behavior).

Keywords: open questions, data quality, election, non-voting, random imputation

Survey research relies largely on closed questions because of their greater efficiency with respect to interviewing, coding, and analysis (Schuman & Presser, 1981). However, the debate about the advantages and disadvantages of using open questions is as old as survey research itself (for an overview, see Converse, 1984). Open questions have at least five advantages. First, they can yield examples for public outreach that reflect respondents’ own words; second, they can be used for the ex-post evaluation of closed survey questions; third, they can be used as a basis for future closed questions; fourth, they can be coded and used in explanatory statistical models; and finally, fifth, they can serve also as a motivational tool by giving respondents an opportunity to express their opinions in their own words (see Singer & Couper, 2017; Zuell, 2016).

In particular, the second and third advantages of open questions have become increasingly popular recently. In the context of web probing (Edgar, Murphy, & Keating, 2016; Meitinger, 2017; Meitinger, Behr, & Braun, 2019), researchers ask open follow-up questions to closed survey questions. Thus, open questions are used as a tool for both the pre- and the post-evaluation of closed survey questions. However, open questions can fulfil this evaluative function only if the answers to open questions are strongly related to the responses to the closed questions under evaluation.1

The comparison between responses to open and closed questions can be done either experimentally or with a multivariate explanatory model. To date, only a few studies have undertaken such comparisons. Two early experimental studies compared the answers to closed and open questions in face-to-face surveys (Schuman & Presser, 1981; Schuman & Scott, 1987). Using a split-ballot design, the authors compared the results of their experiments with open and closed questions about (a) the most important problem facing the United States, (b) the things respondents most preferred in a job (work values), and (c) the most important world events from 1930 until the present day. The studies showed that, although the results sometimes differed, both question formats produced reasonable and informative data of high quality. Over three decades later, Reja, Manfreda, Hlebec, and Vehovar (2003) conducted an experiment in which they compared closed and open questions in an online survey. Like Schuman and Presser (1981) and Schuman and Scott (1987), they found many similarities in the ranking of the values with respect to the most important problem that the Internet was facing at the time.

Another study evaluated the possibility of implementing open questions and closed questions simultaneously by using the two question formats to predict mental health (Friborg & Rosenvinge, 2013). The authors demonstrated that, although open questions yielded more detailed information than closed questions, they also produced more item nonresponse. Based on this finding, they concluded that the advantages and disadvantages of open questions canceled each other out. Thus, they questioned the usefulness of open questions because of their lower efficiency.

Using another approach, Bauer, Barberá, Ackermann, and Venetz (2017) investigated the validity of responses to open questions by using related closed questions. Respondents were asked open questions about the meanings of the political terms “left” and “right.” When the authors compared these answers to the answers to closed question requesting respondents to place themselves on the left-right scale, they found that variation in respondents' associations with "left" and "right" was systematically related to (a) their self-placement on the left-right scale and (b) background variables such as education and culture. The authors concluded that their study suggested that more research was needed on the interpretation of the various abstract concepts that are regularly used in survey questions.

Our study uses an innovative methodology, which includes the random imputation of missing values, to assess the relationship between responses to open and closed questions. This relationship is explored in the area of non-voting behavior, a particularly sensitive topic that is especially interesting to examine because the open question about non-voting is asked as a follow-up to a closed question about voting behavior. This resembles a question series that is typically used in web probing. The case of non-voting behavior allows us also to distinguish between intended and reported behavior, which are both regularly measured in survey practice (Sudman, Bradburn, & Schwarz, 1996). In what follows, we develop theory-based hypotheses, which we test using data from the pre- and post-election surveys conducted in the framework of the German Longitudinal Election Study 2013 (Rattinger et al., 2017).

Background and Hypotheses

At first glance, it seems obvious that there must be a strong relationship between responses to an open question and a related closed question. However, the relationship between the two questions is less obvious when—as in web probing—an open question is used as a follow-up question to better understand the answer to a closed question. In this case, it may be possible that the open answer reflects the reasons for the answer to the closed question. Alternatively, however, the response to the open question may merely be an ex-post legitimation of the answer to the closed question, where the answer to the closed question was given in an automatic processing mode without careful reasoning. In this case, a respondent has to construct an ex-post reason when answering the open question in order to fulfill the expectation that all actions are reasoned. As a result, the answer to the open question does not yield any further information about the true motivation behind the closed answer, because the main purpose of this answer was impression management (e.g., to give the false impression that the response to the closed question was well thought-out). From the perspective of the cognitive response process (Tourangeau, Rips, & Rasinski, 2000), ex-post legitimations are a way of hiding shortcuts in the response process and making the response appear to be well thought-out. In this sense, a respondent who "satisfices" (by expending less energy and settling for a merely satisfactory answer) is difficult to distinguish from a respondent who "optimizes" by going through all steps of the cognitive response process thoroughly in order to give the most accurate answer possible (Krosnick, 1991).

In order to distinguish between reasons for a behavior and ex-post legitimation, it is necessary to go beyond the pair of related open and closed questions and to use additional related variables that help the researcher to discriminate whether or not the responses to an open question report the substantive reasons for the behavior in question. Following this idea, voting versus non-voting seems to be a particularly useful example. Voting behavior is a well-established field of research in which many studies have shown that intended and reported voting behavior can be predicted by theoretically well-founded explanatory variables (e.g., Verba & Nie, 1987).2 The example of voting behavior also seems useful because voting is widely considered to be a deliberate decision, and democracies are built on that premise. Thus, it can be assumed that voting behavior has a high likelihood of being intentional and well-reasoned.

Important explanatory factors in the models employed in these studies are “satisfaction with democracy,” “voting norm,” “political interest,” and “party evaluation” (see Verba & Nie, 1987). In election studies (e.g., the American National Election Study), these key factors are usually measured by closed questions; they should also be reflected in answers to open questions if these answers are substantive reasons rather than legitimations. In what follows, we refer to these determinants of voting as “established factors.” As established factors are known to have a strong relationship to voting behavior, they should be closely related to corresponding substantive reasons. In contrast, if the responses to an open question are predominantly legitimations, then this strong relationship to a corresponding established factor should be absent. This leads to the following two hypotheses:

  • H1: If a respondent answers an open question by giving the substantive reason for the behavior in question, then the reason given should correspond to his or her answer to a substantively related closed question.

  • H2: The predictive power of an established factor should be stronger for respondents whose answers to the related open question correspond to their answers to the closed question than for respondents whose answers are inconsistent.

Behavioral questions can be asked in two ways in surveys: first, as a question requesting a report about a behavior that has already been executed; second, as a question about an intended behavior that will, or might be, executed (see Sudman et al., 1996). Numerous studies have shown that answers to questions about executed behavior are more precise than answers to questions about intended behavior (e.g., Diekmann & Preisendörfer, 1998; LaPiere, 1934; Schahn & Holzer, 1990). With respect to non-voting, we expect that questions about reported (non-voting) behavior in post-election surveys will be easier to answer than questions about intended voting behavior in pre-election surveys because, before an election, a respondent has to imagine a future situation, whereas (shortly) after election day respondents can retrieve behavioral information from memory more easily (Tourangeau et al., 2000). If the cognitive task is more difficult and requires greater cognitive effort, respondents are more likely to reduce their response burden by taking shortcuts in the response process (Krosnick, 1991). Applying this to open questions on non-voting behavior, we expect more legitimations and fewer substantive reasons in the case of intended voting behavior than in the case of recalled voting behavior. This leads to the following hypotheses:

  • H3: The consistency between responses to an open question and responses to a related closed question is stronger for reported behavior than for intended behavior.

  • H4: The predictive power of a closed question is stronger for reported behavior than for intended behavior.

Based on the differences between intended and realized behavior (Sudman et al., 1996), we further expect substantial differences between the answers given to open questions in pre-election and in post-election surveys. With respect to reasons for (and legitimations of) a behavioral choice, the responses to open questions can be based either on general characteristics of the situation and on external circumstances (related to external entities and events outside a respondent’s self) or on specific characteristics of a respondent and on internal circumstances (related to a respondent’s internal attitudes and values). Because intention formation is a more general task than recalling a realized behavior, we expect that intended behavior is more likely to be attributed to external circumstances and reported behavior is more likely to be attributed to internal circumstances. This leads to the following hypothesis:

  • H5: In the case of a question about an intended behavior, respondents are more likely to attribute their choice to external reasons; in the case of a question about a reported behavior, respondents are more likely to attribute their choice to internal reasons.

Method

Data

For our analyses, we used pre- and post-election cross-sectional surveys that collected data on the 2013 German federal election, which was held on September 22, 2013. The surveys were conducted in the framework of the German Longitudinal Election Study (GLES); the data and documentation are publicly available for scientific use (Rattinger et al., 2017). The sample for each survey was drawn using a stratified three-stage random sampling technique: In the first stage, 306 primary sampling units (PSUs) were selected. These PSUs were the starting points for a random route procedure in the second stage, in which interviewers selected the target households. In the third stage, one adult aged 16 or older in each household was invited to take part in the interview. In both surveys, the data collection mode was face-to-face interviewing using computer-assisted personal interviews (CAPI).

The pre-election survey was fielded between July 29 and September 21, 2013. A response rate of 32.1% (RR6; see The American Association for Public Opinion Research [AAPOR], 2016) resulted in a sample size of 2,001 respondents. The post-election survey was fielded between September 23 and December 23, 2013. A response rate of 27.6% (RR6; see AAPOR, 2016) resulted in a sample size of 1,906 respondents. In the pre-election survey, 12.6% of respondents (N = 226) reported that they would not vote in the election; in the post-election survey, 15.2% of respondents (N = 289) reported that they had not voted. The actual proportion of non-voters in the German federal election was 28.5% (Federal Returning Officer, 2013). Because we aimed for representative estimates of these figures, we calculated the percentages of voters and non-voters using a weight that accounted for region, household size, sex, age, and education. The remaining analyses, which examined internal consistency, were conducted with unweighted data.
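As a minimal sketch of this weighting step, the following Python/pandas snippet (with toy data and illustrative variable names, not the original GLES variables) computes a weighted non-voter share as the sum of non-voters' weights divided by the sum of all weights:

```python
import pandas as pd

# Toy data: voted (1 = voted, 0 = did not vote) and a weight of the kind
# described above (accounting for region, household size, sex, age, education).
df = pd.DataFrame({
    "voted":  [1, 1, 0, 1, 0, 1],
    "weight": [0.8, 1.2, 1.0, 0.9, 1.5, 0.6],
})

# Weighted non-voter share: non-voters' weight sum over the total weight sum.
nonvoter_share = df.loc[df["voted"] == 0, "weight"].sum() / df["weight"].sum()
print(f"Weighted share of non-voters: {nonvoter_share:.1%}")
```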

Measures

Open Questions

A closed question about the probability that the respondent would vote (pre-election survey) or about the respondent's voting recall (post-election survey) was, if applicable, immediately followed by an open question asking why the respondent was not going to vote or had not voted.3 In the pre-election survey, the wording of this question was: “And why will you probably not vote? Please tell me the most important reason.” In the post-election survey, non-voters were asked: “And why did you not vote? Please tell me the most important reason.” In both surveys, answers were coded using a classification scheme comprising 38 categories of reasons that was developed for the GLES study.4 We conducted a content analysis in which we grouped these categories into the following five main categories (see Table 1 for the distributions of these five categories of reasons in the pre-election and post-election surveys):

Table 1

Reasons Given in Open Question About Non-Voting

| Reason | Intended: % | Nᵃ | Reported: % | Nᵃ | Exp. | Diff. (%) | z | p | Conf. |
|---|---|---|---|---|---|---|---|---|---|
| External reasons | | | | | | | | | |
| Political system (ER) | 43.9 | 101 | 45.0 | 129 | + | -0.2 | -0.24 | .814 | No |
| Egotism of parties (ER) | 30.4 | 70 | 10.5 | 30 | + | 20.5 | 5.72 | < .001 | Yes |
| Internal reasons | | | | | | | | | |
| Political interest (IR) | 13.0 | 30 | 13.6 | 39 | - | -0.3 | -0.18 | .856 | No |
| Specific circumstances (IR) | 10.0 | 23 | 26.5 | 76 | - | -16.3 | -4.73 | < .001 | Yes |
| Others | 2.6 | 6 | 4.5 | 13 | O | -1.8 | -1.15 | .249 | Yes |

The z and p columns refer to the test of the equality of the intended and reported percentages (H0: %Intended = %Reported).

Note. Source: German Longitudinal Election Study 2013: Pre- and Post-Election Cross-Section (Rattinger et al., 2017). Exp. = expectation; Diff. = difference (in %); Conf. = confirmation of expectation. Some respondents gave more than one reason. The p-values are based on two-tailed tests.

aN is based on answers of 198 respondents for intended non-voting and on answers of 250 respondents for reported non-voting.

Political System

This category covered dissatisfaction with the political system (e.g., “dissatisfied with the political system,” “politicians are incompetent”), as well as low political involvement and lack of influence (e.g., “My vote has no influence.” “My party has no chance.”). When asked about their intention not to vote, 43.9% of the respondents gave a reason in this category; when asked why they had not voted, 45.0% of the respondents gave such a reason.

Political Interest

This category covered low political interest and knowledge (e.g., “not interested in politics”). When asked about their intention not to vote, 13.0% of the respondents gave a reason in this category; when asked why they had not voted, such a reason was given by 13.6% of the respondents.

Egotism in Politics

This category covered egotism on the part of politicians and parties (e.g., “politicians care only about themselves,” “empty campaign promises”). When asked why they did not intend to vote, 30.4% of the respondents gave a reason in this category; when asked why they had not voted, such a reason was cited by 10.5% of the respondents.

Specific Circumstances

This category covered circumstances on Election Day (e.g., “sick,” “no time”). When asked why they did not intend to vote, 10.0% of the respondents gave a reason in this category; when asked why they had not voted, such a reason was cited by 26.5% of the respondents.

Other

This category covered all reasons that were not covered by the other four categories (e.g., “Voting is against my religious beliefs.”). When asked why they did not intend to vote, 2.6% of the respondents gave a reason that did not fit into one of the four main categories; when asked why they had not voted, 4.5% of the respondents gave such a reason.

The open questions asked only for the most important reason for not voting. However, if respondents reported more than one reason, a maximum of three reasons were coded. The complete classification scheme used for our analysis can be found in Silber et al. (2020) in the Supplementary Materials. For each of the five categories, a dummy variable was generated that indicated whether or not a respondent gave a reason in that category (1 = reason given, 0 = no reason given).
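As a minimal sketch of this coding step (in Python with pandas, using illustrative category labels rather than the original 38 GLES codes), the five dummy variables can be derived from the up-to-three coded reasons per respondent as follows:

```python
import pandas as pd

# Toy data: one row per non-voter, up to three coded reasons (None = no further reason).
# Category labels are illustrative, not the original GLES codes.
reasons = pd.DataFrame({
    "reason1": ["political_system", "egotism_parties", "specific_circumstances"],
    "reason2": ["political_interest", None, None],
    "reason3": [None, None, None],
})

main_categories = ["political_system", "political_interest", "egotism_parties",
                   "specific_circumstances", "other"]

for cat in main_categories:
    # 1 = reason given in any of the (up to three) mentions, 0 = reason not given
    reasons[cat] = (reasons[["reason1", "reason2", "reason3"]] == cat).any(axis=1).astype(int)

print(reasons[main_categories])
```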

Internal and External Reasons

The categories “specific circumstances” and “political interest” were classified as internal reasons because they relate to respondents’ internal attitudes and values, whereas the categories “egotism in politics” and “political system” were classified as external reasons because they relate to external entities and events outside the respondent's self. As our research explored non-voting behavior, all items were coded in such a way that higher values indicated a negative attitude toward voting. Although the open and the closed questions did not match perfectly, the topics were very similar.

Closed Questions

Both questionnaires included several questions on political knowledge, attitudes, and behavior, as well as sociodemographic questions. Of these questions, we selected four closed questions (items), representing established factors predicting voting, with which to compare the answers to the open questions. All four items were asked identically in the pre-election and the post-election surveys.

Satisfaction With Democracy

The dataset included the following question on satisfaction with democracy: “How satisfied or dissatisfied are you with democracy in general in Germany?” (Response categories: very satisfied [1], satisfied, neither satisfied nor dissatisfied, dissatisfied, very dissatisfied [5]).

Voting Norm

The item on the voting norm was as follows: “In a democracy, it is the duty of every citizen to vote regularly.” (Response categories: strongly agree [1], slightly agree, neither agree nor disagree, slightly disagree, strongly disagree [5]).

Political Interest

Respondents were asked the following question on political interest: “How interested are you in politics in general?” (Response categories: very interested [1], interested, moderately interested, slightly interested, not interested [5]).

Egotism of Parties

The questionnaire included the following item on the egotism of political parties: “Parties are only interested in votes, not in the opinions of the voters.” (Response categories: strongly agree [5], slightly agree, neither agree nor disagree, slightly disagree, strongly disagree [1]).

Item Nonresponse

Item nonresponse was very low—under 8% for the open questions and under 2% for the closed questions. For instance, nonresponse (i.e., “don’t know” and “refusal”) for the closed question on voting participation was 1.1% in the pre-election survey and 0.2% in the post-election survey. Nonresponse for the open questions on non-voting was 4.1% in the pre-election survey and 7.6% for the post-election survey.

Comparability of Open and Closed Questions

The categories of the open questions and the closed questions were selected in an iterative process to be as comparable as possible. Specifically, the category “political system” was linked to the closed questions “satisfaction with democracy” and “voting norm.” The category “political interest” was linked to the closed question “political interest,” and the category “egotism in politics” was linked to the closed question “egotism of parties.”

Analysis

We employed the following analysis strategies to test our five research hypotheses. The first and third hypotheses were tested by comparing, for each of the four closed questions, the means of respondents who gave the corresponding reason when answering the open question with the means of respondents who did not give such a reason. The dependent variables were "satisfaction with democracy," "voting norm," "political interest," and "egotism of parties." We expected to find relatively small mean differences, because we compared only non-voters, who had a lower variance than the full sample on these four attitudinal questions.
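For a single factor, this comparison amounts to a two-sample t-test between non-voters who did and did not give the corresponding reason. The following sketch (Python with SciPy, toy data, and hypothetical variable names, not the original analysis code) illustrates the test for "political interest":

```python
import pandas as pd
from scipy import stats

# Toy non-voter data: the closed item (coded 1-5, higher = more negative attitude)
# and the dummy for the matching reason from the open question.
nonvoters = pd.DataFrame({
    "pol_interest_closed": [5, 4, 4, 3, 5, 2, 3, 4, 5, 3],
    "pol_interest_reason": [1, 1, 1, 0, 1, 0, 0, 0, 0, 0],
})

gave = nonvoters.query("pol_interest_reason == 1")["pol_interest_closed"]
not_gave = nonvoters.query("pol_interest_reason == 0")["pol_interest_closed"]

# Two-sample t-test; the article reports two-tailed p-values.
t, p = stats.ttest_ind(gave, not_gave)
print(f"M(reason given) = {gave.mean():.2f}, M(no reason) = {not_gave.mean():.2f}, "
      f"t = {t:.2f}, p = {p:.3f}")
```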

The second and fourth research hypotheses were tested using a classic behavioral voting model (see Verba & Nie, 1987). As only non-voters were asked the open questions on non-voting, we could not directly investigate the link between voting behavior and the answers to these open questions. We solved this problem by randomly imputing values of the open questions for the voters, who were not asked these questions, so that the marginal distributions of the imputed open questions were equal for voters and non-voters. Consequently, the bivariate correlation between voting participation and the open questions on non-voting becomes zero by construction. Nevertheless, if the responses given to the open questions by the non-voters are substantive reasons for not voting, it can be expected that the relationship between the established factors and voting participation is stronger in the subgroup in which a specific reason was given than in the subgroup in which it was not given. Because this expectation applies only to reasons actually given and not to randomly imputed reasons, the difference may be underestimated. Therefore, our approach must be considered a conservative test of these two hypotheses.
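The sketch below illustrates this strategy for one factor ("egotism") in Python with statsmodels, using simulated data and assumed variable names rather than the authors' original code. The reason dummy is imputed for voters from a Bernoulli draw with the non-voters' observed proportion, and voting is then regressed on the established factor separately in the two reason subgroups, comparing McFadden's pseudo-R²:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 1000
df = pd.DataFrame({
    "voted": rng.integers(0, 2, n),           # 1 = voted, 0 = did not vote (toy data)
    "egotism_closed": rng.integers(1, 6, n),  # closed item, coded 1-5
})

# Only non-voters answered the open question; voters' reason dummy is missing.
df["egotism_reason"] = np.nan
nonvoters = df["voted"] == 0
df.loc[nonvoters, "egotism_reason"] = rng.binomial(1, 0.35, nonvoters.sum())

# Random imputation for voters, preserving the non-voters' marginal distribution.
p_reason = df.loc[nonvoters, "egotism_reason"].mean()
df.loc[~nonvoters, "egotism_reason"] = rng.binomial(1, p_reason, (~nonvoters).sum())

def fit_logit(sub):
    """Logit of voting on the established factor; returns the odds ratio and McFadden's pseudo-R2."""
    X = sm.add_constant(sub[["egotism_closed"]])
    res = sm.Logit(sub["voted"], X).fit(disp=0)
    return np.exp(res.params["egotism_closed"]), res.prsquared

for label, given in [("Model A (no reason given)", 0), ("Model B (reason given)", 1)]:
    odds, pr2 = fit_logit(df[df["egotism_reason"] == given])
    print(f"{label}: e^beta = {odds:.2f}, McFadden pseudo-R2 = {pr2:.3f}")
```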

In order to test the fifth research hypothesis about the reasons for not voting that were given before and after the election, we compared the answers to the open question in the pre-election survey with those to the open question in the post-election survey. The significance of each percentage difference was tested by using the z-statistic.
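A minimal sketch of such a comparison for the "egotism of parties" category, using the counts from Table 1 and statsmodels' two-proportion z-test (one standard implementation, not necessarily the authors' exact routine), is:

```python
from statsmodels.stats.proportion import proportions_ztest

# "Egotism of parties": mentioned by 70 of 198 non-voters (intended, pre-election)
# and by 30 of 250 non-voters (reported, post-election); counts from Table 1.
counts = [70, 30]
nobs = [198, 250]

z, p = proportions_ztest(counts, nobs)  # two-sided test of equal proportions
print(f"z = {z:.2f}, p = {p:.4f}")
```

Because respondents could give more than one reason and the exact variance formula used in Table 1 is not documented, the z obtained here (about 5.9 with these counts) is close to, but not identical with, the reported value of 5.72.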

Results

Our first hypothesis postulated a strong relationship between the established factors of voting behavior and the answers to the open questions. When comparing the means of the four factors “democracy,” “voting norm,” “political interest,” and “egotism” between respondents who gave a reason in the corresponding category in response to the open question and respondents who did not give such a reason, we found that only one of the four mean differences was significant (p < .001) and in the expected direction for the pre-election study, whereas all four mean differences were significant and in the expected direction for the post-election study (p < .01; see Table 2).5 This result not only confirmed our first hypothesis about the relationship between answers to the open and closed questions, but also confirmed our third hypothesis postulating a stronger relationship after than before the election. Notably, “democracy” showed the smallest mean difference before the election, which suggests that the reasons in this category may have been given as ex-post justifications for not voting.

Table 2

Mean Differences Between Established Factors to Predict Voting Through the Reasons for Non-Voting Given in Response to Open Questions

| Variable | No reason given: Mᵃ | N | Reason given: Mᵃ | N | Diff. | Exp. | t | p | Conf. |
|---|---|---|---|---|---|---|---|---|---|
| Intended | | | | | | | | | |
| Democracy | 3.30 | 102 | 3.28 | 101 | .02 | - | 0.20 | .841 | No |
| Voting norm | 3.66 | 102 | 3.90 | 101 | -.24 | - | -1.45 | .148 | No |
| Pol. interest | 4.13 | 173 | 4.37 | 30 | -.24 | - | -1.37 | .174 | No |
| Egotism | 4.26 | 133 | 4.77 | 70 | -.51 | - | -4.18 | < .001 | Yes |
| Reported | | | | | | | | | |
| Democracy | 2.90 | 141 | 3.23 | 129 | -.33 | - | -2.70 | .008 | Yes |
| Voting norm | 3.12 | 141 | 3.81 | 129 | -.69 | - | -4.33 | < .001 | Yes |
| Pol. interest | 3.90 | 231 | 4.46 | 39 | -.56 | - | -3.54 | < .001 | Yes |
| Egotism | 4.08 | 340 | 4.57 | 30 | -.49 | - | -2.77 | .006 | Yes |

The t and p columns refer to the test of whether M is larger when the reason was given.

Note. Source: German Longitudinal Election Study 2013: Pre- and Post-Election Cross-Section (Rattinger et al., 2017). Exp. = expectation; Diff. = difference in means; Conf. = confirmation of expectation. Higher values reflect a negative opinion (e.g., a negative opinion about democracy; items were recoded where necessary). The p-values are based on two-tailed tests.

aAll closed questions had five response categories (coded 1 to 5). The items were coded in such a way that a higher value indicated a negative attitude toward the target issue.

Building on the relationship between answers to open questions and established factors of voting, the second and fourth hypotheses postulated that the prediction of voting behavior by established factors is more accurate if a corresponding reason for this behavior is given in response to the open questions than if no such reason is given. In all eight comparisons, McFadden’s pseudo-R2 was indeed higher when a corresponding reason was given in response to an open question (“Model B”) than when no such reason was given (“Model A”). When looking at the odds ratios of the items that corresponded to the reasons given in the open answers, five of the eight effects were significantly higher in Model B (p < .05; see Table 3), which partly confirmed the second hypothesis. Again, “democracy” did not show a significant effect, which further supports the assumption that reasons in the “democracy” category given in response to the open question on non-voting behavior were given more as ex-post justifications than as substantive reasons for not voting. In line with our fourth hypothesis, we found significantly stronger differences for three of the four factors in the post-election survey compared to two of the four factors in the pre-election survey. This finding further confirms our assumption that the behavioral predictions are more accurate overall after the election than before the election. A sketch of one way to compare the coefficients of the two subgroup models follows Table 3.

Table 3

Prediction of Voting Behavior Before and After the Election

| Variable | Model 0 (all): e^β | p | ps.-R2 | Model A (no reason given): e^β | p | ps.-R2 | Model B (reason given): e^β | p | ps.-R2 | Exp. | z | p | Conf. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Intended | | | | | | | | | | | | | |
| Democracy | 1.22 | .074 | .405 | 1.27 | .097 | .352 | 1.15 | .429 | .469 | - | 0.45 | .652 | No |
| Voting norm | 2.34 | < .001 | .405 | 2.18 | < .001 | .352 | 2.58 | < .001 | .469 | - | -1.19 | .275 | No |
| Pol. interest | 2.77 | < .001 | .405 | 2.53 | < .001 | .395 | 5.89 | < .001 | .511 | - | -2.05 | .041 | Yes |
| Egotism | 1.87 | < .001 | .405 | 1.60 | .001 | .357 | 3.25 | < .001 | .527 | - | -2.31 | .021 | Yes |
| Reported | | | | | | | | | | | | | |
| Democracy | 1.13 | .182 | .324 | 1.05 | .638 | .195 | 1.23 | .181 | .525 | - | -0.82 | .413 | No |
| Voting norm | 1.97 | < .001 | .324 | 1.72 | < .001 | .195 | 2.59 | < .001 | .525 | - | -3.16 | .002 | Yes |
| Pol. interest | 2.50 | < .001 | .324 | 2.29 | < .001 | .292 | 5.21 | < .001 | .574 | - | -2.24 | .025 | Yes |
| Egotism | 1.55 | < .001 | .324 | 1.45 | < .001 | .298 | 4.06 | < .001 | .636 | - | -2.31 | .021 | Yes |

The z and p columns refer to the test of whether e^β is larger in Model B than in Model A.

Note. Source: German Longitudinal Election Study 2013: Pre- and Post-Election Cross-Section (Rattinger et al., 2017). Exp. = expectation; Conf. = confirmation of expectation. Coefficients and model fits are based on logistic regression models (for the full regression models including sample sizes, see Silber et al., 2020, in the Supplementary Materials). The p-values are based on two-tailed tests.
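The z-statistics in Table 3 compare coefficients from two logistic regressions fitted on disjoint subgroups. The article does not document the exact formula used; one common large-sample approach (an assumption on our part, not the authors' code) divides the difference of the log-odds coefficients by the square root of the sum of their squared standard errors:

```python
import math

def coef_diff_z(b_a: float, se_a: float, b_b: float, se_b: float) -> float:
    """Large-sample z for the difference between two coefficients estimated on
    independent subsamples: z = (b_B - b_A) / sqrt(se_A**2 + se_B**2)."""
    return (b_b - b_a) / math.sqrt(se_a ** 2 + se_b ** 2)

# Purely illustrative numbers (Table 3 reports e^beta, not b and SE values).
print(round(coef_diff_z(b_a=0.47, se_a=0.15, b_b=1.18, se_b=0.28), 2))
```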

The fifth hypothesis postulated that respondents give more external reasons before an election and more internal reasons after an election. Table 1 shows the results for the pre-election and post-election surveys with respect to the internal reasons “political interest” and “specific circumstances” and the external reasons “political system” and “egotism of parties.” The comparison between the percentages of respondents who gave internal and external reasons showed that respondents did, in fact, report more external reasons before the election (“egotism of parties,” z = 5.72, p < .001) and more internal reasons after the election (“specific circumstances,” z = -4.73, p < .001; see Table 1). This result confirms our fifth hypothesis. It is notable that the differences between external and internal reasons before and after the election were driven solely by “egotism of parties,” on the one hand, and “specific circumstances,” on the other, which suggests that respondents tended to attribute their behavior to the egotism of others before the election and to personal circumstances such as health and time constraints after the election.

The strong relationship between the established factor “egotism” and the answers to the open question before the election (see Table 2) further supports our fifth hypothesis that external reasons are especially involved in the justification of non-voting decisions when respondents are asked before the election; the strong relationship between “political interest” and the answers to the open question after the election (see Table 2) supports the hypothesis that internal reasons are especially involved in the justification of non-voting decisions after the election.

Discussion

The present study supports the notion that respondents do, in fact, give substantive reasons when answering an open question on non-voting behavior. The quality of the answers was evaluated by testing five hypotheses. Results showed, first, that the reasons given to the open questions had strong links (63% significant) to corresponding established factors of voting behavior (e.g., “political interest” and “voting norm”). Second, these links were stronger after the election (100% significant) than before the election (25% significant). Third, the answers to the open questions increased the relationship between established factors and voting behavior (63% significant). This was true for three of the four factors (i.e., “political interest,” “voting norm,” and “egotism”). Only the factor “democracy,” which also had the weakest relationship to the closed questions, did not have this predictive capability. This finding suggests that this factor was used more as an ex-post justification than as a substantive reason for not voting. Fourth, the predictions of voting behavior were again more accurate after the election (75% significant) than before the election (50% significant). And, finally, fifth, respondents gave significantly more external reasons (i.e., “egotism of parties”) before the election and significantly more internal reasons (i.e., “specific circumstances” such as “I did not have time,” or “I was sick.”) after the election.

The findings of this study are in line with those of Schuman and Presser’s (1981) split-ballot design experiments on the relationship between open and closed questions, which also showed that, although not identical, the outcomes were often quite comparable. The divergent finding on the factor “democracy” is conspicuous because it was the only factor that did not have a strong relationship to the related closed questions. We see two possible explanations for this: First, the concept of “democracy” appears to have been relatively vague for the majority of the respondents. This explanation corresponds to the finding of Bauer et al. (2017) that the meaning and interpretation of the political concepts “left” and “right” varied across respondents, which led the authors to conclude that this may also be the case for other abstract concepts used in survey questions. Second, the respondents may have given a reason in the “democracy” category as an ex-post-legitimation of their answer to the closed question on voting participation.

Limitations

Our research studied non-voting, a sensitive behavior that was reported by only about 13% and 15% of the respondents in the pre- and post-election surveys, respectively. Future studies could replicate our approach using a sensitive behavior that is reported by a larger subgroup of respondents, such as substance use (Johnson, 2014), discrimination (Petzold & Wolbring, 2019), and many other topics (see Tourangeau & Yan, 2007).

Another limitation was that we could use only cross-sectional data. Future studies could investigate the same research question using a longitudinal study design. Although the cross-sectional nature of our data does not limit our conclusions regarding open questions in general, it does affect, to a certain extent, our conclusions with respect to the comparison of the pre-election and post-election surveys.

Our study does not allow us to verify the reported reasons at the respondent level. Future studies could use a mixed-methods design that includes qualitative methodology to obtain more in-depth knowledge on this issue. For instance, such a study design could combine a standardized interview with cognitive interviews in order to verify and understand the answers of respondents (see Reeve et al., 2011; Hadler, Neuert, Lenzner, & Menold, 2018). Other interesting additions would be to include a validation of turnout using official voting records (e.g., Ansolabehere & Hersh, 2012) or to incorporate an experimental design such as a factorial experiment (see Petzold & Wolbring, 2019) or an experiment on question wording (see Henriques, Silva, Severo, Fraga, & Ramos, 2019).

Within our study, we could not validate whether the responses to the open questions about non-voting reflect the causal mechanism. Even though voting is considered to be a deliberate behavior, so that respondents are likely to be aware of the reasons behind their behavior, they may still not give substantive reasons for their voting behavior when asked directly. Our study addressed this limitation in two ways. First, the observed differences between reported and intended voting behavior may suggest that respondents give more substantive reasons when asked about reported behavior, which could be seen as evidence that at least some reasons are based on substantive motivations for the voting behavior. Second, in the regression models, we compared respondents who gave a specific reason with respondents who did not give this reason. The higher explanatory power within the group of respondents who gave that reason may again suggest that at least some reasons are based on substantive motivations for the voting behavior.

A further shortcoming of our study was that only one example of a behavioral open question was examined. Thus, the findings of our study can be seen only as a small piece of evidence that contributes to the comparison of open and closed questions in surveys. Future studies could replicate our approach in other countries or with cross-national datasets in order to investigate the generalizability of our findings. It would also be interesting to explore differences regarding attitudinal, behavioral, and factual questions, as well as regarding the sensitivity of the questions. Only when more cumulative evidence along these lines has been collected can reliable conclusions about open questions in general be drawn.

Conclusion

Open questions have well-known advantages: for example, respondents are not influenced during the cognitive response process by specified response categories and are not obliged to select a category that does not completely match their response. Moreover, open questions increase the chance of obtaining new insights into the target field of research. In addition to these advantages, our study shows that the answers to open questions about behavior are (at least partly) based on substantive reasons, are strongly linked to the answers to related closed questions, and can be used in explanatory models to predict related behavior. It therefore furnishes evidence in support of approaches such as web probing (e.g., Meitinger et al., 2019) that use open questions for the pre- and post-evaluation of closed survey questions.

Notes

1) Besides the relationship to closed questions, open questions must fulfil various data quality criteria such as a low rate of item nonresponse, a high rate of substantive response, and sufficient response length. These aspects have been studied extensively in previous research (e.g., Behr, Meitinger, Braun, & Kaczmirek, 2017; Schmidt, Gummer, & Roßmann, 2020).

2) The percentage of people who do not vote is typically underestimated in both pre-election and post-election surveys (e.g., Bernstein, Chadha, & Montjoy, 2001). However, it has been shown that this does not substantially affect the predictive power of the explanatory variables, as comparisons of data that included self-reports and data that were validated using voter records revealed similar results (e.g., Katosh & Traugott, 1981).

3) Respondents who reported that they intended to vote (pre-election) or that they had voted (post-election) were asked in a closed follow-up question for which party they intended to vote or had voted, and then in an open question why they intended to vote or had voted for that party. As these open questions are not comparable to the open questions about why the respondent did not intend to vote or had not voted, we did not use them in the present analyses. In order to be comparable, open questions on voting would have to have asked why the respondent intended to vote (pre-election) or had voted (post-election).

4) The German-language classification scheme is part of the project documentation and can be retrieved from https://dbk.gesis.org/dbksearch/download.asp?id=58280.

5) Under the assumption that the four tests for the pre-election and post-election surveys are independent, the probability of obtaining at least one significant result from four tests at the 5% level is 18.5%, $p = 1 - \binom{4}{0} \times 0.05^{0} \times 0.95^{4} \approx .185$, whereas the probability that all four test results will be significant is less than 0.1%, $p = \binom{4}{4} \times 0.05^{4} \times 0.95^{0} = 0.05^{4} < .001$. This analysis demonstrates that the results of the pre-election study could have occurred by chance, whereas the results of the post-election study are not likely to be a purely random outcome.
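These two binomial probabilities can be checked with a few lines of Python (SciPy assumed):

```python
from scipy.stats import binom

# P(at least one of four independent tests significant at the 5% level)
p_at_least_one = 1 - binom.pmf(0, 4, 0.05)   # = 1 - 0.95**4, approximately 0.185
# P(all four tests significant at the 5% level)
p_all_four = binom.pmf(4, 4, 0.05)           # = 0.05**4, approximately 6.3e-06 < .001
print(round(p_at_least_one, 3), p_all_four)
```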

Funding

The authors have no funding to report.

Competing Interests

The authors have declared that no competing interests exist.

Acknowledgments

The authors have no support to report.

Data Availability

The data for this article is available for scientific use (for access options see Supplementary Materials).

Supplementary Materials

For this article the following supplementary materials are available:

  • Rattinger et al., (2017):

    • The German Longitudinal Election Study (GLES) 2013: Data, questionnaires, codebook, study description.

  • Silber et al. (2020):

    • Coding categories for reasons for non-voting.

    • Logistic regressions predicting voting behavior before and after the election (related to Table 3).

Index of Supplementary Materials

  • Rattinger, H., Roßteutscher, S., Schmitt-Beck, R., Weßels, B., Wolf, C., Wagner, A., … Scherer, P. (2017). Pre- and post-election cross-section (GLES 2013). GESIS Data Archive, Cologne. ZA5702 Data File Version 3.0.0. https://doi.org/10.4232/1.12810

  • Silber, H., Zuell, C., & Kuehnel, S. M. (2020). Supplementary materials to: What can we learn from open questions in surveys? A case study on non-voting reported in the 2013 German Longitudinal Election Study. PsychOpen. https://doi.org/10.23668/psycharchives.2783

References

  • The American Association for Public Opinion Research. (2016). Standard definitions: Final dispositions of case codes and outcome rates for surveys. The American Association for Public Opinion Research. Retrieved from https://www.aapor.org/AAPOR_Main/media/publications/Standard-Definitions20169theditionfinal.pdf

  • Ansolabehere, S., & Hersh, E. (2012). Validation: What big data reveal about survey misreporting and the real electorate. Political Analysis, 20(4), 437-459. https://doi.org/10.1093/pan/mps023

  • Bauer, P. C., Barberá, P., Ackermann, K., & Venetz, A. (2017). Is the left-right scale a valid measure of ideology? Political Behavior, 39(3), 553-583. https://doi.org/10.1007/s11109-016-9368-2

  • Behr, D., Meitinger, K., Braun, M., & Kaczmirek, L. (2017). Web probing – Implementing probing techniques from cognitive interviewing in web surveys with the goal to assess the validity of survey questions (GESIS Survey Guidelines). Mannheim, Germany: GESIS – Leibniz-Institute for the Social Sciences. https://doi.org/10.15465/gesis-sg_en_023

  • Bernstein, R., Chadha, A., & Montjoy, R. (2001). Overreporting voting: Why it happens and why it matters. Public Opinion Quarterly, 65(1), 22-44. https://doi.org/10.1086/320036

  • Converse, J. M. (1984). Strong arguments and weak evidence: The open/closed questioning controversy of the 1940s. Public Opinion Quarterly, 48(1B), 267-282. https://doi.org/10.1093/poq/48.1B.267

  • Diekmann, A., & Preisendörfer, P. (1998). Environmental behavior: Discrepancies between aspirations and reality. Rationality and Society, 10(1), 79-102. https://doi.org/10.1177/104346398010001004

  • Edgar, J., Murphy, J., & Keating, M. (2016). Comparing traditional and crowdsourcing methods for pretesting survey questions. SAGE Open, 6(4), https://doi.org/10.1177/2158244016671770

  • Federal Returning Officer. (2013). Final results by constituencies. Magazine no. 3. Wiesbaden, Germany: Author. Retrieved from https://www.bundeswahlleiter.de/en/bundestagswahlen/2013/publikationen.html

  • Friborg, O., & Rosenvinge, J. H. (2013). A comparison of open-ended and closed questions in the prediction of mental health. Quality & Quantity, 47, 1397-1411. https://doi.org/10.1007/s11135-011-9597-8

  • Hadler, P., Neuert, C., Lenzner, T., & Menold, N. (2018). Preparation of the 7th European Working Conditions Survey (EWCS) – Post test of the 6th EWCS (Final Report April-November 2018. GESIS Projektbericht. Version: 1.0.). GESIS - Pretestlabor. https://doi.org/10.17173/pretest72

  • Henriques, A., Silva, S., Severo, M., Fraga, S., & Ramos, E. (2019). The influence of question wording on interpersonal trust: A comparison in randomly equivalent groups. Methodology: European Journal of Research Methods for the Behavioral and Social Sciences, 15(2), 56-66. https://doi.org/10.1027/1614-2241/a000164

  • Johnson, T. P. (2014). Sources of error in substance use prevalence surveys. International Scholarly Research Notices, 2014, Article 923290. https://doi.org/10.1155/2014/923290

  • Katosh, J. P., & Traugott, M. W. (1981). The consequences of validated and self-reported voting measures. Public Opinion Quarterly, 45, 519-535. https://doi.org/10.1086/268685

  • Krosnick, J. A. (1991). Response strategies for coping with the cognitive demands of attitude measures in surveys. Applied Cognitive Psychology, 5(3), 213-236. https://doi.org/10.1002/acp.2350050305

  • LaPiere, R. T. (1934). Attitudes vs. actions. Social Forces, 13(2), 230-237. https://doi.org/10.2307/2570339

  • Meitinger, K. (2017). Necessary but insufficient: Why measurement invariance tests need online probing as a complementary tool. Public Opinion Quarterly, 81(2), 447-472. https://doi.org/10.1093/poq/nfx009

  • Meitinger, K., Behr, D., & Braun, M. (2019). Using apples and oranges to judge quality? Selection of appropriate cross-national indicators of response quality in open-ended questions. Social Science Computer Review. Advance online publication. https://doi.org/10.1177/0894439319859848

  • Petzold, K., & Wolbring, T. (2019). What can we learn from factorial surveys about human behavior? A validation study comparing field and survey experiments on discrimination. Methodology: European Journal of Research Methods for the Behavioral and Social Sciences, 15(1), 19-30. https://doi.org/10.1027/1614-2241/a000161

  • Reeve, B. B., Willis, G., Shariff-Marco, S. N., Breen, N., Williams, D. R., Gee, G. C., . . . Levin, K. Y., (2011). Comparing cognitive interviewing and psychometric methods to evaluate a racial/ethnic discrimination scale. Field Methods, 23(4), 397-419.

  • Reja, U., Manfreda, K. L., Hlebec, V., & Vehovar, V. (2003). Open-ended vs. close-ended questions in web questionnaires. Metodološki Zvezki, 19, 159-177.

  • Schahn, J., & Holzer, E. (1990). Studies of individual environmental concern: The role of knowledge, gender, and background variables. Environment and Behavior, 22(6), 767-786. https://doi.org/10.1177/0013916590226003

  • Schmidt, K., Gummer, T., & Roßmann, J. (2020). Effects of respondent and survey characteristics on the response quality of an open-ended attitude question in web surveys. Methods, data, analyses, 14(1), 3-34. https://doi.org/10.12758/mda.2019.05

  • Schuman, H., & Presser, S. (1981). Questions and answers: Experiments on question form, wording, and context in attitude surveys. New York, NY, USA: Academic.

  • Schuman, H., & Scott, J. (1987). Problems in the use of survey questions to measure public opinion. Science, 236(4804), 957-959. https://doi.org/10.1126/science.236.4804.957

  • Singer, E., & Couper, M. P. (2017). Some methodological uses of responses to open questions and other verbatim comments in quantitative surveys. Methods, data, analyses, 11(2), 115-134.

  • Sudman, S., Bradburn, N. M., & Schwarz, N. (1996). Thinking about answers: The application of cognitive processes to survey methodology. San Francisco, CA, USA: Jossey-Bass.

  • Tourangeau, R., Rips, L. J., & Rasinski, K. (2000). The psychology of survey response. Cambridge, United Kingdom: Cambridge University Press.

  • Tourangeau, R., & Yan, T. (2007). Sensitive questions in surveys. Psychological Bulletin, 133(5), 859-883. https://doi.org/10.1037/0033-2909.133.5.859

  • Verba, S., & Nie, N. H. (1987). Participation in America: Political democracy and social equality. Chicago, IL, USA: University of Chicago Press.

  • Zuell, C. (2016). Open-ended questions (GESIS Survey Guidelines). Mannheim, Germany: GESIS – Leibniz Institute for the Social Sciences. https://doi.org/10.15465/gesis-sg_en_002