Open survey questions are often used to evaluate closed questions. However, they can fulfil this function only if there is a strong link between answers to open questions and answers to related closed questions. Using reasons for non-voting reported in the German Longitudinal Election Study 2013, we investigated this link by examining whether the reported reasons for non-voting may be substantive reasons or ex-post legitimations. We tested five theoretically derived hypotheses about respondents who gave, or did not give, a specific reason. Results showed that (a) answers to open questions were indeed related to answers to closed questions and could be used in explanatory turnout models to predict voting behavior, and (b) the relationship between answers to open and closed questions and the predictive power of reasons given in response to the open questions were stronger in the post-election survey (reported behavior) than in the pre-election survey (intended behavior).
Survey research relies largely on closed questions because of their greater efficiency with respect to interviewing, coding, and analysis (
The second and third of these advantages of open questions, in particular, have attracted increasing attention in recent years. In the context of web probing (
The comparison between responses to open and closed questions can be done either experimentally or with a multivariate explanatory model. To date, only a few studies have undertaken such comparisons. Two early experimental studies compared the answers to closed and open questions in face-to-face surveys (
Another study evaluated the possibility of implementing open questions and closed questions simultaneously by using the two question formats to predict mental health (
Using another approach,
Our study uses an innovative methodology, which includes the random imputation of missing values, to assess the relationship between responses to open and closed questions. This relationship is explored in the area of non-voting behavior, a particularly sensitive topic that is especially interesting to examine because the open question about non-voting is asked as a follow-up to a closed question about voting behavior. This resembles a question series that is typically used in web probing. The case of non-voting behavior allows us also to distinguish between intended and reported behavior, which are both regularly measured in survey practice (
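The random imputation mentioned above can be illustrated with a minimal random hot-deck sketch; the function name, toy data, and the uniform draw from observed values are illustrative assumptions, not the paper's actual procedure:

```python
import random

def hot_deck_impute(values, rng=None):
    """Replace missing entries (None) with a random draw from the observed
    values of the same variable -- a minimal random hot-deck sketch."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    observed = [v for v in values if v is not None]
    return [v if v is not None else rng.choice(observed) for v in values]

# Toy 5-point attitude item with two missing answers (illustrative data):
item = [1, 3, None, 5, 2, None, 4]
completed = hot_deck_impute(item)
```

Observed answers are left untouched; each missing value is filled with a random draw from the observed distribution of the same item.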
At first glance, it seems obvious that there must be a strong relationship between responses to an open question and a related closed question. However, the relationship between the two questions is less obvious when—as in web probing—an open question is used as a follow-up question to better understand the answer to a closed question. In this case, it may be possible that the open answer reflects the reasons for the answer to the closed question. Alternatively, however, the response to the open question may merely be an ex-post legitimation of the answer to the closed question, where the answer to the closed question was given in an automatic processing mode without careful reasoning. In this case, a respondent has to construct an ex-post reason when answering the open question in order to fulfill the expectation that all actions are reasoned. As a result, the answer to the open question does not yield any further information about the true motivation behind the closed answer, because the main purpose of this answer was impression management (e.g., to give the false impression that the response to the closed question was well thought-out). From the perspective of the cognitive response process (
In order to distinguish between reasons for a behavior and ex-post legitimation, it is necessary to go beyond the pair of related open and closed questions and to use additional related variables that help the researcher to discriminate whether or not the responses to an open question report the substantive reasons for the behavior in question. Following this idea, voting versus non-voting seems to be a particularly useful example. Voting behavior is a well-established field of research in which many studies have been conducted, and it has been shown that intended and reported voting behavior can be predicted by theoretically well-founded explanatory variables (e.g.,
Important explanatory factors in the models employed in these studies are “satisfaction with democracy,” “voting norm,” “political interest,” and “party evaluation” (see
Behavioral questions can be asked in two ways in surveys: first, as a question requesting a report about a behavior that has already been executed; second, as a question about an intended behavior that will, or might, be executed (see
Based on the differences between intended and realized behavior (
For our analyses, we used pre- and post-election cross-sectional surveys that collected data on the 2013 German federal election, which was held on September 22, 2013. The surveys were conducted in the framework of the German Longitudinal Election Study (GLES); the data and documentation are publicly available for scientific use (
The pre-election survey was fielded between July 29 and September 21, 2013. A response rate of 32.1% (RR6, see
A closed question about the probability that the respondent would vote (pre-election survey) or about the respondent's voting recall (post-election survey) was, if applicable, immediately followed by an open question asking why the respondent was not going to vote or had not voted.
Reason | Intended % | n | Reported % | n | Exp. | Diff. | H0: %Int. = %Rep. | p | Conf.
---|---|---|---|---|---|---|---|---|---
*External reasons* | | | | | | | | |
Political system (ER) | 43.9 | 101 | 45.0 | 129 | + | -0.2 | -0.24 | .814 | No
Egotism of parties (ER) | 30.4 | 70 | 10.5 | 30 | + | 20.5 | 5.72 | < .001 | Yes
*Internal reasons* | | | | | | | | |
Political interest (IR) | 13.0 | 30 | 13.6 | 39 | - | -0.3 | -0.18 | .856 | No
Specific circumstances (IR) | 10.0 | 23 | 26.5 | 76 | - | -16.3 | -4.73 | < .001 | Yes
Others | 2.6 | 6 | 4.5 | 13 | O | -1.8 | -1.15 | .249 | Yes
This category covered dissatisfaction with the political system (e.g., “dissatisfied with the political system,” “politicians are incompetent”), as well as low political involvement and lack of influence (e.g., “My vote has no influence.” “My party has no chance.”). When asked about their intention not to vote, 43.9% of the respondents gave a reason in this category; when asked why they had not voted, 45.0% of the respondents gave such a reason.
This category covered low political interest and knowledge (e.g., "not interested in politics"). When asked about their intention not to vote, 13.0% of the respondents gave a reason in this category; when asked why they had not voted, such a reason was given by 13.6% of the respondents.
This category covered egotism on the part of politicians and parties (e.g., “politicians care only about themselves,” “empty campaign promises”). When asked why they did not intend to vote, 30.4% of the respondents gave a reason in this category; when asked why they had not voted, such a reason was cited by 10.5% of the respondents.
This category covered circumstances on Election Day (e.g., “sick,” “no time”). When asked why they did not intend to vote, 10.0% of the respondents gave a reason in this category; when asked why they had not voted, such a reason was cited by 26.5% of the respondents.
This category covered all reasons that were not covered by the other four categories (e.g., “Voting is against my religious beliefs.”). When asked why they did not intend to vote, 2.6% of the respondents gave a reason that did not fit into one of the four main categories; when asked why they had not voted, 4.5% of the respondents gave such a reason.
The open questions asked only for the most important reason for not voting. However, if respondents reported more than one reason, a maximum of three reasons were coded. The complete classification scheme used for our analysis can be found in
The categories “specific circumstances” and “political interest” were classified as internal reasons because they relate to respondents’ internal attitudes and values, whereas the categories “egotism in politics” and “political system” were classified as external reasons because they relate to external entities and events outside the respondent's self. As our research explored non-voting behavior, all items were coded in such a way that higher values indicated a negative attitude toward voting. Although the open and the closed questions did not match perfectly, the topics were very similar.
Both questionnaires included several questions on political knowledge, attitudes, and behavior, as well as sociodemographic questions. Of these questions, we selected four closed questions (items) to compare the answers to the open questions with established factors predicting voting. All four items were asked identically in the pre-election and the post-election surveys.
The dataset included the following question on satisfaction with democracy: “How satisfied or dissatisfied are you with democracy in general in Germany?” (Response categories: very satisfied [1], satisfied, neither satisfied nor dissatisfied, dissatisfied, very dissatisfied [5]).
The item on the voting norm was as follows: “In a democracy, it is the duty of every citizen to vote regularly.” (Response categories: strongly agree [1], slightly agree, neither agree nor disagree, slightly disagree, strongly disagree [5]).
Respondents were asked the following question on political interest: “How interested are you in politics in general?” (Response categories: very interested [1], interested, moderately interested, slightly interested, not interested [5]).
The questionnaire included the following item on the egotism of political parties: “Parties are only interested in votes, not in the opinions of the voters.” (Response categories: strongly agree [5], slightly agree, neither agree nor disagree, slightly disagree, strongly disagree [1]).
Item nonresponse was very low—under 8% for the open questions and under 2% for the closed questions. For instance, nonresponse (i.e., “don’t know” and “refusal”) for the closed question on voting participation was 1.1% in the pre-election survey and 0.2% in the post-election survey. Nonresponse for the open questions on non-voting was 4.1% in the pre-election survey and 7.6% in the post-election survey.
The categories of the open questions and the closed questions were selected in an iterative process to be as comparable as possible. Specifically, the category “political system” was linked to the closed questions “satisfaction with democracy” and “voting norm.” The category “political interest” was linked to the closed question “political interest,” and the category “egotism in politics” was linked to the closed question “egotism of parties.”
We employed the following analysis strategies to test our five research hypotheses. The first and third hypotheses were tested by comparing mean differences between the four closed questions. For each of the four questions, we compared the group of respondents who gave such a reason when answering the open question to respondents who did not give such a reason. The dependent variables were “satisfaction with democracy,” “voting norm,” “political interest,” and “egotism of parties.” We expected to find relatively small mean differences, because we compared only non-voters, who had a lower variance than the full sample on these four attitudinal questions.
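The group comparison described above can be sketched as a two-sample (Welch) t-test on the attitude scores of reason-givers versus non-givers; the data below are illustrative values, not taken from the GLES:

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for the difference in means of two groups
    (unequal variances allowed; `variance` is the sample variance)."""
    se = math.sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / se

# Toy 5-point attitude scores (higher = more negative attitude toward voting)
# for non-voters who did vs. did not mention a given reason -- illustrative.
gave_reason = [5, 5, 4, 5, 4, 5]
no_reason = [4, 3, 5, 4, 4, 3]
t = welch_t(gave_reason, no_reason)  # positive t: reason-givers score higher
```

A positive statistic would correspond to the expected pattern that respondents who name a reason hold the corresponding attitude more strongly.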
The second and fourth research hypotheses were tested using a classic behavioral voting model (see
In order to test the fifth research hypothesis about the reasons for not voting that were given before and after the election, we compared the answers to the open question in the pre-election survey with those to the open question in the post-election survey. The significance of each percentage difference was tested by using the
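One standard way to test such a percentage difference is a two-proportion z-test with a pooled proportion (an assumption here, since the exact test is only cited above); the group sizes 230 and 286 below are back-calculated from the reported counts and percentages for “egotism of parties” and are therefore approximate:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for H0: p1 == p2, using the pooled proportion."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# "Egotism of parties": 70 pre-election vs. 30 post-election mentions;
# denominators inferred from the reported 30.4% and 10.5% (approximate).
z = two_proportion_z(70, 230, 30, 286)
```

With these approximate inputs, the statistic comes out close to the value reported for this category.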
Our first hypothesis postulated a strong relationship between the established factors of voting behavior and the answers to the open questions. When comparing the means of the four factors “democracy,” “voting norm,” “political interest,” and “egotism” among respondents who gave a reason in that specific category to the open question and respondents who did not give such a reason, we found that only one of the four mean differences was significant (
Variable | No reason given: M^a | n | Reason given: M^a | n | Diff. | Exp. | H0: M(no) = M(yes) | p | Conf.
---|---|---|---|---|---|---|---|---|---
*Intended* | | | | | | | | |
Democracy | 3.30 | 102 | 3.28 | 101 | .02 | - | 0.20 | .841 | No
Voting norm | 3.66 | 102 | 3.90 | 101 | -.24 | - | -1.45 | .148 | No
Pol. interest | 4.13 | 173 | 4.37 | 30 | -.24 | - | -1.37 | .174 | No
Egotism | 4.26 | 133 | 4.77 | 70 | -.51 | - | -4.18 | < .001 | Yes
*Reported* | | | | | | | | |
Democracy | 2.90 | 141 | 3.23 | 129 | -.33 | - | -2.70 | .008 | Yes
Voting norm | 3.12 | 141 | 3.81 | 129 | -.69 | - | -4.33 | < .001 | Yes
Pol. interest | 3.90 | 231 | 4.46 | 39 | -.56 | - | -3.54 | < .001 | Yes
Egotism | 4.08 | 340 | 4.57 | 30 | -.49 | - | -2.77 | .006 | Yes

^a All closed questions had five response categories (coded 1 to 5). The items were coded in such a way that a higher value indicated a negative attitude toward the target issue.
Building on the relationship between answers to open questions and established factors of voting, the second and fourth hypotheses postulated that the prediction of voting behavior by established factors is more accurate if a reason for this behavior is given in response to the open questions than if no reason is given. In all eight comparisons, McFadden’s pseudo-R² was indeed higher when a corresponding reason was given to an open question (“Model B”) than when no such reason was given (“Model A”). Looking at the odds ratios of the items that corresponded to the reasons given in the open answers, five of the eight effects were significantly higher (
Variable | Model 0 (all): OR | p | R² | Model A (no reason given): OR | p | R² | Model B (reason given): OR | p | R² | Exp. | H0: effect(A) = effect(B) | p | Conf.
---|---|---|---|---|---|---|---|---|---|---|---|---|---
*Intended* | | | | | | | | | | | | |
Democracy | 1.22 | .074 | .405 | 1.27 | .097 | .352 | 1.15 | .429 | .469 | - | 0.45 | .652 | No
Voting norm | 2.34 | < .001 | .405 | 2.18 | < .001 | .352 | 2.58 | < .001 | .469 | - | -1.19 | .275 | No
Pol. interest | 2.77 | < .001 | .405 | 2.53 | < .001 | .395 | 5.89 | < .001 | .511 | - | -2.05 | .041 | Yes
Egotism | 1.87 | < .001 | .405 | 1.60 | .001 | .357 | 3.25 | < .001 | .527 | - | -2.31 | .021 | Yes
*Reported* | | | | | | | | | | | | |
Democracy | 1.13 | .182 | .324 | 1.05 | .638 | .195 | 1.23 | .181 | .525 | - | -0.82 | .413 | No
Voting norm | 1.97 | < .001 | .324 | 1.72 | < .001 | .195 | 2.59 | < .001 | .525 | - | -3.16 | .002 | Yes
Pol. interest | 2.50 | < .001 | .324 | 2.29 | < .001 | .292 | 5.21 | < .001 | .574 | - | -2.24 | .025 | Yes
Egotism | 1.55 | < .001 | .324 | 1.45 | < .001 | .298 | 4.06 | < .001 | .636 | - | -2.31 | .021 | Yes
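The pseudo-R² values compared across the models are McFadden's measure, defined as one minus the ratio of the fitted model's log-likelihood to that of the intercept-only (null) model; the log-likelihood values below are illustrative, not taken from the paper's models:

```python
def mcfadden_r2(ll_model, ll_null):
    """McFadden's pseudo-R-squared: 1 - LL(model) / LL(null).

    Both arguments are log-likelihoods (negative numbers); a value near 0
    means the predictors add little beyond the intercept-only model.
    """
    return 1 - ll_model / ll_null

# Illustrative log-likelihoods (assumed values for demonstration only):
r2 = mcfadden_r2(ll_model=-120.0, ll_null=-200.0)
```

A model whose log-likelihood is closer to zero than the null model's yields a larger pseudo-R², which is the pattern reported for Model B versus Model A above.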
The fifth hypothesis postulated that respondents give more external reasons before an election and more internal reasons after an election.
The strong relationship between the established factor “egotism” and the answers to the open question before the election (see
The present study supports the notion that respondents do, in fact, give substantive reasons when answering an open question on non-voting behavior. The quality of the answers was evaluated by testing five hypotheses. Results showed, first, that the reasons given to the open questions had strong links (63% significant) to corresponding established factors of voting behavior (e.g., “political interest” and “voting norm”). Second, these links were stronger after the election (100% significant) than before the election (25% significant). Third, the answers to the open questions increased the relationship between established factors and voting behavior (63% significant). This was true for three of the four factors (i.e., “political interest,” “voting norm,” and “egotism”). Only the factor “democracy,” which also had the weakest relationship to the closed questions, did not have this predictive capability. This finding suggests that this factor was used more as an ex-post justification than as a substantive reason for not voting. Fourth, the predictions of voting behavior were again more accurate after the election (75% significant) than before the election (50% significant). Fifth and finally, respondents gave significantly more external reasons (i.e., “egotism of parties”) before the election and significantly more internal reasons (i.e., “specific circumstances” such as “I did not have time,” or “I was sick.”) after the election.
The findings of this study are in line with those of
Our research studied non-voting, a sensitive behavior that was reported by only about 15% of the respondents in the pre- and post-election surveys. Future studies could replicate our approach using a sensitive behavior that is reported by a larger subgroup of respondents (e.g., substance use (
Another limitation was that we could use only cross-sectional data. Future studies could investigate the same research question using a longitudinal study design. Although the cross-sectional nature of our data does not limit our conclusion regarding open questions in general, it does affect, to a certain extent, our conclusion with respect to the comparison of the pre-election and post-election surveys.
Our study does not allow us to verify the reported reasons at the respondent level. Future studies could use a mixed-methods design that includes qualitative methodology to obtain more in-depth knowledge on this issue. For instance, such a study design could combine a standardized interview with cognitive interviews in order to verify and understand the answers of respondents (see
Within our study, we could not validate whether the responses to the open questions about non-voting reflect the causal mechanism. Even though voting behavior is considered to be a deliberate behavior, so that respondents are likely to be aware of the reasons behind their behavior, respondents still may not give substantive reasons for the voting behavior when asked directly. Our study addressed this limitation in two ways. First, the observed differences between reported and intended voting behavior may suggest that respondents give more substantive reasons when asked about reported behavior, which could be seen as evidence that at least some reasons are based on substantive motivations for the voting behavior. Second, in the regression models, we compared respondents who gave a specific reason with respondents who did not give this reason. The higher explanatory power within the group of respondents who gave that reason may again suggest that at least some reasons are based on substantive motivations for the voting behavior.
A further shortcoming of our study was that only one example of a behavioral open question was examined. Thus, the findings of our study can only be seen as a small piece of evidence that contributes to the comparison of open and closed questions in surveys. Future studies could replicate our approach in other countries or with cross-national datasets in order to investigate the generalizability of our findings. It would also be interesting to explore differences regarding attitudinal, behavioral, and factual questions, as well as regarding the sensitivity of the questions. Only when more cumulative evidence along these lines has been collected can reliable conclusions about open questions in general be drawn.
Open questions have well-known advantages: for example, respondents are not influenced during the cognitive response process by specified response categories and are not obliged to select a category that does not completely match their response. Moreover, open questions increase the chance of obtaining new insights into the target field of research. In addition to these advantages, our study shows that the answers to open questions about behavior are (at least partly) based on substantive reasons, are strongly linked to the answers to related closed questions, and can be used in explanatory models to predict related behavior. It therefore furnishes evidence in support of approaches such as web probing (e.g.,
The data for this article are available for scientific use (for access options see
For this article the following supplementary materials are available:
The German Longitudinal Election Study (GLES) 2013: Data, questionnaires, codebook, study description.
Coding categories for reasons for non-voting.
Logistic regressions predicting voting behavior before and after the election (related to
Besides the relationship to closed questions, open questions must fulfil various data quality criteria such as a low rate of item nonresponse, a high rate of substantive response, and sufficient response length. These aspects have been studied extensively in previous research (e.g.,
The percentage of people who do not vote is typically underestimated in both pre-election and post-election surveys (e.g.,
Respondents who reported that they intended to vote (pre-election) or that they had voted (post-election) were asked in a closed follow-up question for which party they intended to vote or had voted, and then in an open question why they intended to vote or had voted for that party. As these open questions are not comparable to the open questions about why the respondent did not intend to vote or had not voted, we did not use them in the present analyses. In order to be comparable, open questions on voting would have to have asked why the respondent intended to vote (pre-election) or had voted (post-election).
The German-language classification scheme is part of the project documentation and can be retrieved from
Under the assumption that the four tests for the pre-election and post-election surveys are independent, the probability of obtaining at least one significant result from four tests on the 5% level is 18.5%.
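The 18.5% figure follows directly from the complement rule for independent tests:

```python
# Probability of at least one significant result among k independent tests
# at significance level alpha: 1 - (1 - alpha)**k.
alpha, k = 0.05, 4
p_any = 1 - (1 - alpha) ** k  # → 0.18549375, i.e. about 18.5%
```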
The authors have no funding to report.
The authors have declared that no competing interests exist.
The authors have no support to report.