The impact of fan identification and notification on survey response and data quality.
Jordan, Jeremy Scott; Brandon-Lai, Simon; Sato, Mikihiro; et al.
Introduction
In recent years, the use of online methods for data collection has
increased considerably in survey research (Dillman, Smyth, &
Christian, 2009; Pan, 2010; Sue & Ritter, 2007). This trend is
noticeable within the fields of sport management and marketing,
exemplified by four mainstream journals in the field: Sport Marketing
Quarterly (SMQ), Journal of Sport Management (JSM), Sport Management
Review (SMR), and European Sport Management Quarterly (ESMQ) (Shilbury,
2012). From 2009-2012, a total of 223 articles that used survey methods
for data collection were published in these four journals, of which 91
(40.8%) utilized online data collection methods.
As the popularity of online survey research has grown, there has
been much discussion of the benefits and challenges associated with this
method of data collection. One issue that has received considerable
attention is that of topic salience, which is defined as the relative
importance of the research topic to the study population (Anseel,
Lievens, Schollaert, & Choragwicka, 2010; Baruch, 1999; Turner,
Jordan, & Sagas, 2006). This alignment between the research topic and
the interests of the target population has been shown to increase
participation and involvement among survey participants (Anseel et al.,
2010). There is substantial evidence to suggest that when the focus of
the research study is highly salient to potential participants, they are
more likely to respond compared to individuals who are less interested
(e.g., Barrios, Villarroya, Borrego, & Olle, 2011; Groves, Presser,
& Dipko, 2004; Sheehan & McMillan, 1999). In fact, participant
interest in the content of the study has been shown, at times, to be the
most important factor impacting response rates (Baruch, 1999). This has
potentially major implications for the integrity of data obtained
through online collection methods, as it is possible that the responses
of individuals with greater psychological connection to the survey topic
will be over-represented.
With specific reference to the aforementioned sport management
publications (SMQ, JSM, SMR, and ESMQ), over 60% of the articles that
used online data collection selected sport and event consumers (e.g.,
sport fans, sport event participants) as their target population. There
is considerable evidence to suggest that sport consumers form unique
relationships with associated products and organizations, leading to
behavioral outcomes that differ from those of consumers of other sources
of entertainment (Shilbury, Quick, Westerbeek, & Funk, 2009; Sutton,
McDonald, Milne, & Cimperman, 1997). In spite of these unique
characteristics, there is little empirical evidence regarding the
impact of topic salience on survey response among sport consumers. In
other words, given the affective connection sport consumers often have
with sport products (e.g., teams or events), it is important to
determine if the degree to which someone identifies with a sport product
combined with the content of a research study impacts willingness to
participate.
In addition to topic salience, response rate and response quality
have also been identified as being of paramount importance when
conducting survey research (e.g., Kwak & Radler, 2002; Munoz-Leiva,
Sanchez-Fernandez, Montoro-Rios, & Ibanez-Zapata, 2010). While
response rate simply refers to the number of people who responded to the
survey in comparison to the total number who received it, response
quality is typically measured through item nonresponse, or conversely,
item completion rates (Barrios et al., 2011; Deutskens, de Ruyter, &
Wetzels, 2006; Galesic & Bosnjak, 2009; Kwak & Radler, 2002;
Munoz-Leiva et al., 2010). When the average number of questions
respondents leave unanswered is small, the response quality for data
obtained is considered high (Schaefer & Dillman, 1998), and vice
versa.
Both response rate and quality are vital in ensuring that the data
collected is representative of the intended sample, and one of the most
commonly utilized strategies for maximizing both is that of notification
(Cook, Heath, & Thompson, 2000). This process alerts study
participants that a survey either will be or has been sent to them,
using various combinations of pre- and post-notification. Therefore, the
purpose of this study is to investigate the relationship between both
topic salience and notification on the two key metrics of survey data
collection (response rate and response quality) in online surveys among
sport consumers.
Literature Review
Research examining the relationship between topic salience, survey
participation, and response quality with online data collection is
limited. Given its likely influence on survey response patterns,
sport marketing researchers would benefit from increased understanding
of how respondent characteristics are related to topic salience (Anseel
et al., 2010; Groves, Singer, & Corning, 2000). What distinguishes
sport consumers from general consumers is their attitudinal
involvement with products and organizations (Shilbury et al., 2009;
Sutton et al., 1997), which necessitates specific examination of this
group as subsequent behavior will be subject to levels of emotional
attachment and identification with sport products or organizations.
Despite this, there is a lack of empirical evidence of such
relationships within the sport marketing literature. Therefore, there is
value in understanding how topic salience would influence the survey
response pattern among sport consumers in sport marketing research. The
following section reviews relevant literature to develop research
questions for the present study.
Topic Salience
Topic salience is the importance of the survey's subject
matter to those who are participating (Lavrakas, 2008), and has been
identified as an important factor in determining willingness to respond
(Anseel et al., 2010; Baruch, 1999; Sheehan & McMillan, 1999; Turner
et al., 2006). Achieving adequate response rates can at times be
challenging as recipients will perceive a certain level of burden
associated with completing and submitting surveys regardless of whether
they are in mail or online formats. When an individual believes the
subject matter of a survey to be of high personal relevance or of
considerable interest, this can offset some of that burden. Conversely,
when topic salience is low, a survey is considerably less likely to
evoke a response because participants have less motivation to complete
it (Dillman,
Eltinge, Groves, & Little, 2002; Groves et al., 2004; Marcus,
Bosnjak, Lindner, Pilischenko, & Schutz, 2007).
Theoretically, topic salience operates in a similar fashion to the
personal relevance of an issue, object, or person, which increases the
likelihood that an individual will put more cognitive effort into
completing the survey. Prior research indicates that personal relevance
reflects the strength of an individual's attitude toward a topic.
This level of interest should increase the level of cognitive
elaboration that occurs when processing information (Petty &
Cacioppo, 1986). For example, individuals who demonstrated strong
identification with a professional team recalled more facts and
exhibited more positive thoughts after reviewing newspaper articles
about the team compared to individuals who were less identified (Funk
& Pritchard, 2006).
Within the present context, the influence of fan identification on
survey response is of particular interest. Fan identification is defined
as "the personal commitment and emotional involvement customers
have with a sport organization" (Sutton et al., 1997, p. 15). These
authors suggest that sport can differ from other types of entertainment
because individuals can develop a stronger connection with a sport brand
that involves high levels of emotional attachment and identification.
Kunkel, Funk, and Hill (2013) note that these brands often refer to
specific teams rather than to the sports in general, and highly involved
consumers are more likely to watch that team play in live games
(Armstrong, 2002), purchase merchandise (Wann & Branscombe, 1993),
and evaluate sponsors more favorably (Filo, Funk, & O'Brien,
2010). In light of the effect that an individual's psychological
connection has on these behaviors, it is plausible that an
individual's degree of fandom could impact survey participation in
some way. This might be especially true in situations where individuals
are being asked to respond to surveys regarding preferred sport leagues,
teams or players.
Consequently, it would seem prudent to use fan identification as a
proxy for topic salience and determine any relationships with survey
participation and response quality. Despite the lack of empirical
evidence, we anticipate that participants with stronger levels of
identification with a sport team would be more likely to participate in
survey research and provide responses of higher quality when the content
of the research project focuses on that sport team. This leads to the
following two research questions:
RQ1: What is the relationship between fan identification and survey
participation with online data collection?
RQ2: What is the relationship between fan identification and
response quality with online data collection?
Notification and Survey Response
The emergence of online data collection methods has led to numerous
studies comparing online and mail surveys in different ways, with the
comparison of response rates between these two survey modes being most
prevalent. Past studies have generally reported that online surveys
produce a lower response rate than traditional mail surveys (Cole, 2005;
Crawford, Couper, & Lamias, 2001; Kwak & Radler, 2002), with
Shih and Fan's (2008) meta-analysis revealing a difference of 10%.
Subsequent investigation of different methods to increase online
response rate has shown that the use of notification, both pre- and
post-notification, is one of the most effective strategies to do so
(Cook et al., 2000; Kent & Turner, 2002; Munoz-Leiva et al., 2010).
Pre-notification provides a positive and timely notice that the
participant will be receiving a request to take part in a research study
by completing and returning a survey (Dillman et al., 2009) and is
associated with a higher response rate in online survey studies (Hart,
Brennan, Sym, & Larson, 2009). However, examination of the
relationship between pre-notification and response rate among sport
consumers has been extremely limited. In one of the few studies, Kent
and Turner (2002) reported that e-mail pre-notification was effective
for increasing response rates, although this study utilized mail-based
data collection techniques rather than online methods, so it is unclear
if the same finding would be evident with online survey methods.
The use of post-notification, a follow-up message sent after the
initial deployment of a survey, has also been recognized as a way to
increase survey response (Munoz-Leiva et al., 2010; Roth & BeVier,
1998; Shannon & Bradshaw, 2002). In a mail-based survey, Roose,
Lievens, and Waege (2007) found a 12% increase in response rate as a
result of sending a reminder to participants who had been selected as
part of a sample; however, post-notification strategies in online
surveys may not be as effective (Shih & Fan, 2008). Kaplowitz,
Hadlock, and Levine (2004) examined the response rate of online surveys
with a combination of pre- and post-notification messages, finding that
pre-notification messages elicited significantly higher response rates
than no message. Despite this, the positive effect of the
post-notification message was limited to those who did not receive a
pre-notification message. Given the unique nature of the aforementioned
psychological connections formed by sport consumers (Shilbury et al.,
2009; Sutton et al., 1997), it is important to ascertain the most
effective combination of notification strategies to maximize response
rate when attempting to reach this group. Thus, the following research
question is put forth:
RQ3: What is the relationship between notification techniques and
response rate with online data collection?
Notification and Response Quality
While response rate is an important metric of overall survey
response (Barrios et al., 2011; Munoz-Leiva et al., 2010), it is not
representative of data quality. Although response quality has not
received as much attention within the literature (Schmidt, Calantone,
Griffin & Montoya-Weiss, 2005), it has important implications for
the legitimacy of subsequent data. One of the most frequently employed
criteria to examine data quality is the item completion rate (Barrios et
al., 2011; Deutskens et al., 2006; Galesic & Bosnjak, 2009;
Munoz-Leiva et al., 2010), where the quality of data obtained is
defined, in part, by the percentage of items left unanswered (Schaefer
& Dillman, 1998). Participants' decisions regarding whether or
not to answer survey items are impacted by a number of factors including
their ability to understand the questions and retrieve relevant
information as well as their motivation to provide a response (Beatty
& Herrmann, 2002).
The convenient response formats common with online surveys have
been associated with higher rates of item completion compared with other
methods of data collection (Kwak & Radler, 2002; Schaefer &
Dillman, 1998), yet the limited work examining the impact of
notification techniques on response quality has not established a clear
relationship (Munoz-Leiva et al., 2010). Deutskens, de Ruyter, Wetzels,
and Oosterveld (2004) suggested that the timing of post-notification
will not influence response quality and found no significant difference
based on the use of early and late notification. Conversely, Diaz de
Rada (2005) argued that early respondents were more likely to exert the
necessary attention and effort than late respondents, resulting in lower
response quality among the latter. As an extension of this
work, Munoz-Leiva et al. (2010) proposed an inverse relationship between
post-notification and response quality; however, results of their work
failed to support their premise as no relationship was evident.
Given the limited research that has examined this relationship,
including the lack of consideration given to pre-notification
techniques, the following research question is
presented:
RQ4: What is the relationship between notification techniques and
response quality with online data collection?
Method
Participants
The population in this study consisted of 23,569 email subscribers
to a daily electronic sport newsletter distributed by a newspaper
located in the Northeastern portion of the United States. A total of
1,884 usable surveys were returned for an overall response rate of 8.0%.
Demographic characteristics of participants revealed that the sample was
predominantly male (82%) and Caucasian (86%) with a median age of 48
years, and most (70%) of the respondents possessed an undergraduate or
graduate degree.
Procedure
Data was obtained via an online survey in which each participant was
sent an email message that explained the purpose of the study, provided
information on a prize incentive for completing the survey, and included
a link directing them to the website hosting the survey. The incentive, a
random drawing for sport-related merchandise, was made available to all
participants of the study regardless of the notification group to which
they were assigned, eliminating the potential for the incentive to bias
response rates within the research design. Initially, the associations
between fan identification, response rate, and response quality were
examined among respondents.
In order to answer the research questions examining the
relationship between notification technique and response rate and
quality, a 2 x 2 factorial design was employed with pre-notification
(Yes/No) and post-notification (Yes/No) as the two factors. For this
study, the pre-notification and post-notification messages sent to
participants contained the same content, with the exception that the
post-notification message indicated that the participant had recently
received an email invitation and link to the online survey. Participants
were randomly assigned to one of four groups. Group 1 (n = 5,892)
received pre-notification of the survey (pre only); Group 2 (n = 5,892)
received both pre-notification and post-notification (pre/post); Group 3
(n = 5,892) received post-notification (post only); and Group 4 (n =
5,893) did not receive notification of any type (control). No
statistically significant differences were identified between the four
groups (pre, pre/post, post, and control) on demographic characteristics
such as gender, ethnicity, age, or level of education, consistent with
successful random assignment of participants to each group. Additionally, no
between-group differences were observed for fan identification.
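As a simple illustration of this assignment procedure, the sketch below splits a hypothetical subscriber list of 23,569 email addresses into four groups of the sizes reported above. It is a simplified stand-in under that assumption, not the actual randomization routine used in the study.

```python
# Illustrative sketch (not the authors' code): randomly assigning a subscriber
# list to the four notification conditions of a 2 x 2 factorial design.
# The subscriber addresses generated below are hypothetical.
import random

def assign_groups(subscribers, seed=42):
    """Shuffle the population and split it into four near-equal groups."""
    rng = random.Random(seed)
    pool = list(subscribers)
    rng.shuffle(pool)
    quarter = len(pool) // 4
    return {
        "pre_only": pool[:quarter],                  # Group 1: pre-notification only
        "pre_post": pool[quarter:2 * quarter],       # Group 2: pre- and post-notification
        "post_only": pool[2 * quarter:3 * quarter],  # Group 3: post-notification only
        "control": pool[3 * quarter:],               # Group 4: no notification
    }

groups = assign_groups(f"subscriber_{i}@example.com" for i in range(23569))
print({name: len(members) for name, members in groups.items()})
# Expected sizes: 5892, 5892, 5892, 5893 (the remainder falls to the control group)
```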
One week after the deployment of the survey (i.e., 10 days after
pre-notification), Groups 2 (pre/post) and 3 (post only) were sent a
reminder email message (i.e., post-notification). The reminder message
was only sent to participants in Group 2 and Group 3 who had not
completed the survey prior to the date of post-notification. For all
groups, data collection was closed two weeks after the initial
deployment of the survey.
After the completion of data collection, all nonrespondents were asked
to participate in a shorter version of the survey that contained five
items intended to assess the relationship between fan identification and
survey participation. This shortened survey contained four demographic
items (gender, age, zip code, and education) and one item that measured
fan identification. Previous studies have used comparisons between early
and late responders to look at nonresponse bias (Dooley & Lindner,
2003; Jordan, Walker, Kent, & Inoue, 2011), a method based upon the
assumption that late respondents are most similar to nonrespondents
(Armstrong & Overton, 1977; Miller & Smith, 1983). In order to
provide a more rigorous proxy for nonrespondents to the original survey,
a second, shortened survey was distributed online to those who had not
completed the original one to ascertain fan identification scores. A
total of 417 responses were obtained from this group.
Measures
All four groups were linked to a survey that included 49 items that
measured demographic, psychographic and behavioral variables related to
being a fan of collegiate and/or professional sport teams. The data
obtained was used to answer the four research questions developed for
the study related to topic salience, response rate, and response
quality.
Fan Identification: To assess the impact of topic salience on
survey participation and data quality, fan identification was measured
with five items developed by Mael and Ashforth (1992). This scale was
modified for the current study by inserting the name of the team each
participant identified as their "favorite team" instead of
"name of school" as in the original scale. A Likert scale of 1
= Strongly Disagree to 5 = Strongly Agree was used to record responses.
Based on previous research, this scale represents a sound measure of
participants' identification with a team (Gwinner & Swanson,
2003) and serves as an appropriate proxy for topic salience. For the
shortened version of the survey completed by nonrespondents, one of the
five items ("If a story in the media criticized the team, I would
feel embarrassed") was included in the questionnaire so that the
relationship between fan identification and survey participation could
be examined for respondents and nonrespondents.
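As an illustration only, the sketch below scores the five-item measure under the assumption that the scale score is the mean of the item responses; the responses shown are invented, and the function name is ours rather than part of the original scale.

```python
# Hypothetical sketch: scoring the five-item fan identification measure
# (Mael & Ashforth, 1992), assuming the scale score is the mean of the items.
# Responses are recorded on a 1-5 Likert scale; the values below are invented.
def fan_identification_score(item_responses):
    if len(item_responses) != 5:
        raise ValueError("The full scale consists of five items.")
    return sum(item_responses) / len(item_responses)

print(fan_identification_score([3, 2, 4, 3, 2]))  # -> 2.8
```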
Response Rate: The current study measured response rate as the
number of returned surveys (completed and partially completed) divided
by the total number of survey requests sent out, which is defined as
Response Rate 2 (RR2) by the American Association for Public Opinion
Research (AAPOR, 2009). The AAPOR identifies six methods to calculate
response rate depending on the purpose of the study, with RR2 being a
common method used in survey research (e.g.,
Abraham, Maitland, & Bianchi, 2006; Curtin, Presser, & Singer,
2005; Greenlaw & Brown-Welty, 2009; Westrick & Mount, 2008).
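For clarity, the following minimal sketch computes RR2 as defined above for Group 3, using the counts later reported in Table 2; the function name and code are illustrative rather than part of the AAPOR standard.

```python
# Sketch of the response-rate metric as defined in this study (AAPOR RR2):
# returned surveys (complete and partial) divided by total requests sent.
def response_rate_rr2(returned, requests_sent):
    """RR2 as used here: (complete + partial returns) / total survey requests."""
    return returned / requests_sent

# Group 3 (post-notification only), from Table 2: 629 returned out of 5,892 sent.
print(round(response_rate_rr2(629, 5892) * 100, 1))  # -> 10.7 (%)
```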
Response Quality: Response quality was determined based on the item
completion rate, calculated as the number of completed items divided by
the total number of items. The item completion rate is one of the most
frequently employed criteria to examine response quality (e.g.,
Deutskens et al., 2006; Munoz-Leiva et al., 2010; Schaefer &
Dillman, 1998). For the present study, the item completion rate was
calculated based on 49 common items presented to all four groups.
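A brief illustrative sketch of this calculation is shown below; the response record is invented, with None marking an unanswered item.

```python
# Sketch of the response-quality metric: the share of the 49 common items a
# respondent answered. `answers` is a hypothetical record of one respondent.
def item_completion_rate(answers, total_items=49):
    answered = sum(1 for value in answers if value is not None)
    return answered / total_items

example = [4, 5, None, 3] + [2] * 44 + [None]   # 47 of 49 items answered
print(round(item_completion_rate(example) * 100, 1))  # -> 95.9 (%)
```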
Data Analysis
Independent-samples t-tests were conducted to examine fan
identification between respondents and nonrespondents, while bivariate
correlation was performed to evaluate the association between fan
identification and response quality. Chi-square analysis was used to
detect differences in response rate among the four groups that received
differing combinations of notification (including the control group). For
the analysis of notification strategy and response quality, preliminary
checks revealed that the assumptions of normality and homogeneity of
variances between the groups were violated. Therefore, the Kruskal-Wallis
test, the nonparametric equivalent of ANOVA, was employed because it does
not assume normality or equal variances of the dependent variable and can
provide more robust results than ANOVA when group sample sizes are
unequal (Field, 2009).
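For readers wishing to reproduce this analytic approach, the sketch below illustrates the same family of tests using Python's scipy.stats (the original analyses were presumably conducted in SPSS, given the citation of Field, 2009). The fan identification and completion-rate vectors are simulated placeholders, and Pearson's r stands in for the bivariate correlation; only the response-rate contingency table uses the counts reported in Table 2.

```python
# Illustrative sketch of the analytic approach; not the authors' code.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Independent-samples t-test: fan identification of respondents vs. nonrespondents
# (simulated placeholder data with the study's sample sizes).
fan_id_resp = rng.integers(1, 6, size=1884).astype(float)
fan_id_nonresp = rng.integers(1, 6, size=417).astype(float)
t_stat, p_t = stats.ttest_ind(fan_id_resp, fan_id_nonresp)

# Bivariate correlation: fan identification vs. item completion rate.
completion = rng.uniform(0.5, 1.0, size=1884)
r, p_r = stats.pearsonr(fan_id_resp, completion)

# Chi-square test of response rate across the four notification groups,
# using the respondent/nonrespondent counts from Table 2.
observed = np.array([
    [450, 5892 - 450],   # Group 1 (pre)
    [421, 5892 - 421],   # Group 2 (pre/post)
    [629, 5892 - 629],   # Group 3 (post)
    [384, 5893 - 384],   # Group 4 (control)
])
chi2, p_chi, dof, _ = stats.chi2_contingency(observed)
n = observed.sum()
cramers_v = np.sqrt(chi2 / (n * (min(observed.shape) - 1)))  # effect size

# Kruskal-Wallis test of response quality (item completion rate) by group.
h_stat, p_h = stats.kruskal(completion[:450], completion[450:871],
                            completion[871:1500], completion[1500:])
print(t_stat, r, chi2, cramers_v, h_stat)
```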
Results
To examine the relationship between fan identification and survey
participation, data from the initial survey (respondents) and the
shortened survey sent to nonrespondents were compared. An
independent-samples t-test revealed no statistically significant
difference on the fan identification item completed by both respondents
(M = 2.41, SD = 1.00) and nonrespondents (M = 2.41, SD = .99), t(2112) =
-.00, p = 1.00, indicating that fan identification did not influence
survey participation in the present study. Additionally, no differences
were found between respondents and the nonrespondent group across the
four demographic items included in the shortened survey (gender, age,
zip code, and education).
Data from those who completed the original survey indicated that
fan identification was positively associated with response quality (p =
.04). However, the correlation coefficient between the two variables
was small (γ(1716) = .05). The mean score of the five fan
identification items was 2.89 (SD = .89), ranging from 2.83 (Group 4) to
2.92 (Group 3). There was no statistically significant difference in fan
identification among the four groups (Table 1).
Response rates for Groups 1 (pre), 2 (pre/post), 3 (post) and 4
(control) are summarized in Table 2. As can be seen, Group 3 had the
highest response rate at 10.7%. Chi-square analysis revealed a
significant difference among the four groups (χ²(3) = 81.89,
Cramer's V = .06, p < .01). Post-hoc analysis showed that
the response rate of Group 3 (post) was statistically higher than that
of Group 1 (pre) (χ²(1) = 32.69, Cramer's V = .05, p < .01),
Group 2 (pre/post) (χ²(1) = 45.23, Cramer's V = .06, p < .01),
and Group 4 (control) (χ²(1) = 64.87, Cramer's V = .07, p < .01).
Additionally, the response rate of Group 1 (pre) was statistically
higher than that of Group 4 (control) (χ²(1) = 5.63, Cramer's V = .02,
p = .02). However, the effect sizes were small, as Cramer's V ranged
from .02 to .07. No significant difference in response rate was
identified between Group 2 (pre/post) and Group 4 (control)
(χ²(1) = 1.83, Cramer's V = .01, p = .18).
Descriptive results of response quality across the four groups are
presented in Table 3. The item completion rate for each group ranged
from 90.0% (Group 3) to 92.5% (Groups 1 and 2). The Kruskal-Wallis test
indicated no significant differences between groups (H(3) = 2.54, p =
.47), suggesting that notification did not have an effect on response
quality.
Discussion
This study examined the influence of fan identification and
notification on two metrics commonly used to measure survey response:
response rate and response quality. The current research represents an
important extension of previous work to better understand whether
mail-based methods designed to enhance survey response are equally
effective for online survey research in sport marketing and,
specifically, with sport consumers.
Topic Salience
Overall, findings revealed that topic salience as measured by fan
identification did not influence survey response. No relationship was
observed between fan identification and survey participation. A
significant correlation was identified between fan identification and
response quality; however, the low correlation coefficient between the
two variables suggests that caution should be used when considering this
finding. Additionally, a significant relationship was demonstrated
between notification and response rate while no relationship was found
between notification and response quality.
In contrast to previous work on topic salience (Anseel et al.,
2010; Baruch, 1999; Sheehan & McMillan, 1999; Turner et al., 2006),
no evidence of a relationship with survey participation was observed. In
the present study, no difference in the measure of fan identification
was found between respondents and those that did not participate in the
initial data collection. Prior work on the impact of topic salience on
data collection through survey methods would suggest that respondents
who scored higher on fan identification would have been more willing to
participate in the study (e.g., Anseel et al., 2010; Sheehan &
McMillan, 1999) as their psychological connection to the subject matter
would provide an additional motivation that is not present for those
with lower scores. Conversely, nonrespondents would have scored lower on
this measure, would have had less interest in a study on sport fandom,
and therefore would have been less likely to participate. One potential
outcome of this would be the overrepresentation of highly identified
fans in the sample, which could compromise the integrity of
the results. However, in the present study this potential bias was not
evident as fan identification scores did not differentiate respondents
from nonrespondents.
The contrast between the findings of the present study and the
prior investigations of topic salience and survey response raises two
distinct possibilities. First, other factors besides the subject matter
of the survey may have been considered when individuals made
determinations about whether to participate or demonstrate the cognitive
effort required to respond to survey items. Groves et al. (2004)
suggested that if the information considered by respondents when
deciding whether to respond does not include the survey topic, little
difference between respondents and nonrespondents on variables related
to the focus of the research (in this case, fan identification) will be
evident. In other words, people uninterested in the topic of a study are
likely to be motivated to respond to a survey by other aspects of the
survey request, such as the use of participation maximization
techniques.
One such technique that has become an increasingly popular method
of increasing response to online surveys is providing incentives. These
have been shown to not only encourage individuals to open and begin the
survey, but also to discourage dropout (Goritz, 2004). In the same
study, a positive relationship was observed between the value of the
incentive and its effectiveness. Further research by Goritz (2006) examined the
effectiveness of different types of incentive systems (e.g., redeemable
bonus points, money lottery, and gift lottery), finding that response
quality and survey outcome were not affected. In order to maximize
response rate in online surveys, additional investigation of the types
of incentives and incentive systems that appeal to sport consumers
should be encouraged.
Second, the contrast between the present findings and those of
previous research may provide support for the idea that products
relating to sports entities are consumed differently than those that are
not. This refers to the unique relationships that sport consumers form
with associated products and organizations (Shilbury et al., 2009;
Sutton et al., 1997). It should be noted, however, that consumers of
sport-related products are simultaneously consumers of other
products; therefore, it is not that sport consumers themselves possess
unique characteristics, but rather that the psychological connections
consumers form with sport-related products produce unique outcomes that
influence related behaviors. While topic salience may influence
participation (or non-participation) in surveys relating to other areas,
those concerned with sports teams, brands, or products may not be
subject to the same motivations.
It is noteworthy that while motivation to provide survey response
was not differentiated by fan identification, the relative quality of
the response provided was positively correlated with level of
identification. Caution should be used when considering this finding as
the strength of the relationship was small; however, it warrants further
investigation to determine if this relationship could be confirmed in
other research. Conceptually it makes sense that individuals who are
most interested in the topic of the study would demonstrate the greatest
cognitive effort when providing responses, but there is limited empirical
support for this premise. Future work should include measures of topic
salience, specifically measures of fan identification, when the topic of
the study relates to sport consumption.
Response Rate
Consistent with previous work (Barrios et al., 2011; Munoz-Leiva et
al., 2010), this study identified a positive and non-linear relationship
between response rates and the combination of pre- and post-notification
in online surveys. First, no significant difference was found in
response rate between the control group (Group 4) and the group that
received multiple forms of notification (Group 2, pre/post). Second,
both Group 1 (pre only) and Group 3 (post only) had response rates
significantly higher than the control group. Third, Group 3 (post only)
had a response rate significantly greater than the other three groups.
The results suggest that increasing the number of notifications will not
equate to a positive increase in overall response, especially when
compared with groups who received only one notification. This represents
an extension of Munoz-Leiva et al. (2010), who found that increasing the
frequency of post-notifications (i.e., follow-up messages) had only a
marginal effect on response rates. This can be explained by the negative
perceptions participants associate with the electronic delivery of
multiple notifications that request participation in a research study
(Anderson & Kanuka, 2003; Solomon, 2001). Recent work illustrates
that individuals receive an average of 32 emails per day (Radicati &
Hoang, 2010) compared with four documents sent via postal mail (Mazzone
& Pickett, 2010). Dillman et al. (2009) suggested that one reason
multiple notifications may not be as effective with online survey
research is the frequency of email correspondence individuals receive
compared with mail correspondence. According to these authors,
individuals are more likely to ignore or forget about notification
messages, especially if they are unsolicited requests for study
participation. Therefore, while the use of multiple notifications has
been shown to increase response rates with mail-based survey research
(Dillman et al., 2009), findings from this study suggest that there may
be limited incremental benefit in using multiple contacts (i.e., pre-
and post-notifications) to maximize survey response with online survey
research for sport consumers. With online research it appears that one
notification message might be optimal in terms of increasing response
rate.
Based on findings from the present study, it would also appear that
utilization of post-notification is more likely to yield the expected
increase in response rate compared with pre-notification. One reason
that post-notification might be more effective in increasing response
rates is the underlying mechanism contributing to survey nonresponse. In
general, participant nonresponse can be categorized as either passive or
active nonresponse (Jordan et al., 2011; Rogelberg, Conway, Sederburg,
Spitzmuller, Aziz, & Knight, 2003). Passive nonresponse is
unintentional in nature and is normally not based on a conscious or
overt decision to decline participation in a research study (Jordan et
al., 2011). Individuals classified as passive nonrespondents are
normally not opposed to participating in a study but for various
reasons, such as forgetfulness or personal time constraints, are unable
to complete the survey. In contrast, active nonrespondents are those
individuals who consciously and purposefully choose not to respond to a
survey request. Given the high number of emails received on a daily
basis (Dillman et al., 2009) individuals might find it easy to forget or
even ignore survey solicitation messages. The use of post-notification
techniques in online surveys could remind passive nonrespondents of the
potential rewards of survey participation and thus be a more effective
method to increase overall response rates with this group. Among active
nonrespondents, however, the negative effects associated with multiple
notifications may be greater than perceived benefits of survey
participation. As a result, refusal to participate in the survey does
not change despite the use of different response maximization strategies
(e.g., post-notification or other inducements) and the use of
post-notification is unlikely to increase survey response with active
nonrespondents. Therefore, when choosing between pre- and
post-notification, studies that incorporate pre-notification into the
research design rather than post-notification would not realize this
same benefit, as no reminder would be sent to prompt action. So, as
suggested by Dillman et al. (2009), one modification that could be made
to notification strategies when utilizing online data collection methods
is the elimination of one or more contacts, specifically the use of
pre-notification.
Response Quality
The present study found a relationship between response rate and
use of notification; however, no such relationship was evident with
response quality. The item completion rate for each of the four groups
was similar and no significant differences were evident. This finding is
consistent with Deutskens et al. (2004) who measured response quality
and found no significant difference based on the use of early and late
notification. Additionally, Munoz-Leiva et al. (2010) concluded there
were no significant differences between the number of post-notification
messages sent and response quality. This work, combined with findings
from the current study, suggests that once an individual decides to
participate in an online research study, the use of notification does not
impact the willingness to respond to survey items. So, while
notification might prompt study participation, it does not appear to
impact the degree to which the person responds to survey items. It
should also be noted that item completion rates for all groups were
relatively high, ranging from a low of 90% (Group 3) to a high of 92.5%
(Groups 1 and 2), suggesting that online survey formats might allow for
more convenient response options, resulting in higher completion rates
compared with mail-based survey research (Deutskens et al., 2004; Kwak
& Radler, 2002).
Limitations and Future Research
As in all research, the present study includes limitations that
must be considered when evaluating results. First, the overall response
rate for this study was low, which increases the threat of nonresponse
error. If data obtained from respondents are significantly different
from what would have been provided by nonrespondents, the external
validity and reliability of the study are compromised. Given the research
focus and design of the current study, established response
maximization strategies could not be employed uniformly across all four
groups (e.g., post-notification), likely impacting overall response.
Future research should attempt to confirm findings from the present
study with other samples. Second, the high item completion rates across
the four groups (over 90%) may restrict the ability to examine the
relationship between notification techniques and response quality.
Online survey formats can result in higher completion rates compared
with mail-based survey research (Kwak & Radler, 2002; Schaefer &
Dillman, 1998). Due to the small variation in item completion rates,
however, any such relationship may be harder to detect. Future research
may consider developing questionnaires that produce greater variation in
item completion rates.
Conclusion
The use of online data collection techniques in sport management
research, specifically those involving sport consumers, has increased in
recent years. The findings of the present study demonstrate that
willingness to respond to online surveys and the quality of those
responses are not impacted by topic salience among sport consumers. This
supports the use of such data collection methods for researching this
population, alleviating concerns of the overrepresentation of
individuals for whom the subject of the survey is more relevant.
Another method that has been shown to improve survey response,
specifically the response rate of a survey, is the use of notification.
While the findings of the present study confirm the value of
notification, this study found that the use of multiple notifications
might not be the best strategy with online data collection as it did not
increase survey response. Additionally, the use of post-notification
alone resulted in a response rate significantly higher than that of all
other notification treatments. Hence, efforts to collect information
on sport consumers using online surveys would be enhanced by a
post-notification-only method.
Authors' Note
The authors would like to acknowledge the Sport Industry Research
Center at Temple University for support of this research.
References
Abraham, K. G., Maitland, A., & Bianchi, S. M. (2006).
Nonresponse in the American time use survey. Public Opinion Quarterly,
70, 676-703.
The American Association for Public Opinion Research (2009).
Standard definitions: Final dispositions of case codes and outcome rates
for surveys (6th ed.). AAPOR.
Anderson, T., & Kanuka, H. (2003). E-research: Methods,
strategies, and issues. Boston, MA: Pearson Education.
Anseel, F., Lievens, F., Schollaert, E., & Choragwicka, B.
(2010). Response rates in organizational science, 1995-2008: A
meta-analytic review and guidelines for survey researchers. Journal of
Business and Psychology, 25, 335-349.
Armstrong, K. (2002). Race and sport consumption motivations: A
preliminary investigation of a Black Consumers' Sport Motivation
Scale. Journal of Sport Behavior, 25, 309-330.
Armstrong, J.S., & Overton, T.S. (1977). Handling nonresponse
bias in mail surveys. Journal of Marketing Research, 14, 396-402.
Barrios, M., Villarroya, A., Borrego, A., & Olle, C. (2011).
Response rates and data quality in web and mail surveys administered to
PhD holders. Social Science Computer Review, 29, 208-220.
Baruch, Y. (1999). Response rate in academic studies-a comparative
analysis. Human Relations, 52, 421-438.
Beatty, P., & Herrmann, D. (2002). To answer or not to answer:
Decision processes related to survey item nonresponse. In Groves et al.
(Eds.), Survey nonresponse, (pp. 71-85), New York, NY: John Wiley &
Sons.
Cole, S. T. (2005). Comparing mail and web-based survey
distribution methods: Results of surveys to leisure travel retailers.
Journal of Travel Research, 43, 422-430.
Cook, C., Heath, F., & Thompson, R. L. (2000). A meta-analysis
of response rates in web- or internet-based surveys. Educational and
Psychological Measurement, 60, 821-836.
Crawford, S. D., Couper, M. P., & Lamias, M. J. (2001). Web
surveys: Perceptions of burden. Social Science Computer Review, 19,
146-162.
Curtin, R., Presser, S., & Singer, E. (2005). Changes in
telephone survey nonresponse over the past quarter century. Public
Opinion Quarterly, 69, 87-98.
Deutskens, E., de Ruyter, K., & Wetzels, M. (2006). An
assessment of equivalence between online and mail surveys in service
research. Journal of Service Research, 8, 346-355.
Deutskens, E., de Ruyter, K., Wetzels, M., & Oosterveld, P.
(2004). Response rate and response quality of internet-based surveys: An
experimental study. Marketing Letters, 15, 21-36.
Diaz de Rada, V. (2005). The effect of follow-up mailings on the
response rate and response quality in mail surveys. Quality &
Quantity, 39, 1-18.
Dillman, D. A., Eltinge, J. L., Groves, R. M., & Little, R. J.
A. (2002). Survey nonresponse in design, data collection, and analysis.
In R. Groves, D. Dillman, J. Eltinge, & R. J. A. Little (Eds.),
Survey nonresponse (pp. 3-26). New York, NY: Wiley Interscience.
Dillman, D.A., Smyth, J. D., & Christian, L. M. (2009).
Internet, mail, and mixed-mode surveys: the tailored design method (3rd
ed.). Hoboken, NJ: John Wiley & Sons.
Dooley, L. M., & Lindner, J. R. (2003). The handling of
nonresponse error. Human Resource Development Quarterly, 14, 99-110.
Field, A. P. (2009). Discovering statistics using SPSS (3rd ed.).
London: SAGE Publications Ltd.
Filo, K., Funk, D. C., & O'Brien, D. (2010). The
antecedents and outcomes of attachment and sponsor image within charity
sport events. Journal of Sport Management, 24, 623-648.
Funk, D. C., & Pritchard, M. P. (2006). Sport publicity:
Commitment's moderation of message effects. Journal of Business
Research, 59, 613-621.
Galesic, M., & Bosnjak, M. (2009). Effects of questionnaire
length on participation and indicators of response quality in a web
survey. Public Opinion Quarterly, 73, 349-360.
Goritz, A. S. (2004). The impact of material incentives on response
quantity, response quality, sample composition, survey outcome, and cost
in online access panels. International Journal of Market Research, 46,
327-345.
Goritz, A. S. (2006). Incentives in web surveys: methodological
issues and a review. International Journal of Internet Science, 1,
58-70.
Greenlaw, C., & Brown-Welty, S. (2009). A comparison of
web-based and paper-based survey methods. Evaluation Review, 33,
464-480.
Groves, R. M., Presser, S., & Dipko, S. (2004). The role of
topic interest in survey participation decisions. Public Opinion
Quarterly, 68, 2-31.
Groves, R. M., Singer, E., & Corning, A. (2000).
Leverage-saliency theory of survey participation: Description and an
illustration. Public Opinion Quarterly, 64, 299-308.
Gwinner, K., & Swanson, S. R. (2003). A model of fan
identification: Antecedents and sponsorship outcomes. Journal of
Services Marketing, 17, 275-294.
Hart, A., Brennan, C. W., Sym, D., & Larson, E. (2009). The
impact of personalized prenotification on response rates to an
electronic survey. Western Journal of Nursing Research, 31, 17-23.
Jordan, J. S., Walker, M., Kent, A., & Inoue, Y. (2011). The
frequency of nonresponse analyses in the Journal of Sport Management.
Journal of Sport Management, 25, 229-239.
Kaplowitz, M., Hadlock, T. D., & Levine, R. (2004). A
comparison of web and mail survey response rates. Public Opinion
Quarterly, 68, 94-101.
Kent, A., & Turner, B. (2002). Increasing response rates among
coaches: The role of prenotification methods. Journal of Sport
Management, 16, 230-238.
Kunkel, T., Funk, D., & Hill, B. (2013). Brand architecture,
drivers of consumer involvement, and brand loyalty with professional
sport leagues and teams. Journal of Sport Management, 27, 177-192.
Kwak, N., & Radler, B. (2002). A comparison between mail and
web surveys: Response pattern, respondent profile, and data quality.
Journal of Official Statistics, 18, 257-273.
Lavrakas, P. J. (Ed.). (2008). Encyclopedia of survey research
methods (Vol. 2). Thousand Oaks, CA: SAGE.
Mael, F., & Ashforth, B. E. (1992). Alumni and their alma
mater: A partial test of the reformulated model of organizational
identification. Journal of Organizational Behavior, 13, 103-123.
Marcus, B., Bosnjak, M., Lindner, S., Pilischenko, S., &
Schutz, A. (2007). Compensating for low topic interest and long surveys:
A field experiment on nonresponse in web surveys. Social Science
Computer Review, 25, 372-383.
Mazzone, J., & Pickett, J. (2010). The household diary
study--Mail use & attitudes in FY 2009. United States Postal
Service. Retrieved from
http://about.usps.com/studying-americans-mail-use/household
diary/2009/fullreport-pdf/usps-hds-fy09.pdf
Miller, L. E., & Smith, K. L. (1983). Handling nonresponse
issues. Journal of Extension, 21, 45-50.
Munoz-Leiva, F., Sanchez-Fernandez, J., Montoro-Rios, F., &
Ibanez-Zapata, J. A. (2010). Improving the response rate and quality in
Web-based surveys through the personalization and frequency of reminder
mailings. Quality & Quantity, 44, 1037-1052.
Pan, B. (2010). Online travel surveys and response patterns.
Journal of Travel Research, 49, 121-135.
Petty, R. E., & Cacioppo, J. T. (1986). The elaboration
likelihood model of persuasion. In L. Berkowitz (Ed.), Advances in
experimental social psychology, (Vol. 19, pp. 123-205). San Diego, CA:
Academic Press.
Radicati, S., & Hoang, Q. (2010). Business user survey, 2010.
The Radicati Group. Retrieved from
http://www.radicati.com/wp/wp-content/uploads/2010/11/Business-User-Survey-2010-Executive-Summary.pdf
Rogelberg, S. G., Conway, J. M., Sederburg, M. E., Spitzmuller, C.,
Aziz, S., & Knight, W. E. (2003). Profiling active and passive
nonrespondents to an organizational survey. Journal of Applied
Psychology, 88, 1104.
Roose, H., Lievens, J., & Waege, H. (2007). The joint effect of
topic interest and follow-up procedures on the response in a mail
questionnaire: An empirical test of the leverage-saliency theory in
audience research. Sociological Methods & Research, 35, 410-428.
Roth, P. L., & BeVier, C. A. (1998). Response rates in HRM/OB
survey research: Norms and correlates, 1990-1994. Journal of Management,
24, 97-117.
Schaefer, D. R., & Dillman, D. A. (1998). Development of a
standard e-mail methodology: Results of an experiment. Public Opinion
Quarterly, 62, 378-397.
Schmidt, J. B., Calantone, R. J., Griffin, A., & Montoya-Weiss,
M. M. (2005). Do certified mail third-wave follow-ups really boost
response rates and quality? Marketing Letters, 16, 129-141.
Shannon, D., & Bradshaw, C. (2002). A comparison of response
rate, response time, and costs of mail and electronic surveys. The
Journal of Experimental Education, 70, 179-192.
Sheehan, K. B., & McMillan, S. J. (1999). Response variation in
e-mail surveys: An exploration. Journal of Advertising Research, 39,
45-54.
Shilbury, D. (2012). Competition: The heart and soul of sport
management. Journal of Sport Management, 26, 1-10.
Shilbury, D., Westerbeek, H., Quick, S., & Funk, D. (2009).
Strategic sport marketing (3rd ed.). Sydney: Allen & Unwin.
Shih, T. H., & Fan, X. (2008). Comparing response rates from
web and mail surveys: A meta-analysis. Field Methods, 20, 249-271.
Solomon, D. J. (2001). Conducting web-based surveys. Practical
Assessment Research & Evaluation, 7(19). Retrieved from
http://pareonline.net/getvn.asp?v=7&n=19
Sue, V. M., & Ritter, L. A. (2007). Conducting online surveys.
Thousand Oaks, CA: Sage.
Sutton, W. A., McDonald, M. A., Milne, G. R., & Cimperman, J.
(1997). Creating and fostering fan identification in professional
sports. Sport Marketing Quarterly, 6, 15-22.
Turner, B. A., Jordan, J. S., & Sagas, M. (2006). Factors
affecting response rates in survey research: The case of intercollegiate
coaches. Applied Research in Coaching and Athletics Annual, 21, 211-237.
Wann, D., & Branscombe, N. (1993). Sports fans: Measuring
degree of identification with their team. International Journal of Sport
Psychology, 24, 1-17.
Westrick, S. C., & Mount, J. K. (2008). Effects of repeated
callbacks on response rate and nonresponse bias: Results from a 17-state
pharmacy survey. Research in Social and Administrative Pharmacy, 4,
46-58.
Jeremy Scott Jordan, PhD, is an associate professor and director of
the Sport Industry Research Center in the School of Tourism &
Hospitality Management at Temple University. His research interests
include the social impact of sport involvement.
Simon Brandon-Lai is a doctoral student in sport management at
Florida State University. His research interests include sport consumer
behavior and physical activity participation.
Mikihiro Sato is a PhD candidate in the School of Tourism &
Hospitality Management at Temple University. His research interests
include the role of sport in promoting well-being.
Aubrey Kent, PhD, is an associate professor and chair of the School
of Tourism & Hospitality Management at Temple University. His
research interests include industrial/organizational psychology and
corporate social responsibility.
Daniel C. Funk, PhD, is a professor and director of the research
and PhD programs in the School of Tourism & Hospitality Management
at Temple University. His research interests include sport marketing and
sport consumer behavior.
Table 1
Fan Identification by Group

Type of notification     N      Mean   SD     95% CI LB   95% CI UB
Group 1 (pre)            412    2.87   0.94   2.78        2.96
Group 2 (pre/post)       389    2.91   0.87   2.82        2.99
Group 3 (post)           563    2.92   0.88   2.85        2.99
Group 4 (control)        352    2.83   0.87   2.74        2.92
All                      1716   2.89   0.89   2.84        2.93

Note: No significant differences in fan identification were identified
by type of notification.
Table 2
Response Rate by Type of Notification

Notification method    # of surveys sent   # of respondents   Response rate (%)
Group 1 (pre)          5892                450                7.6
Group 2 (pre/post)     5892                421                7.1
Group 3 (post)         5892                629                10.7
Group 4 (control)      5893                384                6.5

Note: The response rate of Group 3 is statistically higher than that of
the other three groups at p < .01. The response rate of Group 1 is
higher than that of Group 4 at p = .02.
Table 3
Response Quality by Type of Notification

Type of notification    Mean (%)   SD     Median (%)
Group 1 (pre)           92.5       21.5   100.0
Group 2 (pre/post)      92.5       22.0   98.0
Group 3 (post)          90.1       25.4   98.0
Group 4 (control)       91.4       24.5   100.0

Note: No significant difference in response quality was identified
by type of notification.