The topic of web based surveys has been fermenting in the background and now it's time for a tasting. One stimulus is Australia's federal election fever (over by the time you read this little contribution), which has embraced the web, and another is the Journal banding survey [1]. These are quite disparate topics, but the underlying technology is pretty much the same, and there may be some informative contrasts.
Firstly, political surveys on the web. Take this well-crafted, 'good fun' example from the ABC, 10 Nov 2007 [2]:
Opinion Poll
Former Labor leader Mark Latham has called it a Seinfeld election, about nothing. Do you agree?
• Yes, both parties are trapped in one-upmanship and materialism.
• No, Mark Latham is irrelevant and out of touch.

Underneath there is some standard HTML with lines like <FORM ACTION="/cgi-bin/common/voting/newpoll.pl" METHOD="POST"> and <INPUT TYPE="RADIO" NAME="vote" VALUE="Yes..."> (that brings back nostalgic memories; I haven't written such stuff myself since about 1998). Upon submitting, I was told "794 votes counted" and that the Noes have it, 57% to 43% (somewhat exceeding the more familiar figures in "two party preferred" polls data :-) Because I do not have to be unbiased, I can say "Well done ABC, better than some others!"
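For anyone who, like me, last wrote this kind of plumbing in the 1990s, the sketch below illustrates what a vote counting script such as newpoll.pl does when the form above is submitted. The ABC's actual script is written in Perl and is not public, so this is only a hypothetical Python rendering of the general idea: read the POSTed "vote" field, add it to a running tally, and report the "votes counted" figure with percentages.

# A minimal, hypothetical sketch (in Python rather than the ABC's Perl) of
# what a poll counting script such as newpoll.pl might do with each
# submitted form. Illustrative only; the real script is not public.

from collections import Counter

tally = Counter()   # a real script would persist this, e.g. in a file or database

def record_vote(form_data):
    """Count one submitted form; form_data maps field names to values."""
    choice = form_data.get("vote")
    if choice in ("Yes", "No"):          # ignore anything other than the two options
        tally[choice] += 1

def summary():
    """Return the kind of result line the poll page displays."""
    total = sum(tally.values())
    lines = ["%d votes counted" % total]
    for choice in ("Yes", "No"):
        lines.append("%s: %.0f%%" % (choice, 100.0 * tally[choice] / total))
    return "\n".join(lines)

# Example: a few submissions, then the report.
for vote in ("No", "Yes", "No", "No", "Yes"):
    record_vote({"vote": vote})
print(summary())

Nothing in it checks who is voting, or how many times, which is precisely why such polls are entertainment rather than measurement.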
Numerous examples, some good, some bad, are easily found in the Australian media. For example, a website [3] associated with Channel 10 contained a survey activity investigating the question "Are you a KEVIN07 True believer?" Of the 8 agree/no vote items, the highest scoring item was "I think Peter Garrett's bald head allows his ideas to come out quicker", with 44 of the 62 responses agreeing. Meanwhile, Channel 7's website included an "Election Poll" question "Did the first half of the election campaign change your views?", with a four point scale, recording "14372 votes since Nov 4 2007" [4]. Channel 9's "Passion Pulse" website offered a daily quote, with 7 November's quote from John Howard, "The Australian public believe that when it comes to controlling interest rates... the Coalition is a better bet than Labor", recording 23520 votes, 46% strongly disagree/disagree and 52% agree/strongly agree. Then SBS weighs in with "Who do you blame for the rise in interest rates?" [5], where in a four item field "The Government" was a runaway winner for a change, scoring 41%, though as SBS didn't state the number of votes counted, we should scratch this entry.
The lovely thing about political and similar surveys conducted on the web by media organisations is that the results are almost irrelevant. The big thing is participation: engaging with your viewers or readers, cultivating their loyalty, providing a virtual social activity for them, and (hopefully) attracting a larger number of respondents than similar surveys done by your media rivals. The question of "trusting" a web based survey is not really pertinent; the more appropriate measuring scale may be from "very discouraging" to "very encouraging", or from "ratings disaster" to "ratings bonanza", interpreted in a comparative perspective, i.e. against your rivals. The identities or "demographics" of the respondents do not matter; it's just the numbers that count.
In passing, we could note that in recent years technological advances have made things much easier for aspiring web based survey designers. You don't need to get up to speed on things like <FORM ACTION="/cgi-bin/common/voting/newpoll.pl" METHOD="POST">. For example, you can use a facility offered free on the Internet, such as SurveyMonkey ("Online survey software made easy!") [6] or PollDaddy ("Create free polls anywhere online!") [7], or you can run your own survey facility using open source software such as LimeSurvey ("The Leading Open Source Tool for Online Surveys") [8]. Much easier than it was just a few years ago (see, for example, the hassles faced by Carbonaro, Bainbridge & Wolodko, 2002 [9]).
Turning to academic research surveys on the web, life is not that easy. The identities and "demographics" of the respondents do matter. Basically there are two types of approaches: one is to authenticate respondents, and the other is to maintain anonymity and use other techniques that enable the researcher to characterise or selectively limit the population being sampled. The first type, authentication of respondents, is illustrated very well by the widespread use of online, web based questionnaires for student evaluation of teaching [10]. Typically, these are unit or course based, and therefore access must be restricted to students enrolled in a particular unit or course, generally by using the same login name and password that a student has for other purposes such as accessing the university's learning management system. With procedural provisions for guaranteeing student anonymity, such surveys are well controlled and familiar, and you can trust the results. However, the second type, respondents remaining anonymous, is liable to be more difficult.
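To make the contrast concrete, here is a minimal, hypothetical sketch (not any university's actual system) of the access control behind such a unit based evaluation: only authenticated, enrolled students may respond, each at most once, and the record of who has responded is kept apart from the responses themselves so that anonymity is preserved.

# A hypothetical sketch of access control for a unit based student evaluation
# survey. The usernames and questions are invented for illustration.

enrolled_students = {"s1234567", "s7654321"}   # usernames enrolled in the unit
already_responded = set()                      # who has responded (kept separate from answers)
responses = []                                 # anonymous response data only

def submit_evaluation(username, answers):
    """Accept one evaluation from an authenticated username, or explain why not."""
    if username not in enrolled_students:
        return "Access denied: not enrolled in this unit."
    if username in already_responded:
        return "You have already completed this evaluation."
    already_responded.add(username)    # mark completion...
    responses.append(answers)          # ...but store answers with no identifier attached
    return "Thank you, your evaluation has been recorded."

print(submit_evaluation("s1234567", {"Q1": 4, "Q2": 5}))
print(submit_evaluation("s1234567", {"Q1": 1, "Q2": 1}))   # second attempt is refused
print(submit_evaluation("s0000000", {"Q1": 3}))            # not enrolled

Here the population, and therefore the response rate, is known exactly; it is precisely that knowledge which the second, anonymous approach gives up.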
The Journal banding survey cited earlier [1] is a typical example, stating that:

...your responses to this questionnaire will be anonymous. No information which identifies you in any way will be accessible to the researchers unless you choose to identify yourself in some other way. [1]

This is the point at which web based surveys become really interesting. Having no technologically based procedure such as login name and password that limits access to a specific and known population, how can the researcher know the nature of the population that is being sampled? If you don't know some of the basics, like who is in and who is not in, population size, and response rate, can you "trust web based surveys?" This phrasing reflects the title of an article by Gosling, Vazire, Srivastava and John (2004) [11], "Should we trust web-based studies?...". These researchers noted compelling benefits underlying increased interest in web based "self-report questionnaires from self-selected samples" [11], but with an important caution being raised:
However, these benefits cannot be realized until researchers have first evaluated whether this new technique compromises the quality of the data. [11]

Another researcher, Schonlau (2004) [12], gives an explicit definition of the core point of contention, and a different cautionary phrase:
Whether Web surveys will develop into mainstream survey research tools depends on the possibility of drawing inferences from convenience samples. Conventional survey sampling wisdom holds that inferences cannot be drawn from convenience samples, thereby negating their use - with the possible exception of pilot studies. Still, convenience samples can be used to conduct experiments within that sample... The possibility of drawing inferences from convenience samples is a contentious issue among survey researchers. The excitement needs to be tempered with rational skepticism. [12]

To amplify the question raised in this column's title, how can we best ensure the quality of data obtained from self selected, anonymous, non-authenticated respondents? Fortunately, many investigators, particularly from the social sciences and health sciences, have published relevant results and recommendations for conducting and reporting web based surveys. An especially comprehensive checklist for authors, reviewers and editors was developed by Gunther Eysenbach, Editor-in-Chief for the Journal of Medical Internet Research [13]. Numerous researchers have compared the findings from surveys delivered by both conventional techniques and web based techniques. For example, a Swedish group [14] undertook a
...comparison of a 'gold standard' random selection population-based sexual survey (The Swedish Sexual Life Survey) with an internet-based survey in Sweden which used identical demographic, sexual and relationship questions, to ascertain the biases and degree of comparability between the recruitment methods. [14]

The phrase 'gold standard' is one I'll store in memory, for critiquing investigative methods when doing journal article reviews. In another example, Gosling et al. (2004) [11] compared "a new large Internet sample (N = 361,703) with a set of 510 published traditional samples". One of their cautions is notable: "As with all research, the best studies will seek convergence across multiple methods." They concluded that
Our analyses suggest that the samples gathered using Internet methods are at least as diverse as many of the samples already used in psychological research and are not unusually maladjusted. Internet samples are certainly not representative or even random samples of the general population, but neither are traditional samples in psychology. [11]

Whilst this very small sample of research into the question of trusting web based surveys gives positive indications, it is not easy to relate them readily to the Journal banding survey, named above as one of the stimuli for this column. The survey's public documentation and reports to date [1, 15, 16, 17] do not contain the details one would expect for a web based survey drawing upon anonymous, self selected respondents. For example, initial email publicity for the survey [15] indicated an intention to extend the invitation "to other relevant national and international professional and research organizations related to education." HERDSA members received an emailed invitation via Roger Landbeck on 13 December 2006, but ODLAA and ASCILITE members appear to have missed out! [18]. Some universities reposted the invitation to internal emailing lists, but we don't know how many. Without a detailed record of the actual distribution of invitations, or access to self reported details such as membership of professional associations, it's difficult for readers to assess how well the sample represents the target population, which has been defined by DEST as "Education Studies; Curriculum Studies; Professional Development of Teachers; Other Education" [19]. Being in the latter group, specifically as an editor for an "Other Education" journal [AJET, see 20], I'm a bit jittery about the way the RQF is ticking!
Author: Roger Atkinson retired from Murdoch University's Teaching and Learning Centre in June 2001. His current activities include publishing AJET and honorary work on TL Forum, ascilite Singapore 2007 and other academic conference support and publishing activities. Website (including this article in html format): http://www.roger-atkinson.id.au/ Contact: rjatkinson@bigpond.com
Please cite as: Atkinson, R. (2007). Can we trust web based surveys? HERDSA News, 29(3). http://www.roger-atkinson.id.au/pubs/herdsa-news/29-3.html