Can we trust web based surveys?

Roger Atkinson

The topic of web based surveys has been fermenting in the background, and now it's time for a tasting. One stimulus is Australia's federal election fever (over by the time you read this little contribution), which has embraced the web, and another is the Journal banding survey [1]. These are quite disparate topics, but the underlying technology is pretty much the same, and there may be some informative contrasts.

Firstly, political surveys on the web. Take this well-crafted, 'good fun' example from the ABC, 10 Nov 2007 [2]:

Opinion Poll
Former Labor leader Mark Latham has called it a Seinfeld election, about nothing. Do you agree?
•   Yes, both parties are trapped in one upmanship and materialism.
•   No, Mark Latham is irrelevant and out of touch.
Underneath there is some standard HTML with lines like <FORM ACTION="/cgi-bin/common/voting/newpoll.pl" METHOD="POST"> and <INPUT TYPE="RADIO" NAME="vote" VALUE="Yes..."> (that brings back nostalgic memories; I haven't written such stuff myself since about 1998). A fuller, illustrative sketch of such a form is given at the end of this paragraph. Upon submitting, I was told "794 votes counted" and that the Noes have it, 57% to 43% (somewhat exceeding the more familiar figures in "two party preferred" polls data :-). Because I do not have to be unbiased, I can say "Well done ABC, better than some others!"

Numerous examples, some good, some bad, are easily found in the Australian media. For example, a website [3] associated with Channel 10 contained a survey activity investigating the question "Are you a KEVIN07 True believer?" Of the 8 agree/no vote items, the highest scoring item was "I think Peter Garrett's bald head allows his ideas to come out quicker", with 44 of the 62 responses agreeing. Meanwhile, Channel 7's website included an "Election Poll" question, "Did the first half of the election campaign change your views?", with a four point scale, recording "14372 votes since Nov 4 2007" [4]. Channel 9's "Passion Pulse" website offered a daily quote, with 7 November's quote from John Howard, "The Australian public believe that when it comes to controlling interest rates... the Coalition is a better bet than Labor", recording 23520 votes, 46% strongly disagree/disagree and 52% agree/strongly agree. Then SBS weighs in with "Who do you blame for the rise in interest rates?" [5], where in a four item field "The Government" was a runaway winner for a change, scoring 41%, though as SBS didn't state the number of votes counted, we should scratch this entry.
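
Here is that sketch: a minimal, hand-coded poll form of the kind described above. The ACTION path and the "vote" field name come from the ABC snippet quoted earlier; the option values, layout and submit button are my own illustrative guesses, not the ABC's actual markup.

    <!-- Illustrative sketch only: the ACTION path and the "vote" field name
         are taken from the ABC page; everything else is assumed. -->
    <FORM ACTION="/cgi-bin/common/voting/newpoll.pl" METHOD="POST">
      <P>Former Labor leader Mark Latham has called it a Seinfeld election,
         about nothing. Do you agree?</P>
      <P><INPUT TYPE="RADIO" NAME="vote" VALUE="Yes"> Yes, both parties are
         trapped in one upmanship and materialism.</P>
      <P><INPUT TYPE="RADIO" NAME="vote" VALUE="No"> No, Mark Latham is
         irrelevant and out of touch.</P>
      <P><INPUT TYPE="SUBMIT" VALUE="Vote"></P>
    </FORM>

Presumably the Perl script named in the ACTION attribute does little more than add one to a counter for whichever value is posted and report back the running tallies, which is all a poll of this kind requires.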

The lovely thing about political and similar surveys conducted on the web by media organisations is that the results are almost irrelevant. The big thing is participation: engaging with your viewers or readers, cultivating their loyalty, providing a virtual social activity for them, and (hopefully) getting a larger number of respondents than similar surveys run by your media rivals. The question of "trusting" a web based survey is not really pertinent; the more appropriate measuring scale may run from "very discouraging" to "very encouraging", or from "ratings disaster" to "ratings bonanza", interpreted in a comparative perspective, i.e. against your rivals. The identities or "demographics" of the respondents do not matter; it's just the numbers that count.

In passing, we could note that in recent years technological advances have made things much easier for aspiring web based survey designers. You don't need to get up to speed on things like <FORM ACTION="/cgi-bin/common/voting/newpoll.pl" METHOD="POST">. For example, you can use a free online facility such as SurveyMonkey ("Online survey software made easy!") [6] or PollDaddy ("Create free polls anywhere online!") [7], or you can run your own survey facility using open source software such as LimeSurvey ("The Leading Open Source Tool for Online Surveys") [8]. It is all much easier than it was just a few years ago (see, for example, the hassles faced by Carbonaro, Bainbridge & Wolodko, 2002 [9]).

Turning to academic research surveys on the web, life is not that easy. The identities and "demographics" of the respondents do matter. Basically there are two types of approach: one is to authenticate respondents, and the other is to maintain anonymity and use other techniques that enable the researcher to characterise or selectively limit the population being sampled. The first type, authentication of respondents, is illustrated very well by the widespread use of online, web based questionnaires for student evaluation of teaching [10]. Typically, these are unit or course based, and therefore access must be restricted to students enrolled in a particular unit or course, generally by using the same login name and password that a student has for other purposes, such as accessing the university's learning management system. With procedural provisions in place to guarantee student anonymity, the process is well controlled and familiar, and you can trust the results. However, the second type, respondents remaining anonymous, is liable to be more difficult. The Journal banding survey cited earlier [1] is a typical example, stating that:

...your responses to this questionnaire will be anonymous. No information which identifies you in any way will be accessible to the researchers unless you choose to identify yourself in some other way.[1]
This is the point at which web based surveys become really interesting. With no technologically based procedure, such as a login name and password, to limit access to a specific and known population, how can the researcher know the nature of the population being sampled? If you don't know some of the basics, like who is in and who is not in, population size, and response rate, can you "trust web based surveys"? This phrasing reflects the title of an article by Gosling, Vazire, Srivastava and John (2004) [11], "Should we trust web-based studies?...". These researchers noted compelling benefits underlying increased interest in web based "self-report questionnaires from self-selected samples" [11], but raised an important caution:
However, these benefits cannot be realized until researchers have first evaluated whether this new technique compromises the quality of the data.[11]
Another researcher, Schonlau (2004) [12], gives an explicit definition of the core point of contention, and a different cautionary phrase:
Whether Web surveys will develop into mainstream survey research tools depends on the possibility of drawing inferences from convenience samples. Conventional survey sampling wisdom holds that inferences cannot be drawn from convenience samples, thereby negating their use - with the possible exception of pilot studies. Still, convenience samples can be used to conduct experiments within that sample...
The possibility of drawing inferences from convenience samples is a contentious issue among survey researchers. The excitement needs to be tempered with rational skepticism. [12]
To amplify the question raised in this column's title, how can we best ensure the quality of data obtained from self selected, anonymous, non-authenticated respondents? Fortunately, many investigators, particularly from the social sciences and health sciences, have published relevant results and recommendations for conducting and reporting web based surveys. An especially comprehensive checklist for authors, reviewers and editors was developed by Gunther Eysenbach, Editor-in-Chief of the Journal of Medical Internet Research [13]. Numerous researchers have compared the findings from surveys delivered by conventional techniques with those delivered by web based techniques. For example, a Swedish group [14] undertook a
...comparison of a 'gold standard' random selection population-based sexual survey (The Swedish Sexual Life Survey) with an internet-based survey in Sweden which used identical demographic, sexual and relationship questions, to ascertain the biases and degree of comparability between the recruitment methods. [14]
The phrase 'gold standard' is one I'll store in memory for critiquing investigative methods when doing journal article reviews. In another example, Gosling et al. (2004) [11] compared "a new large Internet sample (N = 361,703) with a set of 510 published traditional samples". One of their cautions is notable: "As with all research, the best studies will seek convergence across multiple methods." They concluded that
Our analyses suggest that the samples gathered using Internet methods are at least as diverse as many of the samples already used in psychological research and are not unusually maladjusted. Internet samples are certainly not representative or even random samples of the general population, but neither are traditional samples in psychology. [11]
Whilst this very small sample of research into the question of trusting web based surveys gives positive indications, it is not easy to relate those findings readily to the Journal banding survey, named above as one of the stimuli for this column. The survey's public documentation and reports to date [1, 15, 16, 17] do not contain the details one would expect for a web based survey drawing upon anonymous, self selected respondents. For example, initial email publicity for the survey [15] indicated an intention to extend the invitation "to other relevant national and international professional and research organizations related to education." HERDSA members received an emailed invitation via Roger Landbeck on 13 December 2006, but ODLAA and ASCILITE members appear to have missed out [18]! Some universities reposted the invitation to internal emailing lists, but we don't know how many. Without a detailed record of the actual distribution of invitations, or access to self reported details such as membership of professional associations, it's difficult for readers to assess how well the sample represents the target population, which DEST has defined as "Education Studies; Curriculum Studies; Professional Development of Teachers; Other Education" [19]. Being in the latter group, specifically as an editor for an "Other Education" journal [AJET, see 20], I'm a bit jittery about the way the RQF is ticking!

References

  1. Centre for the Study of Research Training and Impact (SORTI) (2006). Journal banding survey. http://www.newcastle.edu.au/forms/bandingsurvey
  2. ABC. Unleashed. [viewed 10 Nov 2007] http://www.abc.net.au/unleashed/poll/vote/default.htm
  3. MySpace.com. Item posted 29 Oct 2007 09:19 PM by an unknown correspondent. [viewed 10 Nov 2007] http://www.myspace.com/meetthepeople (using PollDaddy facility, http://polldaddy.com/)
  4. Yahoo!7 News. [viewed 10 Nov 2007] http://election2007.yahoo.com.au/
  5. SBS World News Australia. [viewed 10 Nov 2007] http://news.sbs.com.au/worldnewsaustralia/
  6. SurveyMonkey.com - Powerful tool for creating web surveys. Online survey software made easy! http://www.surveymonkey.com/
  7. PollDaddy. Create free polls anywhere online! http://polldaddy.com/ [PollDaddy is "Powed (sic) by GroupSurveys", http://www.group-surveys.com/]
  8. LimeSurvey.org - The Leading Open Source Tool for Online Surveys. http://www.limesurvey.org/
  9. Carbonaro, M., Bainbridge, J. and Wolodko, B. (2002). Using Internet surveys to gather research data from teachers: Trials and tribulations. Australian Journal of Educational Technology, 18(3), 275-292. http://www.ascilite.org.au/ajet/ajet18/carbonaro.html
  10. See, for example, Murdoch University. Student Surveys of Units. http://www.tlc.murdoch.edu.au/eddev/evaluation/survey/unit.html
    Curtin University of Technology. eVALUate - Curtin's System for Student Evaluation of Learning and Teaching http://evaluate.curtin.edu.au/info/
  11. Gosling, S. D., Vazire, S., Srivastava, S. & John, O. P. (2004). Should we trust web-based studies? A comparative analysis of six preconceptions about Internet questionnaires. American Psychologist, 59(2), 93-104. http://www.kent.ac.uk/psychology/studying/literature/handbooks/ethics/gosling2004.pdf
  12. Schonlau, M. (2004). Will web surveys ever become part of mainstream research? Journal of Medical Internet Research, 6(3). http://www.jmir.org/2004/3/e31
  13. Eysenbach, G. (2004). Improving the quality of web surveys: The checklist for reporting results of internet e-surveys (CHERRIES). Journal of Medical Internet Research, 6(3). http://www.jmir.org/2004/3/e34
  14. Ross, M. W., Månsson, S. A., Daneback, K., Cooper, A. & Tikkanen, R. (2005). Biases in Internet sexual health samples: Comparison of an Internet sexuality survey and a national sexual health survey in Sweden. Social Science & Medicine, 61(1), 245-252.
  15. AARE (Australian Association for Research in Education). Journal Banding Survey - Message from AARE President Jan Wright. AARE listserver posting, 6 December 2006 12:12 PM.
  16. Holbrook, A., Bourke, S., Preson, G., Cantwell, R. & Scevak, J. (2007). Education journal banding project: A collaborative project undertaken by SORTI in association with the AARE. Australian Association for Research in Education Focus Conference, Canberra, 13-14 June. http://www.newcastle.edu.au/centre/sorti/files/AARE%20Focus%2007%20web.pdf
  17. Centre for the Study of Research Training and Impact (SORTI) (2007). Education journal banding study: A summary of methodology. University of Newcastle, Australia. http://www.newcastle.edu.au/centre/sorti/Banding/mehtod.html
    The ranked lists of journals appear in the files:
    http://www.newcastle.edu.au/centre/sorti/files/Esteem%20ranking%20by%20area.pdf
    http://www.newcastle.edu.au/centre/sorti/files/Overall%20Esteem%20ranking.pdf
    http://www.newcastle.edu.au/centre/sorti/files/Overall%20QScore%20ranking.pdf
    http://www.newcastle.edu.au/centre/sorti/files/QScore%20ranking%20by%20area.pdf
    SORTI's website (viewed 14 Nov 2007) states that "...the RQF journal tier list will be available on this site on Monday 12th November."
  18. Personal observation as a reader of their members' email list postings.
  19. DEST (2007). Rankings Contacts. http://www.dest.gov.au/NR/rdonlyres/92FDBE2E-B011-4286-AF50-795B7C1599AD/18965/RankingsContacts16Oct07.pdf
    For definitions, see DEST (2007). Frequently Asked Questions: Bibliometrics. [viewed 14 Oct 2007] http://www.dest.gov.au/NR/rdonlyres/8F12ADCB-C221-421E-A128-2C344CD58BDF/18516/FAQBibliometrics.pdf
  20. AJET Editorial 23(2). Education journal banding study. http://www.ascilite.org.au/ajet/ajet23/editorial23-4.html
Author: Roger Atkinson retired from Murdoch University's Teaching and Learning Centre in June 2001. His current activities include publishing AJET and honorary work on TL Forum, ascilite Singapore 2007 and other academic conference support and publishing activities. Website (including this article in html format): http://www.roger-atkinson.id.au/ Contact: rjatkinson@bigpond.com

Please cite as: Atkinson, R. (2007). Can we trust web based surveys? HERDSA News, 29(3). http://www.roger-atkinson.id.au/pubs/herdsa-news/29-3.html

