2. Comparing Three Sample Polling Methods

Polling Methods Used by the 1News/Colmar Brunton Poll in NZ and the FijiTimes/Tebbutt and FijiSun/Razor Polls in Fiji

CATI at work

  • The methods used, and the methods disclosure made, by the NZ 1News/Colmar Brunton poll provide a useful yardstick against which to consider the methods of the two Fijian polls, FijiTimes/Tebbutt and FijiSun/Razor, which use very different methodologies.

The table below summarises the sampling methods used by the three pollsters.

Source: Compiled by Croz Walsh.

CATI and face-to-face interviewing
The Colmar Brunton and Tebbutt polls on preferred political party and Prime Minister typically interview 1,000-plus people randomly selected using computer-assisted telephone interview (CATI) software1, in which the sample is drawn by the computer from a list of landline numbers and from the cellphone numbers of people with mobile phones. The questions are read from the computer and the answers entered into it by the interviewer. The typical questions asked are “Who is your preferred Prime Minister?” and “Which is your preferred party?”

CATI is considerably cheaper than face-to-face interviewing and is reputed to produce high-quality data, avoid interviewer misinterpretations and take less time. Callbacks are managed automatically. Greater accuracy is achieved because everything is displayed on the computer, the interviewer has more control of the interview, allowing recaps and recalls, and a record is kept of the number of interviews that were incomplete or dropped.

CATI, however, is not without its problems, an important one being that those not covered are likely to include more older people and more of the unemployed, uneducated and those on lower incomes, simply because more of these people have neither landlines nor cellphones. In Fiji, rural areas are also likely to be under-represented. One possible result is that those interviewed tend to be more of the political right than the political left.

I am not sure how the person to interview on a landline is determined, but I expect that knowledge of proportions in the total population allows interviewers to select the person in the household to be interviewed according to age, sex and other demographic characteristics, so that the final poll result comes close to these proportions in the total population. Who is approached in Razor's face-to-face interviewing is possibly determined by the poll organizers and supervisors to achieve their decided ethnic, age and gender proportions.

Weighting and sizes
The poll results are weighted to approximate the total population with respect to age, gender, location and ethnicity. Incomplete interviews, people hanging up, non-responses and unanswered calls are automatically recorded, which helps final sampling decisions.
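To illustrate how such weighting works in principle, here is a minimal post-stratification sketch in Python. The urban/rural proportions are invented for illustration only; they are not actual census or poll figures, and real pollsters weight over several variables at once.

```python
# Post-stratification sketch: each respondent in a demographic cell is
# weighted by (population share) / (sample share) of that cell, so that
# over- or under-represented groups count correctly in the final result.
# The proportions below are illustrative, not real figures.

population_share = {"urban": 0.55, "rural": 0.45}   # assumed census proportions
sample_share     = {"urban": 0.70, "rural": 0.30}   # assumed achieved sample

weights = {cell: population_share[cell] / sample_share[cell]
           for cell in population_share}

# Rural respondents, under-sampled here, each count 1.5x; urban
# respondents, over-sampled, each count less than 1.
print(weights)
```

The same arithmetic extends to any cross-classification (age by gender by division), provided the population proportions for each cell are known, which is exactly why census data matter to pollsters.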

Other things being equal, a sample size of 1,000-plus produces a margin of error of ±3 percent2 for homogeneous populations, but the sample would arguably need to be larger (perhaps by as much as 8-9%) in Fiji, where there are major differences between rural and urban areas and in income, education, poverty, ethnicity and religion. Sample sizes under 1,000 have higher margins of error, but little improvement is gained by increasing sample sizes above 1,000 – if the population is relatively homogeneous. NZ is 75% Pakeha (European); in Fiji only 53% is indigenous iTaukei.
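The ±3 percent figure can be checked with the standard worst-case formula for a simple random sample, z·√(p(1−p)/n) with p = 0.5 and z = 1.96 for 95% confidence. A short Python sketch (the function name is mine):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case margin of error for a simple random sample of size n,
    at 95% confidence when z = 1.96. p = 0.5 maximises p*(1-p)."""
    return z * math.sqrt(p * (1 - p) / n)

# n = 1000 gives roughly 0.031, the familiar "plus or minus 3 percent";
# halving the sample to 500 only widens it to about 0.044.
print(round(margin_of_error(1000), 3))
print(round(margin_of_error(500), 3))
```

Note that this applies to a simple random sample of a homogeneous population; clustered, quota or heterogeneous samples like Fiji's have larger effective margins, which is the point made above.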

The Colmar Brunton poll uses CATI, as described briefly above and in the table, and as detailed in their report for 1News in May 2018, to randomly sample all of New Zealand. http://www.colmarbrunton.co.nz/wp-content/uploads/2018/05/Prelim_19-23-May-2018_1-NEWS-Colmar-Brunton-Poll-report-.pdf

The Tebbutt poll is similar. Its sample is drawn proportionally from all four administrative divisions (Central, Western, Northern and Eastern). The island of Rotuma, to the north of the main islands, is not sampled.

The Razor poll differs markedly. It interviews 500 people in the Central, 300 in the Western and 200 in the Northern divisions. The remote Eastern Division and the island of Rotuma are not sampled.

Razor's sampling method also differs from Tebbutt's in that it is conducted in face-to-face interviews at urban bus stops. This is rather rough and ready, but it should result in good coverage of rural people visiting urban places and of the poor, who more typically travel by bus. Conversely, the urban educated middle class (which has been vocal in criticizing the Bainimarama Government), possibly over-represented in the Tebbutt polls, would be under-represented.

Some people claim that there is a “climate of fear” in Fiji that results in opponents of the Government not revealing their true political feelings. If this is the case, it would add to the unreliability of polls and possibly offer some explanation for the high proportion who, in Tebbutt polls (see the third posting in this series), said they were “undecided” on the party and preferred prime minister questions. Significantly, Razor polls rarely report more than 10% undecided, though this could be because the undecided were excluded from the poll.

The difference raises an interesting question.

The "comfort" of those interviewed
Do those interviewed by phone feel more free to express their views because the interviewer is more anonymous – or less free because the calls and their responses can be automatically traced back to them? Accusations were made in 2014 that Vodafone listens in to calls3.

In this respect, could Razor's bus stop interviews have an advantage over phone polling? Those interviewed would be difficult to trace once their bus leaves!

To the best of my knowledge, the only research on the effect of subject bias was a study conducted by my USP postgraduate students in 1995. This research on ethnicity and gender subject biases, noted in the first posting, showed statistically significant differences when the ethnicity and gender of interviewer and interviewed were mismatched. This effect could be smaller in phone polling (although ethnicity could probably be deduced), and larger or smaller in face-to-face polling, depending on whether interviewer and interviewed are "matched." It is unclear whether Razor designers and poll supervisors make an effort to match, which would make those interviewed more comfortable and therefore more likely to express their true opinions.

Both pollsters weight their results, trying to achieve demographic similarity with the total population, but I doubt that either tests for the possibility of interviewer-interviewed mismatching, which almost certainly increases the number of unreliable answers. If they do not test, I would consider this a polling method error no less important than failing to weight their results by the demographic variables.

The Fiji political polls ask two questions: if an election were held tomorrow, who is your preferred Prime Minister, and which is your preferred political party? There are, however, slight differences between the Tebbutt and Razor polls that could influence results, but neither actually asks who respondents would vote for.

In contrast, the Colmar Brunton poll asks “Which political party would you vote for?” or for people who say they don't know “Which one would you be most likely to vote for?”

From the Truth for Fiji blog
Paying the piper
Finally, the Tebbutt (http://www.tebbuttresearch.com/) polls are paid for by the Fiji Times (https://en.wikipedia.org/wiki/Fiji_Times). One poll in 2016 asked those polled for their preferred newspaper. Unsurprisingly, the Fiji Times came first, with its rival the Fiji Sun a poor second!

The Times has been generally anti-Government, though in the past two years overtly less so, possibly in fear of retribution. Relations between the Times and Government have been strained, and Government does not place any advertisements in the paper.

The Fiji Sun, in contrast, is overtly pro-Government. The Razor polls are paid for by the firm CJ Patel (cjp.com.fj/), which also owns the paper. The firm has greatly benefited from Government contracts and the Sun from Government advertising.

The extent to which ownership and political leanings influence poll results is unknown, but while the so-called “climate of fear” has diminished, it is still evident and could influence results, especially those of the face-to-face Razor polls.

One reputable Government opponent who has been critical of poll findings and methodologies is Dr Wadan Narsey (https://narseyonfiji.wordpress.com/). This posting would be incomplete if it did not urge readers to read his blog, especially where it refers to polls and his critique of a 2011 Lowy Institute poll.6

2. Sample size. Surveysystem.com calculates the sample size needed for 95% and 99% confidence levels and any selected acceptable margin of error. My estimates for the Tebbutt and Razor polls are based on Surveysystem calculations.
4. Subject bias. A C Walsh, “Ethnicity, Gender and Survey Biases in Fiji”, The Journal of Pacific Studies, 19 (1), 1995: 145-157.
5. A little dated but most useful critique of political polls in Fiji is Wadan Narsey's “Making Sense of Opinion Polls”: http://intelligentsiya.blogspot.com/2014/06/prof-wadan-narsey-elections-issues-14.html See also his http://pacific.scoop.co.nz/2011/09/exposing-the-flaws-of-the-fiji-opinion-poll-by-the-lowy-institute/ and my critique of Wadan's criticism at http://www.pmc.aut.ac.nz/articles/rebuttal-arguments-validity-fiji-lowy-poll