Cogito, ergo sum. I think, therefore I am. (René Descartes, mathematician and philosopher, 1596-1650)

Friday, 10 August 2018

3. Poll Results and their Commentators

Poll Results and Comments

Election results: "I see a man. No, a woman, in your life."

Colmar Brunton and most New Zealand political polls were remarkably accurate close to the 2017 election, as this table from the National Business Review shows. The polls differed little from each other and were very close to the actual election results shown on the bottom line.

A month earlier, however, Colmar Brunton was not as accurate. It had Labour on 37%, purportedly due to the "Jacinda Effect", National on 44%, NZ First on 10%, and the Greens on 4%, under the 5% threshold and out of Parliament. A Stuff heading read "Green Party out of Parliament, Labour surges in new poll".

One wonders whether polls improve as they get closer to elections.

Fiji polling in 2014
Fiji polling has not been as accurate. A Tebbutt poll held three weeks before the 2014 Fiji Election produced the heading "Fiji Election: Final pre-election poll shows a drop in support for Frank Bainimarama's Fiji First Party".

Asked for preferred prime minister (the preferred political party question was not asked!), Bainimarama polled 49%, down from 60% a month earlier, and Ro Teimumu polled 20%, up from 17%. The ABC stated that if the "results were translated into votes on election day … Bainimarama would have to form a coalition."


And Fiji academic Professor Brij Lal from the Australian National University supported this view, saying the "(decline) in the poll results was not unexpected, ... so a coalition government is very much on the cards ... with either SODELPA or with other minor parties which might win some seats".

The actual election result, as shown in the table below, proved them all wrong.

FijiFirst won 59.2% of the votes, SODELPA 28.2% and NFP 5.5%. No other party crossed the 5% threshold.

The main contenders in 2018



(Left to right) Voreqe (Frank) Bainimarama, FijiFirst; Ro Teimumu Kepa, Social Democratic Liberal Party (SODELPA); Dr Biman Prasad, National Federation Party; Sitiveni Rabuka, Chairman, SODELPA; and Mahendra Chaudhry, Fiji Labour Party.


FijiSun/Razor poll results
It is exceedingly difficult to obtain a chronological view of poll results over time because the newspapers seldom report results and comments in a consistent manner (the FijiSun, with its more frequent polls, has created most of the problems). While reports usually show the results for the major parties, at other times the focus is on preferred prime minister or the number of parliamentary seats won or lost. My "best effort" is shown in the table below.

FijiSun/Razor Poll Results (%)

Approximate polling dates                 | Nov 2017 | April 2018 | May 12 | May 25 | (a) | May 29 | July 21 | July 27 | Aug 3
Fiji First Party                          |    63    |     62     |   53   |   63   |  –  |   65   |   60    |   67    |  57
Social Democratic Liberal Party (SODELPA) |    21    |    16.8    |   16   |   16   | 19  |   15   |   26    |   18    |  26
National Federation Party (NFP)           |     3    |     5.2    |    5   |    5   | 5.1 |   10   |    6    |    6    |  8.9
Fiji Labour Party (FLP)                   |     3    |     1.2    |    7   |    7   | 3.7 |    5   |    4    |    5    |   6
Undecided (b)                             |    10    |    14.8    |   19   |    9   |  –  |    5   |    4    |    8    |  4.4

a. See the paragraph below. b. And minor parties.

I have been unable to obtain the poll results for some weeks. For example, the May 29 Sun article "Battle for iTaukei Votes in Full Swing" reported that SODELPA support "dropped 19 percent to 15 percent" compared with the previous week, with the NFP and FLP respectively increasing their support from 5.1% to 10% and from 3.7% to 5%. No mention is made of FijiFirst support, other parties or the undecided – and the figures mentioned do not match those of the previous week (see (a) in the table). My efforts to obtain poll results from the Sun by email have so far not been successful.

The Sun put the assumed SODELPA drop by May 29 down to "internal divisions on a number of issues" and the appointment of the iTaukei Pio Tikoduadua as President of the NFP, a party that has historically obtained most of its votes from Indo-Fijians.

By July 21, however, SODELPA was seen to have “pulled away from the opposition pack” and the “National Federation Party … failed to live up to expectation”.

Weekly changes in poll proportions, which have rarely been statistically significant, do not merit these conclusions.
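To illustrate (a rough sketch only, assuming simple random samples of about 1,000 respondents per poll, roughly the Razor design of 500 + 300 + 200 described in the second posting), a standard two-proportion z-test shows why a swing of two or three points between successive weekly polls is usually within sampling noise:

```python
from math import sqrt

def two_proportion_z(p1, n1, p2, n2):
    """Two-proportion z-statistic for the difference between two poll shares."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Illustrative three-point swing between two weekly polls of ~1,000 respondents
# each (the per-poll sample size is an assumption based on the Razor design).
z = two_proportion_z(0.19, 1000, 0.16, 1000)
print(round(z, 2))  # ~1.77, below the 1.96 needed for significance at the 95% level
```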

The Sun's political leaning was also reflected in another poll question that week on whether people thought Opposition parties were using race and religion in their campaigns. Most (60%) said yes, 21% said no, and 19% were unsure. I wonder whether those interviewed were also asked what FijiFirst was using in its campaign.

By August 3 the Sun's Nemani Delaibatiki had revised his position. He claimed the NFP was "rebuilding" and could hold "the balance of power". NFP leader Biman Prasad took this up, saying he could become the "Winston Peters of Fiji."

This view echoed the "coalition" prediction – which proved to be very wide of the mark – made by Brij Lal before the 2014 election. Delaibatiki, however, noted Prasad was calling for an election now because it appears "support is slipping away from their grasp." I'm left wondering whether the NFP is holding or slipping.

Overstatements and using polls for political ends
Political parties everywhere seek to divide their opposition with talk of internal divisions, leadership rivalry, and possible coalition arrangements. Fiji is no exception.

Opinions based on poll results include the Sun's May 12 heading "Kepa grabs lead from Rabuka". She scored 12%, leaving Rabuka "trailing (sic!)" at 11 percent – a one percentage point difference. It went on to stoke the fires: "While there have been unity moves … the gap between the Ro Teimumu camp and Mr Rabuka's camp is widening."

If it is, it will certainly help FijiFirst.

Yet another example of overstated media statements, based this time on the Tebbutt poll, was February's Fiji Times release stating that charismatic local celebrity Lenora Qereqeretabua (NFP) had "soared" in the preferred prime minister stakes – from one to three percent!
The NFP candidate's support compared with NFP leader Biman Prasad's 2%, Bainimarama's 64% and Rabuka's 23%. Soar away!


Tebbutt polls
A later Tebbutt poll showed her on 8% as a possible woman prime minister, well behind SODELPA's Ro Teimumu on 49%, Lynda Tabuya on 14%, and FijiFirst's Rosy Akbar on 13%.

These statements followed on from a Tebbutt poll in March which showed that 70% of those polled said Fiji is ready for a woman prime minister, and Prof Ratuva thought "Fiji might (have one) after this year's general election."

Comments by both papers often amount to unsupported hyperbole. In March a FijiSun/Razor poll reported a "week of political dynamism" with Rabuka and Ro Teimumu "neck and neck" on 11%. A week earlier Rabuka had been on 20%, but Ro Teimumu's 5% was due to her candidacy being uncertain. All that had happened in the week of dynamism was the confirmation of her candidacy. Nothing had really changed.

Some interesting deductions have also been made by academics. For example, a 5-8 February Tebbutt poll led Fiji academic Professor Steven Ratuva, Director of the Macmillan Brown Centre for Pacific Studies at Canterbury University, to conclude that the forthcoming election would be a close contest.

The poll showed FijiFirst at 32%, down from 37% a year earlier, SODELPA at 22%, NFP at 3%, FLP at 1%, and a very large 34% undecided, down from 40% a year earlier.

Ratuva put the undecided aside and, from those who had stated a party preference, concluded FijiFirst would win 56%, SODELPA 38% and NFP 5%, which would give FijiFirst only 16 parliamentary seats while the opposition parties would have 13 seats, leaving 21 seats to be decided by the undecided. If a high proportion of the undecided do not vote FijiFirst, Ratuva thinks the election will result in a coalition government. Prof Brij Lal said this about the 2014 election, and he was shown to be very wrong.
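A minimal sketch of that renormalisation step, using the February figures quoted above (my own illustration of the arithmetic, not Ratuva's method; setting the undecided aside as a separate block is the only assumption):

```python
# February Tebbutt poll shares quoted above; the undecided (34%) are set aside.
poll = {"FijiFirst": 32, "SODELPA": 22, "NFP": 3, "FLP": 1}

decided_total = sum(poll.values())  # 58% of respondents named a party
shares = {party: 100 * pct / decided_total for party, pct in poll.items()}
print({party: round(share) for party, share in shares.items()})
# {'FijiFirst': 55, 'SODELPA': 38, 'NFP': 5, 'FLP': 2} -- close to the 56/38/5 quoted
```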

The large undecided vote is worrying and its causes are far from clear. It has remained between 35% and 40% for a year. But this kind of statistical adjustment and speculation does not make things clearer.

USP economist Dr Nilesh Gounder, reported in the FijiTimes1, says the February Tebbutt poll shows FijiFirst support at an "all-time low" but notes that support for Bainimarama as PM rose by over 20%. He concluded that there was a clear "delink" between the preferred party and preferred PM polls. He could be right if the Tebbutt polls are reasonably accurate on both types of poll, but the FijiSun/Razor polls show no delinking.



Comment from politicians and the street
Opposition leader Sitiveni Rabuka, commenting on Tebbutt polls that suggest a decline in the vote for FijiFirst, says the polls show a "paradigm shift", while FLP leader Mahendra Chaudhry says "people are looking for a more dynamic leader (than Bainimarama) with a clear vision for Fiji." Both claim the polls support their opinions.

Street opinion on the polls includes these comments from FijiFirst opponents:

"There's something wrong with the polls or the people are dumb."
"The polls could be right. The people are blindfolded."
"People are distracted by rugby..."
"The vibe on the street is that people want change. You can almost feel the tension in the air."
"I know people who voted FijiFirst but will change their vote this time."
"Democracy is dead as shown by the poll results."


Summary
  • The Fiji population is very diverse, making accurate sample polls difficult.
  • While the Tebbutt poll methodology is generally better than that of the Razor polls, it too is far from perfect. And a better methodology does not necessarily produce more accurate results.
  • The FijiSun/Razor and FijiTimes/Tebbutt polls often arrive at very different conclusions, probably due to differing methodology and political leaning.
  • The large undecided vote in the Tebbutt polls is a matter of concern, but its causes are probably multiple, and discussion of them purely speculative.
  • Few of the results are statistically significant.
  • Few informed conclusions can be drawn from the results. Opinions are fine, but they are not necessarily facts.
Finally, it would be interesting to know whether polls actually influence people's voting behaviour. An obvious answer is that there would be no polls unless someone thought they did, though some pollsters would argue they only seek to measure opinion, not to influence it.

Overseas research has shown that political polls can influence public opinion, especially among the truly undecided or when there has been a new "cataclysmic" event. They have also been shown to lead to tactical voting, going along with the majority, or changing sides to be more acceptable. Minority parties are most likely to be affected.

There has been no research in Fiji on the influence of polls, but my work on Fiji political blogs2 concluded that most people read and supported blogs that "confirmed their predispositions." I think this is also likely to be true of polls.

All political polls ask people what they think of government. Putting the boot on the other foot, it would be pleasing to see Government conduct polls asking the people what they think.

Government's present use of walkabouts, consultations, commissions and standing committees fails every test of random, representative sampling!


NOTES
1. https://asiapacificreport.nz/2018/03/05/undecided-up-for-grabs-and-decisive-for-fiji-election-says-academic/ and http://newcloudfto.fijitimes.com/story.aspx?id=436673
2. http://newcloudfto.fijitimes.com/story.aspx?id=436673













Thursday, 9 August 2018

2. Comparing Three Sample Polling Methods

2. Polling Methods Used by the 1News/Colmar Brunton Polls in NZ and the FijiTimes/Tebbutt and FijiSun/Razor Polls in Fiji

CATI at work

  • The methods used, and the methods disclosure made, by the NZ 1News Colmar Brunton poll provide a useful yardstick against which to consider the methods used by the two Fijian polls, FijiTimes/Tebbutt and FijiSun/Razor, which use very different methodologies.


The table below summarises the sampling methods used by the three pollsters.


Source: Compiled by Croz Walsh.


CATI and face-to-face interviewing
The Colmar Brunton and Tebbutt polls on preferred political party and Prime Minister typically interview 1,000-plus people, randomly selected using computer assisted telephone interview (CATI) software1, in which the sample is drawn by the computer from a list of landline numbers and from the cellphone numbers of people with mobile phones. The questions are read from the computer and the answers entered into the computer by the interviewer. The typical questions asked are "Who is your preferred Prime Minister?" and "Which is your preferred party?"

CATI is considerably cheaper than face-to-face interviewing and is reputed to produce high quality data, avoid interviewer misinterpretations and take less time. Callbacks are automatically managed. Greater accuracy is achieved because everything is displayed on the computer, the interviewer has more control of interviews allowing recaps and recalls, and a record is kept of the number of interviews that were incomplete or dropped.


CATI, however, is not without its problems, an important one being that those not covered are likely to include more older people and more of the unemployed, the uneducated and those on lower incomes, simply because more of these people do not have landlines or cellphones. In Fiji, rural areas are also likely to be under-represented. One possible result is that those interviewed tend to be more of the political right than the political left.

I am not sure how the person to interview on a landline is determined, but I expect that knowledge of the proportions in the total population allows interviewers to select the household member to be interviewed according to age, sex and other demographic characteristics, so that the final poll result comes close to those proportions in the total population. Razor's face-to-face interviewing is possibly steered by the poll organizers and supervisors to achieve their chosen ethnic, age and gender proportions.

Weighting and sizes
The poll results are weighted to approximate the total population with respect to age, gender, location and ethnicity. Incomplete interviews, people hanging up, non-responses and unanswered phones are automatically recorded, which helps final sampling decisions.
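A minimal sketch of how such weighting works (my own illustration with invented group shares; neither pollster publishes its weighting scheme): each respondent is weighted by the ratio of their group's share of the population to its share of the sample.

```python
# Post-stratification weighting sketch with invented figures. If a group makes
# up 50% of the population but only 40% of the sample, each of its respondents
# counts as 0.5 / 0.4 = 1.25 respondents in the weighted result.
population_share = {"group_A": 0.50, "group_B": 0.35, "group_C": 0.15}
sample_share     = {"group_A": 0.40, "group_B": 0.40, "group_C": 0.20}

weights = {g: population_share[g] / sample_share[g] for g in population_share}
print({g: round(w, 3) for g, w in weights.items()})
# {'group_A': 1.25, 'group_B': 0.875, 'group_C': 0.75}
```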

Other things being equal, a sample size of 1,000-plus produces a margin of error of ±3 percent2 for homogeneous populations, but in Fiji, where there are major differences between rural and urban areas and in income, education, poverty, ethnicity and religion, a sample of that size arguably carries a larger margin of error (perhaps as much as 8-9%). Sample sizes under 1,000 have higher margins of error, but little improvement is gained by increasing sample sizes above 1,000 – if the population is relatively homogeneous. NZ is 75% Pakeha (European); in Fiji only 53% of the population is indigenous iTaukei.
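The ±3 percent figure comes from the textbook margin-of-error formula for a simple random sample at the 95% confidence level, sketched below (a simplification; neither pollster's actual design is this simple):

```python
from math import sqrt

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a simple random sample of size n,
    at the worst case p = 0.5 (a 50/50 split)."""
    return z * sqrt(p * (1 - p) / n)

print(round(100 * margin_of_error(1000), 1))  # ~3.1 percentage points
print(round(100 * margin_of_error(500), 1))   # ~4.4 -- smaller samples widen the margin
print(round(100 * margin_of_error(2000), 1))  # ~2.2 -- doubling beyond 1,000 gains little
```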

The Colmar Brunton poll uses CATI, as described briefly above and in the table, and as detailed in its May 2018 report for 1News, to randomly sample all of New Zealand: http://www.colmarbrunton.co.nz/wp-content/uploads/2018/05/Prelim_19-23-May-2018_1-NEWS-Colmar-Brunton-Poll-report-.pdf

The Tebbutt poll is similar. Its sample is drawn proportionally from all four administrative divisions (Central, Western, Northern and Eastern). The island of Rotuma, to the north of the main islands, is not sampled.

The Razor poll differs markedly. It interviews 500 people in Central, 300 in Western and 200 in the Northern divisions. The remote Eastern Division and the island of  Rotuma are not sampled.

Razor's sampling method also differs from Tebbutt's in that it is conducted in face-to-face interviews at urban bus stops. This is rather rough and ready, but it should result in good coverage of rural people visiting urban places and of the poor, who more typically travel by bus. Conversely, the urban educated middle class (which has been vocal in criticizing the Bainimarama Government), possibly over-represented in the Tebbutt polls, would be under-represented.

Some people claim that there is a “climate of fear” in Fiji that results in opponents to Government not revealing their true political feelings. If this is the case, it would add to the unreliability of polls and possibly offer some explanation for the high proportion who, in Tebbutt polls (see the third posting in this series) said they were “undecided” on the party and preferred prime minister question.   Significantly, Razor polls rarely report more than 10% undecided, though this could be because they were excluded from the poll.

The difference raises an interesting question.

The "comfort" of those interviewed
Do those interviewed by phone feel more free to express their views because the interviewer is more anonymous – or less free because the calls and their responses can be traced back to them? Accusations were made in 2014 that Vodafone listens in to calls3.

In this respect, could Razor's bus stop interviews have an advantage over phone polling? Those interviewed would be difficult to trace once their bus leaves!

To the best of my knowledge, the only research on the effect of subject bias was conducted by my USP postgraduate students in 1995. This research on ethnicity and gender subject biases, noted in the first posting, showed statistically important differences when the ethnicity and gender of interviewer and interviewed were mismatched. This could matter less in phone polling (although ethnicity could probably be deduced), and more or less in face-to-face polling, depending on whether the interviewer and interviewed are "matched." It is unclear whether the Razor designers and poll supervisors make an effort to match them, which would make those interviewed more comfortable and therefore more likely to express their true opinions.

Both pollsters weight their results, trying to achieve demographic similarity with the total population, but I doubt either tests for the possibility of interviewer-interviewed mismatching, which almost certainly increases the number of unreliable answers. If they do not test, I would consider this a polling method error no less important than not weighting their results by the demographic variables.

The Fiji political polls ask two questions: if an election were held tomorrow, who is your preferred Prime Minister, and which is your preferred political party? There are, however, slight differences in the Tebbutt and Razor wording that could influence results, but neither actually asks who people would vote for.

In contrast, the Colmar Brunton poll asks “Which political party would you vote for?” or for people who say they don't know “Which one would you be most likely to vote for?”

From the Truth for Fiji blog
Paying the piper
Finally, the Tebbutt polls (http://www.tebbuttresearch.com/) are paid for by the Fiji Times (https://en.wikipedia.org/wiki/Fiji_Times). One poll in 2016 asked those polled for their preferred newspaper. Unsurprisingly, the Fiji Times came first, with its rival the Fiji Sun a poor second!

The Times has been generally anti-Government, though in the past two years overtly less so, possibly in fear of retribution. Relations between the Times and Government have been strained, and Government does not place any advertisements in the paper.

The Fiji Sun, in contrast, is overtly pro-Government. The Razor polls are paid for by the firm CJ Patel (cjp.com.fj), which also owns the paper. The firm has greatly benefited from Government contracts, and the Sun from Government advertising.

The extent to which ownership and political leanings influence poll results is unknown, but while the so-called "climate of fear" has diminished, it is still evident and could influence results, especially those of the face-to-face Razor polls.

One reputable Government opponent who has been critical of poll findings and methodologies is Dr Wadan Narsey (https://narseyonfiji.wordpress.com/). This posting would be incomplete if it did not urge readers to read his blog, especially where it refers to polls and his critique of a 2011 Lowy Institute poll.5

NOTES
2. Sample size. Surveysystem.com calculates the sample size needed for 95% and 99% confidence levels and any selected acceptable margin of error. My estimates of the Tebbutt and Razor polls are based on Surveysystem calculations.
4. Subject bias. A C Walsh, “Ethnicity, Gender and Survey Biases in Fiji”, The Journal of Pacific Studies, 19 (1)1995 :145-157.
5. A little dated but most useful critique of political polls in Fiji is Wadan Narsey's “Making Sense of Opinion Polls.” http://intelligentsiya.blogspot.com/2014/06/prof-wadan-narsey-elections-issues-14.html See also his http://pacific.scoop.co.nz/2011/09/exposing-the-flaws-of-the-fiji-opinion-poll-by-the-lowy-institute/ and my critique of Wadan's criticism in http://www.pmc.aut.ac.nz/articles/rebuttal-arguments-validity-fiji-lowy-poll













Tuesday, 7 August 2018

1. Why Are Polls Often So Inaccurate?



  • This first of four postings will consider why polling is often less than accurate.
  • The second will examine the polling methods for preferred party and PM polls used by the 1News/Colmar Brunton polls in NZ and the FijiTimes/Tebbutt and FijiSun/Razor polls in Fiji.
  • The third will report on some preferred party and PM poll results and reflect on what some NZ and Fiji commentators have had to say about them.
  • The fourth will look at some poll results in Fiji that touch on other political issues, and what commentators have deduced from them.
         --  Croz

With the Fiji elections approaching, it is timely to revisit the value of opinion surveys. If you or your acquaintances read poll results, you should be sure about how they are conducted, or the wool may be pulled over your eyes.


Polling Methodologies
There has been more than the usual debate worldwide about the accuracy of polls, especially following their inaccuracy in calling the US election result. How wrong they were in predicting that Hillary Clinton would beat Donald Trump.

Polls are based on samples intended to reflect the opinions of the total population. Unfortunately, the methodology used to establish a truly representative sample (which is often not fully revealed by pollsters) is often far from perfect. Today, to reduce costs – and, with some methodologies, to improve accuracy – many sample interviews are conducted by telephone. We shall consider polling methods more fully in the second posting.

So why were the polls less than accurate? 
In a "perfect" poll, pollsters need to know accurately the size, demographic characteristics and location of the total population they are studying – a relatively easy task in a homogeneous population but much more difficult in a diverse population like Fiji – so that they can draw an accurate sample, typically a thousand or more people who are assumed to be representative of the total population. From this, pollsters can estimate the statistical probability of their sample conclusions being within plus or minus so many percentage points of the population "reality". This approach is called inferential statistics.
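As a rough illustration of that "plus or minus" estimate (the standard simple-random-sampling formula, not any particular pollster's published method), the sample size needed for a chosen margin of error at 95% confidence can be sketched as follows:

```python
from math import ceil

def sample_size(margin, p=0.5, z=1.96):
    """Sample size needed for a given margin of error at 95% confidence,
    assuming simple random sampling and the worst case p = 0.5."""
    return ceil(z ** 2 * p * (1 - p) / margin ** 2)

print(sample_size(0.03))  # 1068 respondents for a +/-3% margin
print(sample_size(0.05))  # 385 respondents for a +/-5% margin
```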

Given the number of people who hang up on telephone interviews, or who grow weary and offer any sort of answer if there are too many questions or if the interview is too long, or who feel they are being prompted towards answers that match the opinions of the interviewer, it is a wonder the US polls were as accurate as they were.

Ideological and confirmation biases (respectively, when polls are looking for what we believe in, or looking for more evidence to support what we believe in) are not thought to have had much influence in the US election, and I doubt they have much influence in New Zealand. I am less certain that this is always the case in Fiji. My Fiji research (“Ethnicity, Gender and Survey Biases in Fiji”, The Journal of Pacific Studies, 19 (1) 1995:145-157) showed responses on sensitive issues significantly differed when the ethnicity and gender of  interviewers and those interviewed differed. 

So why use telephone polling?
Because it is the most cost-efficient polling method. The costs of face-to-face household interviews are beyond the budget of most pollsters, and face-to-face interviews possibly have bigger problems of interviewer and subject bias (the biases of the interviewer, and the perception that those interviewed hold of the interviewer), which are especially important when, for example, the ethnicity or class of the interviewer is very different from that of those interviewed.

Of course, drawing the sample population from households that own telephones has always produced a sample bias in favour of the urban middle class. But today fewer people, even in urban areas and among the middle class, use landline telephones and, if America is anything to go on, fewer people are fully responding to telephone polls. Up to 36% in recent polls (up from an earlier 9%) do not fully respond in telephone polling.

Some are not at home or do not answer the phone; others hang up during the interview and an unknown number grow weary of the questions and do not answer with care.

It has therefore become increasingly difficult, with fewer landline phones and poorer responses, to establish an accurate sample from which reliable conclusions can be drawn.

The methods used by pollsters to compensate for these missing people, who should be part of their sample, are at best informed guesstimates and often not very reliable statistically.

Hence, for the most part, the inaccuracy of many polls.

Internet and Predictive polling 
 Internet polling using mobile phones, or a mix of landline and mobile phones, would be cheaper than telephone polling but establishing a “population” and sample size would be no less difficult. 

Predictive polling ("Who do you think will win the next election if it were held this week?") is thought to be more accurate than asking people who they will vote for.



The Use of Polls
Despite their limitations, political polls still have their uses because they are invaluable in getting some sense of popular thoughts on a range of issues, although I'm not aware of any research showing the effect of political polls on actual voting behaviour. 

In assessing poll findings, we should always consider the political context within which they are conducted and used, and the possible political motivations of their publishers.

We should be informed about, and wary of, their methodologies.

We should insist on disclosure of the numbers not sampled and publication of margins of error, with a clear indication of which results are statistically significant – and which are not.

And we should not place too much weight on one poll result. Several polls over several weeks are likely to be more accurate.