WHAT ELECTION MODELLERS CAN LEARN FROM PRO SPORTS GAMBLERS

OPINION POLLS CAN’T EVER BE ‘WRONG’ (BUT POLLSTERS CAN – AND OFTEN ARE).

Opinion polls can’t be ‘wrong’ because they aren’t predicting or projecting anything. They are just a set of data, a bunch of numbers.

An opinion poll is a piece of data-based evidence. Like any kind of evidence, polls need to be interpreted to make sense of them, and to weigh their significance. In other words, they are a tool that can be used in the process of making a decision, not a definitive end in themselves.

After the 2015 UK general election there was an almighty outcry about how all the opinion polls were so wrong. The British Polling Council said that the pre-election polls of its members were ‘clearly not as accurate as we would like’ and announced it was conducting an inquiry into why they were so wrong. As opinion polls can’t be wrong, this is a nonsense.

What people mean is that they were annoyed/surprised/disgusted that the eventual result of the national vote was very different from the opinion polls. But people shouldn’t be annoyed/surprised/disgusted, because while opinion polls are certainly a legitimate piece of evidence when looking to project the outcome of an actual election, they are a fairly weak form of evidence.

Opinion polls are fundamentally limited by the fact that people who respond to them have no incentive to tell the truth. They can say whatever they want with no repercussions.

And there is never any obligation to respond to a request from a pollster, so it is entirely possible that a bias will emerge where voters more inclined to vote in a particular way will also be less likely to respond at all, or will respond untruthfully if they do. It seems probable that this occurred in the UK ’15 election, where the ‘shy Tory’ effect saw the Conservatives do worse in the polls than they did on polling day itself.

If you look closer at the data from opinion polls there will usually be a sizeable minority of responders who were ‘don’t knows’. Opinion polls tend to ignore these people and simply report the percentages for each party/candidate based on the share of responders who made a positive choice. Effectively that distributes the ‘don’t knows’ pro-rata among the choices. But if a large proportion of the ‘don’t knows’ all end up voting the same way then the poll result will be totally skewed.
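To make the arithmetic concrete, here is a minimal sketch in Python (all numbers hypothetical) of how the usual treatment of ‘don’t knows’ can hide a skew that only shows up on polling day:

```python
# A minimal sketch (hypothetical numbers) of the 'don't knows' problem.
# Publishing only the shares of decided responders implicitly splits the
# undecideds pro-rata; if they actually break one way, the headline
# figures are skewed.

decided = {"Party A": 380, "Party B": 340, "Party C": 130}  # raw responses
dont_knows = 150                                            # undecided responders
total_decided = sum(decided.values())

# Headline figures as usually published: undecideds ignored (pro-rata split).
headline = {p: 100 * n / total_decided for p, n in decided.items()}

# What happens if 70% of the undecideds break for Party A on the day.
breaks = {"Party A": 0.70, "Party B": 0.20, "Party C": 0.10}
actual = {
    p: 100 * (n + breaks[p] * dont_knows) / (total_decided + dont_knows)
    for p, n in decided.items()
}

for p in decided:
    print(f"{p}: headline {headline[p]:.1f}%, on the day {actual[p]:.1f}%")
```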

It’s worth noting though that the UK ’15 election polls weren’t ‘that’ wrong. It’s not like they ever showed a UKIP majority or anything. The swing towards the Conservatives on polling day was only about 6% versus the latest polls. In the grand scheme of things 6% is not a huge number. But its ramifications for the ’15 UK election result were obviously significant.

So it’s not that the opinion polls were totally wrong, the issue is just that people had too much faith in their results being mirrored in the actual vote. An error of analysis, more than data collection, really.

The Conservative party ended up winning the popular vote in the ’15 UK election by 6.6% over Labour (37.8% to 31.2%). No opinion poll published by any of the leading UK polling companies at any point between the previous election in May 2010 and polling day in 2015 had the Tories in a lead of more than 6%.

Any pollster or election analyst who predicted a 6.6% Tory win on the eve of the election would have been a conspicuous outlier. So it’s not really fair to blame all the pollsters for getting the numbers in their polls wrong. Their mistake was in the interpretation of their data, especially their over-confidence and reliance upon it.

HOW TO TELL AN AMATEUR FROM A PROFESSIONAL MODELLER

If someone tells you that they have a model that can predict the outcome of an election then they are an amateur. Only amateurs talk about ‘predicting’ something as random and chaotic as an election.

Professional gamblers/investors/modelers generally distinguish themselves by using the word ‘projection’ in place of ‘prediction’. It may sound like a pedantic distinction, but it’s actually a critical difference that helps professionals to profit from gambling/investing activities, and condemns the majority of amateurs to making a loss.

The key word is ‘randomness’. Almost everything in the universe is random to some degree; only a few immutable laws of nature are not.

In order to truly predict, with precise accuracy, the result of an election it would be necessary to know the inner thought processes of every individual who could influence the result, right up to the moment they cast their vote, plus various other factors such as the process of vote-counting, how many ballots would be spoiled or miscounted, etc. It is plainly unrealistic to know these things, so by logical extension, a true prediction of what will happen is impossible.

Some things are extremely random, some are extremely un-random. But whatever it is, if it’s happening in the future and it is complex, then it is unpredictable (as in: you literally cannot truly predict it). Every expression of what you think will happen in the future is therefore an estimation, a guess. When you put money on a number on a roulette table you are making a guess. If the number comes up after the wheel is spun, you didn’t truly ‘predict’ the outcome. A spin of a roulette wheel is a random, chaotic event. You guessed, and you got lucky.

If you bet on Black 24 you got lucky with a guess, you didn’t ‘really’ predict it.

A guess is a guess no matter whether it was arrived at by saying something off the top of your head with no prior thought, or it involved programming a computer with a complex model that attempted to simulate reality. Over a long period the computer model is likely to be less wrong than the blind guesser, but it will never be able to predict something as complex as an election, or a fair spin of a roulette wheel.

It is no more possible to predict what will happen in an election than it would be possible to predict the final resting positions of a thousand marbles thrown in a cluster onto the floor of a gym hall. The forces of randomness, the many millions of tiny factors that can come into play to decide the outcome are utterly un-knowable.

So a professional’s model will generate a range of projections, like ‘65% chance of Tories winning a majority, 30% hung parliament, 5% something else’, rather than a prediction. It is fine to talk about a single number as an ‘expectation’ – which is the average outcome were the event to be conducted an infinite number of times.

But a pro uses an expectation as the base for a distribution of the chances of a range of results occurring, not as a prediction of what they think will happen, which is the typical amateur mistake.
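As a sketch of the difference, here is a toy Monte Carlo projection in Python. The expected lead, the polling error and the majority threshold are all assumptions, but the output is the professional’s kind of output: a distribution of chances, not a single prediction.

```python
# A toy Monte Carlo projection: hypothetical expected national lead and
# polling error in, a distribution of outcome chances out.

import random

random.seed(42)

expected_lead = 1.0   # expected Tory lead over Labour, in points (assumption)
polling_error = 3.0   # std dev of national polling error (assumption)
sims = 100_000

majority = hung = other = 0
for _ in range(sims):
    lead = random.gauss(expected_lead, polling_error)
    if lead > 4.0:        # crude threshold for an outright majority (assumption)
        majority += 1
    elif lead > -4.0:
        hung += 1
    else:
        other += 1

print(f"Tory majority:   {100 * majority / sims:.0f}%")
print(f"Hung parliament: {100 * hung / sims:.0f}%")
print(f"Something else:  {100 * other / sims:.0f}%")
```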

THE PITFALLS IN BUILDING ELECTION MODELS – GIGO

A model’s usefulness is limited by the quality of its input. In computing terminology this is referred to as GIGO – ‘Garbage In, Garbage Out’.

A ‘true prediction’ is impossible. No matter how fancy the model is, no matter how big the computer is that the model runs on, and no matter what the claims of the people who built/run it – the output of a model is never more than a guess. Smart modelers embrace randomness, acknowledge the limitations of any model of a complex event, and talk in terms of projections of how likely things are to happen, rather than predictions of what they think will happen.

An election model that uses polling data is automatically limited by the fact that opinion polls are, as we have seen, a weak form of evidence.

Some election models for general elections like those in the USA and the UK can boast of great detail by aggregating projections for all of the individual constituencies or states that make up the national result. This sounds impressive and clever, but it was noticeable during the ’15 UK general election that some modelers allowed themselves to be seduced by their cleverness, and overlooked the inherent flaw in this plan.

The nature of general elections is that they are made up of many smaller elections, each conducted separately. But it is a mistake to model them as though they were statistically independent, because a single undetected national influencing factor can potentially impact them all. And because all the constituency votes happen at the same time, the model has no opportunity to adjust.

If general elections were arranged so that each constituency’s vote was held, counted and announced on consecutive days then the modeling of the upcoming constituency votes would become much more accurate, as modelers learned more about the swings in actual votes versus opinion polls.

In betting terms, individual constituencies are ‘related bets’, because a single factor can impact all of them. So it would be wrong/foolish for a bookmaker to offer the full multiplied odds on an accumulator bet on the same party in several constituencies.
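A quick simulation shows why. With one shared national swing added to every seat (all numbers hypothetical), the chance of the same party winning all three constituencies is noticeably higher than the naive product of the three individual chances, which is what full multiplied accumulator odds assume:

```python
# Related bets: a shared national swing makes constituency outcomes
# positively correlated, so the joint probability exceeds the naive product.

import random

random.seed(1)

sims = 100_000
base_leads = [2.0, 1.0, 0.5]      # expected local leads, in points (assumptions)
local_sd, national_sd = 4.0, 3.0  # local noise and shared national swing

seat_wins = [0, 0, 0]
all_three = 0
for _ in range(sims):
    swing = random.gauss(0, national_sd)       # one factor hits every seat
    results = [lead + swing + random.gauss(0, local_sd) > 0
               for lead in base_leads]
    for i, won in enumerate(results):
        seat_wins[i] += won
    all_three += all(results)

p = [w / sims for w in seat_wins]
print(f"Individual win probabilities: {p[0]:.2f}, {p[1]:.2f}, {p[2]:.2f}")
print(f"Naive product (treble odds):  {p[0] * p[1] * p[2]:.2f}")
print(f"Actual joint probability:     {all_three / sims:.2f}")
```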

THE NATE SILVER PARADOX

If you are into election betting and/or modeling you will have heard of Nate Silver. He’s an American former accountant and poker player who found fame predicting the results of American Presidential and Congressional elections. In 2012 he correctly predicted the result in all 50 states in the US Presidential election, having got 49 out of 50 in 2008.

He builds models for sports as well as elections, and his methods/outlook are pretty similar to ours. He thinks/models in probabilities, rather than predictions. He believes in Bayesian thinking, and in being a fox rather than a hedgehog.

Which means he fundamentally doesn’t believe in making predictions. But he got famous because of his predictions. He got lucky, and he’s ridden his luck to become a regular TV political commentator, a writer of award-winning books, and editor-in-chief of his own ESPN-owned website FiveThirtyEight.com (538 being the number of electoral college votes in a US Presidential election).

The website is terrific (if, like us, you like data-based journalism) and Nate talks and writes intelligently about sports, economics, poker, weather prediction etc. as well as politics. But he won his platform to speak by way of a lucky guess. He doesn’t possess any mystical predictive powers. His computer models are subject to all the limitations we have discussed above, something which was amply demonstrated when his 2015 UK General Election model bombed just as badly as all the opinion polls.

His guesses for the 2008 and 2012 US elections were probably more informed, cleverer, ‘better’ guesses than those made by anybody else. He was probably more likely to guess correctly than anyone else in the world. But they were still guesses.

So while Nate Silver deserves his high profile and success, because he is smart enough to know that predictions are futile – he became famous because of his predictions. Which were lucky guesses. This is the Nate Silver paradox.

Nate Silver

KEEP POLLING AND ANALYSIS SEPARATE, AND DON’T BE A CHICKEN
If pollsters limit themselves to saying ‘we conducted a survey, and these are the results that we got’ then there is no problem. They are simply reporting an objective piece of evidence. Where pollsters go wrong is when attempting to present insight from their data: ‘based on our poll’s results, there will be a hung parliament’, for example.

One polling company, Survation, did get a polling result just before the UK ’15 election that showed the Tories in a 6% lead over Labour. But they ‘chickened out’ and chose not to publish it, as it seemed such an outlier – so different to all the other polls. This is ‘herding’ – being scared of standing alone, or going in a different direction to the crowd. It is a very bad thing for a pollster to do, and anathema to professional gamblers/investors who need to have the courage of their convictions, and possess a strong streak of contrariness.

This highlights a fundamental issue with any form of analytics, which is that gathering data, and doing the analysis to divine insight from it, are two separate jobs: one objective, the other subjective. They are best kept apart, and done by different people. Pollsters should report their findings faithfully, without any comment on their ‘meaning’. It is then the job of analysts to get insight from the data.

A parallel exists in football, where OPTA do a brilliant job of collecting playing data from football matches. But if you had put money on all the betting tips they publish as part of the written preview content they provide to betting and media partners, you would now be considerably poorer. Analytics is about finding insight from data, and using it to make smarter decisions. It is not simply ‘knowing lots of stats’.

When you conflate the two jobs of gathering and analysing, the mistake that Survation made becomes much more likely. And it has more implications than simply egg on the face of the chickens who decline to publish. Averages of polls are often used to give a more accurate reflection of polling opinion. This is essentially a good thing, as it increases the overall sample size and reduces the impact of systematic errors made by individual companies. But if the outliers are arbitrarily removed from any sample of polls then the average of polls will be weakened, not strengthened.
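A toy poll-of-polls (hypothetical numbers, loosely echoing the 2015 case) shows why binning the outlier can make the average worse, not better:

```python
# Five hypothetical polls of the Tory lead, in points. The 'outlier' gets
# herded out, even though, as with Survation in 2015, it was the one
# closest to the truth.

polls = [0.5, 1.0, 0.0, 1.5, 6.0]   # the 6.0 is the outlier
true_lead = 6.6                      # the actual 2015 result

with_outlier = sum(polls) / len(polls)
without_outlier = sum(polls[:-1]) / len(polls[:-1])

print(f"Average of all polls:     {with_outlier:.1f} (error {true_lead - with_outlier:.1f})")
print(f"Average without outlier:  {without_outlier:.1f} (error {true_lead - without_outlier:.1f})")
```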

Damian Lyons Lowe, CEO of Survation

THE UNHELPFUL INFLUENCE OF THE MEDIA

The media loves opinion polls, because in the absence of an actual hard news story (i.e. who has actually won an election) it gives them something to write and talk about.

It is very hard not to be influenced by what the media says, because newspapers, television, radio, magazines and social media are how we find out most of what we know about what’s going on in the world. No matter how often we are told ‘don’t believe everything you read in the papers’, we can’t help ourselves – we do. Or at least we are influenced by them more than we should be.

Polls are influenced by the media, and the media is influenced by the polls.

This is an issue because the media is not concerned with seeking ‘the truth’ about an event like an election. They are concerned primarily with being entertaining. Their job is to write/say things that engage their audience. A secondary job can also be to promote an agenda – such as a left-leaning newspaper promoting the claims of a socialist party. Being accurate, fair, balanced, reasonable and analytical are lesser considerations. The media should not be trusted to interpret the data of opinion polls. They have no real incentive to do it well, and they probably don’t have the ability to extract the smart insight from them anyway.

Opinion polls aim to get a representative sample despite only asking the opinion of a tiny minority of the actual electorate, so there is always going to be a considerable margin of error. But that doesn’t stop media outlets pouncing on poll results and making a headline of insignificant, random variation such as a 2-3% shift to some party. This is represented as a ‘swing in the polls’, but in reality is nothing of the sort. The sample sizes are far too small, and the inherent problems with polling methods too large, to be sure it is signal and not noise. And anyway, these polls are generally simulating a nationwide vote that will never actually happen: national elections are ultimately decided by a relatively small number of ‘swing’ states/constituencies.
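For a sense of scale, here is the standard sampling-error arithmetic for a hypothetical poll of 1,000 people:

```python
# Sampling error alone, before any of polling's other problems. Standard
# formulas; the sample size and vote share are assumptions.

from math import sqrt

n = 1000   # typical poll sample size (assumption)
p = 0.35   # a party polling at 35% (assumption)

se_single = sqrt(p * (1 - p) / n)   # standard error of one poll's estimate
se_change = sqrt(2) * se_single     # standard error of the change between
                                    # two independent polls

print(f"95% margin of error on one poll:   ±{1.96 * se_single * 100:.1f} pts")
print(f"95% margin of error on the change: ±{1.96 * se_change * 100:.1f} pts")
# Roughly ±3.0 and ±4.2 points: a 2-3 point 'swing' between two polls is
# comfortably inside sampling noise, before even counting non-response bias.
```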

What has actually occurred in these cases is that, among a small sample of probable voters asked a hypothetical question they have no incentive to answer truthfully, the ratio of responses differs very slightly from the ratio in the small sample asked the week before.

It could be indicative of some genuine sea-change in voting intentions. Or it could be (and probably is) just random noise that the media has used to sell more of its ‘product’ to a consuming public who are entertained by the story. So the media are happy that they got an attention-grabbing story out of the poll, and the polling company is happy because it got paid.

So the lesson is that opinion polls can’t be wrong, but opinion pollsters, modelers and the media using polls can be. It is a mistake to believe that the conditions of a real election can ever be reproduced by a poll, so it is illogical to expect the outcome of a poll and election to be identical. Polling data is just a piece of evidence, and a weak piece at that. Just because it happens to be the ‘best’ piece of evidence before an election that is available doesn’t mean that it is ‘good’ evidence.

MODELING ELECTIONS AND FOOTBALL MATCHES

At OddsModel we do a lot of modeling of football matches. Compared to the task of modeling elections, we have the considerable advantage of being able to use loads of high quality, relevant, recent data – because football teams play lots of games. We can watch the games, and/or gather stats from them. The teams are usually trying their hardest, so it’s relatively easy to get a decent idea of their innate level of ability.

But elections are much less common than football matches. If there were a general election every month, election modelers would undoubtedly do a much better job of projecting election results. In place of actual elections, they are forced to use opinion polls and results from previous elections held years and even decades in the past. This isn’t ‘wrong’, it just isn’t ideal – so nobody should be surprised that the results of election models aren’t great.

To compare it with football modeling, using opinion polls to model an election result is like using the evidence of football club training sessions to model a football match. It isn’t terrible – you can get a reasonable idea of how good a player or team is from watching them train. But it will never be as good as the real thing.

Players have no real incentive to try their hardest in training, nor the team to play at its maximum – just like potential voters targeted by polling companies have no real incentive to respond, nor any reason to tell the truth if they do. Using results from previous elections years earlier is like using the league table from a few seasons ago to tell you how good a football team is. There’s a fair chance this will help you identify the really good teams from the mediocre ones, but it isn’t much more use than that.

Pollsters and political commentators spend a lot of time thinking about politics. But for many members of the general public, politics is a bit boring and they will only really start to consider how to vote (if they vote at all) on the eve of an election. So opinion polls that are conducted closer to the polling date are better than polls conducted a long way out from an election. This is something those in the ‘political bubble’, or modelers who crave data for their models can fail to recognize.

A long way out from polling day, candidates with a high media profile, and/or a name that comes easily to voters’ minds (such as a candidate who shares a name with someone they voted for before, or somebody they’ve heard mentioned a lot on TV), are often favoured in a way that won’t be reflected on the day that matters.

Some voters will vote based on a careful consideration of party manifestos, and/or an ideological affinity with a particular party. But plenty will decide for much more prosaic reasons.

It is certainly possible that a significant factor in Labour’s hammering in ’15 was down to a sizeable number of UK voters thinking on the day of the election; ‘when push comes to shove, I really don’t want as my Prime Minister a guy who can’t eat a bacon sandwich without looking like a pillock’. That is not something that voters were likely to admit to pollsters in the run up to the election though, and neither is it ever going to show up in any post election inquiry.

Ed Miliband enjoying a bacon butty

A FAT-TAIL DISTRIBUTION IS PROBABLY SMART
The potential influence of a single nationwide factor like that means that the shape of an election model’s distribution curve should probably be flat at the extremes (‘fat-tailed’). It is a classic, fundamental mistake to assume that constituency/state outcomes are independent, and to over-estimate the chance of reality equaling the average of your best guesses. Modelers should never under-estimate the chance of something happening that pushes the result towards one of the extremes.

To relate this to football, imagine that you were modeling Leicester City’s final points total at the start of this Premier League season. You would look at last season’s performance (finished 14th on 41pts) and would probably have estimated their chances this season at something similar off the top of your head. You are building a clever, detailed model though, so you have a projection for each of their 38 games using your pre-season assumption.

You would therefore have in your model a projection for an upcoming game with (say) Chelsea, and at the start of the season it would have shown them having only a small chance of winning. But the reality is that they actually have a much better chance of winning the game than your model projected at the start of the season.

Because what has happened is that Riyad Mahrez has emerged as a genuine superstar player, and elevated their performances and results way beyond anything that could sensibly have been predicted at the start of the season – while Chelsea have deteriorated markedly for some unfathomable reason(s).

So bookmakers, and football modelers like us have adjusted our assessment of Leicester and Chelsea accordingly and our projections for their game are now very different, and much more accurate than what we had several months ago.

Election modelers don’t get the benefit of this kind of learning. But there is no excuse not to account for the possibility of a Mahrez-style spanner being thrown in the works of an election outcome. As it happened, in the UK ’15 general election something evidently happened to make the actual result differ significantly from the opinion polls. It could have been the ‘Shy Tory’ factor, the ‘bacon butty factor’ or something else.

So an election model, just like a season-long football model needs to take into account the chance of something happening that changes the shape of all of the games/states/constituencies that are being modeled, and the result will be a fat-tailed distribution of possible outcomes.
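Here is a minimal sketch of that effect: two toy models of 100 marginal seats, identical except that one adds a single common shock to every seat. All numbers are hypothetical; the point is how much the shock widens the distribution of seat totals.

```python
# Independent seats vs seats sharing one common shock. The shock barely
# moves the average but fattens the tails of the seat-total distribution.

import random
import statistics

random.seed(7)
SIMS, SEATS = 20_000, 100

def simulate(common_sd: float) -> list:
    totals = []
    for _ in range(SIMS):
        shock = random.gauss(0, common_sd)     # one nationwide factor
        totals.append(sum(random.gauss(0, 1.0) + shock > 0
                          for _ in range(SEATS)))
    return totals

independent = simulate(0.0)
correlated = simulate(0.75)

print(f"Independent model:  mean {statistics.mean(independent):.0f}, "
      f"sd {statistics.stdev(independent):.1f} seats")
print(f"Common-shock model: mean {statistics.mean(correlated):.0f}, "
      f"sd {statistics.stdev(correlated):.1f} seats")
```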

Over-confident distribution

The distribution shape of an over-confident model. It overestimates the chance of the result being at the mean/peak, and underestimates the chance of it being at an extreme.

Flat tail distribution

When a single factor can influence a host of related outcomes the distribution should look more like this. The line could be Tory seats, Leicester or Chelsea league points.

UNDERSTAND THE RULES OF THE GAME
One of the great challenges in assessing an upcoming election, especially if you plan to build a model around it, is to properly understand the rules of the game.

How the votes are distributed is as important as, or more important than, the actual number of votes cast. For example, four times in US election history a candidate has gained the White House despite a rival getting more votes overall.

At OddsModel if we are building a model to price up a tournament like the 2016 Euros, the task is much more involved than working out how good each of the teams is likely to be. The rules of the game in this case are that teams are organized into groups, and then there’s a pre-determined pathway to the final for each qualifier, so luck of the draw plays a part. In the knockout stages weaker teams need only hang on to a draw for 120 minutes to get to the ‘lottery’ of a penalty shootout (it’s not really a lottery, the better team still wins the majority of the time). The whole event is over in a month, so the sample size that is used to identify the ‘champion’ team is nothing like as scientific as a nine month league season. Being able to identify who the best teams are ahead of the tournament is only one element in modeling the likelihood of each country winning the tournament.
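The shootout point is easy to quantify. A toy calculation, with every probability an assumption:

```python
# How knockout rules flatter the weaker side: it only needs a draw over
# 120 minutes to reach a shootout, where the gap narrows sharply.

p_win_120 = 0.20    # weaker team wins inside 120 minutes (assumption)
p_draw_120 = 0.30   # still level after 120 minutes (assumption)
p_shootout = 0.40   # weaker team's shootout chance (assumption; the better
                    # team still wins shootouts more often than not)

p_through = p_win_120 + p_draw_120 * p_shootout
print(f"Weaker team advances: {p_through:.0%}")   # 32%, up from 20%
```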

A good example of election analysts failing to account for the rules of the game was the recent UK Labour party leadership election. The largely unconsidered Jeremy Corbyn only barely scraped into the field of candidates, and would have had no chance of winning had the leadership election been based on votes cast by Labour MPs. But a change in the leadership election rules that allowed all party members a vote, including those who had only very recently joined and paid their £3 joining fee, saw the outsider sweep into the job, having at one point been 100/1 with UK bookmakers. The bookmakers hadn’t properly researched the rules of the game, and/or understood their significance.

SUPER-COMPUTERS CAN BE DUMB TOO
As we have discussed, the media’s job in election reporting is to be interesting, not to be accurate. And the newspapers in particular like nothing more than a ‘supercomputer predicts election result!’ story.

It is a racing certainty that at every election there will be at least one story that gains traction where some impressively intelligent looking professor or rocket scientist is claimed to have programmed a supercomputer (always a ‘super’ computer, never just a computer) that can predict the result of the election. Often the story is padded with some nonsense about how the computer has the bookmakers running scared, as the geniuses working with the supercomputer are expecting to clean up by betting on the result.

Remarkably though, the bookmakers remain healthily profitable, and after the election there is a conspicuous lack of follow-up stories on how the computer geniuses have spent their winnings. That’s because there aren’t any winnings. There is no supercomputer that can predict any election – no matter how super-duper the computer happens to be. A computer is just a machine for making calculations.

It is computer programmes, and the models that run in them, that can be smart. And these programmes are limited by the factors we’ve discussed above. But this is an inconvenient truth that could get in the way of a good story, so it gets ignored.

When Deep Blue beat world chess champion Garry Kasparov, it was a PR triumph for IBM, who developed the chess ‘supercomputer’. But what actually beat Kasparov was not really the computer itself, but the programme that ran on it. And that programme was designed, developed and refined with the help of chess grandmasters. A computer programme or model is limited by the quality of the input going into it. The ‘horse-power’ of the computer the programme/model runs on is not all that important.

Deep Blue defeats Garry Kasparov – a triumph of computer programming.

USING ELECTION BETTING MARKETS – THE WISDOM OF CROWDS

In every modern election cycle, bookmakers’ betting odds on the outcome are cited as evidence by pundits in the media. In principle, using betting markets is a fine way to get accurate projections of what will happen in future complex events. There are some very sound reasons why betting markets can be better than opinion polls.

Firstly, the contributors to markets have some skin in the game through the investment of their own money. They are incentivized to think about the outcome, and to act with care and attention, unlike poll responders who are free to say whatever they like with no consequences.

Betting markets benefit from the ‘wisdom of crowds’, where the aggregation of individual opinions comes together to make a smarter opinion than that of any individual. In recent times this has been harnessed to further science, where betting markets have been used to gauge the worth of academic scientific studies.

The act of betting on an election is done after the analysis and interpretation of the raw data, including polling numbers, rather than simply repeating naked polling statistics. The definition of analytics is to gain insight from data, in order to make more efficient decisions. So all the players in an election betting market are analysts. They will differ wildly in the amount and quality of analysis they will have undertaken, from detailed examination of polling data and previous election results on one hand, to gut instinct based on conversations in the pub on the other. But not every member of a crowd needs to be wise in order for the crowd to have wisdom.
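Here is a toy demonstration (hypothetical numbers) of that last point: a crowd of individually noisy estimates averaging out to something better than most of its members manage alone.

```python
# Wisdom of crowds: the average of many noisy estimates lands much closer
# to the 'true' probability than the typical individual estimate does.

import random
import statistics

random.seed(3)

true_probability = 0.65   # the 'true' chance of the outcome (assumption)
crowd = [min(max(random.gauss(true_probability, 0.15), 0.01), 0.99)
         for _ in range(500)]   # 500 bettors, each individually noisy

crowd_estimate = statistics.mean(crowd)
errors = [abs(e - true_probability) for e in crowd]

print(f"Crowd estimate: {crowd_estimate:.3f} "
      f"(error {abs(crowd_estimate - true_probability):.3f})")
print(f"Median individual error: {statistics.median(errors):.3f}")
```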

In some cases betting markets become very wise indeed, and following them will lead to the most accurate projection of likely outcomes it is possible to get. Examples would be the Betfair market on a big horse race at the ‘off’, or the Asian market on big football games at kick-off.

The key ingredients in making these markets so smart are a) their liquidity – i.e. there is so much activity going on in the market that Darwinian forces are applied that force it towards maximum efficiency, and b) the quality of the analysis that is undertaken by the dominant shapers of the market.

Smart professional gamblers and syndicates use analytical models and ratings to seek inefficiencies in these sports markets, based on their own very accurate projections of ‘true’ prices. This knocks the prevailing market price for an outcome into very efficient shape.

These forces are not at play in election markets, however. Election markets are not liquid. Compared to major sporting events, very few people actually bet on them, and those who do don’t bet a lot of money. For bookmakers, election markets are mostly about PR: an opportunity to seek free advertising by getting their names in the papers and on TV in the ‘News’ sections, where they normally cannot reach.

The way horse racing and football prices evolve is that the bookmakers make their initial estimations of likelihoods by publishing their prices, and these progressively get shaped and adjusted by the weight of money as customers (including the pros and the syndicates) place bets. The initial bookmaker prices will be pretty good to start with as they have decent expertise in setting these prices, and then the ‘wisdom of the crowd’ will be considerable.

But in the case of election markets the bookmakers have little or no expertise, so the initial prices are often little better than wild guesses, put together by the PR guys rather than professional odds-compilers. It can be pretty amusing to hear earnest sounding political pundits saying things like ‘Ladbrokes think Trump has a 20% chance of getting the White House, and bookmakers are not often wrong’. That 4/1 price might well have been made up by some kid on work experience in the bookies’ marketing team, who got asked to price up the election on the basis that he usually watches Question Time.
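For reference, converting fractional odds to an implied probability is simple arithmetic: a 4/1 price implies 1/(4+1) = 20%. Here is a small sketch (our own helper, not any bookmaker’s method), including the built-in margin (‘overround’) on a hypothetical two-way market:

```python
# Fractional odds to implied probabilities, plus the bookmaker's overround.

from fractions import Fraction

def implied_probability(odds: str) -> float:
    """Fractional odds '4/1' -> implied probability 1 / (4/1 + 1) = 0.20."""
    return float(1 / (Fraction(odds) + 1))

# Hypothetical two-way market. The implied probabilities sum to more than
# 100%; the excess is the bookmaker's margin.
book = {"Candidate A": "4/5", "Candidate B": "11/10"}
probs = {name: implied_probability(o) for name, o in book.items()}

for name, prob in probs.items():
    print(f"{name}: {prob:.1%}")
print(f"Overround: {sum(probs.values()) - 1:.1%}")
```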

The point of the markets is not to make money, but to generate publicity. So the accuracy of the prices is much less important than the fact that the firm has some prices that their PR rep can send to a paper, or talk about on TV. And so much the better if the prices can be used to form an attention grabbing story (e.g. ‘bookies respond to 2% Labour poll swing by slashing the price of Ed Miliband becoming PM‘).

The number of bettors in an election market who are capable of doing quality analysis to project accurate true odds of the outcome in all the constituencies/states, and therefore the overall election outcome, is tiny compared to the number of experts in horse racing and football. So the crowd is much less wise, and the market therefore much less to be trusted.

It is possible to use betting markets to get an insight into elections, but you really have to know what you are doing, and know where to look. We’ve yet to hear a pundit on the telly or in the papers citing election betting markets who does.

WHY DONALD TRUMP WILL (ALMOST CERTAINLY) NOT BE THE NEXT US PRESIDENT
In the US at the moment the two main parties are in primary season, and (if you hadn’t noticed) the media coverage is being dominated by Donald Trump. He is as short as 4/1 (i.e. a 20% chance) with Coral and Ladbrokes to be the next US President. Even without building any sort of model, we can tell you for absolute certain that the real chance of Trump winning is much, much less than 20%. He should probably be nearer 100/1. Donald Trump is a good case-study of a lot that is wrong with election analysis.

Prices on the 2016 US Presidential election with UK bookmakers.

Trump is currently going well in opinion polls. But the actual US general election is just under a year away. Polls this far out are significantly worse indicators of election outcomes than those closer to the date. And as we saw in the UK ’15 election, even polls conducted a matter of hours before the polling stations open can be way out. Don’t be fooled into thinking that long-range opinion polls are significant just because the media tells you that they are.

Trump’s current situation is a media creation. He has a bit of charisma (this is undeniably true, no matter what you may think of his policies), and is a known name and face from his appearances on a popular TV show (the US Apprentice), so has instant name recognition with everyone who is asked an opinion at this stage. But many voters will not really have made up their minds about a preferred Republican nominee yet, never mind their choice of a Presidential candidate. Voters are much less vexed about the general election this far away from the day than the journalists whose job it is to generate interest in political stories.

The rules of the game Trump is playing really matter here too. For him to become President he (realistically) needs to win the Republican nomination, and this is a process in which behind-the-scenes political maneuvering is common-place and influential. So if the Republican establishment decides it doesn’t want Trump as its candidate then he’s very unlikely to become their candidate, even if his popular support manages to hold up (which it probably won’t).

But it’s very unlikely Trump can win the Republican nomination anyway. The rules governing the electoral process of finding a Republican nominee are complex, but have a built-in disadvantage for the most right-wing candidates, as the votes of Republican voters in Democratic districts effectively count for more than those in districts which will vote Republican in November.

And even if Trump does manage to secure the nomination, and even if he doesn’t get trounced by Hillary Clinton in the Presidential debates and campaigning, there is still the ‘when push comes to shove’ effect.

On the morning of polling on November 8th, if Trump really is the Republican nominee then millions of Americans are going to ask themselves ‘do I REALLY want to see this man as the Leader of the Free World?’.

The next leader of the Free World?