With geographic databases helping police focus on high-crime areas, predict how riots will spread, find serial killers, and more, it's understandable that the science-savvy often ask: why can't science predict where armed conflicts will erupt next?

Facing up to complexity

Anyone who has ever used Google is occasionally (or constantly) amazed at how the search engine seems to read our minds, whether by taking you straight to a result with "I'm feeling lucky" or by "predicting" what you are going to type next in the search field.

This is one aspect of applying artificial intelligence to data mining. Trading on that reputation, the betagoogle website presents itself as Google's attempt at fortune-telling: when you begin to type, you get autofill suggestions linked to refugee concerns.

But, of course, betagoogle isn't real. It is a fake Google-lookalike page run by DigiDay, not affiliated with Google, and it isn't even trying to predict your future, unless you count the way it tries to lead you to donate to a refugee charity.

It is, however, an excellent example of what people have been led to believe computers, science, and especially companies like Google can do.

Predicting human behaviour is incredibly difficult. Google Trends tracks what people are already doing, and sometimes what they are thinking, but whether those trends actually predict what they will do tomorrow, or even in five minutes, is unclear.
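To make that distinction concrete, here is a minimal sketch of pulling that kind of "what people are already doing" data, using the third-party pytrends library (an unofficial wrapper around Google Trends). The search term and timeframe are illustrative choices, not taken from any real study.

```python
# Sketch: fetching Google Trends interest over time with the
# unofficial pytrends library (pip install pytrends).
# The keyword below is an illustrative placeholder.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=360)
pytrends.build_payload(["refugee crisis"], timeframe="today 5-y")

# interest_over_time() returns a pandas DataFrame of weekly
# relative search interest (0-100). Note this describes what
# people HAVE searched for, not what they will do next.
interest = pytrends.interest_over_time()
print(interest.tail())
```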

Google does have a Prediction API, which it is rolling out to give companies and individuals access to the vast amount of data in Google's cloud.

Risky maps

In the past decade, a number of companies and researchers have tried to develop methods that use cloud data to predict where armed conflict will break out, either between countries or internally (civil unrest).

Early efforts used linear regression, which (approximately) attempts to predict new conflicts by extrapolating from worsening tensions, e.g. extending a trend or a history of past conflicts.
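As an illustration of that first-generation approach, here is a minimal sketch of fitting a straight line to a history of monthly conflict-event counts and extending it forward. The data is synthetic, invented purely for the example.

```python
# Sketch: naive trend extrapolation with ordinary least squares.
# The monthly event counts are synthetic, invented for illustration.
import numpy as np

months = np.arange(24)  # two years of observations
events = 5 + 0.8 * months + np.random.default_rng(0).normal(0, 2, 24)

# Fit a first-degree polynomial (a straight line) to the history.
slope, intercept = np.polyfit(months, events, 1)

# "Predict" the next six months by extending the line -- exactly the
# kind of extrapolation that breaks down when conflict dynamics are
# nonlinear or driven by rare shocks.
future = np.arange(24, 30)
forecast = intercept + slope * future
print(forecast.round(1))
```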

It was quickly realised that simply extending trend lines wasn't going to work, and efforts turned to machine learning systems, such as neural networks, that might cope with a rapidly changing mass of complex data.
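For a sense of what that shift looks like in code, here is a minimal sketch of a small neural network classifier for conflict onset, using scikit-learn. The features and labels are synthetic and hypothetical; real systems draw on far richer data.

```python
# Sketch: a small neural network classifier for conflict onset,
# trained on synthetic data. The features are hypothetical stand-ins
# (tension index, GDP change, past-conflict flag), randomly generated.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
X = rng.normal(size=(n, 3))
# Synthetic labels: onset is rare and only loosely tied to the
# features, mimicking the low-probability events that make
# conflict forecasting hard.
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 2, n) > 2.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=500,
                      random_state=0).fit(X_tr, y_tr)
print("accuracy:", model.score(X_te, y_te))  # inflated by class imbalance
```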

The February 3, 2017 issue of Science was largely devoted to this question of what scientists can and can't predict, outside the usual confines of predicting the outcomes of experiments (the way a hypothesis is tested).

In their essay in the same issue of Science, Lars-Erik Cederman and Nils B. Weidmann say that one thing people claiming to produce useful threat maps from automated analysis of cloud data need to do is compare the forecasts of their highly complex models with the results from simple baseline models - essentially, analysis by experts.
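That comparison is straightforward to set up. Below is a minimal sketch, on entirely synthetic data, of scoring a "complex" model against two simple baselines: "conflict recurs where it happened last year" and randomly selected points (the alternative discussed below). A complex model that cannot clearly beat such baselines adds no predictive value.

```python
# Sketch: comparing a complex model's forecasts against simple
# baselines. All data and scores are invented for illustration.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 1000
last_year = rng.random(n) < 0.1  # where conflict occurred before
# Synthetic ground truth: conflict strongly persists year to year.
this_year = np.where(last_year, rng.random(n) < 0.6,
                                rng.random(n) < 0.03)

complex_scores = 0.5 * this_year + rng.random(n) * 0.6  # a noisy "model"
baseline_scores = last_year.astype(float)               # history repeats
random_scores = rng.random(n)                           # random points

for name, scores in [("complex model", complex_scores),
                     ("last-year baseline", baseline_scores),
                     ("random baseline", random_scores)]:
    print(f"{name}: AUC = {roc_auc_score(this_year, scores):.2f}")
```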

They take a complex agent-based model of ethnic violence in the former Yugoslavia, from Lim et al., "Global Pattern Formation and Ethnic/Cultural Violence" (Science, Sept. 14, 2007), as an example of the problem with complex analyses that are claimed to produce good predictive models.

Cederman and Weidmann point out that a closer look shows the complex model was little better than an alternative model using randomly selected points.

Generated risk maps are risky.

According to them, using brute-force methods on big data with no political science analysis doesn't seem to work: "Automated data extraction algorithms, such as Web scraping and signal detection based on social media, may be able to pick up heightened political tension, but this does not mean that these algorithms are able to forecast low-probability conflict events with high temporal and spatial accuracy."
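To illustrate the distinction in that quote, here is a minimal sketch of the kind of signal detection that can pick up "heightened tension": flagging spikes in keyword mentions. The counts are synthetic, and note that the flag only fires after the spike has happened; it detects rather than forecasts.

```python
# Sketch: naive "heightened tension" signal detection on synthetic
# social-media keyword counts -- the spike-spotting the quote says
# can work, as opposed to forecasting rare conflict events.
import numpy as np

rng = np.random.default_rng(3)
daily_mentions = rng.poisson(20, 60)  # 60 days of baseline chatter
daily_mentions[45:50] += 80           # an injected "tension" spike

mean, std = daily_mentions[:30].mean(), daily_mentions[:30].std()
z_scores = (daily_mentions - mean) / std

# Flag days more than 3 standard deviations above baseline:
# detection after the fact, not prediction.
spike_days = np.where(z_scores > 3)[0]
print("spike detected on days:", spike_days)
```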

Big data sets are helpful in making predictions, within limits, and it is hardly surprising that human "superforecasters" working in teams are still not able to beat specialised experts when it comes to predicting geopolitical events in general.