Blind faith in the ability of new technologies to solve deep-rooted social problems is naïve and dangerous.

The artificial intelligence (AI) revolution seems inevitable. At least, it does if we are to believe corporate projections and marketing campaigns, which promise a prime growth market for decades to come and solutions to a number of tough societal problems. The topic has caught the eye and imagination of politicians and administrators alike in the Netherlands. Unfortunately, however, the common understanding of the revolutionary potential of AI systems is often overly optimistic.

These systems are far more fragile and error-prone than marketing materials suggest. For example, IBM’s self-learning algorithms have suggested erroneous medical interventions that could have had fatal consequences. Elon Musk launched Tesla’s self-driving technology far too soon, against the advice of his own engineers. After several fatal accidents and repeated delays in projected development, even the automotive industry itself is starting to lose confidence in AI as a revolutionizing technology.

Meanwhile, the AI hype has spurred a plethora of experiments within Dutch government services and systems. Current discussions about government use of AI, however, overlook a number of issues that must be tended to: ensuring that these systems are safe and equitable and do not create additional problems; that they respect (and utilize!) the historical and contextual knowledge of people and organizations; and that they strengthen, rather than erode, the rule of law and democratic values.

Is the Dutch government paying sufficient attention to the limitations and dangers of these technologies? And is there enough willingness to carefully weigh their promises against their perils, measured by their impact on important public values?

Dutch Data-Driven Policies

For decades, Dutch government agencies have been using algorithms and software models in decision-making and other crucial processes. Currently, multiple experiments with self-learning algorithms are underway. A visible example takes place in Rotterdam, where the city is developing “data-driven youth policy.” Instead of relying on the experience and judgement of youth workers, algorithms determine what vulnerable children need in terms of care. Decisions are made based on sensitive and unstructured data from a variety of systems and sources. The experimental nature of the policy is invoked to minimize legitimate privacy concerns; these are rendered “not urgent” to pave the way for the pursuit of innovation and technological progress. The system is meant to set an example for other cities and other policy areas.

Likewise, the Dutch government developed a “System Risk Indication” (SyRI) program to detect, among other things, potential fraud in welfare and tax claims. The system generates insights by combining hitherto unconnected databases from both private and public sources, raising numerous privacy concerns in the process. The details of the program are largely kept from public sight or scrutiny, and requests for information by a coalition of concerned citizens and organizations have not been met. Concerns over its ramifications for privacy and the rule of law have led the coalition to file court proceedings against the Dutch state, claiming that the program should be declared unlawful.

The realization that these data-driven policies can have a considerable negative impact on Dutch democracy is only slowly starting to take root among politicians, whereas in many other countries this has been the subject of extensive debate. The Netherlands’ leading position in digital technology and its welcoming attitude towards innovation have thus far favoured an approach to artificial intelligence best described as “let’s make way for it,” driven by a perceived need to “accelerate to prevent falling behind” on global technology trends.

Let’s hit the brakes

Rather than accelerating, the above examples should motivate Dutch decision makers to hit the brakes and proceed with more consideration and care. Self-learning algorithms make mistakes, are sensitive to manipulation, and often cope poorly with outliers. Moreover, algorithmic systems make decisions based on historically biased and flawed data, and embed the assumptions of their developers.

These dynamics, in turn, can lead to unintended yet discriminatory outcomes, as societal biases are encoded into AI systems. For example, critics of the SyRI system fear that households in poor neighbourhoods will be unfairly targeted and thereby labeled “suspect by default.” At the same time, by focusing on the poor, the detection of fraud among the wealthier becomes less and less likely. This means that systems like SyRI can reinforce discrimination and worsen existing inequity, because they will increasingly associate “higher risk” with the type of households they have “seen” most often: the poor and the underserved.

There is hope for change: both the current coalition government and opposition parties are aware of the need for new legislation to provide the necessary guard-rails for the integration of AI into government services. The Dutch parliament recently assembled a Committee on the Digital Future, tasked with improving the state of knowledge about AI among Dutch politicians. Enabling them to critically discern what AI systems are and are not capable of will hopefully allow them to resist the miraculous promises of AI companies.

Similarly, new legislation should make it harder for those marketing and selling AI systems to the government to conceal systematic shortcomings under the guise of trade secrets or proprietary information. It cannot be the case that even administrators themselves are unable to see inside the systems, or to turn the knobs of algorithms that decide over citizens’ livelihoods or control critical infrastructure.

The AI revolution is definitely not inevitable, but it will become so if we keep ignoring the messy reality of these systems and the challenge of building them in ways that are safe and respectful of democratic principles. AI is not magic, nor is it a panacea for complex societal problems.