Until recently, it could not be said that AI had a hand in forcing a government to resign. But that is exactly what happened in the Netherlands in January 2021, when the incumbent cabinet resigned over the so-called kinderopvangtoeslagaffaire: the childcare benefits scandal.

When families in the Netherlands applied for a state childcare allowance, they had to file a claim with the Dutch tax authority. Those claims then passed through the gauntlet of a self-learning algorithm, first deployed in 2013. In the tax authority’s workflow, the algorithm screened applications for signs of fraud, and officials scrutinized those it flagged as high-risk.

In practice, however, the algorithm developed a pattern of falsely labeling claims as fraudulent, and harried officials rubber-stamped those fraud labels. So, for years, the tax authority baselessly ordered thousands of families to pay back their claims, pushing many into onerous debt and destroying lives in the process.

“When there is a disproportionate impact, there should be a public discussion about whether this is fair. We need to define what ‘fairness’ is,” says Yong Suk Lee, a professor of technology, economics, and international relations at the University of Notre Dame, in the US. But that process did not take place.

Postmortems of the case revealed evidence of bias. Many of the victims had lower incomes, and a disproportionate number belonged to ethnic minorities or were immigrants. The model treated not having Dutch citizenship as a risk factor.

“The performance of a model or algorithm should be transparent, or published for different groups,” Lee says. That includes things like the model’s accuracy rate, he adds.

The tax authority’s algorithm evaded such scrutiny: it was an opaque black box, with no insight into its inner workings. For those affected, it could be nearly impossible to tell exactly why they had been flagged. And they lacked any form of due process or recourse to fall back on.

“The government had more faith in its flawed algorithm than in its own citizens, and the civil servants working on the files simply absolved themselves of moral and legal responsibility by pointing to the algorithm,” says Nathalie Smuha, a legal researcher specializing in technology at KU Leuven, in Belgium.

Now that the dust has settled, it is clear that this case will do little to halt the spread of AI in governments – 60 countries already have national AI initiatives. Private-sector companies certainly see opportunity in helping the public sector. For all of them, the story of the Dutch algorithm, deployed in an EU country with strong regulations, the rule of law, and relatively accountable institutions, serves as a warning.

“If, even under these favorable circumstances, such a dangerously flawed system could be deployed over such a long period of time, one has to worry about how things stand in other, less regulated jurisdictions,” says Lewin Schmitt, a policy researcher at the Institut Barcelona d’Estudis Internacionals, in Spain.

So what can stop future wayward AI implementations from doing harm?

In the Netherlands, the same four parties that made up the government before the resignation are now back in power. Their solution is to put all public-facing AI, both in government and in the private sector, under the supervision of the country’s regulator, which a government minister says will ensure that humans stay in the loop.

On a larger scale, some policy watchers are pinning their hopes on the European Union’s AI Act, which would put public-sector AI under greater scrutiny. In its current form, the AI Act would outright ban some applications, such as government-run social credit systems and the use of facial recognition by law enforcement.

Something like the tax authority’s algorithm would still be permitted, but because of its role in government functions, the AI Act would classify it as a high-risk system. That means a broad set of requirements would apply, including a risk-management system, human oversight, and a mandate to remove bias from the data involved.


“If the AI Act had been passed five years ago, I think we would have spotted [the tax algorithm] back then,” says Nicolas Moës, an AI policy researcher who works in Brussels for the think tank The Future Society.

Moës believes the AI Act offers more concrete enforcement than its foreign counterparts, such as the regulation recently enacted in China, which focuses less on public-sector use and more on reining in private companies’ use of customer data, and the assorted proposals US regulators currently have in the legislative pipeline.

“The EU AI Act really regulates the whole space, while others still address only one aspect of the problem, and only very gently,” says Moës.

Lobbyists and legislators are still busy hammering the AI Act into its final form, but not everyone believes that the law, even if it emerges tougher, will go far enough.

“We see that even the [General Data Protection Regulation], which entered into force in 2018, is still not properly implemented,” says Smuha. “The law can only get you so far. To make artificial intelligence work in the public sector, we also need education.”

This, she says, means properly informing government officials about the opportunities and limitations of AI, and about its impact on society. In particular, she believes civil servants must be able to question an algorithm’s output, no matter what time pressures or organizational hurdles they may face.

“It’s not just about making sure the AI system is ethical, legal, and trustworthy, but also that the public service in which the AI system operates is organized in a way that allows for critical reflection,” she says.


