Governments Are Using AI Now. How Will the Public React?

Reading Time: 2 minutes
Ukraine, the Netherlands, and the United Kingdom have already begun using AI in governance (Image credit: Intel).

The Stanford AI Index Report, produced by Stanford University's Institute for Human-Centered Artificial Intelligence (HAI), offers a comprehensive overview of AI's growth and application worldwide. A standout finding, presented as the report's ninth takeaway, reveals:

The legislative landscape is rapidly evolving with the integration of AI. Across 127 countries, the number of laws referencing "artificial intelligence" surged from a single instance in 2016 to 37 in 2022, and mentions of AI in legislative proceedings across 81 countries have increased more than sixfold over the same period.

Emerging at the nexus of computer science, statistics, and social sciences, Computational Social Science (CSS) harnesses computational methods to unlock new understandings of human society. It was recently acclaimed as a “social research revolution,” giving rise to the notion of “government by algorithm,” wherein AI is instrumental in decision-making and the automation of public services.

A case in point is Ukraine's Ministry of Justice, which in 2017 leveraged blockchain technology in experimental government auctions aimed at enhancing transparency and mitigating corruption. Yet details about how the technology was actually applied were conspicuously absent.

In the Netherlands, flawed algorithmic forecasts adversely affected thousands of families, sparking concern about the growing use of algorithms in governance. Victims have voiced frustration over the government's belated and inadequate response to erroneous childcare-benefit fraud accusations generated by a self-learning algorithm.

On October 24th, The Guardian reported that the UK government is using AI and automated algorithms in public-sector and civil processes. While the UK Government has introduced the Algorithmic Transparency Recording Standard, there is no established protocol for auditing the models' precision or practical utility. There are also no guidelines to support those wrongly affected by these algorithms, and the models apparently undergo little rigorous testing before approval.
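The Guardian piece does not spell out what auditing a model's precision would involve, so a minimal sketch may help make the gap concrete. The Python snippet below is a hypothetical illustration, not any government's actual procedure, and every name in it (Case, risk_score, audit) is invented. It measures two things a pre-approval protocol might reasonably mandate: precision (of the people the model flags, how many are actually fraudulent) and false positive rate (how many legitimate claimants are wrongly accused).

# Hypothetical audit sketch; all names and data are invented for illustration.
# It checks a fraud-flagging model against a labelled holdout set before approval.
from dataclasses import dataclass

@dataclass
class Case:
    features: dict   # inputs the model sees (income, claims history, ...)
    is_fraud: bool   # ground-truth label from a manual investigation

def risk_score(case: Case) -> float:
    """Stand-in for the deployed model; returns a fraud-risk score in [0, 1]."""
    return 0.5  # placeholder score

def audit(cases: list[Case], threshold: float = 0.8) -> dict:
    """Report precision and false positive rate at a given flagging threshold."""
    flagged = [c for c in cases if risk_score(c) >= threshold]
    true_pos = sum(c.is_fraud for c in flagged)
    false_pos = len(flagged) - true_pos
    legitimate = [c for c in cases if not c.is_fraud]
    return {
        # Of those accused, how many were actually committing fraud?
        "precision": true_pos / len(flagged) if flagged else None,
        # Of the legitimate claimants, how many were wrongly accused?
        "false_positive_rate": false_pos / len(legitimate) if legitimate else None,
    }

if __name__ == "__main__":
    holdout = [Case({"income": 30_000}, is_fraud=False),
               Case({"income": 12_000}, is_fraud=True)]
    print(audit(holdout))

Even a check this simple would surface the failure mode at the heart of the Dutch benefits scandal, a model that wrongly flags large numbers of legitimate claimants, before anyone is harmed.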

Harvard University political scientist Gary King advocates a novel collaborative model between academia and industry, dubbed "Social Science One" (SS1), to overcome data-sharing obstacles. Unveiled at the Politics and Computational Social Science (PaCSS) pre-conference, SS1 envisions a panel of eminent scholars with unfettered access to a corporation's data (e.g., Facebook's) for a specific study. Under this framework, academic findings remain confidential until the project is complete, after which scholars have full freedom to publish their results. The arrangement seeks to reconcile the dual imperatives of protecting sensitive data and upholding academics' liberty to disseminate research findings. King contends that SS1 could herald a new era in social science research, empowering researchers to tackle some of society's most pressing challenges.

The extensive body of academic research makes clear that governments cannot guarantee good outcomes without rigorous standards of this kind. It is imperative for the UK government, and indeed every government employing AI, to implement measures that ensure model accuracy and prompt redress for those harmed by algorithmic errors. Failure to exercise decisive stewardship here may not only infringe on human rights but also erode public trust in government institutions.

Written by Emily Ulloa
