
The Moral Maze of AI

Moral hazards and unconscious bias aren’t your typical concerns for a technology project. But for Artificial Intelligence (AI), and especially for the public sector, they must be crucial considerations.

AI undoubtedly offers public agencies a powerful tool to enhance their citizen-centric services. However, as the Dutch government witnessed earlier this year with the System Risk Indication (SyRI) system it was developing, societal interests often trump technology interests. In this case, the Hague District Court ordered an immediate halt to the project, citing human rights breaches. The court sided with complainants who argued that the system’s risk algorithms were unfairly biased against low-income and minority residents, with human rights campaigners describing the technology as creating a “welfare surveillance state.” Not only to safeguard the sizeable investment in these technologies, but also to ensure they are fully fit for purpose, it behooves public sector policy makers, commissioners and technologists to fully comprehend how this technology will affect citizens’ lives and to implement it accordingly.

The United Nations Educational, Scientific and Cultural Organisation (Unesco) has recently launched a global online consultation on the ethical use of AI, which will be used to help draft a framework to govern how the technology is applied across the globe. Unesco is convinced that there is an urgent need for a global approach, with its draft recommendation taking into account the wide-ranging impacts of AI across the environment, labour markets and culture. It outlines 11 principles, including fairness, responsibility, human oversight and privacy, all of which are underpinned by values such as human rights, human dignity, living in harmony and trustworthiness.

The benefits of AI are well known, but implementation can be a challenge, so HM Government has tried to make things easier with 20 pages of easily digestible procurement guidelines. Buried in the details are some very salient points highlighting how procuring AI isn’t the same kettle of fish as your typical technology project.

It’s Not Science Fiction Anymore

Isaac Asimov’s Three Laws of Robotics form the bedrock of morality across science fiction and in academic philosophy relating to AI. The first of these laws is as follows: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” The problem for AI implementations today is that the technology does not have the sentience to consider such morals; it is bound to the workflows, rules and considerations set by the human beings buying, developing, integrating and supporting it.

This places the onus on policy makers, commissioners and technologists, particularly those in the public sector, to ensure that AI and the workflows around it not only create efficiencies at scale but also handle the nuances and sensitivities of citizens’ lives. The implications of decisions made by rules engines can be keenly felt: benefits could be withheld, medicines not prescribed, or innocent citizens unfairly taken into custody, all due to an over-reliance on machine-driven decision making that is cold to data ethics.

Implementing AI is far more than buying the technology, although that is true of all IT projects. Careful consideration of data bias is essential, both to limit false positives, which risk reducing any efficiencies to nil, and to prevent the system from unfairly prejudicing citizens through algorithmic ignorance. The limitations of the underlying data and of the workflows powered by AI need to be understood from project conception, as the simple check sketched below illustrates.
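To make the point about data bias a little more concrete, here is a minimal sketch, not drawn from the article and using entirely hypothetical records and group labels, of one of the simplest audits a delivery team might run: comparing false positive rates between demographic groups in a risk-scoring system. A large gap between groups is one warning sign that the underlying data or rules may be treating citizens unequally.

```python
from collections import defaultdict

# Hypothetical audit records: (demographic_group, model_flagged, actual_outcome).
# In practice these would come from an evaluation set reviewed by the team.
records = [
    ("A", True,  True), ("A", False, False), ("A", True,  False), ("A", False, False),
    ("B", True,  False), ("B", True,  True), ("B", False, False), ("B", True,  False),
]

flagged_negatives = defaultdict(int)  # genuinely negative cases wrongly flagged, per group
total_negatives = defaultdict(int)    # all genuinely negative cases, per group

for group, flagged, actual in records:
    if not actual:                    # only genuinely negative cases count towards the FPR
        total_negatives[group] += 1
        if flagged:
            flagged_negatives[group] += 1

for group in sorted(total_negatives):
    fpr = flagged_negatives[group] / total_negatives[group]
    print(f"group {group}: false positive rate = {fpr:.2f}")
```

Run on the toy data above, the script reports a markedly higher false positive rate for one group than the other; in a real project such a disparity would prompt the multi-disciplinary team to question the training data and the workflow around the model before any citizen-facing decision is automated.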

It Takes A Village

Multi-disciplinary teams are a must for these projects. Beyond project managers, business analysts and programmers, AI projects require ethicists, linguistic specialists, data strategists and others to think through and address the implications and limitations of these technology implementations, all working towards agencies not only reaping the rewards of enhanced data processing but also safeguarding trust in public services and citizens’ lives themselves.

By augmenting strategy and implementation teams with these types of specialists, BJSS has seen thoughtful, well-designed projects come to life and make a real difference to both the public sector and citizens today. This year’s deferred Spending Review is said to place an increased focus on, and in some cases mandate policy for, AI and automation usage across government. It is therefore imperative that, as we continue to embrace the digital revolution across the state, every service, whether public-facing or back-office focused, considers its social and environmental impacts from an ethical perspective, not just from the outset but throughout the lifetime of the service, process or system.
