ICO’s ExplAIn draft guidance and consultation

Katy Guthrie, Head of ScotlandIS Data

The ICO, in partnership with the Alan Turing Institute, has published draft guidance on explainable AI and is seeking feedback on it. This guidance is intended to enable organisations to explain to individuals, in simple terms, how decisions which concern them have been made. It is something that all organisations whose AI involves personal data will need to take heed of, and it may steer decisions on which type of AI algorithm a company uses to reach a decision: simpler machine learning techniques, such as decision trees, naturally have more inherent traceability. This means AI practitioners need to have a range of tools at their disposal and should keep it simple where possible. It will also drive research into making more complex algorithms explainable. While this guidance only applies to AI which processes personal data, there are lots of other reasons why AI might need to be explainable, particularly in heavily regulated industries, so it should be taken as good practice regardless.
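To illustrate the traceability point, here is a minimal sketch, assuming Python with scikit-learn and its bundled iris dataset (illustrative choices of mine, not tools the guidance prescribes), showing how a shallow decision tree can be rendered as plain if-then rules:

    # Illustrative only: scikit-learn and the iris dataset are assumptions,
    # not anything the ICO guidance mandates.
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_text

    iris = load_iris()

    # Keep the tree shallow so every decision path stays short and readable.
    tree = DecisionTreeClassifier(max_depth=3, random_state=0)
    tree.fit(iris.data, iris.target)

    # export_text prints the learned rules as nested feature thresholds,
    # giving a direct trace of how any individual decision was reached.
    print(export_text(tree, feature_names=iris.feature_names))

The printed rule list ties each prediction to explicit feature thresholds, which is the sort of rationale a decision tree makes available for free; a deep neural network offers no equivalent view without additional explainability tooling.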

At about 170 pages in total, the ICO’s guidance is certainly comprehensive. Fortunately, it is split into three sections, each of which is aimed at a slightly different audience.

The first part explains the legal basis for requiring explanation of AI-based decisions, including but not limited to the GDPR principles of lawfulness, fairness and transparency, and accountability. As well as the legal requirements, the guidance also puts forward a compelling case for why providing good explanations at the right level is a win-win for organisations and individuals, helping to build trust, guard against reputational damage and uphold the law.

Part 1 also covers the various types of explanation which could be required. These should ensure the rationale for decisions is explained in an accessible, non-technical manner, and that there is clarity on responsibility for the decisions: even if they have been made by an AI process, an accountable human must be able to review them if required. It also outlines data and fairness explanations, which go back to good data governance and an understanding of data provenance, as well as being able to explain why particular algorithms were selected.

The second section of the guidance is aimed at more technical teams, while the third provides an overview of what an organisational structure for achieving this might look like, and what types of policies and procedures an organisation should put in place.

The ICO’s public consultation is open until 24th January, so there is still an opportunity to provide feedback. It’s all available at https://ico.org.uk/about-the-ico/ico-and-stakeholder-consultations/ico-and-the-turing-consultation-on-explaining-ai-decisions-guidance/
