Check out this blog from Tom Weeks, Technical Director at Informed Solutions, on the responsible use of artificial intelligence following the Digital Leaders’ National Digital Conference.
Last month, I was privileged to take part in the Digital Leaders’ 18th National Digital Conference, which explored the opportunities, benefits, risks, and societal impact of AI. It was fascinating to hear the different perspectives discussed during the day, and the common themes that emerged across them.
One theme that particularly stood out was the importance that everyone placed on developing and using AI responsibly and ethically. Society is becoming increasingly aware of AI, and increasingly questioning of it, so trust and confidence will rightly need to be earned by demonstrating that AI is being developed and used with people’s best interests in mind.
Here at Informed, AI is playing an increasingly significant role in the digital transformation programmes that we deliver for our clients, and in the solutions we provide to our international customer and partner community. We want the solutions we deliver to have a positive impact, and so it’s hugely important to us that we develop and use AI responsibly and ethically. Taking part in the conference made me reflect on how we approach AI assurance, and I wanted to share some of the principles and practices that we have found make a noticeable difference.
How AI Assurance and Other Information Assurance Functions Can Work Shoulder-To-Shoulder
Over the last few years, information assurance has become a more integral part of every organisation. All organisations in the UK have an obligation to protect personal data in line with the GDPR but, for most organisations, data protection is just one information assurance function that sits alongside others such as information security and cyber security. AI is data-driven, and so AI assurance has a tight relationship with these other information assurance functions.
Whilst AI assurance, data protection, information security, and cyber security are complementary and inter-related, the level of collaboration between specialists in each of these assurance functions is often limited. For example, it isn’t often that we see data scientists, data protection specialists, and information security specialists sitting down together to co-review a Data Protection Impact Assessment for a new AI-based service, or to brainstorm the organisational and technical measures that will help to make an AI solution safe, secure, transparent, and fair by design. This sort of siloed working is common, and it is a missed opportunity that risks poorer outcomes for AI assurance.
AI assurance, data protection, information security, and cyber security may be distinct and highly specialised disciplines, but they all share a common outcome: creating trust and confidence by assuring that information is managed responsibly, ethically, and legally. Given that shared outcome, organisations should reflect on their operating model for information assurance and, where necessary, make changes that require close collaboration between the different functions. Close collaboration produces a more complete and cohesive understanding of risks and opportunities that is greater than the sum of its parts; that understanding leads to more effective actions, and more effective actions lead to more assured AI and greater levels of trust and confidence.
Improving collaboration between assurance functions might sound easier said than done, but we have seen it done simply and well. The best examples are where assurance functions have adopted ways of working that you would typically find in an agile product team. The Plan-Do-Check-Act lifecycle that is a staple of many ISO standards maps closely to Scrum’s sprint planning, delivery, and review/retrospective cycle, and we have seen assurance functions use Scrum very successfully as a methodology for running multi-disciplined teams that work collaboratively to shape and agree shared assurance goals and deliver a Backlog of work that achieves them.
Embed AI Assurance Techniques Into Delivery Methods
Security by design, privacy by design, and data protection by design and default are concepts that we’re all familiar with and subscribe to. These concepts say that security, privacy, and data protection considerations should be ‘baked in’ to everyday working practices so that they are assured as a matter of course throughout the delivery lifecycle, rather than only at occasional checkpoints. Applying the same principle to AI assurance will help to ensure that AI is safe and ethical by design and has people’s best interests in mind.
The majority of digital transformation programmes involve the delivery of new products, services and capabilities using agile methodologies based on frameworks like Scrum, Nexus and SAFe. These methodologies involve multi-disciplined teams of User Researchers, Service Designers, Architects, Data Scientists and Developers delivering products and services in a user-centred and iterative way. Teams frequently inspect and adapt what they are delivering to assure that user needs are being met, quality is high, and risks are being mitigated. This ‘baked in’ focus on user needs, quality and risk means that agile delivery methodologies can be adapted to embed AI assurance techniques with relatively little effort.
Here is one simple example of how we have embedded AI assurance techniques into a two-week Discovery Sprint where the goal is to understand user needs for a new digital service that incorporates AI:
- During Sprint Planning the whole team brainstorms and agrees the research objectives that they want to achieve during the coming Sprint. This includes identifying the users that we want to conduct research with, and the research techniques we plan to use (focus groups, interviews, surveys, etc.). The research objectives are formulated as hypotheses using a Hypothesis-Driven Development user story structure and are informed by what we’ve learned during the previous Sprint.
- Early on during Sprint delivery we run a Consequence Scanning ceremony based on the excellent Kit available at doteveryone.org.uk. This is a whole-team ceremony that brings together team members from user research, service design, data science, and technical architecture to consider the consequences of the AI-based service from different perspectives. We also involve assurance specialists from our clients’ AI assurance, data protection, information security, and cyber security assurance functions so that delivery teams and assurance functions are collaborating shoulder-to-shoulder. During the ceremony, we take the hypotheses that were formed during Sprint Planning and consider what the intended and unintended consequences of these might be. We often use the ‘Potential Harms from Automated Decision-Making’ framework developed by the Future of Privacy Forum as a prompt for making sure we think broadly about the different categories of consequences that could lead to individual or collective benefits or harms. Once we have a sense of what the consequences could be, we use these to refine our hypotheses and inform the discussion guides or surveys that steer our research.
- We run our research and elicit user feedback on the hypotheses and consequences. We synthesise the feedback to draw out findings and insights that we use to refine our understanding of our user personas and needs. As well as capturing user needs in our personas, we also capture the users’ views on the consequences we’ve identified and articulate these as potential risks, harms, and opportunities. This helps to keep these topics at the forefront of the team’s mind.
- During Sprint Review, the whole team inspects the findings from our user research and reflects on what we’ve learned and whether our hypotheses turned out to be true or not. We take what we’ve learned and use it to inform and adapt the research objectives for our next Sprint. The cycle then starts again.
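The Hypothesis-Driven Development user story structure mentioned in the first step can be sketched as a simple data structure. The field names and the example wording below are illustrative assumptions of ours, not a template from Informed's actual tooling:

```python
# A minimal sketch (assumed structure, not an official template) of a
# hypothesis-driven user story as brainstormed during Sprint Planning.
from dataclasses import dataclass, field


@dataclass
class Hypothesis:
    """A research objective expressed as a testable hypothesis."""
    belief: str   # what we believe to be true about user needs
    action: str   # the research activity that will test the belief
    signal: str   # the measurable signal that confirms or refutes it
    # Intended/unintended consequences captured during Consequence Scanning
    consequences: list[str] = field(default_factory=list)

    def as_story(self) -> str:
        """Render the hypothesis in the familiar HDD story form."""
        return (
            f"We believe that {self.belief}. "
            f"We will test this by {self.action}. "
            f"We will know we are right when {self.signal}."
        )


# Hypothetical example for a new AI-based digital service.
example = Hypothesis(
    belief="users want a plain-language explanation of each AI-assisted decision",
    action="interviewing six users with a prototype decision screen",
    signal="at least four users say the explanation helps them trust the outcome",
)
print(example.as_story())
```

Capturing consequences on the same record as the hypothesis keeps the outputs of the Consequence Scanning ceremony attached to the research objective they relate to, so they are revisited naturally at Sprint Review rather than filed away separately.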
These are all simple things, but making them an embedded part of your delivery method has significant benefits. The overall approach allows organisations to balance agility and innovation with control, which is in keeping with the spirit of the pro-innovation approach to AI regulation and assurance set out in the recent UK Government white paper. The frequency of inspection and adaptation reduces the likelihood of more insidious risks, such as bias in data and models, creeping in unnoticed. There are regular forums for involving assurance specialists in delivery and for different assurance functions to work shoulder-to-shoulder. It is more straightforward to quickly reconcile different viewpoints that team members might have, such as how to balance user needs identified through research with compliance obligations identified by assurance specialists. It is also more straightforward to adapt AI assurance techniques (such as those set out in the CDEI portfolio of AI assurance techniques) as new needs, standards and guidance emerge.
AI assurance is closely inter-twined with other information assurance functions and should be approached with a ‘by design’ mindset. AI, data protection, information security, and cyber security assurance functions should collaborate closely, and AI assurance techniques should be baked into your delivery approach. Agile delivery frameworks like Scrum can be readily adapted to allow this and, by doing so, AI assurance becomes an everyday team sport. Ultimately, that can only lead to higher levels of trust and confidence that AI is being developed and used with people’s best interests in mind.