What is next for AI regulation? Senior Technologist Adam Leon Smith CITP FBCS explores the implications of the EU AI Act.
In September 2021, a panel convened at a ForHumanity conference with senior guests from the US Equal Employment Opportunity Commission (EEOC), the US Government Accountability Office, the European Commission and the UK Accreditation Service. The topic was AI-specific regulation: whether it is needed, the progress being made and the complexities of implementation.
Paul Nemitz, from the European Commission, outlined the need for the proposed AI Act in the EU. Whilst GDPR regulates automated decision-making, it focuses on the use of personal data rather than on the technologies themselves. In Paul's opinion this leaves a gap, and the Act is expected to pass early in 2022.
The AI Act applies the equivalent of a product safety regime, complete with CE marking, to AI. Like GDPR, the AI Act is extraterritorial and affects international trade in technology products and services, so it is of significant international interest.
Differing stances
The UK’s position on AI regulation is unclear. The UK’s Information Commissioner's Office (ICO) has welcomed the proposed EU act. However, the most recent consultation issued by the Department for Digital, Culture, Media and Sport instead proposes the removal of Article 22 from GDPR, which would strip away a person's right to appeal when an algorithm makes a decision that significantly affects them.
The rationale for removing it is confusing: it seems to be that organisations are unsure how to comply with the article, and that compliance might become harder as AI is used more widely. That does not jump off the page as a sound rationale.
The US regulators at the conference were very well versed in the risks and opportunities of AI. The EEOC highlighted ways in which AI has introduced biases into the hiring process, but also ways in which it has removed cognitive biases. Both US regulators felt that existing legislation was sufficient to regulate AI, though one noted that the Biden administration may be working on proposing a specific AI law.
The UK Accreditation Service was also represented. It normally accredits UK organisations to provide certifications through product, service or organisational audit and inspection, and is currently working with the ICO on the GDPR requirements, including a certification scheme for AI.
Extraterritorial legislation
There are clearly quite different views on regulation across the Atlantic, but the effect of extraterritorial legislation from the EU will probably ripple across the world. UK companies producing and exporting AI and AI services are starting to pay attention to the demands for auditable technical quality management systems coming from the EU.
The Centre for Data Ethics and Innovation (CDEI) made an important point in a blog post responding to the EU AI Act: most approaches to quality management of AI are about ‘risk assurance’ rather than ‘compliance’. That is, they use open-ended questions to assess risk, rather than criteria that can be audited in a binary way.
Whilst ISO/IEC and IEEE have been working on AI standards for some years, best practice is still in its infancy. As an example, whilst the concepts and techniques relating to AI bias are defined in standards now going through the publication process, recommendations for mitigating bias remain suggestions rather than requirements.
The EU AI Act contains provisions to automatically adopt technical standards as they become ready. A technical committee inside the EU’s standardisation bodies (in which the UK is still included) will review each one and recommend whether it should be adopted, ignored or extended.
It seems we are heading towards legislation driven primarily by the EU, with implementation in 2023. Given that technical standards are unlikely to be ready by then, compliance will be a challenge. Reassuringly, BCS is at the heart of this work, responding to UK government consultations and creating new professional standards. BCS members are visible and vocal in European and international standards bodies, helping to drive technical consensus.
About the author
Adam Leon Smith CITP FBCS is a senior technologist and CTO of Neuro and Dragonfly.
Twitter: @adamleonsmith
LinkedIn: Linkedin.com/in/adamleonsmith/