Many organisations have established an AI policy, and some, such as IBM [1] and Google [2], have made theirs available online. Here, we propose ten largely risk-based considerations that synthesise the societal, legal, ethical and engineering challenges organisations need to address when developing an AI system.
1. Governance
Accenture recently reported that 63% of AI adopters have an ethics committee [3]. Establishing an AI ethics committee to oversee the use of AI will ensure adherence to the law, promote best practice, oversee risk and provide the authority for periodic audit. The committee would also be responsible for ensuring that remediation takes place if breaches of policy occur.
2. Privacy
The general principles of the GDPR [4] provide a robust basis for protecting personal data in any jurisdiction, although additional local laws may also apply (e.g. in some US states). The GDPR requires that a data privacy assessment be performed on all training datasets. If the datasets contain personal data, they are subject to the same privacy controls as production data, notably the principle of privacy by design and default.
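As one practical illustration of privacy by design, the sketch below pseudonymises direct identifiers before a dataset enters a training environment. It is a minimal example: the column names and salt handling are hypothetical, and pseudonymised data still counts as personal data under the GDPR.

```python
import hashlib

# Hypothetical direct identifiers; a real project would define these
# in its data privacy assessment.
DIRECT_IDENTIFIERS = {"name", "email", "phone"}

def pseudonymise(record: dict, salt: str) -> dict:
    """Replace direct identifiers with salted SHA-256 tokens so raw
    personal data never reaches the training environment."""
    out = {}
    for key, value in record.items():
        if key in DIRECT_IDENTIFIERS:
            digest = hashlib.sha256((salt + str(value)).encode()).hexdigest()
            out[key] = digest[:16]  # stable token; not reversible without the salt
        else:
            out[key] = value
    return out

row = {"name": "Jane Doe", "email": "jane@example.com", "age": 42}
print(pseudonymise(row, salt="per-project-secret"))
```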
3. Security
Machine learning datasets and AI codebases represent valuable IP, and security breaches can result in adverse reputational and financial consequences. The use of a security management framework, such as ISO 27001, helps ensure controls are in place to enforce the confidentiality, integrity and availability of these information assets.
Model development introduces new intrusion targets. Trojan attacks can corrupt what a model learns, and that corruption can propagate into testing and operation. Model inversion and extraction attacks enable intruders to reverse-engineer a machine learning development, typically to replicate its capabilities or to game the production system.
Production AI systems should follow the usual security best practices, such as hardening and patching. AI systems should also be robust to adversarial attacks that attempt to force misclassification, e.g. presenting a printed photograph of a face to spoof a facial recognition system.
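To make the adversarial threat concrete, the sketch below applies the fast gradient sign method (FGSM) to a toy logistic-regression classifier. The model, data and epsilon are all illustrative; robustness testing of a production system would use the real model and a dedicated adversarial testing toolkit.

```python
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=8), 0.0  # toy logistic-regression classifier

def predict(x):
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))  # P(class = 1)

def fgsm_perturb(x, y, epsilon=0.25):
    """Fast gradient sign method: step the input in the sign of the
    loss gradient, which for logistic loss is (p - y) * w."""
    grad = (predict(x) - y) * w
    return x + epsilon * np.sign(grad)

x, y = rng.normal(size=8), 1.0
print("clean score:    ", round(float(predict(x)), 3))
print("perturbed score:", round(float(predict(fgsm_perturb(x, y))), 3))
```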
4. Safety
A principle of ‘safety by design’ should begin with a safety impact assessment. If the AI system can impact human or product safety, it must be subjected to elevated, holistic controls, commensurate with the level of risk.
The behavioural impact of AI systems should be evaluated, for example: in recommendation systems; where minors or other vulnerable groups may be impacted; or where an actor’s behaviour may adversely change if they are aware they are dealing with automated decision-making (driverless cars come to mind [5]).
All AI actors should be appropriately trained in the operation of an AI system so that they can respond correctly in the event of a safety incident, such as a malfunction (perhaps as a result of misuse) or the failure of an input (such as a sensor).
Classifier thresholds should be set so that, where there is a high probability of a false positive or false negative in a critical assessment (such as cancer screening), the safest automated decision is taken or the case is referred for human expert review.
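A minimal sketch of such a three-way decision rule is shown below; the thresholds are illustrative and would be set per system from the safety impact assessment.

```python
def triage(p_positive, auto_low=0.05, auto_high=0.95):
    """Three-way decision for a critical classifier (e.g. screening):
    confident scores are automated, while the uncertain band is
    deferred to human expert review. Thresholds are illustrative."""
    if p_positive >= auto_high:
        return "automated: positive finding, escalate"
    if p_positive <= auto_low:
        return "automated: no action"
    return "refer to human expert review"

for score in (0.02, 0.50, 0.97):
    print(score, "->", triage(score))
```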
5. Replication of outcomes
A management system such as ISO 9001, containing a formal AI system lifecycle, provides a powerful toolset for repeatability. Environments in which AI models are developed, trained, tested and operated should have controls in place to address any changes to hardware, software, data or process from those specified in the AI system documentation.
When an AI system fails, the ability to reproduce the failure in a test setting is an important engineering design consideration, particularly if the reasons why the system made a decision or prediction are opaque. Many of the subsequent policy considerations depend on repeatable processes being in place.
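As a small illustration of what repeatability means in practice, the sketch below pins a random seed and records the runtime environment alongside a run, so that a failure seen in operation can later be replayed under the same conditions. The manifest fields are illustrative; a real system would also pin library versions, data snapshots and model artefacts.

```python
import json
import platform
import random
import sys

def reproducibility_manifest(seed: int) -> dict:
    """Pin the random seed and capture the runtime environment so a
    failure seen in operation can be replayed in a test setting."""
    random.seed(seed)
    return {
        "seed": seed,
        "python": sys.version,
        "platform": platform.platform(),
    }

print(json.dumps(reproducibility_manifest(seed=1234), indent=2))
```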
6. AI performance
AI system performance may be impaired if the testing of AI models is ineffective or outcomes are unclear. A plan defining AI performance targets should be established at the outset of AI system development, typically with traffic-light indications of acceptable ranges, e.g. for precision and recall. Where operational performance levels cannot be established before a model is trained, the training exercise can be viewed as a form of prototyping, with detailed targets redefined ahead of formal testing.
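A minimal sketch of such traffic-light reporting is given below; the metric thresholds and test counts are illustrative, to be agreed in the performance plan.

```python
def traffic_light(value, green=0.90, amber=0.80):
    """Map a metric to the traffic-light ranges agreed in the
    performance plan (thresholds are illustrative)."""
    if value >= green:
        return "GREEN"
    return "AMBER" if value >= amber else "RED"

# Counts from a hypothetical test run.
tp, fp, fn = 86, 9, 14
precision = tp / (tp + fp)
recall = tp / (tp + fn)
print(f"precision {precision:.2f}: {traffic_light(precision)}")
print(f"recall    {recall:.2f}: {traffic_light(recall)}")
```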
‘Concept drift’ can occur during the operational use of AI, such that data presented to an AI system is significantly at variance with the examples used in training. Additionally, continuous learning activities may result in unexpected variances in outcomes. As a consequence, a form of calibration can be performed periodically to ensure the AI system operates within bounds against a reference standard. Alternatively, service design can ensure periodic or continuous review of out-of-bounds AI performance, with automated alerts raised as appropriate.
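One common way to automate such a review is a drift statistic over live inputs. The sketch below computes the population stability index (PSI) against a training reference; the distributions and the 0.2 alert threshold are illustrative conventions, not prescriptions.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI compares the distribution of live inputs with the training
    reference; values above ~0.2 are a common drift alert level."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf  # capture out-of-range live values
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct, a_pct = np.clip(e_pct, 1e-6, None), np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 10_000)  # training reference
live = rng.normal(0.5, 1.2, 10_000)   # shifted live traffic
psi = population_stability_index(train, live)
print(f"PSI = {psi:.3f}", "-> raise alert" if psi > 0.2 else "-> in bounds")
```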
7. Avoidance of bias
An assessment should be made during the pre-processing of any data used in machine learning. It should confirm that the quality of the data supports the operational goals of the AI development, and that practical steps are taken to detect and prevent bias towards any individuals or sub-populations represented in the training data.
A methodical approach is needed, as bias can occur in many guises. Forms of bias identified by Google [6] include automation, confirmation, experimenter’s, group attribution, implicit, in-group, out-group homogeneity, coverage, non-response, participation, reporting, sampling and selection bias. Each should be explicitly considered in the assessment for applicability and, where applicable, avoided.
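The sketch below addresses just one of these forms (coverage/sampling bias) by flagging under-represented sub-populations in the training data; the 10% floor and attribute names are illustrative.

```python
from collections import Counter

def representation_report(records, group_key, floor=0.10):
    """Flag sub-populations whose share of the training data falls
    below an agreed floor (the 10% floor here is illustrative)."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {
        group: {"share": round(n / total, 3), "under_represented": n / total < floor}
        for group, n in counts.items()
    }

# Hypothetical training records with a protected attribute.
data = [{"group": "A"}] * 70 + [{"group": "B"}] * 25 + [{"group": "C"}] * 5
print(representation_report(data, "group"))
```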
8. Accountability
Key to accountability are traceability, explainability and transparency in decision-making. During development, this can be facilitated by adopting periodic go/no-go checkpoint reviews, with clear definition of the responsibilities of, and communication to, impacted stakeholders.
A principle of ‘explainable AI’ (XAI) [7] should be implemented so that the rationale behind an operational outcome can be explained in terms understandable to a data subject or an auditor, whether through documentation, within the AI system itself, or through appropriate subject matter expertise.
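Many XAI techniques exist; as a minimal, model-agnostic illustration, the sketch below uses permutation importance to show which inputs drive a model’s outcomes. The stand-in model and data are entirely hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))

def model(X):
    """Stand-in classifier: feature 0 dominates, feature 1 is ignored."""
    return (2.0 * X[:, 0] - 0.5 * X[:, 2] > 0).astype(int)

y = model(X)  # labels the stand-in model reproduces exactly

def permutation_importance(X, y, n_repeats=20):
    """Mean drop in accuracy when each feature is shuffled: a simple,
    model-agnostic signal of which inputs drive outcomes."""
    base = (model(X) == y).mean()
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = Xp[rng.permutation(len(Xp)), j]  # shuffle one feature
            drops.append(base - (model(Xp) == y).mean())
        importances.append(float(np.mean(drops)))
    return importances

for j, imp in enumerate(permutation_importance(X, y)):
    print(f"feature {j}: importance {imp:.3f}")
```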
9. Human rights & social responsibility
In addition to compliance with the law, and given the high societal impact of AI, the provision of AI solutions should be associated with best practice in corporate social responsibility, e.g. the UN Guiding Principles on Business and Human Rights [8].
10. Supplier responsibilities
All suppliers of components or skills used in the development or operation of AI systems should be directed, trained and audited as appropriate to ensure they comply with ethical policies comparable to those of the client.