European Commission Ethics Guidelines for Trustworthy AI

The European Commission's High Level Expert Group on Artificial Intelligence has published the final version of its Ethics Guidelines for Trustworthy AI.

On 8 April 2019, following a wide-ranging consultation, the European Commission's High Level Expert Group on Artificial Intelligence published the final version of its Ethics Guidelines for Trustworthy AI (the Guidelines). The Guidelines aim to provide a framework for stakeholders in AI to develop and deploy “trustworthy” AI systems, ensuring that the benefits of AI are realised and potential harmful consequences are avoided. Such an approach should help AI avoid the loss of trust that has historically affected industries such as aviation, nuclear power and food safety.

Trust in AI is an increasingly pressing concern, reflected in the presence of representatives of a number of key stakeholders in the High Level Expert Group that developed the Guidelines (including Google, IBM, Nokia Bell Labs and Orange). Trust in AI has also been the subject of a recent study by Ernst & Young, which likewise attempts to establish a framework for assessing AI systems.

What do the Guidelines require?

The Guidelines are grounded in fundamental rights and sit alongside existing laws such as the GDPR and consumer protection legislation. Although the Guidelines themselves are couched in terms of fundamental rights, they note that firms can be legally required to comply with such laws, and they clearly envisage that fundamental rights should be actively considered by those involved in the development and deployment of AI. For example, the Guidelines suggest that, where risks exist, companies should undertake a “fundamental rights impact assessment”, mirroring requirements for data protection impact assessments under the GDPR.

The Guidelines centre around seven key requirements for “trustworthy” AI:

1. Human agency and oversight

AI should be used to support human autonomy and decision-making. Users should be in a position to make informed decisions regarding AI systems and those systems should be subject to human oversight to ensure that they do not undermine human autonomy.

2. Technical robustness and safety

AI should be secure and resilient to attacks, such as hacking. Systems should produce reliable, reproducible results, and there should be safeguards and fallback plans in place in case problems arise.

3. Privacy and data governance

AI systems should guarantee data privacy and protection throughout their lifecycle. Access to data should be appropriately controlled and those developing or deploying AI systems should be conscious of the quality and integrity of data used by those systems.

4. Transparency

AI systems should not represent themselves as human to users. Their decisions should be explainable in a manner appropriate to the audience, while data and processes should be well documented so that it is possible to trace the cause of errors.

5. Diversity, non-discrimination and fairness

AI should be developed to control for biases that could lead to discriminatory results. Systems should be user-centric and accessible to all people regardless of age, gender, ability or other characteristics.

6. Societal and environmental wellbeing

The broader society, the natural environment and other sentient beings should all be considered as stakeholders throughout the lifecycle of an AI system. The use of AI should be carefully considered, particularly in the context of the democratic process.

7. Accountability

AI systems and data should be capable of being assessed and should be externally audited when fundamental rights are affected. Negative impacts should be minimised and reported, and adequate redress must be available when unjust adverse impacts occur.

The Guidelines also set out a more detailed “assessment list” that is designed to serve as a starting point for companies looking to comply with the Guidelines.

What do I need to do now?

First, it should be noted that compliance with the Guidelines is voluntary. Except to the extent that they overlap with existing laws and unless companies have otherwise committed to follow them, there is no obligation on companies to comply. However, the Guidelines do set a starting point for the development of EU regulation of AI, so companies should consider the possible impact of the Guidelines and follow developments in this area.

Companies should consider whether they develop or use technology that could be considered to be AI. The High Level Expert Group on AI has separately published a definition of AI. AI is broadly defined and includes all “systems that display intelligent behaviour by analysing their environment and taking actions - with some degree of autonomy - to achieve specific goals”. The Guidelines point to machine learning and reasoning technologies as the two key sub-disciplines of artificial intelligence.

To the extent that they develop or use AI, companies should also consider whether their existing technologies comply with the Guidelines. The Guidelines envisage a “piloting phase” that will last until early 2020, followed by a consultation and a revised version of the Guidelines. Companies may wish to contribute to this process to ensure that their use of AI is adequately captured in the development of the Guidelines and future regulation.

Companies may also wish to consider whether any of their existing contractual arrangements could require them to comply with the Guidelines. It is common for commercial contracts to contain provisions requiring the parties’ compliance with applicable laws and regulations. Depending on how such provisions have been drafted, companies may have agreed to follow the Guidelines, although, given that the Guidelines are non-binding, this is unlikely in most cases.

What might I need to do in future?

Although they are not legally binding, the Guidelines envisage that they will be used to “inspire new and specific regulatory developments”. The High Level Expert Group on AI is expected to produce a set of Policy and Investment Recommendations in June 2019, which are expected to identify “regulation [that] may need to be revised, adapted or introduced, both as a safeguard and as an enabler”. The Guidelines also envisage that standards and certification are likely to be used in future to communicate to users that complex AI systems are trustworthy.

Further comment

The Guidelines make it clear that it is only a matter of time until the EU pursues broader regulation of AI. The level of industry and academic involvement in the development of the Guidelines, and the broader interest in the approach, indicate that they reflect a growing consensus on how AI should be governed in future. Future regulatory efforts are likely to reflect the Guidelines (at least in part), so considering the application of the Guidelines now may help firms to avoid costly redevelopment or redeployment of systems when enforceable measures are adopted in future.

While the Guidelines were developed under the auspices of the European Commission, there has already been substantial international interest in the approach and how it will be applied. It is therefore also possible that the Guidelines will become a template to be followed in other jurisdictions.

You can read the European Commission’s Ethics Guidelines for Trustworthy AI here, while the Definition of AI can be found here.

Simmons & Simmons’ Artificial Intelligence Group

Simmons & Simmons’ Artificial Intelligence Group comprises lawyers across various practice areas who can assist companies and individuals with any legal issues arising in relation to AI.

We would be happy to advise on the Guidelines (including any risks for you or your business, or on conducting a “fundamental rights impact assessment” as noted above), or on any other legal issues relating to AI.

This document (and any information accessed through links in this document) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this document.