OECD Principles on Artificial Intelligence

The OECD has published the final version of its Principles on Artificial Intelligence.

On 22 May 2019, the Organisation for Economic Co-operation and Development (OECD) published the final version of its recommended Principles on Artificial Intelligence (the Principles). The Principles aim to guide governments, organisations and individuals to design and run AI systems in a way that prioritises people's interests and ensures that designers and operators are held accountable for their functioning. At present, 42 countries (including the 36 OECD members) have signed up to the Principles.

The Principles were developed by an OECD expert group on AI, which was established in September 2018 and includes representatives of governments, academia, professional organisations and businesses such as Facebook, Google, IBM and Microsoft. The European Commission, whose recent Ethics Guidelines for Trustworthy AI also seek to address the need to develop public trust in AI, also contributed to the Principles.

What do the Principles recommend?

The Principles go beyond the OECD’s normal focus on economics and also emphasise issues like privacy, individual and worker rights, and the reliability and safety of AI systems. Accordingly, the Principles cover both national policies and “principles for the responsible stewardship of trustworthy AI”.

In particular, the Principles include five key recommendations for AI:

1. Inclusive growth, sustainable development and well-being

Stakeholders in AI should proactively pursue beneficial outcomes for people and the planet, including augmenting human capabilities and enhancing creativity, advancing the inclusion of underrepresented populations and reducing economic, social, gender and other inequalities.

2. Human-centred values and fairness

Throughout the lifecycle of any AI system, stakeholders should respect the rule of law, human rights and democratic values. Stakeholders in AI should implement appropriate mechanisms and safeguards to ensure human freedom, dignity and autonomy, privacy and data protection, non-discrimination and social justice.

3. Transparency and explainability

Stakeholders in AI should commit to transparency with respect to their AI systems. In particular, stakeholders should seek to foster general understanding of AI, ensure that individuals are aware when they are interacting with AI, enable those affected by an AI system to understand its outcomes, and provide appropriate means to challenge the outcomes of AI systems.

4. Robustness, security and safety

Stakeholders in AI should ensure that systems are robust, secure and safe throughout their lifecycle, both in conditions of normal use and in adverse conditions or under foreseeable misuse. AI systems should be traceable in relation to datasets, processes and decisions to ensure that AI systems can be appropriately analysed. AI stakeholders should apply a systematic approach to risk management at each stage of the lifecycle of AI systems.

5. Accountability

Stakeholders in AI should be accountable for the functioning of AI systems and for their respect of the above recommendations, based on their roles, the context and the current state of the art.

In addition to these five key recommendations, the Principles also provide suggestions for national policy priorities, including: investing in responsible AI research and development; fostering a digital ecosystem for AI; shaping an enabling policy environment for AI; building human capacity and preparing for labour market transformation; and international cooperation for trustworthy AI.


The Principles are not legally binding, but they do reflect a growing international consensus on the development and governance of AI systems. Existing OECD recommendations, such as the OECD Privacy Guidelines and the OECD Principles of Corporate Governance, have proven highly influential in shaping international standards and national legislation. As a result, companies involved in the development or use of AI may wish to consider whether their current technologies are consistent with the Principles.

The OECD Committee on Digital Economy Policy is expected to follow up the Principles with further practical guidance for implementation, and to provide a forum for dialogue and information exchange in respect of AI (the OECD AI Policy Observatory). Companies may wish to monitor the development of this practical guidance when developing or deploying new AI systems. Failure to do so may lead to costly changes to systems when the Principles and accompanying guidance start to be reflected in the signatory countries’ national legislation and policies.

You can read the OECD’s Principles on Artificial Intelligence here.

Simmons & Simmons’ Artificial Intelligence Group

Simmons & Simmons’ Artificial Intelligence Group comprises lawyers across various practice areas who can assist companies and individuals with any legal issues arising in relation to AI.

We would be happy to advise on the Principles (including any risks for you or your business), or on any other legal issues relating to AI.

Please contact Minesh Tanna if you would like any further information.

This document (and any information accessed through links in this document) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this document.