European Commission Policy and Investment Recommendations for Trustworthy AI

The European Commission's High-Level Expert Group on Artificial Intelligence has published its Policy and Investment Recommendations for Trustworthy AI.

Following its recent Ethics Guidelines for Trustworthy AI (the Guidelines; see our summary article here), the European Commission's High-Level Expert Group on Artificial Intelligence published its Policy and Investment Recommendations for Trustworthy AI (the Recommendations) on 26 June 2019.

The earlier Guidelines set out the Commission’s vision for “trustworthy” AI - AI that is ethical, lawful and robust - and the Recommendations build on this vision with specific suggestions for future policy and regulation.

The Recommendations

The Recommendations comprise 33 proposals, listed in Appendix 1 below, which seek to strike a balance between encouraging the use and development of AI for the benefit of society and protecting society from the potentially harmful effects of AI. The Recommendations touch on a wide range of European policy areas, including competition, diversity and inclusion, education, the environment, infrastructure, investment and workers’ rights.

As a result, many of the Recommendations are primarily targeted at EU and Member State institutions, rather than private organisations. However, they also contain a number of proposals of particular relevance to organisations that develop, deploy or use AI. We summarise further below the key proposals that may be relevant to private organisations.

The Recommendations focus on four key areas in respect of which policy is required to achieve the Commission’s vision for “trustworthy” AI:

1. Humans and society. Human-centricity is at the heart of the emergent European Commission approach to AI. This implies not only attention to individuals, but also to the well-being of the natural environment and society in general. Achieving human-centric AI is a key goal of the Recommendations.

2. Private sector. The Recommendations acknowledge the important role that the private sector must play in addressing the opportunities and challenges posed by AI. The Recommendations also emphasise the importance of collaboration between the private sector, the public sector, academia and civil society.

3. Public sector. The Recommendations envisage a fundamental shift towards governments providing more efficient and digitised public services. By delivering services using AI, the public sector should act as a catalyst for growth and innovation.

4. Research and academia. The Recommendations highlight the crucial role that research and academia must play in understanding the emerging challenges and turning the development of AI towards the common good.

In order to achieve “trustworthy” AI, the Recommendations also identify four key policy areas that will act as enablers:

1. Data and infrastructure. The Recommendations emphasise the importance of data and infrastructure in promoting “trustworthy” AI.

2. Education and skills. The Recommendations underscore the importance of appropriate education and skills to build and maintain an AI-enabled world.

3. Governance and regulation. The Recommendations recognise the need for an appropriate governance framework and suggest a detailed review of the current legislation and regulation.

4. Funding and investment. The EU currently underperforms in early-stage innovation funding. The Recommendations therefore propose steps to improve the availability of funding for AI-related projects.

Recommendations relevant to private organisations

A number of proposals are likely to be of particular interest to companies that develop, deploy or use AI. Like all of the Recommendations, these proposals are non-binding (except to the extent that they reflect existing legislation such as the GDPR). However, they are a useful indicator of the likely focus of legislation and ethical guidance.

  • Protection of privacy. Governments should refrain from engaging in mass surveillance of individuals. Similarly, measures should be taken to limit commercial surveillance of individuals so that it is consistent with the fundamental right to privacy. These measures should apply even to services that individuals receive for free.

  • Transparency and explainability. Where individuals interact with AI systems and could believe that they are interacting with a human, the deployers of those systems should be responsible for disclosing that the system is non-human. In parallel, tools should be developed to explain how personal data is processed.

  • Workers’ rights. Where AI systems are being developed, deployed or procured, workers should be involved in discussions to ensure that systems are usable and that workers retain sufficient autonomy and job fulfilment. Such discussions should also assist in ensuring that AI systems comply with applicable employment legislation. Employees at risk of losing their jobs to AI should also be supported.

  • Social inclusion. Developers of consumer-facing AI systems should be responsible for ensuring that such systems can be used by all intended users and do not lead to the social exclusion of those with disabilities.

  • Data interoperability. Generally, data interoperability between market players should be encouraged. More specifically, current competition law requirements for data interoperability can apply only to dominant firms upon proof that data access is indispensable to competition. In some sectors, the Recommendations envisage that it may be appropriate to introduce specific regulations, which would impose data interoperability requirements on both dominant and non-dominant firms.

  • Gender bias. Any gender bias in algorithmic decision-making should be addressed, particularly to the extent that it specifically affects women in the labour market.

  • Trustworthy AI assessments. Regulators should consider introducing mandatory requirements for trustworthy AI assessments where AI systems deployed by the private sector have the potential to have a significant impact on human lives. These assessments should include a fundamental rights impact assessment. Future requirements for traceability, auditability and ex ante oversight of AI systems are also a possibility.

  • Accountability and responsibility. In order to preserve the principles of human agency, accountability and responsibility, policy-makers should refrain from establishing any form of legal personality for AI systems or robots. The Recommendations state that to do so would pose a significant moral hazard.

Further comment

The Recommendations demonstrate the European Commission’s proactive approach to AI and lay out a framework for future policy and regulation. Importantly, the Recommendations build on the earlier Guidelines by focusing on specific areas that organisations and policymakers should consider. Although the Recommendations are not binding, organisations developing, deploying or using AI may wish to consider their implications in order to avoid costly changes when they start to be reflected in EU law and policy.

The Recommendations also provide a road-map for legislative action in relation to AI: a broad review of current EU law, followed by appropriate changes. AI measures are expected to be a focus of the new European Parliament, and organisations involved in AI should follow the progress of relevant EU legislation.

Simmons & Simmons’ Artificial Intelligence Group

Simmons & Simmons’ Artificial Intelligence Group comprises lawyers across various practice areas who can assist companies and individuals with any legal issues arising in relation to AI.

We would be happy to advise on the Recommendations (including any risks for you or your business), or on any other legal issues relating to AI.


APPENDIX 1

  1. Empower humans by increasing knowledge and awareness of AI
  2. Protect the integrity of humans, society and the environment
  3. Promote a human-centric approach to AI at work
  4. Leave no one behind
  5. Measure and monitor the societal impact of AI
  6. Boost the uptake of AI technology and services across sectors in Europe
  7. Foster and scale AI solutions by enabling innovation and promoting technology transfer
  8. Set up public-private partnerships to foster sectoral AI ecosystems
  9. Provide human-centric AI-based services for individuals
  10. Approach the government as a platform, catalysing AI development in Europe
  11. Make strategic use of public procurement to fund innovation and ensure trustworthy AI
  12. Safeguard fundamental rights in AI-based public services and protect societal infrastructures
  13. Develop and maintain a European strategic research roadmap for AI
  14. Increase and streamline funding for fundamental and purpose-driven research
  15. Expand AI research capacity in Europe by developing, retaining and acquiring AI researchers
  16. Build a world-class European research capacity
  17. Support AI infrastructures across Member States
  18. Develop legally compliant and ethical data management and sharing initiatives in Europe
  19. Support European leadership in the development of an AI infrastructure
  20. Develop and support AI-specific cybersecurity infrastructures
  21. Redesign education systems from pre-school to higher education
  22. Develop and retain talent in European higher education systems
  23. Increase the proportion of women in science and technology
  24. Upskill and reskill the current workforce
  25. Create stakeholder awareness and decision support for skilling policies
  26. Ensure appropriate policy-making based on a risk-based and multi-stakeholder approach
  27. Evaluate and potentially revise EU laws, starting with the most relevant legal domains
  28. Consider the need for new regulation to ensure adequate protection from adverse impacts
  29. Consider whether existing institutional structures, competences and capacities need revision to ensure proportionate and effective protection
  30. Establish governance mechanisms for a single market for trustworthy AI in Europe
  31. Ensure adequate funding for the recommendations put forward in this document
  32. Address the investment challenges of the market
  33. Enable an open and lucrative climate of investment that rewards trustworthy AI

This document (and any information accessed through links in this document) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this document.