Organisation for Economic Co-operation and Development (OECD), 22 May 2019
Artificial Intelligence (AI) is a general-purpose technology that has the potential to improve the welfare and well-being of people, to contribute to positive sustainable global economic activity, to increase innovation and productivity, and to help respond to key global challenges. It is deployed in many sectors ranging from production, finance and transport to healthcare and security.
1. Defining AI: “AI” is currently used to describe a variety of technologies, from automation to machine learning to, potentially, human-level comprehension (so-called “general AI”). The effectiveness of both private law and public law rights and obligations will depend on a cogent articulation of AI, as well as on distinguishing between “automated decision-making” and “autonomous decision-making”, in order to understand if and how those rights and obligations apply and, generally, to avoid uncertainty. This will be particularly challenging in light of the pace at which AI is developing.
2. Data challenges: Data is the foundation of AI; without it, machines cannot “learn” and “evolve”. There are fundamental, and frequently debated, ethical questions regarding the morally permissible limits of data use. AI developers will also need to comply with data privacy legislation (e.g. the GDPR), although there are concerns about whether current data privacy regulation can, and should, adequately deal with AI. Indeed, there are already indications that existing data privacy regimes will need to be specifically augmented to address the issues AI raises.
3. Tortious liability: Challenging questions arise when, in the absence of a contract, AI “goes wrong” and causes physical or financial harm to a person. Liability in tort generally requires a duty of care, owed by the tortfeasor to the injured person, and foreseeability of the harm caused. Given that AI involves machines making autonomous decisions, it will be difficult to establish who (if anyone) owes a duty of care to the injured person, and so who is liable for the AI’s decisions. As to foreseeability, will an AI’s actions be seen as sufficiently autonomous to constitute an intervening cause, severing the causal link from the most proximate human act?
4. Contractual liability: In principle, a contract allows the parties to a transaction involving AI to allocate liability between them in the event that the AI “goes wrong” and causes harm. However, as with tortious liability, the autonomy of AI is likely to give rise to challenges in negotiating and drafting appropriate contractual provisions dealing with liability for AI. We are already seeing, and advising on, risk allocation in AI-related technology development contracts, particularly in relation to a “maximum error tolerance” framework.
5. Consumer protection: Societies often protect consumers from harm through strict liability regimes (i.e. where liability is established regardless of fault), for example in the case of defective products in the UK and EU. These regimes incentivise producers to maintain rigorous safety standards. Given the inherent capabilities of AI and the potential harm it could cause, there may be a good argument for introducing a strict liability regime for AI, although there is a risk that this could unduly stifle AI innovation. Nation states are likely to implement their own regulations, but there is an opportunity to implement international regulation, which would foster greater competition and harmonised standards. The OECD Principles and EC Ethics Guidelines are a helpful start. Encouraging nation states to adopt international regulation will, however, be challenging.
6. IP challenges: According to a recent study by the World Intellectual Property Organization (WIPO), there has been a surge in patent applications for AI-based inventions, with US-based companies such as IBM and Microsoft leading the way. AI developers should consider an appropriate IP protection strategy for proprietary elements, particularly when collaborating with others. AI also raises novel challenges, such as determining the ownership of IP created by AI. For example, most copyright systems are founded on the principle that copyright subsists where a work displays a human author’s originality or creativity, but how does that apply to copyright works generated by AI? From a tax perspective, it is important to consider what IP is being created with AI, as well as where that IP is created and managed within the global organisation.
7. Employment / HR risks: AI is increasingly being used in employment / HR processes, where it can foster important principles such as equality and non-discrimination. However, because it is trained on historic data and human input (which can itself incorporate unconscious bias), there is a risk that AI will perpetuate, rather than solve, existing problems of bias and discrimination. Companies using AI systems will need to ensure the integrity of input data, amongst other things, in order to minimise bias and other issues in the AI’s decisions.
8. Antitrust: A particularly sensitive area is the growing use of AI by companies to improve pricing models and to predict market trends. The European Commission’s E-commerce Sector Inquiry (2017) found that, of the retailers surveyed, 67% use automatic software to monitor competitor prices and, in some cases, to implement automatic price adjustments. Both the UK and EU competition authorities have said that, where businesses enter into anticompetitive agreements via AI, that conduct will be treated no differently from any other offline agreement. An infringement of antitrust rules as a result of the use of AI could lead to heavy fines, director disqualifications and/or follow-on actions for damages.
9. Insurance: Given the potential harm that AI can cause, those involved with AI are likely to seek insurance to cover any resulting liability. Insurance is built around risk, and the biggest challenge insurers will face is pricing the risk that AI creates. This will be particularly difficult given AI’s ability to make autonomous decisions and the lack of historical data on which to base risk assessments.
10. Impact of AI on tax structure: AI has the potential to be incorporated into business processes across the entire value chain. This may involve a relocation of people functions, a shifting of intercompany flows (for example, of data or AI outputs), changes in the roles and responsibilities of people within the organisation, the creation of new IP, and fundamental changes in what would be considered economically significant risks for the business overall. These changes to the value chain may mean that existing tax structures are not sustainable in the future and should be re-evaluated as this new technology is harnessed.
This document (and any information accessed through links in this document) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this document.