BACKGROUND
AI is not just hype: it is going to get bigger, better and stronger, and as in-house lawyers we need to be on top of it.
According to a recent study by Wolters Kluwer, 73% of lawyers expect to integrate generative AI into their legal work in the next 12 months. However, there is no consensus on whether generative AI is an opportunity or a threat: a recent report from Upwork reveals that 77% of workers say that AI tools have decreased their productivity and added to their workload.
THE CHALLENGE
The challenge for us, as business enablers, when we face something disruptive, be it legislation or a new product, is to navigate the path to getting to YES when every fibre in our body screams NO. However, we can learn from previous experiences and use existing methodologies and legislation to assist us in getting to yes.
Some of us work for multinationals, so guidance will also derive from resources such as ISO and NIST (not just the EU AI Act). For example, NIST suggests that AI risk be evaluated at three levels of potential harm:
· Harm to people – for example, deploying an AI hiring tool that perpetuates discriminatory biases from past data;
· Harm to organisations – for example, erroneous financial data that is not properly reviewed by a human before being made available to the public; and
· Harm to ecosystems – for example, harm to the global financial system or a supply chain.
Some companies will be subject to all three.
Turning closer to home (the UK), we are going to adopt a risk-based approach, and as I navigate the landscape we operate in, you will see that there is a commonality that cuts across every sector when it comes to AI.
GDPR
We have the GDPR blueprint for a risk-based approach:
· Be fair;
· Be transparent; and
· Be accountable.
But what do the various regulators and sector guidance say about AI?
COMPETITION AND MARKETS AUTHORITY (CMA)
Companies most affected by the CMA’s approach will likely be those developing or deploying AI systems that interact with consumers or have the potential to distort competition, particularly tech giants.
The CMA wants them to be:
· Accountable;
· Fair; and
· Transparent - that is, preventing algorithmic bias, privacy infringements, and the manipulation of consumer behaviour through AI-driven recommendations.
The CMA encourages innovation while ensuring that companies adhere to competitive and ethical standards.
FINANCIAL CONDUCT AUTHORITY (FCA)
The FCA is pro-innovation and is also a user of AI. It wants:
· Fairness;
· Transparency; and
· Explainability.
It wants to develop best practice and inform potential future regulatory work. So, watch this space.
OFFICE OF COMMUNICATIONS (“OFCOM”)
Ofcom covers a wide array of topics, including communications, online safety, TV and radio broadcasting, and other media such as video-sharing platforms. It wants:
· Safety;
· Fairness;
· Accountability; and
· Contestability.
Under the Telecommunications (Security) Act 2021, Ofcom must ensure that telecom providers take appropriate and proportionate measures to identify, reduce and prepare for security risks.
Ofcom is also responsible for enforcing the Online Safety Act.
OFFICE OF GAS AND ELECTRICITY MARKETS (“OFGEM”)
Ofgem wants us to understand the potential risks from AI and to develop guidance and tools that make sure that the use of AI is:
· Fair;
· Ethical; and
· Transparent and explainable.
Ofgem would like the use of AI to result in fair market outcomes, effective oversight of AI systems, and appropriate redress for customers to contest negative outcomes.
Another interesting point is that they want the use of AI to be sustainable; however, think about the power required to keep AI servers running and how that stacks up against your ESG statements.
CIVIL AVIATION AUTHORITY (“CAA”)
If artificial intelligence and autonomy are to be accepted within the aviation ecosystem, or indeed within the CAA as a business or regulatory tool, the public, consumers, colleagues, and customers need to be able to TRUST it.
There has been considerable focus, especially in the ‘novel aerospace’ part of the aviation sector, on the use of AI as a means to higher levels of automation. There should be:
· No harm to people, things, or the environment*;
· No unfair treatment based on who you are, and freedom from bias;
· Accountability – someone should be responsible for the AI's actions; if a drone crashes, you should know who to hold accountable;
· Transparency – an understanding of how the AI works and why it decides as it does; and
· Explainability – the ability to challenge unfair AI decisions and to get help if you are harmed in some way.
*Reminds me of Asimov's three laws of robotics: a robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey orders given to it by human beings, except where such orders would conflict with the first law; and a robot must protect its own existence, as long as such protection does not conflict with the first or second law.
EQUALITY AND HUMAN RIGHTS COMMISSION (“EHRC”)
The EHRC is determined to ensure equality and human rights are central to the development and use of AI. It is working with regulators to explore fairness across regulatory remits. The principles will be incorporated into EHRC’s compliance and enforcement work.
THE SOCIETY OF AUTHORS
The SoA has written to major tech companies, including Microsoft, Meta and Google, demanding that they obtain licences and authorisation from rights holders before using their copyright-protected works to train or operate generative AI systems. The SoA asserts that the use of such works without consent constitutes copyright infringement under UK and international law, and rejects the defence that works which are available in a digital format and accessible online can be used without permission.
THE ROLE OF THE IN-HOUSE COUNSEL
So, as in-house counsel, we need to ensure that we keep to that commonality I mentioned.
It is our role to square the circle. To date, we've been telling our businesses to minimise data and delete anything unnecessary. Now the businesses are telling us that they are going to collect everything and keep everything because they need it for training.
Also, in some scenarios, it is going to be difficult to determine who is a Controller and who is a Processor.
Developers collect training data either directly through web scraping, indirectly from another organisation that has web-scraped the data itself, or through a mix of both approaches.
As part of complying with the lawfulness principle, developers need to ensure their processing:
(a) is not in breach of any laws; and
(b) has a valid lawful basis under UK GDPR.
If there is a risk to the rights and freedoms of individuals, conduct a Data Protection Impact Assessment (DPIA).
The lawful basis in all likelihood will be ‘legitimate interest’. For this purpose, you will need to show that:
1. The purpose of the processing is legitimate;
2. The processing is necessary for that purpose; and
3. The individual’s interests do not override the interest being pursued.
A couple of points to bear in mind when it comes to individuals' interests:
· People may lose control over their personal data because they are not informed; and
· Models can be used to generate inaccurate information about people that may cause harm and/or distress.
Developers must also demonstrate how the interest they have identified will be realised, and how the risks to individuals will be meaningfully mitigated, including individuals' access to their information rights.
PURPOSE LIMITATION
Purpose limitation requires organisations to have a clear purpose for processing any personal data before they start processing it:
· There must be a lawful basis for processing it; and
· The purpose must not be in breach of other laws, such as intellectual property or contract laws.
The generative AI model lifecycle involves several stages and having a specified purpose in each stage will allow you to understand the scope of each processing activity, evaluate compliance, and help evidence it.
ACCURACY
We need to ensure that there is clear communication between developers, deployers and end-users of models so that the final application of the model is appropriate for its level of accuracy.
Things to consider:
· Provide clear information about the statistical accuracy of the application, and easily understandable information about appropriate usage;
· Monitor user-generated content; and
· Provide information about the reliability of the output.
USER RIGHTS
Individuals have rights regarding their data, whether it is collected directly or indirectly.
Right to:
· Access;
· Erasure;
· Rectification;
· Restriction of processing; and
· Object to processing.
When the data is collected directly, be clear that it is being used for AI training.
When it is collected indirectly, you may be able to rely on the Article 14 exemption (disproportionate effort), but you will still need to take appropriate measures to protect individuals' rights and freedoms.
Despite this complexity, people's rights need to be respected throughout the AI lifecycle and supply chain, including during deployment; documenting your journey is therefore vital.
Remember that Article 22 of the GDPR gives data subjects the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning them or similarly significantly affects them.
If you are using AI in employment, clearly set out the purpose, investigate the risk appetite, and keep employees informed.
Interestingly, as of 5 July 2023, New York City has made it mandatory for any company operating and hiring in the city that uses AI and other machine-learning technology as part of its hiring process to perform an annual bias audit of its recruitment technology. Only time will tell whether this will be a sufficient tool to determine fairness, transparency and accountability, but for now, it appears that New York is leading the way.
Finally, not every AI system will require the approach we have discussed; some can be implemented in the same manner as acquiring any other software product.
TIPS
The legal repercussions of using AI-generated content that might contain copyrighted material remain undecided. Ongoing legal disputes and forthcoming policies may offer clarity, but currently, we're navigating the complex intersections of groundbreaking technology and undefined liability.
Whether you're employing generative AI for individual tasks or incorporating it into your business operations, you should be well-informed about indemnification.
Bonterms is an initiative aimed at streamlining the legal and contractual processes involved when companies want to use or offer cloud and potentially other tech services.
Twilio
Twilio, a company known for its communication automation services for businesses, created a label that shows the AI models in play, training methods, and optional features. It incorporates a "privacy ladder" to distinguish data usage for internal projects from training for a broader clientele, and to flag whether the data contains personal identifiers.
The label is intended to give consumers and businesses a more transparent and clear view into ‘what's in the box’ - how their data is being used - especially when it comes to training LLMs with vendors like AWS, Google, and OpenAI. These labels empower customers to make informed decisions about which AI-powered capabilities they are prepared to adopt.
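To make the idea concrete, here is a purely illustrative sketch in Python of the kind of record such a label might capture. The field names and values below are assumptions for illustration only; they are not Twilio's actual schema.

    # Hypothetical "AI nutrition label" record (illustrative only;
    # field names and values are assumptions, not Twilio's actual schema).
    ai_nutrition_label = {
        "feature": "AI-powered call transcription",   # the capability being offered
        "models_in_play": ["third-party large language model"],
        "training_method": "pre-trained by the vendor",
        "optional_feature": True,                      # can the customer switch it off?
        "privacy_ladder": {
            "data_used_for_internal_projects_only": True,
            "data_used_to_train_models_for_other_customers": False,
            "data_contains_personal_identifiers": False,
        },
    }

    # A deployer could surface this to customers before they opt in:
    ladder = ai_nutrition_label["privacy_ladder"]
    if ladder["data_used_to_train_models_for_other_customers"]:
        print("Note: your data may be used to train models offered to other customers.")

Presented this way, the label lets a customer see at a glance which rung of the privacy ladder a given AI feature sits on before deciding whether to adopt it.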