
OpenAI, X, Google and Meta will be able to use legitimate interests as a legal basis for processing personal data for developing and deploying AI models

AI models, especially large language models, are often trained on massive amounts of data harvested from the open Internet, with little distinction between personal and non-personal data.

 

AI companies opted for 'legitimate interest' as a legal basis for processing personal data for developing and deploying AI models

 

Ireland's Data Protection Commission (which supervises most large tech companies in the EU) asked the EDPB to issue an opinion to help develop a shared understanding of this complex issue at the EU level.

 

In its opinion, the EDPB addressed which legal bases AI model providers can rely on for processing personal data under the EU GDPR. AI companies, which have overwhelmingly opted for 'legitimate interest', have received a first blessing from EU data regulators.

 

Therefore, OpenAI, X, Google and Meta will be able to use legitimate interests as a legal basis for processing personal data for developing and deploying AI models provided that adequate mitigation measures are in place.

 

The EDPB said that relying on legitimate interest requires a three-step test:

·       Identify the interest: it must be lawful and clearly and precisely articulated;

·       Assess necessity: the amount of personal data processed must be proportionate to the purpose;

·       Balance: the interest must be weighed against the fundamental rights and freedoms of the people whose data is being processed.

 

In other words, a balancing test must be carried out before concluding that a legitimate interest exists. Importantly, for situations where the balancing test does not appear to be passed, the opinion provides a non-exhaustive list of mitigating measures that AI model providers can implement at the development and deployment stages.

These mitigating measures are those on which AI model providers are likely to concentrate to ensure the lawfulness of their data processing practices.


In the development stage, the opinion suggests measures such as pseudonymisation, an unconditional opt-out, and publishing the data collection criteria. For the deployment stage, the EDPB proposed measures such as preventing the storage of personal data and post-training techniques to remove personal data.
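To make the pseudonymisation measure concrete, here is a minimal, hypothetical sketch of what it can look like in practice: direct identifiers in a record are replaced with salted hashes, so the data can no longer be attributed to a person without the separately kept salt (the "additional information" in GDPR Article 4(5) terms). The field names and function are illustrative assumptions, not anything prescribed by the EDPB opinion.

```python
import hashlib
import secrets

# Illustrative pseudonymisation: replace direct identifiers with salted
# hashes. The salt must be stored separately, under access controls,
# for the result to count as pseudonymised rather than merely hashed.
def pseudonymise(record: dict, salt: str,
                 identifier_fields=("name", "email")) -> dict:
    out = dict(record)
    for field in identifier_fields:
        if field in out:
            digest = hashlib.sha256((salt + str(out[field])).encode()).hexdigest()
            out[field] = digest[:16]  # opaque token standing in for the identifier
    return out

salt = secrets.token_hex(16)  # kept apart from the pseudonymised dataset
record = {"name": "Jane Doe", "email": "jane@example.com", "comment": "Great post"}
print(pseudonymise(record, salt))
```

Note that this is pseudonymisation, not anonymisation: whoever holds the salt can still link tokens back to individuals, which is why the EDPB treats it as a mitigating measure rather than a way out of the GDPR.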


The EDPB also discussed the consequences of unlawful data processing in the AI model lifecycle.

If unlawful processing occurs during development, its impact on the lawfulness of later processing activities must be assessed. This is the cascade effect: the initial violation can make all subsequent uses of the model potentially unlawful.

 

When a deployer takes over a model, it needs to investigate the original source of the training data and whether proper consent or another legal basis existed, as it could face liability if it knowingly used a model trained on unlawfully processed data. It might also need to implement additional safeguards or obtain new consent.

 

If the original developer processes data unlawfully and then produces a genuinely and irreversibly anonymised version to remedy the situation, the anonymisation will not cure the original unlawful processing. However, at that point the anonymised data falls outside the scope of the GDPR.

