In our submission, we argue that the EDPB’s opinion must take a firm approach to prevent people’s rights from being undermined by AI. We focus on the following issues in particular:
- The fundamentally general nature of AI models creates problems for the legitimate interests test;
- The risks of an overly permissive approach to the legitimate interests test;
- Web scraping as ‘invisible processing’ and the consequent need for transparency;
- Innovative technology and people’s fundamental rights;
- The (in)adequacy of filters and other similar safeguards; and
- Opting out of opt-outs.
The approach the EDPB takes towards generative AI models may have important downstream repercussions for the future of people’s information rights online. If it strikes the wrong balance on these emerging practices, people stand to have their rights under the GDPR further violated by other new and emerging technologies.
That’s why the EDPB should take a strong position on generative AI models. It is unacceptable to rely on untested, unproven and uncertain additional technologies (such as ‘machine unlearning’) to try to fulfil people’s rights.
Privacy by design and by default should be implemented so as not to place the onus on individuals to take action to prevent invasive practices.
More/Source: http://privacyinternational.org/advocacy/5495/pi-submission-edpb-ai-models