A question that could also be phrased as: "how do I avoid ending up in legal action like the one currently underway in the United States targeting Google for web scraping?"
A class action lawsuit was recently lodged against Google in the Northern District of California's United States District Court.
The lawsuit pertains to claims of web scraping, encompassing both copyright and privacy concerns, which were allegedly used in the development of its AI tools, including Bard, Imagen, MusicLM, Duet AI, and Gemini.
According to the complaint, Google's updated privacy policy claims to grant these tools authorisation to use any content shared online to enhance and train their AI products even though that involves the scraping of both personal and copyrighted data.
Compared to the class actions targeting OpenAI, this action appears to focus even more squarely on the fundamental problem associated with generative AI tools.
THE FIRST REGULATION ON AI: THE ARTIFICIAL INTELLIGENCE ACT
The Artificial Intelligence Act is currently being negotiated between the European Parliament, the European Commission, and the Council of Ministers. Their goal is to finalise the wording of the Act by late 2023.
At this stage, the Act is expected to set out rules that vary depending on the level of risk associated with a generative AI system. These risk levels are expected to break down as follows:
Unacceptable risk (banned, with exceptions)
- Manipulating cognitive behaviour, like toys encouraging dangerous actions in children.
- Social scoring: categorising people by behaviour, socio-economic status, or personal traits.
- Real-time remote biometric systems, including facial recognition.
High risk (specific areas; systems must be registered in an EU database and assessed before being placed on the market)
- Biometric identification, critical infrastructure, education, employment, services access, law enforcement, migration, and legal assistance.
Generative AI platforms like Bard will therefore need to meet transparency standards: disclosing that content has been generated by AI, designing the model to prevent it from generating illegal content, and publishing summaries of the copyrighted data used for training.
(Source: European Commission)
It is inevitable that legislation in the field of AI will continue to develop. However, until the EU's upcoming regulation on artificial intelligence comes into force, we must make best use of the regulations and legal tools that already exist to help us safely navigate the many issues raised by the use of artificial intelligence.
In the meantime, there are some steps that you can take now to minimise the risks posed by AI:
- Create internal AI guidelines to avoid mishandling personal data, copyright violations, trade secret exposure, and breaches of confidentiality.
- Gather essential documentation: map the AI tools you use and the challenges they raise, review suppliers' documentation, revisit or create an Article 30 record (General Data Protection Regulation), perform risk assessments, and consider an impact analysis to enhance the safety of AI usage.
- Use data ethics tools to help you make more informed decisions amidst the challenges posed by artificial intelligence until more specific legal frameworks are established.
If this blog has raised any concerns regarding the use of or threat posed by AI, please contact us today.