EU boosts oversight of leading tech firms over generative AI concerns ahead of upcoming elections

The European Commission has sent a series of formal requests for information (RFIs) to Google, Meta, Microsoft, Snap, TikTok, and X about how they’re handling risks related to the use of generative AI. The requests are being made under the Digital Services Act (DSA), the bloc’s rebooted e-commerce and online governance rules.

Eight Platforms Designated as Very Large Online Platforms (VLOPs)

The eight services involved – Bing, Facebook, Google Search, Instagram, Snapchat, TikTok, YouTube, and X – are designated as very large online platforms (VLOPs) under the regulation (Bing and Google Search, strictly speaking, as very large online search engines, or VLOSEs). This means they’re required to assess and mitigate systemic risks, in addition to complying with the rest of the rulebook.

Commission’s Request for Information

In a press release, the Commission said it’s asking the companies to provide more information on their respective mitigation measures for risks linked to generative AI on their services. These include:

  • Hallucinations: Instances where generative AI produces false or fabricated information
  • Viral dissemination of deepfakes: The creation and spread of synthetic media that can be used to manipulate public opinion or deceive users
  • Automated manipulation of services: The use of generative AI to create content that can mislead voters, such as fake news articles or manipulated images

The Commission is also requesting information on the platforms’ strategies for:

  • Identifying and removing harmful content generated by AI
  • Preventing the spread of deepfakes and other forms of synthetic media
  • Ensuring transparency and accountability in the use of generative AI

EU’s Strategy to Regulate Generative AI

The Commission’s strategy for regulating generative AI is twofold. First, it is applying pressure indirectly through larger platforms (which may act as amplifiers and/or distribution channels for AI-generated content) and via self-regulatory mechanisms such as the Code of Practice on Disinformation.

Second, it is relying on the DSA’s regulatory framework for VLOPs, which requires them to implement measures to mitigate the risks associated with generative AI. These include:

  • Content moderation: Platforms must ensure that their content moderation policies are effective in identifying and removing harmful content generated by AI
  • Transparency and accountability: Platforms must provide transparency into their use of generative AI, including information on how they identify and remove harmful content
  • Security measures: Platforms must implement security measures to prevent the spread of deepfakes and other forms of synthetic media

What’s Next?

The Commission will review the responses to its RFIs and may take further action if necessary. This could include:

  • Enforcement actions: The Commission may take enforcement actions against platforms that fail to comply with the DSA
  • Legislative changes: The Commission may propose legislative changes to strengthen the DSA and provide greater clarity on how it applies to generative AI

Stay tuned for further updates as this story develops.