
Under the No Fakes Act, artificial intelligence regulation would require major digital platforms to act against AI-generated falsehoods.

No Fakes Act Would Impose Liability for Synthetic Content and Establish Protections Against Voice and Likeness Manipulation

Artificial Intelligence Regulations Prompting Social Media Sites to Address Deepfakes


In the digital age, the line between reality and artificial intelligence (AI) is increasingly blurred, leading to concerns about unauthorized use of a person's voice, image, and likeness in AI-generated content. To address this issue, a federal bill known as the No Fakes Act was introduced in 2023 by a bipartisan group of U.S. senators.

The No Fakes Act aims to set clear legal boundaries for the use of a person's voice, image, and likeness, serving as a foundation for policy and enforcement. Its scope extends beyond digital replicas to the tools that could produce unauthorized ones. However, experts caution that the bill's broad and vague definitions could suppress protected speech and place heavy burdens on developers and platforms.

Enforcement of the No Fakes Act is proving to be challenging. Detection tools remain immature, some AI-generated content carries no visible markers, and the speed and volume of uploads make it difficult to prevent the spread of unauthorized content. Platforms already struggle under existing laws, as seen in the over-censorship that broad filtering and takedown demands can produce.

Major tech platforms, such as Meta, Twitch, Spotify, TikTok, and YouTube, are responding to the increasing prevalence of AI-generated content in various ways. While none has publicly stated a position on the No Fakes Act specifically, industry patterns and the legislative context indicate that platforms are grappling with how to balance enforcement against illegal AI-generated content with protecting free expression and innovation.

Meta, for instance, plans to expand its "Imagined with AI" labels to video and audio on Facebook and Instagram. Spotify removed AI-generated songs that copied the voices of major artists like Drake and The Weeknd in 2023, and updated its terms of service to prohibit content that mimics real individuals without permission. TikTok has joined the Coalition for Content Provenance and Authenticity (C2PA), an industry initiative aimed at building standards for tracking digital content origins.

The Human Artistry Campaign, supported by organizations such as the RIAA, SAG-AFTRA, and Universal Music Group, focuses on ensuring that AI tools are used in ways that support artists rather than replace or exploit them. The Campaign promotes seven key principles, including the need to get permission before using someone's voice or image, to credit original creators, and to ensure artists are paid fairly.

As the No Fakes Act progresses, civil liberties and technology experts voice concerns about its overly broad and punitive approach, which risks chilling online speech and innovation without effectively addressing the underlying harms of AI-generated deepfakes and unauthorized synthetic media. Many advocate a more balanced approach that protects rights without broad censorship.

In addition to the No Fakes Act, state laws against revenge porn (including manipulated content) and the federal Take It Down Act impose removal obligations on platforms within tight deadlines, further complicating enforcement and incentivizing conservative moderation practices.

In the wider landscape, talent agencies such as Creative Artists Agency (CAA) are helping clients manage digital risks alongside traditional career support, including monitoring for unauthorized use of a client's voice, face, or performance online. Record labels are negotiating licensing deals with AI music companies to define how copyrighted music can be used in AI-generated content.

In conclusion, the No Fakes Act, while intended to address the legal gaps regarding AI-generated content, faces criticism for its broad and vague definitions and the potential for over-censorship. As the digital landscape continues to evolve, it is crucial to find a balance between protecting individuals' rights and fostering innovation and free expression.


