
The Age of Digital Deception: No AI Fraud Act



“Law and technology produce, together, a kind of regulation of creativity we’ve not seen before” - Lawrence Lessig


The "No Artificial Intelligence Fake Replicas and Unauthorized Duplications Act of 2024" (H.R. 6943), also known as the "No AI Fraud Act," was introduced in the United States House of Representatives on January 10, 2024. The bill aims to protect individuals’ rights by preventing the misuse of their voice and image, particularly in light of rapid advancements in artificial intelligence (AI). Supporters of the bill include the Recording Industry Association of America (RIAA), the Human Artistry Campaign, and a range of music companies, organizations, and artists.


Recent misuses of AI include deceptive songs mimicking artists' voices, fabricated celebrity endorsements, and false intimate images. For example, "Heart on My Sleeve" and "Demo #5" gained widespread attention for imitating the voices of Drake, The Weeknd, Bad Bunny, Justin Bieber, and Daddy Yankee. These incidents, along with the circulation of fabricated sexually explicit images of celebrities and online personalities, including Taylor Swift, highlight the urgent need for legal protection against AI-generated fakes.


Right of Publicity 

In the United States and Puerto Rico, individuals possess rights over their name, image, likeness, voice, and other identifying characteristics, enabling them to control their commercial use and prevent unauthorized exploitation. This is called the right of publicity. The right of publicity differs by jurisdiction, so each state and territory has its own provisions on the matter, especially regarding how long the right lasts. In Puerto Rico, under the "Ley del Derecho sobre la Propia Imagen" (Law No. 139-2011), this right lasts for up to 25 years after an individual's death, regardless of whether the image was used for commercial purposes during that person’s lifetime. This inconsistency is why Congress is working to standardize these rights by establishing a federal framework that safeguards citizens' rights to their likeness and voice against AI-generated fakes.


H.R. 6943

The bill introduces new terminology tailored to the digital realm. A "digital depiction" is a replica, imitation, or approximation of the likeness of an individual that is created or altered in whole or in part using digital technology. A "digital voice replica" is an audio rendering, created or altered in whole or in part using digital technology and fixed in a sound recording or audiovisual work, that includes replications, imitations, or approximations of an individual that the individual did not actually perform. Additionally, the bill defines a "personalized cloning service" as an algorithm, software, tool, or other technology, service, or device whose primary purpose or function is to produce one or more digital voice replicas or digital depictions of particular, identified individuals.


Most importantly, the legislation proposes standardizing the right of publicity for a period of 10 years after the death of the individual. After that 10-year period, the rights would terminate upon two consecutive years of non-exploitation or the death of all heirs. The bill also requires that individuals meet specific requirements to legally authorize the use of their digital depiction or digital voice replica for a new performance in an advertisement or expressive work, and it outlines the situations in which an individual or entity could face liability for damages arising from the unauthorized simulation of a voice or likeness. Legal remedies range from $5,000 to $50,000 per violation, or the actual damages suffered by the injured party or parties, plus punitive damages and attorney’s fees, in addition to any profits gained from the unauthorized use. Lastly, the bill cautions that an unauthorized user cannot rely on a public disclaimer as a defense, that is, a disclaimer asserting that the digital depiction, digital voice replica, or personalized cloning service was unauthorized or that the individual rights owner did not participate in its creation, development, distribution, or dissemination.


AI Digital Doubles in Film

AI is also expanding horizons within the film industry. Certain production studios do not grant actors ownership of their digital identities. The Creative Artists Agency (CAA) in Los Angeles is focusing on helping clients create digital duplicates that they can own and control. In line with this objective, CAA recently introduced the “CAA Vault,” a database that generates duplicates by scanning an actor’s body, movements, facial features, and voice. This tool could be used to put an actor’s face onto a stunt double’s body or to synchronize the actor’s mouth movements with dubbed dialogue.


This technology drives innovation in filmmaking, yet it also raises concerns in the aftermath of last year's SAG-AFTRA strike, during which actors successfully negotiated significant contractual protections against the use of AI. Common questions about the CAA Vault include who retains ownership of the digital replica, how payment is handled when digital replicas are used as doubles or in other projects, and how cybersecurity breaches could affect the stored duplicates. However, it's not all negative. Not only does AI give actors the opportunity to license their digital identities, but the technology also offers many benefits in production. For example, digital doubles can be used for reshoots or automated dialogue replacement (ADR). AI also expands role opportunities for individuals with disabilities, mitigates the risks associated with dangerous stunts, simplifies the filming of large-scale scenes that would otherwise be time-consuming, and reduces an actor’s time on set.


The Law is Trying to Keep Up!

The law frequently trails behind the rapid progress of technology. The recent flood of AI-generated content online has provoked curiosity as to how courts will analyze and decide future cases concerning AI and the right of publicity. The prevailing legal issue is how to balance freedom of speech against an individual's right over their name, image, and likeness, including digital likeness. In other words, an individual's interest in protecting their right of publicity must be weighed against users' rights to creative expression and self-expression.


While this legal terrain continues to evolve, one thing is certain: AI technology is here to stay and is already rapidly reshaping the entertainment industry. Although the bill has yet to become law, it represents a positive step forward, given the growing number of AI-related legal issues that Congress must address, issues likely to have far-reaching consequences. Deepfakes have already caused revenue losses for artists and confusion among consumers. And it's not just celebrities; individuals' privacy is also under threat. Because AI transcends state lines and international borders, a federal right of publicity is needed to establish uniformity and protect this right, no matter which state or territory you live in. On the same day the bill was introduced, Tennessee Governor Bill Lee announced the ELVIS Act, which focuses on protecting music industry professionals from AI misuse, further highlighting the pressing need for policy addressing the implications of this technology.


TIP: In Puerto Rico, the law requires explicit consent from individuals before their image or likeness is used for commercial purposes. Consent is typically obtained through written agreements or contracts, such as talent release forms, in which the individual grants permission for the specified use of their image. Clear and informed consent protects both the individual’s rights and the interests of those using the image.





 

About the Author


Natalia A. Ralat

Intellectual Property and Entertainment Attorney at The Sifre Group

Natalia A. Ralat Albors is an intellectual property and entertainment attorney at The Sifre Group dedicated to protecting the rights of creatives. Natalia is enthusiastic about novel issues transforming the intellectual property law landscape, such as artificial intelligence and digital technologies. 
