A bill that would require developers of artificial intelligence to disclose the use of copyright holders’ works in training their models would inappropriately threaten the tech industry, according to the Chamber of Progress.
“The TRAIN Act isn’t ready for the rails, and certainly shouldn’t leave the station until we get more clarity from the courts. We don’t need new regulations for what may prove perfectly legal conduct,” Adam Eisgrau, senior director of AI, creativity, and copyright policy at the Chamber of Progress, said in a Nov. 26 release.
The Chamber of Progress is supported by AI investors and developers, including Meta and Google.
Eisgrau was commenting on the Transparency and Responsibility for Artificial Intelligence Networks (TRAIN) Act, which was introduced Nov. 25 by Sen. Peter Welch (D-VT) and is endorsed by the recording industry and performers’ unions.
The bill would enable copyright holders to subpoena generative AI companies for training records “sufficient to identify with certainty” whether their works were used, according to text of the legislation.
“Failure to comply with a subpoena creates a rebuttable presumption that the model developer made copies of the copyrighted work,” according to a fact sheet on the bill released by Welch’s office.
Welch’s release framed the bill within the context of a broader need for transparency in AI-training data for consumer protection, citing his proposal of the Artificial Intelligence Consumer Opt-In, Notification Standards, and Ethical Norms for Training (AI CONSENT) Act. He also pointed to Nov. 14 testimony before the Judiciary Committee from Register of Copyrights Shira Perlmutter, who indicated that a report due from her office at the end of the year will address the importance of transparency.
“I love that the TRAIN Act is addressing one of the largest barriers individuals have in protecting their personal data or creative work from theft by AI: transparency,” Calli Schroeder, senior counsel for the Electronic Privacy Information Center, told Inside AI Policy, adding, “I’m a big fan.”
The bill is tailored to specifically address copyright holders’ concerns, with the Welch fact sheet noting, “only training material with [copyright holders’] copyrighted works need be made available” and that “subpoenas are granted only upon a copyright owner’s sworn declaration that they have a good faith belief their work was used to train the model, and that their purpose is to determine this to protect their rights.”
“If your work is used to train A.I., there should be a way for you, the copyright holder, to determine that it’s been used by a training model, and you should get compensated if it was. We need to give America’s musicians, artists, and creators a tool to find out when A.I. companies are using their work to train models without artists’ permission,” Welch said in the Nov. 25 release. “As A.I. evolves and gets more embedded into our daily lives, we need to set a higher standard for transparency.”
But Eisgrau argued that such questions are actively being litigated under the existing copyright regime, which lays out general factors for determining the “fair use” of copyrighted work, including whether the use is for a commercial or nonprofit purpose.
“Dozens of courts are weighing whether training AI models on content from the internet, including copyrighted works, is protected under the fair use doctrine,” he said. “If it is, no advance permission is needed under this flexible doctrine that’s fueled tech innovation for almost 50 years.”