LLMs use the entirety of a copyrighted work for their training, which fails the “amount and substantiality” factor.
That factor is relative to what is reproduced, not to what is ingested. A company is allowed to scrape the web all it wants as long as it doesn't republish the content.
By their very nature, LLMs would significantly devalue the work of every artist, author, journalist, and publishing organization, on an industry-wide scale, which fails the “Effect upon work’s value” factor.
I would argue that LLMs devalue the author’s potential for future work, not the original work they were trained on.
Those two alone would be enough for any sane judge to rule that training LLMs does not qualify as fair use. But on top of that, you have OpenAI and other commercial AI companies offering the use of these models for commercial, for-profit purposes, which also fails the "purpose and character of the use" factor.
Again, that’s the practice of OpenAI, but not inherent to LLMs.
You could maybe argue that training LLMs is transformative…
It’s honestly absurd to try and argue that they’re not transformative.
That’s objectively false. It’s downloaded to the server, but it should never be redistributed to anyone else in full. As a developer, for instance, it’s illegal for me to copy code I find in a Medium article and use it in our software. I’m perfectly allowed to read that Medium article, learn from it, and then write my own similar code.
And Aereo should not have lost that suit. That’s an example of the US court system abjectly failing.
That’s what we’re debating, not a given.
Fair point, but it is objectively transformative.