Fair Use for Sale: How Copyright Became a Corporate Weapon
By late 2025, the legal firestorm surrounding generative AI music seemed to have fizzled out as quickly as it had ignited. The three major record labels had sued platforms like Suno and Udio for training their generative AI models on vast libraries of copyrighted recordings, but the platforms' supposedly principled fair use defense never reached a judge. Instead, Universal and Warner Music quietly backed away from their piracy arguments and converted their grievances into licensing partnerships, effectively turning a copyright war into a private revenue stream. This pivot reveals the playbook: fair use, once a cornerstone of copyright, remains the law in theory, but access to it has been auctioned off.
This isn’t a new phenomenon. It’s a pattern with a long paper trail. When the NYU Engelberg Center on Innovation Law & Policy held its “Proving IP” symposium in May 2019, top musicologists debating the Blurred Lines case played clips of the music they were analyzing to make their point. Predictably, YouTube’s Content ID, a $100 million automated enforcement system, flagged the video and killed the feed. When the Engelberg Center successfully challenged the takedown, the system simply forgot the ruling and flagged the same video again. And again.
Four years later, UMG targeted the same episode on Spotify. The Engelberg Center responded promptly, claiming fair use, and got confirmation that its response had been received. Spotify removed the episode anyway, then falsely claimed no response had ever been filed. To compound the insult, Spotify refused to disclose whether the takedown stemmed from a formal DMCA notice or its own proprietary audio-matching system, leaving the Engelberg Center legally blind and unable to pursue any remedy. Because the bot has no memory of past rulings, users are condemned to fight the same battle against a machine programmed to ignore the outcome. Call it industrialized, institutional amnesia.
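The failure mode here is structural, and it is easy to model: a matcher that persists fingerprints but never persists dispute outcomes will re-flag cleared content forever. The toy Python sketch below illustrates the point (all class and variable names are hypothetical; this is not any platform's actual system, only a minimal model of the amnesia problem and its obvious fix):

```python
class StatelessMatcher:
    """Flags any upload whose fingerprint matches a claimed work.

    Keeps no record of past dispute outcomes, so content cleared as
    fair use is flagged again on every rescan.
    """

    def __init__(self, claimed_fingerprints):
        self.claimed = set(claimed_fingerprints)

    def scan(self, upload_fingerprint):
        return upload_fingerprint in self.claimed


class MatcherWithDisputeMemory(StatelessMatcher):
    """Same matcher, but it remembers uploads cleared on appeal."""

    def __init__(self, claimed_fingerprints):
        super().__init__(claimed_fingerprints)
        self.cleared = set()

    def resolve_dispute(self, upload_fingerprint):
        # Persist the outcome of a successful fair use dispute.
        self.cleared.add(upload_fingerprint)

    def scan(self, upload_fingerprint):
        if upload_fingerprint in self.cleared:
            return False  # previously cleared; do not re-flag
        return super().scan(upload_fingerprint)


symposium_video = "fp:symposium-clips"  # hypothetical fingerprint

bot = StatelessMatcher({symposium_video})
print(bot.scan(symposium_video))    # True: flagged
# ...dispute won, but nothing is stored...
print(bot.scan(symposium_video))    # True: flagged again, amnesia

fixed = MatcherWithDisputeMemory({symposium_video})
fixed.scan(symposium_video)         # flagged once
fixed.resolve_dispute(symposium_video)
print(fixed.scan(symposium_video))  # False: the ruling sticks
```

The fix is a few lines of bookkeeping, which is rather the point: the systems stay stateless not because memory is hard, but because nothing obliges their operators to add it.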
The castle is real. But the gatekeeper is a bot.
The labels’ retreat into corporate double-speak didn’t happen in a vacuum. A single courtroom verdict had rewritten the rules of musical copyright, creating a precedent so sweeping that even the labels themselves weren’t sure how to use it. When the estate of Marvin Gaye successfully sued Robin Thicke and Pharrell Williams over the “feel” of a song rather than any demonstrable copying of melody or lyrics, it established that you could own the vibe of a song. The implications were immediate and chilling. If inspiration itself was now actionable, every songwriter was potentially liable for being influenced by every recording they’d ever heard.
Consider the case of Katy Perry’s “Dark Horse.” When YouTuber Adam Neely made a video defending Perry against a lawsuit from a Christian rapper, he used a sample of the other artist’s song to prove Perry hadn’t copied it. To avoid having to pay damages, Perry’s publisher, Warner Chappell, argued in court that the two songs were totally dissimilar. Yet at the same time, they used Content ID to claim Neely’s video as infringing on their own property. It was a contradiction they apparently hoped no one would notice: in court, the songs were different; in the algorithm, they were identical enough to own. When a company can’t distinguish its own property from its competition, the fact-intensive nature of fair use becomes a weapon for the powerful.
Nowhere is the privatization of fair use more explicit than in the Suno and Udio settlements. Both platforms defended themselves on fair use grounds. By late 2025, Universal Music and Warner Music had quietly settled with Udio, and Warner had settled with Suno, converting their lawsuits into licensing partnerships almost overnight. The labels’ pivot from litigation to partnership tells you everything you need to know about what these cases were really about: not protecting artists, but controlling the next revenue stream. Fair use, once again, never got its day in court.
This weaponization of copyright has spread far beyond the music industry, and the absence of any penalty for fraudulent claims has turned the copyright complaint system into a cudgel available to anyone willing to pay the filing fee. It’s a remarkably consistent pattern of abuse, regardless of who’s doing the abusing. Law enforcement officers played loud copyrighted music during interactions with the public, specifically to trigger automated filters and block the resulting recordings from being uploaded, not to protect intellectual property, but to protect themselves. From firms representing war criminals to peddlers of fraudulent medical cures, bad actors have learned that the fastest way to bury an inconvenient truth is to claim copyright over it. Tech giants have been accused of abusing the takedown system to suppress information about their own vulnerabilities, effectively exposing users to spyware under the guise of IP protection.
The copyright complaint form is filled out the same way whether you’re a war criminal or a record label.
This isn’t a bug that legislation is working to fix. It’s a feature that legislation is working to entrench.
The push for strict liability and automated “filternets,” exemplified by Europe’s Article 17, is often framed as a way to hold established platforms accountable for profiting from copyrighted material without compensating its creators. In reality, it does the opposite. By making automated filters a legal requirement, governments are granting a permanent monopoly to the giant corporations wealthy enough to build the machinery of censorship. Google’s $100 million investment in Content ID is a barrier to entry that no startup can hope to clear. In Europe, the laws can’t even be enforced without running headlong into the GDPR, the continent’s own privacy framework. The effect of these laws isn’t to deter tech monopolies. It’s to guarantee their existence.
Section 230 is routinely cast as Big Tech's best friend. In reality, it's the only thing protecting everyone else from Big Tech's worst instincts. Section 230 of the Communications Decency Act, a law so brief it fits in a tweet, shields platforms from legal liability for content posted by their users, protecting them from the kind of unwinnable nuisance suits that wealthy and powerful actors love to weaponize. When these protections are removed, platforms don't magically become "responsible." They become more ruthless. Under the DMCA, a platform that receives a takedown notice and fails to act on it loses its safe harbor and risks statutory damages of up to $150,000 per work infringed in cases of willful infringement. Faced with that math, platforms will always choose to silence the whistleblower, the organizer, and the marginalized.
Automated filters haven’t proven to be the precision tools their architects promised. They’re digital driftnets, indiscriminately sweeping up vital discourse and creative expression while letting the largest sharks pass through via private licensing deals. In a world where algorithms act as judge, jury, and executioner, fair use has become a luxury for the wealthy few.
If we continue to allow copyright law to be hollowed out into meaningless corporate theatre, can free expression truly survive the “safe harbor” of a tech monopoly?