Joseph Gordon-Levitt speaks onstage during the ACLU SoCal Bill of Rights Dinner. (Monica Schipper/Getty Images)

A long time ago, kings owned all the land, while serfs worked that land without owning anything. Back then, if a serf had said, "Hey, I think this little plot of land where I built my house and farm my crops should belong to me," he would have been laughed at. "Oh yeah, how's that going to work?" the king would have asked. "Is every one of you going to own your own little plot of land? Will you little people be able to buy and sell land to one another? How are you going to keep track of who owns what? Obviously, none of this is doable."

In today's increasingly digital world, data is becoming as valuable as land. And the lords of Silicon Valley don't want us owning our data any more than the old kings wanted serfs owning their land.

Last week at the questionably titled "Winning the AI Race Summit" in Washington, D.C., President Donald J. Trump was talking about whether big tech companies should have to share the wealth with all the people whose skill, talent and labor contribute to the value of their extremely lucrative AI products. "You just can't do it," said Mr. Trump, "because it's not doable."

I consider myself an extremely lucky artist. I've gotten to be a part of some incredible creative projects, but what I actually feel luckiest about is the people I've gotten to collaborate with.
Making things together with my fellow passionate artists - whether "professional" or "unestablished," and whether "above the line" or "below" - is truly one of the great joys of my life. So you might assume I'd hate the very idea of using technology to do creative things that in the past could only be done "manually" by humans. But that isn't the case. I don't have a problem with AI as a technology; I think some of the new creative tools are inspiring. However, I believe we all have an urgent problem with today's big AI companies' unethical business practices.

The truth is that today's GenAI couldn't generate anything at all without its "training data" - the writing, photos, videos and other human-made things whose digital 1s and 0s get algorithmically crunched up and spit out as something new. For more than half a decade now, AI companies have been scraping up massive amounts of this content without asking permission and without offering compensation to the people whose creations are indispensable to this new technology.

Silicon Valley's justification for what I believe is a clear case of theft - one Mr. Trump echoed - is that a large language model (LLM) is no different from a person who, for example, reads a book and takes inspiration from it. But this comparison is not only inaccurate; it's dystopian and anti-human. These tech products are not people. And our laws should not protect their algorithmic data-crunching the way we protect human ingenuity and hard work.

Enter Republican Sen. Josh Hawley and Democratic Sen. Richard Blumenthal (to thunderous applause), who introduced the AI Accountability and Personal Data Protection Act just last week as well. This new legislation would bar AI companies from training on copyrighted works and allow people to sue over the use of their personal data or copyrighted works without consent. In stark contrast to Mr. Trump's Silicon Valley bootlicking summit, these two lawmakers from both sides of the aisle are standing up for working Americans against the giants of the tech industry. We should all hope their bill passes.

There are also glimmers of hope coming from the judiciary. In contrast to Mr. Trump's comments, the White House's official AI Action Plan doesn't address the question of training data and intellectual property, and administration officials have said it should be left up to the courts. Now, a few weeks ago, Mark Zuckerberg's Meta declared victory on the issue when a federal court ruled against a group of authors who had sued for copyright infringement. But in fact, the judge in that case said the authors probably lost only because their lawyers made the wrong argument about the legal framework of fair use. In his ruling, Judge Vince Chhabria wrote: "No matter how transformative LLM training may be, it's hard to imagine that it can be fair use to use copyrighted books to develop a tool to make billions or trillions of dollars while enabling the creation of a potentially endless stream of competing works that could significantly harm the market for those books."

So, if I were Zuck, I wouldn't be celebrating too hard yet. There are plenty m