The news: OpenAI, Microsoft, and Microsoft's subsidiary GitHub have been named in a class-action lawsuit alleging that Copilot, GitHub's AI code-generation tool, violates US copyright law.
- Programmer and plaintiff Matthew Butterick, represented by the Joseph Saveri Law Firm, filed the suit in the US District Court for the Northern District of California in San Francisco on Friday, per The Register.
- The suit accuses the three companies of piracy on a massive scale because Copilot, a generative AI system, was trained on a trove of licensed open-source code scraped from the internet, with no credit given to the authors.
- “This is the first step in what will be a long journey,” the lawyers said in a statement reported by The Verge. “As far as we know, this is the first class-action case in the US challenging the training and output of AI systems. It will not be the last. AI systems are not exempt from the law.”
The lust for data: Tech’s appetite for consumer data has morphed into a hunger for copyrighted data. Even as companies cut budgets for moonshot projects, investment is flooding into AI models that depend on reams of data for training.
- Generative AI, the technology underpinning Copilot, has spurred a flurry of interest in its ability to create new things: code, art, text, videos, and more.
- Companies like Microsoft, Google, Shutterstock, and Stability AI are flocking to the technology’s revenue potential, which Sequoia Capital expects could generate “trillions of dollars of economic value.”
- Yet the economic potential rests on human contributions, and the question for the court is whether copyright laws will evolve to protect intellectual property when AI is the pirate.
For some, AI is worth sacrifices: The lawsuit brings into focus tech companies’ eagerness to commercialize artificial creators built by AI researchers, many of whom are going full throttle on the technology and view ethics concerns as stumbling blocks to innovation.
- That perspective is rooted in the push toward the holy grail of artificial general intelligence (AGI), which some researchers now believe is more achievable than previously thought and capable of solving problems like cancer, aging, and climate change.
- Yet there’s an opposing belief that AI has potential for harm, and in the case of Copilot, the court is unlikely to ignore effects on workers who contributed crucial data.
- In the backdrop of the fight is a much higher-stakes conflict: a tech cold war between the US and China centered on AI.
- How individual nations choose to regulate AI, and whether those regulations spur or hinder innovation, could have global repercussions and factor into the outcome of legal cases like this one.