Why Has The New York Times Sued Microsoft And OpenAI For ‘Billions’? What About The Collection Of Extensive Personal Data From The Internet?
The New York Times has taken legal action against OpenAI, the owner of ChatGPT, and Microsoft, alleging significant copyright infringement in the training of the ChatGPT language model. The lawsuit, which seeks "billions of dollars" in damages, contends that ChatGPT was trained on millions of New York Times articles without authorization. The core argument is that ChatGPT, now widely treated as a reliable information source, competes directly with the renowned newspaper by sometimes generating verbatim excerpts from articles that are normally accessible only through a paid subscription. This is not the first such dispute, however; the legal battles bring to light the many challenges surrounding intellectual property in advanced artificial intelligence and machine learning models. A parallel legal front has also opened, with OpenAI and its investor Microsoft facing a substantial class-action lawsuit over privacy, data harvesting, and the broader ethical implications of AI development.
The New York Times has filed a lawsuit against OpenAI, the owner of ChatGPT, and Microsoft, alleging copyright infringement in the training of the ChatGPT language model.
The legal action seeks “billions of dollars” in damages, asserting that ChatGPT was trained using “millions” of New York Times articles without proper authorization.
The lawsuit contends that ChatGPT, now viewed as a reliable source of information, competes with the newspaper and sometimes generates “verbatim excerpts” from its articles, inaccessible without a subscription.
The legal complaint argues that this usage enables readers to access New York Times content without paying, resulting in a loss of subscription revenue and advertising clicks for the newspaper.
The lawsuit also points out instances where Bing, powered by ChatGPT, produces results from a New York Times-owned website without proper attribution or referral links, impacting the newspaper’s income streams. Microsoft, having invested over $10 billion in OpenAI, is named as a defendant in the lawsuit.
The lawsuit, filed in a Manhattan federal court, discloses that the New York Times had previously approached Microsoft and OpenAI in April to seek a resolution to the copyright dispute but was unsuccessful.
One Of Many
This legal action follows a challenging period at OpenAI, marked by the brief dismissal and subsequent rehiring of co-founder and CEO Sam Altman.
The company is currently contending with multiple lawsuits, including a similar copyright infringement case filed in September by a group of authors including George R.R. Martin and John Grisham.
Comedian Sarah Silverman initiated legal action in July, and an open letter signed by authors Margaret Atwood and Philip Pullman that same month demanded compensation from AI companies for using their work.
OpenAI and Microsoft are also facing a lawsuit, along with GitHub, from a group of computing experts who claim their code was used without permission to train an AI named Copilot.
Moreover, various lawsuits have been brought against developers of generative AI, such as Stability AI and Midjourney, by artists claiming copyright infringement in January; as of now, none of these legal disputes have been resolved.
The Question Of Data Privacy
Ever since Sam Altman, the CEO of OpenAI, testified before a Senate hearing on AI oversight in May, questions regarding the regulation of artificial intelligence (AI) have been a subject of public discussion.
Altman emphasized the importance of governmental regulatory intervention to mitigate the risks associated with increasingly powerful AI models; yet despite his statements, and with the European Union's AI Act only a step towards regulation, no substantive regulatory framework has been established.
However, OpenAI and its major investor, Microsoft, are grappling with yet another substantial class-action lawsuit that essentially calls for court-imposed regulation.
The lawsuit, filed on June 28 in the U.S. District Court for the Northern District of California, alleges, among other things, that OpenAI, following its transition to a for-profit entity in 2019, adopted a strategy of clandestinely collecting extensive personal data from the internet, scraping virtually every piece of data exchanged online.
The complaint contends that OpenAI carried out this data harvesting without notifying or obtaining consent from the “hundreds of millions of internet users” affected.
According to the suit, OpenAI engaged in the continuous scraping of digital footprints, leading to the unjust earning of profits based on unauthorized harvesting of personal data.
Beyond accusations of widespread privacy violations, the lawsuit raises concerns about the existential risks associated with AI, echoing sentiments expressed by Altman himself.
The lawsuit alleges that OpenAI’s disregard for privacy laws is mirrored by its indifference to the potentially catastrophic risk posed to humanity. Notably, the complaint cites a statement from OpenAI’s CEO, Sam Altman, predicting that while AI could potentially lead to the end of the world, there would be great companies in the meantime.
Furthermore, the lawsuit accuses OpenAI of contributing to an “AI arms race,” suggesting that the company, along with other major tech firms, is ushering society into a scenario where over half of surveyed AI experts believe there is at least a 10% chance of a catastrophic crash that could result in widespread harm.
In addition to seeking damages, the complaint requests a “temporary freeze on commercial access to and development” of ChatGPT until OpenAI complies with a subset of the 11 regulatory options listed.
These options include implementing full transparency and accountability protocols, establishing an ‘AI Council’ to approve products before deployment, incorporating technological safety measures, and providing users with the ability to opt out of data collection.
The Viewpoint
The ongoing lawsuit against OpenAI stresses the urgent need for comprehensive and enforceable regulations in the AI sector.
The allegations of unauthorized data harvesting and privacy violations, if proven true, raise serious ethical concerns and highlight the potential misuse of AI technologies.
The lawsuit also brings to the forefront the broader existential risks associated with AI, as articulated by Altman himself. Regulatory intervention is crucial not only to address the immediate issues raised by the lawsuit but also to guide the responsible development and deployment of AI technologies.
Thus, striking a balance between technological innovation and ethical considerations is imperative to ensure that AI advancements benefit society while minimizing risks and safeguarding individual rights.
The Last Bit
The unfolding lawsuits against OpenAI, together with the broader discourse on AI regulation, signal a critical juncture in the responsible development and deployment of artificial intelligence technologies.
The allegations of privacy violations and data harvesting emphasize the pressing need for enforceable regulations to govern the ethical use of AI.
These lawsuits not only bring to the fore the immediate concerns regarding OpenAI’s practices but also amplify the broader existential risks associated with the unbridled advancement of AI, as acknowledged by industry leaders.
In the face of these legal challenges, the AI community must collectively engage in a thoughtful dialogue to establish robust and adaptable regulations that can address the complexities of evolving technologies.