OpenAI hacked: Intruder stole internal AI secrets in a data breach, NYT reports

In a major security breach early last year, a hacker infiltrated OpenAI's internal messaging systems and stole sensitive details about the company's artificial intelligence technologies, the New York Times reported on Thursday. The breach also exposed internal conversations among researchers and other employees, underscoring weaknesses in the company's internal security.

According to the Times, the hacker accessed an online forum where OpenAI employees discussed the company's latest technologies, but did not penetrate the core systems where OpenAI builds and houses its AI. Microsoft-backed OpenAI did not respond to media requests for comment.

“Early last year, a hacker gained access to the internal messaging systems of OpenAI, the maker of ChatGPT, and stole details about the design of the company’s A.I. technologies. The hacker lifted details from discussions in an online forum where employees talked about OpenAI’s latest technologies, according to two people familiar with the incident, but did not get into the systems where the company houses and builds its artificial intelligence,” the New York Times reported.

The company informed employees of the breach at an all-hands meeting in April last year and also notified its board of directors. Executives chose not to disclose the incident publicly, however, because no customer or partner information had been compromised.

The decision not to go public rested on the assessment that the hacker was a private individual with no known ties to a foreign government, and therefore did not pose a national security threat. Accordingly, OpenAI did not report the incident to the FBI or other federal law enforcement agencies.

For some OpenAI employees, the breach stoked fears that foreign adversaries such as China could steal AI technology that might eventually threaten U.S. national security. The incident also raised questions about how seriously OpenAI takes security and exposed internal disagreements over the risks posed by AI technologies.

Following the breach, Leopold Aschenbrenner, an OpenAI technical program manager focused on preventing harmful AI applications, sent a memo to the company’s board. In the memo, he argued that OpenAI was not taking sufficient measures to protect its secrets from foreign adversaries, including the Chinese government.

In May, OpenAI announced that it had disrupted five covert influence operations attempting to misuse its AI models for deceptive activity online. The disclosure added to growing concerns about the potential misuse of AI technology.

Amid these security issues, the Biden administration has been preparing to introduce measures to safeguard U.S. AI technology from adversaries such as China and Russia. Preliminary plans include placing strict regulations around the most advanced AI models, including ChatGPT, as reported by Reuters.

At a global meeting in May, 16 companies developing AI pledged to focus on the safe development of the technology, reflecting the increasing urgency among regulators to address rapid innovation and emerging risks in the AI sector, Reuters reported.

Founded in 2015 by Sam Altman, Elon Musk, and others, OpenAI began as a nonprofit research organization dedicated to advancing safe and beneficial artificial general intelligence (AGI). In 2019 it restructured around a capped-profit commercial entity, a pivotal shift in its trajectory. Despite internal turmoil in November 2023, including Altman's brief ouster and swift return, OpenAI continues to lead the fast-moving AI market, driven by the success of its flagship product, ChatGPT, launched in late 2022.

