OpenAI Continues Seeking to Stop AI Misuse; OpenAI Responds to Former Board Member Statements
OpenAI Continues Seeking to Stop AI Misuse
OpenAI has disclosed that, over the past three months, it disrupted five covert operations that attempted to misuse its models for deceptive ends.
According to the company, the operations were based in and backed by actors in Russia, China, and Iran; one also originated from a private company in Israel.
These groups used the models to generate articles and comments for the internet and to debug code for websites and bots, all in the service of spreading propaganda.
The banned users included accounts operating a Russian Telegram initiative dubbed "Bad Grammar" and accounts run by the Israeli company STOIC, which had been caught using OpenAI models to create articles and comments supporting Israel's current military campaign.
“In the last three months, we have disrupted five covert IO [influence operations] that sought to use our models in support of deceptive activity across the internet,” the company said. “As of May 2024, these campaigns do not appear to have meaningfully increased their audience engagement or reach as a result of our services.”
Although these are perhaps the most significant misuse attempts on the AI software to date, they are not the first: back in February, OpenAI, alongside Microsoft, banned hacker groups from various countries from using their services.
The AI giant continues its fight against unethical uses of artificial intelligence.
"OpenAI is committed to enforcing policies that prevent abuse and to improving transparency around AI-generated content," OpenAI stated. "That is especially true with respect to detecting and disrupting covert influence operations, which attempt to manipulate public opinion or influence political outcomes without revealing the true identity or intentions of the actors behind them."
OpenAI Responds to Former Board Member Statements
The shock of OpenAI CEO Sam Altman's firing last year was indeed loud. One of the decade's leading technology figures had suddenly been ousted, leaving many uncertain about OpenAI's direction.
A few days later, Altman was reinstated, leaving the board of directors in an awkward position.
Two former board members, Tasha McCauley and Helen Toner (both of whom left the board following Altman's return), are now voicing concerns over the company's direction, criticizing Altman's leadership and his approach to handling the technology.
The two say they have no regrets about trying to remove Altman, a move they hoped would put the company on a safer path, and they continue to worry about his handling of OpenAI.
The two have written that, since Altman's return, "the departure of senior safety-focused talent… bode[s] ill for the OpenAI experiment in self-governance."
OpenAI's board said it agreed with Toner and McCauley's view that AI requires a good degree of regulation, citing as an example the company's engagement with government officials on issues raised by generative AI products such as ChatGPT.
On Tuesday, the company's board formed a safety and security committee, led by board members, as OpenAI begins training its forthcoming AI model.
In Other News
Tech Stocks in the Mud; Bonds the Safe Way to Go This Friday
Radio Astronomy at Risk from Starlink? Some Think So
SINGAPORE—Recent “Super-Hacker’s” Arrest Raises Eyebrows
Thanks for reading this week's CodeLetter, same time next week ☮️.