Four days before OpenAI ousted CEO Sam Altman, several researchers at the company wrote a letter to the board warning of a powerful artificial intelligence (AI) discovery that, they said, could threaten humanity, people familiar with the matter told Reuters.
The contents of the letter, which have not been made public, and the AI algorithm were key developments before the board ousted Altman, the face of generative AI at the company, the two sources said. Before his triumphant return on Tuesday, more than 700 employees had threatened to resign and join Microsoft in solidarity with their ousted leader.
Sources pointed to the letter as one factor on a longer list of board grievances that led to Altman’s firing. Among the complaints were concerns about commercializing technological advances before their consequences are understood. Reuters was unable to review a copy of the letter, and the researchers who wrote it did not respond to requests for comment.
Reuters contacted OpenAI, which declined to comment but acknowledged, in an internal message sent to employees, the existence of a project called Q* (read “Q-Star”) and a letter addressed to the board before the weekend’s events, one source said. An OpenAI spokesperson said the message, sent by longtime executive Mira Murati, alerted employees to certain media stories without commenting on their accuracy.
Some people at OpenAI believe Q* could represent a breakthrough in the company’s pursuit of what is known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as autonomous systems that outperform humans at most economically valuable tasks.
Given vast computing resources, the new model was able to solve certain mathematical problems, said the person, who requested anonymity because they were not authorized to speak on behalf of the company. Although the model could only solve mathematics at the level of primary-school students, the fact that it passed such tests made researchers very optimistic about Q*’s future success, the source said. Reuters was unable to independently verify the Q* capabilities claimed by the researchers.
The veil of ignorance
Researchers consider mathematics a frontier in the development of generative AI. Currently, the technology is good at writing and translating language by statistically predicting the next word, and answers to the same question can vary widely. But gaining the ability to do mathematics, where there is only one correct answer, would imply that AI has greater reasoning capabilities, closer to those of human intelligence. According to AI researchers, this could be applied, for example, to new scientific research.
Unlike a calculator, which can perform only a limited number of operations, such an artificial intelligence could generalize, learn and understand. In their letter to the board, the researchers highlighted the capabilities of the AI and its potential danger, the sources said, without specifying the exact safety concerns raised in the letter. Computer scientists have long debated the danger posed by highly intelligent machines, for instance whether they might decide that destroying humanity is in their interest.
The researchers also highlighted the work of a team of “AI scientists,” whose existence was confirmed by multiple sources. The group, formed by combining the earlier “Code Gen” and “Math Gen” teams, was exploring how to optimize existing AI models to improve their reasoning and ultimately carry out scientific work, one of the people said.
Altman led the effort to make ChatGPT one of the fastest-growing software applications in history and attracted the investment, and computing resources, from Microsoft needed to move closer to AGI. In addition to announcing a series of new tools at an event this month, Altman said last week, at a summit of world leaders in San Francisco, that he believed major advances were on the horizon.
“Four times in the history of OpenAI, most recently in the past two weeks, I have had the opportunity to be in the room when we pushed back the veil of ignorance and pushed forward the frontier of discovery. Getting to do that is the professional honor of a lifetime,” he said at the Asia-Pacific Economic Cooperation summit. A day later, the board fired Altman.
Translated by: Marta Leite Ferreira