Anthropic CEO Dario Amodei said that today’s AI models hallucinate, or make things up and present them as though they were true, at a lower rate than humans do, during a press briefing at Anthropic’s first developer event, Code with Claude, in San Francisco on Thursday.
Amodei said all this in the midst of a larger point he was making: that AI hallucinations are not a limitation on Anthropic’s path to AGI, meaning AI systems with human-level intelligence or better.
“It really depends on how you measure it, but I suspect that AI models probably hallucinate less than humans, but they hallucinate in more surprising ways,” Amodei said, responding to TechCrunch’s question.
Anthropic’s CEO is one of the most bullish leaders in the industry on the prospect of AI models achieving AGI. In a widely circulated paper he wrote last year, Amodei said he believed AGI could arrive as soon as 2026. During Thursday’s press briefing, the Anthropic CEO said he was seeing steady progress toward that goal, noting that “the water is rising everywhere.”
“Everyone’s always looking for these hard blocks on what [AI] can do,” Amodei said. “There’s no such thing.”
Other AI leaders believe hallucination presents a major obstacle to achieving AGI. Earlier this week, Google DeepMind CEO Demis Hassabis said that today’s AI models have too many “holes,” and get too many obvious questions wrong. For example, earlier this month, a lawyer representing Anthropic was forced to apologize in court after the firm used Claude to create citations in a court filing, and the AI chatbot hallucinated, getting names and titles wrong.
Amodei’s claim is difficult to verify, largely because most hallucination benchmarks pit AI models against each other; they don’t compare models to humans. Certain techniques do seem to help lower hallucination rates, such as giving AI models access to web search. Separately, some AI models, such as OpenAI’s GPT-4.5, have notably lower hallucination rates on benchmarks compared to earlier generations of systems.
However, there is also evidence suggesting that hallucinations are getting worse in advanced reasoning models. OpenAI’s o3 and o4-mini models have higher hallucination rates than OpenAI’s previous-generation reasoning models, and the company doesn’t really understand why.
Later in the press briefing, Amodei pointed out that TV broadcasters, politicians, and humans in all types of professions make mistakes all the time. The fact that AI makes mistakes too is not a knock on its intelligence, according to Amodei. However, the Anthropic CEO acknowledged that the confidence with which AI models present untrue things as facts could be a problem.
In fact, Anthropic has conducted a fair amount of research on the tendency of AI models to deceive humans, a problem that seemed especially prevalent in Claude Opus 4. Apollo Research went as far as to suggest that Anthropic should not have released an early version of that model. Anthropic said it had come up with some mitigations that appeared to address the issues Apollo raised.
Amodei’s comments suggest that Anthropic may consider an AI model to be AGI, or equal to human-level intelligence, even if it still hallucinates. An AI that hallucinates may fall short of AGI by many people’s definition, though.