OpenAI's artificial intelligence (AI) model o3 recently helped a cybersecurity researcher uncover a zero-day security vulnerability in Linux. According to the researcher, the flaw was found in the Linux kernel's SMB server implementation, known as ksmbd. The previously unknown vulnerability is said to be difficult to spot because it involves multiple users or connections interacting with the system at the same time. The bug is now tracked as CVE-2025-37899, and a fix has already been released.
OpenAI's o3 finds a zero-day
Using AI models to find zero-day bugs (flaws that are previously unknown and likely not yet exploited) is still relatively rare, despite the technology's growing capabilities. Most researchers still prefer to discover these security flaws through traditional code auditing, which can be a painstaking way to analyze a large codebase. Researcher Sean Heelan detailed how the OpenAI model helped him uncover the flaw with relative ease in a blog post.
Interestingly, this bug was not the researcher's original focus. Heelan was testing the AI's ability against a different bug (CVE-2025-37778), described as a Kerberos authentication vulnerability. That bug also falls into the "use-after-free" category, meaning one part of the system frees something from memory while other parts still try to use it afterwards. This can lead to crashes and security issues. The AI model found the flaw in eight out of 100 runs.
Once Heelan confirmed that o3 could find a known security bug in a large chunk of code, he decided to feed the AI model the entire session setup command handler instead of a single function. That file contains roughly 12,000 lines of code and deals with several different types of requests. The analogy would be handing an AI a novel and asking it to find one specific typo, except this typo can crash a computer.
When o3 was run 100 times over this full file, it found the previously known bug only once. Heelan acknowledges the drop in performance but highlights that the AI was still able to find the bug, which is a notable achievement. More importantly, in other runs the OpenAI model flagged an entirely different bug, one that was previously unknown and that the researcher had missed.
The new security flaw was of the same nature, but it affected the SMB logoff command handler. This zero-day vulnerability likewise involves the system trying to access already-freed memory; here, however, the problem is triggered when a user logs out or ends a session.
According to o3's report, the bug could crash the system or allow attackers to run code with deep access to the system, making it a major security concern. Heelan highlighted that o3 was able to understand a hard bug in a real-world scenario and explained the vulnerability clearly in its report.
Heelan added that o3 is not perfect and produces a lot of noise (a high ratio of false positives to true positives). Still, he found that the model approaches bug hunting more like a human would, unlike traditional security tools, which work in a rigid, predefined way.