Google's AI bug hunters sniff out two dozen-plus code gremlins that humans missed

OSS-Fuzz is making a strong argument for LLMs in security research


Google's OSS-Fuzz project, which uses large language models (LLMs) to help find bugs in code repositories, has now helped identify 26 vulnerabilities, including a critical flaw in the widely used OpenSSL library.

The OpenSSL bug (CVE-2024-9143) was reported in mid-September and fixed a month later. Some, but not all, of the other vulnerabilities have also been addressed.

Google believes its AI-driven fuzzing tool – which feeds unexpected or random data into software to expose errors – found a flaw that human-driven fuzzing would likely never have caught.

"As far as we can tell, this vulnerability has likely been present for two decades and wouldn't have been discoverable with existing fuzz targets written by humans," said Oliver Chang, Dongge Liu, and Jonathan Metzman of Google's open source security team in a blog post.

If that's correct, security research henceforth really ought to involve AI – for fear that threat actors have already adopted it, and are finding flaws that would be invisible to the AI-deprived.

Another example cited by Google's security team, a bug in the cJSON project, is similarly said to have been spotted by AI and missed by a human-written fuzzing test.

So the value of AI assistance for security professionals appears to be substantial. Earlier this month, the Chocolate Factory announced that, for the first time, a separate LLM-based bug-hunting tool called Big Sleep had identified a previously unknown exploitable memory-safety flaw in real software.

And in October, Seattle-based Protect AI released an open source tool called Vulnhuntr that used Anthropic's Claude LLM to find zero-day vulnerabilities in Python-based projects.

The OSS-Fuzz team introduced AI-based fuzzing in August 2023 in an effort to improve fuzzing coverage – the proportion of a codebase that testing actually exercises.

The process of fuzzing involves drafting a fuzz target – "a function that accepts an array of bytes and does something interesting with these bytes using the API under test" – then dealing with any compilation issues, running the fuzz target to see how it performs, making corrections, and repeating the process to see whether crashes can be traced to specific vulnerabilities.
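For readers unfamiliar with the format, the sketch below shows what such a fuzz target looks like in the libFuzzer style that OSS-Fuzz builds and runs. It is a minimal, hypothetical example: parse_header stands in for whatever API is under test, and is not a real library function.

```c
// Minimal libFuzzer-style fuzz target of the kind OSS-Fuzz runs.
// Build sketch (clang): clang -g -fsanitize=fuzzer,address fuzz_target.c lib.c
#include <stddef.h>
#include <stdint.h>

// Hypothetical API under test - stands in for the real library function.
int parse_header(const uint8_t *buf, size_t len);

// libFuzzer calls this entry point repeatedly with mutated inputs and
// flags any crash, hang, or sanitizer report those inputs trigger.
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
    parse_header(data, size);
    return 0;  // libFuzzer reserves non-zero return values.
}
```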

Initially, OSS-Fuzz handled the first two steps: 1) Drafting an initial fuzz target; and 2) Fixing any compilation issues that arise.

Then, at the beginning of 2024, Google open sourced the LLM-based fuzzing framework and has since been trying to improve how the software handles the subsequent steps: 3) Running the fuzz target to see how it performs, and fixing any obvious mistakes causing runtime issues; 4) Running the corrected fuzz target for a longer period of time, and triaging crashes to determine their root causes; and 5) Fixing vulnerabilities.
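To make step 4 concrete, here is a purely illustrative sketch of the sort of root cause a triaged crash can trace back to – a hypothetical off-by-one write that AddressSanitizer flags as a heap-buffer-overflow the moment the fuzzer finds an input reaching it. It is not taken from OSS-Fuzz, OpenSSL, or cJSON.

```c
// Hypothetical buggy routine illustrating the kind of root cause
// crash triage pins down. Not from any project mentioned above.
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

char *copy_name(const uint8_t *data, size_t size) {
    char *out = malloc(size);   // Bug: no room reserved for the NUL terminator.
    if (out == NULL)
        return NULL;
    memcpy(out, data, size);
    out[size] = '\0';           // Off-by-one: writes one byte past the allocation.
                                // Built with -fsanitize=address, this aborts with
                                // a heap-buffer-overflow report naming both the
                                // bad write and the allocating call site - exactly
                                // the evidence a triager needs.
    return out;
}
```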

According to Google, its LLM can now handle the first four steps of the developer's fuzzing process, and the plan is to tackle the fifth shortly.

"The goal is to fully automate this entire workflow by having the LLM generate a suggested patch for the vulnerability," said Chang, Liu, and Metzman. "We don't have anything we can share here today, but we're collaborating with various researchers to make this a reality and look forward to sharing results soon." ®
