Daniel Stenberg’s recent post highlights the growing problem of AI-generated security vulnerability reports plaguing open source projects like curl. He shares two examples of bogus reports likely produced with tools such as Google Bard and ChatGPT. Though convincing at first glance, on closer inspection the reports misunderstand the code and fabricate security flaws.
Sifting through these AI-crafted reports takes time away from curl’s small development team. As Stenberg notes, every report must be investigated by a human, and security issues often take priority over other tasks. Even a report that is ultimately bogus costs effort, and the better-worded AI submissions require more back-and-forth before their falsehoods become clear.
Stenberg rightly predicts that the proliferating use of generative AI will only exacerbate the problem. As these tools improve, AI-generated text will become harder to detect. And with bug bounty rewards at stake, some will keep firing off AI-written vulnerability reports in hopes of an easy payout.
Open source projects run on good-faith contributions of time, effort, and expertise. Bogus AI reports undermine this collaborative spirit and divert maintainers’ energy away from real improvements. Stenberg is right to call for better tools and processes to curb this behavior. Projects like curl cannot afford death by a thousand AI papercuts. Clear guidelines, improved detection, and consequences for abuse will be needed to stop this distraction.
The curl project is a popular open source command-line tool (curl) and library (libcurl) for transferring data with URLs. First released under the curl name in 1998, it supports many internet protocols and is used in countless applications and systems. With only a few core developers, curl relies on community contributions and responsible vulnerability disclosure.
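For readers who only know the command-line tool, here is a minimal sketch of what “transferring data with URLs” looks like through libcurl’s easy interface; the URL and error handling are chosen purely for illustration:

```c
#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
    /* Create an "easy" handle that represents a single transfer. */
    CURL *curl = curl_easy_init();
    if (!curl)
        return 1;

    /* Fetch the curl project's home page; with no write callback set,
       the response body is written to stdout by default. */
    curl_easy_setopt(curl, CURLOPT_URL, "https://curl.se/");
    curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);

    CURLcode res = curl_easy_perform(curl);
    if (res != CURLE_OK)
        fprintf(stderr, "transfer failed: %s\n", curl_easy_strerror(res));

    curl_easy_cleanup(curl);
    return (res == CURLE_OK) ? 0 : 1;
}
```

Built with something like `cc fetch.c -lcurl`, this same handful of calls sits behind curl integrations in everything from scripts to embedded devices, which is part of why false security reports against it are so costly to triage.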