Does bug finding software threaten to make closed-source s/w more secure than FOSS?

www.npr.org /2026/04/11/nx-s1-5778508/anthropic-project-glasswing-ai-cybersecurity-mythos-preview

cross-posted from: https://slrpnk.net/post/36882585

An arms race is on the horizon. An AI project has produced a tool that finds bugs in software, and it has gotten very good at it. The bugs it finds are a treasure trove for criminals (incl. spy agencies) looking for vulns.

Of course the AI project is commercial and the code is proprietary closed-source. So far the company that made it has kept the s/w tightly controlled, partly for liability reasons: a legit fear that loose distribution would hand criminals a flood of 0-days to exploit, for which the AI corp could then be held liable.

The FOSS world seems to be at a serious disadvantage. This tool will be unavailable to the FOSS community, so devs will be cut off from a bug-finding capability that criminals and FOSS adversaries will have. Well-funded projects (i.e. mostly non-FOSS) can offer bug bounties and/or afford the bug-finding tool.

Is it time to ask govs to get off their ass and protect the commons? Should govs (the UN? NATO?) fund a competing project to find bugs in FOSS and inform the devs? Or mandate that all such commercial projects lend their AI bots to the commons?
