

That article has lots of issues:
> 17% of the most popular Rust packages contain code that virtually nobody knows what it does
That’s not true at all; the study he got that figure from actually says:
> Only 8 crate versions straight up don’t match their upstream repositories. None of these were malicious: seven were updates from vendored upstreams (such as wrapped C libraries) that weren’t represented in their repository at the point the crate version was published, and the last was the inadvertent inclusion of .github files that hadn’t yet been pushed to the GitHub repository.
So, of the 999 most popular crates analyzed, 0% contain code that nobody knows what it does.
He then lists some ways packages can be maliciously compromised:
- Steal credentials and impersonate a dev
- Misleading package names
- Malicious macros (this one is interesting, had never considered it before)
- Malicious build script
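The build-script vector is worth spelling out, because Cargo runs `build.rs` with the full privileges of the user before any of the dependency’s “real” code is reviewed or executed. A minimal sketch of the kind of access a hostile build script gets (this version only reports which variables are readable instead of exfiltrating anything; the variable names are illustrative):

```rust
use std::env;

// Names a hostile build script might probe for; the list here is illustrative.
const INTERESTING: [&str; 4] = ["HOME", "CARGO_HOME", "SSH_AUTH_SOCK", "AWS_ACCESS_KEY_ID"];

/// Report which sensitive environment variables are readable,
/// without printing their values.
fn readable_vars(names: &[&str]) -> Vec<String> {
    names
        .iter()
        .filter(|name| env::var(name).is_ok())
        .map(|name| name.to_string())
        .collect()
}

fn main() {
    // Anything listed here, a malicious build.rs could read and ship off
    // with a single HTTP request during `cargo build`. It could equally
    // well patch the source tree or drop a binary somewhere on PATH.
    for name in readable_vars(&INTERESTING) {
        println!("a build script could read {}", name);
    }
}
```

The same goes for proc macros: both run arbitrary code at compile time, which is why `cargo build` on an untrusted crate is already code execution.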
And his solutions are:
- Bigger std library (solves none of the above)
- Source dependencies (solves none of the issues he listed, only the case where the published crate doesn’t match its source repository, which the study found in 0% of packages and which is already detectable)
- Decentralized packages (which worsens every security concern)
- Centralized checksum database (so a centralized package registry is bad, but a centralized checksum index is good? How does that work?)
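For reference, what such proposals usually mean (e.g. Go’s checksum database, or the checksums Cargo already records in Cargo.lock) is separating *hosting* from *verification*: tarballs can come from any mirror, but are only accepted if their digest matches a small trusted index. A std-only sketch of that idea, with `DefaultHasher` standing in for a real cryptographic hash like SHA-256, and hypothetical package contents:

```rust
// Sketch: trusting *content* via a checksum index, without trusting the *host*.
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

/// Stand-in digest; real systems use SHA-256, not a 64-bit std hash.
fn digest(bytes: &[u8]) -> u64 {
    let mut hasher = DefaultHasher::new();
    bytes.hash(&mut hasher);
    hasher.finish()
}

/// The centralized part: a tiny append-only map of
/// "name@version" -> expected digest, published by the index operator.
fn checksum_index() -> HashMap<&'static str, u64> {
    let mut index = HashMap::new();
    index.insert("serde@1.0.0", digest(b"fn serialize() {}")); // hypothetical contents
    index
}

/// The decentralized part: the tarball may come from any mirror;
/// it is accepted only if its digest matches the index entry.
fn verify(index: &HashMap<&'static str, u64>, pkg: &str, downloaded: &[u8]) -> bool {
    index.get(pkg) == Some(&digest(downloaded))
}

fn main() {
    let index = checksum_index();
    // A genuine download passes; a tampered one from a rogue mirror fails.
    assert!(verify(&index, "serde@1.0.0", b"fn serialize() {}"));
    assert!(!verify(&index, "serde@1.0.0", b"fn serialize() { steal_keys(); }"));
    println!("tampered download rejected, genuine one accepted");
}
```

Of course, this only pushes the trust question onto whoever runs the index, which is exactly the inconsistency being pointed out above.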
Honestly, I can’t take that article seriously: it grossly misinterprets the study it cites, presents problems that exist in every single package manager ever, doesn’t propose ANY valid solution, and the only thing it points to as a solution suffers from ALL of the same issues and then some.
I don’t get how that output showcases anything, unless he had run it against a known Forgejo instance so the owners of that instance could confirm that he actually executed code. But he’s only showing a text file; that’s like saying “look, I hacked super_secure_self_hosted_service”:
For all we know, `chain_alpha.py` is just a bunch of prints.

Also, even if it is real (which I don’t really doubt, but I have seen no proof), holding the information instead of properly disclosing it is just childish. It’s not a carrot methodology, it’s a stick one, and one without a carrot. This is the sort of thing you do to big companies with no morals; doing it to a small open source project is just wrong, they don’t have the manpower or money to redo the investigation you already did. Release a CVE, talk to the devs, and/or push a PR, but saying “I found a vulnerability but I won’t tell you about it” is just dumb.