• 0 Posts
  • 49 Comments
Joined 3 years ago
Cake day: June 21st, 2023

  • Also, I’m sure you know this, but security through obscurity is a poor systems design choice in almost all scenarios.

    The only time I can think of off the top of my head where obscurity aids security is when secret keys are kept obscure. That isn’t even what people mean by “security through obscurity”, though, so I’d genuinely beg someone to give an example where obscurity actually benefits security rather than just giving a false sense of it.

    That’s not to say everything can or should be open source, of course - just that relying on your application being closed source for its security is a good way to open yourself up to attacks.




  • If you’re referring to GPL variants, that depends. You can absolutely use GPL software and libraries alongside closed source software. You just need to separate the GPL portions from the closed source portions with some sort of boundary, like running them as a service of some sort or invoking them as a CLI tool. What you’re not allowed to do is create derivative works of GPL software that aren’t themselves GPL.

    Also, there should be nothing dangerous about open sourcing code (unless you’re referring to financial risk to the business I guess). Secrets should never live in code, and obscurity is never secure.
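    The process boundary described above can be sketched roughly like this - a closed source program invoking a GPL CLI tool as a separate process instead of linking against it. The tool here is hypothetical; `echo` stands in so the sketch is runnable:

    ```rust
    use std::io;
    use std::process::Command;

    // Invoke a GPL-licensed CLI tool as a separate process. Because the
    // closed source code only talks to it across a process boundary, it is
    // generally not treated as a derivative work of the GPL code.
    // `echo` stands in here for a hypothetical GPL tool.
    fn run_gpl_tool(args: &[&str]) -> io::Result<String> {
        let output = Command::new("echo").args(args).output()?;
        Ok(String::from_utf8_lossy(&output.stdout).trim().to_string())
    }

    fn main() {
        // The closed source side consumes only the tool's output.
        println!("{}", run_gpl_tool(&["transcoded", "ok"]).unwrap());
    }
    ```

    Whether a given boundary is enough to avoid creating a derivative work is ultimately a legal question, not a technical one; this only illustrates the mechanism.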




  • Why would something have to be closed source in order to optionally provide secure boot? Couldn’t you provide the secure-boot-enabled binaries in addition to the source for everything except the boot keys?

    This is also something I don’t fully understand. Unfortunately, it’s not easy to find out what the requirements are to get a bootloader signed by MS. It’s possible I’m mixing these up with the requirements for something else that requires an NDA, but the requirements really aren’t simple to find online.

    It’s possible that the latter is actually the case and it isn’t secure boot that requires the code to be closed source. It’s also possible I’m entirely mistaken and they don’t need to make it closed source at all. I wish TrueNAS would give more details on why it needs to be closed source - whether it’s due to an NDA or whatnot.



  • Self sign doesn’t defeat the purpose

    The whole point of signing is that the BIOS can verify that the bootloader is legitimate. For a local Arch install, it doesn’t matter because Arch doesn’t distribute signed bootloaders and the environment is wholly personal. TrueNAS sells products and services though, such as enterprise-level support. It isn’t just something used in home labs. Their customers may require things we do not, and secure boot support appears to be one of them.

    Self-signing to work around the idiotic restrictions Microsoft imposes on getting it signed would be one way to do that, but then the software is essentially acting as its own authority on its legitimacy. Customers would realistically rather have the bootloader’s signature validate against the built-in key provided by MS, since that means MS is confirming its validity instead - not exactly a name I would trust, but I’m personally not a TrueNAS enterprise customer either.


  • This transition was necessary to meet new security requirements, including support for Secure Boot

    Secure boot is dumb, but it explains why they’d need the repo to be closed source. To summarize briefly: your bootloader must be signed to work at all with secure boot, which leaves you two options: self-sign (which defeats the purpose, though some Linux distros let you do this if you want), or follow all the requirements imposed by Microsoft. As far as I’m aware, one of those requirements is that the code must be closed source.








  • Rust currently isn’t as performant as optimized C code, and I highly doubt that even unsafe rust can beat hand optimized assembly — C can’t, anyways.

    A bit tangential, but to address this: nothing beats the most optimized assembly code. At best, programming languages can only hope to match it.

    Rust does have macros for inlining assembly into your program, but it’s horribly unsafe and not super easy to work with.

    Rewriting ffmpeg in Rust is not a solution here (like you’re saying).
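    For reference, the inline assembly facility mentioned above is the `std::arch::asm!` macro. A minimal sketch (x86_64-only - the instruction and register classes are architecture-specific, which is part of why it’s awkward to work with):

    ```rust
    use std::arch::asm;

    // Add 5 to a value using a raw x86_64 `add` instruction.
    // Inline asm is inherently target-specific (this only compiles on
    // x86_64) and must always be wrapped in `unsafe`.
    fn add_five(x: u64) -> u64 {
        let mut y = x;
        unsafe {
            // `inout(reg)` binds `y` to a general-purpose register that is
            // both read and written by the instruction.
            asm!("add {0}, 5", inout(reg) y);
        }
        y
    }

    fn main() {
        println!("{}", add_five(37));
    }
    ```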




  • I don’t understand how a bug is supposed to know whether it’s triggered inside or outside of a google service.

    Who found the bug, and what triggered it? Does it affect all users, or does it only affect one specific service that uses it in one specific way due to a weird, obscure set of preconditions or extraordinarily uncommon environment configuration?

    Most security vulnerabilities in projects this heavily used are hyper obscure.

    If the bug is manifestly present in ffmpeg and it’s discovered at google, what are you saying is supposed to happen?

    e) Report it under the usual 90-day disclosure rule, then fix the bug - or at least reduce the burden as much as possible on those who do need to fix it.

    Google is the one with the vulnerable service. ffmpeg itself is a tool, but the vast majority of end users don’t use it directly, therefore the ffmpeg devs are not the ones directly (or possibly at all) affected by the bug.

    There are a bunch of Rust zealots busily rewriting GNU Coreutils which in practice have been quite reliable and not that badly in need of rewriting. Maybe the zealots should turn their attention to ffmpeg (a bug minefield of long renown) instead.

    This is weirdly off-topic, a gross misrepresentation of what they’re doing, and horribly dismissive of the fact that every single person under discussion who is doing the real work is not being paid support fees by Google. Don’t dictate what they should do with their time until you enter a contract with them; until then, what they do is none of your business.

    Alternatively (or in addition), some effort should go into sandboxing ffmpeg so its bugs can be contained.

    And who will do this effort?
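    For what it’s worth, even without someone volunteering to do proper sandboxing (seccomp filters, namespaces, or tools like bubblewrap), a caller of ffmpeg can at least confine it at the process level. A rough sketch of the idea, with `true` standing in for an ffmpeg invocation - note this is basic process hygiene, not a real sandbox:

    ```rust
    use std::process::{Command, Stdio};
    use std::thread;
    use std::time::{Duration, Instant};

    // Run a program with a stripped environment, no stdin, and a hard
    // timeout. This does nothing about filesystem or syscall access (that
    // would need seccomp, namespaces, or a tool like bubblewrap). Returns
    // the exit code, or None if the process was killed on timeout or
    // failed to spawn.
    fn run_confined(program: &str, args: &[&str], timeout: Duration) -> Option<i32> {
        let mut child = Command::new(program)
            .args(args)
            .env_clear()                  // drop the inherited environment
            .env("PATH", "/usr/bin:/bin") // allow only a minimal PATH
            .stdin(Stdio::null())         // no interactive input
            .spawn()
            .ok()?;
        let start = Instant::now();
        loop {
            match child.try_wait().ok()? {
                Some(status) => return status.code(),
                None if start.elapsed() > timeout => {
                    let _ = child.kill(); // hard-kill a runaway process
                    let _ = child.wait(); // reap it
                    return None;
                }
                None => thread::sleep(Duration::from_millis(50)),
            }
        }
    }

    fn main() {
        // `true` stands in for something like
        // run_confined("ffmpeg", &["-i", "in.mp4", "out.webm"], ...).
        println!("{:?}", run_confined("true", &[], Duration::from_secs(5)));
    }
    ```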