• lobut@lemmy.ca · 3 months ago

        Yeah, but apparently the tests that “could” have caught it relied on mocks, which basically rendered them useless in those cases.

        • teejay@lemmy.world · 3 months ago

          Ah yes. The unintended consequences of mandated code coverage without reviewing the tests. If you can mock the shit out of the test conditions to always give you exactly the answer you want, what’s the point of the test?

          It’s like being allowed to write your own final exam, where all you need to pass is getting 90% right on the questions you wrote for yourself.
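
          Something like this (a minimal Python sketch with made-up names, not anyone’s actual test) is what that kind of coverage-only test ends up looking like: the mock decides the answer, so the assertion can never fail no matter how broken the real code is.

          ```python
          from unittest.mock import patch

          def validate_content(blob: bytes) -> bool:
              """The real check: reject anything shorter than a valid header."""
              return len(blob) >= 8

          def load_update(blob: bytes) -> bool:
              """Code under test: only load content that passes validation."""
              return validate_content(blob)

          @patch(f"{__name__}.validate_content", return_value=True)
          def test_load_update_accepts_anything(mock_validate):
              # The real validator never runs: the patched mock always says
              # "valid", so this test passes (and counts toward coverage) even
              # for input the real validator would reject.
              assert load_update(b"") is True
          ```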

      • phorq@lemmy.ml · 3 months ago

        Yup, and they’re run on an estimated 8.5 million test machines

  • Blackout@fedia.io · 3 months ago

    Lol. I run an open-saas ecom and everything is done live. No one but me handles it. The customers must think they’re tripping sometimes. Updates are rarely perfect on the first push.

  • edinbruh@feddit.it · 3 months ago

    I’m about to do this to this kernel driver. Certainly broken before, possibly broken after; what’s the worst that could happen?

  • JaggedRobotPubes@lemmy.world · 3 months ago

    I don’t actually know enough to know anything about this, but I’m assuming that’s badass and you can only do it with sunglasses on.

    • spacecadet@lemm.ee · 3 months ago

      I used to have to use a CI pipeline at work with over 40 jobs and 8 stages, just for checking some SQL syntax and formatting, plus a custom Python ETL library built on pandas that constantly hit OOM errors.

      They didn’t write any unit tests because “we can just do that in the CI pipeline”, and if you didn’t constantly pull the latest breaking changes into your branch the pipeline was guaranteed to fail; if you were lucky, you only had to restart 30% of your jobs.

      It was the most awful thing and killed developer productivity to the point that people were leaving the team; it sucks to spend 40% of your time waiting for CI scripts to fail while being yelled at to deliver faster.

        • thebestaquaman@lemmy.world · 3 months ago

          My test suite takes quite a bit of time, not because the code base is huge, but because it consists of a variety of mathematical models that should work under a range of conditions.

          That setup makes it very quick to write tests that are basically “check that every pair of models gives the same output for the same conditions” or “check that re-ordering the inputs in a certain way does not change the output”.

          If you have 10 models, with three inputs that can be ordered 6 ways, you now suddenly have 60 tests that take maybe 2-3 sec each.

          Scaling up: It becomes very easy to write automated testing for a lot of stuff, so even if each individual test is relatively quick, they suddenly take 10-15 min to run total.

          The test suite is now ≈2000 unit/integration tests, and I’ve had an obscure bug uncovered because a single one of them failed.
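
          Just as an illustration, a combinatorial suite like that can be generated with pytest.mark.parametrize; the models and formulas below are made up, purely to show the shape (pairwise agreement plus input-order invariance):

          ```python
          import itertools

          import pytest

          # Hypothetical stand-ins for the real models: each takes the same three
          # inputs and should agree with every other model within tolerance.
          MODELS = {
              "analytic": lambda a, b, c: a + b + c,
              "iterative": lambda a, b, c: sum((a, b, c)),
              "tabulated": lambda a, b, c: c + b + a,
          }

          INPUTS = (1.0, 2.5, -3.0)

          @pytest.mark.parametrize("name_a,name_b", list(itertools.combinations(MODELS, 2)))
          def test_model_pairs_agree(name_a, name_b):
              # Every pair of models should give the same output for the same conditions.
              assert MODELS[name_a](*INPUTS) == pytest.approx(MODELS[name_b](*INPUTS))

          @pytest.mark.parametrize("name", list(MODELS))
          @pytest.mark.parametrize("order", list(itertools.permutations(range(3))))
          def test_input_order_invariance(name, order):
              # Re-ordering the three inputs (3! = 6 orderings) must not change the result.
              reordered = tuple(INPUTS[i] for i in order)
              assert MODELS[name](*reordered) == pytest.approx(MODELS[name](*INPUTS))
          ```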

        • ByteOnBikes@slrpnk.net · 3 months ago

          Still waiting on approval for more resources. It’s not a priority in the company.

          I swear we have like 4 runners on a Raspberry Pi.