• projectmoon@lemm.ee · 1 month ago

    LLMs are statistical word-association machines, or more accurately token-association machines. So if you tell one not to make mistakes, it will likely weight the output toward including validation, checks, etc. It can still produce silly output claiming no mistakes were made despite having bugs or logic errors. But LLMs are just a tool! So use them for what they’re actually good at, not for what they themselves claim they can do lol.

    • flashgnash@lemm.ee · 1 month ago

      I’ve found it behaves like a stubborn toddler.

      If you tell it not to do something, it does it more. You need to give it positive instructions, not negative ones.
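
      For instance, here’s a hypothetical prompt pair (not from any real prompt guide, just illustrating the phrasing difference) that rewrites negative instructions as positive ones:

      ```python
      # Hypothetical prompts illustrating negative vs. positive phrasing.
      # Nothing here is from a real model's docs; it's just the idea above.

      negative_prompt = (
          "Do not write buggy code. Do not skip input validation. "
          "Do not claim the code works if it is untested."
      )

      # Rephrased positively: say what you want, not what to avoid.
      positive_prompt = (
          "Write code with input validation and explicit error handling. "
          "After writing it, list your assumptions and untested edge cases."
      )
      ```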