we love google (and LLMs)

  • Carighan Maconar@lemmy.world · 6 months ago

    It’s called the Chinese Room, and it’s exactly what “AI” is. It recombines pieces of data into “answers” to a “question”, despite not understanding the question, the answer it gives, or the pieces it uses.

    It has a very, very complex chart of which elements, in which combinations, need to be in an answer to a question containing which elements in which combinations, but that’s all it does. It just sticks word barf together based on learned patterns, with no understanding of words, language, context, or meaning.
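    Purely as an illustration (not how any real LLM works internally), that kind of rulebook can be sketched in a few lines of Python; the patterns and canned templates below are invented for the example:

    ```python
    import re

    # Hypothetical rulebook: input pattern -> canned response template.
    # The "operator" matches symbols to rules with zero comprehension.
    RULEBOOK = [
        (re.compile(r"what is (\w+)\?"), "{0} is a thing people often ask about."),
        (re.compile(r"do you like (\w+)\?"), "yes, {0} is great."),
    ]

    def respond(question: str) -> str:
        """Look the question up in the rulebook and fill in the template."""
        for pattern, template in RULEBOOK:
            match = pattern.search(question.lower())
            if match:
                return template.format(*match.groups())
        return "i do not have a rule for that."

    print(respond("What is consciousness?"))  # looks responsive
    print(respond("Do you like Google?"))     # still no understanding
    ```

    Every answer looks responsive, yet nothing in the program knows what any of the words refer to, which is the point of the thought experiment.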

    • Valmond@lemmy.world · 6 months ago

      Yeah, but the proof was about consciousness, and a really bad one IMO.

      I mean, we’re probably not more advanced than computers ourselves, and the claim the proof rests on, that consciousness is needed to understand context, seems very shaky.

      • kibiz0r@midwest.social · 6 months ago

        I think it’s kind of strange.

        Between quantification and consciousness, we tend to dismiss consciousness because it can’t be quantified.

        Why don’t we dismiss quantification because it can’t explain consciousness?