It’s called the Chinese Room and it’s exactly what “AI” is. It recombines pieces of data into “answers” to a “question”, despite not understanding the question, the answer it gives, or the pieces it uses.
It has a very, very complex chart of which elements, in what combinations, need to be in an answer to a question containing which elements in what combinations, but that’s all it does. It just sticks word barf together based on learned patterns, with no understanding of words, language, context, or meaning.
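To make the lookup-table picture concrete, here’s a toy sketch of that idea in Python (the rules and names are invented for illustration; this is nothing like how a real model actually works, just the “chart” intuition taken literally):

```python
import re

# The "rule book" the person in the room follows: a symbol in the
# question maps to a canned reply. No meaning anywhere, just lookup.
RULE_BOOK = {
    "weather": "It is sunny today.",
    "name": "My name is Room.",
    "hello": "Hello to you too.",
}

def chinese_room(question: str) -> str:
    """Answer purely by symbol matching; nothing here understands anything."""
    tokens = re.findall(r"[a-z]+", question.lower())
    for token in tokens:
        if token in RULE_BOOK:
            return RULE_BOOK[token]
    return "I do not know."  # no matching rule in the book

print(chinese_room("How is the weather?"))  # -> "It is sunny today."
```

From the outside, the room looks like it “answers questions”; on the inside it is only matching patterns against a table, which is the point of the thought experiment.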
Yeah, but the argument was about consciousness, and a really bad one IMO.
I mean, we are probably nothing more than advanced computers ourselves, which would make the claim that consciousness is needed to understand context seem very shaky.
I think it’s kind of strange.
Between quantification and consciousness, we tend to dismiss consciousness because it can’t be quantified.
Why don’t we dismiss quantification because it can’t explain consciousness?