Stumbled upon this when looking for help on Reddit:
https://www.reddit.com/r/github/comments/1cwag3q/are_race_conditions_racist/?chainedposts=t3_n8qi6n
proud Rust developer
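(Context for the link, for anyone who didn’t click through: the bot refused to explain race conditions because the word “race” tripped a filter. A race condition is just code whose result depends on the timing of concurrent threads, and the Rust joke lands because the compiler statically rejects data races. A rough sketch of what that looks like, names mine:)

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // The racy version doesn't even compile in safe Rust, e.g.:
    //
    //     let mut count = 0;
    //     thread::spawn(|| count += 1); // error: closure may outlive `count`,
    //                                   // shared `&mut` across threads is rejected
    //
    // The accepted version makes the sharing and locking explicit:
    let count = Arc::new(Mutex::new(0));
    let handles: Vec<_> = (0..2)
        .map(|_| {
            let count = Arc::clone(&count);
            thread::spawn(move || {
                *count.lock().unwrap() += 1; // lock before mutating
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    println!("count = {}", *count.lock().unwrap()); // always 2, no race
}
```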
Joke aside, every time people gush over AI, I have to remind them that an AI is just a puppy that has learnt how to maximise treats, not one that actually understands shit. And this is a perfectly good example.
I mean, I totally agree with you. But that also kinda ignores all the useful things a dog can be trained to do.
Oh, I’m not saying it can’t be trained well. That’s not my point.
Of course dogs can be trained to sniff out drugs or find people; the gist is that they were trained for this behaviour, and might not understand it the way we do.
A good example is research finding that studies on cancer-sniffing dogs had problems with false positives.
The false positive problem actually works in favour of the dogs here: their noses are excellent, so they know exactly whether the drugs are there or not. They also know that the humans can’t tell, so it’s easy to get a treat regardless. And they know not to overdo it.
Even more complicated are cats; it figures that they are by and large uninterested in being studied or in proving anything to you.
Right??? I’m continually floored by how many genuinely smart people I come across who ignore this concept, which is one of the biggest reasons I just don’t trust LLMs in a general sense. Like sure, I can use them fairly effectively, but the vast majority of people who interact with LLMs don’t treat them with an appropriate level of caution.
And that doesn’t even touch on the huge ethical (and legal) issues around how LLM devs acquire and use training data.
Dogs are way more intelligent than that. LLM tech is basically a way to quickly breed fruit flies to fly right or left when they see a particular pattern.