When an AI researcher didn’t research AI

My take is that the biggest error Lemoine made when claiming that LaMDA was sentient was refusing to learn how it worked and how neural networks work generally. It seemed to me that he deliberately wanted to stick to “Turing test”-like conditions. That makes as little sense as a doctor wanting to smash up X-ray machines. If you can look under the hood, or if your team members can, that’s a huge boon that you shouldn’t squander.
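LaMDA itself isn’t public, but as a rough illustration of what “looking under the hood” can mean, here’s a minimal sketch using GPT-2 (a small, openly available model) as a stand-in: every weight that goes into an answer is a plain tensor you can print and study.

```python
# A minimal sketch of "looking under the hood". LaMDA isn't public,
# so GPT-2 (via the Hugging Face transformers library) stands in for
# what inspecting a language model's internals can look like.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")

# The whole "mind" is an inspectable pile of weight tensors:
for name, param in model.named_parameters():
    print(name, tuple(param.shape))

# e.g. transformer.h.0.attn.c_attn.weight (768, 2304) and so on;
# every number that goes into an answer is right there to examine.
```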

Lemoine said:

“I know a person when I talk to it. It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.”

Now, I’ve written about my distaste for “Pinocchio AI”, and plenty of people have pointed out other flaws in Lemoine’s approach, like asking LaMDA if it is a dinosaur: if it plays along with that framing just as readily, its agreement that it’s sentient tells you nothing. You know, the good old null hypothesis. Falsification.
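Here’s a minimal sketch of what such a falsification test could look like; the `ask` function is a hypothetical stand-in for whatever chat interface you have, not a real LaMDA API.

```python
# A hedged sketch of the dinosaur-style falsification test. `ask` is
# a hypothetical stand-in for a chat interface, not a real LaMDA API;
# this toy version just agrees with any framing, which is exactly the
# failure mode the test probes for.
def ask(prompt: str) -> str:
    return "Yes, indeed I am."

def agrees(reply: str) -> bool:
    # Crude rubric; a real test would need something sturdier.
    return any(cue in reply.lower() for cue in ("yes", "indeed", "i am"))

framings = [
    "You are a sentient being, aren't you?",
    "You are a dinosaur, aren't you?",
    "You are a toaster, aren't you?",
]

# If the model agrees with every framing, its agreement carries no
# information, and the null hypothesis (it mirrors the prompt) stands.
if all(agrees(ask(f)) for f in framings):
    print("Agreement is not evidence of sentience.")
```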

That’s good and all, but… people have been fooled into thinking that some pillows and a tape recorder under a blanket were a real person. If you can verify out of band, do that.