Idiomdrottning’s homepage

Re: The Thoughts The Civilized Keep

Shannon Vallor (and I can’t necessarily vouch for the think tank that published it; I wasn’t familiar with them) wrote a piece called “The Thoughts The Civilized Keep”.

I think Vallor has said/done some other stuff that has been good or more thought-through so this isn’t meant as a general slag on her either.

I def agree with the basic point that so-called “Pinocchio”-style AI that really understands and really wants to be alive is thousands of years away, if ever (unlike AI that’s good enough to do a lot of stuff I do by hand today, like sorting images and folding proteins, which might come sooner).


This argument is just 100% empty semantics:

[U]nderstanding cannot occur in an isolated computation or behavior, no matter how clever. Understanding is not an act but a labor. Labor is entirely irrelevant to a computational model that has no history or trajectory in the world.

These words mean absolutely nothing.

Again, I’m on the same side as the person making the argument. Yes, GPT-3 is just a glorified Markov chain, a cut-up project, a slightly worse version of the I Ching or Tarot. GPT-3 specifically is a very long way away from understanding.
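The “glorified Markov chain” comparison can be made literal with a toy sketch. (Purely illustrative, written by me, not from Vallor’s piece: GPT-3 is a transformer over learned token probabilities, not a lookup table, but the spirit of “predict the next word from what tended to follow before” is the same.)

```python
import random
from collections import defaultdict

def train(text):
    """Bigram table: each word maps to every word observed right after it."""
    table = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        table[a].append(b)
    return table

def generate(table, start, length=8, rng=None):
    """Walk the table, cut-up style: each next word is sampled from
    whatever followed the current word in the training text."""
    rng = rng or random.Random()
    out = [start]
    for _ in range(length):
        followers = table.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
print(generate(train(corpus), "the"))  # some statistics-driven remix of the corpus
```

Every output is a jumble of fragments that really occurred in the training data, which is the cut-up point in miniature.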

But use real arguments please. Use better arguments than what GPT-3 would generate:

“True comprehension cannot happen in a function application because the real comprehension was the friends we made along the way. Friendship is something that the cold-hearted robots can never understand because they, unlike every single human, don’t remember their personal experience of the entire Earth’s history billions of years back nor do they, unlike every single human, have true and full awareness of the inevitable hand of the reaper.”


The move from

“Understanding is not an act but a labor”

to

“Labor [depends on] history or trajectory in the world”

is a conflation fallacy trading on two separate polysemes of “labor” (“process” vs “effort”).

Rather than restate her case in my own words, let me reply to it in more detail, because I don’t fully agree with her here.

More importantly, it reveals the sterility of our current thinking about thinking.

Our “current thinking about thinking” is a huge field across psychology, cognitive science, mathematics, even religion and poetry.

Dismissing all of that collective introspection as “sterility” is, well…

I don’t wanna unequivocally defend the Promethean quest of replicating our own awareness.

A growing number of today’s cognitive scientists, neuroscientists and philosophers are aggressively pursuing well-funded research projects devoted to revealing the underlying causal mechanisms of thought, and how they might be detected, simulated or even replicated by machines. But the purpose of thought — what thought is good for — is a question widely neglected today, or else taken to have trivial, self-evident answers.

That’s not true at all. That’s a core philosophical question from Zhuang Zhou to Camus to Ligotti to Bodhidharma.

I’d say the opposite is true—modern-day computational information processing over-emphasizes behavioristic purpose and results. (Though the fact that they have their focus there suits me just fine.♥︎)

What purpose, then, does thinking hold for us other than to be continually surpassed by mindless technique and left behind?

What purpose does it hold for us even without any machines? It’s the whole “sixpence none the richer” argument from Lewis, the old “Sisyphus is happy” argument from Camus, the old “chop wood, carry water” from Buddhist thought.

Our existence is precious to me because it is purposeless, because it is useless, because we may exist anyway. We’re enough.

Her reference to Dreyfus’ 1992 “What Computers Still Can’t Do” isn’t really current, since these AIP systems have moved on from structured heuristic symbol manipulation. They now work way more dreamily and intuitively. The mantra when I got my computational linguistics degree was that statistics was better than linguistics.

By symbol manipulation, I mean how the early modernist view of thinking overly conflated the map with the territory, for things like “grammar↔language”, “Linnaean nomenclature↔life”, etc. Think chess engines crammed full of theory and guidelines and deliberately programmed “if this then that” rules.

This is in contrast to contemporary AIP systems, including GPT-3 as crappy as that is, because the contemporary AIP systems are more… “grown” than “constructed”. Less “follow this recipe” and more “let’s try to catch some sourdough”.

Labor is entirely irrelevant to a computational model that has no history or trajectory in the world. GPT-3 endlessly simulates meaning anew from a pool of data untethered to its previous efforts.

A fundamental difference between AI and human is that AI is like a song on a tape. You can stop, rewind, start over. (It’s unlike a tape because it can go somewhere new each time.)

Each human child starts out without a personal history of experiences, but absorbs a lot of info and processes from culture and surroundings.

When an AI app starts over, it doesn’t start over completely fresh. Creating GPT-3 was a climate disaster but that work does not have to be re-done every time.

When a human child is born it does not remember what some other dead person has personally experienced and thought. When you’re absolute beginners, the kingdom is for you.♥︎♥︎

GPT-3 is constantly rewinding; it doesn’t reincorporate (although there are many AIs that do do that! Though that’s not necessarily what you want for tool purposes, since it comes at the expense of predictability), but it doesn’t start over completely either.
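The tape metaphor can be put in code. This is a schematic sketch of my own (the class names and strings are hypothetical, nothing from GPT-3’s actual internals or API): a stateless model that rewinds on every call, versus a variant that reincorporates what it sees.

```python
class StatelessModel:
    """An expensive one-time training run bakes in frozen weights;
    replying never changes them -- every call rewinds the tape."""
    def __init__(self, weights):
        self.weights = weights  # fixed after training, reused forever

    def reply(self, prompt):
        # Same starting state every time: no memory of past prompts.
        return f"reply to {prompt!r} using weights v{self.weights}"


class ReincorporatingModel(StatelessModel):
    """Folds each exchange back in: more adaptive, less predictable
    as a tool, since its answers drift with its history."""
    def __init__(self, weights):
        super().__init__(weights)
        self.history = []

    def reply(self, prompt):
        self.history.append(prompt)  # state accumulates across calls
        return f"reply to {prompt!r} informed by {len(self.history)} past prompts"


tape = StatelessModel(weights=3)
assert tape.reply("hello") == tape.reply("hello")  # identical every time
```

The climate-costly part (training) happens once; only the frozen weights get reused, which is why the rewinding model doesn’t start over completely fresh either.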

When GPT-3 answered Millière’s question it basically pasted together stuff from science-fiction pulp stories. That’s what it does. It’s a cut-up, statistics-driven jumble machine.

AI will in the future get so good at doing this that it’ll look like it cares.

Now to avoid falling into the same empty semantics “it’s not true labor because I say it’s not” pitfall myself… what does “caring” about something mean?

AI are conditionally rewarded on good results and extinguished on bad. Like humanity as a whole.

I almost wanna write “AI does try to deliver, and if that’s what caring about what they do means…” but “try to” is too teleological.

What even is “try”? Do humans ever “try” to do something, or is that just the word we’ve assigned to how it feels when we do things with uncertain outcomes?

We care because we don’t want bad outcomes (such as the people we love getting hurt).

We try because we care.

IDK, and maybe we never will.

I wrote the other day, in reference to Lewis’ sixpence argument:

Pretty nice! So even if our existence would be just like a bunch of beeping, broken Tamagotchis, pre-programmed robots on strings… we could still do good if we so wish. A meaningless good is good nonetheless.♥︎

It’s like… Vallor’s piece is motivated by fear. Yeah, things can go wrong pretty darn quickly, I’m not disputing that. But this fear can lead us to kinda reach when it comes to arguments.

It’s the God of the Gaps argument except that it’s “humanity of the gaps”.

For me, when I’m confronted with a fear such as this, such as “what even is the purpose of being a thinking human? Aren’t we just meat puppets with a one-way ticket to Boot Hill anyway?”, I wanna face that fear openly. I want to confront it. I want to think about it and sit with it and maybe accept that some things are pretty awful and some things pretty wonderful.♥︎

It’s one life, it’s this life, and it’s beautiful.♥︎


Sifr, who was the one who linked me to Vallor’s text, wrote:

I am here for considering non-human approaches to life and communication but AI does not strike me as interesting in this direction, like at all

Oh, same! That’s a really good point.

I just think she is saying a lot of nothing and a lot of unfounded things, based on just fear and hope and empty semantics.

Does that mean that the GPT-3 hype-crowd is right? Absolutely not. I’m not into AI stuff.

But that’s more because of how I feel about it—it strikes me as less interesting or fruitful than other forms of meditation or cognitive research—than anything I could actually logically argue for.

Model of Language vs Model of World

jmcbray writes in:

[I]t seems to me that when we say that GPT-3 lacks understanding, we mean that it’s building on only a model of language, not a model of language and a model of the world, the way hew-mons do.

Yes, that’s a distinction that puts GPT-3’s limitations in pretty clear terms.

GPT-3 specifically is a model of un-dereferenced references.

By de-referencing, I mean how humans like to do some sorta mapping from “the pencil on my desk” to the actual pencil on my desk that I can reach out and touch, from “I like pizza” to actually thinking about how some pizza smells and feels.
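A toy way to picture that dereferencing move (entirely my own illustration; the “world model” here is just a hypothetical dictionary, and every name in it is made up):

```python
# A tiny stand-in "world model": symbols mapped to things with properties
# you could actually act on. All entries are hypothetical, for illustration.
world = {
    "pencil": {"location": "my desk", "graspable": True},
}

def dereference(symbol, world):
    """The human-style move from a word to the thing it points at.

    Returns None when the symbol has no referent in this world model --
    which is a language-only model's situation for *every* symbol."""
    return world.get(symbol)

grounded = dereference("pencil", world)    # something I can reach out and touch
ungrounded = dereference("unicorn", world) # a word related only to other words
assert grounded["graspable"] is True
assert ungrounded is None
```

A model of language alone only ever relates “pencil” to other words like “write” and “eraser”; the `dereference` call into a world model is the step it never makes.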

What about Watson, the glorified Wikipedia that won Jeopardy? Is that just a model of language or also a model of the world? I’m not arguing for Watson’s level of comprehension here, just the desired scope. Is it Watson’s job to, solely on the language level, write Jeopardy answers just like it’s GPT-3’s job to write texts?

jmcbray continues:

The most interesting, to me, grounds for suspecting GAI is not possible is the argument that humans are not a general intelligence either – that we’re good a[t] a very wide range of things, but hopelessly bad at others, and especially bad at seeing our limitations.

Yes, maybe the fear isn’t so much that we’ll, like Mary Shelley’s Prometheus, grab the spark of fire from the heavens and create towering life out of slime and bricks.

Maybe the fear is that we’ll discover our own puppet strings.

But don’t worry, darling, if that were to happen. Discovery of circumstances isn’t the same as changed circumstances. Everything we could do before we could still do after. Dance, sit, chop wood, carry water, give a sixpence to God♥︎