One of the things that makes me super bored these days is blog posts with a super self-assured, reductive take on what AI is and will be and means and will mean. (And usually the post also has a tedious rant on what to call it instead of “AI”, and why “AI” is such a misnomer.) Those have gotten so old already.
Usually the take is that it’s an over-hyped nothingburger, and that take has been done to death. The opposite take is just as boring.
One of the worst ones I saw recently (I’ve thankfully forgotten its name) used the I-wish-I-could-say-it-was-unique schtick of mixing in threats of physical violence, like “I will kick in the teeth of the next person who hypes AI” or whatever it was. I don’t remember exactly, because I was dead from boredom, let alone all the violence.
I do have some sympathy for the fact that everyone wants to get in on day one and be the Marshall McLuhan or Timothy Leary or William Gibson or Scott McCloud or Cory Doctorow of this 🐝, first to understand our contemporary world properly and to describe it accurately, so I’m not saying you can’t keep making these posts full of unfounded cocky guesses. Go ahead. Saves me money on sleeping pills.
In the fifties they at least had the good taste to throw some drugs and tits and bug-eyed monsters into their speculative fiction. That was a li’l more fun.
Over email, someone asked “whaddayamean ‘day one’? neural network language models aren’t new”, and that’s true. But when I wrote “day one”, I tried to contextualize it by giving examples like Marshall McLuhan, who was writing in the 1950s even though researchers had been trying to invent television since the 1900s; similar time gaps exist for some of those other philosophers, like Timothy Leary. McLuhan wasn’t writing about the technical details of transmitting cathode-ray control codes; he was writing about how society was affected, and how our own minds were altered, by the practical application of the new tech and the emergent consequences thereof. For some quite complex inventions, those consequences end up being nothing, a dead end (like the whole decades-long stopgap that was “optical media” like CD-ROMs or, more recently, those embarrassing proof-of-work ledgers of non-fungible tokens), while other simple-seeming inventions do end up being something that changes societies profoundly.
It’s absolutely true, and needs pointing out, that hypes and tulip manias are driven by gamblers and bagholders. But it’s also true that sometimes tech, like modems or cars or TV, does end up making our day-to-day pretty different. Even for those of us who try to opt out: I can avoid television, but I still have to live in a world warped by policies set by officials mandated by an electorate swallowing televised lies.
I’m glad people are thinking about this stuff and not buying into hypes too hastily. At the same time, some of these pundit sermons ex cathedra have come across as more wishful thinking than anything else.
If ML models stay this bad, we won’t need any arguments, because they’ll collapse by themselves in dot-com 2.0; and if they become good, the “they suck” argument isn’t gonna get us far.
My whole case here is that it’s premature to guess whether they’re gonna become useful or whether they’re gonna stay ugly and confused, so I didn’t really wanna make a guess either way. Not that I can’t: my own guess is that they are going to become very capable. Not Pinocchio-level, just really useful and often the most convenient way to solve things, like the camera was for illustration and digital computation was for math. Which is why the very real problems of energy use and ownership concentration need to be solved sooner rather than later.
And if I’m wrong about that, if they’re gonna stay as bad and useless as the we-hate-all-changes crowd of conservative xennials believes they are, then that’s fine by me, too. Still need to solve those same two problems, though; a tulip by any other name still uses resources.