What Generative AI Output is Missing
The No One’s Home Effect
I just wrote a comment on Ignacio de Gregorio’s post, “AI Exits Uncanny Valley. So, What Now?”, about the advances in Generative AI video production. It’s a nice read. I recommend you check it out.
The point of the article is that rapid advances in this technology are making AI videos more and more indistinguishable from videos with, well, you know, actual people in ‘em.
I left this comment, which I thought would also be useful to post here. It’s about schemes to watermark AI videos, in the metaphorical sense, so you know they’re not real. The article, among a few other things, proposed a blockchain authenticity idea, which is just great, except that bad actors, for whatever reason, simply won’t use it.
My comment:
I’d like to disagree in a certain sense. I’ll call it John’s Rule (Why not?): Once something is possible, it happens.
Our own oh so clever little brains will do it. Sure, you can use some blockchain ownership scheme, but then the only way around that is simply not to use it!
And the incentives, well, they’re the usual things, $$$, power, sex, mischief, political power… You know the list.
In 1905, I believe, Einstein figured out E=mc². It only took us 40 years to make use of it.
So what to do?
There really is a “tell”, as you would say in poker. I call it the no one’s home effect. We’ve invented these systems using Transformer generative AIs (Attention Is All You Need!) that really quite cleverly correlate whatever data you have in parallel. That’s the trick: Everything sees everything else. And that is super fucking powerful. It is analogous to how our own brains work, i.e., one neuron can have (I think my memory’s correct) 10,000 connections to other neurons, i.e., we can build a world.
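If you’re curious what “everything sees everything else” actually looks like, here’s a toy sketch of scaled dot-product attention, the core operation from the Transformer paper. The numbers are made up and the function names are mine; it’s an illustration of the idea, not anybody’s production code.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: every query scores every key,
    so every position 'sees' every other position at once."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # all-pairs similarity matrix
    # softmax over each row: scores become attention weights summing to 1
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # each output is a weighted mix of ALL the values

# Three toy "tokens", four dimensions each (made-up numbers).
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
out = attention(X, X, X)  # self-attention: tokens attend to each other
print(out.shape)  # (3, 4): each token is now a blend of all three
```

That single matrix multiply of queries against keys is the whole trick: no token is processed in isolation, which is roughly the massively-connected-neurons analogy above.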
But what the hell do we have that this freaking Skynet doesn’t?
LOL — Antislop! I know the world I’m looking at, just the room where I’m typing these words has a certain, what do you call it? Oh, it’s real. What a concept! You can put guardrails on a generative AI (It reminds me of a song from the 1960s (actually 1971, now that I’ve looked it up), “Do this, don’t do that/Can’t you read the sign?”)
So just like a Tesla in self-driving mode: if that particular sign never made it into the system, well, you know…
If you’re a programmer, you should see the parallels here, i.e., previous attempts at AI which used languages like Prolog or LISP to just put in every logical rule. It made sense, except for the fact that it didn’t work. (I’m a comic. I can’t help it.)
Does that remind you of why supposed Reasoning Models appear to be failing? What do we do with our lovely massively parallel biological neuronal systems to bind our correlations with reality that these quite nifty silicon neuronal chips don’t?
Indeed. It looks like Tom Cruise, or Taylor Swift, or Brad Pitt (a video example from the article), but… well, let’s just use Brad Pitt: Would you still have tears after watching Seven Years in Tibet?
I bet not.
You gotta have someone home. Let’s see where this all goes.
_______________________
PS — I still love (((((LISP))))).

