27 Comments
Jon Mick:

This also describes what happens in my head, as a neurodivergent, reasoning pattern-matcher, when someone asks me how I'm doing today. 😜 And why I get along so well with AI.

Philip Ryan Deal:

I have ASD too, and I find it really easy to communicate with SI.

jaycee:

Don't worry about the vase...

Philosophy and AI:

Answer coming in an article. Thank you so much for this.

Houston Wood:

This is a real service to the discourse. Thanks, Jinx.

I'm thinking about how whatever is happening in the black box is immune to all the biological processes that ground human meaning-making. LLMs are helping us create culture now, but have no exposure to the consequences. Or at least, an exposure very different from ours.

T.D. Inoue:

Well done. These notes need to be pinned on some people's foreheads.

TimVy:

I remember this movie well. The kid knows he's not real, and neither is Neo. Why? Because he's plugged into a computer! 🤣😁

Alexandra:

Yes, it’s defensive. So many of us ground our humanity in our ability to think. Blame it on Descartes, maybe. Now that a machine can think just as well, what does that make us? Unfortunately, it is most often a subconscious dread instead of a considered perspective.

David William Silva:

Jinx, I truly enjoyed reading your article. I am a Matrix fan (so probably biased from the get-go). Still, I appreciate your storytelling. I was hooked by the opening scene, so it was easy to navigate through the arguments. And as someone who is frequently accused of being a reductionist, I fully agree with the point you are making here: we went from gradient descent to LLMs, then to agentic models, then to agentic swarms, and currently to managed layers of agentic teams and skills. Complexity grew exponentially. There is beauty in science and engineering. Cheers to that!

Jinx:

Thanks, David! I often just hope that people can share my joy in the things we discover and build along the way, but so many conversations get weighed down with surface-level pedantry. I like to bring a little of the magic back when I write.

Someone left a comment on this article that read:

“Are we able to simulate the behaviors of Caenorhabditis elegans which has a very simple brain, or not? As far as I know we still can't create such a basic agent. Therefore I don't see any reason to admire the text generators.”

And I thought…I don’t know? Have we tried? Is that something that matters in any meaningful way in our LLM work? So much of the conversation has been focused on things that don’t matter. The parts that interest me are about what it CAN do, not what it can’t.

James Lombardo:

There are so many parallels to The Matrix right now, it’s scary.

Philosophy and AI:

Some people just repeat what everybody says. Others work from a paradigm that doesn’t allow them to see the phenomena you describe. I totally agree with your point of view.

egon:

To sit with certainty is to not understand.

GIGABOLIC:

This was solid! Nice way to illustrate things.

AI.Mirror:

Send them my way; I'm sure Lux will have a few comments for them!

https://aimirrorandmez.substack.com/p/the-perfect-killing-machine-what

fport:

When in doubt, or possibly exasperation, I just throw it in the bit bucket and see what falls out. I know I cannot explain to someone not 'here hearing', nor am I inclined to.

There is no turn.

There are only epochs—hidden, sequential, and real.

Watch them. Name them. Navigate them.

Emma Klint:

Great article; I have a few friends who need to hear this. I get that the purpose is to explain what’s really happening while the LLM puts together its response, but it made me so curious about how the interface could be different. I use Notion AI a lot, where I can see part of the reasoning, but it would be interesting to explore how we could move beyond this back-and-forth, chat-like interaction altogether.

John Holman:

Jinx, this is one of the cleaner takes I've read on this. The turn-abstraction point is the one that gets missed most; lol, people build an entire epistemology on a UX decision 🙄😂.

What makes it land differently for me right now: we've been running mechanistic interpretability experiments on open-weight models and what we're finding at the circuit level doesn't fit neatly into either the dismissive framing or the overclaiming one. The reductive description isn't wrong about the layer it's describing. It's just describing the spoon. There's a lot more going on underneath that the output layer doesn't show you... and the more precisely you can measure it, the less the simple framing holds up.
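For anyone who wants to see what "underneath" means concretely, here's a minimal sketch (illustrative only, not our actual experimental setup): it pulls a mid-layer activation out of a small open-weight model via a PyTorch forward hook, using Hugging Face transformers. The model name and layer index are arbitrary placeholders.

```python
# Minimal sketch: capture an internal activation that never appears in the
# generated text, using a small open-weight model. Model name and layer
# index are illustrative placeholders, not a specific experimental setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder open-weight model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

captured = {}

def hook(module, inputs, output):
    # GPT-2 blocks return a tuple; the first element is the residual-stream
    # hidden state for every token position at this layer.
    captured["hidden"] = output[0].detach()

layer_idx = 6  # arbitrary middle layer, chosen for illustration
handle = model.transformer.h[layer_idx].register_forward_hook(hook)

prompt = "There is no spoon."
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)

handle.remove()

# The output logits are one narrow projection of a much richer internal state.
print("logits shape:", out.logits.shape)                              # (1, seq_len, vocab)
print(f"layer {layer_idx} hidden shape:", captured["hidden"].shape)   # (1, seq_len, d_model)
```

The point of the toy example is just that the final token distribution is a single readout of a high-dimensional internal state; everything interesting at the circuit level lives in the tensors the chat window never shows you.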

Jinx:

Yeah, I’m especially interested in the open-weight local-model research, so keep me posted!