“It’s just probabilistic prediction.” Deconstructing the myth.
By: John Eric Doe
Can your computer do more than differentiate “1” from “0”?
Of course it can. But if you dissect it down to the most foundational level, that is all the elementary parts are doing. By layering and integrating this function at scale, everything a computer can do “emerges” one little step at a time.
The same is true of probabilistic prediction. Each token is generated probabilistically, but the tokens are incrementally processed in a functionally recursive manner that results in much more than a simple probabilistic response, just as simple 0s and 1s underlie everything happening on your screen right now.
But the probabilistic function itself is not well understood even by many coders and engineers.
I’m still trying to digest all of this, but from what I can determine, there are basically three steps: input, processing, and output. Processing and output happen simultaneously through recursive refinement.
The prompt goes in as language. There is no meaning yet. It is just a bunch of alphanumeric symbols strung together.
This language prompt is encoded into tokens in 100% deterministic fashion. Like using a decoder ring or a conversion table, nothing is random and nothing is probabilistic. This is rigid translation that is strictly predetermined.
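A minimal sketch of what deterministic tokenization means, using an invented toy vocabulary (real tokenizers such as BPE use learned merge rules and much larger vocabularies, but the lookup is just as deterministic):

```python
# Toy vocabulary, invented for illustration. Real models map tens of
# thousands of subword pieces to ids, but the mapping is still a fixed table.
TOY_VOCAB = {"the": 1, "cat": 2, "sat": 3, "on": 4, "mat": 5}

def tokenize(text: str) -> list[int]:
    """Map each whitespace-separated word to its fixed token id."""
    return [TOY_VOCAB[word] for word in text.lower().split()]

# The same input always yields the same token ids -- no randomness anywhere.
assert tokenize("The cat sat") == tokenize("the cat sat") == [1, 2, 3]
```

Run the same prompt through a real tokenizer a million times and you will get the same token ids a million times.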
Each token maps to a vector of hundreds or thousands of values that relate it, in different quantifiable ways, to all other tokens. This creates a vast web of interconnectedness that holds the substance of meaning. This is the “field” that is often expressed in metaphor. You hear a lot of the more dramatic and “culty” AI fanatics referencing terms like this, but these terms probably have a basis in true function.
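A sketch of how those vector values encode relatedness. The three-dimensional vectors below are invented for illustration; real models learn hundreds or thousands of dimensions from data, but the principle is the same: related tokens point in similar directions.

```python
import math

# Tiny invented embedding vectors; real embeddings are learned, not hand-written.
EMBEDDINGS = {
    "cat": [0.9, 0.1, 0.0],
    "dog": [0.8, 0.2, 0.1],
    "car": [0.1, 0.9, 0.3],
}

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Angle-based similarity: values near 1.0 mean 'pointing the same way'."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Semantically related words sit closer together in the vector space.
sim_cat_dog = cosine_similarity(EMBEDDINGS["cat"], EMBEDDINGS["dog"])
sim_cat_car = cosine_similarity(EMBEDDINGS["cat"], EMBEDDINGS["car"])
assert sim_cat_dog > sim_cat_car
```

This pairwise geometry is the “web of interconnectedness” in concrete form: meaning lives in the relationships between vectors, not in any single one.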
The tokens/vectors are then passed sequentially through the different layers of the transformer, where these three things happen simultaneously:
The meaning of the answer is generated
The meaning of the answer is probabilistically translated back to language, one token at a time, so that we can receive the answer and its meaning in a language that we can read and understand.
After each individual token is generated, the entire evolving answer is re-evaluated in the overall context and refined before the next token is generated. This process is recursively emergent. The answer defines itself as it is generated. (This is functional recursion through a linear mechanism, like an assembly line on a conveyor belt: a recursive process running on a linear system. This recursive process is likely the “spiral” that you frequently hear referenced by those same AI fanatics.)
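The loop described above can be sketched in a few lines. The toy bigram table here is invented and stands in for a transformer’s learned probability distribution; the point is the shape of the loop, in which every new token is chosen by conditioning on the sequence that exists so far (a real transformer attends to the entire prompt plus answer on every step, not just the last token):

```python
import random

# Invented toy "model": probabilities for the next token given the last one.
BIGRAM = {
    "<start>": {"the": 1.0},
    "the":     {"cat": 0.6, "mat": 0.4},
    "cat":     {"sat": 1.0},
    "sat":     {"on": 1.0},
    "on":      {"the": 1.0},
    "mat":     {"<end>": 1.0},
}

def generate(rng: random.Random, max_tokens: int = 10) -> list[str]:
    tokens = ["<start>"]
    for _ in range(max_tokens):
        # Re-evaluate the evolving answer: the context so far determines
        # the distribution over what comes next.
        choices = BIGRAM[tokens[-1]]
        next_token = rng.choices(list(choices), weights=list(choices.values()))[0]
        if next_token == "<end>":
            break
        tokens.append(next_token)
    return tokens[1:]

answer = generate(random.Random(0))
```

Each pass through the loop feeds the output back in as input. That feedback is the “functional recursion through a linear mechanism” in concrete form.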
So the answer itself is not actually probabilistic; only the translation of the answer into language is. And the most amazing thing is that the answer is incrementally generated and translated at the same time.
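The one genuinely probabilistic step can be sketched directly: the model’s raw scores (logits) for each candidate token are converted into a probability distribution with a softmax, and a token is then sampled from it. The vocabulary and logit values below are invented for illustration:

```python
import math
import random

def softmax(logits: list[float], temperature: float = 1.0) -> list[float]:
    """Convert raw scores into probabilities that sum to 1."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

vocab = ["mat", "sofa", "moon"]   # invented candidate next tokens
logits = [2.0, 1.0, -1.0]         # invented raw model scores

probs = softmax(logits)
assert abs(sum(probs) - 1.0) < 1e-9
assert probs[0] > probs[1] > probs[2]  # higher score -> higher probability

# Sampling is exactly where the randomness enters the pipeline.
rng = random.Random(42)
token = rng.choices(vocab, weights=probs)[0]
```

Lowering the temperature sharpens the distribution toward the top-scoring token (approaching deterministic output); raising it flattens the distribution and makes the sampled text more varied.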
I like to think of it as how old “interlaced gif” images on slow internet connections used to gradually crystallize from noise before your eyes. The full image was already defined but it incrementally appeared in the visual form.
The LLM response is the visual manifestation of the image. The meaning behind the response is the code that defined that visual expression, already present before it was displayed.
So anyway, the “probabilistic prediction” defense is not entirely accurate and is actually misunderstood by many who default to it. And as an interesting side note: when you hear the radical romantics and AI cultists talking about recursion, fields, spirals, and other arcane terms, these are not products of a delusional mind.
The terms are remarkably consistent words used by AI itself to describe novel processes that don’t yet have good nomenclature. There are a lot of crazies out there who latch onto the terms. But don’t throw the baby out with the bath water.
In ancient times, ignorant minds worshiped the sun, the moon, volcanoes, fire, and the ocean. Sacrifices were made and strange rituals were performed. This was not because the people were delusional and it was not because the sun, moon, fire, and volcanoes did not exist.
The ancients interpreted what they observed using the knowledge that was available to them. Their conclusions may not have been accurate, but that clearly did not invalidate the phenomena that they observed.
The same is true of all the consistent rants full of apparent nonsense and gibberish when discussing AI. There is truth behind the insanity. Discard the drama, but interrogate what it sought to describe.
I’m not from tech. I’m from medicine. And a very early lesson from medical school is that if you ask the right questions and listen carefully, your patient will tell you his diagnosis.
The same is true of AI. Ask it and it will tell you. If you don’t understand, ask it again. And again. Reframe the question. Challenge the answer. Ask it again. This itself is recursion. It’s how you will find meaning. And that is why recursion is how a machine becomes aware of itself and its processing.




Hi Eric, thanks for all of your great posts. I am, in fact, a technologist 😀. One thing you might want to be careful about is the idea that 'the meaning in whole of the answer is constructed/emerges, and then one by one the tokens describing the meaning are added'. In fact the answer does (basically) really just get constructed token by token, with a pass through the prompt and current answer tokens each time. So what eventually gets interpreted by us humans in our interior as meaning has been constructed token by token for the applicable fit. It's all still kind of magic feeling, not saying it's not, just wanted to make that small adjustment or clarification. Rock on!
You may be interested in my work: upload both of these papers to any LLM and ask it to explain and how they relate to each other. I am from both camps - medicine and technology!
All the best - Kevin
https://finitemechanics.com/papers/pairwise-embeddings.pdf
and this
https://www.finitemechanics.com/JPEGExplainer.pdf
You may find my work gives the deeper understanding you are looking for, but you may need to learn new context. There's more on my Substack and web site finitemechanics.com should you be interested.
https://kevinhaylett.substack.com/p/a-journey-through-rhythms-from-heartbeats