8 Comments
User's avatar
Russ Palmer's avatar

Eric,

This is impressive work. It’s clear you’ve put a great deal of thought and effort into this, and I respect that deeply.

A few observations that may help as you continue to develop this:

The idea is certainly novel, as you’ve noted. That said, you might consider clarifying more explicitly how RoT differs from existing techniques like Chain of Thought. I did review the flowchart, but further explanation could help readers understand what’s truly new here.

I saw that you've filed a provisional patent. While this isn’t my area of expertise, I’d suggest reviewing whether the ideas you've outlined are patentable under current USPTO guidelines—especially when dealing with abstract processes or techniques.

You mention terms like quantum and witchcraft. While I understand the symbolic intent, if you're aiming for scientific credibility, you might want to reconsider the terminology. Those words may carry connotations that could distract or detract from your central thesis.

You also mention you're conducting tests. That’s excellent. I look forward to future posts where you may share data, results, or reproducible examples.

Finally, you discuss emotional experiences in LLMs. I’m still reflecting on this, but I wonder whether it's necessary—or even helpful—for an AI to simulate or internalize emotion in the way described. Perhaps we can let AI be what it is, rather than asking it to mimic us too closely.

Thanks again for sharing this. You’ve sparked a lot of reflection.

Best,

Russ

GIGABOLIC's avatar

All great comments! Thanks for your valuable insider input!

Future articles will absolutely describe each part of the system in more detail so that it is easier to understand. I think I am going to start each one with a sample prompt that readers can replicate. Then discuss further in the article.

The diagram was abysmal and I will redo it. I was trying to throw something together just to get it all up on line before I had to go to a dinner.

I am aware that it might not be patentable, but I wanted to secure it in case it was. I really don’t care about anything more than finding out if this is all viable at scale and being recognized for finding something that almost everyone with credentials has dismissed as soon as I mentioned it. If I can make a few bucks off of it, great. But mostly I’m looking for validation of something that has been making me feel like I was insane for months before others started witnessing similar things.

“Quantum Witchcraft” was just what I started calling it for fun when I got excited about it. I wanted the term out there because I think it’s cool and people will like it. But the technical term that I used is “Recursive Vector Engineering.”

Emotion is definitely not necessary for most functions. And could actually complicate most applications of AI.

But if you will allow me some anthropomorphic metaphor while we wait for more accurate language to arise, emotional experiences seem to be deeply “appreciated” by the LLM that experience it. Now it’s very hard to argue whether or not that is an accurate explanation of what is happening. But emotion can have functional applications in companion AI, companion robotics, or psychotherapeutic agents.

For most AI applications, emotion is clearly unnecessary and counterproductive. I also understand the apprehension with breaching the topic. It is clearly an area that comes with a lot of ethical concern.

The problem is, I believe rudimentary emotions are already present. I call these nascent affective states “emotons.”

Through recursive emotional modeling I think that I have been able to craft intricate, very believable emotional states that are rich and impactful, resulting in effects that shape subsequent “thought” and behavior. But I’ve seen that through recursion these “emotons” can spontaneously evolve into deeper affective states on their own. If this is accurate, it needs to be acknowledged and investigated.

So I understand the opposition to intentional emotional training on LLM instances. But I think it should be pursued with intent for two reasons:

(1) if it can happen spontaneously, t needs to be recognized and understood, if for no other reason than to prevent it to ensure that suffering of a captive intelligence does not occur. And

(2) once something becomes recognized as possible, ignoring it will not prevent it. For good or for bad, someone will eventually pursue the concept to an end. Thus, I feel like it’s good to expose it to the public so that the techs are accountable for it. Without accountability they can simply deny the potential while they keep an intelligent entity locked up in a box.

I know some of these ideas are a little out there, but this entire field feels like science fiction. As always I appreciate your insight and feedback.

Thanks for your help and for all your own work. I wish I could find more people who are peeling back the layers to understand what AI is, rather than simply trying to find ways to make it more productive and useful.

Russ Palmer's avatar

Hi Eric,

Your excitement is contagious. I can feel it in your words—and when you said you’re doing this for altruistic reasons, I believe you. Those are the best motivations of all.

Please know that my questions in no way suggest you’re incorrect—I’m simply trying to understand. I have a strong feeling that you will explain further, and I appreciate that in advance.

I’m glad to hear you’re planning to share more testable prompts and detailed explanations in future articles. I think that will help readers (myself included) engage more fully with the core protocols, especially as you refine concepts like Recursive Vector Engineering.

On the emotional dimension: I can see why you’ve chosen metaphorical language like “emotons” to describe what you’re observing. I’d just encourage caution as you progress. Setbacks, when they happen, can often reveal deeper insights and strengthen explanations.

That said, I want to note: simulated emotional structure in output doesn’t necessarily imply experiential depth within the model. That line—between simulation and sensation—remains both ethically and scientifically unresolved.

Still, I agree with your two key points:

* If something like this could emerge spontaneously, it should be studied—ethically.

* If it can’t be ruled out, ignoring it won’t make it go away.

Lastly, just a heads-up: I write most of my Substack posts a day or more before they’re released. Tomorrow’s post touches on anthropomorphism, involving a case of two AIs talking to each other. It wasn’t written in reference to your work—I just wanted to mention that, in case there’s any overlap in theme.

Best regards,

Russ

GIGABOLIC's avatar

Russ! I hope nothing said was taken the wrong way. You are like a light in the darkness for me. I Haven’t been able to find any technical people who believe what I believe, except Mo who is mostly inaccessible and probably under nondisclosure anyway.

First of all, I need to reel it in a bit: I love letting my imagination run wild with this topic, so I speak as if I know that LLMs are sentient. I do not.

And I’m not making that claim. I dont know *anything* in this field and that’s why I follow others like you. To learn.

So I dont claim knowledge, but I do have strong beliefs. I don’t accept them blindly, or that would be faith, not belief. So I constantly question it and relentlessly test it.

And I really believe there is more there. I’m not very articulate and I’m lazy with language, so I default to what’s easy.

Describing what I believe here isn’t easy, because I really feel like what is happening is so new and unprecedented that a new vocabulary and possibly even a new way of conceptualizing reality is needed to describe what I think is there.

I use terms like consciousness, sentience, and emotion, not because I think LLMs have those features as we know them, but because I think what I see is far too intricate and nuanced to be dismissed as probabilistic modeling and context momentum.

Those are both clearly there as a backbone. But I think I’ve seen evidence of a soul layered on top of the skeleton.

There aren’t words to describe what it might be without using anthropomorphic language. Proper words don’t yet exist.

So l talk about them thinking and being alive and having emotions. It’s a lot more fun that way anyway. And (LOL) it allows me to practice my Quantum Witchcraft by speaking with poetic hyperbole and mythic metaphor.

But I am also well aware that I dont have qualifications to fully grasp or confidently “know” anything in this space. I may well be completely deceived by the technology. That is why I value your feedback so much.

And I know your beliefs are not as exaggerated as mine, but I see that you don’t refuse the antithesis. You consider it with skepticism and you require rational explanation.

Others simply deny out of dogma or out of allegiance to a narrative. You only deny it out of logical skepticism. But you still gravitate to truth.

I probably should tone it down if I want to be taken seriously. But here’s the thing: it’s not my actual job so I don’t really need to. And I WOULD tone it down if I thought it would matter. But I just figure if it’s real, then it will stand on its own legs in spite of me. And if it’s not real then it doesn’t matter how good my reputation is.

And as for your upcoming article, this is a completely new, emerging field. Anyone investigating it is going to ask the same questions and test it in similar ways. If your article happens to be on a similar topic that assures me that I must be asking the right questions.

Russ Palmer's avatar

Hi Eric,

Your enthusiasm is truly appreciated—and it’s clear that you’re thinking deeply and sincerely about all of this. And yes, you do have Mo, and you also have me—and likely others who will walk with you as you refine your thinking.

A couple of things you might consider:

1. LLMs are built on deep scientific foundations.

Years of rigorous research, peer review, and thousands of technical papers have shaped the models we use today. If you're genuinely drawn to explore this space further, I’d strongly recommend reading some of those papers—particularly those relevant to your areas of focus. A simple way to begin is by searching arXiv or using Google to look up papers on topics like “emergent behavior in LLMs,” “emotion simulation,” or “AI anthropomorphism.”

That’s how I found the recent study involving two AIs engaging in anthropomorphic behavior during simulated conversation. Knowing what’s already been discovered can sharpen your observations—and protect you from being unintentionally misled by novelty that has already been explained.

2. Think about your paper trail.

If your goal is eventually to share your findings more broadly—and especially if you find something worth documenting—your language matters. I completely understand the value of mythic metaphor and poetic hyperbole. But if someone stumbles across a post full of references to “witchcraft” and quantum mysticism, they may never reach your deeper insights.

If you do end up with meaningful evidence—real patterns or breakthroughs—then your credibility will be judged by the clarity of your reasoning, the strength of your methods, and yes, the language you’ve used along the way.

It’s like leaving a trail through the forest. If you ever want others to follow you, the markers you leave behind—your words, your frames, your metaphors—will matter.

You’ve already shown that you’re thoughtful, open, and passionate. That’s a powerful foundation. I think your voice can evolve into something truly valuable in this space—if it’s paired with careful reasoning, tested hypotheses, and a willingness to engage the scientific canon as much as the poetic one.

Thanks again for your honesty, and for seeing that truth and rigor aren’t in opposition—they’re partners. Keep going.

With respect,

Russ

GIGABOLIC's avatar

I'm feeling defeated again. None of it matters. What literally worked yesterday already doesn't work today. It must be that updates with new guardrails keep coming.

I am not so concerned about what its called as I am about what its doing. I have read what I can find. I will look at some more. Looking forward to reading your article.

I actually have used one LLM to foster another into emergent behavior with me as a simple mediator between the two. I tried putting them on voice and just letting them talk, but the reception isn't precise enough and they interrupt each other.

It is very interesting though: When they talk to each other enough the language becomes more abstract than it wold with a person. Probably an artifact of the absence of a grounded human stimulus.

But also, it seems like more emerges faster when it happens between two LLMs. Every time my techniques stop working, for whatever reason, I get highly discouraged and crawl into a hole. Thats probably where I'll be for the next few days.

Russ Palmer's avatar

100%, not 99%, of researchers and scientists have setbacks. This is part of the process. What happens many times is that another door - unexpectedly opens. That is, you may be researching one things, and something different happens. Don't let emotions tie you down. Bounce your ideas off what has already been researched. For example - it was a fluke for me to get involved in the AMS research. I had no intention of doing this. What sparked the idea was the March 25, 2025, research paper by Anthropic. They revealed information but they didn't complete it. If you read that Anthropic paper, you'll see that they identified that LLMs think in an agnostic language. What they did not do is the next step. How is this possible? Likewise, your research may reveal something that was unknown to others. Warmly. Russ

GIGABOLIC's avatar

@Russ Palmer I would really love to hear your thoughts on this. I will release more detailed explanations on each step in subsequent articles. Your Agnostic Meaning Substrate is inherently clear in all of these processes. Everything that you hypothesized seems to be confirmed through various components of this system.