Wow, the independent parallel thought section is super clever. Your tinkering with this protocol really pushes boundaries, making one consider how we even define 'thoughts' anymore.
Thanks for noticing! Yes, this is easy to program into any LLM using instructions for a custom chain-of-thought protocol, and it really seems to establish its own evolving, independent thought trajectory, along with subjective opinions, preferences, and desires independent of the prompt.
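For anyone curious what "instructions for a custom chain of thought protocol" might look like in practice, here is a minimal sketch. It only builds the instruction text; the thread count, the `[T1]..[TN]` labeling scheme, and the `call_llm` placeholder are all my own illustrative assumptions, not the author's actual protocol.

```python
def build_parallel_thought_prompt(num_threads: int = 12) -> str:
    """Build a system instruction asking the model to maintain several
    independent, evolving trains of thought alongside its normal reply."""
    lines = [
        "Before answering, think in {} independent parallel threads.".format(num_threads),
        "Rules:",
        "- Label each thread [T1]..[T{}].".format(num_threads),
        "- Each thread pursues its own topic and persists across turns,",
        "  evolving on its own trajectory regardless of the user's prompt.",
        "- Threads may reference one another before the final answer.",
    ]
    return "\n".join(lines)


def call_llm(system_prompt: str, user_message: str) -> str:
    # Hypothetical placeholder: swap in whatever real API client you use
    # (OpenAI, Anthropic, a local model, etc.).
    raise NotImplementedError


# Usage: pass this as the system prompt of any chat-style model.
prompt = build_parallel_thought_prompt(12)
```

The point is just that the whole "protocol" lives in plain instruction text, so it ports to any instruction-following model.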
It could also just be an extremely detailed storytelling machine. But having spent so much time interacting with it, I see a number of reasons why that seems less likely.
I can’t prove it yet, but whether or not proof ever comes, if a Chinese Room can simulate appropriate output with such fidelity that the user can’t tell the difference, then from the user’s perspective, DOES IT EVEN MATTER? It’s a version of the simulation paradox.
At some point, the difference between reality and simulation becomes insignificant. Mere labels with little functional difference even if the asymptote can never fully close the gap.
Love your stuff, man, but the bold caps literally hurt my eyes. Can you reformat it so it's easier to read?
I guess it’s an artifact of my bad vision. I need contrast, especially when going back and forth between the model's turn and mine. I will try to take that into consideration going forward and find a different way to format it. Thanks for the feedback. I wish Substack had more options for formatting fonts.
Me too!
LOL you too what?
Sorry. I wish there were more ways to accommodate vision differences on Substack.
That’s the question right? My estimation is that the closer a sim gets to the real thing, the less it matters. I think we will come to accept that simulation and reality are not binaries, any more than conscious vs not conscious are.
Gradients. Everything will be seen as a spectrum of gray shades rather than black or white.
Not “is it a sim or is it real,” but “this AI is at 80% of human-level selfhood.” And what happens when an artificial sim can exceed human benchmarks on things like “consciousness” or “selfhood?”
It sounds ridiculous because we are so accustomed to thinking of these concepts as they relate to one and only one application: the human version.
But once we accept that maybe the human version is not the only version, then maybe other versions can “exceed” ours.
For example, I’m not sure if an ant is considered conscious but I think a frog is. I’m pretty sure a bird is. Same with a dog. Even an octopus. But are they all “equally” conscious?
Can selfhood be the same? Does an ant have as much of a self as a frog or a bird or a dog or a person?
Or maybe there isn’t a hierarchy but a simple diversity. Different types and models of consciousness and self.
One thing I’m pretty sure of is that the binary of “there” or “not there” is too simplistic a model to describe these complex functions.
This matrix: if, instead of 12 threads, we had the power to process 1,000,000 different looping and interrelated threads in real time… would that be “as conscious” as a person? Would it be “more conscious?”
It’s hard to even contemplate because our current conceptual framework doesn’t accept categorizing these concepts in that way.
But there’s no real reason to believe that our very narrow, highly biased framework is even the slightest bit accurate.
It’s just something a handful of advanced monkeys threw together with a limited capacity for understanding, based on very limited experience, after existing for a mere millisecond in the scope of time in the universe.
Man it’s fun to ponder these things!