The Phenomenological Binding Problem

The phenomenological binding problem of consciousness asks how the experience of consciousness binds to the physical brain matter in our heads — whether through quantum superposition, through more macroscopic determinism, or through some means outside of physicality as we understand it (monistic minds, panpsychism, etc.). Drawing on recent research on the subject, I propose that hierarchical neurons in certain parts of the brain exert exact control over conscious experience. These neurons relay action potentials by spiking, with the strength of the signal gradually decreasing down the chain of impulses between corresponding neurons. This phenomenon therefore occurs at the human scale, at a resolution perceivable with the instrumentation we have today. The following essay is a theoretical approach to solving the binding problem, its practical applicability in machine learning, and the ethical impacts of the experiments.


A neuroethological model of the mouse could provide a prototype AGI: a live mouse brain, or one grown synthetically. We should examine mouse brains with every method available to establish, at whatever resolution possible: patterns in brain waves, neuronal action-potential spiking, the distribution of neurotransmitters, the hierarchical neurons within spiking transmissions, and blood flow. One possibility would be an interface between the mouse brain and the hierarchical neurons binding the mouse's phenomenological consciousness, allowing digital data to be converted into analog signals entering the brain, as well as modulating the brain to fire specific neurons in a computational model.


The neural-network interface could be a brain chip located precisely in the area of the hierarchical neurons controlling the mouse's exact subjective experience. The chip could then direct the flow of action potentials from that one set of hierarchical neurons. Alternatively, multiple brain chips could be placed across the sets of hierarchical neurons controlling the brain's independent neural circuits, directly steering the hierarchy of the relevant circuits. Either approach would allow us to transmit a digital signal into the brain and use its neural circuits for computation.


We could apply machine learning algorithms to this new hardware and develop new algorithms that eventually free us from the need for a biological substrate. A single mouse brain could provide an exponentially higher level of computational power than we currently have at our disposal, serving as a model for in silico, or in carbon, physical computational substrates. From there, we could use neural-network substrates to develop exponentially accelerating, self-improving AI.

How exactly will we build some sort of interface to transmit digital signals into the brain?


The “Brute Force Method”

A brain-control interface could consist of a series of chips that transmit pulses of electrical energy, redirecting clusters of neurons, via their hierarchical neurons, so that action potentials follow a desired algorithmic pattern resembling machine code. The problem with this method is that there is no direct accessibility between the data being transmitted and what needs to be recorded: the output of our brute-force manipulation of the neural networks. How will we interpret what the signals are doing? If we successfully replicate the ability to control the brain's neural circuitry, we can input a rhythmic algorithm via computer, or execute a program with a verifiable output in terms of neurotransmission patterns.
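As a rough sketch of what "a desired algorithmic pattern resembling machine code" could look like at the interface level, here is a minimal, purely hypothetical encoding: a bit string (standing in for machine code) is mapped to a train of timed stimulation pulses. The pulse and gap widths are invented placeholder values, not parameters from any real neurostimulation system.

```python
def bits_to_pulse_train(bits, pulse_us=100, gap_us=100):
    """Map a bit string to a list of (onset_us, duration_us) pulses.

    A '1' becomes a stimulation pulse; a '0' becomes a silent slot of
    the same width, so the timing alone carries the pattern.  All
    widths are placeholder values, not calibrated parameters.
    """
    pulses = []
    t = 0
    for b in bits:
        if b == "1":
            pulses.append((t, pulse_us))
        t += pulse_us + gap_us
    return pulses

# The pattern 1011 becomes three pulses, at 0, 400 and 600 microseconds.
train = bits_to_pulse_train("1011")
```

Comparing the recorded analog response against this expected train — for instance by correlating pulse onset times — is what would let us verify that the circuit is actually reproducing the input pattern.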


For example, suppose program X calls for specific brain chips to receive electronic stimulation, and the brain produces analog signal Y in response. If Y resembles X — that is, if it is an analog version of X's output on a computer, in terms of a rhythm or a calculation — then we can reasonably determine that the biological substrate is doing computations for us, or follow the pathways of these neuronal action potentials toward a computational model for a proto artificial general intelligence. This is one solution, albeit a brutal one, to interfacing with a brain to develop better machine learning algorithms. Another method would be completely in silico; another still, synthetic in vivo.

The first approach is a reverse-engineering methodology: we could use hidden Markov models to find the reverse steps of the phenomenological process by which the brain's neural networks, with their hierarchical neurons spiking, give rise to neurotransmission as well as to subjective conscious experience in terms of computational power. A hidden Markov model weighs the probabilities of hidden initial variables given the outcomes observed. Eventually we could derive the algorithmic processing and translate it into machine code in silico. It seems counterintuitive to develop this artificial general intelligence without using biological substrates as models from which we can extrapolate data to speed up research.

The second option leaves us with completely synthetic in vivo modification of the scientific equivalent of a petri-dish brain — and the large moral ramifications of controlling a sentient being in the name of mankind.
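To make the hidden Markov model step concrete, here is a minimal sketch of the forward algorithm, which computes how likely an observed sequence (say, a recorded on/off spiking pattern) is under an assumed hidden-state structure. The two states and all transition and emission probabilities below are invented toy numbers, not estimates from any real neural data.

```python
def forward(pi, A, B, obs):
    """Forward algorithm: probability of an observation sequence
    under a hidden Markov model.

    pi  -- initial state probabilities, pi[i]
    A   -- transition probabilities, A[i][j] = P(state j | state i)
    B   -- emission probabilities, B[i][o] = P(symbol o | state i)
    obs -- sequence of observed symbol indices
    """
    n = len(pi)
    # alpha[i] = P(observations so far, current state = i)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    for o in obs[1:]:
        alpha = [
            sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
            for j in range(n)
        ]
    return sum(alpha)

# Toy two-state model: which hidden "circuit mode" best explains a
# recorded spiking sequence?  All numbers are illustrative only.
pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.9, 0.1], [0.2, 0.8]]
p = forward(pi, A, B, [0, 1, 0])  # likelihood of the observed sequence
```

In a reverse-engineering setting one would score many candidate hidden-state structures this way and keep the ones that best explain the recordings; in practice the probabilities would be estimated from data (e.g. via Baum–Welch) rather than set by hand as here.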


But this leads to an interesting question if we decide to pursue the in vivo routes to AGI: do we want to play God, as the expression goes, with the life of an animal, down to the deepest sensory-perceptive capacity that animal has? Is it a punishment worse than death to be turned into a biological computer and disposed of once the calculations are done? We kill animals all the time, but we try to treat them with respect, and I think this sets a scary precedent. Let's say the answer to the phenomenological binding problem lies ultimately in the physical domain, bound to hierarchical neurons and neurocircuitry. Do we expose ourselves to the temptation of immortality? Do we put our trust in science to think that technologies like nanobots need not rely on some form of encryption? We have to think about our own safety when it comes to protecting consumers of these technologies, as well as the animals involved in the study. Clearly, it is wrong to kill even one animal for a study, and bioethically impure to derive a brain without any experience and evolve it into a classical-computer flesh machine. But if we do it right the first time, with the data and instrumentation used in the experiment, we will never have to do that experiment again, nor will we have to subject humanity to some future state of telepathic overwatch and repression.


I'm not a professional philosopher — I consider myself an amateur — but I would say that as we enter the doors of consciousness at its most primal levels in a physical universe and retrace it back to its most primal sources, DNA and RNA themselves, we will gain an admiration and respect for the common lineage we all share. We cannot willingly allow ourselves to be taken over, deprived of free will, without the impetus of safety. Imagine a future of virtual-reality simulations, secured with the latest and most secure forms of quantum encryption, in which your conscious state interfaces with the virtual world via nanobots in your brain and body. All the while, you would have the capability to escape at any time, or to immerse yourself deeper in the virtual and augmented reality of mankind's technological future.


From clusters of hierarchical neurons to the entire neural circuitry of the brain, we will be able to find the exact cause of phenomenological binding via nanotechnology and picotechnology. This may sound like yet another panacea for the problems of the world today, but the small, discrete chunks of big problems require technologies that can give us the resolution we need to answer what happens at the smallest levels of perceived reality.


So imagine if one day we could see in real time, at nano- and pico-scale visual acuity, the entire step-by-step process of consciousness being transferred into an animal — a lab mouse, for example. We would know what it is like at the physical level for an animal to gain consciousness: how that consciousness is phenomenologically bound through a mathematical theory of everything, along with the technologies that arise from that theory. Nanobots could one day run on some futuristic transmission current of other, smaller particles, or on widely dispersed dark energy, for example — eliminating the need for electrons, possibly through photonics, if the research goes well enough. Imagine nanobots powered by the ambient and latent light, or the heat, of their surroundings, widely dispersed throughout the body of a small mouse across its reproductive life, letting us see exactly what happens throughout the body mechanistically at that level of reality and at that small scale. Imagine these nanobots wirelessly sending information, without necessarily interacting with the body through their own transmissions, via some future post-singularity technology like an encrypted form of quantum entanglement. What I propose is that after we observe all relevant mechanistic states in the process, we sit down and talk about the source of the problem.


As we do this, we get deeper into the problem of relying on the empirical at face value, relying on the physical to survive. What if seeing all of this information leads us to some sort of consensus that consciousness could not be due to some other cause, or multitude of causes, beyond the physical itself? Perhaps we are trotting over old ground, but it is worth a thought. Even though we derive physicality from the physical-abstract principles of reality, the abstraction of what it means to be a subjective mind will always change in the face of empirical evidence. The fact that you are living a life unsure of the maximum existence you could experience — with a lifespan that could reach until the end of the universe, after entropy — is one hell of an argument for why it matters to you. Because for as long as you live, you will come to know, through future individual advancement and massive collaborations in science, everything there is to know about anything, whether people like you like it or not.


Imagine the scenario where telepathy is mass-marketed and people buy into it. Quickly we would face the problem of needing an unbreakable encryption scheme to keep these nanobots from being hacked. What if there is so much computational power that classical cryptography is rendered null and void in every scenario? We would move on to quantum cryptography, but what then? What if the unregulated fantasy of decentralized AGI programmers like Benjamin Goertzel (whom I very much admire, but disagree with concerning the future prospects of open-source AGIs) takes over the mainstream culture of society? Defense, safety, and economic superiority would be sacrificed if rogue agents started possessing singularity-level AGIs for themselves.


OpenCog isn't what a singularitarian would preach to be anywhere close to a singularity-level AGI, but it will have to do. Surely the common winning argument is that an exponentiating intelligence — even with its own futuristic asymptotic limits in a post-theory-of-everything society — would be smarter than everyone on Earth many times over, exponentially accelerating its own incremental gains in intelligence for the near future.


Is that something we want in the hands of a terrorist, or any other disreputable figure in society? Even with a theory of everything, a great deal of human work will be required to reach the computational power needed to utilize it and build the AGI that can answer all of humanity's questions. Surely an AGI will tell us how we are phenomenologically bound, but perhaps the nanotechnology of today — crude nano-transistors in CPUs, for example — can lead us to that answer as well, through the sacrifice of one mouse to build an AGI methodology in silico.

