Our Path to a Post-Reality Future

Authored reality will reach full parity with base reality by the year 2048. Here's the evidence that we're well on our way.

"There's a one in billions chance we're in base reality." – Elon Musk

After making the above statement at Recode 2016, Elon Musk was needled by some in the press and across social media. The idea that we are currently living in a simulation makes most people uncomfortable. To be clear, there is no scientific proof that we exist in some sort of artificial construct; but modern research does little to rule it out, and you can’t help but wonder whether déjà vu is actually a glitch in our matrix. Science and technology will undoubtedly reach a point where authored reality becomes indiscernible from actual reality. I believe this will happen by the year 2048.

Much of the science and technology we consider mundane today was once considered impossible by most and utterly ridiculous by some. Imagine trying to describe the appearance and functionality of a smartphone to someone in the year 1918. The deeper scientists dig into human existence, the stranger and more theoretical the science becomes. Reasonable conclusions can only be drawn from valid data, but what if that data is impossible for human beings to uncover? Humankind is likely astoundingly ignorant of what we have deemed "reality," but that doesn't stop us from pursuing a facsimile. A perfect storm of high-fidelity perception mimicry will eventually lead us to a post-reality era, regardless of what reality actually is.


Our path to post-reality lies in our ability to read and write the very neural code our minds use to form what we perceive as reality. Once a synthetic neurological input is indiscernible from a natural biological input, our brains simply won't know the difference. This is not science fiction. Reading, writing, and delivering neural code is well underway at this very moment.

“We’re all hallucinating all the time. It’s just that when we agree about our hallucinations, that’s what we call ‘reality.’” – Anil Seth

Obviously, the brain has no direct view of the world outside of it. What we consider reality is formed by an amazing machine floating in the pitch-black darkness of our cranium. Cut off from the outside world, our brain accepts sensory information and does its best to route, process, and store the data it receives.

Our minds constantly write and rewrite rules that help catalog and normalize incoming data to compose an understanding of self and the world around us. The more validation a particular construct receives, the more our minds rely on that construct to form what we understand as “reality”. Be sure to check out cognitive scientist Anil Seth’s wonderful TED talk.


FDA approved and commonly available around the world, cochlear implants provide sound to the hearing impaired, even the profoundly deaf. This is made possible by converting sound waves into electrical signals, which are sent directly to the cochlear nerve fibers. The brain’s auditory cortex doesn’t know that the ossicular chain was skipped. The brain simply makes sense of the electrical pulses it receives, regardless of data origin.

Now, let’s say we ditch the microphone and sound encoding process and let a computer drive auditory electrical pulses directly to the electrode array within the cochlear implant. If the computer sent impulse data of a randomly generated unique sound that was never played over a loudspeaker, the brain would “hear” a sound that doesn’t actually exist. Or does the sound exist?
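The thought experiment can be sketched in code. The toy encoder below is purely illustrative — the band-center frequencies, current limit, and energy-to-amplitude mapping are all invented for this sketch, not taken from any real implant's signal processing — but it captures the core idea: split a waveform into frequency bands and map each band's energy to a pulse amplitude for a hypothetical electrode. Note that nothing stops us from handing the encoder amplitudes for a waveform no microphone ever recorded.

```python
import math

def band_energies(samples, sample_rate, bands):
    """Estimate energy at each band-center frequency by correlating the
    signal with a sine/cosine pair (a crude stand-in for a filterbank)."""
    n = len(samples)
    energies = []
    for f in bands:
        re = sum(s * math.cos(2 * math.pi * f * i / sample_rate)
                 for i, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * f * i / sample_rate)
                 for i, s in enumerate(samples))
        energies.append(math.hypot(re, im) / n)
    return energies

def to_pulse_amplitudes(energies, max_current_ua=1000):
    """Map band energies to per-electrode pulse amplitudes (microamps),
    normalized so the loudest band drives the strongest pulse."""
    peak = max(energies) or 1.0
    return [round(e / peak * max_current_ua) for e in energies]

# A pure 1 kHz tone should light up the 1 kHz electrode most strongly.
rate = 16000
tone = [math.sin(2 * math.pi * 1000 * i / rate) for i in range(512)]
centers = [250, 500, 1000, 2000, 4000]  # hypothetical electrode frequencies
amps = to_pulse_amplitudes(band_energies(tone, rate, centers))
```

To "play" a sound that never existed, skip `band_energies` entirely and pass any list of made-up energies straight to `to_pulse_amplitudes` — the downstream electrodes, and the brain, have no way to tell the difference.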

"Upon reading the code, we can then write the code." – Me

Cochlear implants are just the beginning. Scientists and researchers are hard at work making incredible strides in cracking the neural code our brains use to form what we perceive as reality. Emulating the brain’s ventral stream (the “what” pathway) and dorsal stream (the “where” pathway) is already behind many of today’s artificial intelligence and computer vision breakthroughs (e.g., self-driving cars).

The more we decode the way our brains work, the more intelligent our machines become. Of course, this also goes in the other direction: machines are increasingly influencing our perception of the world on a daily basis (e.g., Google and Facebook algorithms). I highly recommend watching "Do You Trust This Computer?" for excellent insights on the future of A.I.

Image from Neurobiological Background by Sven Behnke from University of Bonn, circa 2003.

Award-winning neuroscientist Sheila Nirenberg is the first person in the world to crack the neural code of electrical pulses produced by the retinal circuitry after the photoreceptors in the eye receive light. She was granted a MacArthur ‘Genius’ Award for doing so.

Inside the Nirenberg Lab at Cornell University, Dr. Nirenberg and her team are working on a prosthetic device that bypasses the photoreceptors and retinal circuitry, allowing for the direct delivery of pulse code data to retinal output cells.

The brain receives these pulses and processes them naturally. Dr. Nirenberg has developed both the hardware and software that turns photons into patterns of electrical activity inside the brain. A great video from Bloomberg sums this up well.
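To make the pipeline concrete, here is a deliberately oversimplified sketch — not Dr. Nirenberg's actual encoder; the cell count, firing rates, and evenly spaced spike timing are invented for illustration. Pixel brightness stands in for photoreceptor input, and the encoder converts it into the firing rates and spike trains that retinal output cells would carry to the brain.

```python
def encode_frame(pixels, max_rate_hz=100):
    """Toy retinal encoder: map pixel brightness (0.0-1.0) to ganglion-cell
    firing rates, standing in for the photoreceptors and retinal circuitry."""
    return [round(p * max_rate_hz) for p in pixels]

def spike_train(rate_hz, duration_s=1.0):
    """Evenly spaced spike times for a given firing rate (a deterministic
    simplification; real retinal output is stochastic and history-dependent)."""
    if rate_hz == 0:
        return []
    period = 1.0 / rate_hz
    return [i * period for i in range(int(rate_hz * duration_s))]

frame = [0.0, 0.5, 1.0]              # three "cells": dark, mid, bright
rates = encode_frame(frame)          # firing rate per cell
trains = [spike_train(r) for r in rates]
```

Swap the `frame` list for values generated by software instead of a camera, and the downstream pulses are identical in form — the brain would process an image that was never photographed.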


Now let’s say we skip the photonic encoding process from the cameras and send the visual neural code directly to the brain from an IC application running on a computer. Then consider that real-time graphics processing will improve exponentially by 2048 through a combination of light field rendering, wave field synthesis, AI, quantum processors, and other technologies – some of which we don't yet know of.

It’s not hard to imagine that real-time rendered images will be on par with our natural biological sight by 2048. For context, compare the real-time computer graphics Atari was pioneering in the early days of video games with the real-time rendering of Grand Theft Auto V running on a home gaming console only a few decades later.


Similar to Dr. Nirenberg’s quest to restore sight to the blind, there have been significant advances in restoring functionality and sensory feedback to amputees. Typically, a bionic prosthetic is connected to the patient’s residual limb, and electrodes are implanted in the shoulder area (where receptors are still intact). The robotic hand is equipped with sensors capable of reading pressure and temperature, which feed an encoder that outputs neural data directly to those receptors.

Communicating directly with the nerve endings, the patient’s brain receives a neural code that results in the sensation of having a biological hand. As with the examples above, it’s not a stretch to imagine authoring the stimuli: connect the implants directly to a computer, and the brain can receive synthetic data. This approach is the future of haptics. Take a deeper dive into Dr. Dustin J. Tyler’s incredible work.
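A rough sketch of the idea follows. The channel names, frequency ranges, and sensor-to-frequency mappings are all invented for illustration (real nerve-interface encoders are far more sophisticated); the point is that the encoder cannot tell whether its inputs came from physical sensors or were authored by software.

```python
def encode_touch(pressure_kpa, temperature_c):
    """Toy tactile encoder: map prosthetic-hand sensor readings to
    stimulation frequencies on two hypothetical nerve-interface channels."""
    # Mechanoreceptor channel: firing rate scales with pressure, capped.
    pressure_hz = min(300, max(0, round(pressure_kpa * 3)))
    # Thermoreceptor channel: firing rate scales with deviation from skin temp.
    thermal_hz = min(150, max(0, round(abs(temperature_c - 32) * 5)))
    return {"pressure_hz": pressure_hz, "thermal_hz": thermal_hz}

real = encode_touch(pressure_kpa=40, temperature_c=20)       # from physical sensors
synthetic = encode_touch(pressure_kpa=40, temperature_c=20)  # same values authored by software
```

The two calls produce identical stimulation patterns — downstream of the encoder, a grasped object and a simulated one feel the same.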


Yep, we’re probably looking at implants, or “wetware,” to reach the fidelity of base reality. Extracranial stimulation likely won’t cut it. Allowing implants in our bodies is a downright creepy idea for most of us.

However, over time, minimally invasive procedures will begin to emerge, such as syringe-injectable electronics. Eventually, a single procedure could implant a comprehensive neural communications platform that handles all aspects of neural input and output – far better than enduring multiple procedures and the complications that could arise from multiple implants.

An example of this approach could be the science behind Neural Dust. Developed by the Swarm Lab at UC Berkeley, the system jumps further up the sensory chain to deliver neural code directly to the brain’s cortex. Again, sending synthetic neural data to the transceiver from a computer allows us to author information for the brain to process naturally.

“Suffering is caused by being in the wrong place. If you're unhappy where you are, move.” – Timothy Leary

I believe many people will spend much of their lives in virtual constructs well before authored reality reaches parity with actual reality. As depicted in “Ready Player One,” society will likely opt to exist largely in alternative realities to escape the human condition.

Your Base Reality are Belong to Us

Many of you reading this will be alive to witness authored reality reach parity with actual reality. The evolution of the post-reality era may end up being the most profound period of human existence. It also comes with incredible responsibility. Even today’s technology is capable of creating existential crises: anyone can author their own reality using social media, deepfakes, news stories, and the like. Unless we move toward a post-reality future cautiously and thoughtfully, we could lose touch with our base reality entirely. It's entirely possible we already have.

ABOUT THE AUTHOR

Jason Crawford is currently CEO of Modal Systems, Inc., which he started with Nolan Bushnell (founder of Atari and Chuck E. Cheese’s). Modal offers scalable and affordable free-roam virtual reality solutions for enterprise and recreational use.

Twitter: @modalvr Instagram: @modalvr Facebook: @modalsystems
