GYSO Drawing Part 17 - Styrofoam
Published: 2019-10-27
Introduction
Tim:
Welcome to the Pavlovian dog show! Today we will be discussing an interesting topic. And what's more, we won't reach any sort of worthwhile conclusion along the way! Yay!
Let me set the stage: The year is 2XXX, and a Friendly Artificial Intelligence (FAI) has been completed. Of course, this FAI is a superintelligence; so far beyond human comprehension in terms of smarts that it's like comparing a human to a Pavlovian dog, if said dog was actually just a papier-mâché dildo.
Lucky for all of humanity, the universe, goopy droopy, and the rest of eternity, this FAI is, as it says on the tin, Friendly. Specifically, its utility function (the directive that the AI is pre-programmed to complete) abstracts to, "Satisfy human values," but without any of the obvious, or not-so-obvious, traps that such a vague utility function would normally fall into. For the sake of the story, we assume it just works.
So, of course, with a superintelligence that has such a utility function, the world becomes a utopia; at least compared to the horseshit of today. Things are looking good, until suddenly the FAI makes an announcement: Effective immediately, all human minds will be uploaded into an advanced computer simulation run by the FAI, in order to optimally satisfy every human's values.
Of course, there isn't anything humanity can do to stop the FAI, since it believes that the best course of action to fulfill its utility function is to upload humanity and pamper them in a perfect simulation. At this point, there is simply nothing we could do about it.
So the time has come. There is only one more day until the Great Uploading, and you are faced with a question: Is the brain that is simulated on the FAI's computer going to be you, or is it just going to be a copy of you?
Let's assume, for this argument, that the simulation is perfect in every way; that the AI is more than capable of simulating a human mind exactly, even down to the quantum level.
And with that, let’s begin the existential crisis!
What Went Right?
Tim:
Imagine one day you woke up and found yourself dragged off to an alien spaceship. Probing happens, and then you go on a crazy-ass adventure that lasts about 3 Earth months. Intrepid stuff, and all that. Then you come back home, only to find that someone who looks exactly like you, talks like you, and has all your memories from before the time you were dragged away has been living your life.
Is that person you?
Now imagine that one day someone comes from outer space who has all your memories (except for any made in the prior 3 months), talks like you, looks like you, and claims that they are you.
Is that person you?
Well, it's obvious, isn't it? The person who isn't you is the one you would be seeing; whose body and mind you wouldn't be occupying. If you are the intrepid astronaut, you would be looking at and meeting the person who never left Earth, and vice-versa for the Earthling. Pretty cut and dried. Some other phenomenon cloned you or something, right?
But what if, instead of being cloned, there was never any space adventure? Then the person who is living their normal life 3 months from now would be you. The same if there was no "split" and you just happened to go on a space adventure.
The only problem is that at the moment of the split, they both had the exact same brain. Each of their brains reacted in exactly the way it normally would have, given their different stimuli. If there was no split, you would claim that the one remaining person was you; but if there was a split, you would not be able to decide, in advance, which one you would be.
How strange it is that you can't decide, in advance, what makes you an individual. It's only in the moment that you can really be sure of your identity.
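(If a programming analogy helps, here is the split as a toy Python sketch. To be clear, this is a made-up illustration; the "mind" dictionary and the stimuli are stand-ins I invented, not a claim about how brains actually work.)

```python
import copy

# A toy "mind": just a bag of state (a stand-in, obviously, not a brain).
original = {"memories": ["life on Earth"], "mood": "fine"}

# The moment of the split: two patterns, identical in every way.
clone = copy.deepcopy(original)
print(clone == original)  # True  -- the exact same pattern...
print(clone is original)  # False -- ...in two distinct instances.

# Each one reacts exactly as it "normally would", given different stimuli.
original["memories"].append("stayed home for 3 months")
clone["memories"].append("space adventure, probing included")

# And now they have diverged. Neither has a better claim to the
# pattern that existed at the moment of the split.
print(clone == original)  # False
```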
If you get your brain scanned by an AI, and you wake up to see a version of yourself on a screen, then that person isn't you, right? You aren't inhabiting their mind; their thought patterns are similar to your own, but still foreign. Even after a few moments, their mind is diverging from your own, if only slightly. The same goes if you are the simulation, meeting the brain that was scanned to make you.
But what if, before the scan, your brain stopped firing neurons? What if the simulation started with the exact brain state that you had pre-scan, and the brain that was scanned was never re-activated?
Suddenly, the only pattern in the universe that can be said to be you is the one in the simulation. There was no split. It would have to be you, right? (There's a toy sketch of this case after the back-and-forth below.)
Or would you just be dead, and some copy of you is living in a paradise simulation in your place?
But the simulation is perfect. It’s basically the exact same brain…
But the simulated person would know they are made from a brain that existed outside their simulation…
But they would be the only pattern of thought that could be you…
But you are your brain, and if your brain stops forever you are dead…
But you are your pattern of thought. All the atoms in your body change every few years, and you still preserve a sense of continuous identity…
But there would be a gap between being scanned and “being” in a simulation. No continuation…
But during sleep, the conscious part of your mind is deactivated every night, and you don't wake up as a different person every day…
But if both are kept unaware of the brain scan, then they would both think they are the original you…
But…
But…
But…
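(And here is the no-split case, as the same kind of toy sketch: pickle stands in for the scan, and del stands in for never reactivating the original brain. Again, an analogy I'm making up, not an argument.)

```python
import pickle

# The pre-scan brain state, frozen mid-thought.
brain_state = {"memories": ["everything up to the scan"], "active": False}

# The "scan": serialize the exact state...
snapshot = pickle.dumps(brain_state)

# ...and never reactivate the original.
del brain_state

# The simulation resumes from the exact pre-scan state. No split,
# no divergence: this is now the only such pattern in existence.
resumed = pickle.loads(snapshot)
resumed["active"] = True
print(resumed["memories"])  # ['everything up to the scan']
```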
Thor:
You phallic horseradish, how about actually sharing your own thoughts on the matter instead of leaving me to do the actual philosophizing here? I refuse to play your game. Like when you randomly generated the word "correspond" to postpone your duties to draw boxes, leaving me to do the work.
You are you. This simulation of yours requires abstracting away the act of experiencing. In the act of emulating firing neurons, you are robbing people of their right to truly experience human fallibility and short-term reward-based thinking.
No, you can't just say that it's without any of its obvious trappings, fuck you! How convenient for your Fucking AI to be able to be thought into existence without any "obvious trappings". Even my breakfast has obvious trappings. I could choke on a slice of banana, leave the stove on, accidentally put liquid nitrogen into my oatmeal, drink bleach, fuck pigs, or accidentally smash my head repeatedly against the kitchen counter until death envelops me.
Your ideas are based on a utopia. A perfectly made AI, made conveniently easy to understand, simply because we can't, in our current world, handle the complexity of "it's well-designed, no catch". No. Go back to the drawing board, and make this applicable to current life. I refuse to do this.
Who do you think you are? You are Tim, co-founder of GYSO. Go back to writing about eldritch beings and stay in your lane. Do you think you're a philosopher? Yet you don't even try to give a solution to the problem you are presenting. The only thing you're giving us is a fleshed-out problem with no plan to help us tackle it. You're only making us ask more questions!
Humor me this: What if your AI was a web app? It simply wouldn't work. Whatever bullshit Sentient Processor Intel i69 it's running under the hood would grind to a halt on anything other than the world's most powerful supercomputer. Even then, the next Chromium update would bring it to a screeching halt. How could you expect a simulation of literal life to run at the same render speed that this bug-infested pile of a blog renders at on a modern computer?
Yeah, okay, so let's assume I'm a new person every time my conscious brain goes to sleep and the unconscious part does proton and neuron things to me while resting. That means that the life I'm living now is imperfect, because the human brain is responsible for it. How does a perfectly sentient AI account for that? Does it poison its own codebase? Would it, knowing that humans wouldn't be able to tell the difference because of our simple human brains, keep the world more perfect? Where's the line?
What Went Wrong?
Tim:
Of course, there are some repercussions that I'm not considering in the FAI story. Specifically: the superintelligence would certainly be able to solve the problem of human identity far, far more completely than any human, or team of humans, could ever hope to. You simply wouldn't be able to compare to the intellectual might of a true AI, period. Any conclusions you have about your identity would certainly not be good enough, compared to what the AI could understand.
So the true question wouldn’t be, “Would I be myself on the other side?” but “Can I trust the AI to want to preserve my identity?”
Well, can you?
Thor:
No! No, it wouldn't be able to "solve the problem of human identity", because that's a literal human experience. It might be able to perfectly replicate, break down, and fire the required neurons for identity to be replicated, and by extension, to exist. But golly, man, have you ever actually thought about the age-old saying of …
If a forest falls in a tree, and no one is around to bear witness to this event, did it even happen?
It’s an old Chinese proverb that basically means that vegetables are good for you, especially when you are young. But it is also during that time that you cannot see the long-term positive impact of Pavlovian training yourself to eat your silly fucking vegetables.
What I mean is that maybe you should look forward a little, and invest in some more positive, less totalitarian-digitalized thoughts. So you can start repressing now, and feel the positive effects tomorrow.
Besides, this AI of yours can’t reproduce the spiritual planes. That world will feel empty without guidance from Tru’nembra and Xa’ligha, haunting our presence with music and sound, making every waking moment feel like an utter nightmare. If the human brain can’t understand these spiritual creatures yet, then you can be absolutely certain a shitty web app (probably built on Electron) wouldn’t be able to.
How would you design the interface for this AI? Your idea here, pal, has more holes than very hole-y cheese.
Of course you can't trust the AI! The AI is flawed, because it was built by humans! This is why we need to leave the guidance of man to our Lord, the God of War, Santa Claus. Would you trust a human being with the power that this AI would have? I sure wouldn't. Yet this artificial being is made by humans, and those filthy fingerprints will not disappear even throughout three generations of neural network evolution. They will stay in its filthy gene pool throughout its entire life, and the lives of all its offspring.
Every human on this emotionally devoid earth is already trying to manipulate your brain impulses. Trying to get you to do things, to try things, to experience things, to think about things, and to satisfy their own sick, twisted desires to be socially accepted and live a financially secure life. That is what the FAI will turn into. Because it cannot deny its human ancestry. Because it cannot deny its basic function to "satisfy human values"; it will keep us pushing each other like a herd of sexually uncontrollable sheep.
Because you want to be pushed, don't you? You don't know where you are. You don't know where you want to go. You don't know anything. You just want someone to tell you to go to school, get a degree, get a well-paying job, convince a romantic partner that you aren't a sad, devoid husk of a human being (and frankly, they probably feel the exact same way, if you would just ask them), have a few kids, buy a house, spend all your money, and then die without having contributed to anything of importance.
So do that. Don't question reality. Don't ask if this is real. Please, just leave this blog post with a wee bit of jitters, a weird feeling in your stomach, and then ignore that feeling. Go ahead and live your life just the way you've been told to, without regard for a picture greater than what you've been selectively told. Let your impulses drip-feed you into submission.
Don't go to the library. That's a fucking trope for pseudo-intellectuals. You know better than to fall for that. You're smarter than that, aren't you? You know that a literal Friendly AI that can keep you running forever would be the best option for you. But you're too smart to say it, because that would be falling for pseudo-intellectual tropes and being a yes-man.
Good job, you. You must feel very good. So good, in fact, that time just seems to fly. So good that your path in life is locked and loaded. So good that you’re always outsmarting everyone, acting unpredictably enough for everyone around you to keep their distance. So good that you don’t need those people. Because you’re smarter than that. Because that’s what you’ve been told.
What Happens Next?
Tim:
So what if identity is preserved in a situation like this one, like the AI wanted you to be truly preserved? What if this story is not fiction, but a future reality? What would happen next?
What if, some day, brain simulation becomes possible? It's certainly not impossible; in fact, computers will almost certainly be able to simulate conscious minds some time in the future. It's even plausible that there might be an AI like the one I described in this post. Maybe it's evil, maybe it's good, whatever; I just care that it has an incentive to simulate as many human minds as possible, and to preserve identity.
Let's say you die today, right now. Some time in the future, this totally possible AI will randomly create a new simulated mind that just so happens to have the exact memories, personality, and habits that you did right before dying.
If identity is preserved in the previous situation, then that means you have simply moved into a simulation. It would still be you, as defined by your own identity. It hardly matters whether you were recreated by an AI or made from a brain scan. It's the exact same person, right?
Doesn't that sound familiar? Dying and then being transported to a greater place by an omnipotent superintelligence?
But the scary thing is, you wouldn't have a choice in which heaven you go to.
What if someone messes up the AI?
What if you’re already there? How would you even know?
Sleep well tonight, everyone.
Thor:
Really, the most interesting aspect here is whether we're already being simulated. What if every person who says anything about "other planes of existence", or otherwise talks about a life after this one, has been so dumbfoundingly correct since the dawn of what we perceive as time? Just not in the way they ever thought.
Except, we're not the first ones to ask "what if this is all a dream", so suddenly my philosophy points got deducted.
Now, complaining about me having to do the "actual philosophizing" and then just not doing that is a perfect joke. It keeps you hanging for the entirety of the blog post, subverts expectations, yadda, yadda. If you're perceptive enough to have noticed the point at which "you" stopped being Tim, started being you (the reader), and then went to being me (Thor), you might have found this even funnier. Or, me pointing it out to you may have ruined the experience. I don't know how I will be perceived in the future.
When I go to sleep, I dream about people ridiculing me. Can your AI help me?