
Tuesday, August 7, 2012

Conscionable Consciousness Conduction

This is something I wrote a long time ago, but which I believe still holds up. It is a method I would be willing to employ to transfer my consciousness into another body, or into a computer.

To illustrate why this is important, let me say that I would not be willing to use a teletransporter that copied my entire physical form, sent the data to another terminal which reconstituted me, and then destroyed the original. Although I have no philosophical objections to this happening, I find the idea highly emotionally disturbing and would never go through with it. For that matter, even if the original wasn't destroyed but was rather pulled apart and transferred, I still wouldn't do it for reasons I think are obvious.
 
Likewise, I would not be willing to be, say, put to sleep, have my brain scanned, then be uploaded to a machine and have my body destroyed. I would not object, of course, to having a copy of my mind made, to be run later or used as a sort of backup.

However, there is a way I would be willing to actually abandon my body and live in a virtual world (assuming, of course, all assurances of liberty and safety, etc.). If my various faculties — sight, hearing, language, independent limb motor control — were to be transferred one by one to an emulator running on a computer connected to the various sensors and devices which would temporarily mimic said faculties, I would be able to track the progress of my mind from my head to the computer. I imagine the process going something rather like this:

I sit down in the chair, my head shaved and access plugs and sub-cranial scanning mesh installed. The technician behind me takes one long wire and inserts the end of it into the plug square in the back of my head. He asks me if I'm ready. I take a deep breath and then nod. I hear a switch flip, and then I vomit. My body thinks that I'm having a stroke, or have an eyeball knocked out of its socket, or am spinning faster than my eyes can focus on anything. After a few moments, I start to orient myself. I am looking ahead at a large black box, the size of a television set, with a forest of instruments sticking out of it. I also see my body, sitting in a chair, a host of medical equipment and one technician behind me. I raise my right hand from the arm of the chair, and see it both out of the corner of my eye and from across the room simultaneously. Finally, I come to grips with the fact: my brain is getting direct data from a video camera hooked up to a computer.

The technician asks me again if I'm ready. I long ago memorized the sequence of the procedure. I hear another switch flip and a loud humming, and slowly my vision of the computer in front of me fades. However, I can still clearly see my body. Nothing has changed, except that the part of my brain which receives data from my eyes has temporarily stopped working. Luckily, I am hooked up to a camera, which replaces the function of the eyes, and a computer, which now hosts the software needed to interface between the eyes and the cognitive and reflexive areas of the brain.

The technician inserts another wire into the top left of my skull. Now I feel as if I have a third arm. I move the arm on my body, and it responds as it should. Then I move this new appendage, and see something wave in front of my new field of vision. It is a robot arm, identical in shape and construction to my natural arm. When the inhibitor is turned on, it prevents my brain from sending signals to my muscles, and I am no longer able to control my fleshy right arm. But I can still quite easily control both my left arm and the robot arm to the right of my field of vision.

This continues — left arm, left leg, right leg, diaphragm — until every part of my brain has been mapped, transferred, and inhibited. Now comes the final moment. Up to now, I have been physically connected to all of my wetware. I could have, at a moment's notice, regained control of any part of my brain. But now the technician removes the first wire he inserted. My visual cortex is completely dormant and no longer connected to the computer, yet I can still see my body — I am still connected to my body — and I can still feel every part of it as if I'm still in my brain.
 
And so on. In this way, there would be no point at which I could feel “myself” “die” or disappear. I would simply phase from one substrate to another, and be awake and (at least nominally) in control the entire time. Of course, none of this might ever be possible, but it’s not completely unreasonable.
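Stripped of the narrative, the procedure is a loop over faculties: map one, start the emulated version alongside it, confirm that nothing felt interrupted, inhibit the biological version, confirm again, and only after every faculty has been handed over sever the physical connection. Here is a purely hypothetical sketch of that structure; every class, method, and name in it is invented for illustration:

```python
# A purely conceptual sketch of the faculty-by-faculty handover described
# above.  Every class, method, and faculty name here is hypothetical and
# exists only to make the structure of the procedure explicit.

FACULTIES = ["vision", "right arm", "left arm", "left leg", "right leg", "diaphragm"]

class Brain:
    def scan(self, faculty):
        return f"<wiring map of {faculty}>"
    def inhibit(self, faculty):
        print(f"biological {faculty} inhibited")
    def disconnect(self):
        print("wetware disconnected")

class Emulator:
    def load(self, faculty, mapping):
        print(f"{faculty} now running on the emulator")
    def attach_peripheral(self, faculty):
        print(f"camera / robot {faculty} attached")

def subject_confirms(faculty):
    # Stands in for the subject noticing that nothing felt interrupted.
    return True

def gradual_transfer(brain, emulator):
    for faculty in FACULTIES:
        mapping = brain.scan(faculty)        # map the relevant wiring
        emulator.load(faculty, mapping)      # run it on the new substrate
        emulator.attach_peripheral(faculty)  # camera, robot arm, and so on
        assert subject_confirms(faculty)     # continuity check before...
        brain.inhibit(faculty)               # ...the biological version goes quiet
        assert subject_confirms(faculty)     # ...and after
    brain.disconnect()                       # only now is the wetware unplugged

gradual_transfer(Brain(), Emulator())
```

The essential design choice is that the disconnect happens last, after every individual step has already been verified from the inside, so there is never a gap in which I am not somewhere.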

Friday, April 20, 2012

Predicting the Future (and Other Abilities We Don't Have): Part 1

Humans are good at lots of things.  This series of two posts is about a number of abilities that are not among those things.  Part 1 discusses the fashionable tendency to make guesses about the future course of society and then heavily imply (without usually stating outright) that these guesses constitute accurate predictions.  Part 2 discusses the trouble we tend to have in viewing ourselves in a historical context.  This makes it easy for us to believe that we happen to live in revolutionary times, or that none of the old rules apply.  These types of charlatanism can have far-reaching consequences, as it turns out.

Part 1:  Extrapolation is an Art, not a Science

One of the ways to get people to pay attention to your predictions is by preaching the good news: eternal life.  Ray Kurzweil has made a number of predictions about the bright future that technological growth will bring us, with this being by far the most notorious.  Although his version of immortality, uploading ourselves onto computers, differs somewhat from the standard Christian view, one can't help but notice the religious flavor of this prediction.

Kurzweil's other predictions for this century include, yes, flying cars, but also reverse-engineering the human brain, nearly perfect simulations of reality (for our digital selves to live in), and, crucially, an AI that is more intelligent in every way than all humans combined.  He has freed himself from any responsibility to explain how these things will be accomplished.  Nobody has the slightest idea how to do the interesting ones.

I will defer the actual technical explanation of why these are truly goofy predictions to authors who have basically handled it: Steven Pinker, Douglas Hofstadter, PZ Myers, and many others have noted how technology and scientific discovery don't progress in the way Kurzweil has claimed.  Instead, I want to draw attention to the fact that these attempts to predict the future are actually a very human tendency.

In the early 1960s, progress in programming machines to do certain tasks (like proving theorems) gave researchers supreme confidence that essentially human-like AI would be a solved problem within 20 years.  What they should have said was that, at that rate of progress, it would be done.  What actually happened was that computers became more and more sophisticated but left AI behind: the problems researchers were attacking were just much harder than they had anticipated.  Even relatively simple tasks like constructing grammatical sentences proved to be far beyond their grasp.  Now, the most successful language tools largely involve throwing our technical understanding of language out the window and using probabilistic models.
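To make "probabilistic models" concrete, here is a toy sketch of the idea, not a description of any real system: instead of encoding grammar rules, just count which word tends to follow which in some text, and sample from those counts.

```python
# Toy bigram language model: count which word follows which, then generate
# text by sampling in proportion to those counts.  No grammar rules anywhere.
import random
from collections import defaultdict, Counter

corpus = "the dog chased the cat and the cat chased the mouse".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start, length=8):
    """Repeatedly sample the next word in proportion to how often it
    followed the current word in the corpus."""
    words = [start]
    for _ in range(length):
        counts = following[words[-1]]
        if not counts:
            break
        choices, weights = zip(*counts.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
```

Real systems are vastly more sophisticated than this, but the spirit is the same: statistics over observed language, not a theory of language.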

Economies are not spared from erroneous predictions about the future.  Kurzweil and others jumped on the tech-boom bandwagon, claiming in 1998 that the boom would last through 2009, bringing untold wealth to all.  Maybe they should have been reading Hyman Minsky instead of Marvin Minsky.

Enough about the good news.

The other way to get people to pay attention to your predictions is by telling them the bad news: social breakdown and the end of the world.  Overpopulation is an issue along these lines that receives attention disproportionate to the seriousness of the claims made by its scare tacticians.  Among these claims is the belief that we are imminently reaching the carrying capacity of the Earth, at which point starvation, crowding, and wars over scarce resources will tear human civilization to pieces.

This hypothesis relies on progress not happening, the opposite of what the singularitarians rely on.  But the very same question can be asked of both: how do you know?  This is where it becomes apparent that extrapolating patterns is an art for the Kurzweils of the world.  If you extrapolate one variable, you get intelligent machines.  If you extrapolate another, you get the end of the world.  But if you extrapolate yet another, say the total fertility rate (TFR), it doesn't look so scary.  Defined as the average expected number of children born per woman, the world's TFR has been steadily declining in the post-war period, from almost five in 1950 to about 2.5 today.  As it approaches two, the world population approaches equilibrium.
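As a back-of-the-envelope illustration of how much the conclusion depends on which curve you draw, here is a naive linear extrapolation of that TFR trend.  The figures are just the approximate ones cited above, and the straight-line form is an assumption for illustration, not a claim about demography:

```python
# Naive linear extrapolation of the world total fertility rate (TFR).
# The figures are the rough values cited above (about 5 in 1950, about 2.5
# around 2012); they are illustrative, not authoritative data.
tfr_1950, tfr_2012 = 5.0, 2.5
slope = (tfr_2012 - tfr_1950) / (2012 - 1950)  # change in TFR per year (negative)

def tfr(year):
    """TFR in a given year if the post-war linear trend simply continued."""
    return tfr_2012 + slope * (year - 2012)

# Year at which the trend line crosses two children per woman, roughly the
# level at which the population stops growing.
year_at_two = 2012 + (2.0 - tfr_2012) / slope

print(f"Extrapolated TFR in 2050: {tfr(2050):.2f}")
print(f"Trend line reaches 2.0 around {year_at_two:.0f}")
```

Of course, the same straight line, pushed a few decades further, predicts a TFR well below one, which nobody believes.  That is exactly the point: which variable you pick, and what shape you assume its curve takes, does most of the work in these predictions.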

Phony overpopulation scares are common in the history of anglophone countries, from Thomas Malthus to American anxiety over the "yellow peril" around the turn of the century (see Jack London's The Unparalleled Invasion for a rosy portrait of the future).  Wealthy people and business interests are often the biggest proponents of the theory that population growth is the largest problem facing the world.  Conveniently, it's one of the only major global issues that isn't their responsibility.  In reality, the only reliable way to lower growth rates is to facilitate the economic growth of poor countries to the point where people there have a decent standard of living.

The danger in blowing the perils of overpopulation out of proportion is that it leads people to prioritize population control above reproductive rights and, more generally, morality.  If it really is that serious, then we have carte blanche to do whatever is necessary.  The bottom line is that, despite the very real possibility of overpopulation becoming an issue, there is no reason to think it is serious or imminent enough to change the goals we would otherwise have.  Our immediate task is still figuring out how to get communities to lift themselves out of poverty while handling climate change and other real crises.

We have a predisposition to weigh the likelihood of possible futures, either good or bad, based on how exciting or terrifying they are instead of how probable they are.  Anyone interested in solving problems should be aware of this bias.

Part 2 covers a second, related bias that people have: the impression that the times we currently live in offer wider and more revolutionary possibilities than existed in the past.  This impression, created by the fact that we live now, not in the past, is the source of huge blunders and the general abandonment of reason.