I'm gonna write an angry post. I haven't slept well this week and can't get my thoughts together well enough to write something I'm not immediately invested in, so I'm gonna write this instead. If anyone is offended by it (and quite a few people ought to be) I can't apologize in all honesty. Five months ago I might have been able to, but not right now.
A few weeks ago I went to my great-grandmother's grave alone for the first time. I'd been there before with my family, but this was the first time I spent any great length of time there. It was a bizarrely warm March day, with a bright sun and green grass, rather than the usual Minnesota Spring blizzard. I sat by my grandma's grave for an hour that day. When I first got there, I was just looking around, making sure it was tidy. I thought about the prosaic aspects of being at a graveyard, of my bike ride from Minneapolis to Falcon Heights, of the previous times I'd been. Then I stopped avoiding my real purpose in being there, and I started trying to articulate my thoughts and feelings, for the first time, four months after she had passed away.
Up to that point, from the day she died December 4th, I hadn't cried. In fact, I hadn't really cried outside of watching movies in about a decade. And I hadn't cried at her funeral, or at my subsequent visits. And I didn't cry right then. But I sat down on the grass and started talking. I started saying mundane things about missing her, and finally coming to see her, and how strange it felt for her not to be around. And then the same part of my brain responsible for writing this blog kicked in, and I started drawing straight lines. Straight lines are what I do. I don't curve around inconvenient ideas (in as much as I can help it — we all have our biases). I try to think straight through all the relevant information I have in my head.
And a line of thought began to develop. My grandma was dead. Her body lay six feet beneath me in a coffin in the ground. This was a physical reality. Everything that had ever been her was in a box beneath the earth. I pictured her brain, as it was then, several months after her death. It certainly was not a pleasant image, but that was what everything referred to as "her" was now. All we are is a subset of the pattern of neuron firings in our heads. And that pattern had come to an end.
That pattern hadn't gone anywhere. It hadn't transcended matter. It wasn't part of a soul, or spirit, or chain of reincarnation. Everything that had been my great-grandma, that had experienced love and life and war and migration, all of that was now a pool of gray mush inside a very slowly crumbling skull. It had been sustained as part of a self-perpetuating chemical process for nearly a century, which had quickly degenerated and ceased to be. Now, for some people, that is an ugly and horrible thought, that that is all we are. But to me, it's heartbreakingly beautiful: this pile of gray mush pushing electrons around via sodium and potassium exchange wrote the Bible, built St. Peter's, painted the Sistine Chapel, and composed the Ave Maria. That is a miracle.
And then my mind took the next step forward: all of that ability and potential and memory and personality was gone for my grandma. It had gone out like a candle, with barely even a wisp of smoke to show for it. It was gone. She was gone. She was gone, and she was nowhere, and she never would or could come back. There's a physical law that says as much. The laws of physics literally dictated that my great-grandma had ceased to exist for eternity. Except that a huge host of highly deluded people believed that she hadn't.
Religious people believe in the eternal soul. They hold that there is some essential, everlasting part of us that continues to exist before life and beyond death. They believe that we are never really gone, and some of them even believe that we will join our loved ones in eternal paradise after death (or judgement, or whatever fairy tale they wish). But they're wrong. And they know they're wrong. How do I know that they know they're wrong? Because they grieve.
If I had the slightest shred of belief that my great-grandma was not well and truly gone for all eternity, I would not have shed a single tear that day. But as I came upon the above line of reasoning, I started crying. Sobbing. Huge, painful dry heaves. I sat on the grass for forty-five minutes straight and cried into my hands. I cried because I knew my grandma was gone. I knew she was gone. I knew it right down to my bones. I knew it the way a baby zebra knows its mother is gone after finally finding her lion-eaten corpse. It was beyond thought, beyond culture or memory or ideology. It was chemical.
And in my grief came also anger. Outrage, in fact. Outrage at the fact that religious people would dare to grieve at a funeral. That they would dare to wail and moan about the supposed "loss" of a loved one. The hypocrisy of it galled me. Had I any hair, I would have been tempted to tear at it. To claim that there is an afterlife where your relatives wait for you before an eternity of bliss, and then to bemoan their passing struck me as obscene. And on clear-headed reflection, I can do nothing but stand by that line of thought.
If a religious person thinks that a deceased person is merely in another place, where they themselves will eventually go, then grief is not simply unnecessary, but nonsensical. We do not grieve when our loved ones move away. We do not grieve when the brother we're angry at leaves town and we know we likely won't speak to him again. We do not grieve when we leave a job, knowing we'll never again see our coworkers. We might be sad, or disappointed, or upset, but we do not grieve for separation. We grieve for death. Because we know right down to our DNA what death is, and all the religious platitudes in all the holy books read by all the priests and sages can't stop us knowing it. And I think that claiming that "it's God's plan" and "she's in a better place" is the absolute worst of sanctimonious, hypocritical delusion.
If you want to claim that you are religious, and believe in a God, or a Soul, or an afterlife, then you do not get to fucking grieve. You get to be sad and annoyed and impatient, because you won't get to see your loved ones for a little while. But what is the remainder of your life compared to eternity? Nothing. Literally, mathematically nothing. So just don't. However, if you accept your grief for what you know it to be, give up your childish insistence on magical thinking and ancient fairy tales. Accept that the universe is a system of particles interacting in infinitely complex ways, guided by blind, stupid natural laws which still somehow manage to produce the absolute miracles of thoughts and songs and love and life. If you insist on keeping your holy books and imaginary creatures, I won't judge you. But only atheists get to grieve.
Friday, April 27, 2012
Tuesday, April 24, 2012
Jaynesian Consciousness, or Why Consciousness Is Not What You Think It Is
Many people use the term "consciousness" to mean a huge variety of things. In this talk by John Searle, he attributes consciousness to his golden retriever. I have had conversations with people who say that consciousness is a matter of degree, all the way from plants up to humans. The idea that consciousness is somehow a fundamental and pervasive feature of biology is very common — but it is simply wrong.
If you've ever driven a car for any length of time, you've almost certainly had the experience of driving for many minutes at a time, and only coming to realize very near the end of your trip that you'd reached your destination. For miles and miles, you monitored traffic, changed lanes, took turns and exits, all while blissfully daydreaming or listening to your favorite music. The realization that you had arrived might have come as a bit of a shock, a record-skip from the last moment you were conscious of driving to that moment, when you became conscious of it again. The entire time, your brain faithfully carried out all the complex, precise movements required to keep the car on the road and going in the right direction without your conscious awareness.
If consciousness were somehow fundamental to human cognition — or cognition in general — this would not only be impossible, it would not even make sense! However, it is very possible, and extremely common! Nervous habits are often completely outside consciousness until pointed out. The vague recollection of dreams — not to mention their very existence — is another place where consciousness is shown to be fuzzy and transitory. Various drugs that can destroy the ego or cause us to have blackouts are similar. Hypnosis and schizophrenia, phenomena that suppress or eliminate conscious control and replace it with hallucinatory or external control, would be just as absurd. Spirit possession, found in a multitude of cultures, would require an actually supernatural explanation, rather than a more prosaic psychological one. The very notion of inspiration is entirely unrelated to consciousness, in fact! Invention and intellectual discovery, often naively identified with conscious reasoning, are in fact almost always the result of sudden flashes of insight which come upon one in the shower or while taking a walk, rather than being consciously worked out piece by piece from premises.
Daniel Dennett is fond of saying that consciousness is an illusion. I think that's too strong. It's more accurate to say that the fundamental and all-encompassing nature of consciousness is an illusion. What seems to us the basic operating principle of the brain is actually a much more limited object. Others attribute reason, logic, memory, understanding, and planning to consciousness. However, these are all separate things. The term "consciousness" is best reserved for the self-introspecting ability that seems unique to humans. It is that constant stream of language we hear in our heads at all times, almost without interruption, which allows us to form a sort of internal mind-space, and to give ourselves declarative commands in the form of decisions and arguments.
Various animals share almost all the cognitive features of humans in some combination. Dolphins are highly intelligent, playful and social. The other great apes share our sociability, and to some extent our language. Many animals, from chimps to pigeons, can either learn or be taught to recognize themselves in mirrors. Dogs and crows can recognize individual humans and react to each in unique ways. Chimps, crows and some fish make and use tools. Ants, termites, spiders, and birds build homes. In fact, it is extremely difficult to come up with a human faculty which is not also expressed by some animals.
One ability which very probably does distinguish us from all other animals is our ability to model the world around us in certain ways. An important part of brain function in any animal is modeling the world it inhabits. This allows it to plan and execute movement in useful and beneficial ways. Without a mental representation of the world, movement would be meaningless and uncoordinated. An animal's brain must know — that is, represent — the details of its environment and its own body in some way so that it can interact with them. Many intelligent animals, including us, take this a step further. We do not merely build models of our physical environments, but also of our social environments. It must be the case that, up to some point in our evolution, humans went no further than this. But eventually we took it another step further: we made models of mental environments. That is, we created models of how minds work, presumably whenever it was that we figured out that other humans had minds.
And that is exactly what consciousness is. It is the ability to make abstract, multi-level models of minds, including our own. This ability is granted to us by the linguistic relationship between sounds and meanings, in combination with a cultural focus on self-hood and narrativity ("narrativity" referring to our habit of telling stories about ourselves and events around us regardless of whether such stories actually relate in any way to reality). This ability to simulate the mental world not only lets us generate new ideas and inventions, it also lets us model the inner world of other people, to guess their thoughts and motivations. Here I don't mean empathy, which chimpanzees almost certainly share with us, but rather an ability to very literally read another person's thoughts, to form words in your head which are likely similar in meaning to the words they are forming in theirs.
This analogical mind-space, and the cohesive sense of self that it leads to, was first described by Julian Jaynes in his criminally misinterpreted and completely underappreciated book The Origin of Consciousness in the Breakdown of the Bicameral Mind. Daniel Dennett is one of the few thinkers on consciousness who openly acknowledges Jaynes's influence on his own ideas, but many others have proposed effectively identical characterizations of consciousness, most notably the neuroscientist and philosopher Thomas Metzinger, whose monograph Being No One lays things out in great detail.
The more I come to understand consciousness, the more tentative my grasp on my own consciousness feels. I become more and more aware every day of just how little of my everyday life I am conscious of. Malcolm Gladwell's Blink, derided by many as anti-intellectual, is in fact an excellent document on the limits of our conscious thought, and at the same time of the power of our brains as a whole. Thinking, reasoning, learning, talking, inventing, discovering, and, for the most part, acting are all non-conscious events. Consciousness is just a curved mirror held up to our mental world, reflecting itself and its surroundings.
[Edit: Thanks to my friend Rob for pointing out a very important mistake I made, whereby I failed to distinguish modeling the world from modeling the mental.]
Monday, April 23, 2012
On the Fundamental Interconnectedness of Science
Scientifically illiterate people tend to make wild claims about new discoveries after reading third-hand articles on CNN.com or in the Fortean Times. Creationists have been doing this with every tiny bit of contradictory biological evidence since Darwin. New Agers do it with quantum mechanics. Regular people do it with tiny advances in technology or overblown predictions from hack journalists.
A stark example was the big scare at CERN over faster-than-light neutrinos. When the results were publicly announced last year, a chorus arose in indignation at the clearly malicious lie the academy had been spreading for the last century: that the speed of light was a fundamental speed limit in our universe, with all the physical effects this implied. Short-sighted people, believing themselves, at their Dunning-Krugeriest, to be incredibly farsighted, proclaimed a new age of physical theories and hyperdrive travel. They scoffed at the closed-mindedness of science in making such outrageously doctrinaire claims as that there were limits on the movement of objects in space! There were certainly no limits to the human spirit! ...or some such.
What these breathless blowhards don't understand is that no one seriously considered the possibility that said neutrinos were traveling faster than light. A few theoretical physicists came up with some pet models that might allow a special variety of neutrino to do something weird, but that's because they have nothing better to do. The uproar in the physics community was not about the possibility of faster-than-light travel (people claim to discover such things all the time), but rather about how a huge, extremely carefully set up, and thoroughly verified experiment could produce such results! What sort of error could be the cause, and could that error propagate into other results from other experiments? As it turns out, it was a loose fiber optic cable, a simple human error, but one incredibly hard to catch on practical grounds. Nothing was traveling faster than light.
Here is the salient point, however: all of those people hailing the new age of faster-than-light physics failed to understand that if it were possible for something to travel faster than light, the universe would not look the way it in fact does. Distant galaxies would not appear as they do. Our computers would not work how we expect them to. Science is an intricate machine: one cannot simply remove a component and expect the thing to keep on working.
Creationists do the same thing. By claiming on the one hand that evolution does not occur, or that the Earth has only existed for six or ten thousand years, and on the other hand continuing to drive cars and use cell phones and watch television, they fail to understand that the same natural phenomenon that allows new medical treatments to be developed allowed dinosaurs to evolve into ducks. They don't understand that the science which tells us how old early hominid remains are is the same which allowed us to build an atomic bomb. And of course they do not deny the existence of gene therapy or nuclear weapons. But they do deny evolution. (Of course, creationists do not really hold a principled position at all — they pick and choose their beliefs based on authority rather than reason.)
New discoveries in science certainly can obviate old theories. The connection between germs and diseases completely destroyed older theories of disease. But not all discoveries are like that. While it's certainly true that Einstein's general theory of relativity was a vast improvement upon Newtonian mechanics, it was not a wholesale replacement of it. For everyday speeds and scales, Newton's laws were and are still perfectly valid. That is, the level of description at which they work, while inadequate for measuring the orbit of Mercury, is just fine for balls and ramps, or even bridges and skyscrapers.
The mistake many people make, though, is in seeing the universe in the opposite way. They assume Newtonian mechanics is more fundamental, because it is more intuitive. They think that their ingenious "racecar headlight on a moving train" thought experiment demonstrates that you can travel faster than the speed of light. But they still use the GPS on their phone, which would not need careful timing corrections if relativity didn't work as Einstein described.
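To put a number on the GPS point, here is a back-of-the-envelope sketch in Python. The constants and orbit are round textbook values, not actual mission specifications, so treat it as an illustration rather than an engineering document.

```python
# A back-of-the-envelope sketch of the relativistic clock corrections
# GPS depends on, using round textbook values (not mission specs).
import math

c = 2.998e8          # speed of light, m/s
GM = 3.986e14        # Earth's gravitational parameter, m^3/s^2
r_earth = 6.371e6    # Earth's radius, m
r_orbit = 2.657e7    # GPS orbital radius (~20,200 km altitude), m

v = math.sqrt(GM / r_orbit)   # circular orbital speed, about 3.9 km/s

# Special relativity: the fast-moving satellite clock runs slow.
sr = -v**2 / (2 * c**2)
# General relativity: a clock higher in Earth's gravity well runs fast.
gr = (GM / c**2) * (1 / r_earth - 1 / r_orbit)

drift = (sr + gr) * 86400     # net clock drift, seconds per day
print(f"net clock drift: {drift * 1e6:+.1f} microseconds per day")
print(f"ranging error if uncorrected: {abs(drift) * c / 1000:.1f} km per day")
```

The two effects pull in opposite directions, but the gravitational one wins: the net drift comes out to roughly +38 microseconds per day, which sounds like nothing until you multiply by the speed of light and find your position fix wandering by more than ten kilometers a day.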
When science throws something we don't like at us, like quantum indeterminism, special relativity, or Darwinian evolution, we cannot simply choose to ignore it while accepting all the parts we do like. All the various scientific fields and theories are deeply interconnected and interdependent. This does not entail that they are all correct, of course, but one cannot simply decide that something "must be wrong" without independent, scientific reasons for thinking so. Doing that rather puts you in the position, to quote Tom Lehrer, of "a Christian Scientist with appendicitis."
Sunday, April 22, 2012
Why I No Longer Argue with Libertarians
There is an important distinction between lower-case L libertarians, of which I am one, and capital L Libertarians, of which Ron Paul is one. The former is simply the philosophical position that liberty ought to be maximized. In practice, this implies the elimination of coercive State government and exploitative capitalism (setting aside anarcho-capitalism, whose coherence I will address in the future). The latter is an American invention, and is a hybrid of Austrian economics and Randian political philosophy (if it can be called such). To compare and contrast these two positions: the former is anti-State and anti-capitalist; the latter requires a state to enforce property rights, and is extremely pro-capitalist and against government intervention in the economy. The former is generally strongly federalist and socialist; the latter largely individualist and laissez-faire. Both oppose the intrusion of any form of government into people's private lives, and both view liberty as essentially negative, allowing at most some measure of positive enforcement of certain rights (though many would argue that the negative/positive distinction is meaningless).
There is another important distinction that needs to be made. It is between what I will term capital C Capitalism and lower-case C capitalism. Capital L Libertarians are also, necessarily, capital C Capitalists. That is, they hold the belief that capitalism is a desirable state of affairs, and a positive good for the world. Lower-case C capitalists are merely those people who own capital. They are business owners, CEOs, managers, bankers. Lower-case C capitalists can be Libertarians, conservatives or liberals, progressives or reactionaries. The former is an ideological position; the latter is a position in society.
Left socialists often rail against Libertarianism, which is fun to do, no doubt. However, Libertarians don't actually matter in society. Maybe in the future, when the Libertarian Party has a majority in the Senate, we can worry about their ideas. The real opponent of the left, though, is not the Capitalist Libertarian. The real opponent is the capitalist. So while it's intellectually interesting to get into shouting matches with the local Randroids, anarchists and other leftists should really save their energy, both physical and intellectual, for opposing actual capitalism! Arguing against right-wing Capitalists is easy. What's hard is convincing a liberal capitalist why stateless socialism is desirable (not to mention feasible). That's why I'm not gonna argue with Ron Paul supporters and Ayn Rand fans anymore. It's a waste of breath, both on principle and in effect. My task from now on will be to convince capitalists of their error.
Friday, April 20, 2012
Why Syndicalism
Everybody works. Or, at least, everybody works when artificial unemployment doesn't exist. And I say "everybody" because I always speak in hyperbole. In any case, the overwhelming majority of people seek work of some sort. Just recently, my mom lost her job, and instead of spending all day sitting on the couch watching Home & Garden television (which she could easily afford to do), she went and got a job that pays her barely anything. People want to be formally occupied by something they believe brings benefit to them and their family, or to society at large. This tendency takes various forms outside the capitalist structure of "employment". Artists create art regardless of whether they get paid for it. People who love cooking spend hours perfecting recipes for no one's enjoyment but their own. In a world that didn't care about "marketable skills" and didn't penalize risk-taking with destitution, people would be able to occupy themselves with whatever work they were naturally inclined to do.
In a capitalist society, where people must balance their desires against the demands of the market, many work jobs they do not enjoy, the most unpleasant of which are usually the lowest paying and most exploitative. Such people should, and historically often did, organize into guilds or unions to demand (and occasionally win) increases in wages, reduction in working hours, and improvements in working conditions. These unions are the perfect place to foment radicalism, since workers are the most exposed to the oppressive and exploitative nature of capitalism, and often suffer the most at the hands of the government once they organize. From a utopian perspective (by which I mean, from the point of view of an imagined future free society) such unions would constitute democratic worker councils in their respective industries, certifying members of professional groups and organizing allocation of work and resources. In the modern world, they are means of resisting capitalist exploitation and social oppression.
Not everyone, of course, is keen on resisting exploitation, because they do not see it as such. Particularly in America, the myth (that is, the misunderstanding of economics and probability) that anyone can get rich tricks people into aligning their perceived interests with the capitalist class, and the illusion of democracy allows them to believe that the government exists to support them, rather than to support the capitalist system. They believe that fighting for their own, realistic, interests will jeopardize their chances of ascending the social or corporate ladder on the off chance they come up with a better mousetrap. The refusal to admit the existence of a sharp class division between workers on the one hand and owners and rulers on the other leads them to disdain anyone who recognizes, and fights against, it.
The disdain many people have for unions specifically is due to the essentially capitalist trade unions whose leaders often have more in common with the bosses than with the workers, and of course to the stain of Soviet Communism on the entire notion. (The Soviet Union was of course in no way communist, but was rather State Capitalist to the core.) When workers are divorced from the output of their labor, whether by capitalist profiteering or state mandates, the tendency to lose personal interest in their work is increased and reinforced, because the work is no longer theirs, either in methodology or results. By contrast, work done by democratically organized, voluntary worker collectives instills a sense of pride and ownership in the work, which produces both better results and stronger communities. It is this aspect of union organizing which leads me to believe that syndicalism — that is, the organizing of the working class into unions based on industry or geography — is the most practicable way of achieving revolution.
Syndicalism shows workers, who make up the vast majority of society, the power of democratic organizing, the power of their numbers in the face of capitalist and government oppression, and the dignity and satisfaction to be found in controlling the product of their own labor. I will discuss in a future post why I think syndicalism is the best way of instilling revolutionary consciousness in the working class, but it is definitely not the simplest or most glamorous way. It involves working shitty jobs, taking large personal, financial and health risks, and seeing little progress or huge reversals in fortune. The main point, though, is that many workers do this every day without any political motivation anyway, and adding that motivation has historically proven easier and more effective than creating entirely new, theory-motivated political organizations.
Predicting the Future (and Other Abilities We Don't Have): Part 1
Humans are good at lots of things. This series of two posts is about a number of abilities that are not among those things. Part 1 discusses the fashionable tendency to make guesses about the future course of society and then heavily imply (without usually stating outright) that these guesses constitute accurate predictions. Part 2 discusses the trouble we tend to have in viewing ourselves in a historical context. This makes it easy for us to believe that we happen to live in revolutionary times, or that none of the old rules apply. These types of charlatanism can have far-reaching consequences, as it turns out.
Part 1: Extrapolation is an Art, not a Science
One of the ways to get people to pay attention to your predictions is by preaching the good news: eternal life. Ray Kurzweil has made a number of predictions about the bright future that technological growth will bring us, with this being by far the most notorious. Although his version of immortality, uploading ourselves onto computers, differs somewhat from the standard Christian view, one can't help but notice the religious flavor of this prediction.
Kurzweil's other predictions for this century include, yes, flying cars, but also reverse-engineering the human brain, nearly perfect simulations of reality (for our digital selves to live in), and, crucially, an AI that is more intelligent in every way than all humans combined. He has freed himself from any responsibility to explain how these things will be accomplished. Nobody has the slightest idea how to do the interesting ones.
I will defer the actual technical explanation of why these are truly goofy predictions to authors who have basically handled it: Steven Pinker, Douglas Hofstadter, PZ Myers, and many others have noted how technology and scientific discovery don't progress in the way Kurzweil has claimed. Instead, I want to draw attention to the fact that these attempts to predict the future reflect a very human tendency.
In the early 1960s, progress in programming machines to do certain tasks (like proving theorems) gave researchers supreme confidence that essentially human-like AI would be a solved problem within 20 years. What they should have said was that, at that rate of progress, it would be. What actually happened was that computers became more and more sophisticated but left AI behind: the problems researchers were tackling were just much harder than anyone had anticipated. Even relatively simple tasks like constructing grammatical sentences proved to be far out of their grasp. Now, the most successful language tools largely involve throwing our technical understanding of language out the window and using probabilistic models.
Economies are not spared from erroneous predictions about the future. Kurzweil and others jumped on the tech-boom bandwagon, claiming in 1998 that the boom would last through 2009, bringing untold wealth to all. Maybe they should have been reading Hyman Minsky instead of Marvin Minsky.
Enough about the good news.
The other way to get people to pay attention to your predictions is by telling them the bad news: social breakdown and the end of the world. Overpopulation is an issue along these lines that receives attention disproportionate to the seriousness of the claims made by its scare tacticians. Among these claims is the belief that we are imminently reaching the carrying capacity of the Earth, at which point starvation, crowding, and wars over scarce resources will tear human civilization to pieces.
This hypothesis relies on progress not happening, the opposite of the singularitarians' assumption. But the very same question can be asked of both: how do you know? This is where it becomes apparent that extrapolating patterns is an art for the Kurzweils of the world. If you extrapolate one variable, you get intelligent machines. If you extrapolate another, you get the end of the world. But if you extrapolate yet another, say the total fertility rate (TFR), it doesn't look so scary. Defined as the average expected number of children born per woman, the world's TFR has been steadily declining in the post-war period, from almost five in 1950 to about 2.5 today. As it approaches two, the world population approaches equilibrium.
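To make the "art" part concrete, here is a minimal sketch in Python. The two data points are the rounded figures quoted above; the two models are my own illustrative assumptions, chosen only to show how much the choice of curve matters.

```python
# A minimal sketch: fit the same two world-TFR data points with two
# equally plausible models and compare when each says TFR reaches 2.0.
import math

t0, tfr0 = 1950, 5.0   # approximate world TFR in 1950 (rounded)
t1, tfr1 = 2012, 2.5   # approximate world TFR at time of writing (rounded)

# Model 1: linear decline, a constant drop in children per woman per year.
slope = (tfr1 - tfr0) / (t1 - t0)
year_linear = t1 + (2.0 - tfr1) / slope

# Model 2: exponential decay, where the decline slows as TFR falls.
rate = math.log(tfr1 / tfr0) / (t1 - t0)
year_exp = t1 + math.log(2.0 / tfr1) / rate

print(f"linear extrapolation:      TFR = 2.0 around {year_linear:.0f}")
print(f"exponential extrapolation: TFR = 2.0 around {year_exp:.0f}")
```

The linear fit says TFR hits two around 2024; the exponential fit says around 2032, and the two curves only drift further apart the farther out you project. Same data, different curve, different future; and neither curve knows anything about the mechanisms that actually drive fertility.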
Phony overpopulation scares are common in the history of anglophone countries, from Thomas Malthus to American anxiety over the "yellow peril" around the turn of the century (see Jack London's The Unparalleled Invasion for a rosy portrait of the future). Wealthy people and business interests are often the biggest proponents of the theory that population growth is the largest problem facing the world. Conveniently, it's one of the only major global issues that isn't their responsibility. In reality, the only reliable way to lower growth rates is to facilitate the economic growth of poor countries to the point where people there have a decent standard of living.
The danger in blowing the perils of overpopulation out of proportion is that it leads people to prioritize population control above reproductive rights and, more generally, morality. If it really is that serious, then we have carte blanche to do whatever is necessary. The bottom line is that, despite the very real possibility of overpopulation becoming an issue, there is no reason to think it is serious or imminent enough to change what our goals would be, were it not imminent. Our immediate task is still figuring out how to get communities to lift themselves out of poverty while handling climate change and other real crises.
We have a predisposition to weigh the likelihood of possible futures, either good or bad, based on how exciting or terrifying they are instead of how probable they are. Anyone interested in solving problems should be aware of this bias.
Part 2 covers a second, related bias that people have: the impression that the times we currently live in offer wider and more revolutionary possibilities than existed in the past. This impression, created by the fact that we live now, not in the past, is the source of huge blunders and the general abandonment of reason.
Thursday, April 19, 2012
Bringing a Provision to a Principle Fight
An oft-noticed (but seldom-described) difference in the way people discuss policy can prevent real progress from being made. Consider the debate over the Patient Protection and Affordable Care Act, a major step for many progressives. An advocate will typically stress the effectiveness of a particular policy in achieving a goal, such as universal coverage, better health outcomes, or care for uninsured children. The counter-argument, however, will typically involve exhortations about the value of limited government powers, fiscal responsibility, and individual responsibility. Note that, whichever argument is right, the two people having this debate have not addressed what the other is saying. Nor will you ever hear statements like "individual responsibility is more important than protecting the welfare of children," proclaiming the superiority of a value over a stated goal.
Economic discussions can involve a similar pattern. Consider the (relatively value-neutral) argument that financial regulation is necessary in order to most effectively prevent financial crises while maintaining a robust financial sector. Trumpeting the virtues of deregulation and the free market in opposition, however virtuous they may be, does nothing to address the actual question: how do we ensure that financial crises don't happen? Objecting to spending on particular projects by referencing wealth redistribution or fiscal profligacy doesn't address the details of the goals of the project, which could range from providing unemployment insurance to paying for veterans' disability care.
To sum up the point of these examples: principle-based arguments constitute a totally different kind of rhetoric than consequentialist ones. This can lead to a real impasse, particularly around sensitive social issues, like preventing sexually transmitted diseases in the face of certain religious groups' objections to protective measures. In these cases, someone's religion prevents them from compromising. Effects are irrelevant to someone with a biblical mandate.
This does not always fall neatly along political lines, either. For example, activists on the left opposing Israeli settlement expansion are split between those supporting a two-state compromise settlement and those in support of a one-state solution. The pragmatic argument for a two-state settlement is strong, given the nearly global consensus. On the other hand, a particular interpretation of the right of return of Palestinians leads many to support a one-state solution on ideological grounds. One side has nothing to say to the other, besides making an appeal to compromise or an appeal not to compromise one's values.
Ultimately, anybody who endeavors to meaningfully discuss an issue has to have both a sense of their own values and a willingness to compromise and be pragmatic. Insisting on judging a policy based only on whether it adheres to a particular set of values, besides providing a convenient excuse for not looking at the likely effects of said policy, is not a basis for useful discussion. There is a real debate to be had over whether examining consequences or adherence to principles provides a better basis for ethics. But politics is fundamentally about how people with differing values compromise and form policies whose effects are acceptable enough to all parties involved. Ignoring this, at best, turns political discussions into a morality play. At worst, it prevents any useful communication across boundaries.
Wednesday, April 18, 2012
Who Is a Moral Agent?
There is an important assumption about moral agency which always goes unstated in political discussion. It is this: institutions have moral agency. I could not disagree with anything more strongly. I believe that this assumption is responsible for a great deal of the evil that happens in the world. It goes unquestioned and unacknowledged because people do not think carefully about where moral agency can rest. Liberals who are outraged by the Citizens United verdict have no problem treating the government as a valid moral agent, capable of killing, stealing, and taking on social projects. Likewise, right-wingers want to treat huge, fascistic corporations as equivalent to human beings, but balk at the idea of the government doing anything which doesn't put money into their wallets.
My take on this matter is simple: moral agency must lie with individual human beings. This is a minimal assumption, and I will take it for granted. However, humans do not live in isolation, and collective decisions must be made. As such, there must be some provision for super-individual moral agency. This is what I refer to as a group of individuals, or simply group. I do not use this term in a simple way, meaning any attempt at decision making involving more than one person. Instead, I use it in a technical way, to mean a voluntary association of individuals, none of whom relinquish or subvert their own moral agency, but merely use some method to determine the prevailing moral judgement of the group. The methods by which a group can come to such a determination are manifold: voting, by simple or super majority; formal debate; consensus building; and many not yet invented, I'm sure.
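To make the contrast concrete, here is a toy sketch in Python (purely my own illustration; the thresholds are arbitrary) of how a group in my sense might compute its prevailing judgement while every member keeps their own:

# Toy model of a "group" decision: members aggregate their individual
# judgements by some agreed-upon method; no one's judgement is erased.
def group_judgement(votes, method="simple_majority"):
    """votes: one boolean per member; returns the prevailing judgement."""
    yes, n = sum(votes), len(votes)
    if method == "simple_majority":
        return yes > n / 2
    if method == "supermajority":  # two-thirds, say
        return yes >= 2 * n / 3
    if method == "consensus":      # everyone must agree
        return yes == n
    raise ValueError("unknown method: " + method)

votes = [True, True, True, False, True]
print(group_judgement(votes))                # True: four of five prevail
print(group_judgement(votes, "consensus"))   # False: the dissenter is not overridden

The point of the sketch is only that the aggregation method is separable from each member's own judgement; nothing in it requires anyone to argue for a position they reject.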
I contrast the idea of the group to the idea of the institution. An institution is also a super-individual decision-making body. However, it is not composed of individuals. In fact (as I shall discuss in a future post) institutions have priorities and prerogatives completely independent of the will of any given person. Obviously, decisions within institutions are ultimately made by individuals. But those individuals must be willing to act, and must in fact act, in the interests of the institution rather than in their own individual interest, or they would not have been placed in such a position to begin with. A perfect example of this comes from a friend of mine who was sent to a State Legislature meeting on behalf of the healthcare non-profit he works for. The people in charge had decided they would side with a certain political bloc which my friend opposed. However, it was his job to go and relay, and argue for, the position of the non-profit. His individual opinion of the matter at hand did not matter in the slightest. All that mattered was whether or not he could accurately relay the prevailing opinion of the institution he was a part of.
I do not think that the suppression of one's own moral agency in such a circumstance is conscionable. Be it as an employee, a soldier, or a politician, one should not have to abnegate one's own moral agency to serve a greater good. Such a good can be served voluntarily, and morally, by acting as part of a group of individuals, whose decision you can protest and even reject with no artificially contrived consequences to you, such as destitution or imprisonment.
Tuesday, April 17, 2012
My Solution to the Fermi Paradox
I think that the most likely solution to the Fermi Paradox is that, while life is exceedingly common in the universe, intelligent life is incredibly rare. In the four and a half billion year lifespan of this planet, life has existed on it from almost the very first moments. However, not until the Cambrian explosion about 530 million years ago did complex multicellular life exist. That means that for nearly four billion years, and starting from nine billion years after the birth of the universe, Earth contained nothing but single-celled organisms and colonies of such. And it is not until about 2 million years ago, 0.04% of the history of life on Earth, that the first technological intelligence emerged. And even that was simply monkeys hitting rocks against each other in a clever way! Anything more complicated than a stone arrowhead was invented in the last ten thousand years!
And let's look at how unlikely it is that humans (or any other hominid) ever even achieved technology. We had access to wheat, barley, and rice, easily domesticable plants that produced high yields and had good nutritional content. We had access to large animals with very particular internal dominance hierarchies, which had not evolved alongside us so recently as to attack us on sight, but were not so distantly separated from us as to be completely unwary at the sight of hunters with spears. (Here by "we" I am referring to any subset of humans who had such access — Jared Diamond goes into great detail as to who in fact had access to what.) We had access to workable stone, copious woodlands, and various ore deposits. We had an abundance of fresh water on a planet with an atmosphere suitable for lighting fires. Our luck in the development of our culture and eventual civilization was astounding. We were never subject to an extinction-level impact or eruption (although it seems that we came damn close).
I think that such luck is not only astounding, but is in fact astronomical. While life seems to have no trouble at all finding a place on a planet like the Earth (and perhaps on many other types of planets as well), technological civilization seems like an absolutely ludicrously unlikely event. It has happened exactly once in four and a half billion years (for 0.0002% of that time), or once in the 530 million years in which complex life has existed on Earth (for 0.0019% of that time). So I would imagine that life is in fact very common in the universe, although almost exclusively in the form of single-celled creatures living on rocks and in oceans. Very, very, very rarely one would find a planet with some sort of multicellular life on it — simple plant-like creatures, or molds of some sort. And once in an unimaginably huge while, one might expect to find a planet where technologically intelligent life once existed. Finding a planet where technologically intelligent life exists concurrently with us seems depressingly close to a fantasy.
[Edit: My percentages were two orders of magnitude off! They were simple ratios, not percentages. Thanks to Scott for pointing this out.]
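For anyone who wants to check the arithmetic (which, as the edit above attests, is easy to botch), here is the whole calculation in Python. The round figures are the ones used above, with technological civilization generously counted as the last ten thousand years:

# Back-of-the-envelope figures from the post above (all rounded assumptions).
earth_age    = 4.5e9    # years; life appears almost immediately
complex_life = 530e6    # years since the Cambrian explosion
stone_tools  = 2e6      # years since the first technological intelligence
civilization = 10_000   # years of anything beyond stone arrowheads

# Python's % format multiplies by 100, which is exactly the step I missed.
print(f"{stone_tools / earth_age:.2%}")      # 0.04% of the history of life
print(f"{civilization / earth_age:.4%}")     # 0.0002% of Earth's history
print(f"{civilization / complex_life:.4%}")  # 0.0019% of complex life's span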
Monday, April 16, 2012
Two Types of Free Will, part 3
As my previous post argued, the position that free will is somehow essentially true in a naive sense suffers from the problem highlighted in this Dilbert cartoon. It is an incoherent and meaningless way of talking about free will. However, it seems obvious (to me, at least) that there is some sort of free will at play. Otherwise, why would we even have such a concept? If we were mere automata carrying out our inherent programming, why would we have any reason to think about such a thing as free will? Hence, there is free will in some sense, or what I defined as phenomenological free will.
The description of phenomenological free will I gave in my first post on the subject relied heavily on the concept of consciousness, and I think it is exactly consciousness that makes free will a valid concept. As I will go to some length to describe in a future post, consciousness is the self-reflective model of the world we use to predict the future, including the future of our own mental states. It is also the tool by which we invent a narrative for the events that go on around us, sometimes referred to as confabulation. We instantly and instinctively come up with stories about our actions, thoughts, motivations and surroundings, often with absolutely no relation to truth or reality. Many cognitive fallacies are driven by this storytelling ability, such as the fundamental attribution error. We prefer reasons to be narrative rather than empirical. We need to know why things happen, not just that they do, even (especially?) if the why doesn't exist or is completely made up.
This exact ability applies as much to ourselves as to other people or objects. We come home from a long day at work and yell at our significant other over some petty transgression, and rationalize it by saying that they were annoying and we were tired and it's not our fault anyway. Every non-philosopher has said to themselves that they never intended to yell, and apologized afterwards. However, I contend that that impulse to deny volition isn't a mere face-saving exercise, but is rather precisely correct. We yelled completely automatically, because that is what our brain decided was the correct behavioural response to the situation. Our confabulation ability, however, even as it watched us yelling, was coming up with retroactive reasons to start yelling, and since we became consciously aware of yelling at the same time as we became conscious of our confabulation (since the confabulation is, in a very important sense, our consciousness) we go on to believe that we chose to yell of our own free will.
If we were less tired when we came home from work, our conscious mind might have been quick enough to notice that we were getting ready to yell, and would have stopped us from doing so. That, it seems, is another function of consciousness (although, of course, not exclusively of consciousness). Consciousness allows us — if we have time — to stop an action we notice we are about to start. When you reach for a hot skillet while cooking, you don't stop reaching (and thereby prevent burning your hand) until you actually look over and become aware of what you are about to do. Most telling of all, sometimes you don't stop reaching! You helplessly watch yourself proceed to grasp the burning hot skillet and burn your hand! Where is your free will then? This is a case of your brain going about the work it knows it needs to do, completely outside your conscious control, and your consciousness not working fast enough to stop it from making a grave (and painful) mistake.
I won't delve into what "we" and "chose" refer to in the above paragraphs (as those are both profound questions in their own right), but on the assumption that whatever it is that we refer to when we say "I" is a subset of the function of our brain, we can say that "we" are capable of contributing some influence on our actions, but that for the most part our brain goes about its business completely without "us", until some sort of conscious decision needs to be made — perhaps one too complicated for our animal brain to figure out on its own. However, we should not despair! After all, "our" interests are almost always in line with those of our brain and body. So the limited, phenomenological sense in which we have free will is enough, even if it's a confabulation. For myself, I'm willing to trust my brain to take care of itself, and its passenger, "me".
Saturday, April 14, 2012
Two Types of Free Will, part 2
Previously, I set out two distinct phenomena which could be referred to as "free will". I here continue to contrast them, and attempt to show why one must be the case, while the other cannot be.
Phenomenological free will, I contend, is obvious, self-apparent, and completely, empirically true. It is very hard to find someone who will argue that they do not choose their own actions on a daily basis. It requires extremes of brain-altering drugs and abusive behavior to get someone to lose the sense that they are in control of their actions (note, however, that it is in fact possible to do so). The impression I get is that when most people hear someone argue that there is no such thing as free will, they think that the argument addresses phenomenological free will. And, of course, if this were the case, then arguing against free will would be lunacy. It's just that, as with the term "consciousness", no one bothers defining what they mean, so people end up talking at cross-purposes.
The very idea of metaphysical free will, on the other hand, suffers from incoherence in a non-magical universe. If there isn't a soul or spirit pushing and pulling the cords in our pineal glands, then where does this locus of decision reside? One cannot simply say "the brain", because the brain is a monstrously complicated system, segmented into an even more monstrously complicated collection of subsets, down to the connections between individual neurons, each portion of which considers input from sense organs, bodily nerves, and other portions of the brain. There is no "place" where a decision is made. The brain works as a whole system directing action, and conscious awareness of such decision-making is limited and after the fact.
At any level of description — physical, chemical, interneuronal, conscious — what happens in the brain is either completely random or theoretically predictable. Quantum effects do occasionally tip the scale and something weird happens, but unless you posit that that random weirdness is magically motivated, it can in no way be said to be willful. The interactions between neurons are far less random, and can be described and calculated fairly accurately, and interconnected systems of neurons can be isolated by structure and function. So, again, there is nowhere for this metaphysical decision maker to reside.
From a psychological perspective, the case is even more dire! You do what you are inclined to want to do. You take that action which the sum of your habits and motivations does in fact motivate you to take. If you chose to go running instead of eating that tub of ice cream, it's not because you are a free agent in a libertarian universe capable of making any logically possible action. Rather, your sense of guilt over not exercising recently, your motivation to look and feel better, and your desire to be healthier as you get older overcame your desire to eat delicious ice cream and feel aesthetic pleasure for a few minutes. You had these various motivations wrestling inside you, and your brain finally computed that the former motivations were more pressing than the latter, and sent the balance of these desires to conscious awareness so that you could write your "decision" to go running into your conscious narrative.
If you try to explain metaphysical free will from a psychological perspective, you get hopelessly muddled (I can't even formulate a coherent argument for such a thing in my mind), and the only fall-back I can see is on Cartesian magic — souls and spirits and such.
Libertarianism (in the philosophical sense mentioned in the previous post) suffers from exactly the same problem as metaphysical free will. What would it even mean to say that the universe "could have gone a different way"? That a quantum event could have had some other outcome than it did? Well, sure, in a counterfactual way. But since quantum events are truly random — that is, there is no way in principle to know which way they will come out — all you can possibly say is that one will take some value, but you won't know what that value is until you actually measure it. So if there were in fact some metaphysical agent generating our wills in a libertarian universe, without magic powers its determination of a quantum outcome would happen simultaneously with its measurement of that outcome, so it would be beholden to that value no matter what. This is as bad as being beholden to a completely pre-determined outcome! It is worse, in fact, because in the quantum world you can't even make a prediction!
So, I dispose of libertarianism as hopeless. And I dispose of a metaphysical compatibilist view as meaningless at any level of analysis. Since this post is already almost twice as long as I expected it to be, I will hold off on my argument about phenomenological free will, and of my opinion on the nature of our will, until the next post.
Friday, April 13, 2012
Two Types of Free Will, part 1
I recently heard Daniel Dennett's explanation of his concept of deepity. It struck me that the concept of free will is exactly such a thing: on one reading it is trivially true, but on another it is logically ill-formed. The usual debate about free will is whether the concept is compatible with a deterministic (or truly random — that is, quantum) universe. Those who believe free will is compatible with a deterministic universe, and that therefore free will is "real" in some sense, are called compatibilists, while those who believe it is not, and that free will is an illusion, are called incompatibilists. The idea that the universe is not limited in this sense, but could in fact have gone some way other than the way it went, is called libertarianism, but I more or less dismiss it out of hand as incoherent, for reasons I shall explain below.
Normally, free will is somehow taken to be a single, monolithic concept which is either true or false, and therefore is argued over. However, it struck me that there are two very different concepts of free will which no one bothers to distinguish (as far as I have seen). These I call phenomenological free will and metaphysical free will. Let us take them in turn.
Phenomenological free will is what we experience whenever we are awake and aware. It is the feeling of making choices, which is obvious and inescapable whenever we experience usual brain function, not under the influence of hypnosis, drugs or derangement of some sort. It is the conscious mind's narration of the things it sees us doing (we are consciously aware of our actions about 500 milliseconds after our brain initiates them). A large function of consciousness, it seems, is to inhibit actions it realizes aren't desirable, but it does not initiate them. Regardless, as far as one can tell (and in as far as there is a real "I" there to do the telling), we choose our actions and build our identities around those choices.
Metaphysical free will is what I call the idea of making "real" choices. That is, it is what explores the world of counterfactuals relative to what we did indeed choose, and decides that, had it wanted to, it could have selected one of those other choices. Alternatively, it can be seen as that device which looks at the current state of the world and picks what future actions would be most beneficial or desirable for the actor. It is, however, never forced to make any particular choice — it could make some wild, unmotivated flight of fancy at any moment (or at least, that must be a serious possibility in order for this style of free will to be worth considering). It is somehow independent of forces in this world, even if the actor himself isn't.
These two types of free will are very different to each other. One is a fact of experience and perception (hence, phenomenological), while the other makes a claim about the very nature of our minds, about what is true of the universe. I will contrast these two views in my next post, and show why one of these must be the case, while the other cannot be.
Thursday, April 12, 2012
Paradox of Voting
The more people vote, the less chance each vote has of affecting the outcome of the election. This is called the Paradox of Voting: for the individual voter, the expected effect of voting is vanishingly small, yet people vote anyway (a toy calculation below shows just how small). Some economists use this as a reason not to vote. I don't think it's a good reason not to vote (there are plenty of other good reasons not to vote). However, it is a good reason not to harass anyone about voting. The most common thing I hear from liberals and other supposedly civic-minded people when they learn that I don't vote is that it's awful and that I don't get to complain about politicians.
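Here is that toy calculation. Assume, crudely, that every other voter flips a fair coin; then your vote matters only if everyone else splits exactly evenly, and Stirling's approximation puts the chance of that at about sqrt(2 / (pi * N)) for N other voters. A quick Python sketch, with turnout figures I made up for illustration:

from math import pi, sqrt

def p_decisive(n_voters):
    # Chance that n_voters fair-coin voters split exactly evenly,
    # i.e. that one additional vote would break the tie.
    return sqrt(2 / (pi * n_voters))

# Illustrative turnouts, from a school board up to a national election.
for n in (1_000, 100_000, 10_000_000, 130_000_000):
    print(f"{n:>11,} voters: ~{p_decisive(n):.6f} chance of being decisive")

And real electorates aren't fair coins: any persistent polling lead makes an exact tie astronomically less likely still, so this toy model is, if anything, generous.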
First of all, it's not awful. Voting in a government election is a form of consent. Even those people who know that I'm an anarchist occasionally still entreat me to vote "because this is such an important election!" Well, uh, guess what? It's not. What's that? We need to replace the Big Oil and defense contractor-owned corporate stooge with a Big Bank and finance-owned corporate stooge? Why, yes, that is a meaningful Change which will bring Hope!
Second, voting isn't magic. Voting is just one method of collective decision making. There are others. I like consensus-building. It probably wouldn't work on a national level, but it produces far more meaningful results. When you tell me that you want me to go vote, you're not really saying you want me to vote. What you're saying is that you want me to agree with you on the value and legitimacy of the State government which you support at the moment. However, I do not support any such government, and I will not feel guilty for withholding that support. Give me a ballot with "None of the above" on it, and then I might go vote, because then voting would suddenly be a meaningful action again. It would not just be a choice of oppressors, as participation in any coercive institution is, but a form of expressing political will — in this case, the will to not have any of the jokers who call themselves politicians decide my economic and legal fate.
Finally, let's see what George Carlin has to say. I am a bit smug about coming up with that bit of wisdom before I ever saw this clip. The few times I have had the privilege of actually saying this to someone they end up grasping inarticulately at reasons why it's wrong before quickly ending the conversation. Alas, few people like talking about politics — or political theory, in any case. I look forward to donning my red and black "I DIDN'T VOTE" button this coming November. It generates just the right ratio of curiosity to contempt (I figure, if you're contemptuous to begin with, I'm not gonna get through to you in any case).
This concept has a sort of inverse corollary when it comes to consumption, which I will talk about in a future post.
Wednesday, April 11, 2012
Games
Most people have the intuition that, while you do not have the right to do violence to someone unprovoked, once they have done violence to you, you are justified in doing violence back to them. I have a system by which I try to schematize this intuition. I frame interactions between people as games. Such games have rules, which are agreed upon implicitly by those involved. The players in these games, I assume, are in an equal power relationship — that is, factors such as authority, gender and race are not involved (this sets aside such non-reciprocal cultural artifacts as orders and bigoted statements; I should probably make a post about this assumption at some point).
Cultural expectations and past relationships play big roles in determining the rules of such games. These rules can include: "talking is allowed", "intentional physical contact is not allowed" (this might well be a rule with strangers on the street), "kissing is allowed" (such as between people in a romantic relationship). For the vast majority of interactions, "don't do violence" is a rule. However, the rules of games can change, and often do in short order. These changes come about when one person implicitly or explicitly violates a standing rule. Once this happens, both players may now play by these new rules. What this creates in effect is an "eye for an eye" situation. If you are willing to violate one of the rules of the interaction, you are implicitly agreeing to play by that rule for the duration of the interaction. So, if you punch someone (a violation of a rule outside of a boxing ring), you agree to getting punched yourself. If you kiss someone, you are implying that they may kiss you back. And so on.
This allows us to set up a naive code for behaviour: follow the rules, or, if you violate them, accept that the new rules your violation established apply to you as much as to anyone else you interact with.
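For the programmers in the audience, the scheme is simple enough to write down as a toy state machine in Python (again, purely my own illustration, with a made-up rule set):

# Toy model of a "game": both players start under the same rules, and
# breaking a rule makes that move allowed for everyone in the interaction.
class Game:
    def __init__(self, forbidden):
        self.forbidden = set(forbidden)  # moves currently against the rules

    def play(self, player, move):
        if move in self.forbidden:
            self.forbidden.discard(move)  # the violation rewrites the rules
            return f"{player} broke a rule: '{move}' is now allowed for both"
        return f"{player}: '{move}' (within the rules)"

g = Game(forbidden={"punch", "kiss"})
print(g.play("A", "talk"))    # within the rules
print(g.play("A", "punch"))   # rule broken; punching becomes mutual
print(g.play("B", "punch"))   # no longer a violation: an eye for an eye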