Tuesday, August 7, 2012

Conscionable Consciousness Conduction

This is something I wrote a long time ago, but which still holds up, I believe. It is a method I would be willing to employ to transfer my consciousness into another body, or into a computer.

To illustrate why this is important, let me say that I would not be willing to use a teletransporter that copied my entire physical form, sent the data to another terminal which reconstituted me, and then destroyed the original. Although I have no philosophical objections to this happening, I find the idea highly emotionally disturbing and would never go through with it. For that matter, even if the original wasn't destroyed but was rather pulled apart and transferred, I still wouldn't do it for reasons I think are obvious.
 
Likewise, I would not be willing to be, say, put to sleep, have my brain scanned, then be uploaded to a machine and have my body destroyed. I would not object, of course, to having a copy of my mind made, to be run later or used as a sort of backup.

However, there is a way I would be willing to actually abandon my body and live in a virtual world (assuming of course all assurances of liberty and safety, etc). If my various faculties — sight, hearing, language, independent limb motor control — were to be transferred one by one to an emulator running on a computer connected to the various sensors and devices which would temporarily mimic said faculties, I would be able to track the progress of my mind from my head to the computer. I imagine the process going rather like this:

I sit down in the chair, my head shaved and access plugs and sub-cranial scanning mesh installed. The technician behind me takes one long wire and inserts the end of it into the plug square in the back of my head. He asks me if I'm ready. I take a deep breath and then nod. I hear a switch flip, and then I vomit. My body thinks that I'm having a stroke, or have an eyeball knocked out of its socket, or am spinning faster than my eyes can focus on anything. After a few moments, I start to orient myself. I am looking ahead, at a large black box, the size of a television set, with a forest of instruments sticking out of it. I also see my body, sitting in a chair, a host of medical equipment and one technician behind me. I raise my right hand from the arm of the chair, and see it both out of the corner of my eye and from across the room simultaneously. Finally, I come to grips with the fact: my brain is getting direct data from a video camera hooked up to a computer.

The technician asks me again if I'm ready. I long ago memorized the sequence of the procedure. I hear another switch flip and a loud humming, and slowly my vision of the computer in front of me fades. However, I can still clearly see my body. Nothing has changed, except that the part of my brain which receives data from my eyes has temporarily stopped working. Luckily, I am hooked up to a camera, which replaces the function of the eyes, and a computer, which now hosts the software needed to interface between the eyes and the cognitive and reflexive areas of the brain. The technician inserts another wire into the top left of my skull. Now I feel as if I have a third arm. I move the arm on my body, and it responds as it should. Then I move this new appendage, and see something wave in front of my new field of vision. It is a robot arm, identical in shape and construction to my natural arm. When the inhibitor is turned on, it prevents my brain from sending signals to my muscles, and I am no longer able to control my fleshy right arm. But I can still quite easily control both my left arm and the robot arm to the right of my field of vision. This continues — left arm, left leg, right leg, diaphragm — until every part of my brain has been mapped, transferred, and inhibited.

Now comes the final moment. Up to now, I have been physically connected to all of my wetware. I could have, at a moment's notice, regained control of any part of my brain. But now the technician removes the first wire he inserted. My visual cortex is completely dormant and no longer connected to the computer, yet I can still see my body — I am still connected to my body — and I can still feel every part of it as if I'm still in my brain.
 
And so on. In this way, there would be no point at which I could feel “myself” “die” or disappear. I would simply phase from one substrate to another, and be awake and (at least nominally) in control the entire time. Of course, none of this might ever be possible, but it’s not completely unreasonable.

Monday, August 6, 2012

Prescriptivism and Mysticism

It struck me that grammatical prescriptivism bears much the same relationship to linguistics as mysticism does to science. This can be analyzed from a philosophical as well as a political perspective.

[Edit: It was pointed out to me that my use of the word "mysticism" here is inappropriate. By it I mean any sort of magical or supernatural thinking, rather than the more usual notion of special states of consciousness meant to put one in touch with the divine. Additionally, a few edits were made towards the end to make my main point clearer.]

Both mysticism and prescriptivism operate on the basis that there need to be certain well-defined rules for how the world works (even if that rule is just "because that's how God did it"). In each case, there is a very strong reluctance to let go of the reins of reality, as it were, for fear that utter chaos would result. The mystic needs their magical system to keep their crops growing, their cities plague-free, and their soul saved. Similarly, the prescriptivist needs strict usage rules to make sure their sentences are (what they consider to be) optimally readable, socially and politically correct, and up to some abstract standard, lest all semblance of readability and communication disappear. In each case the advocate of such rules fails to see that their rules are arbitrary and related to reality only through social convention and particular modes of thought. They miss the point that their devotion to the actual facts of the world is outweighed by their devotion to an already-given system of rules. The rules of both the mystic and the prescriptivist are arbitrary rules.

Now, there is certainly a place in society for arbitrary rules. In the United States, people drive on the right side of the road. If we don't all agree to do so, there will be a lot of head-on collisions. But we don't look at England or Australia and scream that they are doing things wrong and need to change which side of the road they drive on. There may be a bit of inconvenience in shifting from one to the other, but no one in their right mind would claim that there is a correct side of the road to drive on, and an incorrect one, and that America is correct and England incorrect. They are just arbitrary stylistic choices. (Well, maybe.)

Similarly, religious prescriptions are arbitrary rules insisted upon by people with an interest in standardizing the behaviors of members of a given community (including thoughts and utterances). And grammatical prescriptions ("no dangling modifiers", "no stranded prepositions") are arbitrary rules that have been historically insisted upon by people with an interest in standardizing the grammar of a given language.

Of course, I am assuming my reader agrees with both of these assertions. The first one is easy for many people to accept (or at least grant for the sake of argument), as opposition to mystical ideas has a long and proud history, in the form of rationality and science. The second one, though, has its basis in the very young and very much still growing field of linguistics, whose goals, methods, and justification few people have been exposed to, and fewer still understand. But prescriptivist accounts of language fly out the window the moment you actually look at human language as a natural phenomenon. Just as the thunderbolts of Zeus turned into electrical imbalances between the ground and atmosphere, dangling modifiers and double negatives turn into simple sentences perfectly comprehensible to speakers of the relevant language. If one speaker says something and another understands the utterance, then grammatically correct linguistic exchange has transpired, regardless of whether any English Grammar Rule Book rules were violated. By way of example, I ended an above sentence with the phrase, "which side of the road they drive on." Nothing exploded. No hair-pulling confusion resulted. Yet I violated a rule of "proper English grammar". But this post isn't about convincing people that prescriptivism is wrong — only that it shares a certain key feature with mysticism.

I insist that in the case of both the mystic and the prescriptivist, there is either a lack of competence in understanding the rules by which nature operates or an emotional attachment to the social implications of the rule set. The mystic cannot grasp the science behind evolution, quantum mechanics, or cosmology, much as the prescriptivist cannot grasp the science behind linguistic universals, childhood language acquisition, or sociolinguistic discrimination. The mystic feels that they are saved and loved and special, much as the prescriptivist feels that they are proper, correct, and supremely literate. But both cases are driven by either ignorance or contempt. In neither case is one able to recognize the elitism involved. Or, if the elitism is recognized, it's immediately defended as good and justified. One must be a speaker of "good English" much as one must be a "good Christian". And failing to be so means that you are inferior to the defender of this (dubious) Good.

But because linguistics is a young field which has not yet permeated the cultural fabric, we shrug off linguistic discrimination just as readily as we decry racial or religious discrimination. If you can't talk good, you must be stupid and inferior to all of us who can speak well. This, however, is just ignorance of the way in which people learn language and the way in which language changes, and an almost-mystical assumption about the existence of some Platonic True Form of a given language. It is ignorance of the fact that this Platonic language almost always coincides with the way the elite speak, be they London aristocracy (where "standard" British English comes from) or Muscovite czars (where "standard" Russian comes from).

As an aside, it must be pointed out that this ignorance costs people in very real ways. Being a speaker of a non-standard dialect cannot, in principle, relate in any way to intelligence or ability. There are astrophysicists in Memphis, Manchester and Mumbai alike, each speaking very different versions of English. Sounding Black on the phone is often a sure-fire way of not getting past a first interview. Having a Queens accent is just the same. Yet such linguistic variations have nothing to do with intelligence, training, education, or personability. They are merely artifacts of our physicality, of our vulnerability to our social environments as children. We speak the way the people around us speak.

A final thought: there are good reasons to have arbitrary rules. And this applies to grammatical rules. Writing is a suboptimal translation of language, because it misses so many of the nuances essential to conveying understanding, such as tone, pace, volume, expression, etc. So in order to make writing understandable to others, it is important to have usage and spelling rules. In a written document, double negatives are recognized as negating each other, rather than the sentence as a whole (as is the case in, for example, Italian). Standardized spelling and punctuation are essential to the way English is written and read. But insisting that such rules be transferred from the page to the spoken word is ludicrous. Writing is the only place that needs the extra stringency of arbitrary grammatical rules. And it inherits this property from the already-natural-rule-governed richness of spoken language.

Monday, July 30, 2012

A More Efficient Method?

What follows is a conditional. If you reject the antecedent, then obviously you reject the entire conditional.

Let us assume that you are a member of a large-scale libertarian socialist society. Almost all the world runs along collectivist or similar lines in some way or another, and runs well enough that no one is starving or suffering due to systemic failures. But you get a good idea. Well, two good ideas. The first is some very efficient new method for producing some desirable good or service. The other is that, since the idea is yours, you're going to see if you can't use it to benefit yourself more than other people. What's to stop you? Surely there's nothing wrong with hiring some workers on relatively exploitative terms (although probably still far, far less exploitative than those in modern society) and producing said good or service to the net benefit of all society! Why would anyone be opposed to that, so long as everyone involved agrees to the arrangement?

This was a question I was unable to answer in a recent conversation. However, having had time to reflect, I now have several answers.

Let's start with the least obvious: if you try to create a subset of society where such exploitation is driven by anything other than survival necessity, you will inevitably be reintroducing all the problems of classism and authoritarian hierarchy — in this case, though, those are your problem, not the workers' problem. In capitalism, exploitation works because workers have pretty much no other choice except to participate in a capitalist economy. If they lose or leave their job at a capitalist firm (and a magnanimous welfare state won't subsidize their unemployment), all they can do to survive is get another job at a capitalist firm (which might be a firm where they are their own boss, but that doesn't really change their relationship with the material wealth of society). In a libertarian society, if they are dissatisfied with work at your firm, they can simply leave and go work anywhere else in a voluntary organization. That is, unlike in capitalism, there is almost no cost to leaving your job, outside of purely physically practical considerations. So unless your hypothetically superior good or service is SO desirable, and SO beneficial to the people working for you (not to mention society at large), there is little chance that people will be willing to continue working under inferior conditions. It is easy, when one holds the mistaken belief that we currently live in a free society, to forget that there are huge social and economic impediments to most people actually improving the conditions of their lives, set up both consciously and implicitly by capitalists and the institutions which they support and which support them. Such barriers would not exist in a free society, so there would be nothing to stop brain-drain (and hand-drain, as it were) away from any given exploitative firm and towards voluntary, libertarian firms.

A second reason this would be unlikely to work is illustrated best by the results of the Ultimatum Game. Expanded to the scale of a larger society, the result suggests it is highly unlikely that a group of people would be willing to grant any one individual so large a share of their collective wealth that a productive firm could be established with the resources. On a smaller scale, it seems unlikely that you would find many people who would be willing to work for a firm where they would earn relatively less than another worker, even if they (somehow) earned more than workers in other firms (again, without the societal and institutionalized economic coercion of capitalism). (That this raises the entirely separate question of what they would be earning in a largely collectivist society without large-scale fiat currencies is left aside for the purposes of this discussion.)

A final reason this is unlikely to arise is the lack of institutionalized secrecy in a free society. If you were the manager of a firm which had a superior method of production, the only way to keep it from being copied by anyone and everyone (including all the libertarian, collective firms in that particular line of business) would be to keep it completely secret — high fences, pledges of confidentiality, dark windows. But whatever facility you use to run your firm would be owned by society at large — after all, outside of a capitalist system (including state capitalism), no one seriously thinks of a factory, a storefront, or a suite of industrial machinery as belonging to a single individual for their personal use. And outside of a society with capitalist-style property rights, you would have no right to stop people entering your factory or store or whatever to observe your methods and use them elsewhere. After all, if your firm can produce SUCH excess wealth that it's worth reinstating exploitative labor relations for, then everyone else will want to use your method too. And since no capitalist firm is an island, huge chunks of the rest of society will have to be involved in your establishing the firm (building your factory, supplying your raw materials, etc., etc.). So, pretty soon, workers will have no reason to stay with an exploitative firm to produce that exact good or service, and will instead move to collective ones the moment the exploitation outweighs the benefit.

There are probably other reasons besides these why the idea stated at the beginning of this post wouldn't work in a free society, but those three are the ones I was able to articulate to myself in the last couple of days. Like any highly hypothetical discussion, they are premised on many assumptions which not everyone agrees with, including, I'm sure, the person who raised the hypothetical scenario in the first place. But they are all, I believe, answers consistent with the view of society, politics and ethics I advance.

Friday, June 8, 2012

State vs. Government

When I talk about anarchism, I mean it as opposition to the State: that is, the institution which claims sole authority over the use of violence in a geographic territory. Strictly speaking, I do not oppose government. Now, the terms "State" and "government" are generally interchangeable in English, and I usually follow that convention. But the word "government" to me evokes the sense of "maintaining function and control". And it's certainly true that humans want control over their environment, both physical and social (that they can rarely have it is beside the point). The confusion comes in the assumption that a State is required for government, and that any society without a State is without government. Two hundred years of libertarian political theory and a handful of historical examples give the lie to this. Humans are able to govern their affairs quite efficiently without the continual threat of violence. Violence is, of course, a possibility in many everyday situations, but that's certainly no less a fact in a Statist society than in a Stateless one. Simply put, State is not necessarily government, and government is not necessarily State.

Tuesday, May 22, 2012

The Institutional Imperative, Part 2

One important consequence of the institutional imperative is that institutions tend to staff themselves with exactly the sorts of people who would tend to work to preserve those institutions. That is, someone who would not make it their first priority to preserve a given institution would not be knowingly selected for a position in that institution. This is why a country would not elect a leader whose stated goal it was to destroy that country's government — a so-called revolution is needed for that. Likewise, it is why a corporation would not hire a CEO whose goals did not match up with the company's.

This is why the heads of corporations, despite any protestations to the contrary, will never work toward the interests of people in general, but only toward the interests of the corporations they work for. When the legally required goal of a corporation is profitability, anyone who becomes a head of a corporation must consent to their actions being directed primarily towards that goal, rather than any humanitarian or just one (profitability, after all, rarely aligns with humanity or justice). No matter how much a CEO or board of directors might in their heart of hearts wish to improve the lot of their workers or the people they generally exploit, they only come into their positions if they are already willing to put these better natures aside for the sake of the corporation's survival in the market. And should their consciences prevail, they would promptly be fired and mocked as weak and incapable.

This is why the claim that merely selecting better people to fill the positions in an institution can change the fundamental values of that institution is mistaken. An institution has its own values and priorities, which must be accepted by any person who fills a role in that institution before they would be allowed to do so. The institutional imperative results in a continuous vicious cycle whereby institutions are established with nominal goals, adopt the primary goal of survival, and then staff themselves with people already willing to carry out those nominal goals and necessarily the primary goal, and maintain this state for as long as possible, until collapsing.

Compare this to what I earlier called a group of individuals. Such a group would come together with the primary goal of solving a certain problem. However, unlike an institution, there would be no formal organization to the group that did not arise from the very character of the problem to be solved. The group would be recognized from the outset as a temporary, fluid system for dealing with the specific problem at hand. If the problem were a permanent one (for instance, waste management in a city) then the group would be constantly working, but would have no offices or formal rules. Rather, it would shrink and grow as needed, with procedures determined by the needs of any given moment. This would certainly be more difficult to maintain and run, but would ultimately be worthwhile, I believe, in that it would avoid any chance of corruption, as well as the risk of deviating from its stated purpose.

There are, of course, many other aspects which would have to be explained to account for how a dynamic, informal group could run any of the complex systems which make up modern society. The previous paragraph was meant simply to provide contrast to the way institutions work. At the least, I hope I made clear what I mean by the institutional imperative, and why it can lead to serious problems in society.

Thursday, May 3, 2012

The Institutional Imperative, Part 1

[Got my first really good night of sleep in a week today, so I'm finally back to blogging. Did y'all miss me?]

An institution is, loosely defined, a formal system for organizing human effort which has a permanent nature independent of the people who make it up. The reason for forming an institution is so that there is a centralized, legalistic authority which can make decisions necessary for completing the work the institution was established to do. Institutions are the traditional way of solving societal problems, from governing people and resources at the largest scales to running the local girls' hockey team.

However, as Clay Shirky so eloquently points out in this TED talk, institutions have a big problem. No matter what problem an institution is formed to solve, that problem is never the number one priority of the institution. Whatever the nominal purpose of the institution is, its main priority from the moment it is actually formed becomes self-preservation. No matter what problem the institution sets out to solve, the institution can't work to address that problem if it no longer exists. It's that election-year mentality that says that it doesn't matter how poorly the incumbent governs because if they don't win, they won't get to govern at all.

This is what I call the institutional imperative. It is an inherent feature of any institutional organization. And it is the reason for a great many of the problems in the world. It is responsible for the inhumanity of modern corporate capitalism, in which individuals are powerless to stop the cold financial logic of human exploitation and environmental destruction. It is likewise the feature which I believe is chiefly responsible for the counter-revolutionary fervor of the Soviet system and its descendants, whose slaughter of their own populations was truly inhuman.

Marxism-Leninism, which seeks to destroy class distinctions and the State through a specific series of political events (which are, it should be noted, completely opposed to both the spirit and letter of Marxian Communism as an ideological system), is incredibly vulnerable to the imperative because it is so blind to it on principle. Its very goal was to transfer all power into a single institution, the Communist State, so that it could be eliminated with a single blow once the proletariat was organized for self-sufficiency. What it tragically ignored was what would happen during the intermediate step of getting power from the many varied institutions of contemporary society into the single Communist State. Because its nominal goal was the ultimate elimination of the State, it was ideologically impossible for Communism to admit that any state established by a Communist Party was going to suffer from the institutional imperative, and have as its first priority its own survival. More and more repressive measures became necessary to maintain the "revolutionary" government, because if it ever fell, the revolution could never be achieved.

This mad state of affairs was possible largely because many people immediately assume that institutions are the only way to organize human labor, be it in a State, a corporation, a trade union, or a bureaucracy. In fact, most people assume this completely implicitly. People are almost never taught to consider the possibility that there are non-institutional solutions to societal problems. Although I will not go now into the alternatives, it should at least be recognized that there is such an assumption, that institutions have this feature which dictates a large chunk of their behavior, and that such behavior can be hugely destructive to humanity and the world.

Friday, April 27, 2012

Only Atheists Get to Grieve

I'm gonna write an angry post. I haven't slept well this week and can't get my thoughts together well enough to write something I'm not immediately invested in, so I'm gonna write this instead. If anyone is offended by it (and quite a few people ought to be) I can't apologize in all honesty. Five months ago I might have been able to, but not right now.

A few weeks ago I went to my great-grandmother's grave alone for the first time. I'd been there before with my family, but this was the first time I spent any great length of time there. It was a bizarrely warm March day, with a bright sun and green grass, rather than the usual Minnesota Spring blizzard. I sat by my grandma's grave for an hour that day. When I first got there, I was just looking around, making sure it was tidy. I thought about the prosaic aspects of being at a graveyard, of my bike ride from Minneapolis to Falcon Heights, of the previous times I'd been. Then I stopped avoiding my real purpose in being there, and I started trying to articulate my thoughts and feelings, for the first time, four months after she had passed away.

Up to that point, from the day she died December 4th, I hadn't cried. In fact, I hadn't really cried outside of watching movies in about a decade. And I hadn't cried at her funeral, or at my subsequent visits. And I didn't cry right then. But I sat down on the grass and started talking. I started saying mundane things about missing her, and finally coming to see her, and how strange it felt for her not to be around. And then the same part of my brain responsible for writing this blog kicked in, and I started drawing straight lines. Straight lines are what I do. I don't curve around inconvenient ideas (in as much as I can help it — we all have our biases). I try to think straight through all the relevant information I have in my head.

And a line of thought began to develop. My grandma was dead. Her body lay six feet beneath me in a coffin in the ground. This was a physical reality. Everything that had ever been her was in a box beneath the earth. I pictured her brain, as it was then, several months after her death. It certainly was not a pleasant image, but that was what everything referred to as "her" was now. All we are is a subset of the pattern of neuron firings in our heads. And that pattern had come to an end.

That pattern hadn't gone anywhere. It hadn't transcended matter. It wasn't part of a soul, or spirit, or chain of reincarnation. Everything that had been my great-grandma, that had experienced love and life and war and migration, all of that was now a pool of gray mush inside a very slowly crumbling skull. It had been sustained as part of a self-perpetuating chemical process for nearly a century, which had quickly degenerated and ceased to be. Now, for some people, that is an ugly and horrible thought, that that is all we are. But to me, it's heartbreakingly beautiful: this pile of gray mush pushing ions around via sodium and potassium exchange wrote the Bible, built St. Peter's, painted the Sistine Chapel, and composed the Ave Maria. That is a miracle.

And then my mind took the next step forward: all of that ability and potential and memory and personality was gone for my grandma. It had gone out like a candle, with barely even a wisp of smoke to show for it. It was gone. She was gone. She was gone, and she was nowhere, and she never would or could come back. There's a physical law that says as much. The laws of physics literally dictated that my great-grandma had ceased to exist for eternity. Except a huge host of highly deluded people thought otherwise.

Religious people believe in the eternal soul. They hold that there is some essential, everlasting part of us that continues to exist before life and beyond death. They believe that we are never really gone, and some of them even believe that we will join our loved ones in eternal paradise after death (or judgement, or whatever fairy tale they wish). But they're wrong. And they know they're wrong. How do I know that they know they're wrong? Because they grieve.

If I had the slightest shred of belief that my great-grandma was not well and truly gone for all eternity, I would not have shed a single tear that day. But as I came upon the above line of reasoning, I started crying. Sobbing. Huge, painful dry heaves. I sat on the grass for forty-five minutes straight and cried into my hands. I cried because I knew my grandma was gone. I knew she was gone. I knew it right down to my bones. I knew it the way a baby zebra knows its mother is gone after finally finding her lion-eaten corpse. It was beyond thought, beyond culture or memory or ideology. It was chemical.

And in my grief came also anger. Outrage, in fact. Outrage at the fact that religious people would dare to grieve at a funeral. That they would dare to wail and moan about the supposed "loss" of a loved one. The hypocrisy of it galled me. Had I any hair, I would have been tempted to tear at it. To claim that there is an afterlife where your relatives wait for you before an eternity of bliss, and then to bemoan their passing struck me as obscene. And on clear-headed reflection, I can do nothing but stand by that line of thought.

If a religious person thinks that a deceased person is merely in another place, where they themselves will eventually go, then grief is not simply unnecessary, but nonsensical. We do not grieve when our loved ones move away. We do not grieve when the brother we're angry at leaves town and we know we likely won't speak to him again. We do not grieve when we leave a job, knowing we'll never again see our coworkers. We might be sad, or disappointed, or upset, but we do not grieve for separation. We grieve for death. Because we know right down to our DNA what death is, and all the religious platitudes in all the holy books read by all the priests and sages can't stop us knowing it. And I think that claiming that "it's God's plan" and "she's in a better place" is the absolute worst of sanctimonious, hypocritical delusion.

If you want to claim that you are religious, and believe in a God, or a Soul, or an afterlife, then you do not get to fucking grieve. You get to be sad and annoyed and impatient, because you won't get to see your loved ones for a little while. But what is the remainder of your life compared to eternity? Nothing. Literally, mathematically nothing. So just don't. However, if you accept your grief for what you know it to be, give up your childish insistence on magical thinking and ancient fairy tales. Accept that the universe is a system of particles interacting in infinitely complex ways, guided by blind, stupid natural laws which still somehow manage to produce the absolute miracles of thoughts and songs and love and life. If you insist on keeping your holy books and imaginary creatures, I won't judge you. But only atheists get to grieve.

Tuesday, April 24, 2012

Jaynesian Consciousness, or Why Consciousness Is Not What You Think It Is

Many people use the term "consciousness" to mean a huge variety of things. In this talk, John Searle attributes consciousness to his golden retriever. I have had conversations with people who say that consciousness is a matter of degree, all the way from plants up to humans. The idea that consciousness is somehow a fundamental and pervasive feature of biology is very common — but it is simply wrong.

If you've ever driven a car for any length of time, you've almost certainly had the experience of driving for many minutes at a time, and only coming to realize very near the end of your trip that you'd reached your destination. For miles and miles, you monitored traffic, changed lanes, and took turns and exits, all while blissfully daydreaming or listening to your favorite music. The realization that you had arrived might have come as a bit of a shock, a record-skip from the last moment you were conscious of driving to that moment, when you became conscious of it again. The entire time, your brain faithfully carried out all the complex, precise movements required to keep the car on the road and going in the right direction without your conscious awareness.

If consciousness were somehow fundamental to human cognition — or cognition in general — this would not only be impossible, it would not even make sense! However, it is very possible, and extremely common! Nervous habits are often completely outside consciousness until pointed out. The vague recollection of dreams — not to mention their very existence — is another place where consciousness is shown to be fuzzy and transitory. Various drugs that can destroy the ego or cause us to have blackouts are similar. Hypnosis and schizophrenia, phenomena that suppress or eliminate conscious control and replace it with hallucinatory or external control, would be just as absurd. Spirit possession, found in a multitude of cultures, would require an actually supernatural explanation, rather than a more prosaic psychological one. The very notion of inspiration is entirely unrelated to consciousness, in fact! Invention and intellectual discovery, often naively identified with conscious reasoning, are in fact almost always the result of sudden flashes of insight which come upon one in the shower or while taking a walk, rather than something consciously worked out piece by piece from premises.

Daniel Dennett is fond of saying that consciousness is an illusion. I think that's too strong. It's more accurate to say that the fundamental and all-encompassing nature of consciousness is an illusion. What seems to us the basic operating principle of the brain is actually a much more limited object. Others subsume reason, logic, memory, understanding and planning under consciousness. However, these are all separate things. The term "consciousness" is best reserved for the self-introspecting ability that seems unique to humans. It is that constant stream of language we hear in our heads, almost without interruption, which allows us to form a sort of internal mind-space, and to give ourselves declarative commands in the form of decisions and arguments.

Various animals share almost all the cognitive features of humans in some combination. Dolphins are highly intelligent, playful and social. The other great apes share our sociability, and to some extent our language. Many animals, from chimps to pigeons, can either learn or be taught to recognize themselves in mirrors. Dogs and crows can recognize individual humans and react to each in unique ways. Chimps, crows and some fish make and use tools. Ants, termites, spiders, and birds build homes. In fact, it is extremely difficult to come up with a human faculty which is not also expressed by some animals.

One ability which very probably does distinguish us from all other animals is our ability to model the world around us in certain ways. An important part of brain function in any animal is modeling the world it inhabits. This allows it to plan and execute movement in useful and beneficial ways. Without a mental representation of the world, movement would be meaningless and uncoordinated. An animal's brain must know — that is, represent — the details of its environment and its own body in some way so that it can interact with it. Many intelligent animals, including us, take this a step further. We do not merely build models of our physical environments, but also of our social environments. It must be the case that, up to some point in our evolution, humans went no further than this. But eventually we took it another step further: we made models of mental environments. That is, we created models of how minds work, presumably whenever it was that we figured out that other humans had minds.

And that is exactly what consciousness is. It is the ability to make abstract, multi-level models of minds, including our own. This ability is granted to us by the linguistic relationship between sounds and meanings in combination with a cultural focus on self-hood and narrativity ("narrativity" referring to our habit of telling stories about ourselves and events around us regardless of whether such stories actually relate in any way to reality). This ability to simulate the mental world not only lets us generate new ideas and inventions, it also lets us model the inner world of other people, to guess their thoughts and motivations. Here I don't mean empathy, which chimpanzees almost certainly share with us, but rather an ability to very literally read another person's thoughts, to form words in your head which are likely similar in meaning to the words they are forming in theirs.

This analogical mind-space, and the cohesive sense of self that it leads to, was first described by Julian Jaynes in his criminally misinterpreted and completely underappreciated book The Origin of Consciousness in the Breakdown of the Bicameral Mind. Daniel Dennett is one of the few thinkers on consciousness who openly acknowledges Jaynes's influence on his own ideas, but many others have proposed effectively identical characterizations of consciousness, most notably the neuroscientist and philosopher Thomas Metzinger, whose monograph Being No One lays things out in great detail.

The more I come to understand consciousness, the more tentative my grasp on my own consciousness feels. I become more and more aware every day of just how little of my everyday life I am conscious of. Malcolm Gladwell's Blink, derided by many as anti-intellectual, is in fact an excellent document on the limits of our conscious thought, and at the same time of the power of our brains as a whole. Thinking, reasoning, learning, talking, inventing, discovering, and, for the most part, acting are all non-conscious events. Consciousness is just a curved mirror held up to our mental world, reflecting itself and its surroundings.

[Edit: Thanks to my friend Rob for pointing out a very important mistake I made, whereby I failed to distinguish modeling the world from modeling the mental.]

Monday, April 23, 2012

On the Fundamental Interconnectedness of Science

Scientifically illiterate people tend to make wild claims about new discoveries after reading third-hand articles on CNN.com or in the Fortean Times. Creationists have been doing this with every tiny bit of contradictory biological evidence since Darwin. New Agers do it with quantum mechanics. Regular people do it with tiny advances in technology or overblown predictions from hack journalists.

A stark example was the big scare at CERN over faster-than-light neutrinos. When the results were publicly announced last year, a chorus arose in indignation at the clearly malicious lie the academy had been spreading for the last century, that the speed of light was a fundamental speed limit in our universe, and all the physical effects this implied. Short-sighted people, believing themselves, at their Dunning-Krugeriest, to be incredibly farsighted, proclaimed a new age of physical theories and hyperdrive travel. They scoffed at the closed-mindedness of science in making such outrageously doctrinaire claims as that there were limits on the movement of objects in space! There were certainly no limits to the human spirit! ...or some such.

What these breathless blowhards don't understand is that no one seriously considered the possibility that said neutrinos were traveling faster than light. A few theoretical physicists came up with some pet models that might allow a special variety of neutrino to do something weird, but that's because they have nothing better to do. The uproar in the physics community was not about the possibility of faster-than-light travel (many people make claims about discovering such things all the time), but rather about how a huge, extremely carefully set up, and thoroughly verified experiment could produce such results! What sorts of error could be the cause, and could that error propagate into other results in other experiments? As it turns out, it was a loose fiber optic cable, a simple human error, but one incredibly hard to catch on practical grounds. Nothing was traveling faster than light.

Here is the salient point, however: all of those people hailing the new age of faster-than-light physics failed to understand that if it were possible for something to travel faster than light, the universe would not look the way it in fact does. Distant galaxies would not appear as they do. Our computers would not work how we expect them to. Science is an intricate machine: one cannot simply remove a component and expect the thing to keep on working.

Creationists do the same thing. By claiming on the one hand that evolution does not occur, or that the Earth has only existed for six or ten thousand years, and on the other hand continuing to drive cars and use cell phones and watch television, they fail to understand that the same natural phenomenon that allows new medical treatments to be developed allows dinosaurs to evolve into ducks. They don't understand that the science which tells us how old early hominid remains are is the same which allowed us to build an atomic bomb. And of course they do not deny the existence of genetic therapy or nuclear weapons. But they do deny evolution. (Of course, creationists do not really hold a principled position at all — they pick and choose their beliefs based on authority rather than reason.)

New discoveries in science certainly can obviate old theories. The connection between germs and diseases completely destroyed older theories of disease. But not all discoveries are like that. While it's certainly true that Einstein's general theory of relativity was a vast improvement upon Newtonian mechanics, it was not a wholesale replacement of it. For everyday speeds and for measurements below the astronomical scale, Newton's laws were and are still perfectly adequate. That is, the level of description at which they work, while inadequate for measuring the orbit of Mercury, is just fine for balls and ramps, or even bridges and skyscrapers.

The mistake many people make, though, is in seeing the universe in the opposite way. They assume Newtonian mechanics is more fundamental, because it is more intuitive. They think that their ingenious "racecar headlight on a moving train" thought experiment demonstrates that you can travel faster than the speed of light. But they still use the GPS on their phone, which would not need careful timing corrections if relativity didn't work as Einstein described.
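To put rough numbers on that last point, here is a back-of-the-envelope sketch. The orbital radius, Earth's gravitational parameter, and other figures are textbook values I am assuming for illustration; this shows the size of the relativistic effect, not how GPS receivers actually apply their corrections.

```python
# Rough estimate of the relativistic clock drift on a GPS satellite.
# All constants are approximate textbook values, not GPS specification data.

GM = 3.986e14        # Earth's gravitational parameter, m^3/s^2
c = 2.998e8          # speed of light, m/s
R_earth = 6.378e6    # Earth's equatorial radius, m
r_sat = 2.656e7      # GPS orbital radius, m (about 20,200 km altitude)
seconds_per_day = 86400

# Special relativity: the satellite's orbital speed makes its clock run slow.
v = (GM / r_sat) ** 0.5                                   # roughly 3.9 km/s
sr_drift = -(v**2 / (2 * c**2)) * seconds_per_day         # about -7 microseconds/day

# General relativity: weaker gravity in orbit makes the clock run fast.
gr_drift = (GM / c**2) * (1/R_earth - 1/r_sat) * seconds_per_day  # about +46 microseconds/day

net = sr_drift + gr_drift                                 # about +38 microseconds/day
print(f"net clock drift: {net*1e6:.1f} microseconds per day")
print(f"uncorrected ranging error: {net * c / 1000:.1f} km per day")
```

A drift of a few dozen microseconds per day translates into roughly ten kilometers of ranging error per day, which is the difference between a navigation system and a paperweight.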

When science throws something we don't like at us, like quantum indeterminism, special relativity, or Darwinian evolution, we cannot simply choose to ignore it while accepting all the parts we don't dislike. All the various scientific fields and theories are deeply interconnected and interdependent. This does not entail that they are all correct, of course, but one cannot simply decide that something "must be wrong" without independent, scientific reasons for thinking so. Doing that rather puts you in the position, to quote Tom Lehrer, of "a Christian Scientist with appendicitis."

Sunday, April 22, 2012

Why I No Longer Argue with Libertarians

There is an important distinction between lower-case L libertarians, of which I am one, and capital L Libertarians, of which Ron Paul is one. The former is simply the philosophical position that liberty ought to be maximized. In practice, this implies the elimination of coercive State government and exploitative capitalism (setting aside anarcho-capitalism, whose coherence I will address in future). The latter is an American invention, and is a hybrid of Austrian economics and Randian political philosophy (if it can be called such). To compare and contrast these two positions: the former is anti-State and anti-capitalist; the latter requires a state to enforce property rights, and is extremely pro-capitalist and against government intervention in the economy. The former is generally strongly federalist and socialist; the latter largely individualist and laissez-faire. Both oppose the intervention of any form of government into people's private lives, while allowing some measure of positive enforcement of certain rights, although both view liberty as essentially negative (and many would argue that the distinction is meaningless).

There is another important distinction that needs to be made. It is between what I will term capital C Capitalism and lower-case C capitalism. Capital L Libertarians are also, necessarily, capital C Capitalists. That is, they hold the belief that capitalism is a desirable state of affairs, and a positive good for the world. Lower-case C capitalists are merely those people who own capital. They are business owners, CEOs, managers, bankers. Lower-case C capitalists can be Libertarians, conservatives or liberals, progressives or reactionaries. The former is an ideological position; the latter is a position in society.

Left socialists often rail against Libertarianism, which is fun to do, no doubt. However, Libertarians  don't actually matter in society. Maybe in the future, when the Libertarian Party has a majority in the Senate, we can worry about their ideas. The real opponent of the left, though, is not the Capitalist Libertarian. The real opponent is the capitalist. So while it's intellectually interesting to get into shouting matches with the local Randroids, anarchists and other leftists should really save their energy, both physical and intellectual, for opposing actual capitalism! Arguing against right-wing Capitalists is easy. What's hard is convincing a liberal capitalist why stateless socialism is desirable (not to mention feasible). That's why I'm not gonna argue with Ron Paul supporters and Ayn Rand fans anymore. It's a waste of breath, both on principle and in effect. My task from now on will be to convince capitalists of their error.

Friday, April 20, 2012

Why Syndicalism

Everybody works. Or, at least, everybody works when artificial unemployment doesn't exist. And I say "everybody" because I always speak in hyperbole. In any case, the overwhelming majority of people seek work of some sort. Just recently, my mom lost her job, and instead of spending all day sitting on the couch watching Home & Garden television (which she could easily afford to do), she went and got a job that pays her barely anything. People want to be formally occupied by something they believe brings benefit to them and their family, or to society at large. This tendency takes various forms outside the capitalist structure of "employment". Artists create art regardless of whether they get paid for it. People who love cooking spend hours perfecting recipes for no one's enjoyment but their own. In a world that didn't care about "marketable skills" and didn't penalize risk-taking with destitution, people would be able to occupy themselves with whatever work they were naturally inclined to do.

In a capitalist society, where people must balance their desires against the demands of the market, many work jobs they do not enjoy, the most unpleasant of which are usually the lowest paying and most exploitative. Such people should, and historically often did, organize into guilds or unions to demand (and occasionally win) increases in wages, reduction in working hours, and improvements in working conditions. These unions are the perfect place to foment radicalism, since workers are the most exposed to the oppressive and exploitative nature of capitalism, and often suffer the most at the hands of the government once they organize. From a utopian perspective (by which I mean, from the point of view of an imagined future free society) such unions would constitute democratic worker councils in their respective industries, certifying members of professional groups and organizing allocation of work and resources. In the modern world, they are means of resisting capitalist exploitation and social oppression.

Not everyone, of course, is keen on resisting exploitation, because they do not see it as such. Particularly in America the myth (that is, the misunderstanding of economics and probability) that anyone can get rich tricks people into aligning their perceived interests with the capitalist class, and the illusion of democracy allows them to believe that the government exists to support them, rather than to support the capitalist system. They believe that fighting for their own, realistic, interests will jeopardize their chances of ascending the social or corporate ladder on the off chance they come up with the better mousetrap. The refusal to admit the existence of a sharp class division between workers on the one hand and owners and rulers on the other leads them to have disdain for anyone who recognizes, and fights against, it.

The disdain many people have for unions specifically is due to the essentially capitalist trades unions whose leaders often have more in common with the bosses than with the workers, and of course to the stain of Soviet Communism on the entire notion. (The Soviet Union was of course in no way communist, but was rather State Capitalist to the core.) When workers are divorced from the output of their labor, whether by capitalist profiteering or state mandates, the tendency to lose personal interest in their work is increased and reinforced, because the work is no longer theirs, either in methodology or results. By contrast, work done by democratically organized, voluntary worker collectives instills a sense of pride and ownership in the work which produces both better results and stronger communities. It is this aspect of union organizing which leads me to believe that syndicalism — that is, the organizing of the working class into unions based on industry or geography — is the most practicable way of achieving revolution.

Through such unions, workers, who make up the vast majority of society, are shown the power of democratic organizing, the power of their numbers in the face of capitalist and government oppression, and the dignity and satisfaction to be found in controlling the product of their own labor. I will discuss in a future post why I think syndicalism is the best way of instilling revolutionary consciousness in the working class, but it is definitely not the simplest or most glamorous way. It involves working shitty jobs, taking large personal, financial and health risks, and seeing little progress or huge reversals in fortune. The main point is, though, that many workers do this every day without any political motivation anyway, and the addition of that motivation has been proven historically to be easier and more effective than the creation of entirely new, theory-motivated political organizations.

Predicting the Future (and Other Abilities We Don't Have): Part 1

Humans are good at lots of things.  This series of two posts is about a number of abilities that are not among those things.  Part 1 discusses the fashionable tendency to make guesses about the future course of society and then heavily imply (without usually stating outright) that these guesses constitute accurate predictions.  Part 2 discusses the trouble we tend to have in viewing ourselves in a historical context.  This makes it easy for us to believe that we happen to live in revolutionary times, or that none of the old rules apply.  These types of charlatanism can have far reaching consequences, as it turns out.

Part 1:  Extrapolation is an Art, not a Science

One of the ways to get people to pay attention to your predictions is by preaching the good news: eternal life.  Ray Kurzweil has made a number of predictions about the bright future that technological growth will bring us, with this being by far the most notorious.  Although his version of immortality, uploading ourselves onto computers, differs somewhat from the standard Christian view, one can't help but notice the religious flavor of this prediction.

Kurzweil's other predictions for this century include, yes, flying cars, but also reverse-engineering the human brain, nearly perfect simulations of reality (for our digital selves to live in), and, crucially, an AI that is more intelligent in every way than all humans combined.  He has freed himself from any responsibility to explain how these things will be accomplished.  Nobody has the slightest idea how to do the interesting ones.

I will defer the actual technical explanation of why these are truly goofy predictions to authors who have basically handled it: Steven Pinker, Douglas Hofstadter, PZ Myers, and many others have noted how technology and scientific discovery don't progress in the way Kurzweil has claimed.  Instead, I want to draw attention to the fact that these attempts to predict the future are actually a very human tendency.

In the early 1960s, progress in programming machines to do certain tasks (like proving theorems) gave researchers supreme confidence that essentially human-like A.I. would be a solved problem within 20 years.  What they should have said was that, at that rate of progress, it would be done.  What actually happened was that computers became more and more sophisticated but left AI behind: the problems researchers were attacking were just much harder than they had anticipated.  Even relatively simple tasks like constructing grammatical sentences proved to be far out of their grasp.  Now, the most successful language tools largely involve throwing our technical understanding of language out the window and using probabilistic models.
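To give a flavor of what "probabilistic models" means here, the sketch below builds a toy bigram model: it counts which word follows which in a tiny made-up corpus and then babbles accordingly.  It is a deliberately minimal illustration of the general idea, not a description of any particular language tool.

```python
# Toy bigram language model: no grammar rules, just counts of observed word pairs.
from collections import Counter, defaultdict
import random

corpus = "the dog chased the cat and the cat chased the mouse".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(word):
    # Sample the next word in proportion to how often it followed `word`.
    counts = following[word]
    return random.choices(list(counts), weights=counts.values())[0]

word = "the"
sentence = [word]
for _ in range(8):
    if not following[word]:   # dead end: no observed continuation
        break
    word = next_word(word)
    sentence.append(word)
print(" ".join(sentence))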

Economies are not spared from erroneous predictions about the future.  Kurzweil and others jumped on the tech-boom bandwagon, claiming in 1998 that the boom would last through 2009, bringing untold wealth to all.  Maybe they should have been reading Hyman Minsky instead of Marvin Minsky.

Enough about the good news.

The other way to get people to pay attention to your predictions is by telling them the bad news: social breakdown and the end of the world.  Overpopulation is an issue along these lines that receives attention disproportionate to its actual seriousness, thanks to the claims made by its scare tacticians.  Among these claims is the belief that we are imminently reaching the carrying capacity of the Earth, at which point starvation, crowding, and wars over scarce resources will tear human civilization to pieces.

This hypothesis relies on progress not happening, the opposite of the singularitarians' assumption.  But the very same question can be asked of both: how do you know?  This is where it becomes apparent that extrapolating patterns is an art for the Kurzweils of the world.  If you extrapolate one variable, you get intelligent machines.  If you extrapolate another, you get the end of the world.  But if you extrapolate yet another, say the total fertility rate (TFR), it doesn't look so scary.  Defined as the average expected number of children born per woman, the world's TFR has been steadily declining in the post-war period, from almost five in 1950 to about 2.5 today.  As it approaches two, the world population approaches equilibrium.
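To make the arbitrariness concrete, here is a deliberately naive sketch of two extrapolations from roughly the same era of data.  The TFR figures are the ones quoted above; the 2012 population and growth rate are rough values I am assuming for illustration, and neither calculation is meant as a real demographic model, only as a demonstration that the conclusion depends on which variable you choose to extrapolate.

```python
# Two naive extrapolations from the same era of data; rough figures only.

# (1) Extrapolate population at a fixed growth rate and the curve looks scary.
pop_2012_billion = 7.0
growth_rate = 0.012                      # roughly 1.2% per year, early-2010s rate
pop_2100 = pop_2012_billion * (1 + growth_rate) ** (2100 - 2012)
print(f"fixed-rate extrapolation, year 2100: {pop_2100:.1f} billion")

# (2) Extrapolate the total fertility rate instead and the picture changes.
tfr_1950, tfr_2012 = 5.0, 2.5            # figures cited in the post
slope = (tfr_2012 - tfr_1950) / (2012 - 1950)
year_at_replacement = 2012 + (2.0 - tfr_2012) / slope
print(f"linear TFR trend reaches ~2.0 around {year_at_replacement:.0f}")
```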

Phony overpopulation scares are common in the history of anglophone countries, from Thomas Malthus to American anxiety over the "yellow peril" around the turn of the century (see Jack London's The Unparalleled Invasion for a rosy portrait of the future).  Wealthy people and business interests are often the biggest proponents of the theory that population growth is the largest problem facing the world.  Conveniently, it's one of the only major global issues that isn't their responsibility.  In reality, the only reliable way to lower growth rates is to facilitate the economic growth of poor countries to the point where people there have a decent standard of living.

The danger in blowing the perils of overpopulation out of proportion is that it leads people to prioritize population control above reproductive rights and, more generally, morality. If it really is that serious, then we have carte blanche to do whatever is necessary. The bottom line is that, despite the very real possibility of overpopulation becoming an issue, there is no reason to think it is serious or imminent enough to change the goals we would otherwise have. Our immediate task is still figuring out how to get communities to lift themselves out of poverty while handling climate change and other real crises.

We have a predisposition to weigh the likelihood of possible futures, either good or bad, based on how exciting or terrifying they are instead of how probable they are.  Anyone interested in solving problems should be aware of this bias.

Part 2 covers a second, related bias that people have: the impression that the times we currently live in offer wider and more revolutionary possibilities than existed in the past.  This impression, created by the fact that we live now, not in the past, is the source of huge blunders and the general abandonment of reason.

Thursday, April 19, 2012

Bringing a Provision to a Principle Fight

An oft noticed (but seldom described) difference in the way people discuss policy can prevent real progress from being made.  Consider the debate over the Patient Protection and Affordable Care Act, a major step for many progressives.  An advocate will typically stress the effectiveness of a particular policy in achieving a goal, such as universal coverage, better health outcomes, or care for uninsured children.  The counter-argument, however, will typically involve exhortations about the value of limited government powers, fiscal responsibility, and individual responsibility.  Note that, whichever argument is right, the two people having this debate have not addressed what the other is saying.  Nor will you ever hear statements like "individual responsibility is more important than protecting the welfare of children," proclaiming the superiority of a value over a stated goal.

Economic discussions can involve a similar pattern.  Consider the (relatively value-neutral) argument that financial regulation is necessary in order to most effectively prevent financial crises while maintaining a robust financial sector.  Trumpeting the virtues of deregulation and the free market in opposition, however virtuous they may be, does nothing to address the actual question: how do we ensure that financial crises don't happen?  Objecting to spending on particular projects by referencing wealth redistribution or fiscal profligacy doesn't address the details of the goals of the project, which could range from providing unemployment insurance to paying for veterans' disability care.

To sum up the point of these examples: principle-based arguments constitute a totally different kind of rhetoric from consequentialist ones. This can lead to a real impasse, particularly around sensitive social issues like the prevention of sexually transmitted diseases, where certain religious groups object to protective measures. In these cases, someone's religion prevents them from compromising. Effects are irrelevant to someone with a biblical mandate.

This does not always fall neatly along political lines, either.  For example, activists on the left opposing Israeli settlement expansion are split between those supporting a two-state compromise settlement and those in support of a one-state solution.  The pragmatic argument for a two-state settlement is strong, given the nearly global consensus.  On the other hand, a particular interpretation of the right of return of Palestinians leads many to support a one-state solution on ideological grounds.  One side has nothing to say to the other, besides making an appeal to compromise or an appeal not to compromise one's values.

Ultimately, anybody who endeavors to meaningfully discuss an issue has to have both a sense of their own values and a willingness to compromise and be pragmatic. Insisting on judging a policy based only on whether it adheres to a particular set of values, besides providing a convenient excuse for not looking at the likely effects of said policy, is not a basis for useful discussion. There is a real debate to be had over whether examining consequences or adherence to principles provides a better basis for ethics. But politics is fundamentally about how people with differing values compromise and form policies whose effects are acceptable enough to all parties involved. Ignoring this, at best, turns political discussions into a morality play. At worst, it prevents any useful communication across boundaries.

Wednesday, April 18, 2012

Who Is a Moral Agent?

There is an important assumption about moral agency which always goes unstated in political discussion. It is this: institutions have moral agency. I could not disagree with anything more strongly. I believe that this assumption is responsible for a great deal of the evil that happens in the world. The assumption is unquestioned and unacknowledged because people do not think carefully about where moral agency can rest. Liberals who are outraged by the Citizens United verdict have no problem treating the government as a valid moral agent, capable of killing, stealing, and taking on social projects. Likewise, right-wingers want to treat huge, fascistic corporations as equivalent to human beings, but balk at the idea of the government doing anything which doesn't put money into their wallets.

My take on this matter is simple: moral agency must lie with individual human beings. This is a minimal assumption, and I will take it for granted. However, humans do not live in isolation, and collective decisions must be made. As such, there must be some provision for super-individual moral agency. This is what I refer to as a group of individuals, or simply group. I do not use this term in a simple way, meaning any attempt at decision making involving more than one person. Instead, I use it in a technical way, to mean a voluntary association of individuals, none of whom relinquish or subvert their own moral agency, but merely use some method to determine the prevailing moral judgement of the group. The methods by which a group can come to such a determination are manifold: voting, by simple or super majority; formal debate; consensus building; and many not yet invented, I'm sure.

I contrast the idea of the group with the idea of the institution. An institution is also a super-individual decision-making body. However, it is not a voluntary association of individuals in the sense above. In fact (as I shall discuss in a future post) institutions have priorities and prerogatives completely independent of the will of any given person. Obviously, decisions within institutions are ultimately made by individuals. But such an individual must be willing to act, and must in fact act, in the interests of the institution rather than in their own individual interest, or they would not be placed in such a position to begin with. A perfect example of this comes from a friend of mine who was tasked with going to a State Legislature meeting on behalf of the healthcare non-profit he works for. The people in charge had decided they would side with a certain political bloc which my friend opposed. However, it was his job to go and relay, and argue for, the position of the non-profit. His individual opinion of the matter at hand did not matter in the slightest. All that mattered was whether or not he could accurately relay the prevailing opinion of the institution he was a part of.

I do not think that the suppression of one's own moral agency in such a circumstance is conscionable. Be it as an employee, a soldier, or a politician, one should not have to abnegate one's own moral agency to serve a greater good. Such a good can be served voluntarily, and morally, by acting as part of a group of individuals, whose decision you can protest and even reject with no artificially contrived consequences to you, such as destitution or imprisonment.

Tuesday, April 17, 2012

My Solution to the Fermi Paradox

I think that the most likely solution to the Fermi Paradox is that, while life is exceedingly common in the universe, intelligent life is incredibly rare. In the four and a half billion year lifespan of this planet, life has existed on it from almost the very first moments. However, not until the Cambrian explosion about 530 million years ago did complex multicellular life exist. That means that for nearly four billion years, and starting from nine billion years after the birth of the universe, Earth contained nothing but single-celled organisms and colonies of such. And it is not until about 2 million years ago, 0.04% of the history of life on Earth, that the first technological intelligence emerged. And even that was simply monkeys hitting rocks against each other in a clever way! Anything more complicated than a stone arrowhead was invented in the last ten thousand years!

And let's look at how unlikely it is that humans (or any other hominid) ever even achieved technology. We had access to wheat, barley, and rice, easily domesticable plants that produced high yields and had good nutritional content. We had access to large animals with very particular internal dominance hierarchies, which had not evolved alongside us so recently as to attack us on sight, but were not so distantly separated from us as to be completely unwary at the sight of hunters with spears. (Here by "we" I am referring to any subset of humans who had such access — Jared Diamond goes into great detail as to who in fact had access to what.) We had access to workable stone, copious woodlands, and various ore deposits. We had an abundance of fresh water on a planet with an atmosphere suitable for lighting fires. Our luck in the development of our culture and eventual civilization was astounding. We were never subject to an extinction-level impact or eruption (although it seems that we came damn close).

I think that such luck is not only astounding, but in fact astronomical. While life seems to have no trouble at all finding a place on a planet like the Earth (and perhaps on many other types of planets as well), technological civilization seems like an absolutely, ludicrously unlikely event. It has happened exactly once in the four and a half billion years of Earth's history (for about 0.0002% of that time), or once in the 530 million years in which complex life has existed (about 0.0019% of that time). So I would imagine that life is in fact very common in the universe, although almost exclusively in the form of single-celled creatures living on rocks and in oceans. Very, very, very rarely one would find a planet with some sort of multicellular life on it — simple plant-like creatures, or molds of some sort. And once in an unimaginably huge while, one might expect to find a planet where technologically intelligent life once existed. Finding a planet where technologically intelligent life exists concurrently with us seems depressingly close to a fantasy.

[Edit: My percentages were two orders of magnitude off! They were simple ratios, not percentages. Thanks to Scott for pointing this out.]
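As a quick sanity check on those figures, here is a minimal back-of-the-envelope calculation in Python. The 10,000-year figure for technological civilization is my reading of the "last ten thousand years" remark above, so treat all three numbers as rough assumptions.

    # Rough sanity check on the ratios quoted above (all figures approximate).
    civilization_years = 1.0e4   # anything beyond stone arrowheads: ~10,000 years
    earth_life_years   = 4.5e9   # age of the Earth, and roughly of life on it
    complex_life_years = 5.3e8   # time since the Cambrian explosion

    print("Share of Earth's history: %.4f%%" % (100 * civilization_years / earth_life_years))
    print("Share of complex life's history: %.4f%%" % (100 * civilization_years / complex_life_years))

Both come out to the tiny fractions quoted above.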

Monday, April 16, 2012

Two Types of Free Will, part 3

As my previous post argued, the position that free will is somehow essentially true in a naive sense suffers from the problem highlighted in this Dilbert cartoon. It is an incoherent and meaningless way of talking about free will. However, it seems obvious (to me, at least) that there is some sort of free will at play. Otherwise, why would we even have such a concept? If we were mere automata carrying out our inherent programming, why would we have any reason to think about such a thing as free will? Hence, there is free will in some sense, or what I defined as phenomenological free will.

The description of phenomenological free will I gave in my first post on the subject relied heavily on the concept of consciousness, and I think it is exactly consciousness that makes free will a valid concept. As I will go to some length to describe in a future post, consciousness is the self-reflective model of the world we use to predict the future, including the future of our own mental states. It is also the tool by which we invent a narrative for the events that go on around us, sometimes referred to as confabulation. We instantly and instinctively come up with stories about our actions, thoughts, motivations and surroundings, often with absolutely no relation to truth or reality. Many cognitive fallacies are driven by this storytelling ability, such as the fundamental attribution error. We prefer reasons to be narrative rather than empirical. We need to know why things happen, not just that they do, even (especially?) if the why doesn't exist or is completely made up.

This exact ability applies as much to ourselves as to other people or objects. We come home from a long day at work and yell at our significant other over some petty transgression, and rationalize it by saying that they were annoying and we were tired and it's not our fault anyway. Every non-philosopher has said to themselves that they never intended to yell, and apologized afterwards. However, I contend that that impulse to deny volition isn't a mere face-saving exercise, but is rather precisely correct. We yelled completely automatically, because that is what our brain decided was the correct behavioural response to the situation. Our confabulation ability, however, even as it watched us yelling, was coming up with retroactive reasons to start yelling, and since we became consciously aware of yelling at the same time as we became conscious of our confabulation (since the confabulation is, in a very important sense, our consciousness) we go on to believe that we chose to yell of our own free will.

If we were less tired when we came home from work, our conscious mind might have been quick enough to notice that we were getting ready to yell, and would have stopped us from doing so. That, it seems, is another function of consciousness (although, of course, not exclusively of consciousness). Consciousness allows us — if we have time — to stop an action we notice we are about to start. When you reach for a hot skillet while cooking, you don't stop reaching (and thereby prevent burning your hand) until you actually look over and become aware of what you are about to do. Most telling of all, sometimes you don't stop reaching! You helplessly watch yourself proceed to grasp the burning hot skillet and burn your hand! Where is your free will then? This is a case of your brain going about the work it knows it needs to do, completely outside your conscious control, and your consciousness not working fast enough to stop it from making a grave (and painful) mistake.

I won't delve into what "we" and "chose" refer to in the above paragraph (as those are both profound questions in their own right), but on the assumption that whatever it is we refer to when we say "I" is a subset of the function of our brain, we can say that "we" are capable of contributing some influence on our actions, but that for the most part our brain goes about its business completely without "us", until some sort of conscious decision needs to be made — perhaps one too complicated for our animal brain to figure out on its own. However, we should not despair! After all, "our" interests are almost always in line with those of our brain and body. So the limited, phenomenological sense in which we have free will is enough, even if it's a confabulation. For myself, I'm willing to trust my brain to take care of itself, and its passenger, "me".

Saturday, April 14, 2012

Two Types of Free Will, part 2

Previously, I set out two distinct phenomena which could be referred to as "free will". I here continue to contrast them, and attempt to show why one must be the case, while the other cannot be.

Phenomenological free will, I contend, is obvious, self-apparent, and completely, empirically true. It is very hard to find someone who will argue that they do not choose their own actions on a daily basis. It requires extremes of brain-altering drugs and abusive behavior to get someone to lose the sense that they are in control of their actions (note, however, that it is in fact possible to do so). The impression I get is that when most people hear someone argue that there is no such thing as free will, they think that the argument addresses phenomenological free will. And, of course, if this were the case, then arguing against free will would be lunacy. It's just that, as with the term "consciousness", no one bothers defining what they mean, so people end up talking at cross-purposes.

The very idea of metaphysical free will, on the other hand, suffers from incoherence in a non-magical universe. If there isn't a soul or spirit pushing and pulling the cords in our pineal glands, then where does this locus of decision reside? One cannot simply say "the brain", because the brain is a monstrously complicated system, segmented into an even more monstrously complicated collection of subsets, down to the connections between individual neurons, each portion of which considers input from sense organs, bodily nerves, and other portions of the brain. There is no "place" where a decision is made. The brain works as a whole system directing action, and conscious awareness of such decision-making is limited and after the fact.

At any level of description — physical, chemical, interneuronal, conscious — what happens in the brain is either completely random or theoretically predictable. Quantum effects do occasionally tip the scale and something weird happens, but unless you posit that that random weirdness is magically motivated, it can in no way be said to be willful. The interactions between neurons are far less random, and can be described and calculated fairly accurately, and interconnected systems of neurons can be isolated by structure and function. So, again, there is nowhere for this metaphysical decision maker to reside.

From a psychological perspective, the case is even more dire! You do what you are inclined to want to do. You take that action which the sum of your habits and motivations does in fact motivate you to take. If you chose to go running instead of eating that tub of ice cream, it's not because you are a free agent in a libertarian universe capable of making any logically possible action. Rather, your sense of guilt over not exercising recently, your motivation to look and feel better, and your desire to be healthier as you get older overcame your desire to eat delicious ice cream and feel aesthetic pleasure for a few minutes. You had these various motivations wrestling inside you, and your brain finally computed that the former motivations were more pressing than the latter, and sent the balance of these desires to conscious awareness so that you could write your "decision" to go running into your conscious narrative.

If you try to explain metaphysical free will from a psychological perspective, you get hopelessly muddled (I can't even formulate a coherent argument for such a thing in my mind), and the only fall-back I can see is Cartesian magic — souls and spirits and such.

Libertarianism (in the philosophical sense mentioned in the previous post) suffers from exactly the same problem as metaphysical free will. What would it even mean to say that the universe "could have gone a different way"? That a quantum event could have had some other outcome than it did? Well, sure, in a counterfactual way. But since quantum events are truly random — that is, there is no way in principle to know which way they will come out — all you can possibly say is that some value will be taken, but you won't know what that value is until you actually measure it. So if there was in fact some metaphysical agent generating our wills in a libertarian universe, without magic powers its determination of a quantum outcome would happen simultaneously with its measurement of that outcome, so it would be beholden to that value no matter what. This is as bad as being beholden to a completely pre-determined outcome! It is worse, in fact, because in the quantum world you can't even make a prediction!

So, I dispose of libertarianism as hopeless. And I dispose of a metaphysical compatibilist view as meaningless at any level of analysis. Since this post is already almost twice as long as I expected it to be, I will hold off on my argument about phenomenological free will, and of my opinion on the nature of our will, until the next post.

Friday, April 13, 2012

Two Types of Free Will, part 1

I recently heard Daniel Dennett's explanation of his concept of deepity. It struck me that the concept of free will is exactly such a thing. It is a concept which is trivially true, but, in another sense, is logically ill-formed. The usual debate about free will is whether the concept of such is compatible with a deterministic (or truly random — that is, quantum) universe. Those who believe free will is compatible with a deterministic universe, and that therefore free will is "real" in some sense, are called compatibilists, while those who believe it is not, and that free will is an illusion, are called incompatibilists. The idea that the universe is not limited in this sense, but could in fact go some way other than the way it went is called libertarianism, but I more or less dismiss it out of hand as incoherent, for reasons I shall explain below.

Normally, free will is somehow taken to be a single, monolithic concept which is either true or false, and therefore is argued over. However, it struck me that there are two very different concepts of free will which no one bothers to distinguish (as far as I have seen). These I call phenomenological free will and metaphysical free will. Let us take them in turn.

Phenomenological free will is what we experience whenever we are awake and aware. It is the feeling of making choices, which is obvious and inescapable whenever we experience usual brain function, not under the influence of hypnosis, drugs or derangement of some sort. It is the conscious mind's narration of the things it sees us doing (we are consciously aware of our actions about 500 milliseconds after our brain initiates them). A large function of consciousness, it seems, is to inhibit actions it realizes aren't desirable, but it does not initiate them. Regardless, as far as one can tell (and in as far as there is a real "I" there to do the telling), we choose our actions and build our identities around those choices.

Metaphysical free will is what I call the idea of making "real" choices. That is, it is what explores the world of counterfactuals relative to what we did indeed choose, and decides that, had it wanted to, it could have selected one of those other choices. Alternately, it can be seen as that device which looks at the current state of the world, and picks what future actions would be most beneficial or desirable for the actor. It is, however, never forced to make any particular choice — it could make some wild, unmotivated flight of fancy at any moment (or at least, that must be a serious possibility in order for this style of free will to be worth considering). It is somehow independent of forces in this world, even if the actor himself isn't.

These two types of free will are very different to each other. One is a fact of experience and perception (hence, phenomenological), while the other makes a claim about the very nature of our minds, about what is true of the universe. I will contrast these two views in my next post, and show why one of these must be the case, while the other cannot be.

Thursday, April 12, 2012

Paradox of Voting

The more people vote, the less chance each vote has of affecting the outcome of the election. This is called the Paradox of Voting. Some economists use this as a reason not to vote. I don't think it's a good reason not to vote (there are plenty of other good reasons not to vote). However, it is a good reason not to harass anyone about voting. The most common thing I hear from liberals and other supposedly civic-minded people when they learn that I don't vote is that it's awful and that I don't get to complain about politicians.
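For a rough sense of scale, here is a minimal sketch (in Python, using the standard toy model in which every other voter independently flips a fair coin) of the probability that one additional vote is decisive, i.e. breaks an otherwise exact tie. The model and the numbers are illustrative assumptions, not a claim about any real election.

    from math import lgamma, log, exp

    def pivotal_probability(n_others, p=0.5):
        """Chance that n_others voters split exactly evenly, so that one
        extra vote decides the outcome (simple binomial model)."""
        if n_others % 2:
            return 0.0  # an odd number of other voters cannot tie
        half = n_others // 2
        log_binom = lgamma(n_others + 1) - 2 * lgamma(half + 1)
        return exp(log_binom + half * log(p) + half * log(1 - p))

    for n in (100, 10_000, 1_000_000):
        print("%9d other voters -> chance of being decisive: %.2e" % (n, pivotal_probability(n)))

Under this admittedly crude model, the chance of casting the deciding vote shrinks roughly as one over the square root of the number of voters, which is the intuition the paradox rests on.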

First of all, it's not awful. Voting in a government election is a form of consent. Even those people who know that I'm an anarchist occasionally still entreat me to vote "because this is such an important election!" Well, uh, guess what? It's not. What's that? We need to replace the Big Oil and defense contractor-owned corporate stooge with a Big Bank and finance-owned corporate stooge? Why, yes, that is a meaningful Change which will bring Hope!

Second, voting isn't magic. Voting is just a method of collective decision making. There are others. I like consensus-building. It probably wouldn't work on a national level, but it produces far more meaningful results. When you tell me that you want me to go vote, you're not really saying you want me to vote. What you're saying is that you want me to agree with you on the value and legitimacy of the State government which you support at the moment. However, I do not support any such government, and I will not feel guilty for that. Give me a ballot with "None of the above" on it, and then I might go vote, because then voting would suddenly be a meaningful action again. It would not just be a choice of oppressors, as participation in any coercive institution is, but a form of expressing political will — in this case, the will to not have any of the jokers who call themselves politicians decide my economic and legal fate.

Finally, let's see what George Carlin has to say. I am a bit smug about coming up with that bit of wisdom before I ever saw this clip. The few times I have had the privilege of actually saying this to someone, they have ended up grasping inarticulately at reasons why it's wrong before quickly ending the conversation. Alas, few people like talking about politics — or political theory, in any case. I look forward to donning my red and black "I DIDN'T VOTE" button this coming November. It generates just the right ratio of curiosity to contempt (I figure, if you're contemptuous to begin with, I'm not gonna get through to you in any case).

This concept has a sort of inverse corollary when it comes to consumption, which I will talk about in a future post.

Wednesday, April 11, 2012

Games

Most people have the intuition that, while you do not have the right to do violence to someone unprovoked, once they have done violence to you, you are justified in doing violence back to them. I have a system by which I try to schematize this intuition. I frame interactions between people as games. Such games have rules, which are agreed upon implicitly by those involved. The players in these games, I assume, are in an equal power relationship — that is, factors such as authority, gender and race are not involved (this sets aside such non-reciprocal cultural artifacts as orders and bigoted statements; I should probably make a post about this assumption at some point).

Cultural expectations and past relationships play big roles in determining the rules of such games. These rules can include: "talking is allowed", "intentional physical contact is not allowed" (this might well be a rule with strangers on the street), "kissing is allowed" (such as between people in a romantic relationship). For the vast majority of interactions, "don't do violence" is a rule. However, the rules of games can change, and often do in short order. These changes come about when one person implicitly or explicitly violates a standing rule. Once this happens, both players may now play by these new rules. What this creates in effect is an "eye for an eye" situation. If you are willing to violate one of the rules of the interaction, you are implicitly agreeing to play by that rule for the duration of the interaction. So, if you punch someone (a violation of a rule outside of a boxing ring), you agree to getting punched yourself. If you kiss someone, you are implying that they may kiss you back. And so on.
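As a toy formalization of this mechanic (mine alone; the intuition doesn't depend on it), one could model an interaction as a shared set of permitted actions that expands whenever a player steps outside it:

    # A minimal toy model of the "games" framing: a shared rule set in which
    # violating a rule makes that action fair game for everyone involved.
    class Interaction:
        def __init__(self, allowed):
            self.allowed = set(allowed)  # actions currently within the rules

        def act(self, player, action):
            if action in self.allowed:
                print(f"{player} performs '{action}' within the rules.")
            else:
                # Breaking a rule implicitly rewrites it for the rest of the interaction.
                print(f"{player} violates the rules with '{action}'; it is now allowed for all.")
                self.allowed.add(action)

    game = Interaction(allowed={"talking"})
    game.act("A", "talking")   # within the rules
    game.act("A", "punching")  # rule violation: punching becomes fair game
    game.act("B", "punching")  # B may now respond in kind

This is obviously a caricature, but it captures the "eye for an eye" structure: the first violation changes the rules for everyone, including the violator.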

This allows us to set up a naive code for behaviour: follow the rules, or, if you violate them, accept that the new rules your violation established apply to you as much as to anyone else you interact with.