Ahmed Afzaal

The Voter’s Dilemma (5)

When Chomsky was asked about the “Never Biden” position by Mehdi Hasan, he began by saying that the question brought back memories from the early 1930s. At the time, the German communists refused to form an alliance with the social democrats, which—eventually—allowed the Nazis to take power.

Chomsky’s purpose in recounting that episode was to support the claim that sometimes it is necessary to join hands with your rivals in order to stop a greater catastrophe from happening. If that’s the main lesson we’re supposed to learn from the story, then no one can disagree with Chomsky. It’s a valuable lesson, and the underlying principle is solid.

But what is undoubtedly true in the abstract is not necessarily as obvious in practice. This is because (1) we never have complete information about any real-life situation where an important decision has to be made, and (2) in most non-trivial situations, there are competing values, principles, and motivations that pull us in many different directions. These two factors—incomplete information and the need for subjective judgment—make it hard for us to know with certainty whether a particular abstract truth applies in a given situation.

Consider the fact that conventional wisdom provides us with contradictory suggestions, such as “look before you leap” versus “you snooze, you lose” or “strike while the iron is hot.” So, which advice are we supposed to follow? The answer is: It depends. Sometimes, the right thing to do is to be cautious and careful; at other times, we must take the initiative without wasting time. What makes a decision difficult is precisely the fact that we are able to justify both courses of action in good faith. If this weren’t true, there would be no such thing as regret.

That such opposite proverbs exist at all is itself evidence of the uncertainty and ambiguity inherent in the human condition. Sometimes we can easily see that an abstract principle fits perfectly with the situation we are facing, but very often our view is obstructed. When there is little or no ambiguity, making a decision is as easy as peeling a banana. Unfortunately, that is not the case with the voter’s dilemma.

Let’s say a small town is threatened by rising waters in a nearby river. To save the town from flooding, everyone would have to participate in the work of building a barrier. If one group of people refuses to join the effort simply because they don’t like hanging out with another group, the entire town will be lost. Given these particular facts, the ethic of responsibility says that we must set aside our personal likes and dislikes in order to serve a shared goal, for nothing is more important at the moment than saving the town from a catastrophe that we can all see coming.

The situation that Chomsky describes, however, was not as clear-cut, certainly not at the time. Chomsky is making an analogy, and then using that analogy as a warrant to defend a particular voting strategy. Chomsky’s argument works only if we accept that the case of U.S. progressives not voting for Biden in 2020 is similar in relevant ways to the case of German communists refusing to form an alliance with social democrats some ninety years ago. Let’s ignore the fact that this analogy requires us to equate Trump with Hitler. Let’s also not discuss the fact that the U.S. political system fragments power in ways that are incredibly hard to overcome compared to the German political system in the early 1930s. Instead, consider the fact that American voters do not have the benefit of hindsight, and neither did the German communists. We are all functioning on the basis of an incomplete and imperfect understanding of probabilities.

Suppose you were one of the leaders of the German communist party in the early 1930s. You did not like the Nazis and you did not like the social democrats. You were absolutely convinced of the truth of Marxist ideology; you knew for sure that capitalism’s days were almost over; and you had pledged to always uphold the interests of the working class. You had seen political leaders betray their parties; you had seen parties abandon their supporters; and you had learned to despise the hypocrisy and opportunism of politicians. You had no respect for individuals who would readily sacrifice their avowed principles in exchange for a seat in the parliament or a meaningless official title. You also did not have a crystal ball, and so you had no idea what Hitler would eventually do; in fact, you were probably not even sure that the Nazis had a realistic chance of gaining power. After all, you were working within a parliamentary system, and you knew that no political party—including Hitler’s National Socialists—had enough support, either among voters or among members of the parliament, to form a government on its own strength. Perhaps most importantly, you were living in a democracy; the idea that a minority group in a coalition government would soon be able to establish a dictatorship hadn’t occurred to you even in your scariest nightmares.

As a leader within the communist party, even if you had some idea of how dangerous the Nazis were, and even if you were open to forming an alliance with the social democrats in order to prevent Hitler from becoming Chancellor, that still does not mean that you had a stark, black-and-white choice. At best, the situation you were facing was ambiguous. There were good arguments on both sides, and it was difficult to assign the right weight to each position without knowing what the future held. Of course, you did not take this decision lightly. You looked at all the information you had at your disposal, and you tried to be as rational and logical as possible. You had a sense of responsibility, but you were also committed to your ideals. In the end, the issue of whether or not to cooperate with the social democrats had to be resolved on the basis of subjective judgments about values and principles, as well as the weighing of relative probabilities about how different political groups would behave.

I agree with Chomsky that the German communists made the wrong choice when they refused to build an alliance with the social democrats. But I cannot say with absolute certainty that I would have made the right choice had I been a German communist in the early 1930s—and neither can Chomsky. Given that the ethic of responsibility applies only to the foreseeable consequences of our choices, it’s hard to see how we can blame the German communists for the rise of Hitler.

The Voter’s Dilemma (4)

In the previous post, I suggested that Chomsky’s answer to the voter’s dilemma, otherwise known as “Lesser Evil Voting” or LEV, can be challenged from at least three directions. Here, I want to consider the first of these challenges.

The LEV strategy is based on the moral significance of personal responsibility. If my action produces a foreseeable outcome, then I am responsible not only for the action but also for causing that outcome. Whether or not I intended that outcome is irrelevant; I am responsible either way. As applied to the voter’s dilemma, this viewpoint says that not voting for Biden makes me responsible not only for Trump’s victory but also for all the evil that he would unleash in his second term, and the fact that I do not intend either of these outcomes is irrelevant. That, in a nutshell, is Chomsky’s defense of LEV.

The first challenge to the LEV strategy comes from those who believe that a person is only responsible for his/her own actions. According to this approach, my responsibility in any situation is to act in a manner that I believe to be right. The only thing I can control is my own behavior, and my responsibility does not extend into matters over which I have no control. Consequently, I cannot be held responsible if the world is organized in such a way that my action leads, indirectly, to outcomes that I neither intend nor approve.

Weber

It is critical to note that both viewpoints are rooted in a long history of ethical deliberations and both are supported by good arguments. It would therefore be a mistake to think that one of these viewpoints is right and the other is wrong. Max Weber, for example, recognized the difficulty involved in choosing one or the other of these viewpoints as the basis for practical conduct. Weber addressed the sharp distinction between the two approaches in his famous lecture on “Politics as a Vocation.” The lecture was delivered to a group of students in Munich on January 28, 1919.

Here’s how Weber introduces the problem:

We have to understand clearly that all ethically oriented action can follow two totally different principles that are irreconcilably opposed to each other: an ethic of “ultimate ends” or an ethic of “responsibility.” This is not to say that the ethic of ultimate ends is identical with a lack of responsibility, or that the ethic of responsibility is identical with lack of conviction. There is naturally no question of that. But there is an immeasurably profound contrast between acting according to the maxim of the ethic of ultimate ends—to speak in religious terms: “The Christian does the right thing and leaves the outcome in God’s hands,” and acting according to the ethic of responsibility: that one must answer for the (foreseeable) consequences of one’s actions.

By “ultimate ends,” Weber does not mean any particular outcome or goal; rather, he is referring to values. By definition, all true values are ultimate, in the sense that they are not pursued as the means to achieve some other end; rather, values are desired for their own sake and pursued as ends in themselves. This means that values do not have to be justified; in fact, anything that can be justified is not a value. Consequently, if I explain one of my actions by referring to a value that I hold dear, you may ask whether or not my action really does serve that value; that’s a fair question. However, you cannot ask why I am committed to that particular value in the first place, for values are “ultimate” in the sense that they cannot be rationally defended.

The key point is that, according to Weber, the ethic of ultimate ends cannot be harmonized with the ethic of responsibility because of the “profound contrast” between the two. This means, I think, that a specific action in a specific situation can satisfy the requirements of one of these two approaches or the other, but it cannot satisfy the requirements of both. Notice Weber’s insistence that the ethic of ultimate ends is not about lack of responsibility, just as the ethic of responsibility does not entail a lack of commitment to values. Here’s how I understand Weber’s point: While the ethic of ultimate ends requires each individual to be fully responsible to his/her own conscience, the ethic of responsibility requires us to take into account whether or not the actual consequences of our actions will be in alignment with our values.

Consider the following question: If someone acts according to the dictates of conscience, but the consequences that flow from those actions are deemed evil, who is to be held responsible? Weber quotes Martin Luther, who had said: “Do your duty, and leave the outcome to God.” A Christian, according to Luther, is responsible for doing the right thing, not for ensuring that the world becomes right as a result of one’s actions. The world is God’s responsibility, not mine. This is another way of saying that I am only accountable before God, or before my conscience, for doing my part. I don’t make the rules that govern society, and there is nothing I can do to control or manage the consequences that may result from the fact that I did my duty. According to Weber, the person who follows the ethic of ultimate ends takes the position that the responsibility for any negative consequences of fulfilling one’s duty does not belong to the conscientious actor but to “the world, the stupidity of other people, or the will of God, who created them like that.”

In contrast, the person who follows the ethic of responsibility takes into account the fact that the world is ordered in a way that good actions do not necessarily produce good consequences. According to Weber, the world is ethically irrational. It does not guarantee that doing the right thing will make everything right for you, or anyone else. In fact, very often the exact opposite happens. The person who follows the ethic of responsibility is acutely aware of this fact. For Weber, such an individual “does not feel himself to be in a position to shift the responsibility for the consequences of his actions, as far as he can foresee them, on to others. He will say: These consequences are attributable to my actions.”

Let’s extend this line of reasoning. If we are responsible for the foreseeable consequences of our actions, then it follows that we may have to put aside the question of whether our actions are moral in themselves; rather, we must always act in ways that would produce the most morally desirable outcomes. That, however, can push us onto a slippery slope, as Weber points out.

No ethic in the world can get around the fact that in many cases the achievement of “good” ends is linked with the necessity of accepting ethically dubious or at least risky means, and the possibility or even the probability of evil side effects. And no ethic in the world can predict when and to what extent the ethically good end “justifies” the ethically risky means and side effects.

That is not a trivial problem. The ethic of responsibility says that we must take responsibility for the consequences of our actions. To ensure that good consequences appear in the world as a result of our actions, we would have to judge our actions not on the basis of whether they are right or wrong in themselves, but on whether they lead to moral or immoral consequences. But when our focus is on outcomes, there is a very real risk that we may choose morally questionable actions—and perhaps even immoral actions—whenever we believe that these actions are necessary for producing the outcomes we desire and that the outcomes we desire are, in fact, morally superior. This creates the likelihood of bad-faith rationalizations. At the same time, we are prone to become less and less concerned about the unintended but negative consequences of using morally questionable means.

The possibility of self-deception is real, for one can find more or less satisfactory ways to defend and rationalize almost any course of action. Once we have convinced ourselves that a certain end is moral, and that a certain action is necessary to achieve that end, it is easy to disregard the morality or immorality of the action as irrelevant. The end will then be enough to justify virtually any means. This is not just a hypothetical danger, for history is full of cases where morally justifiable ends—such as freedom or equality—led many people to justify and commit all sorts of atrocities. If it’s true that good actions do not guarantee good consequences, it is also true that the road to hell is paved with good intentions.

Weber is sensitive to the problem of unintended consequences, or “side-effects.” The consequences that flow out of our actions are not always the ones we intended. There is, of course, no ethical problem if an action inadvertently produces morally desirable results. But what if some of the unintended consequences are immoral? Even when we anticipate that consequences we neither intend nor approve are likely to flow from our actions, the “ends justify means” approach would cause us to view such a risk as worth taking. This sort of reasoning is found, for example, in the concept of “collateral damage,” where foreseeable civilian casualties are viewed as the acceptable cost we must pay for a morally desirable end, such as “eradicating terrorism” or “making the world safe for democracy.” Of course, whether that end itself is moral is an open question.

As I write this post, some people are pushing the idea that the loss of a few million lives is a risk worth taking for the sake of restarting the U.S. economy in the midst of a deadly pandemic. They obviously do not intend to kill millions of people, nor do they believe it to be a positive or desirable outcome. Their reasoning is based on the “ends justify means” approach, and indirectly on the ethic of responsibility. They want to ensure that the world is set right. They believe—sincerely, I think—that nothing in this context can be a higher ethical priority than maintaining economic growth. In their view, a particular end (i.e., maintaining the health of the U.S. economy) carries such immense moral weight that an otherwise horrible “side-effect” of pursuing it (i.e., millions of deaths) appears to be a perfectly acceptable risk—or even a perfectly acceptable part of the cost that society must pay for returning to business-as-usual.

It is critical to recognize that neither the ethic of ultimate ends nor the ethic of responsibility can easily deal with unintended consequences. The problem emerges because the future is mostly unknowable, which means that it is impossible for human beings to foresee all of the consequences of their actions. Our capacity to know in advance how a particular action will affect the world in the long run is somewhere between extremely limited and nonexistent. Modernity makes this problem progressively worse. As society becomes more complex, it also becomes less predictable; as a result, we face an ever-increasing amount of uncertainty and ambiguity when making even small decisions, let alone morally momentous ones.

People who want to follow the ethic of responsibility, such as Noam Chomsky, would insist that their decision is based on the sense of responsibility they feel in relation to the foreseeable consequences of their actions, and that they cannot be held responsible for any unforeseeable consequences that may flow from their choices. There are two obvious issues with this position, which I consider below.

First, there is no way of knowing the true proportion of foreseeable and unforeseeable consequences of any particular action we might take, especially when we try to consider both the direct and indirect impacts of that action into the long-term future. People who want to follow the ethic of responsibility do so by assuming that only the foreseeable consequences matter. This may be a reasonably valid assumption in cases that are simple and straightforward, but it is clearly unwarranted in more complex cases, such as the voter’s dilemma. Assuming that only the consequences that I am able to see at the present moment are worthy of consideration amounts to giving oneself too much credit. We can’t even say that the foreseeable consequences of a particular action will outnumber the unforeseeable ones, let alone know for sure that the foreseeable consequences will be decisive.

Second, individuals who want to follow the ethic of responsibility cannot be certain that the consequences they do foresee will in fact materialize. While this is always true to some degree, most of the time this effect is so negligible that for all practical purposes we can safely ignore it. It does become significant, I think, when it comes to voting in a Presidential election. There are so many variables involved in the politics of a large country, such as the U.S., that allocating the correct amount of evil to each candidate is beyond the capacity of mere mortals. For example, is Trump really more evil than Joe Biden? What if that turns out to be true in the short term only? What if his policies end up shaking millions of Americans out of their complacent slumber, and they then go on to create a fairer and more just society? Far-fetched, but not impossible. Chomsky is right that Trump is really bad in terms of climate policies, but our experience with eight years of Obama doesn’t give us any confidence in Joe Biden’s ability to turn this ship around. Biden might be marginally better than Trump on climate, but would that stop the ongoing collapse of the planet’s ecosystems? Hardly, given that Biden is promising that “nothing will fundamentally change” under his administration. It is true that Trump has put some terrible policies into effect, and perhaps Biden would reverse them, which would be nice. But what about the bigger picture? Who can say that a return to pre-Trump policies in a post-Trump world would make things better on the whole? None of us can even see the whole picture, let alone know how it would change.

We are all making guesses—and we should acknowledge, both to ourselves and to the world at large, that that’s what we are doing. This will make us humble. The only way to judge which candidate is the lesser evil and which the greater is to rely upon countless assumptions about the future as well as the prejudices we have inherited from the past, not to mention our fallible and finite minds that only give us a hazy and fragmentary view of reality.

None of what I have said here renders the ethic of responsibility untenable, especially as it relates to voting in a U.S. Presidential election. It still remains a valid viewpoint, though I think I did problematize it a little.

It seems to me that the ethic of responsibility does not warrant the high level of confidence and certainty that many Biden voters are demonstrating when they claim to know what the right choice is. I would recommend epistemological humility to anyone who feels completely, absolutely, one hundred percent sure that all options other than voting for Joe Biden are immoral or irrational. I would, of course, recommend the same thing to those who are completely, absolutely, one hundred percent sure that the right thing to do is to not vote for Joe Biden.

Let me reiterate that I am not asking anyone to change their mind about whom they should vote for; I am only asking everyone to be thoughtful and reflective about how they make this decision.

There is more to come. Stay tuned.

The Voter’s Dilemma (3)

Let’s examine Noam Chomsky’s full argument. Here’s a short excerpt from an interview that he did with Mehdi Hasan on April 17.

Mehdi Hasan: What do you make of the “Never Biden” movement?

Noam Chomsky: It brings up some memories. In the early 1930s, in Germany, the Communist Party, following the Stalinist line at the moment, took the position that everybody but us is a social fascist, and so there is no difference between the social democrats and the Nazis. So we are not going to join with the social democrats to stop the Nazi plague. We know where that led. There are many other cases like that. And I think we are seeing a rerun of that.

So let’s take the position “Never Biden, I am not going to vote for Biden.” There is a thing called arithmetic. You can debate a lot of things, but not arithmetic. The failure to vote for Biden in this election in a swing state amounts to voting for Trump. Taking one vote away from the opposition is the same as adding one vote for Trump.

So if you decide that you want to vote for the destruction of organized life on earth, for the sharp increase in the threat of nuclear war, for stuffing the judiciary with young lawyers who will make it impossible to do anything for a generation, then do it openly and [say] yeah, that is what I want.

That’s the meaning of “Never Biden.”

Chomsky is logical and consistent to a fault. He has previously advised progressive and leftist voters to support Bill Clinton and Hillary Clinton on the basis of what he calls the “lesser evil voting” strategy, or LEV. This strategy says that how you vote should depend on the state in which you live. If you happen to live in a Blue state, feel free to abstain from voting or vote for the Green Party; but if you live in a Swing state, then you must vote for the Democratic candidate, regardless of who that is. That’s the claim. The grounds are as follows: In our two-party political system, we know in advance that the next President will be either a Democrat or a Republican. They are both evil, but the former is less evil than the latter. The political system does not allow us to reject evil as such; it only allows us to choose between two types of evil. Since one of these options represents a greater evil while the other option represents a lesser evil, and since there is no realistic chance for a third party candidate to win a Presidential election, it follows that if you want to reduce evil you must vote for whoever happens to be the Democratic candidate—unless you live in a state that Democrats are guaranteed to win, such as California and Massachusetts.
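Because LEV reduces to a simple conditional on where the voter lives, the rule can be written out in a few lines. Here is a toy sketch in Python (my own illustration of the strategy as described above, not anything from Chomsky; the state lists are made up):

```python
# A toy rendering of the "lesser evil voting" (LEV) decision rule.
# The swing-state set below is purely illustrative.

def lev_vote(state: str, swing_states: set[str]) -> str:
    """Return the ballot choice the LEV strategy prescribes for a voter in `state`."""
    if state in swing_states:
        # In a swing state, only the consequences count: block the greater evil.
        return "vote for the Democratic candidate"
    # In a safe state the outcome is already settled, so the vote is free
    # to express values: abstain, vote Green, or write someone in.
    return "vote (or abstain) however your conscience dictates"

swing = {"Pennsylvania", "Wisconsin", "Michigan"}
print(lev_vote("Wisconsin", swing))   # vote for the Democratic candidate
print(lev_vote("California", swing))  # vote (or abstain) however your conscience dictates
```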


Why should one’s approach to voting differ from one state to another? Chomsky believes that voting is not a matter of expressing one’s values but a matter of taking responsibility for the consequences of one’s actions. On that basis, he suggests that not voting for Clinton in 2016 or Biden in 2020 is perfectly fine if you live in a Blue state, since your vote (or lack thereof) won’t prevent the Democratic candidate from taking that state; but if you were to do the same thing in a Swing state, you’d be helping the GOP candidate become President. In other words, using your vote to express your values is acceptable when it has no effect on the election results, but it is not acceptable when it does. Either way, it’s the consequences that matter. Chomsky believes that when it comes to choosing one’s actions—such as voting—the likely consequences of those actions should be the only relevant criterion; everything else follows from this fundamental commitment.

But the LEV strategy can be challenged from several directions. First, it can be challenged by people who believe that voting is, in fact, a matter of expressing one’s personal values. They would argue that what matters most is that one acts in a way that is consistent with one’s espoused beliefs, and that, in the words of Martin Luther, “to go against conscience is neither right nor safe.” Second, LEV can be challenged by people who don’t think of voting in terms of individual morality but see it entirely as an issue of collective strategy. They would agree with Chomsky that voting should be all about consequences, but disagree with him as to which set of consequences should be treated as most relevant or decisive. Third, LEV can be challenged by those who don’t agree with Chomsky’s fundamental dichotomy, i.e., the notion that voting is either an expression of personal values or a strategy for social change. They would argue that LEV is based on a false choice, and that it is possible to vote in accordance with one’s conscience while also taking responsibility for the consequences of one’s vote. In fact, they may even argue that the only effective approach towards the desired social change is one that transcends the either/or logic underlying the LEV strategy.

Chomsky’s reasoning is flawless, but that doesn’t make it invincible. This is because his reasoning in defense of LEV is neither an equation nor a theorem; rather, it is a moral and political argument, which makes it susceptible to moral and political challenges.

The Voter’s Dilemma (2)

I have been using the word “dilemma” to name the difficulty of deciding whether, and for whom, I should vote this coming November. After having chosen it, I started wondering if it was, indeed, the right word for this purpose, so I decided to look it up in the OED.


So, a dilemma is basically a situation that offers two or more alternatives, known as “horns,” which are—or appear to be—equally undesirable.

It is quite interesting that the two horns of a dilemma may or may not be equally undesirable. It is, of course, extremely hard to make a decision when both (or all) alternatives are equally bad. I am not sure that this is usually the case. For if the alternatives are even slightly different, then it’s likely that one of them is at least a tiny bit more undesirable than the other. Of course, the difference in the degree of undesirability between the two alternatives may be so insignificant as to be practically nonexistent, as, for example, in the case of Sophie’s Choice. Yet, I am inclined to speculate that real-life dilemmas (as opposed to hypothetical ones) are unlikely to be pure, in the sense that picking one option over the other need not be entirely random. (This leads me to wonder about the nature of choice, but I won’t deal with it here.)

Regarding the upcoming Presidential election, I am struck by the fact that many people who favor voting for Biden do not seem to experience a dilemma at all. Rather, such individuals tend to be completely, absolutely, one hundred percent sure that they have the right answer and that all other answers are obviously incorrect. As a result, they often become frustrated when others fail to agree with them right away. Apparently, they find it incredible that anyone in their right mind could even imagine that a course of action not involving a vote for Biden might be rational. It is remarkable that these true believers appear to be totally free of doubts, misgivings, hesitations, or uncertainty of any kind. The truth of the matter is so clear to them that they find it extremely difficult, if not impossible, to try and see the issue from a different viewpoint. As far as they’re concerned, there is no sane viewpoint other than their own. In fact, they probably haven’t noticed that they have a viewpoint, and that voting for Biden is only one of the many justifiable options.

Noam Chomsky is a case in point. In the course of criticizing the “Never Biden” position, he recently made the following statement:

There is a thing called arithmetic. You can debate a lot of things, but not arithmetic.

I am not concerned here with the merits of Chomsky’s argument but only with his sense of certainty. He is absolutely right when he says that arithmetic is not debatable. But he glosses over the fact that the “Never Biden” position is not about arithmetic. That position can be defended from several different viewpoints; one may disagree with those viewpoints and one may criticize the resulting position as inadequate or flawed. Yet, these are actual viewpoints held by actual people who are no less rational than anyone else; as viewpoints, they are all legitimate. In contrast, arithmetic is not a viewpoint. The reason we cannot debate arithmetic is that it represents a closed, abstract, and self-referential system that does not, in and of itself, say anything about the universe. Arithmetic offers an unusually extreme certainty, such that, for example, 2+2=4 everywhere and always, and there is nothing anyone can do about it. This degree of certainty is impossible when we are dealing with the complex messiness of everyday reasoning, emotions, biases, values, commitments, and all of the social and cultural influences that go into forming a particular human viewpoint.
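Just how extreme that certainty is can be made concrete. In a proof assistant, the entire justification for 2+2=4 is a single appeal to computation; nothing remotely analogous exists for a claim about how one ought to vote. A one-line illustration in Lean (my own, not anything from Chomsky or his critics):

```lean
-- 2 + 2 = 4 follows from the definitions of the numerals by sheer computation;
-- no viewpoint, judgment, or interpretation enters anywhere.
example : 2 + 2 = 4 := rfl
```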

Personally speaking, I don’t feel confident in the present context that any answer is going to be completely, absolutely, one hundred percent right—or wrong. The reason I am writing these blog posts is that I want to explore how to come up with a satisfactory answer that I can live with; this is a much more modest goal than finding the holy grail of absolute truth or rightness. Regardless of what I end up deciding, I already know that it won’t give me the axiomatic certainty of 2+2=4. I don’t know of any approach that will allow me to achieve one hundred percent confidence on an issue like this. Of course, the closer I can get to one hundred percent certainty, the happier I will be; at this point, however, I am willing to settle for anything above fifty percent.

What does it mean to have less-than-absolute confidence in a proposition? This degree of confidence will probably make no difference in practical terms. If I am only sixty percent confident that voting for Joe Biden is the right thing to do, I will still act as if I were one hundred percent confident. That is because actions are usually a matter of binary logic: I either vote for Joe Biden or I don’t vote for Joe Biden. I cannot give sixty percent of my vote to Biden and withhold, or give to someone else, the remaining forty percent.
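A minimal sketch in Python may make the point vivid; the sixty percent figure is the hypothetical one from the paragraph above, and the thresholding rule is my own illustration:

```python
# Confidence is continuous, but the act of voting is binary: past the
# decision threshold, every degree of confidence produces the same ballot.

def act_on_confidence(p_biden_is_right: float) -> str:
    """Collapse a graded degree of confidence into an all-or-nothing action."""
    # The ballot records none of the underlying doubt.
    return "vote for Biden" if p_biden_is_right > 0.5 else "don't vote for Biden"

print(act_on_confidence(0.60))  # vote for Biden
print(act_on_confidence(1.00))  # vote for Biden (an identical action)
```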

While having less-than-absolute confidence may not make any practical difference, it does make a big difference in how I think about the issue and how I respond to those who disagree with me. In thinking about the issue, a less-than-absolute confidence allows me to (1) consider the respective strengths and weaknesses of different viewpoints and be sensitive to the nuances of each position, (2) continue reflecting on my own viewpoint and position even after I have acted on it, and (3) remain open to new evidence and new arguments that might help me improve my viewpoint, refine my position, or even change my mind entirely. In responding to those who disagree with me, my less-than-absolute confidence will allow me to (1) show genuine respect for viewpoints and positions different from my own, (2) be curious about what other people think and why they think the way they do, and (3) embrace anything I may find in other people’s thinking that may be true or useful or wise, even if the disagreement remains.

Absolute certainty feels good, but it “blocks the way of inquiry,” as Charles Peirce put it. At the opposite end of the spectrum is absolute uncertainty, but that breeds inaction and moral paralysis. It’s only when I am more certain than uncertain that I can act on what I know while still maintaining an open mind and a learning attitude. It’s the best of both worlds!

If you are sure that you possess the holy grail—a definitive, unambiguous answer to the dilemma I am wrestling with—I would say: Congratulations! I won’t try to change your mind about what you believe is the right thing to do. I would, however, advise against putting too much trust in the clarity, obviousness, or finality of your position.

For the feeling of certainty is just that—a feeling. The more certain we feel, the higher is our confidence in relation to a given proposition, and the more likely we are to act in accordance with it. Yet, our feeling of certainty does not tell us a whole lot about the world outside ourselves. The truth or falsehood, the accuracy or inaccuracy, and the rightness or wrongness of a proposition is independent of how we feel about it at any given moment. If you have ever been proven wrong about a belief for which you were once willing to bet your life, or if you have ever changed your mind on a major issue, you may want to recall those experiences in order to appreciate just how misleading a felt sense of certainty can be.

As for me, I am glad I looked up the word “dilemma” in the dictionary, for it does capture how I am experiencing the issue of voting in the 2020 Presidential election. Specifically, my dilemma is made up of no fewer than five horns: (1) Don’t vote at all, (2) Vote for Joe Biden, (3) Vote for Donald Trump, (4) Vote for the Green Party candidate, (5) Write in a name that doesn’t appear on the ballot. These are all viable options, but for the sake of simplicity I would like to reduce the dilemma to its classic, binary form:

Option 1: Vote for Biden.
Option 2: Don’t vote for Biden.

These two horns of my dilemma do appear to be equally undesirable at first sight. My goal in future blog posts will be to figure out which of them is significantly more undesirable than the other.

The Voter’s Dilemma (1)

I voted for Bernie Sanders in the Democratic primaries, but he is no longer in the race. I am now being told that I should vote for Joe Biden in the fall, for if I abstain from voting or if I vote for the Green Party candidate then I would be guilty of supporting Donald Trump and would therefore have to accept part of the responsibility for all the horrible things that he would probably do. But I don’t want to lend my support to Biden either, for many different reasons. This situation poses a dilemma. It is a real dilemma, not a made-up one, and so it deserves some serious attention.


Let’s begin with a fundamental question: What is the purpose of a Presidential election? Here’s a tentative answer: The purpose of a Presidential election is to provide citizens the opportunity to express their opinion as to which particular candidate should hold that office for the next four years.

In the United States, the opinion we express through a Presidential election is not binding, for we the people do not actually elect the President. Rather, we elect the 538 electors who, in turn, make that decision on our behalf. The main reason we have this unusual process is that the folks who made the rules back in the eighteenth century thought that the masses were stupid. They believed we weren’t smart enough to know what was good for the country, and so they thought we might vote on the basis of our emotions and elect the wrong person. To prevent that, they decided the choice should be in the hands of a small group of enlightened individuals—called the Electoral College—that could be trusted to use foresight and wisdom to select the right person.

So, technically speaking, what we the people express on election day every four years is not our collective will that must be implemented. It’s merely an opinion, or a preference for this person over that person. The entire process of electing a President was never meant to give the people any actual role in shaping the government or its policies; rather, it was meant to establish the legitimacy of the political system by getting us to perform the equivalent of signing a consent form.

In reality, we the people are like the toddler who occasionally gets to sit behind the steering wheel of the family car and pretends to drive.

This reality can be seen in the fact that the Presidential election has no necessary connection with people’s desire for a particular domestic or foreign policy. Presidential candidates can and do say all sorts of things when running for office, but as actual Presidents they are in no way bound by anything they’ve said before taking the oath. This means that when we vote for a particular candidate because we agree with their views, plans, and promises, there is absolutely no guarantee that, should this candidate win, those particular views, plans, and promises would actually be enacted. Typically, they aren’t.

A Presidential election is a long and arduous process in which the goal is to win by any means necessary. As any political consultant will tell you, holding on to one’s principles, or trying to maintain consistency between one’s words and actions, is generally a losing strategy. What matters is not who you are but how the voters see you; and how the voters see you can be managed and choreographed. Winning requires getting the support of a wide variety of population blocs, and so it’s imperative to say whatever each bloc wants to hear. If this requires frequently contradicting oneself, then so be it. Deception is a necessary part of political campaigns, just as it is a necessary part of advertising, or magic shows.

Smart candidates speak in a special dialect of English that is meant to entice, attract, fascinate, and arouse, rather than inform or educate. As a result, vagueness has to be an essential ingredient of all such rhetoric, so that different groups of people may project their own wishes and dreams to fill up the candidate’s empty words. But even when a candidate expresses a position or makes a promise that is relatively specific, and can therefore be used to hold that candidate accountable, we must not forget that there is no enforceable obligation to actually follow through. Inconsistencies need not be resolved through appropriate actions, for they can be easily covered up through additional rhetoric. Fulfilling one’s campaign promises may be a moral duty, but the Constitution does not recognize it as part of a President’s legal obligations.

This means that in the United States we the people do not possess the right to have our policy preferences implemented. In fact, we don’t even express our policy preferences when we cast our ballots. Voting in a Presidential election amounts to saying “I would like person X to be the President,” and nothing more. What person X does after becoming the President is not up to us, because—remember?—we are not smart enough to know what the country needs.

Most of us haven’t noticed that our Constitution does not give us the right to vote. Voting is not included in the Bill of Rights, which is why state legislatures are free to take a variety of measures to control, restrict, and manipulate our votes. But it’s important to understand why the Framers did not think of voting as an individual right that needed to be guaranteed at the federal level, for it tells us something truly important. It tells us that even our non-binding opinion regarding who should be the President is not all that consequential. The U.S. political system does not need the citizenry to express its preference. If the system were in any way dependent on our votes, it would treat voting as a mandatory civic duty that people can’t easily get out of—just like paying taxes or serving on a jury. Instead, voting is entirely optional, and the system routinely creates hurdles to discourage people from casting their ballots. Of course, if no one votes then the political system will lose all legitimacy, but maintaining legitimacy doesn’t require that everyone votes. Rather, the system remains sufficiently legitimate even with only half the eligible voters participating.

To summarize, the process of Presidential election in the United States is structured in such a way that the following three conclusions can be safely drawn: First, the political system doesn’t need and therefore doesn’t value most people’s votes, which suggests that the government is not meant to be a reflection of what the majority wants. Second, people only vote for a candidate and not for their preferred policies, which means that any impact their votes might have is usually indirect or unintentional, and always minimal. Third, the President has no constitutional obligation to fulfill any promises made during the campaign, which means that the perceived trustworthiness of a candidate is often decisive in the election but has little long-term consequence.

So, what does all this have to do with the voter’s dilemma? To reiterate, the issue I am trying to address is whether or not I should vote for Joe Biden. Before I can say anything meaningful about that decision, I need to have some sense of the purpose of voting. When I consider the purpose of voting in a U.S. Presidential election, I find that the system has been set up in such a way that citizens don’t really have much of an impact on what the government does, regardless of whether, or for whom, they vote. The U.S. is a “weak democracy,” in the sense that its political system was intentionally designed to minimize people’s ability to influence the government, while still requiring that the government draw its legitimacy from the consent of the governed.

The points noted above need to be kept in mind when trying to resolve the voter’s dilemma. As of now, I have not seen any evidence that my vote is needed or valued or will make any difference. Neither of the two major candidates has put forward a convincing argument why someone like me should vote for him. Furthermore, no one is asking me about the policies that I would like to see enacted in this country; the electoral system has no interest in what I think or believe or want. Instead of being asked about my policy preferences, I am being asked to choose between two individuals, neither of whom I know personally. I don’t have any way of getting either of these individuals to take seriously what I and others like me believe or think or want, let alone making him take the appropriate actions as President. And yet, I am expected to vote. Under these circumstances, which are obviously not unique to me, the only thing that my vote is sure to accomplish is to help maintain the legitimacy of the political system. Everything else is a matter of chance, and the odds aren’t favorable.

Since the Presidential election is not designed to find out what my favorite policies are, I am supposed to express those preferences indirectly, i.e., by choosing the candidate who I think is most likely to act in ways that I approve of. And I am supposed to make this decision based solely on what the two main candidates have done in the past and what they say they will do in the future. Based on what they have done in the past, I am absolutely sure that I don’t want either of them to become President. As for what they say they will do, I disagree with most of their views, plans, and promises; and when I do agree, I find both gentlemen to be unworthy of my trust. Trump obviously has a long history of lying, but Biden too has a similar (though shorter) record of willful deception.

Given that a U.S. President is not bound by anything said or promised during the campaign, I have to be extra careful when deciding to trust that a candidate would actually do what they say they would do. While neither of the two main candidates inspires confidence, there does exist a critical difference. Trump’s lies are petty and self-serving. Whenever he speaks, I know that he is probably lying; as a result, I have never been deceived by his words. Biden, on the other hand, does not lie in the same egregious or shameless manner as Trump; as a result, Biden’s lies are likely to be a lot more consequential as well as a lot more convincing. This makes Biden more dangerous than Trump. Furthermore, Biden claims the moral high ground and talks about restoring honesty, civility, and kindness to the office of the President. By suggesting that he is morally superior to the current occupant of the White House, Biden is essentially asking to be judged by a higher standard than the one we use for Trump. But when both candidates are evaluated according to their own standards of morality, the gap between them all but disappears.

None of this proves that the two main candidates are exactly the same, or that voting for one is just like voting for the other, or that it doesn’t matter whether I vote or not. There is a lot more to consider before I’ll be able to resolve this dilemma—at least for myself.

The Rules We Learn By

About a year ago, I stumbled upon the idea of compiling a list of rules that might help people learn better. I had noticed that I was not always as successful in my own learning efforts as I would have liked, and so I wanted to know if there was anything I could do to become a more effective learner. I had also noticed that some of my students were better at learning than others, and I wanted to find out whether the former knew something special that the latter did not. I thought if I could discover the most important rules for learning, I would be able to become a more successful learner by following those rules; in addition, I would be able to teach my students the same rules and thereby help improve their chances of successful learning as well.

There is an entire branch of psychology that deals with learning, and — not being a psychologist myself — I am obviously in no position to make any original contributions to that field. In any case, I had no intention of reinventing the wheel. What I wanted to do was to pick up some practical tips from other people’s research, especially ones that resonated with my own experience of learning and teaching, and to put them together in the form of a short, manageable list.

I have now come up with such a list; it is by no means complete or final, though it seems to me more or less adequate for my limited purposes at this time. I do hope to improve this list in the future, as I continue to learn more about the process of learning.

I began compiling my list of rules with the following premise in mind: “Human beings are born with an incredible capacity for learning. In order to realize that capacity, we must follow certain rules.”

The premise is self-evident, in my view, and requires no further discussion. Based on that premise, I started collecting ideas for how to learn in the most effective manner. To prevent my list from growing out of control, I decided to group the ideas I had collected into categories. After numerous revisions, I ended up with three major rules: (1) there is no getting around the fact that learning requires hard work; (2) since I’m free to choose, I’m responsible for my own learning; and (3) since my knowledge will always be fallible, I must never stop learning.

Let me explain these rules in some detail.

Rule No. 1: Acceptance

Learning does not take place in a vacuum. It takes place within a world that exists independent of our thoughts and desires. To take effective action within such a world, we must come to terms with the way the world actually is. This means that if we are to succeed in pursuing our goals, we must begin by accepting the way in which reality functions and then adjust our own attitudes and behaviors in light of that reality. For example, if we want to build an airplane, we must understand and accept the laws of physics that exist independently of us. Building an airplane that actually flies requires that we adjust ourselves to the laws of physics, rather than trying to adjust the laws of physics to our desire to fly. In other words, we are most likely to be effective when we work with reality rather than against it.

I have found that a major obstacle to learning is our resistance to certain facts. I am using the word “fact” in the sense of a knowable unit of reality — something that, by definition, is what it is, regardless of anyone’s — or even everyone’s — beliefs, preferences, opinions, thoughts, feelings, desires, wishes, etc. It’s a complete waste of time, as well as a major cause of human suffering, to be upset about things that cannot be changed, i.e., to want the facts to be different than what they are. There is no point in resenting or complaining that “the water is wet” or “the ice is cold.” It so happens that the water will remain wet and the ice will remain cold, regardless of how much we may dislike these facts.

When it comes to learning, we are faced with a number of facts that must be embraced at the very outset or we won’t be able to make much progress. We must accept, for instance, that learning is neither easy nor painless, that we are almost certainly going to fail repeatedly before we start to succeed, and that any worthwhile learning requires a serious investment of time, attention, and effort.

While most people don’t reject these facts explicitly, there is often a subtle resistance or resentment within us based on certain subconscious assumptions. These assumptions tend to be unrealistic desires or expectations, such as “I should be exempt from pain” or “learning shouldn’t be hard” or “learning shouldn’t involve failure.” Even if we are unaware of harboring such unrealistic desires or expectations within ourselves, they can still exert a significant influence on our feelings, producing unnecessary suffering, and can even sabotage our efforts to learn.


Rule No. 2: Reminders

The second rule is based on the recognition that human beings are liable to forgetfulness, which is why we must put into place some sort of mechanism that periodically reminds us what we are most likely to forget. Perhaps the most important truth that we tend to forget is that we are responsible.

Part of being human is that we are free to make choices. Each choice we make, no matter how big or small, gives birth to certain consequences. We are free to choose our actions, but we are not free to choose which consequences will emerge from those actions. The consequences of our choices, in turn, shape our own immediate and long-term future. The same consequences also ripple out far into the world, affecting the world’s circumstances as well as the lives of other people.

Waking up to the fact that we are free to choose is essential to becoming proactive. Here’s one of my favorite quotes from Stephen Covey:

What does it mean to be proactive? It means more than merely taking initiative. It means that as human beings, we are responsible for our own lives. Our behavior is a function of our decisions, not our conditions. We can subordinate feelings to values. We have the initiative and the responsibility to make things happen.

The opposite of being proactive is to be reactive. Very often, we go through life as if we’re half asleep. In such a condition, we do not live deliberately or freely, but automatically — we react out of our past conditioning or we mindlessly imitate others around us. When we are reactive, we lose our capacity to shape our own future as well as our capacity to influence the world. We begin to see ourselves at the mercy of other people and of the circumstances that are beyond our control.

Becoming aware that we are free to choose is necessary for becoming responsible, in the true sense of the word. According to Stephen Covey:

Look at the word responsibility — “response-ability” — the ability to choose your response. Highly proactive people recognize that responsibility. They do not blame circumstances, conditions, or conditioning for their behavior. Their behavior is a product of their own conscious choice, based on values, rather than a product of their conditions, based on feeling.

When we are proactive, we know that no matter how difficult or challenging our situation may be, there is always some amount of freedom available to us — the freedom to choose our response. And we know that this freedom isn’t static. The more we use our freedom, the more it grows. It is true that we can’t control how other people act, and that very often we don’t choose the circumstances in which we find ourselves. But we can almost always choose how we are going to respond to the stimuli we receive from people and circumstances. As Lou Holtz famously said, “Life is ten percent what happens to you and ninety percent how you respond to it.”


The purpose of the second rule is to help us become aware of how our freedom to choose is connected with our capacity for learning. We are responsible for our learning insofar as we are aware that learning is a choice that we make (or fail to make) in each moment. We are free to learn, just as we are free not to learn. The truth is that if I have chosen to learn, then nothing can really stop me from learning; and if I have chosen not to learn, then nothing in the world can make me learn. To quote a classroom poster I once saw, those who’ve made the decision to learn will always find a way, while those who’ve made the decision to not learn will always find an excuse. Since choice belongs to the individual, each person is individually responsible for his or her own learning.

When we are reactive, we blame others (“students these days don’t want to learn” or “the professor doesn’t know how to teach”). But proactive people know that learning is primarily a matter of choice. Proactive people don’t blame; rather, they take responsibility. As we become proactive, a mutually enriching relationship begins to develop between the student and the teacher. Both sides come to terms with the fact that the learner is responsible for learning and the teacher is responsible for teaching; yet, the teacher cannot cause learning to happen but can only provide the conditions in which the student is most likely to learn. As Roger Schank puts it, “learning happens when someone wants to learn, not when someone wants to teach.” Or, as Herbert Simon was fond of saying, “Learning results from what the student does and thinks, and only from what the student does and thinks. The teacher can advance learning only by influencing what the student does to learn.”

In his book The Prophet, Khalil Gibran expressed the same insight as follows:

The astronomer may speak to you of his understanding of space, but he cannot give you his understanding.

The musician may sing to you of the rhythm which is in all space, but he cannot give you the ear which arrests the rhythm nor the voice that echoes it.

And he who is versed in the science of numbers can tell of the regions of weight and measure, but he cannot conduct you thither.

For the vision of one man lends not its wings to another man.

I like to think of learning as analogous to mountain climbing, for it allows me to visualize the responsibilities of the student and the teacher. A teacher is like a guide who knows a particular mountain well because he/she has been climbing that mountain for a long time. Such a guide can inform the climbers about the best routes to the top and can warn them about the dangers that may lie ahead. But a guide, no matter how skilled, can’t do the climbing for you. You must carry your own gear and supplies, and you must do your own climbing.


Rule No. 3: Attitudes

What sorts of attitudes are most conducive to learning? Or, to ask the same question from a different angle, what motivates us to do the hard work involved in learning? Many people would say that interest or curiosity is an important motivating factor. This is true as far as it goes. However, we are not born with an interest in any particular subject or a curiosity about any particular question; rather, we acquire these during the course of our learning. What causes us to develop interests and curiosities that last a lifetime?

This is a very broad question, and a great deal can be said to answer it. For my limited purposes, however, one of Charles Peirce’s suggestions would have to suffice. In his paper “The Fixation of Belief” (1877), Peirce argued that human beings embark upon the path of inquiry whenever they wish to overcome a disturbing state called “doubt” and to replace it with a satisfying state called “belief.” Being a pragmatist, Peirce emphasized that “beliefs guide our desires and shape our actions,” each “according to its degree,” while “doubt” has no such effect. Having a “belief” means, for Peirce, that some sort of “habit” has been “established in our nature” that “will determine our actions.” In the absence of “belief,” we are unsure how to act, or how we would act, and this “uneasy and dissatisfied state” is called “doubt.” According to Peirce, since “doubt” is a feeling of unease, akin to having a splinter in the eye, we “struggle to free ourselves” from it, and seek to achieve “a calm and satisfactory state” known as “belief.” This struggle is known as “inquiry.” Hence:

The irritation of doubt causes a struggle to attain a state of belief.


So, what is it that motivates us to invest our time, attention, and effort in learning something new? While the immediate cause can be correctly identified as interest or curiosity, Peirce’s suggestion helps us to see that what we call interest or curiosity is itself motivated by the desire to overcome the “uneasy and dissatisfied state” known as “doubt.” From this, we can draw the conclusion that “doubt” is a powerful motivator for learning. When we feel doubt, we are sometimes tempted to ignore or suppress that feeling; we try to wish it away by acting as if it doesn’t exist. To do so would be self-defeating. The irritation of doubt is really the awareness that we don’t know something that we do need to know. The uncomfortable feeling of doubt is not our enemy; it’s merely a message informing us of our own ignorance, a sign that we need to embark upon a journey of discovery. Even though doubt irritates, we ought to welcome that irritation, for without it we would have no reason to learn anything beyond what we already know.

Peirce says that the irritation of doubt stimulates a process of inquiry, and that this process of inquiry lasts as long as doubt continues to irritate. The process of inquiry can only come to an end when the irritation of doubt is replaced by “a calm and satisfactory state” which he calls “belief.” Peirce warns us, however, that reaching a state of belief does not mean that we have reached absolute truth. We attain a state of belief when we feel that we have found a resolution to our doubt, and that the resolution is somehow “true.” However, that may or may not be the case. Consequently, virtually any belief is vulnerable to further doubt, which initiates another process of inquiry, which leads to another belief. If we are lucky, every round of inquiry leads us to a belief that is better and truer than our previous belief. It is important, therefore, that we never allow ourselves to fall into the trap of believing that we have reached the absolute final stage of inquiry. Even when we are more or less satisfied with our present beliefs, we ought to remain open to the possibility of further doubts and fresh avenues of inquiry. This is why Peirce said, in another context, that in desiring to learn, we must never be satisfied with what we’re already inclined to think.


You can find the above rules presented in the form of a poster here.

Believing and Knowing

“The X-Files” is a popular television series that originally aired from September 1993 to May 2002. It was produced by Chris Carter for the Fox Network. While I did catch an occasional episode or two when it was first aired, it’s only now — almost two decades after the series began — that I’ve started watching “X-Files” religiously, i.e., in a dedicated, deliberate fashion. At the time of this writing, I am somewhere in the middle of season 3.

In this post, I am concerned neither with the mythology of “X-Files” nor with any of its specific stories or characters. Rather, I want to explore the meaning of one of the two slogans that became iconic in American culture thanks to the series’ popularity—“The Truth is Out There” and “I Want to Believe.” I’d like to tackle the latter slogan first, leaving the former for another day.

Throughout the series, or at least in the episodes I’ve watched so far, one of the protagonists — FBI agent Fox Mulder (played by David Duchovny) — frequently expresses his desire/ambition to “believe.” He does so both verbally and through his actions. Mulder even has a poster hanging prominently in his office that depicts a hovering UFO or alien spaceship, with bold letters proclaiming “I Want to Believe.” The slogan appears to be intended by the creators of “X-Files” to serve as a quick description of what motivates this particular character to engage in a relentless, even obsessive, struggle to track down aliens, observe paranormal phenomena, and expose government conspiracies designed to cover up the first two — even at the cost of endangering his life.

The question I want to explore concerns the value of adopting “I Want to Believe” as one’s goal or purpose in life —  something akin to what Stephen Covey calls a “personal mission statement.” The slogan appears to suggest that believing is some sort of virtue that ought to be cultivated for its own sake, that it is something all of us (or at least the noblest and the most ambitious among us) should aim for. The assumption is that the vast majority of us don’t believe —  most of us are either inherently incapable of believing or we have recently lost the ability to believe; and this general lack of belief is precisely what makes Mulder a lone warrior, a “cry in the wilderness” type of prophetic figure, who insists on continuing to believe even when the evidence is either scanty or ambiguous. What makes him a hero is that he goes on believing in extreme possibilities despite all the pressures of a skeptical culture and despite all the eye-rolling of his partner Dana Scully (Gillian Anderson). And yet, Mulder seems to be fully aware of the difficulties involved in maintaining a practical commitment to his beliefs, as there are powerful forces attempting to discredit his theories and findings. Given that he regularly comes up with hypotheses that are too fantastic from the viewpoint of his peers, Mulder needs all the support he can get in order to persevere in following his hunches. Since that support is hard to come by from other people, the poster in his office appears to function as a surrogate. Presumably, the poster is a constant reminder of what his life’s purpose is supposed to be, a reminder that he must take his own hypotheses seriously even if they appear silly or unscientific to everyone around him.

To keep one’s commitment to believe intact in the face of opposition and ridicule clearly represents an act of exceptional courage. Either that, or it is a sign of delusional schizophrenia. There is a fine line separating genius from madness, a line that is far too easy to cross. Because of the possibility that one may have lost one’s mind, to believe against the collective pressure of society is to take a tremendous risk. There is safety in believing what everyone else believes and denying what everyone else denies. More than safety, there is considerable wisdom in accepting what has become established as true after centuries or millennia of collective human experience; there is, after all, no need to reinvent the wheel. At the same time, there are occasions when it is worth going out on a limb — when it is worth believing and proclaiming a truth that is neither commonly acceptable nor currently provable — simply because one has an intuitive sense of having caught a glimpse of some aspect of truth. But then again, one’s own sense of confidence that one sees what others can’t or won’t see is no guarantee that one isn’t delusional. There is no dearth of highly confident individuals in mental asylums, folks who are absolutely convinced of the truth of whatever they happen to believe. While risking one’s position in society for the sake of one’s convictions is very often the cause of real human progress, a complete lack of doubt in one’s own private thoughts reveals a deficiency in self-awareness and cannot be a very healthy condition. Some form of objective, external confirmation of one’s hunches or visions is therefore necessary for gaining a relative assurance that one’s feet are firmly planted on this side of the genius-madness boundary.

Assuming that one hasn’t gone crazy, it is no doubt highly noble to maintain one’s commitment to believe what one personally knows to be true, especially when that commitment doesn’t provide any obvious, material advantage but is actually detrimental to one’s social status and approval ratings. In other words, believing what’s true is a virtuous act, especially when performed in the face of opposition or ridicule. But this raises the possibility that one can also believe what isn’t true—one may believe what’s actually false. Clearly, believing what’s false may or may not be a vice, but it cannot be a virtue. It follows that there is nothing noble or virtuous in believing as such.

In everyday English, believe means (1) to have confidence or trust in a person; (2) to give intellectual assent to, or accept the truth or accuracy of, a statement, doctrine, etc. The dictionary doesn’t say that in order to believe one must be justified in one’s convictions, or that one’s convictions must, in fact, be true. The concept of truth is not part of the concept of belief. All people—including delusional schizophrenics—do believe something. The really interesting issue therefore is not the fact of belief but the content of belief. It is a trivial point that people believe; the non-trivial question concerns what they believe, and whether or not what they believe is, in fact, true.

At first sight, the slogan “I Want to Believe” appears to be incomplete, for it lacks an object for the main verb. If the slogan is taken out of its narrative context and presented before a group of people unfamiliar with the television series, they would probably wonder about the missing object—“believe what?” Of course, this is not a problem for the audience of “The X-Files.” They are fully aware that, within the mythology of the series, the kinds of statements and reports that people find incredible concern extraterrestrial aliens, paranormal phenomena, and government conspiracies, and that these are most likely the sort of things that Mulder “wants to believe.” Indeed, “The X-Files” mythology never suggests that there has been any decline in the human ability to believe as such; rather, the decline is only in the human ability to believe certain kinds of statements and reports, and to give credence to certain kinds of interpretations of observed events or data. It is in the face of this very specific sort of incredulity that Fox Mulder wishes to believe otherwise.

I would like to emphasize that believing as such is not a virtue, partly because everyone believes something just by being human and partly because of the possibility of believing what’s false; and that only believing what’s true can properly be seen as virtuous, especially when a person goes on believing despite facing opposition and ridicule. People who are delusional—as well as those who are confused, mistaken, uninformed, misinformed, brainwashed, deceived, and so on—can be extremely certain and steadfast in their beliefs; they may be so convinced that they are willing to kill other people or sacrifice their own lives. Yet, no one thinks of their commitment to whatever they believe as particularly virtuous. It seems that people do not associate virtue with belief unless they are convinced that the belief in question is, in fact, true. This seems to suggest that humanity, in general, does not have a high regard for believing as such, but only for believing in a truth—especially an unpopular truth. For all practical purposes, what matters in judgments like these is people’s perception of whether something is true or false; people respect a person’s commitment to what they take to be true, not to what they take to be false. Leaving aside the epistemological question, it seems to me that this is indeed the right attitude.

Let’s return to the poster in Fox Mulder’s office that says “I Want to Believe.” What kind of believing does this slogan advocate? When the slogan is taken out of context, it seems to suggest believing as such, without any reference to the content of what is to be believed. But we have seen that the absent object in the sentence “I Want to Believe” is not really missing, for it is implied by the overall mythology of “The X-Files.” Apparently, the slogan refers to believing in the plausibility of particular kinds of scenarios — scenarios that are likely to be seen by the mainstream of society as having little or no probability of being real.

The wanting part is obvious, for Mulder approaches every perplexing situation with a strong bias towards the most fantastic and least probable hypothesis, and is visibly disappointed whenever a mundane explanation wins out (which is relatively rare). He is not open-minded, in the sense of someone who is receptive to all possibilities. For this reason, and as Scully keeps bringing to his attention, Mulder demonstrates a tendency to pick and choose only that evidence which suits his pet hypothesis in any given case, revealing the depth of his commitment to believe. In real life, this tendency would normally be seen as a violation of the scientific spirit; within the narrative framework of “The X-Files,” however, it is depicted as Mulder’s extraordinary ability to identify the most relevant clues in a given case.

Every now and then, it appears that Mulder’s issue is not believing per se; rather, it is finding concrete evidence for what he already believes intuitively. In this respect, he is not all that different from most of us, including scientists. Yet, the fact remains that in real life intuition can both guide and misguide, depending on how it is interpreted.

At the same time, the poster in Mulder’s office proclaims believing to be the object of his heart’s desire, rather than the confirmation of his beliefs. If he already believes, what’s the point of saying that he wants to believe? Or is the poster referring to the degree or intensity of his belief? It would seem that Mulder’s belief in extreme possibilities is rather fragile, always about to fall apart, and so he constantly needs reassurances in the form of concrete evidence; what he really wants to believe is that he has not invested his life in pursuit of something illusory. From this perspective, what he needs is constant external validation that he is not living a meaningless life.

At another level, however, what is not always clear is why Mulder wants to believe. Whether one has a hunch or one is in doubt, in either case it is worthwhile to inquire and investigate until one discovers the truth of the matter. But finding or figuring out what’s true is not the same thing as believing whatever one wants to believe. Isn’t finding out the truth better than proving one’s beliefs? Take, for instance, the issue of extraterrestrial aliens visiting the earth and abducting humans for experiments, a theme that “The X-Files” writers find particularly attractive. In fact, alien abduction is a pivotal theme for the entire series. Depending on the viewpoint of a given character and the specific narrative frame of a given episode, this scenario can be either true or false. But regardless of whether or not alien abduction turns out to be factual within a specific context, it seems to me that trying to cultivate a belief in its factuality would be a pretty useless enterprise. Whenever I see the poster, I want to give Mulder a piece of my mind: You should be aiming at knowing, Mr. Mulder, not believing.

What is intriguing is that he does know. The FBI agent is fully aware that extraterrestrial aliens have been abducting humans for experiments, and he knows this on the basis of his own countless experiences and encounters. At a personal level, he has no reason to doubt that alien abduction is a factual phenomenon. Since he knows that the scenario is true, what he clearly needs to do is to demonstrate its factuality before the wider public, thereby defeating the government conspiracy to keep this a secret. And this is precisely what motivates him in episode after episode of “The X-Files.” Given that Mulder knows, I am troubled by the poster in his office that says “I Want to Believe.” For if Mulder truly knows, I don’t understand why he still wants to believe.

In an earlier post on “Faith and Belief,” I quoted Wilfred Cantwell Smith’s observation that, in contemporary English usage, the word “belief” is frequently used in a way that implies its sharp contrast with respect to the word “knowledge.” Generally speaking, when people say “I believe” they’re indicating that (1) they are not completely sure, and/or that (2) there is legitimate room for disagreement. On the other hand, when people are completely sure that what they believe is true, so much so that no rational and informed person could possibly disagree, they would simply say it in a matter-of-fact fashion, without bothering to preface it with “I believe.” Thus, it makes a great deal of difference whether a person says “It is raining” or “I believe it is raining.” The former sentence implies, but usually does not include, the phrase “I know.”

In everyday English usage (as opposed to academic language), a tenacious belief does not attain the status of knowledge unless it happens to be true. A wrong belief, no matter how firmly or confidently held, can be seen as a mistake, a confusion, a misunderstanding, etc., but it is never seen as knowledge. In other words, knowledge is not simply a belief about which a person is completely sure. In addition to subjective certitude on the part of the believer, the belief itself must be objectively true for it to qualify as a piece of knowledge. (How we know that a belief is true is beside the point.) Consider the following examples, slightly modified from Smith.

The above examples demonstrate the following features of beliefs: (1) beliefs can be true or false, (2) a person can be certain or uncertain about the truth of a given belief, and (3) a belief amounts to knowledge only when it fulfills two conditions, i.e., subjective certitude and objective truth. Out of the four statements, Smith contends that only the last one, “I know that Washington DC is the capital of the United States,” would qualify as knowledge.

These simple observations lead us to the following axiom: The more we know, the less we believe. Or, as knowledge expands, beliefs shrink.

What, then, is the value of the slogan “I Want to Believe” as a personal mission statement? Not a great deal, I would say. Since the word “believe” usually implies a feeling of uncertainty, and since even a strong feeling of confidence does not guarantee that a given belief is objectively true, it seems to me that knowing is a much higher goal to pursue than mere believing.

I would like to see Mulder’s poster proclaiming a different goal: “I want to know.”

Quarreling over Names and Shapes

People disagree!

Some of our disagreements are the result of our diverse tastes, values, preferences, and perspectives; these disagreements may be mitigated, tolerated, accepted, or celebrated, but they are unlikely to disappear. Other disagreements are the result of the limitations of our sense perception and/or shortcomings in our reasoning capacity; these disagreements are neither inevitable nor permanent, for they can be eliminated, to a greater or lesser degree, with the help of appropriate tools and methods.

In his Masnavi, the Sufi poet Jalal al-Din Rumi articulates the latter point by narrating two stories, “Quarreling over Names” and “Quarreling over Shapes.” The first story explains how the same reality can be described by means of a wide variety of names; the second story emphasizes how knowledge of a part of reality, no matter how accurate, does not easily translate into knowledge of the whole.

Here is Nicholson’s translation of the first story, “Quarreling over Names.”

Pass beyond (external) names and look at the (underlying) qualities, so that the qualities may show you the way to the essence.

The opposition (among) people takes place because of names. Peace occurs when they go to the real meaning.  

The argument of four persons over grapes, which each one had understood by a different name;   

A man gave four persons a silver coin. The (first) one (who was a Persian) said, “I will give this for (buying) some angur.” 

Another one (who) was an Arab said, “No! I want ‘inab — not angur, O deceitful (man)!” 

The (third) one was a Turk and he said, “This (coin) is mine.  I don’t want ‘inab. I want uzum.”   

The (fourth) one, an Anatolian Greek, said, “Quit (all) this talk!  I want istafil.”  

In (their) disagreement, those individuals were (soon) in a fight — since they were uninformed of the hidden (meaning) of the names.  

They were striking at each other (with their) fists out of ignorance.  They were full of foolishness and (were) devoid of knowledge.  

This is a delightful, yet somewhat poignant, story. The narrative makes it clear that angur, ‘inab, uzum, and istafil are merely four different words in the Persian, Arabic, Turkish, and Greek languages, all of which denote the same reality (grapes). Listening to Rumi’s poetic description of the conflict, we are tempted to laugh at the four characters in the story and their silly behavior. Our laughter is soon turned into serious introspection as we realize that the story is describing an important aspect of the human condition, a predicament that each of us faces by virtue of being human.

Our disagreements can sometimes lead to heated arguments that may escalate into fist-fights and even bombing campaigns. And yet, many of our disagreements are too shallow to warrant anything more than a smile. We disagree over words while ignoring that words are merely the vehicles for meanings. We routinely take our mental positions, our doctrines and our dogmas, with a seriousness they don’t deserve. We fight over superficial differences, such as words and phrases, while forgetting that there is a single reality underlying our varying interpretations. Just because that single reality is susceptible to a wide range of descriptions does not mean that our words and phrases are worth fighting over, or that our disagreements are permanent or essential.

Ruth Bebermeyer says, in one of her poems, that “words are windows, or they are walls.” The purpose of language is communication, but our language often becomes the biggest obstacle that prevents us from communicating. We’ve all experienced situations in which words fail to carry the meanings we wish them to convey, and in which the more words we use, the more convoluted the situation becomes. This happens, partly, because we tend to overestimate the power of language. Somehow, we have forgotten that experience is more basic, and therefore more important, than the words we use. Language is a great tool, but it has its limits. Our vocabulary, no matter how large, is no match for the range and richness of our actual experiences.

So long as all parties in a given context understand what they are referring to, it makes absolutely no difference exactly which words they use to describe it; or, indeed, whether they speak at all. On the other hand, when our actual experience of reality is inadequate, or when we put too much trust in the ability of language to convey our meanings, then we easily fall into disagreements over such superficial matters as names and labels.

For Rumi, the conflict over interpretations can only be resolved when someone with a deeper insight helps us notice the shallowness of our disagreements and the true nature of reality. As soon as we see and taste the actual grapes, the conflict disappears. We realize that it doesn’t matter whether we call the object of our desire angur, ‘inab, uzum, istafil, or something else. This need for personal experience calls for a spiritual guide, someone who knows the different languages and can therefore appreciate the underlying cause of our predicament, someone who can see through the names and labels, someone who understands what all of these words are intended to signify.

Rumi’s second story is rather well-known, but is no less delightful or profound than the one about grapes.

The disagreeing over the qualities and shape of the elephant;

(An) elephant was in a dark building.

(Some) people from India had brought it for exhibition.

Many people kept going into that dark (place) in order to see it.

Each one was stroking it (with his) hands in the dark, since seeing it with the eyes was not possible.

In the case of one person, (whose) hand landed on the trunk, he said, “This being is like a drain pipe.”

For (another) one, (whose) hand reached its ear, to him it seemed like a kind of fan.

As for (another) person, (whose) hand was upon its leg, he said, “I perceived the shape of the elephant (to be) like a pillar.”

(And) in the case of (another) one, (who) placed (his) hand upon its back, he said, “Indeed, this elephant was like a throne.”

In the same way as this, any one who reached a part (of the elephant) used his understanding (in regard to) any (particular) place he perceived (by touch).

Their words were different and opposing because of the (different) viewing places.

One person gave it the nickname of (the bent letter) “dal,” this (other one) gave it the nickname (of the straight letter) “alif.”

If there had been a candle in the hand of each person, the disagreement would have gone out (completely) from their speech.

The eye of (physical) sense is like the palm of the hand, nothing more.

(And) the palm (of the hand) has no access to the whole of (the elephant).

In this narrative, a number of individuals go inside a dark room where an imported animal is being kept, a creature they have never encountered before. The individuals are supposed to touch and feel the body of the animal, and thereby form some kind of image in their minds of what the beast looks like. The story brings out a whole range of disagreements among those who have felt some part of the creature and, on the basis of this limited experience, feel perfectly confident making judgments about the whole.

In the story of the grapes, the four men are able to resolve their disagreement when they are presented with a bowl of grapes. The actual experience of seeing and tasting the fruit makes them realize that all of them had exactly the same desire, and that their problem stemmed from the fact that they were expressing their desire in different words. The situation in the story of the elephant is slightly different. Here, a number of individuals are disagreeing over what this exotic creature looks like, based on their differing perceptions generated by their experience of touching and feeling different parts of the animal’s body. If the story of the grapes brings out the limitations of language, the story of the elephant is designed to emphasize the limitations of sense perception. While each individual is absolutely correct in describing a particular part of the creature, the epistemological shortcoming lies in their assumption that knowing a part of a given reality is identical with knowing the totality of that reality.

For Rumi, the disagreement could be resolved if each person was given a candle. While sense perception (exemplified by the faculty of touch) gives us accurate but limited information, it is possible to supplement that information by means of a source of insight (exemplified by the candle) into the nature of the whole of reality, so that we may come to perceive it in its fullness or totality.

The story of the elephant can be read as an allegory for the nature of science. While science can give us more or less accurate information about a given strip of reality, it would be a mistake to assume that this strategy of dividing reality into tiny fragments and studying them as isolated pieces can help us see what reality as a whole looks like. The individuals who judged the shape of the elephant based on what they knew about the shapes of its parts were using an analytical method. This is good and useful as far as it goes, but some form of illumination or enlightenment is needed if we are to employ a synthetic method in order to appreciate the wholeness of reality as well.

The religious implications of this story are intriguing, to say the least.

Angels and Demons (3)

Perhaps the most nefarious feature of a Domination System is that it seduces us into losing ourselves in a jumble of thoughts and judgments. More specifically, since a Domination System is bad for everyone, it can only function by misrepresenting itself to all or most of its victims.  The victims of a given Domination System must believe in certain false propositions, even when ample evidence to the contrary happens to be right under their noses.  In fact, the truth is always already within us, which means that our inability to recognize truth is not built into our minds but is something we acquire from our culture.  This is possible only through a powerful process of forgetting what we naturally know to be true, a process that happens gradually through many years of education and socialization.  By the time we become adult members of civilized society, we are already in deep sleep.  A given Domination System can only thrive on false consciousness, on its capacity to make people less conscious than they are meant to be.

The most important truth that we are made to forget is the truth about who we are.  This is so because all Domination Systems function by dividing, classifying, and labelling people both horizontally and vertically.  If we were to learn the truth about who we are in reality, all of the existing divisions among humankind would become instantly relativized.  The absolute solidity of such divisions as gender, race, ethnicity, class, religion, and political affiliation would disintegrate; the distinctions would remain, but they would lose their tremendous capacity to determine our identity and our actions.  We would be able to see through them and rise above them.  This, obviously, is very difficult to achieve; for a Domination System works incessantly to ensure that we will always identify ourselves with this or that group and that most of us will never find out the truth of who we really are.

Another truth that we are made to forget is the truth about our freedom of choice.  This is so because all Domination Systems require that the vast majority of people behave in predictable ways.  A Domination System cannot deal with human actions that are spontaneous and authentic, for such actions cannot be controlled and regimented in the service of the system.  Consequently, we are educated and socialized into believing that we act in certain ways not because we choose to but because we have to, or because other people or events make us act in those ways.  The more we forget our inherent freedom to choose, the less we are able to use it.  We lose our freedom merely by believing it does not exist.  Very soon, we also forget that we are responsible for our choices.  Domination Systems love people who lack freedom as well as responsibility.  Such people can be made to feel anything; they can also be made to do anything.

A third truth that we are made to forget is that it is possible for all people to meet their needs.  This is so because all Domination Systems thrive on constant, never-ending competition; they are nourished by the win/lose mentality.  Consequently, we are made to believe that there is a permanent scarcity of resources, that it is impossible to have winners without creating losers, that it is a jungle out there, that wants and needs are the same, and that happiness is just around the corner (usually sitting on a shelf in the supermarket).  Once we accept the proposition that only some people will be able to get the desirable goods and everyone else will suffer deprivation, we know that the purpose of our life is to become (and remain) part of the first group.  This also teaches us to constantly compare ourselves with everyone else, for we must determine at each moment whether we are among the winners or the losers; we also wish to know our relative status among the winners, whether we are getting ahead of others or falling behind in the “human race.”  All of this ensures that we are never satisfied, that we are always looking for more, and that we won’t help anyone else.

A successful Domination System is able to hide all these truths in plain sight.  It does so by employing a very clever trick.  Since truth is always already accessible to us, the only way to make it “disappear,” as it were, is by diverting our attention, which is accomplished through the age-old magical trick called distraction.  This is where the jumble of thoughts and judgments comes in.  We lose touch with reality, and with the truth that reality is willing to offer us each moment, when we learn to give greater attention to thoughts and judgments instead of our actual experiences.  The raw experience is our direct portal into reality and truth; this portal is often blocked by a jumble of thoughts and judgments.  No Domination System can interfere with our raw experience; it can only divert our attention away from our experience and into the artificial world of thoughts and judgments, a realm that is much more susceptible to manipulations.

Insofar as a Domination System is successful, the overwhelming majority of people living under its influence tend to lose their humanity.  But there is hope.  Every now and then, some start to wake up from their artificial slumber.  They gradually rediscover the forgotten truths and begin to reclaim their humanity.  No Domination System is fond of people like these, for they are trouble-makers of the worst kind.  Such people are the only hope of humanity.

Waking up, however, involves considerable suffering; it’s definitely not as easy as taking the red pill rather than the blue one.

Angels and Demons (2)

I ended my previous post by asking this question:  If human beings are fundamentally good, what makes them act in evil ways?  I suspect that if we were to make a list of all the factors that contribute to the persistence of human evil — factors that motivate, encourage, or cause us to act in immoral ways — then we would end up with a very long list . . .

But what if we don’t need to make such a list?  What if the totality of anthropogenic evil can be traced to just a handful of variables?  Indeed, what if there were only a couple of factors involved in all acts of human corruption?

I have recently come to believe just that. I think there are only two basic factors that contribute to the entire range of human depravity and immorality. I may turn out to be in error, but at this time it does appear to me that all of the usual suspects — all the causative or contributory factors that philosophers, psychologists, sociologists, and ethicists have been able to identify — can be reduced to one of only two variables.

I am tempted to call these two variables the “evil twins.”

One of the “evil twins” is inside the human individual, in the soul or the psyche. The other is outside the human individual, in the workings of society and culture. The two can be identified and discussed separately, though in practice they often thrive on each other. I call the internal factor “lack of self-awareness” and I call the external factor “structures.” Today my task is to explain the latter, leaving the former for another day.

I have been nudged in the direction of identifying “structures” as one of the “evil twins” after reading Emile Durkheim, Max Weber, Johan Galtung, Kenneth Boulding, Ronald Wright, Walter Wink, John Dominic Crossan, Derrick Jensen, and Marshall Rosenberg.  None of them, however, can be held responsible for the errors of my interpretation.

By “structures” I mean the objective and systemic aspects of human society and culture. The social and cultural structures in which we live and move and have our being are the products of humans interacting with each other and with their environments over hundreds of years; and yet, I locate them outside the human individual. Even though we are partially responsible for having created them, we haven’t created them with full awareness of what they are capable of doing, nor are we in full control of what they do to us. As such, structures are much more than human products. These structures have a life and a momentum of their own; once produced, they come to acquire an undeniable influence on how we think, feel, and act. The influence that social and cultural structures have on us tends to remain beyond our ability to fully understand, control, or modify. We are not absolutely determined by them, however. We are determined by social and cultural influences only to the extent that we suffer from the internal factor, i.e., from “lack of self-awareness.”

To sum up, society is more than the sum of its parts; it enjoys a reality that is pretty much independent of its members — particularly if the individuals who make up the society are lacking in self-awareness. Similarly, culture is made up of ideas, beliefs, skills, and habits that are ultimately human products; yet these ideas, beliefs, skills, and habits also act back on us in profound ways but do not easily change in response to our efforts — particularly if we are lacking in self-awareness.

It is very difficult to deliberately change social and cultural structures; they do change, of course, but very slowly and only after much concerted effort and sacrifice . . . and with a great deal of self-awareness.

Let me be more specific. When I say that social and cultural “structures” constitute the external factor responsible for human evil, I am referring to one very particular aspect of these structures. To understand that, we’ll have to look into the origins of these structures.

Human beings have been living on this planet for more than 150 thousand years. Human behavior is remarkably different from that of other living organisms in the degree of its range and flexibility. Like other primates, we organize ourselves socially; unlike other primates, we can organize our societies in virtually as many different ways as we choose. We also have the unique need for “meaning,” which creates the further need for cultural goods in addition to social ones. This combination of traits makes the creation of social and cultural structures a human inevitability. For most of our history, however, we created only very simple forms of social and cultural structures; things changed drastically with the beginning of civilization.

Civilization is a particular type of culture, characterized by the domestication of plants, animals, and humans, which usually leads to the development of writing, a complex division of labor, and the urban-rural divide. About 10 thousand years ago, humans discovered or invented large-scale farming and the domestication of animals. This led the previously mobile populations of hunter-gatherer bands to start settling down in villages and towns. As we became increasingly proficient in our control of plants and animals, we began to abandon the old habits associated with subsistence living and encountered for the first time the mixed blessing of food surplus. This led, about 5000 years later, to the birth of full-fledged civilization in at least four to six different cultural zones. With the growth of large-scale farming came private property, which produced a class system of landowners and peasants. With surplus food came the need for granaries; the birth of cities meant the concentration of wealth in a relatively small area, creating the need for professional warriors, taxation to protect the city-state, and a distinct religious class to justify all this as part of a larger sacred reality. The earlier hunter-gatherer communities were egalitarian, with few hierarchies, no organized state, and no stratification. With the birth of civilization, however, Domination Systems emerged for the first time in human history. They have been with us ever since.

A Domination System is a social hierarchy in which those at the bottom live at a considerable disadvantage as compared with those at the top.  This is not to say that hierarchy itself is problematic.  In fact, many hierarchies can be advantageous to both parties, such as parent-child or teacher-student. A Domination System, however, is a particular type of hierarchy with the following features: (1) it consists of two classes of people, one of which is usually (but not always) more numerous but enjoys significantly less power than the other class; (2) it is a more or less permanent arrangement that allows little possibility of reversal or equality; (3) there is systemic exploitation, so that the advantage of those at the top requires the disadvantage of those at the bottom.

Primates as well as many other animals have social hierarchies, but human beings are the only animals who have created, perpetuated, and legitimized Domination Systems as an essential element of their social and cultural structures.

A Domination System is intended to maintain a significant asymmetry of power between two classes; as such, it requires two additional mechanisms for its own continued existence: (1) violence or the threat of violence; and (2) religious or ideological legitimation. A Domination System functions successfully in the long run only if a large majority of people continues to act in the expected fashion; the human tendency to help support a Domination System results from a fear of repercussions on the one hand, and an acceptance of the Domination System as legitimate and moral on the other.

Civilization has had a paradoxical set of consequences over the last 5000 years, and it is not at all self-evident whether its positive contributions outweigh its many disastrous results. Civilization may be compared to the strange case of Dr. Jekyll and Mr. Hyde. On the one hand, civilization is the engine that increases the level of complexity in social and cultural structures, thereby offering human communities a tremendous adaptive advantage. Without civilization we would have no cars, no hospitals, no schools, no computers, no books, no indoor plumbing, and no electronic gadgets. Without civilization there may be some art, science, history, philosophy, and music, but these would be of a very poor quality relative to what humankind has actually produced. We should all be thankful for the progress of civilization.

On the other hand, we should acknowledge that Domination Systems came into existence alongside, and as a consequence of, the same progress of civilization that has brought us countless desirable goods.  Had there been no civilization, there would have been no system of domination—which means no stratification based on wealth and power, no organized violence, and no exploitative hierarchies.

Domination Systems cannot function in the absence of civilization. Whether civilization can function in the absence of Domination Systems is not yet known.

Domination Systems are abnormal and unnatural. They are not in harmony or alignment with human nature; instead of helping us realize our innate goodness, Domination Systems bring out the worst in us by encouraging us to focus on our short-term and selfish interests.

Domination Systems tend to harm everyone — those at the top as well as those at the bottom; humans as well as animals, plants, oceans, soil, and air. The social and cultural structures serving the Domination System try to convince us that violence is a necessary element in human existence, that the only way to survive in this jungle is to carry a big stick and be ready to use it. Domination Systems are factories whose main product is violence—in multiple sizes, shapes, and brands — and the resulting suffering for both the victims and the perpetrators.

Can a Domination System be dismantled? Yes; better yet, it can be transformed or converted into something healthier and more organic, something in harmony with human nature and in alignment with the needs of the creation. But this requires using a form of power that is very different from the one that a Domination System likes to use. A Domination System thrives on violence, on destructive power. Throwing a bomb at a Domination System is like feeding spinach to Popeye: it will only make the system stronger than before, and more deadly. Instead, the only form of power that has any hope of neutralizing or transforming a Domination System is integrative power. But that is a topic for another day.

Angels and Demons (1)

Are human beings fundamentally good or fundamentally evil? Before we tackle the issue, let’s become aware that this is not an abstract question.  It is not the kind of dry, bookish problem that we normally reserve for ivory-tower philosophers who have nothing better to do than split imaginary hairs all day long. Instead, this is a vital question — a question about life itself. Anyone who has the slightest interest in life cannot afford to ignore a question like this.

The practical significance of the question is undeniable. In order to live as social beings, each of us needs some understanding of what people are really like, what they truly are deep down in their essence, what they actually look like behind their everyday masks. We need this understanding in order to guess the contents of other people’s inner worlds and predict the nature of other people’s most likely reactions. This is important because, as social beings, we must adjust our own behavior in response to our guesses and predictions about other people. We perform these adjustments on a moment-to-moment basis, with the aim of enhancing our ability to fulfill our needs and promote our interests. The accuracy of our guesses and predictions determines the extent to which we are able to fulfill our needs and promote our interests. In turn, the innumerable guesses and predictions that we make every day about other people’s feelings and reactions are closely tied to our basic presumption about the morality of human nature, i.e., our conscious or unconscious sense of whether humans are fundamentally good or fundamentally evil.

If we find ourselves interacting with other human beings, this in itself is sufficient evidence that we already have some sense of the basic goodness or depravity of human nature. Such a sense, whether we are consciously aware of it or not, determines the kind of guesses and predictions we make about other people, which, in turn, determine how we organize our individual and collective lives, how we structure our families, societies, and governments, the kind of laws we prefer, the sort of politicians we vote for, and even the way we act in our most ordinary moments.

How do we know whether human beings are fundamentally good or fundamentally evil? Each of us must grapple with this question and come to a conscious understanding about the morality of human nature — an understanding that is based, preferably, on a broad range of relevant and correct information and is arrived at as a result of serious, thoughtful reflection. For if we do not deal with this question methodically and deliberately, we would continue to live according to an unconsciously held answer; we would continue functioning on the basis of a tacit presumption that we acquired or formed many years or decades ago, probably on the basis of our socialization or our limited and improperly remembered personal experiences.

It’s difficult not to take a cynical stance on this issue.  Any study of human history will convince most of us that there is something really crooked or wicked in human nature; that there is more evil in the human makeup than there is good; that the evil in our nature is more basic while the good is merely accidental.  Corruption, in other words, is our default setting, which isn’t very easy to change.  We are also likely to arrive at the same conclusion by only a few months of reading newspapers and watching television news.

But there is a problem with taking such a position. If we assume that human nature is fundamentally evil or corrupt or immoral, then we would act as if everyone is a potential enemy and nobody can ever be trusted. When in doubt about someone’s motive or intention, our first response would be to assume the worst, unless the other person proves his/her innocence. We would remain armed all the time, both literally and metaphorically, with our defense mechanisms in a state of permanent alert — always looking out for the next threat or attack. We would be suspicious of our neighbors, our friends, our colleagues, our spouses, and our children . . . everyone. And they would be likewise suspicious of us.

Wouldn’t our negative assumption about human nature then become a self-fulfilling prophecy?

And yet, there is plenty of evidence indicating that people are very often nice and friendly and helpful, and that they do not always harbor selfish and malevolent desires against each other. Research in neuroscience, anthropology, and primatology has shown that there is something inherently good in human beings; there is in our nature a quality that expresses itself in acts of kindness and benevolence, empathy and altruism, cooperation and forgiveness. This is not a marginal or rare quality, but a significant and central part of what makes us human. In fact, our very existence is proof that there is far more goodness in us than depravity; if human beings were fundamentally evil and selfish, they would have destroyed themselves a very long time ago. Given the high degree of interdependence that is so characteristic of our species, we would not have survived for thousands of years if it were not for our instincts for cooperative and selfless behavior. This natural goodness has to be acknowledged as real and valid and relevant, despite all the violence and corruption that human beings are also undoubtedly capable of exhibiting. Indeed, the fact that we find violence and corruption abhorrent is itself a strong piece of evidence indicating that our basic, default nature is good.  If we weren’t good at some deep level of our being, we would never find anything wrong with cruelty, injustice, or bloodshed.

Obviously, this argument solves only part of the problem. It doesn’t tell us why we act badly. It doesn’t explain all the negative and dark aspects of human behavior that we find so powerfully illustrated in our history books, the aspects that we observe in our daily interactions with other people and that we encounter within ourselves during moments of honest self-examination.

If we are basically good, why do we so frequently act in morally undesirable ways? If we are inherently predisposed toward morally desirable behavior, what is it that so often hinders us from realizing this potential?

We sometimes hear that a human being is both an angel and a demon; if we have within ourselves the ability to be good as well as the ability to be evil, what is it that pushes us toward the latter and away from the former? What makes the demon stronger than the angel? If we are far more virtuous than we are evil, what is it that allows a small part of ourselves to enlarge disproportionately and overshadow all of our natural goodness?

Learning Time Management from Nature

On September 17, 1839, a twenty-two-year-old man named Henry David Thoreau wrote the following in his journal:

Nature never makes haste; her systems revolve at an even pace.  The bud swells imperceptibly, without hurry or confusion, as though the short spring days were an eternity. All her operations seem separately, for the time, the single object for which all things tarry. Why, then, should man hasten as if anything less than eternity were allotted for the least deed? Let him consume never so many aeons, so that he go about the meanest task well, though it be but the paring of the nails. If the setting sun seems to hurry him to improve the day while it lasts, the chant of the crickets fails not to reassure him, even-measured as of old, teaching him to take his own time henceforth forever. The wise man is restful, never restless or impatient. He each moment abides where he is, as some walkers actually rest the whole body at each step, while others never relax the muscles of the leg till the accumulated fatigue obliges them to stop short.

As the wise is not anxious that time wait for him, neither does he wait for it.

Sometimes we miss the forest for the trees; at other times we miss the trees — or the bark, the leaves, the chants of the crickets — for the forest. Sometimes we are in so much of a hurry to reach our destination that we fail to enjoy the steps we take on the pathway; at other times we are so engrossed in the individual steps that we forget where we are headed, or even why we are traveling in the first place. Sometimes we feel that time is passing too quickly and we become anxious because it seems to be leaving us behind; at other times we feel impatient because time appears not to be moving as fast as we wish it would.

“Time,” like most objects of knowledge, has both a subjective and an objective dimension. As humans, our perception of time is neither wholly subjective nor wholly objective, but some combination of the two. Objectively speaking, there are only 24 hours in a day and only 60 minutes in an hour; for each one of us, there is only one lifetime to live, with a definite number of years, days, hours, and minutes. Clocks tell us that time passes at a certain rate, regardless of what we feel or wish or imagine.

The subjective aspect of time, on the other hand, is equally real. The attitudes we adopt do not affect the passage of time — objectively measured — but they do expand or shrink time as it manifests in the world of our subjective experience.

How long does it take to tie one’s shoe laces or cut one’s fingernails? We may approach these tasks as if there were an unlimited amount of time at our disposal — or, in Thoreau’s words, as if an eternity had been allotted for their completion. Alternatively, we may approach these tasks as if we were already out of time, as if we didn’t have even a single moment to spare.

Whether we take the first approach or the second, the actual tying of our shoe laces or the cutting of our fingernails is probably going to take the same amount of clock time.  The subjectively experienced quality of that time, however, is likely to be very different in the two cases.  When we approach a task in the spirit of hastiness, we tend to perform it reluctantly, anxiously, grudgingly, for our attention is not focused on the task, but, perhaps, on what we wish to gain through it.  On the other hand, when we approach a task in the spirit of eternity — as if time did not matter — the task is effectively raised to the status of an end that has its own value, that is not a means to some other end.  We tie our shoe laces for the sake of tying our shoe laces.  At that moment, the purpose of our life is nothing less and nothing more than tying our shoe laces.

Thoreau is not suggesting that we become lazy or ignore our larger goals; instead, he is suggesting that we act without any particular concern for time — that we perform every task deliberately, just as nature does, guided only by the task’s inherent demands.