Ahmed Afzaal

The Voter’s Dilemma (3)

Let’s examine Noam Chomsky’s full argument. Here’s a short excerpt from an interview that he did with Mehdi Hasan on April 17.

Mehdi Hasan: What do you make of the “Never Biden” movement?

Noam Chomsky: It brings up some memories. In the early 1930s, in Germany, the Communist Party, following the Stalinist line at the moment, took the position that everybody but us is a social fascist, and so there is no difference between the social democrats and the Nazis. So we are not going to join with the social democrats to stop the Nazi plague. We know where that led. There are many other cases like that. And I think we are seeing a rerun of that.

So let’s take the position “Never Biden, I am not going to vote for Biden.” There is a thing called arithmetic. You can debate a lot of things, but not arithmetic. The failure to vote for Biden in this election in a swing state amounts to voting for Trump. Taking one vote away from the opposition is the same as adding one vote for Trump.

So if you decide that you want to vote for the destruction of organized life on earth, for the sharp increase in the threat of nuclear war, for stuffing the judiciary with young lawyers who will make it impossible to do anything for a generation, then do it openly and [say] yeah, that is what I want.

That’s the meaning of “Never Biden.”

Chomsky is logical and consistent to a fault. He has previously advised progressive and leftist voters to support Bill Clinton and Hillary Clinton on the basis of what he calls the “lesser evil voting” strategy, or LEV. This strategy says that how you vote should depend on the state in which you live. If you happen to live in a Blue state, feel free to abstain from voting or vote for the Green Party; but if you live in a Swing state, then you must vote for the Democratic candidate, regardless of who that is. That’s the claim. The grounds are as follows: In our two-party political system, we know in advance that the next President will be either a Democrat or a Republican. They are both evil, but the former is less evil than the latter. The political system does not allow us to reject evil as such; it only allows us to choose between two types of evil. Since one of these options represents a greater evil while the other option represents a lesser evil, and since there is no realistic chance for a third party candidate to win a Presidential election, it follows that if you want to reduce evil you must vote for whoever happens to be the Democratic candidate—unless you live in a state that Democrats are guaranteed to win, such as California and Massachusetts.


Why should one’s approach to voting differ from one state to another? Chomsky believes that voting is not a matter of expressing one’s values but a matter of taking responsibility for the consequences of one’s actions. On that basis, he suggests that not voting for Clinton in 2016 or Biden in 2020 is perfectly fine if you live in a Blue state, since your vote (or lack thereof) won’t prevent the Democratic candidate from taking that state; but if you were to do the same thing in a Swing state, you’d be helping the GOP candidate become President. In other words, using your vote to express your values is acceptable when it has no effect on the election results, but it is not acceptable when it does. Either way, it’s the consequences that matter. Chomsky believes that when it comes to choosing one’s actions—such as voting—the likely consequences of those actions should be the only relevant criterion; everything else follows from this fundamental commitment.

But the LEV strategy can be challenged from several directions. First, it can be challenged by people who believe that voting is, in fact, a matter of expressing one’s personal values. They would argue that what matters most is that one acts in a way that is consistent with one’s espoused beliefs, and that, in the words of Martin Luther, “to go against conscience is neither right nor safe.” Second, LEV can be challenged by people who don’t think of voting in terms of individual morality but see it entirely as an issue of collective strategy. They would agree with Chomsky that voting should be all about consequences, but disagree with him as to which set of consequences should be treated as most relevant or decisive. Third, LEV can be challenged by those who don’t agree with Chomsky’s fundamental dichotomy, i.e., the notion that voting must be either an expression of personal values or a strategy for social change. They would argue that LEV is based on a false choice, and that it is possible to vote in accordance with one’s conscience while also taking responsibility for the consequences of one’s vote. In fact, they may even argue that the only effective approach towards the desired social change is one that transcends the either/or logic underlying the LEV strategy.

Chomsky’s reasoning is flawless, but that doesn’t make it invincible. This is because his reasoning in defense of LEV is neither an equation nor a theorem; rather, it is a moral and political argument, which makes it susceptible to moral and political challenges.

The Voter’s Dilemma (2)

I have been using the word “dilemma” to name the difficulty of deciding whether, and for whom, I should vote this coming November. After having chosen it, I started wondering if it was, indeed, the right word for this purpose, so I decided to look it up in the OED.


So, a dilemma is basically a situation that offers two or more alternatives, known as “horns,” which are—or appear to be—equally undesirable.

It is quite interesting that the two horns of a dilemma may or may not be equally undesirable. It is, of course, extremely hard to make a decision when both (or all) alternatives are equally bad. I am not sure that this is usually the case. For if the alternatives are even slightly different, then it’s likely that one of them is at least a tiny bit more undesirable than the other. Of course, the difference in the degree of undesirability between the two alternatives may be so insignificant as to be practically nonexistent, as, for example, in the case of Sophie’s Choice. Yet, I am inclined to speculate that real-life dilemmas (as opposed to hypothetical ones) are unlikely to be pure, in the sense that picking one option over the other need not be entirely random. (This leads me to wonder about the nature of choice, but I won’t deal with it here.)

Regarding the upcoming Presidential election, I am struck by the fact that many people who favor voting for Biden do not seem to experience a dilemma at all. Rather, such individuals tend to be completely, absolutely, one hundred percent sure that they have the right answer and that all other answers are obviously incorrect. As a result, they often become frustrated when others fail to agree with them right away. Apparently, they find it incredible that anyone in their right mind could even imagine that a course of action other than voting for Biden might be rational. It is remarkable that these true believers appear to be totally free of doubts, misgivings, hesitations, or uncertainty of any kind. The truth of the matter is so clear to them that they find it extremely difficult, if not impossible, to try and see the issue from a different viewpoint. As far as they’re concerned, there is no sane viewpoint other than their own. In fact, they probably haven’t noticed that they have a viewpoint, and that voting for Biden is only one of the many justifiable options.

Noam Chomsky is a case in point. In the course of criticizing the “Never Biden” position, he recently made the following statement:

There is a thing called arithmetic. You can debate a lot of things, but not arithmetic.

I am not concerned here with the merits of Chomsky’s argument but only with his sense of certainty. He is absolutely right when he says that arithmetic is not debatable. But he glosses over the fact that the “Never Biden” position is not about arithmetic. That position can be defended from several different viewpoints; one may disagree with those viewpoints and one may criticize the resulting position as inadequate or flawed. Yet, these are actual viewpoints held by actual people who are no less rational than anyone else; as viewpoints, they are all legitimate. In contrast, arithmetic is not a viewpoint. We cannot debate arithmetic because it represents a closed, abstract, and self-referential system that does not, in and of itself, say anything about the universe. Arithmetic offers an unusually extreme certainty, such that, for example, 2+2=4 everywhere and always, and there is nothing anyone can do about it. This degree of certainty is impossible when we are dealing with the complex messiness of everyday reasoning, emotions, biases, values, commitments, and all of the social and cultural influences that go into forming a particular human viewpoint.

Personally speaking, I don’t feel confident in the present context that any answer is going to be completely, absolutely, one hundred percent right—or wrong. I am writing these blog posts because I want to explore how to come up with a satisfactory answer that I can live with; this is a much more modest goal than finding the holy grail of absolute truth or rightness. Regardless of what I end up deciding, I already know that it won’t give me the axiomatic certainty of 2+2=4. I don’t know of any approach that will allow me to achieve one hundred percent confidence on an issue like this. Of course, the closer I can get to one hundred percent certainty, the happier I would be; at this point, however, I am willing to settle for anything above fifty percent.

What does it mean to have less-than-absolute confidence in a proposition? This degree of confidence will probably make no difference in practical terms. If I am only sixty percent confident that voting for Joe Biden is the right thing to do, I will still act as if I were one hundred percent confident. That is because actions are usually a matter of binary logic: I either vote for Joe Biden or I don’t vote for Joe Biden. I cannot give sixty percent of my vote to Biden and withhold, or give to someone else, the remaining forty percent.

While having less-than-absolute confidence may not make any practical difference, it does make a big difference in how I think about the issue and how I respond to those who disagree with me. In thinking about the issue, a less-than-absolute confidence allows me to (1) consider the respective strengths and weaknesses of different viewpoints and be sensitive to the nuances of each position, (2) continue reflecting on my own viewpoint and position even after I have acted on it, and (3) remain open to new evidence and new arguments that might help me improve my viewpoint, refine my position, or even change my mind entirely. In responding to those who disagree with me, my less-than-absolute confidence will allow me to (1) show genuine respect for viewpoints and positions different from my own, (2) be curious about what other people think and why they think the way they do, and (3) embrace anything I may find in other people’s thinking that may be true or useful or wise, even if the disagreement remains.

Absolute certainty feels good, but it “blocks the road of inquiry,” as Charles Peirce put it. At the opposite end of the spectrum is absolute uncertainty, but that breeds inaction and moral paralysis. It’s only when I am more certain than uncertain that I can act on what I know while still maintaining an open mind and a learning attitude. It’s the best of both worlds!

If you are sure that you possess the holy grail—a definitive, unambiguous answer to the dilemma I am wrestling with—I would say: Congratulations! I won’t try to change your mind about what you believe is the right thing to do. I would, however, advise against putting too much trust in the clarity, obviousness, or finality of your position.

For the feeling of certainty is just that—a feeling. The more certain we feel, the higher is our confidence in relation to a given proposition, and the more likely we are to act in accordance with it. Yet, our feeling of certainty does not tell us a whole lot about the world outside ourselves. The truth or falsehood, the accuracy or inaccuracy, and the rightness or wrongness of a proposition is independent of how we feel about it at any given moment. If you have ever been proven wrong about a belief for which you were once willing to bet your life, or if you have ever changed your mind on a major issue, you may want to recall those experiences in order to appreciate just how misleading a felt sense of certainty can be.

As for me, I am glad I looked up the word “dilemma” in the dictionary, for it does capture how I am experiencing the issue of voting in the 2020 Presidential election. Specifically, my dilemma is made up of no fewer than five horns: (1) Don’t vote at all, (2) Vote for Joe Biden, (3) Vote for Donald Trump, (4) Vote for the Green Party candidate, (5) Write in a name that doesn’t appear on the ballot. These are all viable options, but for the sake of simplicity I would like to reduce the dilemma to its classic, binary form:

Option 1: Vote for Biden.
Option 2: Don’t vote for Biden.

These two horns of my dilemma do appear to be equally undesirable at first sight. My goal in future blog posts will be to figure out which of them is significantly more undesirable than the other.

The Voter’s Dilemma (1)

I voted for Bernie Sanders in the Democratic primaries, but he is no longer in the race. I am now being told that I should vote for Joe Biden in the fall, for if I abstain from voting or if I vote for the Green Party candidate then I would be guilty of supporting Donald Trump and would therefore have to accept part of the responsibility for all the horrible things that he would probably do. But I don’t want to lend my support to Biden either, for many different reasons. This situation poses a dilemma. It is a real dilemma, not a made-up one, and so it deserves some serious attention.


Let’s begin with a fundamental question: What is the purpose of a Presidential election? Here’s a tentative answer: The purpose of a Presidential election is to provide citizens the opportunity to express their opinion as to which particular candidate should hold that office for the next four years.

In the United States, the opinion we express through a Presidential election is not binding, for we the people do not actually elect the President. Rather, we elect the 538 electors who, in turn, make that decision on our behalf. The main reason we have this unusual process is that the folks who made the rules back in the eighteenth century thought that the masses were stupid. They believed we weren’t smart enough to know what was good for the country, and so they thought we might vote on the basis of our emotions and elect the wrong person. To prevent that, they decided the choice should be in the hands of a small group of enlightened individuals—called the Electoral College—that could be trusted to use foresight and wisdom to select the right person.

So, technically speaking, what we the people express on election day every four years is not our collective will that must be implemented. It’s merely an opinion, or a preference for this person over that person. The entire process of electing a President was never meant to give the people any actual role in shaping the government or its policies; rather, it was meant to establish the legitimacy of the political system by getting us to perform the equivalent of signing a consent form.

In reality, we the people are like the toddler who occasionally gets to sit behind the steering wheel of the family car and pretends to drive.

This reality can be seen in the fact that the Presidential election has no necessary connection with people’s desire for a particular domestic or foreign policy. Presidential candidates can and do say all sorts of things when running for office, but as actual Presidents they are in no way bound by anything they’ve said before taking the oath. This means that when we vote for a particular candidate because we agree with their views, plans, and promises, there is absolutely no guarantee that, should this candidate win, those particular views, plans, and promises would actually be enacted. Typically, they aren’t.

A Presidential election is a long and arduous process in which the goal is to win by any means necessary. As any political consultant will tell you, holding on to one’s principles, or trying to maintain consistency between one’s words and actions, is generally a losing strategy. What matters is not who you are but how the voters see you; and how the voters see you can be managed and choreographed. Winning requires getting the support of a wide variety of population blocs, and so it’s imperative to say whatever each bloc wants to hear. If this requires frequently contradicting oneself, then so be it. Deception is a necessary part of political campaigns, just as it is a necessary part of advertising, or magic shows.

Smart candidates speak in a special dialect of English that is meant to entice, attract, fascinate, and arouse, rather than inform or educate. As a result, vagueness has to be an essential ingredient of all such rhetoric so that different groups of people may project their own wishes and dreams to fill up the candidate’s empty words. But even when a candidate expresses a position or makes a promise that is relatively specific, and can therefore be used to hold that candidate accountable, we must not forget that there is no enforceable obligation to actually follow through. Inconsistencies need not be resolved through appropriate actions, for they can be easily covered up through additional rhetoric. Fulfilling one’s campaign promises may be a moral duty, but the Constitution does not recognize it as part of a President’s legal obligations.

This means that in the United States we the people do not possess the right to have our policy preferences implemented. In fact, we don’t even express our policy preferences when we cast our ballots. Voting in a Presidential election amounts to saying “I would like person X to be the President,” and nothing more. What person X does after becoming the President is not up to us, because—remember?—we are not smart enough to know what the country needs.

Most of us haven’t noticed that our Constitution does not give us the right to vote. Voting is not included in the Bill of Rights, which is why state legislatures are free to take a variety of measures to control, restrict, and manipulate our votes. But it’s important to understand why the Framers did not think of voting as an individual right that needed to be guaranteed at the federal level, for it tells us something truly important. It tells us that even our non-binding opinion regarding who should be the President is not all that consequential. The U.S. political system does not need the citizenry to express its preference. If the system were in any way dependent on our votes, it would treat voting as a mandatory civic duty that people can’t easily get out of—just like paying taxes or serving on a jury. Instead, voting is entirely optional, and the system routinely creates hurdles to discourage people from casting their ballots. Of course, if no one votes then the political system will lose all legitimacy, but maintaining legitimacy doesn’t require that everyone votes. Rather, the system remains sufficiently legitimate even when only half the eligible voters participate.

To summarize, the process of Presidential election in the United States is structured in such a way that the following three conclusions can be safely drawn: First, the political system doesn’t need and therefore doesn’t value most people’s votes, which suggests that the government is not meant to be a reflection of what the majority wants. Second, people only vote for a candidate and not for their preferred policies, which means that any impact their votes might have is usually indirect or unintentional, and always minimal. Third, the President has no constitutional obligation to fulfill any promises made during the campaign, which means that the perceived trustworthiness of a candidate is often decisive in the election but has little long-term consequence.

So, what does all this have to do with the voter’s dilemma? To reiterate, the issue I am trying to address is whether or not I should vote for Joe Biden. Before I can say anything meaningful about that decision, I need to have some sense of the purpose of voting. When I consider the purpose of voting in a U.S. Presidential election, I find that the system has been set up in such a way that citizens don’t really have much of an impact on what the government does, regardless of whether, or for whom, they vote. The U.S. is a “weak democracy,” in the sense that its political system was intentionally designed to minimize people’s ability to influence the government, while still requiring that the government draws its legitimacy from the consent of the governed.

The points noted above need to be kept in mind when trying to resolve the voter’s dilemma. As of now, I have not seen any evidence that my vote is needed or valued or will make any difference. Neither of the two major candidates has put forward a convincing argument why someone like me should vote for him. Furthermore, no one is asking me about the policies that I would like to see enacted in this country; the electoral system has no interest in what I think or believe or want. Instead of being asked about my policy preferences, I am being asked to choose between two individuals, neither of whom I know personally. I don’t have any way of getting either of these individuals to take seriously what I and others like me believe or think or want, let alone making him take the appropriate actions as President. And yet, I am expected to vote. Under these circumstances, which are obviously not unique to me, the only thing that my vote is sure to accomplish is to help maintain the legitimacy of the political system. Everything else is a matter of chance, and the odds aren’t favorable.

Since the Presidential election is not designed to find out what my favorite policies are, I am supposed to express those preferences indirectly, i.e., by choosing the candidate who I think is most likely to act in ways that I approve of. And I am supposed to make this decision based solely on what the two main candidates have done in the past and what they say they will do in the future. Based on what they have done in the past, I am absolutely sure that I don’t want either of them to become President. As for what they say they will do, I disagree with most of their views, plans, and promises; and when I do agree, I find both gentlemen to be unworthy of my trust. Trump obviously has a long history of lying, but Biden too has a similar (though shorter) record of willful deception.

Given that a U.S. President is not bound by anything said or promised during the campaign, I have to be extra careful when deciding to trust that a candidate would actually do what they say they would do. While neither of the two main candidates inspires confidence, there does exist a critical difference. Trump’s lies are petty and self-serving. Whenever he speaks, I know that he is probably lying; as a result, I have never been deceived by his words. Biden, on the other hand, does not lie in the same egregious or shameless manner as Trump; as a result, Biden’s lies are likely to be a lot more consequential as well as a lot more convincing. This makes Biden more dangerous than Trump. Furthermore, Biden claims a high moral ground and talks about restoring honesty, civility, and kindness to the office of the President. By suggesting that he is morally superior to the current occupant of the White House, Biden is essentially asking to be judged at a higher standard than the one we use for Trump. But when both candidates are evaluated according to their own standards of morality, the gap between them all but disappears.

None of this proves that the two main candidates are exactly the same, or that voting for one is just like voting for the other, or that it doesn’t matter whether I vote or not. There is a lot more to consider before I’ll be able to resolve this dilemma—at least for myself.

The Pastor’s Dilemma

In the wake of this year’s MLK Day, a colleague shared with me a short sermon that was recently delivered in a church and asked what I thought about it. As I read the sermon, I realized that I had rather strong opinions about the ideas it expressed, and that I would very much like to share those opinions, not only with the colleague who asked for them but with anyone who might be interested in the topic. I also realized it wasn’t going to be enough to just say “I don’t like it,” but that I would also have to back up that judgment with some arguments and evidence. Hence this lengthy post.

The Pastor begins his sermon as follows:

Back in the 1960s, Dr. Martin Luther King, Jr. and Malcolm X were two very different leaders of the civil rights movement.

King was an African American Baptist pastor. Who used nonviolent strategies to try to change religious, social and political systems from within.

Malcolm X was a Black Muslim revolutionary. Who believed in extreme tactics. Who rejected the mainstream movement championed by King.

A little later, the Pastor has this to say:

The stark differences in leadership style are not that unusual in social or religious movements. Any time you are trying to change something, there are advocates who want to quietly work with the current power structure.

But those frustrated with the status quo often want things to change quickly and dramatically. King and Malcolm X are extreme opposites in the way they acted as change agents of their time.

Toward the end of his sermon, the Pastor offers a prayer in which, among other things, he explicitly identifies King as the model for his preferred leadership style.

Based on the above quotations, I understand the Pastor as conveying the following ideas:

  1. King and Malcolm were polar opposites in terms of their leadership style. This is clear from the phrases “two very different leaders,” “stark differences in leadership style,” and “extreme opposites in the way they acted.”
  2. Given that King and Malcolm were polar opposites, if King “used nonviolent strategies,” it follows that Malcolm must have favored violent strategies. The wording that Malcolm “believed in extreme tactics” reinforces the idea that he believed in violent tactics.
  3. Given that King and Malcolm were polar opposites, if King tried “to change religious, social and political systems from within,” it follows that Malcolm must have tried to bring about the desired change from outside these systems.
  4. It was probably because Malcolm wanted “things to change quickly and dramatically” that he became a “revolutionary,” chose “extreme tactics,” and worked from outside the systems he wanted to change.
  5. King was an “African American Baptist pastor” who used “nonviolent strategies,” while Malcolm was a “Black Muslim” who favored “extreme tactics.” Draw your own conclusions.
  6. As “change agents,” leaders tend to fall into two categories: (1) those “who want to quietly work with the current power structure,” and (2) those who feel so “frustrated with the status quo” that they “want things to change quickly and dramatically.” King belonged to the former category and Malcolm to the latter.
  7. We should emulate King, not Malcolm. People should not become “so frustrated with the status quo” as to take a “revolutionary” approach, use “extreme tactics,” and try to change the systems “quickly and dramatically.” What people ought to do is use “nonviolent strategies,” work “quietly” in collaboration “with the current power structure,” and always “from within” the systems they want to change.

What follows is my response to the Pastor’s thoughts and suggestions.

First of all, I wonder where the Pastor is getting his information about King and Malcolm. How familiar is he with the writings and speeches of these two men? Has he read any of the numerous biographies of King and Malcolm? Has he done any research into the African American struggle for civil rights? Has he studied social movements and leadership styles? How informed is he about race and racism in American history? The reason I ask these questions is because the very idea of King and Malcolm being polar opposites is based on outdated stereotypes and has been conclusively debunked. Yet, it keeps coming back, like a zombie. And the reason that this myth refuses to die is because it serves a useful function—it helps justify racism.

When Discipleship is Too Hard

The King/Malcolm binary is not just historically incorrect; it is also part of a problematic narrative that is favored by certain sections of the White population—specifically, by the type of liberals that King referred to as “White moderates.” These were people who opposed racism in theory but believed that King’s nonviolent movement to end racism was too extreme. This way of thinking is alive and well today, and it is often justified by a fictional narrative about King and Malcolm being mutually incompatible figures.

King and Malcolm

Consider the fact that during his lifetime King was a controversial figure who was despised by a majority of White Americans. At the time of his death, King had an approval rating of only 25%. It was only in subsequent decades that King was gradually appropriated by mainstream American culture, which turned him into a larger-than-life hero, an iconic representation of liberal or “American” values, and the very symbol of polite respectability. In the process, King’s sharp critiques of American capitalism and imperialism were forgotten; his disillusionment with White liberals and the political establishment was buried away in an unmarked grave; and his radical agenda for justice and liberation became practically unmentionable. Today, the King of popular imagination is a great but harmless figure who cannot offend anyone, for his ideas and aspirations have been so thoroughly erased from our collective mind that they don’t pose any serious threat to the status quo. This is part of the secret of King’s posthumous popularity. Everyone—especially those who have the most to lose if King’s wishes were to come true—can now “celebrate” his life. In effect, the mainstream culture has essentially declawed and defanged much of King’s legacy.

I don’t want to digress, but it’s probably worth considering whether the memory and legacy of Jesus of Nazareth has suffered a similar tragedy. Worship, literal or metaphorical, is a powerful strategy that allows us to continue identifying with someone after we’ve decided that discipleship is too hard.

One particularly egregious way in which King has been domesticated is through a spurious comparison between him and Malcolm. In this story, Malcolm is imagined as a wild and dangerous revolutionary precisely so that this scary bogeyman would make King look gentle and docile in comparison—a safe alternative to the unhinged Malcolm. Among other things, this narrative allows King’s aggressive nonviolence to be re-branded as civility, patience, and even passivity in the face of oppression. In effect, Malcolm is demonized and King is deified—but not for anything that either of these two leaders actually stood for.

Every time I come across this narrative of “Bad Malcolm” vs. “Good King,” I know exactly what will come next—what inevitably follows in the wake of this comparison is unsolicited advice for how the oppressed are supposed to behave: Don’t be like the crazy Malcolm. Be mild and mellow. Don’t rock the boat. Now is not the right time. Don’t be in such a hurry. Change will come if you ask for it nicely, so try to be patient. Work to change the system quietly, in small increments, and from within. Be like [the safe version of] King.

This may sound like sincere advice—until we realize that it’s the same advice that was repeatedly given to King himself, who rejected it outright, and until we learn that King and Malcolm were nowhere near as different as this narrative would have us believe.

Violence vs. Nonviolence

Let’s begin with the heart of the purported contrast between King and Malcolm, i.e., the issue of violence. The notion that Malcolm supported violence while King advocated nonviolence is based on a grossly oversimplified view of the civil rights movement and a lack of understanding of what these two men were trying to accomplish.

Below are some of the most important points to keep in mind:

First, the suggestion that Malcolm “believed in extreme tactics,” with the implication that he advocated violence, is not only preposterous; it’s also offensive, especially when uttered by a White person. When it comes to Blacks and Whites in the United States, it is no mystery as to which side has perpetrated violence and which side has suffered from it. Anyone even slightly informed about the past and present of racial violence in this country should be able to see just how ignorant and insulting it is to accuse Malcolm of believing in violence when his entire struggle was aimed at ending the centuries-old violence against his people.

Second, Malcolm never suggested that Blacks should initiate violence against the White population. Instead, he merely told the victims of racist violence to do whatever was necessary to protect themselves. He promoted self-defense. From an ethical viewpoint, the violence perpetrated by the oppressors and the powerful does not belong in the same moral category as the violence that the oppressed and the powerless may commit to defend themselves. Malcolm had seen White brutality against African Americans all through his life, starting with the murder of his own father. He refused to accept that being victimized in this way was a normal part of being Black in America. He therefore urged his Black audience to stop being doormats, to not let White people walk all over them. He taught them to develop the courage to stand up against bullies, to recognize their self-worth even when others don’t, and to defend themselves against racist violence by any means necessary—because their bodies and their lives were worth defending.

Malcolm himself did not carry a weapon and he was not known for acting violently toward anyone. By advocating for self-defense, Malcolm was trying to establish the full humanity of African Americans. The gist of his argument was as follows: If White people have the right to defend themselves, and if Black people are equal to them as human beings and as citizens, then they must also be able to enjoy that same right. Here is Malcolm making this exact point in his 1963 speech, “Message to the Grassroots.”

If violence is wrong in America, violence is wrong abroad. If it’s wrong to be violent defending black women and black children and black babies and black men, then it’s wrong for America to draft us and make us violent abroad in defense of her. And if it is right for America to draft us, and teach us how to be violent in defense of her, then it is right for you and me to do whatever is necessary to defend our own people right here in this country.

If a White man doesn’t approve of “extreme tactics” when they are used by African Americans to fight racism, then what I would like to know is how often he condemns our nation’s military adventures around the globe.

The point, of course, is that many people who insist that oppression must only be fought through nonviolent means tend not to criticize the violence perpetrated by the oppressors, especially by the groups and institutions they identify with. Such individuals would invoke King’s name to argue that any demonstration against police brutality must be peaceful, but the trigger for such demonstrations—i.e., police brutality itself—doesn’t seem to offend them as much as protesters blocking traffic or burning tires. This puts a huge question mark over their commitment to nonviolence.

Third, the Pastor approves of the fact that King used “nonviolent strategies” but does not approve of Malcolm’s support for “extreme tactics” (i.e., violence); this may seem logically consistent, but dig a little deeper and you’ll find a glaring contradiction. That’s because King’s advocacy for nonviolence and Malcolm’s encouragement of self-defense were closely located on the same strategic and ethical spectrum. King’s own source in this matter, i.e., Mahatma Gandhi, did not reject all violence in absolute terms; in fact, he thought there were worse things than violence.

According to Gandhi, the best option in the face of oppression is nonviolent resistance. However, he also believed that violence was preferable if the only other option was to surrender, feel humiliated, or lose respect for oneself: “I do believe that where there is only a choice between cowardice and violence, I would advise violence.” Elsewhere, he said that it is best to “cultivate the cool courage to die without killing,” but if one cannot muster that sort of courage, then “I want him to cultivate the art of killing and being killed rather than, in a cowardly manner, flee from danger.”

On self-defense, Gandhi had this to say: “I have been repeating over and over again that he who cannot protect himself or his nearest and dearest or their honor by nonviolently facing death may and ought to do so by violently dealing with the oppressor.” Furthermore, “Though violence is not lawful, when it is offered in self-defense or for the defense of the defenseless, it is an act of bravery far better than cowardly submission.” Unlike the Pastor, Gandhi didn’t think that he had the right to tell the oppressed how they should fight oppression: “Under violence, there are many stages and varieties of bravery. Every man must judge this for himself. No other person can or has the right.”

I am sure that Gandhi would strongly favor King’s approach, but I am equally certain that he would also appreciate Malcolm’s perspective as a courageous and honorable response to oppression.

Fourth, people in our country have the right to bear arms. In an ideal world, weapons wouldn’t exist because no one would need them. But so long as they do exist, society is going to need some mechanism to prevent one group from dominating another simply by virtue of being better armed. There is still a double standard in the United States around gun ownership: A White man holding an assault rifle is a patriot but a Black kid holding a toy gun is an imminent threat. Yet, the U.S. Constitution allows every American citizen to possess weapons, including guns, and to use such weapons if it becomes necessary for defending oneself, one’s family, or one’s property. In fact, one reason why someone might refrain from initiating violence is the fear of retaliation. White folks who perpetrated violence against Blacks in American cities were able to do so precisely because of the confidence that comes from being heavily armed; they didn’t fear retaliation from their victims and they knew their racial privilege protected them from any meaningful prosecution. In that situation, when Malcolm would urge African Americans to purchase guns, he was neither advocating aggression against Whites nor a revolution against the government; he was simply telling the brutalized to do what they had every right to do as American citizens—to legally own weapons. The purpose of encouraging gun ownership was not to harm White people but to discourage them from harming Black people. This isn’t what the Pastor might call “extreme tactics.” It’s deterrence.

Moreover, it is absurd to even suggest that Malcolm would have wanted his people—a racial minority in the United States—to pick up arms against the White majority, a significant portion of which had a long history of getting away with brutal racist violence, including lynchings. Does anyone really think that Malcolm was so dense as to not understand that it was his people who would have suffered annihilation had they started a race war in America?

Fifth, the common belief that African Americans achieved civil rights (to the extent that they did) through entirely nonviolent means isn’t correct either. In the South, it was common for Blacks to own guns. They had to be willing to shoot in case the KKK were to visit their homes or farms. Black ownership of weapons and their determination to use them if necessary played a major role in the civil rights movement. Among other things, it allowed King to make the argument that the White establishment ought to listen to his movement, for otherwise they might have to face possible uprisings from desperate African Americans who had lost all hope and had nothing left to lose. The Pastor may want to consult the work of Charles E. Cobb on this topic.

The history of the civil rights movement is complex and multifaceted. Reducing its many dimensions and nuances to a single strategy or a single leader is an act of carelessness that shows a lack of genuine interest in the topic.

The King/Malcolm Binary

In his sermon, the Pastor claims that Malcolm was a “revolutionary” while implying that King, being the polar opposite, was not. This is not only historically inaccurate; it also reveals a lack of understanding of what it means to be a “revolutionary.” In fact, both Malcolm and King were revolutionaries in their own ways. It was precisely because of their revolutionary tendencies that both Malcolm and King were attracted to certain specific aspects of their respective faith traditions and were not interested in other aspects. Indeed, King and Malcolm weren’t “religious” in some generic sense of the word; they were religious in particularly radical and revolutionary ways that matched their personal and social contexts.

King was a Christian, but he didn’t agree with many strands of Christianity, such as those that supported slavery, segregation, and warmongering, which he openly criticized; instead, he identified with some of the most radical and liberationist elements in the Christian tradition. In fact, he was more than willing to claim such elements as his own regardless of which religious or secular tradition they came from.

The connection between King’s Christian faith and his use of “nonviolent strategies” is not as simple or straightforward as the Pastor might have led his congregation to assume. King was initially attracted to the writings of Reinhold Niebuhr, whose theology of “Christian Realism” taught that political violence was ethically justifiable and that Christians should not be pacifists. As he acknowledged in “Pilgrimage to Nonviolence” (1960), King as a young seminary student “had almost despaired of the power of love in solving social problems,” and thought that the whole idea of loving one’s enemies was only relevant for interpersonal conflicts while racial or national conflicts required “a more realistic approach.” While he did not believe that war could be a “positive or absolute good,” he thought it could be “a negative good in the sense of preventing the spread and growth of an evil force.” It was only after King attended a talk in 1950 by Dr. Mordecai Johnson, President of Howard University, on the life and teachings of Mahatma Gandhi, that his mind started to change. He eventually met many proponents of nonviolence during his doctoral studies at Boston University, and that’s when he fully outgrew Niebuhr’s influence.

It was after King had discovered nonviolence by way of Gandhi that he began to appreciate it in relation to Jesus’ life and teachings: “I came to see for the first time that the Christian doctrine of love operating through the Gandhian method of nonviolence was one of the most potent weapons available to oppressed people in their struggle for freedom.” King didn’t know that “love” could be a revolutionary force until he encountered Gandhi. It was for this reason that King, in a 1959 sermon, declared Gandhi to be “the greatest Christian of the twentieth century.”

In contrast, Malcolm’s lived experience had taught him that religious faith wouldn’t restrain the racist tendencies of American Christians; he knew that Christianity had been complicit in White supremacy and Black oppression, and he had no reason to believe that this tradition could offer anything positive to his people. In fact, Malcolm initially had no interest in religion. In prison, he couldn’t bring himself to pray because it required him to be humble, and his life experience up to that point had shown him that humility would only make you a target for bullies. He did learn to pray, however, and joined the Black religious movement called the “Nation of Islam” while still in prison. Malcolm joined the Nation of Islam because it offered him a powerful opportunity for meeting his needs and serving his values—he found meaning, purpose, acceptance, belonging, confidence, dignity, self-respect, and direction. The teachings of Elijah Muhammad appealed to Malcolm’s revolutionary tendencies and his passion for justice, for the leader of the Nation of Islam was willing to openly condemn White supremacy while showing the path towards Black dignity and Black liberation.

It is interesting to note that the sermon quoted above doesn’t give any details about Malcolm’s religion; it simply says that he was a “Black Muslim.” This is problematic, even for a brief sermon, since the phrase is ambiguous and can be easily misinterpreted by an uninformed audience. The author of the sermon doesn’t clarify to his audience that the phrase “Black Muslim” can either mean “an African American person who is an adherent of Islam” (where “Islam” refers to the global religion founded in the 7th century) or it can mean “a member of the Nation of Islam” (where “Nation of Islam” refers to an African American sectarian movement established in 1930 and led by Elijah Muhammad and later by Louis Farrakhan). Nor does the Pastor explain that the overlap between these two meanings of “Black Muslim” is somewhere between minimal and non-existent. This convenient omission is likely to activate some of the stereotypical fears that White people often associate with the words “Black” and “Muslim.” It’s a subtle effect, and it can happen instantly as well as unconsciously. But not recognizing this possibility and not doing anything to prevent it still count as irresponsible negligence.

What complicates the matter is that the term “Black Muslim” applies to Malcolm in both senses of the phrase, but not simultaneously. Malcolm joined the Nation of Islam around 1948 and announced his break from this movement in March 1964. Before the break, Malcolm was a “Black Muslim” in the latter sense of the phrase; after the break, he became a “Black Muslim” in the former, and more commonly understood, sense of the phrase. From that moment until his death in February 1965, Malcolm was busy “reinventing” himself, as Manning Marable would say, renouncing many of his previous beliefs and charting a new direction for his life and public career. This process of reinvention included his conversion to mainstream Islam, as epitomized by his participation in the annual Muslim pilgrimage, or the Hajj. All of these developments, as well as the nuances in the evolution of Malcolm’s thinking, seem to be irrelevant to the author of the sermon who seems to suggest, instead, that all we need to know about Malcolm is that he was a “Black Muslim” who believed in “extreme tactics.”

The Pastor also believes that King worked from “within” the systems he wanted to change, and that this was a better option than what Malcolm chose. That astonishing claim fits right into the “Bad Malcolm” vs. “Good King” narrative. The reality is that King was willing to take any route that would help his cause; he worked with Presidents Kennedy and Johnson to get civil rights legislation passed and to help implement Supreme Court rulings, but that doesn’t mean that he was somehow against working from outside the system. In fact, King is best known for his leadership in mobilizing African Americans in order to force the status quo to change at a time when it was particularly stubborn in its support for racism. Most of King’s strategizing was around marches, boycotts, protests, civil disobedience, and direct actions. Knowing this should be enough for anyone to recognize that much of King’s movement was about putting popular pressure on deeply entrenched systems from the outside. King would never have used such tactics if he were committed to working “quietly” and incrementally and entirely from “within” the established order.

The Pastor suggests that working in collaboration with “the current power structure” is the best option for “change agents,” and that this is exactly what King did. In reality, working with “the current power structure” is only one strategy among many others; depending on a variety of factors, it may or may not be the best option. That’s why King used this strategy on some occasions but not on other occasions. For King, working with “the current power structure” was only a means and never an end in itself. Consider the fact that taking a strong stand against the Vietnam war meant that King had to sacrifice his relationship with President Johnson. King took that stand because he had a greater commitment to his conscience than to collaborating with the government. King knew that collaborating with “the current power structure” can sometimes help solve a problem, while at other times such collaboration is itself part of the problem.

It is also worth considering that King’s activities frequently involved breaking laws, which is why he was arrested almost 30 times in the span of ten years. The FBI was constantly running a surveillance operation against King. According to the FBI’s official assessment, King was “the most dangerous Negro leader of the future.” King received a constant barrage of death threats throughout his public career, and eventually died of an assassin’s bullet. None of this is compatible with the proposition that King wanted to change things by collaborating with “the current power structure” and by “quietly” working from “within” the existing systems.

We shouldn’t forget that the only avenues for change from “within” that the political status quo offered at the time involved either the ballot box or the courts; the former was effectively blocked by systemic disenfranchisement, while the latter was too slow and inefficient. Of course, King wasn’t opposed to using these avenues—and he did use them whenever he could—but he also knew that entrenched systems did not alter their course until they absolutely had to, and that this often required pressure from the outside. It is worth recalling that King wrote his “Letter from Birmingham Jail” in response to the clergymen who were insisting that racial segregation should be fought only in the courts—that is, from “within” the system—and not in the streets.

The Pastor’s suggestion that King’s leadership style involved working “quietly” is not a compliment; it actually diminishes his stature. King had been accused of many things, including that he was a “demagogue” and a “communist,” but no one in his lifetime ever accused him of working “quietly.” Try to imagine how far we have come from the historical King—a man who was viewed by the establishment as a radical trouble-maker and a threat to national security—all the way down to this uninspiring portrait of a tame and innocuous individual who wouldn’t even raise his voice.

The idea that Malcolm wanted things to change “quickly and dramatically” whereas King was content with making slow and incremental progress in collaboration with the political establishment is total nonsense. Yet, this fake praise of King can be fixed in only a few words: Just ponder the title of King’s 1964 book, Why We Can’t Wait, and then contemplate this powerful passage from King’s 1967 sermon “Beyond Vietnam.”

We are now faced with the fact that tomorrow is today. We are confronted with the fierce urgency of now. In this unfolding conundrum of life and history there is such a thing as being too late. Procrastination is still the thief of time. Life often leaves us standing bare, naked and dejected with a lost opportunity. The “tide in the affairs of men” does not remain at the flood; it ebbs. We may cry out desperately for time to pause in her passage, but time is deaf to every plea and rushes on. Over the bleached bones and jumbled residue of numerous civilizations are written the pathetic words: “Too late.”

The element of urgency and impatience in King’s rhetoric is no less than what we find in Malcolm’s speeches.

The Pastor describes King’s movement as “mainstream,” implying that Malcolm and his followers were a fringe element. Not true. Outside the southern states, Malcolm had immense influence. Even today, Malcolm continues to attract both scholarly and popular attention. As evidence, consider the number of books and articles that have been published about Malcolm in just the last twenty years.

To say that Malcolm was “frustrated with the status quo” implies that King wasn’t. This claim is intended to elevate King over Malcolm by suggesting that the former could manage his emotions better than the latter. The implicit assumption, however, is that the main problem was not so much that American racism was awful and so dealing with it would have been immensely frustrating for anyone, but that Malcolm was being too sensitive. The implication that King never became “frustrated with the status quo” is obviously unfounded, for he did receive plenty of harassment, opposition, and animosity throughout his public career, and, as a result, experienced strong feelings of anger, disappointment, and righteous indignation. But it takes a privileged White man to accomplish the virtually impossible task of failing to imagine what any Black person fighting racism in America must feel every day.

The Sermon’s Approach

Having addressed some of the specific issues, I would now like to identify three major problems that characterize the approach behind the sermon, any one of which would’ve been fatal on its own.

First, it appears that the Pastor is not familiar with what nonviolence means. He seems to believe that nonviolence is about (1) not acting in violent ways, (2) being calm and agreeable, (3) not engaging in conflict, (4) being patient and willing to compromise, (5) being content with slow and incremental change, and (6) never getting frustrated with the status quo. He would be surprised to learn that other than the first one (“not acting in violent ways”), none of these features has any connection to nonviolence as understood and practiced by King. Contrary to the Pastor’s naive assumption, nonviolence is not synonymous with “absence of violence.” This error is so basic that it damages the credibility of the entire sermon.

The second major problem I see is that the sermon’s portrayals of King and Malcolm appear to be based on rather superficial impressions that were probably formed at a particular point in the distant past and were never expanded, corrected, or updated.

Basically, there are two main ways of thinking about a person—we can either take a synchronic approach or a diachronic approach. In the former case, we look at how a certain individual was at a particular moment in time, without considering the past or the future. In the latter case, we consider an individual as a dynamic being, and try to understand how this person grows, evolves, and transforms over time. The synchronic approach is analogous to taking a snapshot of a river; the river is in constant motion, but the camera freezes it into a two-dimensional static image. It may be a high-resolution image, but it is devoid of the temporal element and therefore incapable of conveying any sense of change over time. Of course, the same river can also be studied using the diachronic approach, but that requires a lot more work and a much greater commitment to the subject matter than just pushing a button.

The Pastor’s approach to King and Malcolm is entirely synchronic. It takes two highly complex and dynamic human beings with freedom and agency, seeks to capture their essence at a single moment in time, and then attempts to convey that essence in a few short sentences. No wonder this approach produced two flat, static, and oversimplified images that bear little, if any, resemblance to the persons in question.

If we compare the two leaders as they might have appeared to most White Americans in the early 1960s, we would notice that Malcolm was angrily denouncing the nightmare of being Black in America while King had high hopes that the nation would fully embrace its Black citizens. Recall that King’s most optimistic speech—“I have a dream”—was delivered during the March on Washington in 1963. Malcolm wasn’t impressed, for he could see no point in trying to integrate Black people into what he saw as an incorrigibly racist White society. If the story of Malcolm and King had ended that year—and if there were no resources available other than what was shown on television at that time—a relatively uninformed White person would have readily assumed that Malcolm and King were totally different. However, the story did not end at that point and significant changes in the lives of both leaders continued to take place in the following years.

I don’t dispute the fact that Malcolm and King were very different individuals. What the Pastor gets wrong is not that they were different but that they were opposites. In fact, I think the biographical factors that account for some of the differences between the two men are worth knowing. These include social class, family, education, life experiences, attitudes toward Whites, and approaches to the problem of race in America. Much of the following account is based on James Cone’s 1991 book, Malcolm & Martin & America: Dream or Nightmare. This background is necessary for understanding how these two individuals evolved in the last years of their respective lives.

Let’s start from the beginning. King was raised in a relatively affluent home. His father was an influential and well-known pastor who personally knew many powerful individuals. King belonged to the southern, upwardly mobile Black professional class, and had therefore absorbed from his immediate surroundings a sense of optimism about racial progress.

In contrast, Malcolm came from a poor working class family. After his father was murdered by the KKK when he was still a child, Malcolm spent several years being tossed from one foster home to another with various White foster parents. Unlike King, Malcolm did not grow up with older Black individuals around him who could serve as positive role models. Malcolm’s very skin tone (“red”) was a constant reminder to him of White brutality, as his maternal grandmother was raped by a White man. In contrast, King was “black,” which means his immediate ancestors hadn’t experienced White sexual violence.

Malcolm faced intense racial discrimination from his teachers, and ended up dropping out of school in the eighth grade. He eventually taught himself to read and write all over again while he was in prison, and went on to educate himself on every conceivable topic by reading books from the prison library. In contrast, King’s mother was a school teacher who taught him to read before he entered school. King did not encounter racial discrimination in the course of his schooling, for this was the era of segregation; he attended two public schools in Atlanta, GA, before going to Morehouse College. King had access to enough family support and cultural capital to be able to skip grades twice, which is how he ended up in college at age 15 without formally graduating from high school, and eventually earned a PhD at the age of 26.

After dropping out of school, Malcolm had to learn how to survive on the streets. He was a thief and a hustler, until he ended up in prison. Malcolm eventually transformed himself into an orator, a preacher, and a leader through self-discipline and determined effort. In contrast, King got his first job as a pastor at age 25 when he was still finishing his dissertation; he never had to struggle financially. By becoming a pastor, King was following in his father’s footsteps and in some ways he was simply continuing the family business. Malcolm had to win his sense of self-worth through a hard struggle against internalized racism; in contrast, even as a child King never doubted that he was a worthy individual despite being Black.

Malcolm had no reason to believe that African Americans would ever be fully accepted in White society, for he hadn’t had many positive experiences involving White people. King too had acquired strong anti-White sentiments while growing up, which he was only able to overcome through his interaction with White students who were involved in interracial organizations at Crozer Theological Seminary, which was the first integrated school that King attended. Towards the end of his life, however, King was rapidly losing his initial faith in the ability of White people to overcome their racist tendencies.

Integration vs. Independence

Ultimately, however, Malcolm and King had different leadership styles because they were attracted to two distinct Black traditions. These traditions represent two different answers to the question that African Americans have repeatedly asked themselves: How do we find justice? One answer has been “integration,” the other “independence.”

According to the first tradition, African Americans can establish positive relations with White people and win equality and dignity by appealing to the common American values of freedom and democracy. According to the second tradition, African Americans must practice unity and solidarity within their own communities and learn to stand on their own feet, without being dependent on White society. The first tradition is optimistic about integration and equality, whereas the second comes out of a collective sense of despair. The first tradition is associated with figures like Frederick Douglass, and that’s the tradition to which King subscribed. This hope for integration is based on the assumption that the American commitment to the idea that “all men are created equal” is sincere and genuine, and that the problem has only been in its faulty implementation. In contrast, the desire for independence and self-sufficiency is associated with figures like Marcus Garvey, though it can be traced all the way back to the earliest slave revolts, and that’s the tradition that made most sense to Malcolm. This tradition is based on a hermeneutic of suspicion; it asserts that African Americans shouldn’t gamble their lives and future on the unproven and risky idea that White folks actually mean it when they say that “all men are created equal.”

Of course, it would be too simplistic to say that there are just two traditions. Rather, there is a spectrum of viewpoints between integration and independence. The relevant point here is that while King remained loyal to the integrationist approach, by the end of his life he had started to recognize that a certain level of despair was justified, and that the Black tradition that insisted on achieving independence without any help from White society wasn’t completely misguided. Similarly, Malcolm remained a staunch supporter and advocate of Black independence, sometimes referred to as “nationalism,” but in the final year of his life he had started to appreciate the value of the integrationist tradition.

In 1964, Malcolm left the Nation of Islam and joined the mainstream Muslim community. At that time, Malcolm announced that he wanted to work with other civil rights leaders, something he hadn’t done until that point because Elijah Muhammad had prohibited such cooperation; but now he was free to pursue his own instincts. In the last year of his life, Malcolm became increasingly global in his outlook and critical of capitalist exploitation. He started to recognize the necessity of building solidarity among all oppressed people. He also began to notice that the Black/White dynamic was part of the larger, and more salient, Oppressor/Oppressed dynamic.

Malcolm, of course, was assassinated in February 1965, while King lived for another three years—until he too was murdered in April 1968. These three years are immensely relevant for our story, for it was during this time that King began to appreciate what Malcolm had been saying all along. King grew increasingly frustrated as White racism started to wake up to the challenge posed by the civil rights movement and quickly succeeded in reducing the pace of racial progress. As King tried to take his movement into the northern states, he was increasingly met with sophisticated forms of White opposition that proved harder to defeat than the overt racism of the South. With the passage of time, King became more and more pessimistic about the ability of White people to give up their power and privilege for the sake of a moral imperative. Like Malcolm, King also expanded his vision, became increasingly global in his outlook, and started to talk openly about the intersection of racism, capitalism, and militarism. Today, the mainstream culture tends to be exclusively focused on King’s most optimistic words, especially his “I have a dream” speech, but if King had lived just a few more days, the world would have heard him deliver a sermon with the provocative title “Why America May Go to Hell.”

The conclusion should be obvious: During the final years of his life Malcolm had shown significant progress in moving closer to King’s viewpoint, whereas King, during the last years of his life, had moved a great deal toward Malcolm’s perspective. On the days of their respective deaths, therefore, it is fair to say that Malcolm and King had very similar ideas, hopes, fears, and concerns. I like to imagine that if they hadn’t been assassinated during the prime of their lives, by the early 1970s Malcolm and King would have succeeded in bringing their followers together into a single, world-wide movement for human liberation. I believe this is a reasonable conjecture based on how their respective viewpoints were evolving in the final years of their lives.

The Pastor’s Dilemma

The third major problem in the sermon’s approach, and perhaps the most fundamental one, has to do with its reliance on bad logic. The Pastor seems to believe that difference is synonymous with opposition, but that is incorrect. It is possible for two propositions, A and B, to be true at the same time, and if that’s how we choose to think of the leadership styles of Malcolm and King then we can appreciate how both were perfectly valid despite their differences. The alternative is to imagine two propositions, A and not-A, which by definition cannot both be true at the same time, for the truth of one logically requires the falsity of the other. As the wording of the sermon demonstrates, the Pastor is not approaching the two leadership styles in question as merely different from each other; he is thinking of them as opposites and therefore mutually exclusive.

According to this reasoning, if King’s leadership style was valid then Malcolm’s must be invalid, and if Malcolm’s leadership style was valid then King’s must be invalid. By assuming that difference in this case is identical with incompatibility, the Pastor is making a fundamental error. As a result, if he affirms something for King, he must negate it for Malcolm; if he affirms something for Malcolm, he must negate it for King. Trapped by faulty logic, the Pastor cannot bring himself to see the many overlaps, interconnections, and similarities between the two men and their leadership styles; it also makes him uninterested in how these leaders had started to move toward each other’s viewpoints. The Pastor then expresses an unequivocal preference for King, over and against Malcolm, expecting that everyone else will agree with him. Yet, he is actually putting his congregation in a dilemma by creating an unnecessary choice.

It is indeed true that Malcolm and King had different leadership styles. These two men came from different backgrounds, had different temperaments, and experienced different forms of racism. As a result, they were attracted to two different Black traditions: King sought integration while Malcolm pursued independence. Yet, none of this means that Malcolm and King were opposites.

While they only met once in person, Malcolm and King were in constant dialogue. They kept each other sharp and honest. Malcolm was strong in the areas where King was weak, and King was strong in the areas where Malcolm was weak. While neither of them would have admitted it, Malcolm and King were dependent on each other. Their viewpoints were not mutually exclusive; they were complementary. America needed both men, and still does. There is no need to pick one and reject the other.

On White Privilege

The dilemma created by the Pastor is not only unnecessary; it is also based on historically inaccurate images—caricatures, really—of the two men, which makes the comparison false and the choice pointless. This means that when the Pastor picks King, he is only picking what he thinks King represented; and when he rejects Malcolm, the Pastor is simply rejecting what he believes Malcolm represented. In the sermon, King and Malcolm are like two blank screens on which the Pastor projects his own likes and dislikes, respectively. Given how uninformed the Pastor is regarding King and Malcolm, we end up learning more about him than about the two towering figures.

Yet, there is value in studying this sermon that goes far beyond its particular author. We can study the sermon for what it unintentionally reveals about how racism adapts and reproduces itself in American society.

The sermon can also give us a glimpse of how White racial privilege works. For instance, I can’t imagine myself getting up on stage, let alone standing behind a pulpit, and speaking confidently on a topic about which I know next to nothing. I can’t imagine doing this because I have a healthy fear of being challenged and contradicted in public for saying something dumb, insensitive, or ignorant. The Pastor who delivered the sermon on King and Malcolm had no such fear. What could have been the source of his otherwise unreasonable confidence except the privilege that comes from being White and Christian in a society that views these characteristics as normative?

The ninth principle of the concentration of wealth and power deals with one of Chomsky’s abiding themes, i.e., the mechanisms through which thought control—or the manufacturing of consent—takes place in a liberal democracy.


Chomsky begins by referring to the origins of the public relations and advertising industries at the turn of the twentieth century:

The public relations industry, the advertising industry, which is dedicated to creating consumers, it’s a phenomenon that developed in the freest countries, in Britain and the United States, and the reason is pretty clear. It became clear by, say, a century ago, that it was not going to be so easy to control the population by force. Too much freedom had been won. Labor organizing, parliamentary labor parties in many countries, women starting to get the franchise, and so on. So, you had to have other means of controlling people. And it was understood and expressed that you have to control them by control of beliefs and attitudes.

Here, Chomsky is answering a critical question: How does the ruling class maintain its control over the population despite the growing consciousness of civil rights and democratic freedoms? During most of civilized history, elite control of the population was maintained largely through the threat and use of organized violence, and to a lesser extent through religious legitimation of the status quo. Every now and then popular rebellions did occur, but they tended to be ruthlessly crushed by the rulers. From the rise of cities to the beginning of the industrial age, violent force remained the main instrument employed by the ruling classes for keeping the masses obedient and for discouraging any fantasies of rebellion against the established order. This dynamic started to change, however, in the wake of the Glorious Revolution (1688) and the French Revolution (1789). As the ideals of human equality and popular sovereignty started to gain wide acceptance, it became increasingly difficult for the elite to rule through force alone. To the extent that violence or the threat of violence was no longer effective in controlling the masses, it became imperative for the elite to solicit and gain the support and consent of the masses—or risk losing their legitimacy.

In the United States, a number of key freedoms and rights were won during the Progressive Era—developments that were correctly seen by the elites as threats to their economic and political interests. As a result, a need arose at the beginning of the twentieth century for managing the perceptions of the population in increasingly subtle and sophisticated ways. The public relations and advertising industries came into being at that time precisely to meet that need, and just in time for President Woodrow Wilson to use them for his own project: gaining public support for American entry into the First World War. The aim of these new industries and associated professions was to apply the latest scientific discoveries concerning human motivation and behavior in the service of the rich and powerful; this was to be achieved by using the print media and other forms of mass communication to shape popular beliefs and attitudes.

Chomsky has previously discussed the political aspects of this phenomenon in his 1988 Massey lectures, subsequently published as Necessary Illusions: Thought Control in Democratic Societies (1989). In the preface to that book, Chomsky explains that a major contradiction is inherent within the structure of capitalist democracies: Capitalism tends to concentrate power in the hands of the wealthy, while democracy requires that power be widely distributed among the population. For Chomsky, the manufacturing of consent through mass media is the most common way in which the ruling elite have typically sought to overcome that contradiction:

In capitalist democracies there is a certain tension with regard to the locus of power. In a democracy the people rule, in principle. But decision-making power over central areas of life resides in private hands, with large-scale effects throughout the social order. One way to resolve the tension would be to extend the democratic system to investment, the organization of work, and so on. That would constitute a major social revolution, which, in my view at least, would consummate the political revolutions of an earlier era and realize some of the libertarian principles on which they were partly based. Or the tension could be resolved, and sometimes is, by forcefully eliminating public interference with state and private power. In the advanced industrial societies the problem is typically approached by a variety of measures to deprive democratic political structures of substantive content, while leaving them formally intact. A large part of this task is assumed by ideological institutions that channel thought and attitudes within acceptable bounds, deflecting any potential challenge to established privilege and authority before it can take form and gather strength.

To paraphrase, a society that claims to follow both capitalism and democracy at the same time, such as the United States, must somehow deal with the following tension: on the one hand, democracy requires that people enjoy the right to participate in any decision that affects them, while, on the other hand, capitalism requires that private owners of capital enjoy the right to manage their capital without any interference. This means that the requirements of democracy are incompatible with the requirements of capitalism, making it impossible for both of them to coexist in the same society. According to Chomsky, this tension between democracy and capitalism can be resolved in one of three ways: (1) by extending the principle of democracy to the realm of capital; (2) by using violent force to obtain people’s compliance; or (3) by using the mass media and institutions of socialization to shape public opinion in favor of the established order. The first option violates the basic principle of capitalism and will be unacceptable to the elite classes, while the second option foregoes any pretense to democracy and will be unacceptable to the masses. The third option is what is actually practiced in liberal democracies: it involves maintaining a facade of democratic institutions to placate the population while allowing the elites to keep their power and privilege. Such an arrangement inevitably requires extensive ideological management of the population, which is basically propaganda without any overt use of force, threats, or other authoritarian tactics. Propaganda has replaced violence.


Much of the ideological management of the population—also known as “thought control” (Chomsky’s phrase) or “manufacture of consent” (Walter Lippmann’s phrase)—takes place through news and political commentary in the mass media, a topic that Chomsky has discussed in detail in Manufacturing Consent: The Political Economy of the Mass Media, which he co-authored with Edward S. Herman. In Requiem, Chomsky focuses on the role of the advertising and public relations industries. He points out that the contemporary culture of relentless consumption is by no means a manifestation of natural human desires; it is, rather, the result of sophisticated manipulation of people’s feelings on a mass scale.

One of the best ways to control people in terms of attitudes is what the great political economist Thorstein Veblen called “fabricating consumers.” If you can fabricate wants, make obtaining things that are just about within your reach the essence of life, they’re going to be trapped into becoming consumers.

The consumer culture serves two main objectives for the ruling classes. First, it keeps the vast majority of the population preoccupied with the most superficial pursuits, such as fashion and gadgetry, thereby diverting their attention away from gross injustices and the continuous erosion of their rights. Second, it keeps the treadmill of production and consumption moving at an ever-increasing pace, a process that is essential for capital accumulation. In effect, consumer culture undermines democracy, causing it to lose more and more of its substance until it is reduced to nothing more than a shell.

Incidentally, the theory behind “fabricating consumers” and “manufacturing consent” is shared by conservatives and liberals alike: it is the idea that the vast majority of human beings are incapable of critical thinking, that people don’t know what is good for them, and that they shouldn’t therefore be allowed to make major decisions that affect their lives. The masses are like little children, not mature enough to understand how the world actually works and therefore undeserving of actual participation in public affairs. Consequently, the reasoning goes, the population must be led by an elite minority, i.e., by those who deserve to rule by virtue of their worldly knowledge, maturity, and superior intellect. The notion that society can only function on the basis of a natural hierarchy has been central to conservative thought, but Chomsky argues that many so-called progressives have embraced it too, either explicitly or implicitly: Walter Lippmann is a case in point.


Walter Lippmann, The Phantom Public (1925), p. 145.

According to Chomsky, consumerism plays a central role in keeping the “bewildered herd” preoccupied with childish concerns, allowing the elite to maintain an exploitative political-economic order behind the facade of democracy. Commercial advertising is the main engine of consumerism, inculcating a sense of inadequacy, deficiency, and alienation—a sense of existential lack that can only be overcome by acquiring the latest gadget, the newest car, or the most fashionable dress or accessory. The relief, of course, is short-lived, and the cycle keeps repeating itself over and over again. The resulting consumer culture serves the ruling classes by eroding community and solidarity, producing a type of individualism that isolates people from each other and drains their collective power, reducing citizens to consumers. Chomsky explains:

The ideal is what you actually see today, where, let’s say, teenage girls, if they have a free Saturday afternoon, will go walking in the shopping mall, not to the library or somewhere else. The idea is to try to control everyone, to turn the whole society into the perfect system. Perfect system would be a society based on a dyad, a pair. The pair is you and your television set, or may be now you and the Internet, in which that presents you with what the proper life would be, what kind of gadgets you should have. And you spend your time and effort gaining those things, which you don’t need and you don’t want, and may be you’ll throw them away, but that’s the measure of a decent life.

Chomsky notes that the purpose of commercial advertising is the exact opposite of what is taught in economic theory. The avalanche of advertisements to which we are subjected daily through television, billboards, and the Internet is intended to deceive, not educate. Advertising is not meant to inform the viewers so they can make wise choices; the aim, rather, is to indoctrinate them into desiring the goods and services they don’t actually need and often can’t afford.

If you’ve ever taken an economics course, you know that markets are supposed to be based on “informed consumers” making “rational choices.” Well, if we had a system like that, a market system, then a television ad would consist of, say, General Motors putting up information, saying “here’s what we have for sale.” That’s not what an ad for a car is. An ad for a car is a football hero, an actress, the car doing some crazy thing like going up a mountain or something. The point is to create uninformed consumers who will make irrational choices. That’s what advertising is all about.

The principles and strategies that have proven so successful for selling cars and cigarettes and for branding corporations are widely used for selling political candidates and their agendas as well, come election time. This is the main reason why running for public office is such an unusually expensive undertaking, and why candidates who spend more money on their campaigns than their opponents win the elections 90% of the time. It is also for this reason that electoral campaigns are run just like any other marketing campaign, and that high-profile races, such as the Presidential elections in the United States, are managed by some of the most highly-paid advertising executives.

And when the same institution, the PR system, runs elections, they do it the same way. They want to create uninformed electorate which will make irrational choices, often against their own interests, and we see it every time one of these extravaganzas take place. Right after the election, President Obama won an award from the advertising industry for the best marketing campaign. It wasn’t reported here, but if you go the international business press, executives were euphoric. They said, we’ve been selling candidates, marketing candidates like toothpaste ever since Reagan and this is the greatest achievement we have.

The characteristic features of contemporary political campaigns include uncritical celebration of personal charisma, absence of specific promises, and vague appeals to emotions—features that are already familiar to us from commercial advertising. Barack Obama’s 2008 election campaign was a highly successful example of this model. His electoral/marketing campaign won the Advertising Age’s “marketer of the year” award with 36.1% of the votes, defeating Apple, Zappos, and Nike. Just like most commercial advertising, Obama’s election campaign was designed to sell, not to inform, which is why it consisted of empty rhetoric that his audience was supposed to fill with their own ideas. As Chomsky notes, Obama never actually promised to deliver anything specific in his speeches. Rather, he used generic and vague slogans—emotionally charged words like “hope” and “change”—that the listeners could interpret in any way they wished. Large crowds at Obama’s rallies enthusiastically chanted “Yes We Can,” without noticing that the slogan was a meaningless claim that strategically avoided any commitment or standard on which the candidate’s performance could be evaluated.



The tenth and final principle of the concentration of wealth and power consists of the imperative to “marginalize the population.” According to Chomsky:

One of the leading political scientists, Martin Gilens, came out with a study of the relations between public attitudes and public policy. What he shows is that about 70% of the population has no way of influencing policy. They might as well be in some other country.

Chomsky is referring to a study published in 2014 by Martin Gilens of Princeton University (co-written with Benjamin Page of Northwestern). Gilens also published the expanded version of his research in book form as Affluence and Influence: Economic Inequality and Political Power in America. In the original article, the authors begin by asking: Who really controls policy-making in the United States? Overall, there is a strong status quo bias, which means that it is very hard to get any kind of change to happen through the political process. But when change does happen, it almost always favors the economic elite rather than the average citizen. The data shows that when the economic elites want to have a policy changed, the probability that the change will be enacted increases as the number of people supporting it rises. On the other hand, when the average citizens want to have a policy changed, and their preference is not in alignment with what the economic elites want, then such a policy has virtually no chance of being enacted—regardless of how many people support it.


What Martin Gilens has shown, in effect, is that the United States may be a representative democracy in the technical sense but it is by no means an actual, functioning democracy. This is because government policies in our country do not reflect the preferences of the majority; rather, they reflect the preferences of a tiny economic elite. The term for this sort of arrangement is oligarchy, not democracy. When Gilens’ study was first published, it attracted significant attention in the media. The study is groundbreaking in that it proves through hard data and statistical analysis that the United States is, in fact, ruled by an economic elite. This conclusion, however, is something that most people already know and understand, even without any help from academic researchers. Chomsky makes the same point in Requiem. He also seems to predict that the resulting frustration and resentment could lead to adverse social and political consequences that we are, in fact, witnessing in the age of Donald Trump.

And the population knows it. What it has led to is a population that’s angry, frustrated, hates institutions. It’s not acting constructively to try to respond to this. There is popular mobilization and activism, but in very self-destructive directions. It’s taking the form of unfocused anger, attacks on one another, and on vulnerable targets. That’s what happens in cases like this. It is corrosive of social relations, but that’s the point. The point is to make people hate and fear each other, and look out only for themselves, and don’t do anything for anyone else.

The attempt to implement the ideals of democracy and capitalism at the same time leads to a serious contradiction, as mentioned above. While political power remains firmly in the hands of a small group of wealthy elites, the population is constantly told that they are the actual sovereigns who control their own destiny. The gap between dreams and ambitions on the one hand and the frustrating reality on the other hand goes on widening, leading to “unfocused anger” that demagogues are then able to channel towards scapegoats, i.e., religious and ethnic minorities. Trump’s victory in the 2016 elections, along with the rise of the ultra-right and white supremacy, seems to vindicate Chomsky’s analysis.

The disempowerment of ordinary people is evidenced, according to Chomsky, in how Americans feel about paying taxes.

April 15 is kind of a measure, the day you pay your taxes, of how democratic the society is. If a society is really democratic, April 15 would be a day of celebration. It’s a day when the population gets together, decides to fund the programs and activities that they have formulated and agreed upon. What could be better than that? So, you should celebrate it. It’s not the way it is in the United States. It’s a day of mourning. It’s a day in which some alien power that has nothing to do with you, is coming down to steal our hard-earned money, and you do everything you can to keep them from doing it. That is a kind of measure of the extent to which, at least in popular consciousness, democracy is actually functioning.

In an actual, functioning democracy, the general population will have control over how their tax dollars are spent. Americans hate paying taxes because they know, deep down, that they have no control over policy-making. They don’t see government as a manifestation of their own will, which would be the case in a real democracy, but as an alien entity that is more or less completely unresponsive to their needs and preferences. This is another way of saying that the ruling classes in the United States have successfully marginalized the population, and that they have done so as a matter of conscious policy.


The consequences are hardly surprising: When people lack any actual power to affect policy, they often show a tendency to become increasingly selfish, deciding that they must live according to the maxim “every man for himself.” But human beings can only flourish through cooperation and mutual goodwill. A society that’s based on the pursuit of narrowly defined self-interest will only destroy everything that comes in its path, and will eventually destroy itself.

The tendencies that we’ve been describing within American society, unless they’re reversed, it’s going to be an extremely ugly society. I mean, a society that’s based on Adam Smith’s vile maxim “all for myself, nothing for anyone else,” a society in which normal human instincts and emotion of sympathy, solidarity, mutual support … [are] driven out. … If the society is based on control by private wealth, it will reflect the values that it, in fact, does reflect. The value that is greed, and the desire to maximize personal gain, at the expense of others. Now … a small society based on that principle is ugly, but it can survive. A global society based on that principle is headed for massive destruction.

Large corporations and super-rich individuals can spend more money in a single election than the vast majority of people will earn in a lifetime. While one citizen can cast only one vote, concentrated wealth allows its owners to shape the views of thousands of voters. Campaigns are expensive, and the availability of funds is often the decisive element. A candidate who is able to outspend his/her opponent wins the election nine out of ten times. Even if your favorite candidate doesn’t win, the money you’re able to contribute to the winner’s next election campaign can still buy you a significant amount of influence. Corporations tend to support both political parties, though their relative contributions can vary from one industry to another and also from one election cycle to another. This means that corporate funding is important not just for participation in elections but also for the day-to-day management of the party structure. Since both major political parties are constantly in fundraising mode, they have little choice but to pay attention to the likes and dislikes of their big money donors.

Chomsky returns to a point he made earlier in the documentary:

Concentration of wealth yields concentration of political power, particularly so as the cost of elections skyrockets, which forces the political parties into the pockets of major corporations.

The U.S. Congress has tried to limit how much control big money interests can have on the electoral process, but it has not been able to go very far, thanks largely to a whole series of corporate-friendly decisions by the Supreme Court going back to the nineteenth century. Most people are at least vaguely aware of the Citizens United decision, but that particular Supreme Court ruling didn’t come out of the blue; it has, rather, a very interesting backstory. Chomsky suggests that we take a close look at history, so that’s what we’ll do.

“Corporations,” says Chomsky, “are state-created legal fictions.” Basically, a corporation is an imaginary entity that is brought into existence when the State agrees to give it certain legal rights. A corporation is considered a “legal person,” because it has the right to own property, make contracts, and hire employees, and because it is subject to applicable laws, just like an actual citizen. Everybody understands that corporations are not really persons—they don’t eat, drink, breathe, feel sad or happy, get sick, or die; rather, they are treated as persons only for the purposes of law, taxation, and so on. Throughout the nineteenth and twentieth centuries, however, corporations have acquired more and more rights that were originally intended only for real persons.


The first major step in this direction was Dartmouth College v. Woodward, the 1819 Supreme Court decision that turned the corporate charter from a government-granted privilege into a contract that cannot be altered by government, making it difficult for the government to control corporations; it also held that corporations have standing in the Constitution. However, the most important developments in the expansion of corporate personhood rights took place after the Civil War, when corporate lawyers decided to take advantage of the word “person” as used in the Fourteenth Amendment.

In the wake of the Civil War, Congress passed three amendments to the Constitution. These were meant to (1) abolish slavery, (2) expand the rights of personhood to former slaves, and (3) give African American men the right to vote. Thus, the Thirteenth Amendment (1865) said in part “Neither slavery nor involuntary servitude, except as a punishment for crime whereof the party shall have been duly convicted, shall exist within the United States, or any place subject to their jurisdiction.” The Fourteenth Amendment (1868) said in part “All persons born or naturalized in the United States and subject to the jurisdiction thereof, are citizens of the United States and of the States wherein they reside. No State shall make or enforce any law which shall abridge the privileges or immunities of citizens of the United States; nor shall any State deprive any person of life, liberty, or property, without due process of law; nor deny to any person within its jurisdiction the equal protection of the laws.” The Fifteenth Amendment (1870) said in part “The right of citizens of the United States to vote shall not be denied or abridged by the United States or by any State on account of race, color, or previous condition of servitude.”

The context of these three amendments makes it abundantly clear that the word “person” was used in the Fourteenth Amendment with reference to the legislature’s concern for safeguarding the civil rights of former slaves in particular and African Americans more generally. There is no ambiguity here. Yet, as Noam Chomsky says in “Requiem,” that’s not how it was interpreted.

The fourteenth amendment has a provision that says no person’s rights can be infringed without due process of law. And the intent, clearly, was to protect freed slaves. So, okay, they’ve got the protection of the law. I don’t think it’s ever been used for freed slaves, if ever, [maybe] marginally. Almost immediately, it was used for businesses, corporations. Their rights can’t be infringed without due process of law.

The issue first came up in San Mateo County v. Southern Pacific Railroad, an 1882 Supreme Court case. Among the railroad company’s lawyers was Roscoe Conkling, a former U.S. Senator and Representative from New York who had served on the committee that drafted the Fourteenth Amendment. Arguing before the Supreme Court in 1882, Conkling claimed that the drafting committee had decided to use the word “person” instead of “citizen” so as to ensure that corporations were covered under the equal protection clause (which turned out to be a lie). The Court did not address the issue of corporate personhood in its ruling. Soon afterwards, in a related but separate case, Santa Clara County v. Southern Pacific Railroad (1886), the Chief Justice was reported to have said before the hearing began: “The Court does not wish to hear argument on the question of whether the 14th Amendment to the Constitution, which forbids a State to deny to any person within its jurisdiction the equal protection of the laws, applies to corporations. We are all of the opinion that it does.” This opinion of the Chief Justice was not included in the Court’s final ruling, yet it was recorded by the court reporter in the “headnotes” and was subsequently treated by other courts as if it was, in fact, part of the Supreme Court’s official verdict.

The rest, of course, is history: Since Santa Clara County v. Southern Pacific Railroad, corporations steadily increased their power and were even able to successfully claim for themselves the various provisions of the Bill of Rights, sometimes at the expense of the rights of natural persons. Chomsky finds this phenomenon a moral outrage.

So they gradually became “persons” under the law. Corporations are state-created legal fictions. Maybe they’re good; maybe they’re bad. But to call them “persons” is kind of outrageous. So they got personal rights back about a century ago, and that extended through the 20th century. They gave corporations rights way beyond what persons have. … While the notion of person was expanded to include corporations, it was also restricted. If you take the Fourteenth Amendment literally, then no undocumented alien can be deprived of rights, if they’re persons. Undocumented aliens who are living here and building your buildings, cleaning your lawns, and so on, they’re not persons, but General Electric is a person—an immortal super-powerful person.

In relation to the engineering of elections, the most relevant Supreme Court rulings are those that applied the “free speech” clause of the First Amendment to corporations.

In 1971, the Congress passed the Federal Election Campaign Act (FECA), requiring candidates to disclose sources of campaign contributions and expenditures. A scandal erupted in 1972 when an insurance magnate, W. Clement Stone, contributed $2 million to President Nixon’s election campaign, prompting Congress to thoroughly revise the FECA in 1974. The amended law included statutory limits on contributions by individuals to election campaigns as well as to political action committees (PACs), new disclosure requirements, and campaign spending limits. It also created the Federal Election Commission (FEC) as an enforcement agency.

The provisions limiting campaign expenditures, however, were soon declared unconstitutional by the Supreme Court. In Buckley v. Valeo (1976), the Supreme Court ruled that political spending was equivalent to speech, and that the First Amendment’s protections included financial contributions to candidates and political parties. Earlier, in Grosjean v. American Press Co. (1936), the Supreme Court had ruled that a newspaper corporation had a First Amendment liberty right to freedom of speech. In First National Bank of Boston v. Bellotti (1978), the Court decided that non-media corporations had the right to spend money on ballot initiative campaigns.


Given the consistent tendency of the Supreme Court to give more and more rights of natural persons to for-profit corporations, the Citizens United ruling in January 2010 did not come as a complete surprise. That case, essentially, centered on the constitutionality of “soft money.” In the late 70s, the FEC had allowed donors to contribute unlimited money to political parties (but not to individual candidates) so long as it was used for “party building activities” as opposed to election campaigns. In reality, both the Republican and Democratic parties freely spent this “soft money” to support candidates, and efforts at bringing such spending under control by Presidents George H. W. Bush and Bill Clinton did not succeed in Congress. In 1995, Senators John McCain (R) and Russ Feingold (D) started working on campaign finance reform to address the problem. The resulting legislation was blocked by Senate Republicans in 1998, but it passed the Congress in 2002 as the Bipartisan Campaign Reform Act (BCRA) and was signed into law by President George W. Bush.

In Citizens United v. Federal Election Commission (2010), the Supreme Court overturned most provisions of the McCain–Feingold legislation that restricted corporate money in federal elections. The ruling declared unconstitutional the prohibition on political advocacy by corporations (both for-profit and nonprofit) and unions through “independent expenditures” and the financing of electioneering communications. It allows corporations and unions to spend unlimited sums on political advertising and other forms of advocacy aimed at convincing voters to support or reject particular candidates. Corporations and unions still may not donate money directly to election campaigns or political parties, but they are now free to spend as much money as they want on promoting or undermining a candidate so long as there is no “coordination” with any campaign. As a result, the Citizens United decision made possible the rise of Super PACs—which are basically PACs on steroids.

Political Action Committees or PACs are organizations that collect funds and make them available to political parties of their choice, or donate them to a candidate’s election campaign. There are various legal restrictions on PACs in terms of who can donate to them and how much they can spend. For example, donations to traditional PACs are capped at $5,000 per year.

Two months after the Citizens United ruling, the federal Court of Appeals for the D.C. Circuit held in Speechnow.org v. FEC that PACs that did not make direct contributions to candidates or political parties were allowed to receive unlimited contributions and to spend those contributions for political advocacy.


This decision, along with Citizens United, led to the proliferation of “independent expenditure only committees” or Super PACs. Such organizations can receive unlimited donations from individuals, unions, and corporations (both for-profit and nonprofit), and they can spend these funds to support a cause or a candidate, but are prohibited from “coordinating” their activities with any political party or election campaign. They are also required by law to disclose who their donors are.

The above rulings have not only opened the floodgates of political spending by both wealthy individuals and business corporations, they have also created a legal loophole that allows unlimited spending by donors who prefer to remain in the shadows. Certain nonprofit organizations—mainly 501(c)(4) “social welfare” organizations—can act as Super PACs so long as political advocacy is not their primary function. Since these nonprofit organizations are not required to disclose who their donors are, they can receive unlimited money while shielding their donors from public scrutiny and, at the same time, channeling these anonymous donations to political action committees. This nonprofit loophole has given rise to a relatively new phenomenon aptly named “dark money.” Probably no one has exploited this loophole more than the Koch Brothers and the billionaire members of their secretive network.

The following chart (courtesy of Open Secrets) depicts political spending by outside groups. The term “outside spending” refers to political expenditures made by groups or individuals independently of a candidate’s election campaign. Groups in this category include conventional party committees, super PACs, and 501(c) nonprofit organizations. Notice the impact of Citizens United by comparing outside spending in the 2006 midterm elections to that in the 2010 midterm elections.


Another major blow to the proponents of campaign finance reform came in 2014, when the Supreme Court declared the aggregate contribution limits of Section 441a of the Federal Election Campaign Act (FECA) to be unconstitutional. The relevant law capped total individual spending per election cycle. For the 2013–14 election cycle, for example, an individual could give no more than $2,600 to a candidate for federal office, with an aggregate limit of $48,600. Moreover, individuals were prohibited from donating more than $74,600 to political parties and PACs. The total aggregate limit was therefore $123,200 per election cycle. In McCutcheon v. Federal Election Commission, the Supreme Court upheld the spending limit per candidate per election cycle, but struck down all the aggregate limits—allowing individual donors to support as many candidates per election cycle as they want. This decision paved the way for “joint fundraising committees,” which allow candidates to band together and legally raise large sums of money from the same individuals.
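The arithmetic behind those limits can be checked directly. The dollar amounts below are the 2013–14 figures quoted above; the candidate count is only an illustration of what the aggregate cap implied for a donor giving the per-candidate maximum:

```python
# 2013-14 FECA limits, as quoted above (all figures in dollars).
per_candidate = 2_600         # base limit per candidate (upheld in McCutcheon)
candidate_aggregate = 48_600  # aggregate limit across all federal candidates
party_pac_aggregate = 74_600  # aggregate limit across parties and PACs

# The total aggregate limit struck down in McCutcheon:
print(candidate_aggregate + party_pac_aggregate)  # 123200

# Under the old cap, a donor giving the per-candidate maximum could
# support at most this many candidates per cycle:
print(candidate_aggregate // per_candidate)  # 18
```

After McCutcheon, only the per-candidate base limit constrains a donor, so the number of candidates one person can max out is no longer bounded.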



Chomsky identifies the eighth principle of the concentration of wealth and power as the necessity, from the viewpoint of the elite, of preventing the working class from organizing and demanding its rights.

There is one organized force which [has] traditionally … been in the forefront of efforts to improve the lives of the general population. That’s organized labor. It’s also a barrier to corporate tyranny. A major reason for the concentrated, almost fanatic attack on unions, on organized labor, is that they are a democratizing force. They provide a barrier that defends workers’ rights, but also popular rights generally. That interferes with the prerogatives and power of those who own and manage the society.

The working class constitutes the overwhelming majority of the population. Unlike the plutocrats and the oligarchs, however, the working class is not as organized as it needs to be in order to safeguard its collective interests. There have been periods in U.S. history when the working class did manage to organize itself through labor unions and socialist parties, and whenever it was so organized it was successful in gaining new rights. These include some of the most common features of the American workplace that we today take for granted, such as minimum wage laws, an 8-hour workday, overtime pay, lunch breaks, paid vacations, sick leave, wrongful termination laws, health insurance, sexual harassment laws, pensions, workers’ compensation, unemployment insurance, and the weekend. From the viewpoint of the elite, of course, this tendency of the working class to organize and successfully demand rights and seek improvements is simply intolerable. The United States has a long and violent history of repression against workers who dared to protest their conditions or sought to organize. By the 1920s, much of the labor movement had been successfully crushed by business interests. It was only in the wake of the Great Depression that it was able to revive and reorganize itself.

Chomsky explains how the credit for the New Deal can’t be given solely to President Roosevelt or the Democratic Party. These reforms would never have been implemented without the popular pressure from the masses, i.e., from organized labor and socialist parties.

Franklin Delano Roosevelt, he himself was rather sympathetic to progressive legislation that would be in the benefit of the general population, but he had to somehow get it passed. So he informed labor leaders and others, “force me to do it.” What he meant is, go out and demonstrate, organize, protest, develop the labor movement. When the popular pressure is sufficient, I’ll be able to put through the legislation you want. So, there was kind of a combination of sympathetic government, and by the mid-30s, very substantial popular activism [which made the New Deal possible]. There were industrial actions. There were sit-down strikes, which were very frightening to ownership. You have to recognize that sit-down strike is just one step before saying, “we don’t need bosses; we can run this by ourselves.” And business was appalled. You read the business press, say, in the late 30s, they were talking about the “hazard facing industrialists” and the “rising political power of the masses,” which has to be repressed.

Chomsky notes that the business interests returned to the task of marginalizing labor unions in the immediate aftermath of the Second World War. At that point, a quarter of the workforce was unionized, and the labor movement’s promise to avoid going on strikes during the war was no longer in effect. Prompted by business lobbies, the Congress passed the Taft–Hartley Act in 1947, severely restricting the power of labor unions. It amended the National Labor Relations Act of 1935, also known as the Wagner Act and nicknamed the “labor’s bill of rights.” The earlier law had given workers the right to organize and join labor unions, to strike, and to bargain collectively. It had also prohibited business owners from attempting to dominate or influence a labor union, and from encouraging or discouraging union membership through any special conditions of employment or through discrimination against union or non-union members in hiring. In effect, the Wagner Act had permitted a “closed shop” (when an employer agrees to hire only union members) as well as a “union shop” (when an employer agrees to require new employees to join the union). When the Republican Party gained control of the Congress in the 1946 midterm elections, one of its first priorities was to attack and weaken as many New Deal laws as possible. The first target was labor unions, hence the Congress’ gutting of the Wagner Act.


The Taft–Hartley Act, also known as the Labor Management Relations Act of 1947, was the first step in the decades-long process of dismantling the New Deal. It prohibited jurisdictional strikes, wildcat strikes, solidarity or political strikes, secondary boycotts, secondary and mass picketing, and monetary donations by unions to federal political campaigns. Closed shops were prohibited and union shops were heavily restricted. States were allowed to pass “right to work” laws that outlawed closed or union shops. The act allowed the president to block or prevent the continuation of a strike on the grounds that it would endanger national health or safety. Democrats denounced the law as a “new guarantee of industrial slavery.”

Chomsky continues:

Then McCarthyism was used for massive corporate propaganda offensives to attack unions. It increased sharply during the Reagan years. I mean, Reagan pretty much told the business world, if you want to illegally break organizing efforts and strikes, go ahead. It continued in the 90s and, of course, with George W. Bush, it went through the roof. By now, less than 7% of private sector workers have unions.

Union membership in the private sector reached a peak in the 1950s and has been declining ever since. In 1954, about 35% of private sector workers were unionized; today, that figure is only 6.5%. The public sector unions have remained stable since the 1980s at about 11–12%. Labor unions act as barriers to economic inequality: when unions decline, the rich get richer while working-class incomes stagnate or plunge, as depicted in the following chart. Notice how the share of income going to the richest tenth of the population (red line) came close to 50% on two occasions—1929 and 2008—just before the system crashed.

According to Chomsky, the post-WWII attacks on labor unions have virtually dissolved the main counter-force to the expanding power of the business class. As a result, when worker productivity and real wages started to diverge in the 1970s, there was no organized labor to speak of that could challenge the exploitation.



The decline of labor unions is also correlated with a decline in class consciousness among the working people. In sharp contrast, class consciousness is alive and well among the elite. In the United States, the plutocrats and oligarchs are busy exploiting the working class, which is the only reasonable explanation for the fact that all economic indicators show rising inequality. Yet, anyone who mentions this is immediately accused of causing division or fomenting class warfare. Strangely enough, Americans have, for the most part, grown allergic to the word “class.” The only time it is okay to use the word is when someone is referring to the “middle class.” Apparently, neither the upper class nor the lower class exists anymore—we are all part of the middle class.

Chomsky explains how class consciousness has declined since the late nineteenth century, when the Republican Party represented the progressive element in U.S. politics and regarded wage labor as nothing more than a type of slavery.

Now, if you’re in a position of power, you want to maintain class-consciousness for yourself, but eliminate it everywhere else. Go back to the 19th century, in the early days of the Industrial Revolution in the United States, working people were very conscious of this. They in fact overwhelmingly regarded wage labor as not very different from slavery, different only in that it was temporary. In fact, it was such a popular idea that it was the slogan of the Republican party. That was a very sharp class-consciousness. In the interest of power and privilege, it’s good to drive those ideas out of people’s heads. You don’t want them to know that they’re an oppressed class. So this is one of the few societies in which you just don’t talk about class.

The concept of class has to do with three main variables: wealth, income, and power. Your location in the class hierarchy is determined by how much of these you possess. Chomsky, in his inimitable style, simplifies the concept down to its bare essence: “Who gives the orders? Who follows them? That basically defines class.”


Chomsky’s fourth principle of the concentration of wealth and power is to “Shift the Burden.” He uses the word “burden” in the sense of the responsibility for maintaining and managing the society in which one lives. Morality demands that people who have greater wealth and bigger incomes ought to be held responsible at a higher level for meeting the needs of their community—they should contribute more to society because they’ve taken more from society. This means that the rich ought to pay a larger percentage of their wealth and income in the form of taxes. The idea behind the fourth principle is that, in violation of basic social ethics, the rich try to shift their tax burden onto the middle and lower classes. They do this by paying less than their fair share of taxes, thereby forcing everyone else to pay more than their fair share.

In “Requiem,” Noam Chomsky explains that wealthy individuals and business corporations used to pay very high taxes in the United States all the way up to the 1960s. Over the subsequent fifty years or so, however, they have succeeded in changing the tax codes and other regulations so that today they are able to get away with paying only a tiny fraction of their fair share.

The higher taxes on corporations were put in place by President Roosevelt in the 1930s, despite vehement opposition from some sectors of the business class. The American economy expanded considerably during this period of high taxation. In the quarter-of-a-century following the Second World War, when Japan and much of Europe were struggling to recover from the devastation of war, the United States experienced a period of rapid economic growth. This generated new wealth, which, coupled with the various welfare state policies established by Presidents Roosevelt and Johnson, led to increasing prosperity for large parts of the population. Even though economic disparities remained, they were prevented from growing out of control through government regulations. People generally felt that anyone who was willing to work hard would be able to escape poverty and achieve a relatively comfortable lifestyle.

Chomsky notes:

The American Dream, like many [other] ideals, was partly symbolic, but partly real. So in the 1950s and 60s, say, there was the biggest growth period in American economic history—the Golden Age. It was pretty egalitarian growth, so the lowest fifth of the population was improving almost as much as the upper fifth. And there were some welfare state measures which improved life for much of the population. It was, for example, possible for a black worker to get a decent job at an auto plant, buy a home, get a car, have his children go to school, and so on. And the same across the board.

In fact, many business leaders understood that their profits depended on the ability of ordinary people to buy their products, and therefore paying higher salaries made perfect business sense. They knew that if they didn’t pay good wages to their workers, those workers wouldn’t have enough money to spend, and that lower spending would mean lower consumption, which would mean lower profits. But this was true only when the workers and the consumers were essentially the same people. As Chomsky explains, for the super-rich in the United States, that is no longer the case.

When the U.S. was primarily a manufacturing center, it had to be concerned with its own consumers, here. Famously, [in 1914] Henry Ford raised the salary of his workers so they’d be able to buy cars. When you’re moving into an international “plutonomy,” as the banks like to call it—the small percentage of the world’s population that is gathering increasing wealth—what happens to American consumers is much less a concern, because most of them aren’t going to be consuming your products anyway, at least not on any major basis. Your goals [if you’re a business executive] are: profit in the next quarter, even if it’s based on financial manipulations; high salary, high bonuses [for yourself and your friends]; produce overseas if you have to and produce for the wealthy classes here and their counterparts abroad.

In the above quotation, notice the keyword “plutonomy.” Chomsky says that this is the word that banks use to describe an economy where only the super-rich matter, for they’re the main investors and the main consumers. What he doesn’t say is where that word came from, which is an interesting and very revealing story in its own right. Simply google the phrase “Citibank Memos” and you’ll find the information that’s missing from the film.

What about the rest? Well, there’s a term coming into use for them, too. They’re called the “precariat”—precarious proletariat. The working people of the world who live increasingly precarious lives.

Chomsky is referring to the term that was first used by French sociologists in the 1980s to describe the class of temporary workers. More recently, the term has been popularized by Guy Standing, an economist at the University of London, through his book The Precariat: The New Dangerous Class (2011). According to Standing, the precariat is a new class produced by the neoliberal policies of the last thirty or so years. This new class is defined by the precarious, uncertain life conditions of its members; they are forced to do a great deal of hard work that is neither recognized nor properly compensated, with no sense of security about their immediate or long-term future. Writing for Policy Network in 2011, Standing defined the precariat as follows:

It consists of a multitude of insecure people, living bits-and-pieces lives, in and out of short-term jobs, without a narrative of occupational development, including millions of frustrated educated youth who do not like what they see before them, millions of women abused in oppressive labour, growing numbers of criminalised tagged for life, millions being categorised as ‘disabled’ and migrants in their hundreds of millions around the world. They are denizens; they have a more restricted range of social, cultural, political and economic rights than citizens around them.

In “Requiem,” Chomsky goes on to talk about the issue of taxation.

During the period of great growth of the economy—the 50s and 60s, but in fact earlier—taxes on the wealthy were far higher. Corporate taxes were much higher, taxes on dividends were much higher, simply taxes on wealth were much higher. The tax system has been redesigned [during the subsequent decades], so that the taxes that are paid by the very wealthy are reduced, and, correspondingly, the tax burden on the rest of the population is increased.

In the following graph from Wikipedia, notice the ups and downs in the marginal tax rate for the lowest and highest earners over a hundred year period. According to Investopedia, “A marginal tax rate is the amount of tax paid on an additional dollar of income.” Taxpayers are divided into various tax brackets depending on their income; as they move from lower tax brackets to higher ones, the rate at which they are taxed goes up. In other words, “As income increases, what is earned will be taxed at a higher rate than the first dollar earned.” The exact rates of increase from one tax bracket to another are up to the legislature to decide, which is where the influence of the affluent comes into play. The trends are clear, or, as Chomsky puts it, “the numbers are striking.”
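The bracket mechanics described above can be made concrete with a short sketch. The thresholds and rates below are invented for illustration, not actual IRS figures; the point is only that each marginal rate applies to the slice of income that falls inside its bracket, never to the whole income.

```python
# Hypothetical bracket schedule: (upper bound of bracket, marginal rate).
BRACKETS = [
    (10_000, 0.10),
    (40_000, 0.20),
    (float("inf"), 0.35),
]

def tax_owed(income: float) -> float:
    """Apply each marginal rate only to the slice of income in its bracket."""
    owed, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if income <= lower:
            break
        owed += (min(income, upper) - lower) * rate
        lower = upper
    return owed

# A $50,000 income is NOT taxed at a flat 35%:
# 10,000 * 0.10 + 30,000 * 0.20 + 10,000 * 0.35 = 1,000 + 6,000 + 3,500
print(tax_owed(50_000))  # 10500.0
```

So this hypothetical earner pays 21% of their income in tax even though their top marginal rate is 35%, which is exactly why raising or lowering the top marginal rate matters most to the highest earners.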


There is a crucial distinction between the marginal tax rate and the effective tax rate. According to Investopedia, “The marginal tax rate refers to the tax bracket into which a business’s or individual’s income falls.” This, however, does not reflect the actual rate at which the income of the said business or individual is taxed, which depends on numerous factors, rules, and loopholes besides the income. For this reason, it is the effective tax rate that is “a more accurate representation of tax liability than an individual or business’s marginal tax rate.”
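The distinction can be stated in one line: the effective rate is the total tax actually paid divided by total income, and because of deductions and loopholes it sits at or below the top marginal rate. A minimal sketch with invented figures:

```python
def effective_rate(total_tax_paid: float, total_income: float) -> float:
    """Fraction of income actually paid in tax, regardless of bracket."""
    return total_tax_paid / total_income

# Invented example: a taxpayer whose top marginal bracket is 35% but who,
# after the bracket-by-bracket calculation and deductions, paid $10,500
# in tax on $50,000 of income:
print(effective_rate(10_500, 50_000))  # 0.21
```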

The tax on corporations is essentially a burden on members of the upper classes who own the most stocks and shares, which is why they invest considerable time and money to get the government to reduce the effective corporate tax rate. In practice, however, there are plenty of ways that allow corporations to avoid paying even the reduced effective tax rate—such as hiding their wealth in tax havens like Delaware or the Cayman Islands.

The following graph from Wikipedia shows the ups and downs in the effective corporate tax rates over a sixty year period. Notice the downward trend.


Compare the downward trend in the effective corporate tax rate shown above with the upward trend in corporate profits over the same period. The latter is depicted in the following graph, also from Wikipedia.


Chomsky then goes on to say:

Now the shift is towards trying to keep taxes just on wages and on consumption—which everyone has to do—not, say, on dividends, which only go to the rich.

A dividend is basically the cash payment that a company makes on a regular basis, out of its profits, to those who own the company’s stock; such income is taxable. When the value of an investment or real estate increases above its purchase price, that’s called a “capital gain.” When that asset is sold, a capital gains tax is applied. Taxes on both dividends and capital gains are paid overwhelmingly by the wealthy, which is why there has been a trend towards lowering these taxes. When the tax burden is reduced on the upper classes, it has to be shouldered by the rest of society. As a result, taxes on wages and salaries are increased, as are the consumption taxes that the government imposes on the sale of goods and services. These latter taxes are collected mainly from the middle and lower classes.
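The capital-gains arithmetic is simple enough to show directly. All figures here are hypothetical, and the 15% rate is illustrative rather than a statutory figure:

```python
# A capital gains tax applies to the realized gain at sale,
# not to the full sale price. All figures are hypothetical.
purchase_price = 100_000
sale_price = 150_000
capital_gains_rate = 0.15  # illustrative rate, not a statutory figure

capital_gain = sale_price - purchase_price  # 50,000 gain realized at sale
tax_due = capital_gain * capital_gains_rate
print(tax_due)  # 7500.0
```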

The unfairness of the U.S. tax system is pretty obvious. The top 0.1% of the population gets to increase its wealth while bankrupting the working classes. Worker productivity has consistently increased since the 1970s, but wages over the same period have been stagnant or declining (when corrected for inflation). Corporations are earning record amounts of profits but are refusing to pay their fair share of taxes to sustain the very society that is allowing them to earn those profits. The public bailouts and stimulus packages after the crash of 2008 helped create an “economic recovery,” but almost the entire new wealth generated during this period has gone to the super-rich, leaving a few crumbs for the rest of the population.


Considering the above facts, anyone can conclude that the U.S. system of taxation is basically unacceptable and needs to be dismantled. Yet we don’t see masses of people in the streets carrying the proverbial pitchforks and torches. This is because, while there is a great deal of discontent, there is very little clarity about who is responsible. Under such circumstances, it is all too tempting to look for saviors and to blindly follow charlatans and false prophets.

An important cause for the lack of clarity has to do with how our ideology has been shaped by powerful interests, as discussed under Principle #2. Whenever a grossly unjust arrangement is put into place, some form of justification is required to make this unfairness palatable to the exploited masses. Even when large numbers of people do not swallow the given ideology, those who do swallow it tend to make substantial change that much harder.


The most common justification for lowering taxes on wealthy individuals and business corporations, while increasing them on wages and consumer goods, is called “trickle-down economics.” But this is sheer falsehood, since there is no concrete evidence that cutting taxes on the rich somehow “increases investment and increases jobs.” As already mentioned, the exact opposite seems to be the case: the period of maximum economic growth in the United States was a period with some of the highest marginal and effective tax rates. As Chomsky explains: “If you want to increase investment, give money to the poor and the working people. They have to keep alive, so they [will] spend their incomes. That stimulates production, stimulates investment, leads to job growth, and so on.”



Noam Chomsky’s fifth principle of the concentration of wealth and power is “Attack Solidarity,” which basically refers to the same idea as “divide and conquer.” Chomsky defines the word solidarity simply as “caring for others,” but it’s a little deeper than that. Solidarity is the social form of altruism.

Solidarity is quite dangerous, from the viewpoint of the masters. You’re only supposed to care about yourself, not about other people. This is quite different from the people they claim are their heroes, like Adam Smith, who based his whole approach to the economy on the principle that sympathy is a fundamental human trait, but that has to be driven out of people’s heads. You’ve got to be for yourself, follow the vile maxim, don’t care about others—which is okay for the rich and powerful but is devastating for everyone else.

The economic ideology that has been dominant over the last half-century is known as “neoliberalism.” According to neoliberal doctrine, individuals are naturally selfish and, if everyone is allowed to follow his or her own (narrowly defined) self-interest, the whole society ends up benefiting. Exactly how selfish behavior on the part of a society’s individual members leads to the overall benefit of that society is a mystery, usually explained by the magic of the free market as orchestrated by the “invisible hand.” In popular mythology, the eighteenth-century Scottish philosopher Adam Smith (1723–1790) is believed to have discovered this phenomenon. For anyone who takes the trouble of actually studying Smith’s work, neoliberal ideology turns out to be the exact antithesis of everything he stood for.

Adam Smith was a moral philosopher and one of the first political economists; he was also a leading figure of the Scottish Enlightenment. His two most important books are titled The Theory of Moral Sentiments (1759) and The Wealth of Nations (1776). Smith’s economic theory is grossly misunderstood, or deliberately misinterpreted, primarily because the relationship between his two books is not widely appreciated, and also because he is frequently quoted out of context. Smith believed that sympathy, or benevolence, which is the foundation of solidarity, is one of the most basic human sentiments. We are the kind of creatures who are naturally inclined to help our fellow human beings. While we often act from self-interest, no society can function if its members are lacking in sympathy.

In sharp contrast, neoliberal ideology emphasizes the pursuit of self-interest as the main human motivation. Since it gets human nature wrong, neoliberalism has to convince people through education and propaganda that they are supposed to be individualistic in their goals and that emotions like sympathy, benevolence, and altruism are to be shunned. We’re told that a society functions best when its individual members act entirely or almost entirely in their own self-interest, and that any attempt by society to restrain that motive is counterproductive to its own well-being. When people are brainwashed into thinking that they have no obligation to help their fellow human beings, they turn against each other and lose their capacity to resist and oppose the powers that be.

In “Requiem,” Chomsky discusses two major American institutions that are based on the principle of solidarity, both of which are under relentless attack—Social Security and public education.

Social Security means, I pay payroll taxes so that the widow across town can get something to live on. For much of the population, that’s what they survive on. It’s of no use to the very rich, so therefore there’s a concerted attempt to destroy it. One of the ways is defunding it. You want to destroy some system? First defund it. Then, it won’t work. People will be angry. They want something else. That’s a standard technique for privatizing some system.

In the United States, the Social Security Act was originally signed into law by President Franklin Roosevelt in 1935, and is now codified as U.S. Code, Title 42, Chapter 7. It is the foundation of the welfare state measures established in the wake of the Great Depression. The two Social Security Trust Funds are funded through payroll taxes collected by the IRS. It is the major source of income for the elderly, people with disabilities, and families needing temporary assistance. Attacks on Social Security include the argument that it reduces private ownership and redistributes wealth through government intervention rather than through free markets. Conservative and libertarian think tanks have been lobbying for the privatization of Social Security since the 1990s. In 1997, President Bill Clinton and Speaker of the House Newt Gingrich reached a secret agreement to “reform” Social Security. Clinton was supposed to make the announcement in his State of the Union address in January 1998, but this was derailed due to the Monica Lewinsky scandal. President George W. Bush also tried to privatize Social Security at the beginning of his second term, but he failed to receive sufficient popular support, while the Democratic victories in the 2006 midterm election basically killed Bush’s proposal in the Congress. Even though Social Security has so far survived privatization attempts, proposals for reducing benefits in a variety of ways keep appearing on a regular basis.

Another way in which the principle of solidarity has been institutionalized is public education, including K-12 schools and state funded colleges and universities. Chomsky calls public education “one of the jewels of American society.” Both sets of institutions are under attack. Writing in the Indypendent, scholar and activist Lois Weiner notes that the neoliberal assault on K-12 public education includes such tactics as “privatization of schools and services; charter schools, public-school closings, fragmentation of the school system’s administrative apparatus; budget cuts, high-stakes standardized testing and the destruction of the teacher unions.”

According to Chomsky:

Public schools are based on the principle of solidarity. I no longer have children in school; they’re grown up. But the principle of solidarity says, I happily pay taxes so that the kid across the street can go to school. Now that’s normal human emotion. You have to drive that out of people’s heads. I don’t have kids in school. Why should I pay taxes? Privatize it. The public education system, all the way from kindergarten to higher education is under severe attack.

Chomsky then goes on to discuss how public support for college education has declined in the United States and how this decline has contributed to huge student debt that only serves the interests of the wealthy elite.

You go back to the Golden Age again, the great growth period in the 50s and 60s. A lot of that was based on free public education. One of the results of the Second World War was the GI Bill of Rights, which enabled veterans, and remember, that’s a large part of the population then, to go to college. They wouldn’t have been able to, otherwise. U.S. was way in the lead in developing extensive mass public education at every level. By now, in more than half the states, most of the funding for colleges comes from tuition, not from the state. That’s a radical change.



Perhaps the most important means of preventing huge concentrations of wealth and power is appropriate government regulation of business, banking, and finance. The sixth principle, “Run the Regulators,” describes how the masters circumvent that safeguard. Chomsky explains the phenomenon of “regulatory capture,” whereby the foxes become the guards at the hen house.

If you look over the history of regulation—say, railroad regulation, financial regulation and so on—you find that quite commonly it’s either initiated by the economic concentrations that are being regulated, or it’s supported by them. And the reason is because they know that, sooner or later, they can take over the regulators. And it ends up with what’s called “regulatory capture.” The business being regulated is in fact running the regulators. Bank lobbyists are actually writing the laws of financial regulation—it gets to that extreme. That’s been happening through history and, again, it’s a pretty natural tendency when you just look at the distribution of power.

Regulatory capture occurs when the government agencies responsible for ensuring that businesses follow the regulations allow themselves to become complacent; instead of actively preventing problems from arising in the first place, they lose interest in anticipating or detecting possible problems. They become “captured,” in effect, by the very entities they’re duty bound to keep under control. As Scott Hempling points out, regulatory capture is not the same thing as corruption. Illegal acts like “financial bribery, threats to deny reappointment, promises of a post-regulatory career” do occur, but they are examples of corruption carried out by the entity being regulated. In contrast, regulatory capture describes the attitudes, actions, and non-actions on the part of regulatory agencies that prevent regulations from being fully or properly enforced. According to Hempling:

A regulator is “captured” when he is in a constant state of “being persuaded”: persuaded based on a persuader’s identity rather than an argument’s merits. Regulatory capture is reflected in a surplus of passivity and reactivity, and a deficit of curiosity and creativity. It is evidenced by a body of commission decisions or non-decisions—about resources, procedures, priorities, and policies—where what the regulated entity wants has more influence than what the public interest requires.

In “Requiem,” Chomsky explains that the enormous expansion of lobbying in the 1970s constituted a direct response by the business elite to the restrictions imposed on them by government regulations. Its purpose was, and continues to be, the control of legislation so that it serves the interests of businesses, and not those of workers, the general public, or the natural environment.

The business world was pretty upset by the advances in public welfare in the 60s, in particular by Richard Nixon. It’s not too well understood that he was the last New Deal president, and they regarded that as class treachery. In Nixon’s administration, you get the consumer safety legislation, safety and health regulations in the workplace, the EPA (the Environmental Protection Agency). Business didn’t like it, of course. They didn’t like the high taxes. They didn’t like the regulation. And they began a coordinated effort to try to overcome it. Lobbying sharply increased.

Business corporations have learned that the money they spend on lobbying is part of the normal cost of doing business in America.


The success of the lobbying initiative could be seen almost immediately, as deregulation began in the Carter administration and gained tremendous momentum during the Reagan era. Chomsky notes how President Reagan bailed out banks like Continental Illinois. Instead of letting the bank fail, the FDIC spent $4.5 billion to bail it out; this was the largest government bailout at the time. That’s also when the term “too big to fail” became popular, after it was used by Congressman Stewart McKinney during a 1984 Congressional hearing. A company that’s “too big to fail” is basically too strong to be regulated by any government agency; it can indulge in questionable practices with the assurance that the taxpayers will rescue it if it starts to go down. One of the highlights of the Reagan presidency was the Savings & Loan crisis, which led to large bailouts. Financial regulations were weakened even further when the Glass–Steagall Act was dismantled under President Clinton, as already discussed under Principle #3. In 2008–09, the Bush and Obama bailouts of Wall Street set a new record. As a result of the Savings & Loan scandal of the 1980s, however, more than a thousand bankers were jailed. Nothing like that happened after the global financial meltdown of 2008. As Matt Taibbi explains here, the individuals and institutions responsible for the suffering of millions of people were never held accountable, let alone convicted or punished, mainly because of their deep financial ties with the Washington elite.


Illustration by Victor Juhasz

According to Chomsky, all of these bailouts violate the ideology of neoliberal capitalism, since governments are not supposed to intervene in the functioning of the “free markets.” Conservatives use the term “nanny state” to criticize public assistance like Social Security or publicly funded programs like schools. In practice, however, they are eager to ask for taxpayer bailouts every time a major corporation is about to go bankrupt or an investment bank is about to face the consequences of its own irresponsible behavior.

In a capitalist economy, you wouldn’t do that. In a capitalist system that would wipe out the investors who made risky investments. But the rich and powerful, they don’t want a capitalist system. They want to be able to run to the nanny state as soon as they’re in trouble, and get bailed out by the taxpayer. That’s called “too big to fail.”

While happily accepting government bailouts, the American oligarchs continue to preach the free market ideology and the need for small government to everyone else.

Meanwhile, for the poor, let market principles prevail. Don’t expect any help from the government. “Government is the problem, not the solution.” That’s essentially neoliberalism. It has this dual character which goes right back in economic history: one set of rules for the rich; opposite set of rules for the poor.

It didn’t have to be that way. When President Obama took office in January 2009, he could have listened to the advice of experts on how to fix the financial sector. Some even thought that the tremendous mandate that Obama had received in the elections meant that he had a once-in-a-lifetime chance of bringing about real change—that he could become a worthy successor to FDR. It soon became apparent that these hopes for change were illusory, as the President surrounded himself with the same people whose policies and actions had previously produced one crisis after another.

There are Nobel laureates in economics who significantly disagree with the course that we’re following. People like Joe Stiglitz, Paul Krugman, and others, and none of them were even approached. The people picked to fix the crisis were those who created it—the Robert Rubin crowd, the Goldman Sachs crowd—they created the crisis [and] are now more powerful than before.

Chomsky is fond of saying that the increasing concentration of wealth and power should not surprise anyone. It is not by accident or bad luck. It is the natural and expected result of the policies that our representatives have supported for decades. If the individuals who run large corporations and financial institutions end up as chief government regulators, there is no reason to think that they would suddenly become defenders of the rights and interests of the general population.

Nothing surprising about this. It’s exactly the dynamics you’d expect. … Everywhere you look, policies are designed this way, which should come as absolutely no surprise to anyone. That’s what happens when you put power into the hands of a narrow sector of wealth, which is dedicated to increasing power for itself, just as you’d expect.

Members of the United States Congress are responsible for safeguarding the rights and interests of the American people whose votes elect them. In practice, they tend to pay a lot more attention to what lobbyists want than to what their voters need. Every member of the Congress has to spend about 30% of his or her time asking people for money, and each of them must raise about $10,000 a week for the next election campaign. Most lobbyists represent the biggest donors, and the policies they favor cannot be ignored. American business has certainly taken the advice of the Powell Memorandum. Writing in the Atlantic, Lee Drutman points out that business corporations spend approximately $2.6 billion on lobbying. “Of the 100 organizations that spend the most on lobbying, 95 consistently represent business.” Large corporations often have more than a hundred lobbyists working for them in Washington DC, “allowing them to be everywhere, all the time.” Which citizens’ group can possibly match the spending power and ubiquitous reach of Exxon Mobil or Goldman Sachs? “For every dollar spent on lobbying by labor unions and public-interest groups together, large corporations and their associations now spend $34.” Is it any wonder, then, that laws and policies are made that consistently increase the wealth and power of business corporations? Or that regulatory agencies like the EPA or the SEC are essentially powerless to enforce public interest regulations? No surprises here. You can’t put the foxes in charge of guarding the hen house and still hope to see your brood cackling in the morning.


The third principle of the concentration of wealth and power is “Redesign the Economy,” i.e., use your political influence to change the rules of the economic system, so that it favors the already advantaged class in new and more powerful ways. Noam Chomsky identifies two major factors under this principle: (1) financialization of the economy and (2) the offshoring of production.


Writing in Forbes, Mike Collins defined the term financialization as the “growing scale and profitability of the finance sector at the expense of the rest of the economy and the shrinking regulation of its rules and returns.” In the film, Chomsky says that financial institutions—“banks, investment firms, insurance companies, and so on”—have a legitimate role to play in the economy, but starting in the 1970s they began to expand their power and influence beyond that legitimate role, thereby enriching the wealthy and making the economy vulnerable to crashes. During this period, the U.S. economy weakened due to the shift from manufacturing to finance, until, in the words of Mike Collins, “The emphasis was no longer on making things—it was [on] making money from money.” According to Chomsky:

By 2007, right before the latest crash, they had literally 40% of corporate profits—far beyond anything in the past. Back in the 1950s, as for many years before, the United States’ economy was based largely on production. The United States was the great manufacturing center of the world. Financial institutions used to be a relatively small part of the economy, and their task was to distribute unused assets, like bank savings, to productive activity. That’s a contribution to the economy. Regulatory system was established. Banks were regulated, the commercial and investment banks were separated [to] cut back their risky investment practices that could harm private people. There were, remember, no financial crashes during the period of regulation. By the 1970s that changed.

In the United States, the history of growing inequality is really a history of the systematic dismantling of the New Deal. Chomsky points out that Richard Nixon was the last New Deal President, though he is rarely recognized as such. Starting from Franklin D. Roosevelt in the early 1930s all the way to Richard Nixon in the early 1970s, the United States Government’s domestic spending shows a continuous upward trend. This trend started to reverse with Jimmy Carter, who was the first President to increase the Social Security tax and reduce the capital gains tax, and who started the process of deregulation. The dismantling of the New Deal became an increasingly important priority in the successive administrations of Ronald Reagan, George H. W. Bush, and Bill Clinton. Since the mid-1970s, both the Republicans and the Democrats have played an active role in this dismantling.

Many of the financial regulations put in place by President Roosevelt under the New Deal were intended to prevent risky behavior on Wall Street, and these regulations functioned well in preventing financial bubbles and crashes. The most famous of these was the Glass–Steagall legislation (named after Senators Carter Glass and Henry Steagall), also known as the Banking Act of 1933 (revised in 1935). Among other financial reforms, the Glass–Steagall Act established a firm separation between commercial and investment banking. Commercial banks could issue short-term loans but were prohibited from speculating with depositors’ money, while investment banks could invest in equity and long-term loans but were not allowed to take deposits. These provisions of the Glass–Steagall Act were repealed under Bill Clinton as part of his drive towards deregulating the financial sector.


President Franklin Roosevelt at the signing of the Banking Act in 1933.


The role of the United States Congress in the dismantling of the Glass-Steagall Act is especially instructive. Since the mid-70s, no fewer than 25 attempts were made to repeal that law. In 1991, the George H. W. Bush administration tried to amend the law so that commercial banks could participate in investment activities. The House voted 216–200 against the proposed amendment. Seven years later, in 1998, the House passed very similar legislation 214–213. However, these numbers don’t tell the full story.

As Roslyn Fuller explains in her book Beasts and Gods: How Democracy Changed its Meaning and Lost its Purpose (2015), there was a critical difference between the two pieces of legislation. The 1991 reform effort favored banking interests while the one in 1998 favored insurance and investment interests. The latter were against the 1991 reform, and so they gave substantial donations to Congressional Democrats, who were in the majority, in order to prevent the law from being passed. Their spending paid off, as 74% of Democrats opposed the amendment, as compared to only 22% of Republicans. The same interests then supported the 1998 version of the legislation and, to ensure success, significantly increased their financial contributions to Congressional Republicans, who were now in the majority. As a result, this time 77% of Republicans supported the amendment, as compared to only 38% of Democrats.

Out of the 182 representatives who voted in both 1991 and 1998, two-thirds switched their votes depending on which way the wind was blowing. In 1991, Democrats voted in favor of the insurance and investment companies from whom they were receiving substantial sums of money, but by 1998 the money supply had shifted in favor of the Republicans, which made it rational for the Democrats to start favoring their more reliable donors from the banking industry. On the other hand, the Republicans supported the banking interests in 1991, but changed their votes seven years later in response to the increased generosity of the insurance and investment interests.

Finally, in 1999, the Congress passed the Financial Services Modernization Act with bipartisan support, putting the final nail in the coffin of the Glass-Steagall Act. By this time, the interests of both banking and investment companies had converged, and this was reflected in how the House voted: 362–57 in favor.


President Bill Clinton signing the Gramm–Leach–Bliley Act in 1999.

As the New York Times reported at the time, Treasury Secretary Larry Summers was ecstatic when the Congress passed the Financial Services Modernization Act (also known as the Gramm–Leach–Bliley Act): “This historic legislation will better enable American companies to compete in the new economy.” Senator Charles E. Schumer (D-New York) praised the new legislation: “There are many reasons for this bill, but first and foremost is to ensure that U.S. financial firms remain competitive.” In contrast to these optimistic assessments, Senator Byron L. Dorgan (D-North Dakota) made the following prescient comment: “I think we will look back in 10 years’ time and say we should not have done this but we did because we forgot the lessons of the past, and that that which is true in the 1930’s is true in 2010.” Today, President Clinton’s repeal of the Glass-Steagall Act is widely believed to have paved the way for conditions that made the financial meltdown of 2008 much worse than it might have been; others believe that the meltdown would not have happened if that New Deal legislation were still in place.

As already mentioned, however, the trend toward financial deregulation had started much earlier—specifically, during the Carter administration. The landmark event was a 1978 decision by the US Supreme Court that practically ended all limits on interest rates. In Marquette National Bank v. First of Omaha Service Corp., the Supreme Court ruled that national banks did not have to follow the interest rate regulations of the borrower’s state but only those of their own home states. This decision provided a powerful incentive for financial companies to relocate to the states with the least onerous regulations, thereby encouraging states to abolish anti-usury laws and end interest rate ceilings. Two years later, the U.S. Congress passed the Depository Institutions Deregulation and Monetary Control Act, which included a provision exempting federally chartered savings banks, installment plan sellers, and chartered loan companies from state mandated anti-usury laws. As a result, both the federal judiciary and the legislature effectively ended the age-old practice of capping interest rates. The resulting competitive pressure led to an explosion of financial services, since a lot more money could now be made through lending than through investing in the real economy.

Writing in Harper’s, Thomas Geoghegan explained “how the dismantling of usury laws” produced such results as “the loss of our industrial base” and “the loss of our best middle-class jobs.”

First, thanks to the uncapping of interest rates, we shifted capital into the financial sector, with its relatively high returns. Second, as we shifted capital out of globally competitive manufacturing, we ran bigger trade deficits. Third, as we ran bigger trade deficits, we required bigger inflows of foreign capital. We had “cheap money” flooding in from China, Saudi Arabia, and even the Fourth World. May God forgive us — we even had capital coming in from Honduras. Fourth, the banks got even more money, and they didn’t even consider putting it back into manufacturing. They stuffed it into derivatives and other forms of gambling, because that’s the kind of thing that got the “normal” big return; that is, not five percent but 35 percent or even more.

As the financial sector became more profitable than manufacturing, it started to bloat up in unprecedented ways. In the documentary, Noam Chomsky describes the results of financialization as follows:

You started getting that huge increase in the flows of speculative capital—just [an] astronomical increase—enormous changes in the financial sector from traditional banks to risky investments, complex financial instruments, money manipulation, and so on. Increasingly, the business of the country isn’t production, at least not here. The primary business here is business. … By the 1970s, [U.S. corporations], say General Electric, could make more profit playing games with money than [they] could by producing in the United States. You have to remember that General Electric is substantially a financial institution today. It makes half its profits just by moving money around in complicated ways. And it’s very unclear that they are doing anything that’s of value to the economy.

The following graphs tell the story of how the decline of American manufacturing has been accompanied by the rise of American finance.


Finance & Manufacturing share of domestic corporate profits.

Offshoring of Production

The second factor in how the economy was redesigned to suit the wealthy was the offshoring of production. This consists of two basic components: (1) lobbying governments to deregulate the movement of goods and capital across national borders; and (2) moving factories out of countries that have strong labor and environmental protection laws to countries where workers can be made to work longer hours at lower wages.


Chomsky explains:

The trade system was reconstructed with a very explicit design of putting working people in competition with each other all over the world. And what it’s led to is a reduction in the share of income on the part of working people. It’s been particularly striking in the United States, but it’s happening worldwide. It means an American worker is in competition with the super-exploited worker in China. Meanwhile, highly paid professionals are protected. They are not placed in competition with the rest of the world—far from it. And, of course, the capital is free to move. Workers aren’t free to move; labor can’t move, but capital can. Well, again, going back to the classics like Adam Smith, as he pointed out, the free circulation of labor is the foundation of any free trade system. But workers are pretty much stuck. The wealthy and the privileged are protected, so you get obvious consequences.

As the world is increasingly integrated into the global economy, large business corporations are able to lower their labor costs by manufacturing their products in relatively poor countries. This allows them to bypass the rights that working people have won in more developed countries, as well as avoid the various environmental regulations. Exploitation of labor goes on in places like India, China, Bangladesh, and Mexico, while unemployment rises in the United States. Capital can move anywhere in the world in search of higher profits, but workers aren’t allowed to go from one country to another in search of higher wages or better working conditions.


Workers in a Chinese iPhone factory.

Globalization protects the owners of capital while further degrading those who have nothing to sell but their labor. The consequences include increasing wealth for the already wealthy and diminishing prospects for the working class. Chomsky notes that such consequences are not accidental; they are the intended goals for which offshoring of production is pursued in the first place. Indeed, it is not uncommon for economic policy-makers to proudly take credit for institutionalizing policies that intensify the financial insecurity of the lower and middle classes. Such insecurity helps maintain obedience on the part of the population.

Alan Greenspan, when he testified to Congress [in 1997], he explained his success in running the economy as based on what he called “greater worker insecurity.” Keep workers insecure, they’re going to be under control; they’re not going to ask for, say, decent wages or decent working conditions, or the opportunity of free association, meaning unionize. Now, for the “masters of mankind,” that’s fine—they make their profits, but for the population it’s devastating.

The Greenspan quotation that Chomsky is referring to is from “Monetary Policy Report to the Congress,” dated February 26, 1997. Greenspan, who was chairman of the Federal Reserve from 1987 to 2006, had made the following comment: “Atypical restraint on compensation increases has been evident for a few years now and appears to be mainly the consequence of greater worker insecurity.”

At this point in the narrative, the film digresses a bit from the main discussion to provide an introduction to Noam Chomsky himself. We learn about Chomsky as a groundbreaking intellectual who transformed the field of linguistics in the 1950s before becoming famous for his public opposition to the Vietnam War during the mid-1960s. We watch a short clip from William F. Buckley’s interview of (and/or debate with) a much younger Noam Chomsky that took place in New York on April 3, 1969, as part of the TV show “Firing Line.” The video of the debate is available here, while a complete transcript can be found here. In “Requiem,” Buckley is seen introducing a younger Chomsky as follows:

Professor Noam Chomsky is listed in anybody’s catalog as one of the half-dozen top heroes of the New Left. This standing he achieved by adopting over the past two or three years a series of adamant positions rejecting at least American foreign policy, at most America itself.

The older Chomsky then responds to the charge of “anti-Americanism,” which is fun to watch. He points out that in all societies anyone who criticizes the status quo usually becomes a target of various types of attacks, but it is only under totalitarian rule that the critics of concentrated wealth and power are accused of being enemies of their own countries—as in “anti-Soviet” or “anti-American.” The very existence of such forms of verbal abuse in a “democratic” society is rather revealing.

Systemic Problems (2)

The following discussion is based on, and inspired by, the work of Jack Harich and associates, which can be accessed here.

A systemic problem is one that originates in the structure of a system rather than in the behavior of individuals participating in the system. This does not mean that individuals play no role in causing the problem; rather, it means that replacing the individuals, or attempting to change their behavior, without changing the system’s structure will not solve the problem. This is because in any social system there is a dynamic and dialectical relationship between the structure of the system (including all the interconnections, feedback loops, explicit or implicit rules, goals, etc.) and the individuals who participate in the system. The human factor is important—it is, after all, the people whose aggregate behavior is largely responsible for bringing the structure into existence and whose continuing participation is what maintains that structure over time. The structure, however, tends to acquire a reality of its own—becoming stronger than its individual participants in many ways—that both influences and limits the behavior choices of the people operating within the system. The structure not only encourages and rewards certain behavioral tendencies but also makes alternative choices harder to imagine, let alone implement.


A social problem that persists—and often gets worse—over time, despite the application of various intuitive or commonsense solutions, is likely to be a systemic problem whose root causes lie in the structure of the system. Such a problem cannot be solved unless its root causes are accurately identified and the appropriate solution elements devised to push at high leverage points in order to address those root causes. A root cause is defined as the deepest element in a causal chain that is susceptible to resolution.

When a social system operates in a way that produces desirable outcomes, we can say that it is functioning in the right mode; when it operates in a way that produces undesirable outcomes, we can describe it as functioning in the wrong mode. Solving a social problem is therefore a matter of shifting the mode of the relevant social system from wrong to right. But the persistence of a social problem over a long period of time, despite huge efforts to solve it, indicates not only that the relevant social system is operating in the wrong mode, but that it has somehow become locked into that undesirable state. In other words, structural mechanisms such as feedback loops have developed that prevent the system from changing in the desired direction. As soon as any effort to shift the system’s mode starts to succeed, these mechanisms spring into action and immediately reverse those gains. When effort after effort fails to change the mode of the system, activists ought to realize that trying harder is not the solution. They need to go back to the drawing board and examine their own assumptions about the causes of the problem. In most cases of stubborn social problems, the lack of success is not due to a deficiency of effort on the part of the activists but due to their incorrect diagnosis.

To arrive at the correct causal analysis of a persistent large-scale problem, social diagnosticians must consider three types of causal forces: (1) root cause forces, (2) superficial solution forces, and (3) fundamental solution forces. These forces (as well as new root cause forces) appear in blue text in Jack Harich’s “Standard Social Force Diagram” depicted below.


When a social system is locked into the wrong mode, that’s because root cause forces arising from root causes are operating to keep the system in that mode and to oppose and defeat any and all efforts to bring about a mode change. A system operating in the wrong mode produces undesirable outcomes or symptoms, such as deforestation, high morbidity, ineffective government, too many industrial accidents, lack of sufficient housing, and so on. Concerned citizens, who find these outcomes disturbing and unacceptable, reason their way backwards from the symptoms to their possible causes. Most of the time, however, they end their analysis prematurely—as soon as they have identified what appears to them as a plausible explanation for the symptoms but is, in reality, only a set of intermediate causes. Thinking that they have found what they were looking for, they focus their efforts on unleashing superficial solution forces, which leads them to devise superficial solutions that push on low leverage points in order to address the intermediate causes. As expected, their strategy fails to solve the problem, since superficial solution forces are, by definition, weaker than the root cause forces. The error, of course, lies in the incorrect diagnosis.

A successful strategy to bring about a systemic change must begin with the correct diagnosis. Instead of ending their analysis as soon as they’ve found the first plausible explanation, activists must keep digging until they have identified the root causes of the problem that lie hidden in the system’s structure, sometimes deep underneath the intermediate causes. Once the activists have identified the root causes, they will be able to focus their efforts on unleashing fundamental solution forces, which will lead them to devise fundamental solutions that push on high leverage points in order to address the root causes. Since fundamental solution forces are, by definition, stronger than root cause forces, this is the only strategy that can bring about an actual mode change in the system. As the fundamental solution forces introduce into the system new root causes, the latter give rise to new intermediate causes, which, in turn, produce new symptoms (or desirable outcomes). The new root cause forces will also give rise to new structural mechanisms, including feedback loops, that will keep the system locked into the right mode. The persistence of the desirable outcomes over time, and the ability of the system to maintain itself in the right mode despite any opposing efforts, will indicate that the system has, in fact, permanently shifted from being in the wrong mode to being in the right mode. This is the definition of a systemic change. Anything short of that does not deserve to be called a “success.”
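The arithmetic of this tug-of-war can be captured in a toy simulation. Everything below is my own illustration, not Harich’s actual model: the system’s mode is reduced to a single number between 0.0 (wrong mode) and 1.0 (right mode), and the force values are arbitrary.

```python
# A toy sketch (my own illustration, not from Harich's work) of why superficial
# solution forces fail while fundamental solution forces succeed. Root cause
# forces continually pull the mode back toward 0.0, modeling the feedback loops
# that reverse activists' gains.

def run_system(solution_force, root_cause_force=0.5, steps=200):
    """Return the system's final mode after a tug-of-war between forces."""
    mode = 0.0  # the system starts locked in the wrong mode
    for _ in range(steps):
        mode += 0.05 * (solution_force - root_cause_force)
        mode = max(0.0, min(1.0, mode))  # the mode is bounded between the extremes
        if mode == 1.0:
            # The fundamental solution has installed new root causes and new
            # feedback loops; the old root cause forces no longer operate.
            root_cause_force = 0.0
    return mode

print(run_system(solution_force=0.3))  # superficial: weaker than 0.5, gains reversed to 0.0
print(run_system(solution_force=0.9))  # fundamental: stronger than 0.5, locks in at 1.0
```

However long the simulation runs, the weaker force never moves the system; only a force that exceeds the root cause force produces a mode change that then maintains itself.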

The Fifth Discipline (1)

The following post consists of quotations from Peter Senge’s book, The Fifth Discipline: The Art and Practice of the Learning Organization, along with my own attempts at paraphrasing them. My purpose is to put in one place the most important lessons that I think I should learn from reading The Fifth Discipline, and so the following material is best described as consisting of my personal notes more than anything else. Since this isn’t a summary of the book, I will select passages that appeal to me for one reason or another, and not necessarily because they’re central to the author’s argument.

There are three basic principles of systems thinking, as described in chapter 3.

Structure Influences Behavior: Different people in the same structure tend to produce qualitatively similar results. When there are problems, or performance fails to live up to what is intended, it is easy to find someone or something to blame. But, more often than we realize, systems cause their own crises, not external forces or individuals’ mistakes.

Structure in Human Systems is Subtle: We tend to think of “structure” as external constraints on the individual. But structure in complex living systems … means the basic interrelationships that control behavior. In human systems, structure includes how people make decisions—the “operating policies” whereby we translate perceptions, goals, rules, and norms into action.

Leverage Often Comes from New Ways of Thinking: In human systems, people often have potential leverage that they do not exercise because they focus only on their own decisions and ignore how their decisions affect others.

The structure of a system is found in the pattern of the interrelationships among the system’s key variables. We can easily identify the key variables of a system, but it takes time and effort to understand how they influence each other and how the patterns of their mutual influence change in response to a change elsewhere in the system. Even though structures are not obvious, we can discern their power simply by noticing the feeling that compels us to act in particular ways and do what is expected of us. When you find yourself saying or thinking “I have no choice but to …,” it is very likely that you’re facing a structural constraint or imperative.

Indeed, the most important factor that shapes people’s actions is the structure of the system within which they are operating. This is why very different individuals often end up behaving in very similar ways when they are placed within the same position in a system—for instance, an otherwise peaceful person can turn overnight into an arrogant bully when assigned the position of a prison guard. Yet, it would be incorrect to assume that people are merely cogs in a machine with no agency of their own. In the case of a human system—such as a school, a company, or a prison—individuals do not exist apart from the system’s structure; rather, they are very much an integral part of it. This means that blaming someone or something else is not helpful, since it doesn’t take us off the hook; nor is blaming the “system” a legitimate excuse. It also means that we are not at the mercy of external forces that lie somewhere else, above and beyond our control; rather, we have considerable leverage precisely because we are intimately connected to the web of influences that defines a system’s structure. Just as the system’s structure influences and shapes our behavior, we too have the power to change some of the structure within which we function. The flow of influence is active in both directions.

Towards the end of chapter 3, Senge discusses the three levels at which any complex situation can be analyzed and explained. The resulting explanations may all be valid, but they do not have the same usefulness. Event explanations focus on gathering such information as “who did what to whom” and tend to trap us in a reactive mode. Pattern of behavior explanations focus on identifying long-term trends and figuring out their implications; this approach begins to liberate us from the reactive mode and allows us to deliberately respond to the situation. Structural explanations are the least common and the most powerful. They focus on understanding the system’s structure in order to find the deeper causes that generate the observed patterns of behavior; such explanations can help us identify the root causes of the situation, thereby empowering us to take the appropriate corrective action. According to Senge:

The reason that structural explanations are so important is that only they address the underlying causes of behavior at a level at which patterns of behavior can be changed. Structure produces behavior, and changing underlying structures can produce different patterns of behavior. In this sense, structural explanations are inherently generative. Moreover, since structure in human systems includes the “operating policies” of the decision makers in the system, redesigning our own decision making redesigns the system structure.

It seems that one progresses through these levels of explanation by repeatedly asking the “why” question. Suppose a particular event happened—a relatively new car broke down, a marriage disintegrated, a child contracted an illness that is supposed to have been eradicated, a family lost their home to foreclosure. When we ask “why” for the first time, the answer will provide an event explanation, consisting of the specific and immediate chain of causation that led to the particular event. Thus, the house was foreclosed because Mr. Smith was laid off by his employer, which prevented him from continuing to make his mortgage payments.

Asking “why” a second time will broaden our view and help us see that the specific behavior of particular individuals that caused the event in question was not a rare or isolated occurrence; rather, similar behavior could be observed in many other cases as well. In other words, we find that we are dealing with a relatively common phenomenon resulting from the same overall pattern of behavior. There are larger processes and long-term trends at play. It turns out, for example, that it wasn’t just Mr. Smith who lost his home; tens of thousands of similar cases were happening across the nation, all at the same time, mainly because of certain reckless practices by creditors.

Understanding the pattern of behavior explanation is empowering, for it allows us to see broader trends, which helps us anticipate future events and plan accordingly—but we have not yet arrived at the root causes. Moving beyond this second level of explanation calls for asking the “why” question a third time. Why is this particular pattern of behavior occurring? What elements of the system are allowing and/or incentivizing people to act in this way? To answer this question, we’d have to go into the structure of the system where the underlying causes are to be found.

The structural explanation will help us see, for example, that the reckless practices by the creditors were made possible by the popularity of certain financial instruments on Wall Street, which in turn was due to financial deregulation many years earlier as well as decline in government oversight, and that these two trends were themselves caused by other processes or mechanisms. We would have to keep digging until we reach the root causes, i.e., the set of final elements in the causal chain that are susceptible to resolution. This is the level at which we must apply the corrective action if we are to have any real impact on the situation.


To summarize, focusing on a particular event and asking “why” produces an event explanation, revealing only the immediate causes; problem-solvers are trapped in a reactive mode. Asking “why” a second time yields a pattern of behavior explanation, revealing the intermediate causes; problem-solvers can now respond to shifting trends instead of reacting to particular events. Asking “why” a third time leads to a structural explanation, revealing the root causes; such an explanation is generative, since it empowers the problem-solvers to exercise maximum leverage.
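The three rounds of “why” can be pictured as walking down a chain of causes until no deeper link is known. The sketch below is my own illustration rather than anything from Senge’s book; the links reuse the hypothetical foreclosure example from the text.

```python
# A minimal sketch (my own illustration, not from Senge's book) of climbing the
# three levels of explanation by repeatedly asking "why." Each link in the
# chain is hypothetical, taken from the foreclosure example in the text.

causal_chain = {
    "home foreclosed": "Mr. Smith was laid off",
    "Mr. Smith was laid off": "mass layoffs amid reckless lending practices",
    "mass layoffs amid reckless lending practices": "financial deregulation",
}

def ask_why(symptom, chain):
    """Follow the chain from a symptom down to the deepest known cause."""
    levels = ["event explanation",
              "pattern of behavior explanation",
              "structural explanation"]
    cause = symptom
    for level in levels:
        if cause not in chain:
            break  # no deeper cause is known at this level
        cause = chain[cause]
        print(f"Why? -> {cause} ({level})")
    return cause  # the deepest cause reached: the candidate root cause

ask_why("home foreclosed", causal_chain)
```

Stopping after the first lookup corresponds to the reactive, event-level diagnosis; only traversing the whole chain surfaces the structural candidate for a root cause.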

Systemic Problems (1)

The following discussion is based on, and inspired by, the work of Jack Harich and associates, found at their website.

Problems come in all shapes and sizes, and they vary in terms of causes, scope, duration, etc. Here I am concerned with problems that seem to result directly or indirectly from the decisions made by individuals. Some of these problems are rarely encountered, since they occur due to mistakes or accidents, while other problems tend to occur over and over again. The problems that show up repeatedly may simply be the result of individuals making wrong decisions on a regular basis. Such problems are relatively easy to solve, since it usually doesn’t take a lot of effort to trace the symptoms of the problem to the particular decisions of particular individuals and then educate or train those individuals so they start making the right decisions.

Not all problems are so easy to solve. The most important problems are those that are so widespread that we might file them under the category of “social problems.” Typically, these problems persist over multiple generations and tend to resist commonsense or intuitive solutions. Examples include drug addiction, prostitution, obesity, and domestic violence. In any given case, we may be tempted to identify the victim’s own past decisions as the cause of his or her current undesirable state; for example, obesity can be traced to the consumption of large amounts of junk food and domestic violence can be traced to poor judgment in selecting a spouse. Upon deeper investigation, however, we discover that those past decisions were made under constraints over which the individual had little or no control, and that relative to those constraints the decisions seemed pretty rational at the time they were made. This insight becomes even more compelling when we approach these problems not as individual events or isolated occurrences but in terms of large-scale patterns that repeat themselves over and over again. If only a handful of people were to become addicted to alcohol or another drug in the course of a year, we may attribute the problem to the individuals’ personal situation and bad decisions; but when drug addiction starts affecting millions of people at any given time, we have to abandon our individualistic approach and start thinking in terms of social systems. For this to happen, we must zoom out in order to look at the behavior of many more people than just the ones who are directly involved, at which point we would realize that the problems that occur repeatedly and on a large, society-wide scale cannot be traced to particular individuals making wrong decisions. Rather, these problems occur precisely because all who are directly or indirectly involved are trying to do the best they can with the resources they have under the objective conditions they’re facing.

Such problems may be called systemic problems because they originate not so much from the behavior or decisions of individuals, but rather from the structure of the system itself.

Imagine a situation where everyone is making the right decisions—they are doing more or less what they’re supposed to—and it is precisely these decisions that are giving rise to certain undesirable symptoms on a large scale. You might ask: how can right decisions lead to problems? The answer is that these decisions are “right” only within the constraints of the particular system that is shaping the behavior of the individuals in question; and so, when we look at the situation from a viewpoint that is not constrained by the system, it becomes obvious that the decisions being made are actually wrong. This does not mean that individual behavior plays no role in the genesis of these problems. Individuals, after all, do not stand totally aloof or apart from the system; they participate in the system and thereby make it happen through their decisions. The point, rather, is that the decisions of individuals are themselves influenced by the rules and goals of the system in which they participate. We cannot, therefore, pin the blame for a systemic problem on anyone in particular. The individuals making the problematic decisions are part of the system, yet they are not the real culprits; their decisions only represent the intermediate causes of the problem. The root causes of the problem lie elsewhere, i.e., in the way the system itself is structured.

Consider obesity as a case in point. The responsibility for the problem of obesity in the United States cannot be placed simply on the poor eating habits of large sections of the population. We must ask: why are so many people eating unhealthy food? It turns out that certain government policies provide strong incentives for doing so, including subsidies for corn farming that make high-fructose corn syrup available at low prices, which results in junk food being considerably cheaper than healthy options like fruits and vegetables. And if we ask the next logical question—why do these policies exist in the first place?—we would only uncover additional layers of systemic causation.

The lesson here should be obvious: We cannot solve a systemic problem simply by replacing some individuals or by educating or training them differently; the fresh arrivals will behave in the same way as their predecessors, and education or training will not have a major effect—unless we change the rules and/or the goals of the system. The only way to solve a systemic problem is to find and implement a systemic solution. The individuals, through their decisions, are merely responding to the expectations and constraints that the system imposes on them; if those expectations and constraints are changed, so will their decisions.

A systemic solution is one that changes those elements of the system’s structure that represent the root causes of the problem. In contrast, a superficial solution is one that addresses the intermediate causes while ignoring the root causes. By definition, a superficial solution cannot solve the problem, or solves it only at an insufficiently small scale and/or temporarily. Thus, Michelle Obama’s effort to inspire and encourage healthy eating habits among kids is a superficial solution to the problem of obesity, just as her husband’s effort to kill suspected terrorists through drone strikes is a superficial solution to the problem of terrorism. Once it becomes obvious that a particular solution is not working, it’s time to take a step back and consider the possibility that the solution being implemented is only addressing intermediate causes. What is needed at that point is a deeper analysis of the systemic structure (i.e., the rules and goals of the system) in order to find the root causes of the problem so that systemic solutions can be devised and implemented. The insistence on using failed superficial solutions not only prevents the problem from being solved while wasting time, energy, and resources; it also makes the problem bigger and more complicated, and therefore harder to solve. In due course, superficial solutions themselves become part of the problem.

A solution consists of several solution elements. The place in the structure of a system where a particular solution element is to be applied is known as a leverage point. Results vary depending on whether the leverage point being pushed will affect the intermediate causes or the root causes. When a relatively large amount of force is applied at a particular leverage point in the system for a relatively long period of time but the system shows only a small amount of change, then this provides good evidence that we’ve been pushing at a low leverage point, and that our effort only affected the intermediate causes. In other words, we were implementing a superficial solution, not a systemic solution. On the other hand, when the solution elements are applied at high leverage points in the system, they affect the root causes of the problem; in this case, the application of a relatively small amount of force can produce a large amount of change in the system.
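The contrast boils down to simple arithmetic: the change produced is roughly the force applied multiplied by the leverage of the point being pushed. The numbers below are arbitrary and my own illustration, not values from the source.

```python
# A back-of-the-envelope sketch (my own illustration, with made-up numbers) of
# the leverage-point arithmetic described above.

def change_produced(force, leverage):
    """Change in the system = force applied x leverage of the point pushed."""
    return force * leverage

low_leverage_point = 1     # pushing here affects only intermediate causes
high_leverage_point = 100  # pushing here affects root causes

# A large, sustained effort at a low leverage point yields little change...
print(change_produced(force=50, leverage=low_leverage_point))   # 50
# ...while a modest effort at a high leverage point transforms the system.
print(change_produced(force=2, leverage=high_leverage_point))   # 200
```

Reading the comparison backwards is the diagnostic move the paragraph describes: if enormous force keeps producing tiny change, the leverage of the point being pushed must be low.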

Superficial solutions are based on an incomplete understanding of the system’s structure, which is why they provide solution elements that are applied at low leverage points and only address the intermediate causes of the problem. Systemic solutions, in contrast, are based on a sufficiently complete understanding of the system’s structure, which is why they provide solution elements that are applied at high leverage points and affect the root causes of the problem.

Requiem for the American Dream (3)

Throughout the history of the United States, there has been a constant struggle between two tendencies: On the one hand, we have “a democratizing tendency that’s mostly coming from the population—a pressure from below.” On the other hand, there is the tendency coming from the elite to maintain the status quo, and to reverse any concessions that may have been given in response to popular demands—a pressure from the top. As a result of these two tendencies acting and reacting in relation to each other, we see in our history alternating “periods of progress” and “periods of regression.” Thus, the 1960s constituted a period of “significant democratization,” and so the rights and freedoms won during that time brought about a powerful backlash from the elite, resulting in decades of regression and the reversal of those victories.

According to Chomsky:

[In the 1960s] sectors of the population that were usually passive and apathetic became organized, active, started pressing their demands. And they became more and more involved in decision-making, activism, and so on. It just changed the consciousness in a lot of ways: minority rights, women’s rights, concern for the environment, opposition to aggression, concern for other people. These are all civilizing effects. And that caused great fear. . . . I should have, but I didn’t anticipate the power of the reaction to these civilizing effects of the 60s—the backlash!


Chomsky continues: “There has been an enormous, concentrated, coordinated business offensive beginning in the 70s, to try to beat back the egalitarian efforts that went right through the Nixon years.” Chomsky suggests two key documents from the early 1970s as excellent sources for understanding what the elite were thinking at that time and how they decided to respond to the challenge of democracy: (1) the Powell Memorandum from the conservative side of the political spectrum and, (2) from the liberal side, the first major report of the Trilateral Commission, titled The Crisis of Democracy. Both documents reveal the American elite’s alarm at the fact that the population is becoming too informed, too conscious, and too assertive in demanding its rights, as well as their recognition of the urgent need to influence the institutions that shape public opinion.

Chomsky views the Powell Memo and the Trilateral Commission’s report as representing the two ends of the extremely narrow range of thinking that goes on among the American elite. Despite their apparent differences in ideology, both the conservative and the liberal camps agree on doing everything possible to expand capitalism and keep the population in its place by subverting the democratic impulse.

The Powell Memo: “Attack on American Free Enterprise System”

In the late 60s and early 70s, the U.S. federal government responded to popular pressure by expanding its regulatory control over big business. This included legislation meant to ensure environmental protection, occupational safety, and the safeguarding of consumer rights. The new regulatory regime was seen by the business elite as an attack on their profits, leading them to the conclusion that they must organize politically in order to maintain their economic power.

One of the most important documents that we must study to understand the elite reaction is the famous “Powell Memorandum.” Titled “Attack on American Free Enterprise System,” the memo was submitted to the U.S. Chamber of Commerce on August 23, 1971. It was written by a corporate lawyer named Lewis Powell, at the request of his friend and neighbor Eugene Sydnor Jr., who at the time was chair of the Education Committee of the Chamber of Commerce. Powell himself was appointed by Richard Nixon to the U.S. Supreme Court only a couple of months after he wrote his famous memo. The document was originally intended to be confidential, but it was soon leaked to the press and subsequently published in the newsletter of the Chamber of Commerce.

The key point of the Powell Memo was that business must use its financial power for political purposes; it was vital for the business elite to gain influence over the government and the legislature in order to ensure the survival and empowerment of the “free enterprise system.” Powell argued that “Business must learn the lesson … that political power is necessary; that such power must be assiduously cultivated; and that when necessary, it must be used aggressively and with determination—without embarrassment and without the reluctance which has been so characteristic of American business.” To achieve greater influence over the political sphere, business must organize itself and plan for the long-term. “Strength lies in organization, in careful long-range planning and implementation, in consistency of action over an indefinite period of years, in the scale of financing available only through joint effort, and in the political power available only through united action and national organizations.”

An important part of Powell’s prescription was the necessity of changing public opinion in favor of the “free enterprise system” through influencing the media and the education system. He wrote:

Reaching the campus and the secondary schools is vital for the long-term. Reaching the public generally may be more important for the shorter term. The first essential is to establish the staffs of eminent scholars, writers and speakers, who will do the thinking, the analysis, the writing and the speaking. It will also be essential to have staff personnel who are thoroughly familiar with the media, and know how to most effectively communicate with the public. The national television networks should be monitored in the same way that textbooks should be kept under constant surveillance. This applies not merely to so-called educational programs (such as ‘Selling of the Pentagon’), but to the daily ‘news analysis’ which so often includes the most insidious type of criticism of the enterprise system.

While changing public opinion was a slow and gradual process, Powell emphasized that business must maintain an uncompromising focus on gaining political power.

But one should not postpone more direct political action, while awaiting the gradual change in public opinion to be effected through education and information. Business must learn the lesson, long ago learned by labor and other self-interest groups. This is the lesson that political power is necessary; that such power must be assiduously cultivated; and that when necessary, it must be used aggressively and with determination—without embarrassment and without the reluctance which has been so characteristic of American business.

The Powell Memo is available here, along with other primary sources that provide additional background. Commentaries are found here, here, here, here, and here.

Report of the Trilateral Commission: The Crisis of Democracy

The Trilateral Commission was created in July 1973 under the initiative of David Rockefeller. According to the Commission’s website, it was formed “by private citizens of Japan, Europe (European Union countries), and North America (United States and Canada) to foster closer cooperation among these core industrialized areas of the world with shared leadership responsibilities in the wider international system.” The American members of the Trilateral Commission included, among others, Henry D. Owen (Brookings Institution), George S. Franklin (Council on Foreign Relations), Robert R. Bowie (Harvard Center for International Affairs), William Scranton (former Governor of Pennsylvania), as well as Alan Greenspan and Paul Volcker (later heads of the Federal Reserve). Members of the Trilateral Commission were heavily represented in the Carter Administration, including Walter Mondale (Vice President), Zbigniew Brzezinski (National Security Adviser), Cyrus R. Vance (Secretary of State), W. Michael Blumenthal (Secretary of the Treasury), Harold Brown (Secretary of Defense), and Andrew Young (Ambassador to the United Nations). Jimmy Carter himself is a member, as are Henry Kissinger and Bill Clinton. It is interesting to note that President Obama has appointed several members of the Trilateral Commission to important positions in his own administration, including Tim Geithner (Secretary of the Treasury), Susan Rice (Ambassador to the United Nations), and James L. Jones (National Security Adviser), among others.

The first report of the Trilateral Commission was published in 1975 under the title The Crisis of Democracy. The chapter on the United States was written by Samuel P. Huntington, who is now known mostly for his “clash of civilizations” thesis. Huntington begins by identifying the democratizing tendency that had been unleashed in the previous decade:

The 1960s witnessed a dramatic renewal of the democratic spirit in America. The predominant trends of that decade involved the challenging of the authority of established political, social, and economic institutions, increased popular participation in and control over those institutions, a reaction against the concentration of power in the executive branch of the federal government and in favor of the reassertion of the power of Congress and of state and local government, renewed commitment to the idea of equality on the part of intellectuals and other elites, the emergence of the “public interest” lobbying groups, increased concern for the rights of and provisions of opportunities for minorities and women to participate in the polity and economy, and a pervasive criticism of those who possessed or were even thought to possess excessive power or wealth. The spirit of protest, the spirit of equality, the impulse to expose and correct inequities were abroad in the land. … It was a decade of democratic surge and of the reassertion of democratic egalitarianism.

Huntington then goes on to explain what he believes to be the heart of the problem, the inverse relationship between the “vitality” of a society and its “governability.” Too much vitality in the general population leads to the erosion of authority, making the society less governable from the viewpoint of the elite.

The essence of the democratic surge of the 1960s was a general challenge to existing systems of authority, public and private. In one form or another, this challenge manifested itself in the family, the university, business, public and private associations, politics, the governmental bureaucracy and the military services. People no longer felt the same compulsion to obey those whom they had previously considered superior to themselves in age, rank, status, expertise, character, or talents. Within most organizations, discipline eased and differences in status became blurred. Each group claimed its right to participate equally —and perhaps more than equally—in the decisions which affected itself.

Huntington attributes the erosion of older forms of authority to the fact that the population has become too assertive in demanding equal rights, including the right to participate in both private and public decision-making. It is this change in popular ideology that poses the greatest danger to the ruling class. For Huntington, it is perfectly fine to believe in egalitarian and democratic values so long as it is understood that they cannot be fully established in the real world. Motivated by a new egalitarian ideology, the population is seen as demanding large-scale changes that will effectively turn the structure of society upside down.

American society is characterized by a broad consensus on democratic, liberal, egalitarian values. For much of the time, the commitment to these values is neither passionate nor intense. During periods of rapid social change, however, these democratic and egalitarian values of the American creed are reaffirmed. The intensity of belief during such creedal passion periods leads to the challenging of established authority and to major efforts to change governmental structure to accord more fully with those values.

Huntington predicts that the democratizing tendency of the 1960s, being part of a normal political cycle, will gradually lose steam with the passage of time. Moreover, he argues that it is important that this tendency lose steam; otherwise, it would become difficult for the elite to continue their task of governing the masses.

Predictively, the implication of this analysis is that in due course the democratic surge and its resulting dual distemper in government will moderate. Prescriptively, the implication is that these developments ought to take place in order to avoid the deleterious consequences of the surge and to restore balance between vitality and governability in the democratic system.

Democracy is a good thing, according to Huntington, but only in moderation. The American population has recently become too passionate in demanding greater participation in shaping the nation’s political and economic system. This democratizing tendency is giving birth to an “excess of democracy,” and must therefore be restricted within limits determined by the ruling elite.

Al Smith [former Governor of New York] once remarked that “the only cure for the evils of democracy is more democracy.” Our analysis suggests that applying that cure at the present time could well be adding fuel to the flames. Instead, some of the problems of governance in the United States today stem from an excess of democracy — an “excess of democracy” in much the same sense in which David Donald used the term to refer to the consequences of the Jacksonian revolution which helped to precipitate the Civil War. Needed instead is a greater degree of moderation in democracy.

Huntington identifies two areas in which he would like to see this “moderation in democracy” implemented. First, he insists that “democracy is only one way of constituting authority, and it is not necessarily a universally applicable one.” In fact, democracy is not desirable in most spheres of social life, and the “arenas where democratic procedures are appropriate are … limited.” Second, a well-informed and politically active population poses a serious threat to the maintenance of a democratic system of government, and “the effective operation of a democratic political system usually requires some measure of apathy and noninvolvement on the part of some individuals and groups.” This is evidenced by the fact that “every democratic society has had a marginal population, of greater or lesser size, which has not actively participated in government.” Huntington admits that the marginalization of some groups “is inherently undemocratic,” but then immediately claims that it “has also been one of the factors which has enabled democracy to function effectively.” For Huntington, we can either have a society in which some groups are severely marginalized or one in which all groups are moderately marginalized; what we cannot have is a society in which all groups have equal access to resources and political power. To help stabilize American democracy, Huntington wants to see greater apathy and less political involvement on the part of the population; groups clamoring for more rights should accept some reduction in their egalitarian agenda and settle for whatever they have already achieved: “Less marginality on the part of some groups thus needs to be replaced by more self-restraint on the part of all groups.”

Insofar as The Crisis of Democracy reflects the views of the Trilateral Commission, it reveals the mindset of the liberal wing of the American elite and its similarity to the mindset of the Framers of the U.S. Constitution. In both cases, there is a clear desire to prevent an “excess of democracy.” In both cases, a small group of people decides that it represents the rational and enlightened element of society, and then arrogates to itself the right to rule in a way that bypasses the demands and opinions of the majority. The only difference is that the Framers never claimed that they were establishing a democracy, while the contemporary elite are far more cynical in their use of the English language.

The complete text of The Crisis of Democracy is available here. Commentaries can be found here, here, here, and here.