
More Reflections and Critique of Platonism

Need for Transcendent Justice?

[Here I reply to a correspondent who argued that human-based justice is not real justice.]

Your reference to a “higher ideal of justice” suggests that you’re too mired in a Platonist view of reality. Unless you gain a vision of the form of justice, you have only a poor imitation of justice (or something to that effect).

Contrary to Plato and all other-world doctrines, ‘justice’ is a concept that arose in an earthly, human context. A rough statement of one notion of justice (in a moral context) is simply fairness or fair play. Treating someone fairly implies that her rights are respected and she is treated with the dignity each human deserves. This value is something that can be traced to the type of social beings that we are: social animals who evolved a large brain, making a complex culture possible. Some of our moral values can be traced to family and kin relationships. Justice and other moral values also came about because of cultures, practices, institutions, education, training, and so on. Yes, this is cryptic and incomplete; but we can explore extensive works on this issue, such as that of John Rawls, who offers an interesting theory as to how the concept of justice might have come about. But nothing here demands that, in order to have a complete understanding of justice and just treatment of others, we must believe in some other world or afterlife. Nothing demands — pace Plato — that one must have a vision of perfect justice in order to make sense of the down-to-earth value of justice.

————–
The other point I shall make relates to the disconnect seen by some philosophers between a concern for social justice here on earth and an otherworldly religion, such as that taught in the Gospels and in Paul’s writings. If the major concern is with salvation and gaining a position in the eternal realm, working for social justice here on earth may have a lower priority. Walter Kaufmann raises this objection against those liberal Christians who see the Gospel Jesus as working for social justice. (See his book, Faith of a Heretic.) The Gospel Jesus is mainly concerned with teaching the message of salvation: how to gain eternal life in heaven.

It seems that Plato’s message in his dialogue, Phaedo, agrees with this general teaching. If the important point is the transcendent destiny of the eternal soul, how much concern does one have for achieving justice in this lower, physical world and reducing the suffering of the body, which has little or no value after all?

With respect to your affirming the ideas of reincarnation and Karma as part of your account of justice: Of course, reincarnation is not part of the Christian otherworldly faith, but it is part of the otherworldly faith of several Eastern religions. It is not at all clear that this doctrine contributes positively to a concern with achieving some social justice here on earth. After all, what is happening now — no matter how unjust — is just Karma working itself out; and all things that happen — good or bad — have their just consequences in another existence. I take it this is what you mean by claiming that one will “meet justice in some future life.” Sorry, this just doesn’t do much for me.
—————

Philosophy and Death:

“Philosophy as a preparation for death?” “Philosophy as the study of death?” Why should we accept such characterizations of philosophy? Maybe I’m being disrespectful to Plato, and his version of the great Socrates, when I say “thanks, but no thanks!”

Plato’s dialogue Phaedo recounts a conversation that Socrates allegedly had with some of his friends and disciples as he awaited his death by hemlock. Here Plato credits Socrates with giving proofs of the immortality of the soul and with arguing that genuine philosophy is the study of death. Here death is understood as the soul’s separation from the body and from the physical-material world. This is a good thing because at death the soul returns to that eternal, higher reality, the realm of the forms. Given this picture of reality, we can see why, according to Plato, the philosopher should devote himself to a study of death, i.e., to the soul’s preparation for re-entry into the higher ‘divine’ reality.
The dialogue’s arguments only work if you assume, as this group of ancient Athenians did, that the soul is real, and if you accept the reality of the realm of forms (an eternal, unchanging, non-physical reality). These assumptions are the background for the Socratic arguments that purport to prove that the soul preexisted birth, will survive death, and is immortal. Also working in the background is Plato’s typical degrading of physical and material reality, the body, and material values as being less real and having less value than the ‘objects’ of the higher world. It is easy to see how Plato’s otherworldly philosophy became an inspiration and philosophical ground for some of the mystical, otherworldly philosophies that followed, including Christian other-world religious doctrines and Christian mysticism.

Relevance to contemporary Issues and Philosophies?

Admittedly, Plato’s dialogues and his philosophy are very valuable as a piece of intellectual history. But I cannot imagine how his metaphysical philosophy has much relevance today to anyone sensitive to the work of the sciences and of critical, positive philosophies since the time of Spinoza and the Enlightenment. I believe that the biological sciences, neurology, the cognitive sciences, the linguistic sciences, and linguistic philosophies have shown that the ancient arguments for the reality of a soul, eternal and separate from the body, do not carry very far, and that the ancient Greek arguments for the realm of forms are no longer very persuasive. (None of this should be understood as discounting the relevance of the more down-to-earth moral philosophy of Socrates.)

Some of us find more philosophical inspiration in the rational humanism of Epicurus (based on the atomistic materialism of Democritus) than we find in Plato’s dialogues. Certainly, as an ancient precursor to modern science and rational secular thought, Epicurus is as good an ancient source as Plato. Plato’s student, Aristotle, develops a philosophy which has more to offer the contemporary student than the other-worldly mysticism of Plato.

—————————————————-

If philosophy retains any relevance and importance for people today [and that’s a big “IF”], it is as an attempt to make sense of life here on earth, not some purported afterlife or other-world — and as an attempt to help people deal with the challenges and problems that life presents. A philosophy obsessed with other worlds or ideal worlds, one which degrades our mortal, material existence on earth (as Plato’s does), or a religion emphasizing the life-to-come (e.g., Christianity), would propose such propositions as “Philosophy is a preparation for death” or “Philosophy is the study of death.”

Don’t we have enough to do just trying to deal intelligently and effectively with this life? Why spend our energies on speculations about some ideal world (Plato’s realm of forms)? Why look on life as a preparation for some putative future existence following death? This is Plato’s unfortunate legacy to the Western intellectual world. Of course, the doctrine builders and theologians of Christianity loved him, as do most mathematicians who adopt some of his ideas regarding a higher ideal reality. But for the rest of us, this obsession with an ideal world, an afterlife, or an other-world — along with its devaluation of this world — is simply not a path that we wish to take. For many of us, this ‘philosophy of the other-world’ surely seems to be a premature, sorry resignation from this world, the only reality we can be sure about.

I’m as uncomfortable with this Platonist notion of philosophy as my correspondent is reassured by it. With apologies to all Platonists and spiritualists out there, I shall conclude with some lines of prose on this ever-popular obsession with other-worlds by Friedrich Nietzsche, who had some insight on these things.

“It was suffering and incapacity that created all afterworlds — this and that brief madness of bliss which is experienced only by those who suffer most deeply.
“Weariness that wants to reach the ultimate with one leap, with one fatal leap, a poor ignorant weariness that does not want to want any more: this created all gods and afterworlds.”

(from Thus Spoke Zarathustra, 1st part – translation by Walter Kaufmann)

The Hanging Day that could not happen, but did?

Here’s an interesting paradox you might not have heard before. It is called the paradox of the alleged impossibility of scheduling a surprise hanging, or fire drill, or appearance by Obama, or any such event that is supposed to come as a surprise to the people involved.

This paradox is found in William Poundstone’s book Labyrinths of Reason (Anchor Books, 1988).

There are different ways of stating the paradox. Here’s one of them:

A man, call him Brad, has been convicted of murder and sentenced to hanging. The hanging must take place during the final five-day work week of the year. The judge who imposed the penalty also dictates that the convicted man will not know beforehand which day of the week he will be hanged; in short, he requires that the specific day of execution come as a very bad surprise.

SENTENCE: Brad’s hanging will happen on one day (M, T, W, Th, F) of the final week of the year. But we’re not saying which day.

Brad’s lawyer, Chris, happens also to be a logician. When he hears the judge pronounce sentence, he smiles. Brad is taken aback by this. Later, when he and Chris confer, Chris explains. “Relax, Brad,” he says, “the judge just contradicted himself. The hanging cannot take place.” “Why not?” asks Brad.

Chris explains: “The judge requires that the hanging day be a surprise, one which we cannot anticipate. But consider the possibilities. Let’s start by taking Friday as a possible hanging day. Well if we get through the first four days of the week (Mon, Tues, Wed., Thurs.) without a hanging, then we would know that the hanging would be on Friday, which denies the condition for the hanging. This implies that Friday cannot be the day; so eliminate Friday.

“Now consider Thursday as a possible day for the hanging. Well, since Friday has already been eliminated, and we get through the first three days (Mon, Tues, Wed) without a hanging, then by deductive inference we would know that Thursday was the day, since Friday has been eliminated. But we cannot anticipate the day; so Thursday cannot be the day. So eliminate Thursday, as well.

“Now let’s take Wednesday as a possible day for the hanging. Well, Thursday and Friday have already been eliminated. Now suppose that we get through the first two days of the week (Mon, Tues) without a hanging; then by deductive inference we would know that Wednesday would be the day. But we cannot anticipate the day. So eliminate Wednesday, as well.

“Now consider Tuesday as a possibility. Well, Wednesday, Thursday, and Friday have already been eliminated. Suppose we get through Monday without a hanging; then by deductive inference we would know that Tuesday has to be the day. But this would anticipate Tuesday. So eliminate Tuesday as well.

“So with Tuesday, Wednesday, Thursday, and Friday all eliminated as possible hanging days, only Monday remains. But then a hanging on Monday would not be a surprise, and thus cannot happen on Monday. So eliminate Monday.

“Ergo, the surprise hanging cannot take place that week. Ergo, it won’t happen!

“So,” chortles the triumphant Chris to the worried Brad, “all five days of the week are logically eliminated! None of them can be the day of your hanging. You will not hang!”

But the final week of the year arrived and on Wednesday at 6 P.M. Brad was led to the gallows and hanged, contrary to the assurance given by defense lawyer-logician Chris that it could not occur.

What happened? Wasn’t Chris’s logic impeccable? How is it possible that the hanging took place and caught Chris and Brad by surprise, much to Brad’s great disadvantage?

Can anyone give me a clear analysis of the paradox? Why the contradiction between the conclusion of a sound, deductive argument (hanging cannot happen) and the fact (hanging happened on Wednesday)?
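Chris’s elimination argument is mechanical enough to be written out as a short program. The sketch below (an illustrative formalization of the lawyer’s reasoning, not a resolution of the paradox) works backward from Friday: a day is struck out whenever every later day has already been struck out, since surviving to that day’s eve would then make the hanging predictable.

```python
# Chris's backward-induction argument, written as an elimination loop.
days = ["Mon", "Tue", "Wed", "Thu", "Fri"]
eliminated = set()

# Work backward from Friday. A day is eliminated when every later day
# is already eliminated: reaching that day's eve without a hanging
# would then make the day certain, violating the surprise condition.
for day in reversed(days):
    later = days[days.index(day) + 1:]
    if all(d in eliminated for d in later):
        eliminated.add(day)

print([d for d in days if d in eliminated])
```

Running this eliminates all five days, which is exactly Chris’s conclusion — and exactly what the Wednesday hanging contradicts. The program makes visible the hidden premise: each elimination step assumes that all the earlier eliminations remain binding on the day of the hanging itself.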

Did Darwin Suppress his work in the Face of Religious Opposition?

I appreciate books that argue in favor of science and the ideals of the Enlightenment against the obstruction of religion and the obscurity of philosophy. Timothy Ferris has written such a book, The Science of Liberty. It promises to be a book well worth our time of close, critical reading. However, Mr. Ferris commits a minor error which I found annoying. In the third chapter of his book, “The Rise of Science,” Ferris refers to Charles Darwin and the delay in publishing his famous work, The Origin of Species. Ferris classifies Darwin with history’s “martyrs to the cause of science,” such as Galileo, who was coerced by the Inquisition into recanting a few of his early astronomical findings. Ferris writes:

“Every age has produced its own martyrs to science. . . Charles Darwin long suppressed his theory of evolution rather than face the religious indignation that indeed greeted its eventual publication.”

(page 45)

This is a surprising, somewhat annoying statement for some of us who know a little of the history of the writing and publication of Darwin’s great work; taken as an explanation of Darwin’s delay in publishing his great work, it is simply false.

Most students of Darwinian evolutionary theory and the history of science know that Darwin took over twenty years to research, prepare, write, and eventually publish his great work. Students are also aware of the great opposition and hostility that its publication inspired from the religious authorities and from a great part of the intellectual community of mid-nineteenth-century England. Those of us who are familiar with some biographical works, publications, and films on Darwin know that he was aware of, and likely apprehensive about, the controversy that his work would trigger. He also lamented that his elimination of a Creator from his account of life’s evolution would have a troubling effect on his wife, Emma, who was a pious Christian. But to my knowledge there is nothing in any reputable biography of Darwin’s life, or in any of Darwin’s letters and autobiographical comments for the years leading up to the 1859 publication of The Origin of Species, which indicates that Darwin suppressed publication of his work because he was reluctant to face the “religious indignation that . . . greeted its eventual publication.” The facts, as recounted in such biographies (e.g., The Survival of Charles Darwin, by Ronald W. Clark) and in autobiographical statements and letters by Darwin himself, are that it took over twenty years of intense research and development before he felt he had an adequately grounded theory to present. He did not suppress or delay publication because of religious or political factors; he simply did not feel that his work was ready to do what he hoped it would do: make as strong a case as possible for the natural evolution of species in the face of centuries of belief in God’s creation of animal and plant species in static, unchanging forms.* As Steve Jones remarks,

“Before Darwin, the great majority of Naturalists believed that species were immutable productions, and had been separately created.”

(page xviii, Introduction to Jones’ book, Darwin’s Ghost).

In short, Darwin faced opposition from various camps, not just religious ones; but his delay had nothing to do with a reluctance to fly in the face of such opposition.

Admittedly, Ferris’s remark to the contrary (that Darwin suppressed publication of his work because of religious opposition) is not an important part of the idea that he develops in Chapter 3 (“The Rise of Science”) of his book, namely that science has had its share of “martyrs” and that many significant steps in the development of naturalistic theories of the world and of humans have met with strong opposition from the religious side. But Ferris should have taken more care and not included such a misleading statement about Darwin’s momentous work; and misleading it is, if not outright false.

In his book, Darwin’s Ghost, Steve Jones tells us that in spite of his twenty-year search for evidence, Darwin was so conscious of the gaps in his thesis that he might never have made it public; and that his book is full of apologies:

“To treat this subject at all properly, a long catalogue of dry facts should be given; but these I shall reserve for my future work . . . It is hopeless to attempt to convince any one of the truth of this proposition without giving the long array of facts which I have collected, and which cannot possibly be here introduced . . . I must here treat the subject with extreme brevity, though I have the materials prepared for ample discussion.”

Jones then adds,

“Today’s readers may feel a certain relief that the promised book never appeared. By happy chance, Darwin was stung into publication of a summary of his ideas by an unexpected letter from Alfred Russel Wallace containing the same notion.”

(xxiii-xxiv)

Here Jones refers to a few elements of the long story of how Darwin finally got around to publishing his great work. The facts seem to be that he was developing such a big work that publication seemed a remote event. Eventually he was compelled to put together what he called an abstract of his greater work. Clark writes:

“Darwin’s “Abstract,” of which he wrote to Spencer in November of 1858, was the result of a series of traumatic events. In the spring of 1855 he had written to William Darwin Fox: “I am hard at work on my notes, collecting and comparing them, in order in some 2 or 3 years to write a book with all the facts & arguments, which I can collect, for and versus the immutability of species.” The plan then was for something much longer and almost certainly less readable than The Origin turned out to be. At the worst, it could have been a book that would never be finished at all.”

Clark then tells us that

“these prospects were dramatically changed by the appearance on the scene of Alfred Russel Wallace, then in the Far East, to whom “a sudden flash of insight,” as he called it, had revealed a solution to the species problem identical in its main idea to Darwin’s.”

(page 95)

In summary, the story here is not one of delay and suppression because Darwin feared the indignation of religious authorities. The story, rather, is one of a natural scientist who wanted to build the best possible case for his theory of the evolution of species, who apparently could not stop accumulating additional evidence for his theory, and who eventually was spurred to publication of an “abstract” of his work by the prospect that Wallace would get priority with his publication of a theory of natural selection. In all works on Darwin which I have studied, including Clark’s very detailed biography** and account of the events leading up to the publication of The Origin of Species, and some autobiographical material and letters by Darwin himself, there is absolutely no reason for concluding that he suppressed publication of his work because he anticipated great religious indignation and opposition.
——————————————————————–

*If anyone can find information to the contrary (supporting the Ferris remark) in any reputable work on Darwin, I’ll be glad to look at it.

** more biographical information relevant to the subject from Clark:
Clark: “…Wallace’s book was never written. But in the September 1855 issue of the Annals and Magazine of Natural History there appeared his paper “On the Law Which Has Regulated the Introduction of New Species.” A cautious argument for the evolution of species, the paper maintained: “The following law may be deduced from these facts: –Every species has come into existence coincident both in space and time with a pre-existing closely allied species.” . . . Wallace’s paper fell short of the theory on which Darwin was working, but there were sufficient similarities in it to alarm Lyell, who wrote to Darwin urging that he should delay no longer in publishing his own findings. . . . Darwin still dallied, and it was April 1856 before he revealed to Lyell the position that he had now reached. (97) . . . Lyell urged Darwin to publish his theory, and his other scientific friends appear to have agreed . . . Surely now was the time for Darwin to start writing. But he still hesitated. “I hardly know what to think,” he wrote to Lyell on May 3, “but will reflect on it, but it [publication] goes against my prejudices. To give a fair sketch would be absolutely impossible, for every proposition requires such an array of facts. If I were to do anything, it could only refer to the main agency of change – selection – and perhaps point out a very few of the leading features . . . and some few of the main difficulties. But I do not know what to think; I rather hate the idea of writing for priority, yet I certainly should be vexed if any one were to publish my doctrines before me.” (98) . . . He was still anxious that his theory should be presented to the world only when every detail was buttressed by evidence, when all the questions that he knew would be raised could be countered by satisfactory answers. (98-99) But he was also worried about priority. His ideas were farther ahead, and far more detailed, than those of Wallace. But he was only human. . . . (99)
On May 14, 1856, he noted in his personal journal: “Began by Lyell’s advice writing species sketch,” and on June 10 he told William Darwin Fox that Lyell was strongly urging him to write a preliminary essay. “This I have begun to do,” he said, “but my work will be horribly imperfect & with many mistakes so that I groan & tremble when I think of it.” Once he had begun, the prospects of a “little volume” quickly vanished. “Sometimes,” he wrote to Fox, “I fear I shall break down, for my subject gets bigger and bigger with each month’s work.” (99)

References:

The Science of Liberty, by Timothy Ferris (Harper-Collins Publishers, New York, NY, 2010)

The Survival of Charles Darwin, by Ronald W. Clark (Random House, New York, NY, 1984)

Darwin’s Ghost, by Steve Jones (Ballantine Books, New York, NY, 1999)

Charles Darwin – Autobiography and Letters, ed. by Francis Darwin (D. Appleton & Co., New York, NY, 1893)

Platonism and the Invasion of Iraq

I have argued in another posting against the view that Platonism should be our model for philosophy and should guide our thinking on important matters. But even when we allow, for the sake of argument, that Platonism has something to offer in relation to our contemporary thinking, it requires an unsustainable ‘stretch’ to see how Platonism and the “higher” knowledge, called noesis, can help people avoid bad decisions and make good decisions with regard to challenges such as the U.S. government policy in invading Iraq.

However, John Uebersax argues for the contrary view and offers to show us the way. In an essay titled “The Pathology of American Thinking: How Plato Might Have Helped Us Avoid an Iraq Debacle,” Uebersax contends that Plato’s doctrine of the four levels of knowledge is a remedy for such “pathology.”
[link: http://www.john-uebersax.com/plato/eikasia.htm]

In this article, he

“. . . aims to explain: (1) what these basic categories of knowledge are, using examples related to the Iraq war; (2) how the collective thinking that brought America to its injudicious Iraq involvement reflected the poorest kind of knowledge; and (3) how we might avoid similar situations in the future–and instead accomplish positive things–by greater attention to superior forms of knowledge.”

His proposal is that Plato’s philosophy, as expressed in the Divided Line Analogy of the four levels of knowledge, together with achievement of the highest knowledge (noesis), would enable us to avoid poor policy decisions such as the invasion of Iraq.

First, he summarizes the four levels of knowledge, starting with a short discussion of the lower types of “knowledge” –

Eikasia: This Greek word literally means “picture-thinking” (from the root eikon). It’s not far from our modern word, imagination, but somewhat broader in meaning. Eikasia reflects the knowledge and thinking that derive not from objects, but from their images–in particular, the images in our own minds.

Pistis is knowledge based on sense experience of real-world things and the practical skills that relate to them. Building a house is a pistis-based activity.

He says the following about “scientific, logical” knowledge:

Dianoia corresponds to what we ordinarily mean by scientific, mathematical, and logical reasoning. It proceeds from initial hypotheses or first principles, using specified rules, to logical conclusions. It gives knowledge superior to eikasia and pistis, but has the limitation that it rests on untested and often untestable initial hypotheses.

There are reasons for questioning this division of “knowledge”; but I shall defer such a discussion, except to point out that characterizing ‘scientific’ reasoning as a form of deductive inference should set off our skeptical alarm bells. Moreover, it is not at all obvious why we should agree that dianoia (i.e., scientific, logical knowledge) “rests on untested and often untestable initial hypotheses.” (This is false with respect to modern scientific reasoning.)

But setting those issues aside, let us consider what he says about the highest level of knowledge and how he thinks it can apply to our modern-day problems.

First he characterizes this higher knowledge as a “mental apprehension of timeless and unchangeable entities”:

“Noesis–or as it is sometimes called, Wisdom–is knowledge of a completely different order than the other forms. It is direct mental apprehension of timeless and unchangeable entities. It applies in particular to moral and spiritual issues.”

Then he tells us that

“If we are to meet the present world challenges and thrive as a society then we must become a noetic or sapiential culture. Most of all this requires a mental change at the individual level. “Be transformed by the renewal of your thinking,” (Romans 12:2) St. Paul says.”

The quote from Paul and other remarks indicate that Uebersax sees this “noetic” level of knowledge in terms of religious spirituality:

“ For Plato, noesis is inseparable from a pious, devout, and virtuous life. An undevout person may be intelligent, but not wise. Our Judeo-Christian heritage agrees with Plato on this. “Wisdom is the principal thing; therefore get wisdom” (Proverbs 4:7). But “The fear of the LORD is the beginning of wisdom” (Psalms 111:10), where fear of the LORD here means not servile fear, but a mind directed to God and things holy–that is, a devout life. “

This is not much of a surprise, as Plato’s doctrines have been readily applied to various kinds of religious spirituality and theology throughout the centuries following Plato. However, this religiosity of Uebersax’s is not relevant to his claim that Plato’s doctrine regarding levels of knowledge can help us in dealing with twenty-first-century dilemmas.

But how exactly is this supposed to happen? First, it is not at all clear that anyone really achieves this state of noesis as an actual “mental apprehension of timeless and unchangeable entities.” People might think they do (surely religious spiritualists think they do), and Plato’s “Socrates” thought he did. But thinking does not make it so, especially when all this presupposes a very questionable metaphysical doctrine of a transcendent world of timeless, unchanging forms. So, two big problems present themselves:

1) What grounds are there for positing this transcendent reality?

2) How can human cognitive faculties access this transcendent reality?

It does not help much to assume, as Plato does, that the immortal soul can apprehend this higher reality; for this only brings up the question: what grounds are there for positing this immortal soul?

But even if we set aside these philosophical difficulties with the higher state of noesis and allow that individuals who have achieved “wisdom” can see this transcendent reality, we can ask how this phenomenon would enable us to deal with the type of political-ethical issues that Uebersax mentions.

I suppose that if a sufficient number of Socratic mystics achieved this ‘higher knowledge’ of a transcendent reality, for example, a number of such individuals who have a direct vision of the essence of justice, and were in positions of authority or were policy makers, then we would avoid many of the disastrous policies and actions that our governments too often perpetrate. (Of course, this presupposes that all these individuals would have the same type of experience and apprehend the same “truths.” This is a very questionable presupposition.) But this surely presents a utopian dream. Most human beings are not Socratic mystics who directly apprehend a timeless, unchanging transcendental reality (I doubt that anyone really does!). Basing our hope for better policy decisions in the future on that utopian dream seems forlorn. It is somewhat like declaring that unless we have a vision of perfect justice we cannot have even a practical idea of justice and injustice in the world; but we can and do recognize real-world injustices, and all without any mystical vision of perfect justice.

The poor foreign policy that led to the disastrous invasion of Iraq resulted in part from a lack of knowledge; but other contributing factors were a measure of dishonesty, false beliefs (e.g., the false belief that Saddam’s Iraq was connected with the 9/11 attacks), invalid inferences, deception, and even self-deception at the highest levels of U.S. decision making. The disaster could have been avoided had our policy makers possessed sufficient, relevant knowledge regarding the actual threat (or lack of a threat) posed by Saddam Hussein’s Iraq; but it also could have been avoided had our government and military leaders been more honest about what was known and less anxious to engage in a military invasion of another country. It is too much to hope that policy makers possess all the knowledge necessary for making good decisions. Sometimes the best anyone can do is act on the basis of very limited knowledge; the hope is that policy makers will gauge their action to the knowledge and reliable information they do possess.

But none of this requires that anyone adopt a Platonist view of genuine knowledge or a Platonist view of reality. Knowledge and well-grounded beliefs about the workings of the social world do not require that one adopt Plato’s four-part division of knowledge and reality, along with the claim that only the highest level, noesis, is effective in dealing with the great issues we face. In fact, John Uebersax seems to admit this point.

“While logical, scientific, and mathematical thinking alone do not produce noesis, training in these areas are, for Plato, steps in the right direction. They promote mental discipline and accustom one to seeking intellectual answers. No government could plunge the country indiscriminately into war were the populus sufficiently intelligent and attendant to the principles of logical critical thinking.”

In fact, if training in logical, scientific, and mathematical thought would prevent a government from “plung[ing] the country indiscriminately into war,” then it seems that we have a remedy for the “pathology” of thinking which our leaders suffered at the time of the Iraq war. But this does not require that we appeal to the Platonist notion of the highest knowledge, noesis.

Finally, it is doubtful that Uebersax’s analysis of the poor thinking and policy decisions in terms of the “Divided Line Analogy” does anything to illuminate how people go wrong in their beliefs and actions.* Intellectual integrity and honesty about what we do know and what we do not know can be had on the basis of a common-sense philosophy emphasizing critical reason, a respect for scientific methods, and judicious use of the evidence available to us. This attitude, in conjunction with a pragmatic, utilitarian ethic — which respects human dignity and human rights — would have improved the odds of avoiding foreign policy debacles like the war in Iraq.
—————————————————

* Even on a sympathetic reading, his claim that the Divided Line comprises the best psychological theory of human knowledge is an overstatement.

. . . . the Divided Line analogy, which comes at the end of Book 6 of the Republic. This analogy, along with the more famous Cave Allegory, arguably comprise the best psychological theory we have about the nature and variations of human knowledge.

Is Platonism the Model for Philosophy?

Platonism: … a type of metaphysical philosophy, one directed toward a transcendent reality. The rationalistic aspect: a belief in the power of thought directly to grasp transcendent realities (e.g., forms, mathematical objects); logic and mathematics are seen as providing keys to the structure of the universe. Includes belief in degrees of reality, and belief in the immortality of souls – Platonism is opposed to anything that can be called materialism; it affirms that a system of moral conceptions will reflect the nature of the universe; morality is more than merely human.

[From the Encyclopedia of Philosophy, Collier-MacMillan, 1967, vol. 6, ed. Paul Edwards]

What can we say to the claim that Plato’s philosophy characterizes the best of philosophy, or that Plato’s wisdom is “the proper domain of philosophy”? Let’s start by considering what one might mean when one makes a claim regarding the nature of philosophy.

When people claim that ‘philosophy’ is one thing as opposed to another thing they might be speaking in one of two modes:

Descriptive mode: When we try to state what the institution or discipline of philosophy is, i.e., what kinds of philosophers, philosophies, teachings, perspectives, university courses in philosophy there have been.

Prescriptive mode: When we recommend what we think philosophy should be: e.g., as when someone claims that genuine philosophy is based on the metaphysical/epistemological position indicated by Plato’s Divided Line analogy.

The Question of Platonism: Is philosophy (in general) a form of Platonism (or as Whitehead said, “a series of footnotes to Plato”)?

The Descriptive Claim:

Taken as a descriptive statement, the claim that philosophy is a form of Platonism is simply false. The work, activity, or discipline of philosophy features a variety of perspectives, only a few of which can be called Platonism. Even Aristotle, a student of Plato, did not develop a philosophy that was faithful in important respects to the teachings of Plato’s Dialogues. There have been, and currently are, many lines of philosophy that are decidedly anti-Platonistic in perspective: materialistic and atomistic philosophies, Epicureanism, Stoicism, modern skepticism, positivism, analytical philosophy, existentialism, to name just a few. Some of the great figures in the history of Western philosophy (e.g., Epicurus, Spinoza, David Hume, Friedrich Nietzsche, John Dewey, Bertrand Russell, Jean-Paul Sartre, and Ludwig Wittgenstein) advanced what can be called an anti-Platonistic approach to philosophical problems and issues. Only a few of the courses taught in university and college philosophy departments can be called courses in Plato or Platonism. The majority of philosophy professors do not teach Platonism; and most likely the majority of people who practice philosophy do not identify themselves as Platonists.

In short, when considering the question “Is philosophy a form of Platonism?”, most of us would answer in the negative. No, philosophy is not generally identifiable as a form of Platonism, nor as “a series of footnotes to Plato.” Much of the philosophy actually practiced and taught can be seen as a counter-thesis to Platonism.

The Prescriptive Claim:

So what about the prescriptive statement? Do we have reasons for agreeing that good philosophy should be a form of Platonism? Do we have good reasons for assenting to the view that good philosophy will base itself on Plato’s Analogy of the Divided line?

When people like Whitehead and Uebersax state the case for Platonism as the key to genuine philosophy, we should understand their statements as prescriptive statements; they’re telling us what they believe philosophy should be. Admittedly, the Whitehead quotation often cited suggests a descriptive claim:

“The safest general characterization of the European philosophical tradition is that it consists of a series of footnotes to Plato.”

A. N. Whitehead, Process and Reality, 1929.

As a descriptive claim, this is simply false, unless we qualify it so that “a series of footnotes to Plato” includes counterarguments to Plato. But we normally don’t describe contrary philosophies as “footnotes to each other.”

So I prefer to read Whitehead’s remark as a recommendation of what the best of European philosophy should be, i.e., as a prescriptive statement. This is how I shall interpret other statements in favor of Platonism, e.g.: “Plato’s wisdom is the proper domain of philosophy”; “Plato’s Divided Line Analogy is the best way to deal with epistemological and ethical issues.”
[The Divided Line Analogy is given in Plato’s Republic, at Book VI, the four stages of cognition, “Speaking through the character of Socrates, Plato divides human knowledge, and its related cognitive activities, into four categories. From poorest to best, these are: eikasia, pistis, dianoia, and noesis.”]

So what are we to make of these recommendations? If there is a case to be made for Platonism as the philosophy of choice, it is a weak one. If we concentrate only on the Divided Line Analogy, we find that it rests on a specific metaphysics and epistemology. Both Plato’s metaphysics and his theory of knowledge are very questionable, to say the least.

First, consider Plato’s metaphysics. Unless we are already committed to Platonism as a philosophy, we don’t find very good arguments supporting Plato’s other-world metaphysics. The arguments in Plato’s Dialogues for the necessity of a realm of eternal, unchanging forms are neither cogent nor sound. Much in the dialogues assumes, rather than argues for, the reality of a soul separate from the body. What scientific grounds or sound philosophical arguments are there for adopting this dualistic doctrine of human reality, that we are essentially eternal souls separate from our bodies? What grounds, philosophical or scientific, do we have for asserting a separate, higher reality of eternal, unchanging forms? Unless we are already inclined to accept this notion of a higher, separate reality, as many spiritually inclined, religious people and some mathematicians are, we shall find little or no reason for affirming such a view of reality.

Accepting Plato’s realm of eternal forms would imply rejecting the modern scientific picture of a world of evolving animal and plant life, a world of constant, dynamic change as described by astronomy, cosmology, physics, chemistry, biology, anthropology, history, and sociology. Many of us find the case for doing that far from convincing. In addition, Darwinian evolution by natural selection is a direct, scientific refutation of essentialism in biology; hence it is a direct refutation of Platonism insofar as Platonism could apply to the biosphere.

At best, Plato’s metaphysics applies to a particular philosophy of mathematics. Many mathematicians are Platonists of sorts, but the work of mathematics does not entail a Platonist metaphysics, since there are alternative philosophies of mathematics.

Likewise, the case for Plato’s theory of knowledge is weak. The notion of knowledge embodied in the Divided Line Analogy assumes that ‘knowledge’ can be understood as a cognitive state, that it is characterized by the object of that state, and that genuine knowledge is infallible and mathematically certain. All of these propositions about human knowledge are questionable, to say the least, and much work by analytical philosophers denies them. First, as Gilbert Ryle, Richard Rorty, and other writers on epistemological issues have argued, knowledge cannot be adequately described as a state of mind, even when it is labeled a “cognitive state.” Briefly, this is because propositional knowledge requires that something be a fact apart from the subject’s state of mind; and knowledge identifiable as a capability (knowing how to do something) requires being able to do the thing, not just to indulge a cognitive state of mind.

“What is knowledge? Whether or not knowledge involves belief, the distinction between knowledge and belief should not be seen as a distinction between states of mind. The truth conditions of statements about knowledge must include reference to things other than states of mind . . . being sure is not by any means necessary to knowledge, even if in the majority of cases people who know things are also sure of them. . . . There are no grounds for supposing that knowledge is a conscious activity or state, nor for supposing that knowledge and awareness are the same. . . .”

[D.W. Hamlyn, The Theory of Knowledge, Anchor Books, 1970, p. 95]

Secondly, as the twentieth century English philosopher D.W. Hamlyn points out, the view that knowledge must be infallible and mathematically certain is based on a confusion:

It is to say that we cannot both know and be wrong. Nothing follows from this about whether what we know must be such that it is impossible to be wrong about it. . . To suppose that it does is to mistake the role that “cannot” plays in “if I know, I cannot be wrong.” In fact, “cannot” merely expresses the incompatibility between knowledge and being wrong; it does not say that the only appropriate objects of knowledge are things about which it is impossible to be wrong.

[Hamlyn, ibid, p. 12]

That my ‘knowing that X’ implies I cannot be wrong about X follows from the correct application of the concept “knowledge.” We don’t say that we know the solution to a problem if we also admit that we could be wrong. But this does not mean that, for our claim to know to be appropriate, it must be infallible or mathematically certain. Yet according to Plato, genuine knowledge is only of the infallible:

However, historically Plato and others have equated knowledge and infallibility. Plato, at one stage, cast doubt on the view that perception provides knowledge. According to him, knowledge must be reserved for objects of a higher kind, the forms. Accordingly, knowledge and infallibility go together and anything that is not infallible is not a suitable subject for knowledge. . . . . The search for indubitable and infallible truths is therefore a common feature of traditional epistemology.

[Hamlyn, ibid, p. 13 ]

For many philosophers, however, this represents a wrong turn in the history of epistemology. Accepting this ‘theory of knowledge’ would imply that we really do not have knowledge in many of the natural sciences, not to mention the sociological and historical sciences. It would also imply that all of our beliefs and affirmations about empirical matters and matters of common sense fail to qualify as knowledge. Except for those who subscribe to Platonism, most people are not prepared to accept Plato’s exaggerated notion of genuine knowledge.

Hence, the case for upholding Plato’s Divided Line Analogy as a key to understanding what philosophy should be (or should aspire to) is a weak case, given the problems with its metaphysical and epistemological presuppositions. A large number of philosophers rightfully dissent.

Our Premature Jump to the New Millennium

Most people took the transition from 1999 to 2000 as the transition to the new millennium.
Most people took the transition from 1899 to 1900 as the transition to the new century.
Most people took the transition from 2009 to 2010 as the transition to the second decade.
They were wrong!

—————
There are many topics that provoke debate and even hostility, such as politics and religion. Wise barbers and taxi drivers avoid such topics for fear of annoying or disturbing their customers. Better to stick with talk about the weather or sports, though even football or baseball can be a sensitive subject. In this blog I don’t avoid topics that can be minefields of controversy, such as political thought, economic theory, religion, and philosophy. Hopefully readers will find much to stimulate thought and even find some discussions educational. Readers will also find much that disturbs their ordinary ideas and may even provoke angry responses. That is fine as long as we keep the discussion civil; I welcome disagreement. It challenges me to clarify and improve my philosophical expression.

The Puzzle of New Millennia, New Century, and New Decade:

In contrast to political and religious topics, which are important and controversial, there are some topics which, although somewhat trivial, provoke much dispute. One of these was the question some of us posed before the turn of the century: when does the new millennium really begin? As most of you recall, the world in general celebrated the start of the new millennium on January 1, 2000. But this bothered a few of us. We took on the role of spoilsports and pointed out that the world’s celebration was premature by one year: the new millennium did not start until January 1, 2001, we argued. People were annoyed, even irritated, by this reasonable dissent from popular opinion. Personally, I annoyed and irritated a few friends and colleagues by arguing that the transition to the new millennium was the transition from the year 2000 to 2001, not the transition from 1999 to 2000, as most people thought. Along with other nit-pickers like myself, I argued that when we count a decade (or ten of any countable items, e.g., coins in my pocket, beans in a bag, people in a room) we start with the first item and say “one,” add one for each successive item, and finish with the tenth item, saying “ten.” In other words, the decade runs from 1 through 10, the century from 1 through 100, and the millennium from 1 through 1000. So 10, 100, and 1000 are the numbers that mark the end of the decade, century, and millennium respectively (not 9, 99, and 999). The logical conclusion is that years 11, 101, and 1001 mark the start of the succeeding decade, century, or millennium. From this it follows that the year 2001, not the year 2000, was the start of the new millennium ten years ago. The new millennium should have been celebrated on January 1, 2001!
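The counting argument can be put in a few lines of code. This is a minimal sketch; the helper name `period_end` is purely illustrative, not anything from the original discussion:

```python
# Counting items one by one, the nth group of ten ends at item 10*n:
# a decade runs 1-10, a century 1-100, a millennium 1-1000.
first_decade = list(range(1, 11))      # years 1 through 10
assert first_decade[-1] == 10          # the decade ends at year 10, not 9

def period_end(length, n):
    """Last year of the nth period of the given length (10, 100, 1000...)."""
    return length * n

assert period_end(10, 1) == 10         # the first decade ends with year 10
assert period_end(100, 20) == 2000     # the 20th century ends with year 2000
assert period_end(1000, 2) == 2000     # the 2nd millennium ends with year 2000
# Hence the 3rd millennium begins with year 2001, not year 2000.
```

The same multiplication shows why 1901, not 1900, opened the twentieth century.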

But notwithstanding the logical validity of these arguments, the world celebrated the new millennium on January 1, 2000. My analytical, logical friends and I marveled at what appeared to us a nearly universal confusion. Was this general confusion just a phenomenon connected with the excitement of the new millennium? Added to the great anticipation of the imminent millennium was a general apprehension, even fear, about how our computer systems would handle the change of year designation from ‘1999’ to ‘2000.’ People feared that computer systems would crash and that many vital functions would be disrupted. But, as most will recall, computer specialists prepared early for the transition and things went smoothly for most companies and government agencies. The world as we know it did not end on January 1, 2000! But getting back to the premature observance and celebration of the new millennium, let’s ask again: was this error unique to the transition to the new millennium?

After some reflection and brief study, I found that this general error and confusion was not limited to the transition to a new millennium at years 1999-2000-2001. It has also occurred with transitions between centuries and decades. Newspaper and history-book accounts from the year 1899 indicate that folks back then also celebrated the transition to the new century prematurely, on January 1, 1900. They did not wait for the correct start of the twentieth century, January 1, 1901. Were they simply too impatient? Likewise, most people think that the current year, 2010, marks the start of the second decade of the twenty-first century (as indicated by my informal, unscientific polling of relatives, friends, and neighbors). But when we apply the logical arguments stated above, we see that the second decade does not begin until the year 2011. Our high school graduating class of 1960 recently held a reunion. Most of my old classmates held that our graduating class was the first of the 1960s. But, again, simple logic shows that 1960 was the final year of the 1950s decade, not the start of the 1960s.

Grouping numbers with other numbers that look the same.

So what is happening here? Surely people are not that illogical. Do they simply grow impatient and prematurely mark transitions from one decade, century, or millennium to the next, mistaking the transition from 1999 to 2000 for the correct transition from 2000 to 2001? My initial diagnosis relies on simple folk psychology: people tend to group numbers with sets of numbers that look the same, rather than consider what those numbers signify. For example, the year 1960 looks like the years 1961 through 1969. These are the sixties, after all! The year 2010 looks like the years 2011 through 2019, and therefore seems to belong with that group. The year 1900 looks like the years 1901 through 1999, and seems to belong with that group. Finally, anyone can see that the year 2000 resembles the years 2001 through 2999, and therefore belongs with the new millennium! Most people are prone to this natural way of thinking, and are not much impressed by logical arguments demonstrating that our inclination to group numbers by appearance leads to error. Most of the friends and colleagues I approached on this question of the correct transition time either laughed at me or expressed annoyance that I spent any time on such a trivial topic. The transition to the new millennium happens when the world agrees that it happens, and we don’t have time for the technical dissent (on a triviality) of a few mathematicians and logicians!

Yes, they’re surely correct. This is not a crucial issue that affects what happens in our world. People observe and celebrate transitions when they think it appropriate. “Let us move on to more interesting and important issues” seems to be the prevailing attitude. I agree; there are more interesting and important issues to discuss. But before wrapping this one up, let me see whether we can find a significant philosophical point behind all this disputation about the start of the next decade, century, or millennium. Hopefully the reader will exercise a little more forbearance and stay with me for a few more paragraphs.

Numbers as Labels Versus Numbers as Counting Numbers:

A curious fact about our use of numbers is that we use them in two very different ways, often without noticing the difference. A number can function as a label or name for something; and a number can function as a member of a series of counting numbers. Examples of numbers used as labels or names are easy to find: the street number ‘15’ may identify a particular city street; the number ‘2502’ may identify a specific property; the number ‘18’ a particular floor in a high-rise building. In each case, the fact that the street may not really be the fifteenth street in the city (there may be only fourteen streets), or that there aren’t 2,501 properties lined up before property 2502, or that the high-rise omits the thirteenth floor, going from floor 12 to floor 14, does not affect the identification of 15th Street, or address 2502, or floor 18. Here the numbers function as labels or names; they are simply identifiers. They could just as well be names using only alphabetical characters. The number ‘15’ functions the same way the name “Elm” does in identifying a specific avenue. The number ‘2502’ functions the same way a name, e.g., “the James place,” would in identifying a specific property. This use of numbers as mere labels or identifiers applies to our telephone and cell phone numbers, our social security numbers, and the other numbers assigned to us at various stages of our lives. In the early years of the telephone, telephone identifiers even included words or letters along with numbers. These numbers are not counting numbers.
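The floor-number example above can be made concrete in a short sketch. The building layout here is hypothetical (an eighteen-story building with the customary missing thirteenth floor); it simply shows that a floor’s label can differ from its position in a straight count:

```python
# Floor "numbers" in a high-rise are labels, not counts: many buildings
# skip a 13th floor, so the label '15' names only the 14th floor counted.
floors = list(range(1, 13)) + list(range(14, 19))   # labels 1-12, 14-18
label = 15
counted_position = floors.index(label) + 1          # count floors from 1
assert counted_position == 14                       # labeled 15, but 14th counted
```

A label can skip values, repeat patterns, or mix in letters; a counting number cannot, which is exactly the distinction the millennium dispute turns on.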

With the second use of numbers, the function of the number is not limited to identifying the item at issue. If I tell you correctly that the new Chevrolet parked outside my house is the eighth car I have owned, the number ‘8’ could identify the car as car number 8. But the number ‘8’ also tells you that if you count the number of cars I have owned starting with my first as ‘1’, the Chevrolet would be number ‘8’; in other words ‘8’ is part of a series of numbers (counting numbers) representing a series of cars that I have owned.
Along the same lines, if I tell you that I was the fourth child born to my parents, I am not simply labeling myself “the fourth child,” although that label would be accurate enough. I am also telling you that, counting each child from the first to me and assigning a number to each, you would arrive at the number ‘4’. “Child number 4” works as a label, but more importantly it indicates that my birth occurred fourth in line. In short, ‘4’ functions as a member of a counting series of numbers. Likewise, if I tell you correctly that this year marked my 41st wedding anniversary, I am not simply naming or labeling this year “anniversary 41”; I am telling you that 41 years have passed since my wife and I were wed. Count them, one by one, starting with the year 1969, and you get ‘41’. The same is true of the numbers we use to indicate our age. “This year I am 50” identifies this year as year ‘50’ for me; but it also tells you that if you count the years from my birth, starting with ‘1’ and adding one for each year up to the current year, you will arrive at my age. The number ‘50’ is part of a series of counting numbers, 1 through 50.

Now let us apply this distinction between numbers as labels and numbers as counting numbers to the question of new millennia, centuries, and decades. The number 2010 is not just a label for the current year; it also functions as a counting number (count the years since the start of the decade in 2001, adding one for each year, and the total reaches ten when you arrive at 2010). Hence 2010 marks the end of the first decade. The number 1900 is not just a label assigned to a specific year; it represents a number in a counting series. In principle, were you to count the years from 1 through 1900, adding one for each year, after 1900 additions you would arrive at the year 1900. And 1900 divided by 100 yields 19. So 1900 marked the end of 19 centuries, with the year 1901 marking the start of the 20th century. Likewise, the year 2000 was not just a label for that year ten years ago. It also indicated that 2000 years had passed since the conventional start of our calendar era at year 1. When we divide 2000 by 100 we get 20; so 2000 marked the end of 20 centuries, with the year 2001 the start of the twenty-first century. The case for the new millennium is even easier: divide 2000 by 1000 and you get 2 with no remainder. So the year 2000 marked the end of the second millennium, with the year 2001 marking the start of the new millennium. Hence, when we take into account the use of the numbers ‘2000’ and ‘2001’ as counting numbers, and not simply as labels, it follows that January 1, 2001 was the start of the new millennium, and the general observance of the new millennium on January 1, 2000 was an error. It betrayed a general failure to distinguish between two distinct functions of numbers. This is moderately offensive to anyone who likes to see things done rationally and cleanly.
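The division arithmetic in the preceding paragraph amounts to ceiling division: divide the year by the period length and round up to get which decade, century, or millennium the year falls in, counting from year 1. A minimal sketch (the helper name `ordinal_period` is purely illustrative) checks the dates in question:

```python
def ordinal_period(year, length):
    """Which period (decade=10, century=100, millennium=1000) a year
    falls in, counting from 1, on a calendar that begins at year 1.
    Integer ceiling division avoids any floating-point rounding."""
    return (year + length - 1) // length

assert ordinal_period(2000, 1000) == 2   # 2000 still lies in the 2nd millennium
assert ordinal_period(2001, 1000) == 3   # 2001 opens the 3rd millennium
assert ordinal_period(1900, 100) == 19   # 1900 still lies in the 19th century
assert ordinal_period(1901, 100) == 20   # 1901 opens the 20th century
assert ordinal_period(1960, 10) == 196   # 1960 closes the decade 1951-1960
assert ordinal_period(2010, 10) == 201   # 2010 closes the decade begun in 2001
```

The asserts all pass: a year that divides evenly by the period length ends a period rather than beginning one, which is the whole point of the argument above.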

Does this matter to anyone besides people like me who dwell on such oddities? Maybe not; or maybe it could. One could imagine the confusion that might arise when a person on the fifteenth floor needs emergency attention and the paramedics are sent to the fourteenth floor instead, because someone did not see the difference between ‘15’ used as a name or label and ‘15’ used as part of a counting series, a straight count of floors from 1 to 15. Or imagine a stranded soldier who is told that the third platoon will rescue him, but, counting only two platoons passing his position, fails to signal his presence and is not rescued: the second platoon passing by was really the “third” platoon. Does this ever happen? I imagine that it does and has.

Chopra’s Deep Confusion: The Brain & Doubts about the External World

In an article titled “A conversation: consciousness and the connection to the universe” Deepak Chopra recounts an interview (March 27, 2010)** that he held with Dr. Stuart Hameroff of the Center for Consciousness Studies of the University of Arizona.

The interview is interesting on a number of points, e.g., Hameroff’s attempt to explain perceptual consciousness in terms of quantum physics. That is an ambitious project which cries out for scrutiny and critique. But presently I shall focus on another aspect of the interview: it discloses some fundamental misconceptions and fallacies committed by both men. Let us look briefly at a few excerpts and see where the two fell into old traps and confusions.

The interview starts with some statements and a question directed to Hameroff by Chopra:

“You’re an anesthesiologist as well as an expert in consciousness. Here’s my question: Our brain inside our skull has no experience of the external world. The brain only responds to internal states like, pH, electrolytes, hormones, ionic exchanges across cell membranes and electrical impulses. So, how does the brain see an external world?”

Right from the start, the good doctors Chopra and Hameroff fall into some basic misconceptions. To recap the main points:
First, they note (Chopra states and Hameroff agrees) that the brain resides inside the skull (obviously!).
Then we have the inference that the brain has no direct experience of the external world: “The brain only responds to internal states.”
From this Chopra raises the profound question: “[H]ow does the brain see an external world?”

The very notion that the brain “sees” anything is suspect. (More on this later.) But for now let’s look at how Hameroff replies to Chopra’s heartfelt question about the mystery of how the “brain sees the external world.”

“Well that question goes back at least thousands of years, and the Greeks said that the world outside is nothing but a representation in our head. Then of course Descartes recognized the same thing. That the only thing of which he could be sure was that he is, that he is conscious. I think therefore I am. So, we’re not really sure the outside world is as we perceive it. Some people would say it’s a construction, an illusion, some people would say it’s an accurate representation. It’s kind of a mix of views. And then when you add quantum properties to it, it’s really uncertain if the world we perceive is the actual world out there.”

Chopra then brings up the example of seeing a rose:

“So, Dr. Hameroff lets just take an example. I’m looking at a rose, my retinal cells are not actually looking at the rose they’re responding to photons aren’t they?”

This gives the good Dr. Hameroff the opportunity to expound on the processes that go into our “looking at a rose”:

“Yes. It’s also possible that quantum information is transduced in the retina in the cilia between the inner and outer segments before the photon even gets to the rhodopsin in the very back of the eye. So it’s possible that there’s additional quantum information being extracted from photons as they enter your eye through the retina. They might somehow more directly convey the actual essential quality or properties of the rose and the redness of the rose. . . .”

I don’t know about all this extracting of quantum information, but I doubt that there is anything approaching consensus among physicists and neuroscientists on these speculations. However, the points I wish to focus on are conceptual: the identification of the subject who ‘sees’ (or fails to ‘see’) the external world with the brain, and the inference that all this leads to the age-old skeptical problems about our knowledge of the external world.

Hameroff seems to think that the Greeks (which ones?) held that the “world outside is nothing but a representation in our head,” and that Descartes recognized the same thing: in short, that we cannot know for certain that the world is anything like what we perceive.

Of course, none of this follows from the initial premises that the brain is located inside the skull and that the brain processes our perceiving of features of the world external to it.

The first gross confusion is to hold that the brain is the subject that sees anything. Let us grant that the appropriate sciences can describe and analyze the processes by which the nervous system (sense faculties, brain) enables the animal to perceive and negotiate its environment. But this is an analysis of how the animal (e.g., a human, ape, monkey, or dog) perceives the world. The brain is a vital element of this process, as are the sense faculties; but the brain is not the subject who sees X (the object of perception) and then faces the problem of connecting ‘X’ to the external world. Hence the skeptical issue (that we face the problem of connecting ‘X’ to the external world) does not follow.

Moreover, we are not rationally compelled to affirm that “the world . . . is just a representation in the head.” Which of the ancient Greeks held this view? Likewise, there isn’t any cogent argument for inferring the dualistic Cartesian picture (that the mental subject is distinct and apart from the material world). And for Descartes the brain, being a physical organ, belongs to the ‘external,’ material world. The isolated brain, encased in the skull and separated from the object perceived, which so worried Chopra, has nothing to do with Cartesian skepticism about the external world.

At any rate, the skeptical problems outlined by Hameroff have at best a loose connection with Chopra’s initiating question: how does the brain see the external world? Any putative skepticism about the external world is in order only if we fall into the initial trap of taking some entity inside the head (the brain?) as the subject who perceives the world. But of course the animal acting and reacting in its natural, social environment (e.g., the small ape in the tree) is the subject who perceives features of that environment. Hameroff has simply fallen into some basic misconceptions here, misconceptions set up by an even more confused Chopra.

The words in the title Chopra gives this dialogue, “. . . consciousness and the connection to the universe,” suggest another fundamental confusion at work: the idea that ‘consciousness’ is a mysterious ‘thing’ of sorts, which may or may not be “connected with the universe.” Chopra’s assumption, like that of many who talk this way, is that consciousness involves more than the fact that certain animals (a human in a social setting, a small ape sitting on a tree branch) are capable of taking in, or being aware of, features of their environment. But there aren’t any good reasons for asserting that we’re committed to something called “consciousness.” (Imagine someone proclaiming that, in addition to the small ape in the tree, the ape’s consciousness sits there as well.) As some philosophers (e.g., Gilbert Ryle, Richard Rorty, D.W. Hamlyn) have argued, one can dispense altogether with the idea of consciousness as an entity or a mental state and still give adequate accounts of all the mental, perceptual capabilities of complex, evolved animals such as humans. Science can account for my seeing the rose or being aware of the cool temperature in my environment without anyone having to posit a state of consciousness or an actor called “consciousness.” That I see things and am aware of things is beyond dispute. But this does not commit us to the reality of some mysterious state or entity called “consciousness.”

When we speak of a person being in a state of consciousness, or of perception, or awareness, we simply resort to a way of talking; we make no ontological commitment. The same may be said of a statement like "There was an awareness that we were in trouble." None of these requires that we posit a mysterious state or entity called "consciousness," or another called "awareness," which may or may not be connected to the external world. Chopra is simply falling victim to an age-old confusion here.

All the ensuing talk by Hameroff concerning the “fine structure of the universe,” and “quantum information extracted from photons” is at best questionable speculation, at worst, a bit of New-Age, post-modernistic “mumbo-jumbo.”
————————-

** The full interview can be found at

http://articles.sfgate.com/2010-04-07/news/20840306_1_quantum-information-interview-brain

Charles Rulon: God and the problem of evil

Evil: Immoral, corrupt, sinful, wicked, depraved, harmful, malignant, malevolent, misery, suffering, disaster, ruinous, disease, catastrophe, calamity; anything causing injury, harm and pain.

The Christian god is described as an all-good, all-loving, all-merciful, all-just, all-compassionate, all-knowing, all-powerful, interventionist god. Of course! Who wants to worship a hateful, vengeful, ignorant, absentee god? But if this god really exists, then why has there been so much heinous evil throughout all of human history — endless wars, genocides, famines, tsunamis, hurricanes, earthquakes, floods, droughts, cancers, heart disease, strokes, crippling arthritis, horrible birth defects and parasites—parasites which make up the majority of species on Earth and which spread flesh-eating diseases, bubonic plague, smallpox, malaria, cholera, TB and on and on.[i] In response, one only hears God's maddening silence. Where is His Goodness and Compassion, His Omnipotence? The presence of such horrendous levels of "evil" has been a potent reason for many to turn away from God.
Still, over the centuries Judaeo-Christianity has steadfastly held to the conviction that the universe is good — that it is the creation of a good God for a good purpose. Thus, over the centuries men of the cloth have agonized over all this evil and have attempted to explain away why such horrific levels of pain and suffering have been visited on the innocent. Surely, this wasn’t the best God could do?

Religious explanations

In the face of all this evil, religions struggle to continue to validate and glorify God’s goodness. So in comes Satan. In comes God’s punishment for sinning (“God sent Hurricane Katrina [or 9/11, or the tsunami, or the earthquake, or…] because God is mad at us for allowing pre-born babies to be butchered and homosexuals to run rampant.”). In comes the argument from Free Will (“God gave us free will so that instead of being robots we can freely choose to love Him. But the price paid is that we are now also free to do evil.”). In addition, “We are all depraved and deserve punishment from the very beginning.” Or, “suffering occurs because God’s creation is unfinished. As the universe continues toward perfection, diseases, natural disasters and other forms of evil will disappear.” And let’s not forget that “Suffering is good for us. God uses suffering because it is remedial and medicinal. Pain is the means by which we become motivated to finally surrender to God and to seek the cure in Christ. Suffering is necessary to forge high-quality souls for the afterlife. The point of our lives in this world isn’t comfort, but training and preparation for eternity.”

Skeptical responses

Non-theists respond that the obscene levels of excruciating pain, monstrous suffering and horrible deaths throughout history seem out of all proportion to what one might expect from any kind of god worth worshiping. They respond that “creating Satan” to explain away all evil begs the question of why an all-powerful, all-good God would permit Satan to exist in the first place—a rival who has inflicted so much harm on the good and innocent. And what about all of the scientific, medical and social advances which interfere with these “God ordained” punishments for sin? Are these blasphemous?

God’s inability to eliminate evil

Non-theists also point out that God’s finest creation was filled with so much evil several thousand years ago that this God drowned everyone except for one good family (Genesis flood). But as this family multiplied, evil once again returned with its endless wars, genocides, tortures, inquisitions, witch hunts, hatreds, greed, thefts and so on. Thus, skeptics observe, either this biblical god can’t create evil-free humans, or won’t.
Believers are quick to respond that God gave us free will so that we can freely choose to love Him, which also means that we are free to sin. But so much evil?! So many wars and genocides?! So much cruelty!? Besides, what does free will have to do with cancer, parasites and earthquakes?

Evil & the Old Testament God

Skeptics also emphasize that the Old Testament, the foundation for the world’s three major monotheistic religions, has been described as “… one of the most brutal war mythologies of all time with the enemies of the Hebrew’s tribal god consistently treated as sub-human things.”[ii] If we judge evil by today’s standards in civilized societies, then this tribal god of the Hebrews, observes Richard Dawkins, is “a misogynistic, homophobic, racist, infanticidal, genocidal, filicidal, megalomaniacal, sado-masochistic, capriciously malevolent bully.”[iii]

The existence of evil doesn’t disprove God

Of course, all of these horrendously obscene levels of evil don’t disprove the existence of a god or gods. If “God” exists, perhaps He’s an evil capricious god. Or possibly she’s a loving god with very limited powers. Or perhaps there is some kind of cosmic battle on Earth between the forces of good and evil. Or perhaps the Creator of the Universe doesn’t intervene in human affairs and/or has more important things to do than to worry about human suffering. Insight: Once we introduce non-testable supernatural explanations for the existence of evil, we’re only limited by our very fertile imaginations.

We must move beyond mental gymnastics

All of the above attempts to explain the existence of evil in a good universe created by a good God, plus all the skeptical responses, are simply armchair debates. Though satisfying and convincing for many, such mental gymnastics merely gloss over our fundamental ignorance. They don't move us any closer to empirically verifiable explanations of why all this evil—diseases, earthquakes, human cruelties—exists in the first place, much less how to reduce its occurrence.

Science and God

Regarding how an all-good God could co-exist with all this evil, most scientists respond that it’s because God is an imaginary being. The last several hundred years of scientific discoveries—from astrophysics, evolutionary biology and biochemistry, to the lack of any solid evidence for the existence of paranormal and supernatural events—all this evidence has reached a critical mass which strongly supports the powerful thesis that there never were any gods in the first place, at least in any kind of manifestation that is of interest to the overwhelming majority of Christians, Muslims, Jews and other religious folk. As scientific knowledge continued to advance over the last 400 years, supernatural explanations for events continued to retreat …and retreat. Many scientists faced with such a consistent trend have extrapolated to what seems an obvious conclusion: All of our earthly gods are non-existent.

Scientific explanations for evil

Scientists (through incredibly hard work over centuries) have been able to arrive at natural explanations for the existence of most of the world’s evils, from diseases, to natural disasters, to the evolution of parasites, to our xenophobic human nature. They have also discovered that our universe and Earth are even more dangerous (evil) than ever imagined.

Some educated Christians who accept modern cosmology and the fact of our biological evolution have responded that since evil can now be explained naturally, there's no reason to blame God for it. Do they really grok the depth to which scientific discoveries have devastated their core religious dogmas?

The universe and Earth: Our universe is far from being a safe and peaceful backdrop for God’s finest creation, man. Black holes suck in entire star systems. Gigantic explosions at the center of galaxies destroy millions of worlds, many possibly populated with sentient beings. And Earth, itself, is hardly a peaceful setting for God’s favorite species. Catastrophic events — meteor impacts, gigantic volcanic eruptions, ice sheets covering much of Earth, plate tectonic movements tearing apart entire continents — have repeatedly devastated Earth’s surface, resulting in horrifically high death rates, pain and suffering, even numerous mass extinctions over the past 600 million years. Skeptics ask why an all-powerful, loving god would place his favorite creations on a planet destined to experience catastrophic disasters, in such a violent universe.

Natural selection: Natural selection has been the primary mechanism driving evolution for billions of years. With natural selection comes an 'infinity' of dead ends, starvation, disease, plagues, cruelty, flawed designs, violent deaths, and a prodigious waste of life. Shortsighted selfishness usually wins out, no matter how much pain and loss it produces in the long term. Parasites are a major outcome, outnumbering all other species. Very few biologists see evidence of some deity's hand anywhere in natural selection—particularly evidence that humans were "planned" ahead of time.

Extinctions: Over the past hundreds of millions of years, natural selection, plus the contingencies of history, plus cataclysmic events have resulted in the extinction of over 99% of all the species to have ever evolved on our planet. Such horrendous levels of extinction leave many religious people quite upset. If essentially all of God’s creatures eventually go extinct, doesn’t that imply a God that’s inept, wasteful, careless, cruel, and/or unconcerned with the welfare of His creations?

________________________________________
Notes:
[i] There are over 10,000 species of tapeworms, 2000 species of biting lice and untold numbers of species of harmful bacteria, viruses and protozoa. Parasites have blinded millions of children. The Black Death of 1348 wiped out half the population of Europe. The influenza pandemic of 1918 killed 50 million people. Over 300 million people every year become deathly sick with malaria.

[ii] Campbell, J., Myths to Live By, 1988, pp. 181-183.

[iii] Dawkins, R., The God Delusion, 2006. Dawkins based his description on the fact that in the Old Testament this god instructs his followers to kill those who work on the Sabbath (Exodus 31:15; 35:2), to kill children who curse their parents (Exodus 21:17; Lev. 20:9; Deut. 21:18-21) and to stone to death brides found not to be virgins on their wedding night (Deut. 22:13-21). Virgin girls could be offered to an angry mob to protect male guests from harm. The Hebrews' god also commanded his followers, after they defeated enemy cities, to slaughter all of the men, the elderly, the crippled, the women, the children—everyone (Deut. 20:16-18; Deut. 7:1-6; Joshua 6:21-24; 10:40).

Did the Twentieth Century mark the failure of Secular Governance?

My colleague, Pablo, started the exchange with some questions posed to another philosophical colleague, Spanos, who had made some earlier claims about the failure of secular politics.

Pablo:

You make an interesting claim: "The claim that naturalism offers better chances for political reconciliation has been falsified by twentieth century experience with naturalism." Though I'm not quite sure what you mean by 'political reconciliation,' I'll take it to mean that naturalistic systems of thought have done no better in the political realm than non-naturalistic systems. I'm not sure, either, what naturalistic political systems of the twentieth century you have in mind, but I hope you don't mean to include fascistic and Marxist systems.

——————————-

Spanos replied:

I did have fascist and Marxist systems in mind. But it’s pretty hard to distinguish between naturalist and supernaturalistic systems of government. Ever since the late middle ages, the Church and the state have been vying to see who is in control. Generally, the state has won that battle. Should we say, then, that all secular systems of government are naturalist systems of government? If not, how would we distinguish naturalist systems of government from other secular systems? Is the United States a naturalist society with a naturalist system of government? Or is it supernaturalist with a supernaturalistic system of government? Or is it a secular, non-naturalistic system of government? Capitalism, certainly, is not a religious system of economic organization. Therefore it must be a secular system. But if it is secular, is it naturalistic? If not, why not?

I would say that Pol Pot and the killing fields of Cambodia illustrate a secular or naturalistic system of government in operation. The Dalai Lama and his response to Chinese oppression illustrate a religious system of government in operation. I do not accept the idea that the Taliban and al Qaeda are typical of Islamic political systems. These, along with Hamas and Hezbollah, are motivated not by religion, but by resentment. They use religion, but religion tries to discourage resentment, anger, hatred, violence, etc. Religion holds up compassion, love, and forgiveness as primary virtues. Religious institutions are the only institutions devoted to the spread of these virtues.

The issue we recently discussed concerning a sanction for moral behavior, and whether naturalism can provide an adequate sanction, probably underlies our different views on this issue of naturalistic vs supernaturalistic systems of government. I believe that morality, deprived of a supernatural base, erodes away. This belief is synergetic with my idea of how Marxism failed. The only goodness left in Russian society today is a remnant of its previous religious culture. It certainly doesn’t stem from Marxism.

————————————

I jumped into the discussion.
Moi: Earlier I had asked Spanos the same question that Pablo asked; namely: what do you mean by naturalism in political or governmental affairs? Now we have some idea as to what he had in mind. (Why couldn’t he use the more familiar term, “secular governance”?)

Of course, with careful “cherry picking” of history and political events, anyone can argue that government independent of religious authority will result in evil and disaster. Likewise, one can argue the same for theocracies or governments closely tied to religious authority. They too can be seen as eventually leading to evil and disaster. (I say to Spanos: two or more can play the same game and get very different results.)

However, I’m amused that someone characterizes “Pol Pot and the killing fields of Cambodia” as illustrative of secular or naturalistic government in action. And of course, the peaceful Dalai Lama illustrates how a religiously imbued authority can work so beautifully. Wow, I didn’t know that! Of course, these claims could just be a couple of HOWLERS that Spanos throws our way to get our reaction. Surely he cannot seriously propose that the actions of the murderous Pol Pot and his boys resulted from the application of science and reason to politics, which is what I understand by “naturalism” in politics. Or does he seriously propose this?

This all strikes me as so simplistic as to be risible. If your government is a religious system in action, it is good. If your government is secular, it is evil. (This probably is not what you’re saying, Spanos. But what you do argue suggests something like this.)

Marxism failed for a number of reasons, only a few of which might be correctly related to the attempt to oppress religious expression/spirituality in the Soviet Union. It oversimplifies history and politics terribly to suggest that religious expression in pre-Soviet Russia resulted in good for the people, and that any governmental action that did not promote such spiritual expression resulted in evil and suffering. It is far too easy — and distorts history — to suggest that the failure of Marxism shows the triumph of religion. It is much like arguing that the industrial revolution, the rise of science and the Enlightenment proved the failure of religious spirituality. "Yes," I would say to anyone asserting this proposition contra religion. "It sounds good, but let us see the rest of the argument." I say the same to anyone who tells me that the fall of Marxism somehow vindicates religion and religious authority.

————————————
Spanos replied:

It seems that I need to clarify the claim that I made about naturalism and government. I did not claim that government independent of religious authority will result in disaster. Neither did I claim that governments tied to religious authority will not lead to disaster. Neither did I say that the failure of Marxism shows the triumph of religion. These misunderstandings distort my claim.

I said, “The claim that naturalism offers better chances for political reconciliation has been falsified by twentieth century experience with naturalism.”

In order to clarify this claim, let us first consider the idea that naturalism offers better chances for political reconciliation. This idea stems from the Enlightenment, the "Age of Reason." It regards religious ideas as out of place in the constitutions of modern states. It holds that reason and science provide a sufficient foundation for the institutions of government. It has no esteem for tradition, especially insofar as traditions involve supernaturalistic assumptions. In its early phase, it believed that the Enlightenment marked a decisive turning point in the history of the human race. It believed that, on the new foundation of reason and science, barbarism was behind us and the human race would march triumphantly into a bright future. It expected political and social harmony such as had never been seen before.

Now let’s consider the impact of the twentieth century on this idea. The twentieth century showed that barbarism is not behind us after all. Barbarism reappeared in new and more devastating forms. The persistence of social injustice and the horrors of war on a world-wide scale plunged European intellectuals and artists into a blue funk called modernism. Modernism as embodied in literature and the visual arts was the collapse of Enlightenment naivete. Not only has the expected bright future not materialized, but the failures of reason and science as instruments for managing our relations with one another and our natural environment make our future look darker than ever.

I hope that this explanation helps you to a more accurate understanding of my claim.

————————————

Moi: Yes, we’re aware that the high optimism and expectations of a number of Enlightenment writers, philosophers, and scientists proved to be too high; and that much more needs to be done, besides merely removing supernaturalism and superstition from governance, before we can eradicate the barbarism and inhumanity that plague humanity. Admittedly the twentieth century showed that much, as likely will events in the twenty-first century.

Of course we cannot yet celebrate, with Auguste Comte, the belief that reason and science would spell the end of the barbarism and inhumanity associated with ignorance and superstition; nor can we celebrate the advent of a scientifically and rationally grounded humane society.

But nothing in all this shows that we would achieve a better society and better government by reverting to religious tradition and ecclesiastical authority in any form. None of this shows that we would be wise in any respect to jettison science, reason, logic and technology in an attempt to establish workable societies and humane value schemes. Finally, none of this shows what you seem to think it shows; namely, "the failure of science and reason as instruments for managing our relations with one another and with our natural environment." Your signaling the end of the game (the scientific, rational experiment) is far premature, and you have misinterpreted past failures as due to the use of science and reason. Those failures are due to a variety of causes, none of which can reasonably be portrayed as too much reliance on science and reason in managing human affairs. In fact, many of us would argue the contrary.

In short, you make two basic errors in your assessment: a misinterpretation of past events and a hasty rush to judgment contra science and reason. Mostly what you do is construct a 'strawman' version of the ideals of the Enlightenment, reason, and science and then proceed to knock down that strawman. But I suppose this is an acceptable tactic, given your post-modernistic judgment that modernism has failed, a judgment which is as faulty as it is bizarre.

Some confusion on Justice and The Utilitarian Principle

Not too long ago, one of my philosophical correspondents (“Pablo”) took up the question of justice (What is its source?) and utilitarianism. The ensuing discussion brought out some utilitarian claims on this issue and conceptual problems with the Utilitarian solution.

Pablo raises the problem of the source of ‘justice’ and offers his solution, which refers us to the cleansing work of science and to the utilitarian principle as a guarantor of justice:

“If justice is some kind of ideal, from where does it emanate? Is it already in our heads in some evolutionary way? Does a god place it there? Is it self-evident? Where?

My claim is simple (though I'm sure some would consider it simple-minded) but I think profound. Once science has erased all the obvious prejudices we humans have of one another, including the religious ones, then we can be sure what's good for ourselves is also good for other members of society, and this is the basis for justice as I see it. Once prejudices against African-Americans, women, Native Americans, Jews, et al., have been undermined, then we can expect for others what we want for ourselves. It's the Golden and/or Silver Rule writ large from the bosom of the utilitarian principle. We don't need an ideal of any sort to follow; we just need to use a principle similar to Rawls's, with a view toward consequences that benefit all of society."

—————————

Some background for this exchange: Originally another of our correspondents, Spanos, had cited John Rawls' theory based on the "original position" as one attempt to give the source of 'justice.' But apparently Pablo does not see Rawls as providing what he, Pablo, seeks. Presently I shall set aside his reasons for discounting Rawls. Instead I will focus on Pablo's claims: that science serves to prepare the way for justice, and that the principle of utility provides a basis for justice.

Pablo makes the surprising claim that "science erases prejudices that humans have about one another." This is interesting, since there's much reason for doubting that science does any such thing. Science may provide us with certain knowledge and understanding about human beings and human society. Eventually this knowledge might contribute to a more tolerant, accepting attitude toward those different from ourselves, and eventually humans might realize some degree of moral progress and an improvement in their treatment of others. But the claim that science will erase all bias and prejudice strikes me as a great overstatement. However, for the sake of argument, let's allow the possibility that "science erases all prejudices toward others." Following this cleansing, Pablo tells us, we would work to realize for others the good that we desire for ourselves.

Apparently the putative cleansing applied by science (the "erasing of all prejudices") would release a sense of fairness toward all fellow humans. We would desire that others be treated as we're treated. But where did this 'sense of fairness' come from? From the scientific cleansing that Pablo postulates, of course. So science, not utilitarianism, is the foundation for justice. The principle of utility is a principle of justice only when the people conceiving and applying it already have a notion of fairness. The principle of utility by itself does not guarantee justice or fairness; it only aims at the greatest good for the greatest number, which can leave many out in the cold.

Rawls presents a 'theory' of justice which is in some respects a consequentialist theory and in some respects a social contract theory. But it does not beg the question. As Spanos noted, Rawls imagines that a group of rational, self-interested persons would come up with rules for a putative society in which none of them would know what position they would occupy in that society. So each one has to come up with rules which treat every member of society fairly. But they do this only because they wish to invent a society in which each one of them, thinking primarily of his/her own self-interest, would get a fair shake. They don't do this because they already think that fairness should be evenly applied. The resulting 'fairness' follows from the rules that are created as a result of each rational, self-interested person calculating the arrangement that would ensure that he/she would come out all right. This qualifies as a legitimate attempt to explain justice; it does not presuppose (beg the question on) justice.
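Rawls's thought experiment can be made vivid with a small decision-theoretic sketch. The model below is my own toy illustration, not Rawls's formalism: the arrangement names and payoff numbers are invented for the example. A chooser behind the veil of ignorance, not knowing which position she will occupy, reasons in a maximin fashion (pick the arrangement whose worst-off position is best), while a simple average-payoff rule can favor an unequal arrangement.

```python
# Toy sketch of the "original position" (illustrative only; the payoffs
# are hypothetical). Each arrangement assigns payoffs to social positions;
# the chooser does not know which position she will end up occupying.

arrangements = {
    "egalitarian": [5, 5, 5, 5],   # everyone fares moderately well
    "stratified":  [9, 9, 9, 1],   # higher average, but one position suffers
}

def maximin_choice(options):
    """Choose the arrangement whose worst-off position is best off
    (the self-interested reasoning Rawls attributes to the parties)."""
    return max(options, key=lambda name: min(options[name]))

def average_choice(options):
    """Choose the arrangement with the highest average payoff
    (a simple aggregate-utility rule, for contrast)."""
    return max(options, key=lambda name: sum(options[name]) / len(options[name]))

print(maximin_choice(arrangements))  # egalitarian
print(average_choice(arrangements))  # stratified
```

The point of the contrast: fairness falls out of each party's self-interested caution about ending up in the worst slot, not from a prior commitment to fairness, which is why the construction does not beg the question.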

Contrary to the Rawlsian theory, Pablo’s position appears to beg the question. For what he seems to say is that the principle of utility (greatest good for the greatest number) fairly applied (i.e., so that nobody is sacrificed for the good of the majority) is the foundation for justice. He now argues that a sense of fairness is built into the principle of utility. (He seems to have forgotten that he had found the source of justice in the “cleansing work” of science.) The principle of utility cannot explain justice if it presupposes justice.

—————————————
Pablo replied:

Juan asks: Doesn’t utilitarianism beg the question? Does it assume fairness in order to work?

My view is not question begging. Utilitarianism, essentially, has a built-in fairness since it advocates the greatest good for all. It is only fair that all should benefit from the rules and laws of society (each citizen counts as one under the principle). Yes, it advocates minority rights, but only because we are all minorities of some sort and because giving rights to all these minorities is for the greater good of society as a whole. However, this does not guarantee the same rights for every individual member of society, for obvious reasons. Some are criminals, some insane, some have beliefs which harm others (the KKK, for example), those guilty of hate crimes, those in the country illegally, et al. Since the rights of the above groups cause more harm than good, generally, we separate them from society; in other words, we punish them until or unless they can be rehabilitated (or become citizens). If the activity harms no one, then, clearly, it is to be allowed as an individual right. The principle of utility, however, is not always easy to apply, and it could be used unjustly. We know it's unjust when we can determine that the rule or law does more harm than good for the persons involved. This is precisely what we do in practice, and how new laws are made and applied. So how does this principle beg the question?

One problem with utilitarianism, brought up by a friend many years ago and by others as well, was that of scapegoating. In the days of the wild west, the law-enforcing authorities would often find some disreputable person and accuse him of a crime which he did not commit. They did this so the people in the city could feel safer (clearly for the greater good of all). Even though there may have been little evidence that the disreputable person was guilty, he was accused nonetheless. At the same time, the authorities also got rid of someone they didn't like. Now, was that justice? How is utilitarianism to be applied in such a case? (And such cases still exist today: the case of the four innocent college students accused of raping a girl was just such an instance. They were presumed guilty because of a prejudiced prosecuting attorney, as you may recall.)

There are a number of reasons for claiming an injustice here, and from a utilitarian point of view. For one, the real criminal in the case of scapegoating is still out there and may well strike again. The problem of safety has not really been resolved, and clearly that's not for the greater good. Secondly, if (or when?) the knowledge about the real criminal comes out, the respect for law enforcement agencies will be quickly undermined (and could lead to people taking the law into their own hands: vigilantism). That's certainly not for the greater good. Also, even if neither of the two mentioned defenses turns out to be applicable, the authorities may well think it a good idea to do the same thing in other cases where they have trouble finding the real criminal. This habit would certainly lead to the problems mentioned in the first two cases sooner or later. So, in the long run, such practices would be unfair since they would, ultimately, lead to more harm than good. (There is also the psychological problem of the authorities having to live with a guilty conscience if they knowingly accuse and prosecute an innocent person.)

————————————–

Pablo has to decide whether he believes that the principle of utility can be the basis for justice or whether it already incorporates justice in its formulation. These are different positions, which Pablo states at different times.
He tells us first that his "…view…is not question begging. Utilitarianism, essentially, has a built-in fairness since it advocates the greatest good for all. It is only fair that all should benefit from the rules and laws of society (each citizen counts as one under the principle)."

This means that the principle of utilitarianism incorporates in its statement a statement of justice, “a built-in fairness” in that it “advocates the greatest good for all.” To me this asserts that the principle will always be applied fairly, so that everyone benefits. So stated it is just another way of stating the idea of justice, not a basis for justice. Pablo even suggests that this “built-in fairness” is an essential element of utilitarianism. This obviously states that utility includes a justice-feature from the beginning. Utilitarianism explains justice only in the sense that utility is qualified by a justice principle.

On the other hand, Pablo admits, while considering the problem of a convenient scape-goat, that the principle of utility “can be used unjustly.” This happens when someone’s application of the principle of the “greater good” requires that some individual or a minority group be scape-goated in order to maximize benefit for the rest of society. But how could this happen if the principle essentially has the “built-in fairness” which Pablo claims above? Can a rule which essentially includes a built-in-fairness element be applied unfairly? I would not think so.

My recollection is that the predicate of the principle of utility (as stated by most utilitarians) is the "greatest good for the greatest number." So the utilitarian action or rule is labeled a morally good act or policy because it results in the greatest good for the greatest number of those affected by that action or policy. We can imagine this working as a principle applicable in the real world; and applied in the real world it would sometimes result in benefit for the majority derived at the cost of others not benefiting, or even suffering as a result. For example, a war necessary to defend the nation can presumably be justified on utilitarian grounds. The nation as a whole benefits in not being destroyed by the enemy; but a good number of people (soldiers, victims of the war) must suffer as a result. To state the principle as one which realizes the "greatest good for all" is to lay down a rule which is unrealistic and inapplicable. How could we ever say that a specific action or rule results in benefit for all affected by it? A natural, even inevitable element of social-political life is that there are conflicts of interest between individuals or groups of people. Someone doing the best he can to treat everyone fairly will still do things that do not maximize the interests of some people, even work contrary to some people's interests. In other words, we cannot ensure that a principle of action always results in "fairness" for all concerned. This is why the principle of utility is normally stated as that action or policy which brings about the greatest good for the greatest number, not the good of all.
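The gap between "greatest good for the greatest number" and "good for all" can be shown with a trivial calculation. The payoff numbers below are invented purely for illustration: an aggregate-utility rule can prefer a policy that leaves one person worse off, which is why maximizing total utility does not by itself guarantee fairness to every individual.

```python
# Toy illustration (hypothetical utilities for five people under two policies)
# showing that the policy with the greatest total good need not be good for all.

policy_a = [6, 6, 6, 6, 6]     # total 30: everyone benefits equally
policy_b = [9, 9, 9, 9, -2]    # total 34: higher aggregate, one person is harmed

def total_utility(payoffs):
    """Plain aggregate utility: sum everyone's payoff."""
    return sum(payoffs)

# The simple utilitarian rule prefers policy_b despite the harmed individual.
best = max([policy_a, policy_b], key=total_utility)
print(total_utility(policy_a), total_utility(policy_b))  # 30 34
print(best is policy_b)  # True
```

This is the formal shape of the war example above: the aggregate can go up even while particular people bear serious costs, so a fairness constraint has to come from somewhere outside the aggregation itself.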

Pablo goes on to explain that unjust applications of the utilitarian principle would inevitably result in bad consequences, which could be ferreted out by utilitarian principles. Yes, initially it might appear that unjust scape-goating of innocent persons will result in maximum benefit for the rest of society; but eventually this injustice would prove to have negative consequences for the general state of society (an example might be the treatment of Japanese Americans in the 1940s). So, on utilitarian grounds we could show that initially the utilitarian principle was not observed. I understand this is the gist of Pablo’s argument.

The problem here is that utilitarianism, as a principle that preserves justice, can only be defended by "just-so stories." Yes, of course, if victimization of innocents were always exposed and always led to bad consequences for the perpetrators and for society, then we could say that utilitarianism always proves just. But it takes a very naive and trusting soul to buy all this; the real world does not work this way, and often great injustice is the road to greater benefit and profit for those who bought into the utilitarian policy in the first place. Those willing to scapegoat the defenseless are not always, perhaps hardly ever, exposed for their unjust acts. These "just-so stories" are really a flimsy ground on which to rest the argument for the justice of the utilitarian principle, given that it must be argued for and does not have "built-in fairness" as an essential element. But Pablo was not consistent on this point.