Author Archives: jbernal

A murky “Moral Landscape” on the horizon?

Sam Harris, one of the “new atheistic” writers, apparently has a new book coming, The Moral Landscape: Thinking about human values in universal terms. Someone sent me a text of a recent interview in which he answers a few questions about the way in which science provides answers to moral questions. Below are the first three questions and Harris’s reply to each. I will show that his replies are as perplexing as they are problematic and seem to discount the really hard questions of moral situations. To anyone (like myself) who holds out hope that the work of the sciences is relevant to moral philosophy, Harris’s perspective on these issues leaves a lot to be desired, to put it as generously as I can.

1. Are there right and wrong answers to moral questions?

Harris: Morality must relate, at some level, to the well-being of conscious creatures. If there are more and less effective ways for us to seek happiness and to avoid misery in this world — and there clearly are — then there are right and wrong answers to questions of morality.

2. Are you saying that science can answer such questions?

Harris: Yes, in principle. Human well-being is not a random phenomenon. It depends on many factors — ranging from genetics and neurobiology to sociology and economics. But, clearly, there are scientific truths to be known about how we can flourish in this world. Wherever we can have an impact on the well-being of others, questions of morality apply.

3. But can’t moral claims be in conflict? Aren’t there many situations in which one person’s happiness means another is suffering?

Harris: There are some circumstances like this, and we call these contests “zero-sum.” Generally speaking, however, the most important moral occasions are not like this. If we could eliminate war, nuclear proliferation, malaria, chronic hunger, child abuse, etc. — these changes would be good, on balance, for everyone. There are surely neurobiological, psychological, and sociological reasons why this is so — which is to say that science could potentially tell us exactly why a phenomenon like child abuse diminishes human well-being.
But we don’t have to wait for science to do this. We already have very good reasons to believe that mistreating children is bad for everyone. I think it is important for us to admit that this is not a claim about our personal preferences, or merely something our culture has conditioned us to believe. It is a claim about the architecture of our minds and the social architecture of our world. Moral truths of this kind must find their place in any scientific understanding of human experience.
————————————–

Now let’s consider Harris’s reply to each question in turn.

#1: What is a question of morality, and in what sense are there right and wrong answers to moral questions? Harris and the interviewer seem to limit questions of morality to questions about the well-being of conscious creatures (e.g. humans), and seem to further delimit these to the problem of maintaining or increasing the well-being of such creatures and decreasing or eliminating suffering. As a colleague pointed out, this is the perspective of utilitarianism. On this perspective, given that we can outline effective ways of maintaining and increasing human well-being and effective ways of reducing suffering, we have the utilitarian’s proposal for resolving those moral issues which lend themselves to this ‘weighing of benefits against costs’ of our policies and actions. In principle (on utilitarian principles), there are right and wrong answers to this limited set of moral questions. But aren’t there other types of moral questions? Don’t we sometimes have to act out of respect for the rights of an individual and not in terms of the purported consequences of our action, as when we have to keep a promise we made to that person? Don’t we sometimes have to speak honestly although the consequences may not result in more benefit than harm?

#2: In what sense can science answer such questions? Harris assumes that human well-being can be scientifically defined, and also that the means for achieving human well-being rest on “scientific truths.” Somehow science can tell us much about how we “flourish in the world” and how our actions affect others. Given these ‘facts,’ he suggests that science can answer moral questions.

Taken by themselves, these remarks are far too simplistic to offer much help to those who wish to connect the sciences with morality. Science might be able to explain what kind of creature we are (evolved biological creatures with big brains) and even explain the social culture necessary for understanding the kind of social animals that we are. But how does this explain human flourishing? A full explanation of that requires that we explain values and purpose, and explain individual decisions as to what values or purposes to realize in one’s existence. Does science explain these? Science might be able to provide a menu of choices that a person may face; but it does not determine which of those choices the person should select. Regarding the issue of how our actions affect others, science can offer some explanation, but not a full explanation. The consequences of our actions can be traced only in simple cases; there are many consequences of our actions which cannot be foreseen, even by the best that science can offer. Insofar as answers to moral questions rest on our ability to foresee the consequences of our actions, those moral questions cannot be answered fully, with or without the aid of science. Whenever our actions have an impact on others, moral questions do apply. But this does not mean that such questions can be answered. (An example of what seemed to be good moral policy with unforeseen bad consequences is the establishment of huge public housing projects back in the 1960s, projects which soon became the base for a variety of socially dysfunctional families, criminal activity, juvenile crime, illicit drug trade, and prostitution. The moral choice to offer public housing to poverty-stricken families seemed to be a good choice; the best of the social sciences supported that policy; but nobody foresaw the bad consequences resulting from it.)

#3: This question relates to moral conflict, and Harris simply dismisses it as inapplicable in most cases of moral questioning or moral choice. He assumes that the significant moral situations are those in which it is clear what we should do in order to bring about what is “good, on balance, for everyone.” But this is too fast and not at all a case of dealing seriously with the issue of moral conflict. Harris dismisses such cases as “zero-sum” contests which differ from the “most important moral occasions.” This might be a convenient position to take when one is trying to show that morality is reducible to utilitarian considerations amenable to scientific treatment; but it hardly strikes me as an honest effort to deal with real cases of moral conflict, cases in which the best reasoning we can apply does not give us an answer as to what is the right thing to do, and in which any prediction as to the consequences of our decision is questionable at best. It is well enough to assert that we all know that child abuse is bad for everyone, and that this can be explained in terms of the “architecture of our minds” and “the social architecture of our world.” This sounds good and may impress the unwary reader, but often the hard choice is not simply one between an option that leads to child abuse and one which avoids it. (E.g., is it better to remove a child from dysfunctional parents when the child welfare system often subjects that child to abusive foster parents? Someone has to make the hard choice without any guarantee, scientific or otherwise, that the action will be the one most beneficial for the family and child directly affected by it.)

Moral conflict is more significant in the area of practical morality than Harris assumes. Many aspects of our individual and social circumstances, including those that Harris lists — war, nuclear proliferation, malaria, chronic hunger, child abuse — involve moral conflict: hard choices between alternatives in which it is not at all clear which is the best choice in terms of the beneficial or harmful consequences of the alternative actions. War is generally a very bad thing for most humans affected by it; but there are many cases in which it is far from clear which choice, war or avoidance of war, is the morally good choice in terms of beneficial consequences. Nuclear proliferation is another very bad thing to impose on the world; but is it obvious that the correct moral choice for any nation facing the choice of developing as a nuclear power or foregoing such development is to avoid it? Is it possible that the consequences of such a choice might turn out to be most harmful for that nation? Surely neither science nor rational analysis can give a definitive answer to that question. The frequency of malaria, chronic hunger, and child abuse in large parts of the world is surely a moral failing on a grand scale. But do any of the experts, scientific or otherwise, know exactly which national and international policies are ones that ensure beneficial consequences in resolving these scourges? That we should work to eliminate these as much as possible is without dispute. What we must do and what actions individual societies must take is subject to trial and error. The best intentions of individuals and governments often result in bad consequences. Science can help to reduce these tragic mistakes; but neither science nor the most enlightened thinking will ensure that the hard decisions that humans must make will have the best consequences.

Any “scientific understanding of human experience” will divulge that humans often face tragic situations, moral dilemmas, in which there is no guarantee as to what is the right choice in terms of beneficial or harmful consequences. Jean-Paul Sartre offers the example of a young French man during the Nazi occupation of France who had to decide between joining the underground resistance to fight the Nazi invaders or staying with his aged mother who needed him to care for her. This is not a choice that could be resolved by tracing the consequences of each alternative, weighing the benefits and costs, and making a morally correct decision. All the science in the world would not help him; knowing the “architecture of the mind” would not help him. The young man had to make a choice without the comfort of knowing that he made the correct moral choice. He had to make a choice and then live with it, never sure that he had done the right thing. Many moral situations are like this, contrary to Harris’s facile dismissal of such situations as not being “important moral situations.” Consider any situation in which there are more potential recipients of some benefit than there are benefits to dispense: when there is a shortage of food or water and a decision must be made as to who gets fed and watered and who must do without; when some medical procedure (a vital organ transplant) is limited to a few patients out of a large waiting list; when a decision must be made as to who (of a limited number) gets the position at a company or admission to the university when there are many qualified applicants; or when we are trying to decide where to direct our donated dollars when many worthy charitable organizations are making honest appeals. In all these cases, the moral choice is not one that can be made in terms of a scientific understanding of human experience or anything close to an ability to make a cost-benefit analysis of the consequences of our action.

In summary, based on Sam Harris’s replies to the interviewer’s first three questions, I have little or no confidence that his latest literary effort, The Moral Landscape, will offer much that is helpful to those who look to the sciences for some help in dealing with moral issues.

Responding to the charge of Argumentum ad Ignorantiam

A few months ago an interlocutor put this question to me:

“Did you actually argue that God does not exist on grounds that there is no evidence for the existence of God? I wouldn’t have insulted your intelligence by explaining the argument from ignorance, but I was given to believe that you actually did it.”

Was my intelligence insulted? Maybe, but I hastened to respond.

I’m not sure what exactly was said to you, but “you were given to believe” wrongly if you “were given to believe” that I proclaimed a logical proof of God’s non-existence based on a lack of evidence for his existence. Whoever “gave you to believe” that I proceed on such a simplistic basis does not know of what he speaks. What I do hold is that the proposition that ‘God is real’ is at best a statement of ungrounded faith, given that there are no neutral, objective grounds that support that proposition. The proposition at issue lacks credible credentials. On logical, evidentiary grounds we have as little reason for accepting it as true as we do for accepting the claim that Martians control the world financial markets.

The fallacy of arguing from ignorance does not apply in all cases in which we point to ignorance or a lack of evidence as the basis for a conclusion. It does not follow that one argues fallaciously from ignorance whenever one argues that ‘P’ is not likely because there is no evidence adduced for ‘P’. (Suppose that ‘P’ represents the claim that Muslims secretly plan to overthrow the U.S. government.) Sometimes our best information is that there is no evidence to support the claim that P, and when we point this out we are not committing the fallacy of “argument from ignorance.”

When I take out my schoolboy logic text (Irving M. Copi, Introduction to Logic, 6th edition), I read the following:

Argument from ignorance: …illustrated by the statement that there must be ghosts because no one has ever been able to prove that there aren’t any. Generally, the fallacy occurs when it is argued that a proposition is true because nobody has ever proven it false; or when one argues that a proposition is false because nobody has ever proven it to be true. An example of the latter: it is false that there are ghosts because nobody has ever proven that there are ghosts.

The idea is that ghosts might exist although we have not demonstrated that there are such entities. Our ignorance of any proof demonstrating the existence of ghosts is not proof that such things do not exist. To argue otherwise is to base a conclusion on our ignorance. Another example may apply to the issue of God’s existence, when one argues that God does not exist because nobody has demonstrated (logically, from factual premises) that he does exist. To say that our ignorance of any proof of his existence shows that he does not exist would be to commit the fallacy of argument from ignorance.

However, Copi goes on to note that

….in some circumstances it can be correctly assumed that if a certain event had occurred, evidence of it could be discovered by qualified investigators. In such circumstances it is perfectly reasonable to take the absence of proof of its occurrence as positive proof of its non-occurrence. .. the proof here is not based on ignorance but on our knowledge that if the event had occurred it would be known.

This qualification expresses the idea by which I argue that the thesis affirming the existence of a supernatural God is weakened by a complete lack of evidence to support it. I do not argue that this lack of evidence proves the non-existence of a supernatural God. Nor do I argue from my ignorance of any proof of God’s existence to conclude that God does not exist. I simply make the reasonable assumption that, if such an entity were real, there would be some neutral, objective evidence to indicate that reality. Given the lack of any such evidence, it is an obviously rational position to hold that most probably such an entity does not exist. The conclusion is not based on ignorance; it is based on knowledge of how the world works and how rational humans proceed.

An Old Intellectual “Failure of Nerve” Shows Again

Recently I have had some contact with students of philosophy who embrace strange forms of “pop philosophy” in which subjective relativism and irrationalism are presented as solutions to an alleged narrow, limited perspective of the sciences and rational philosophy. Any casual glance at historical and current trends in philosophy shows us that this rush to the irrational is nothing new.

Historically this irrationalist yearning was part of the religious opposition to rational, skeptical philosophy, and then part of the Romantic writers’ reaction to the rational trend of the Enlightenment. In the nineteenth and twentieth centuries, some aspects of the existentialist perspective picked up the same theme.

Some philosophers — either precursors to existentialism, e.g. Kierkegaard, or part of the ‘existentialist’ trend, such as Heidegger — express a strong anti-scientific attitude. Sometimes this takes the form of the assertion that truth is subjectivity (Kierkegaard), and in other places it is a strong denial that science and reason can give us objective truth (Heidegger). This attitude, which strongly rejects the ideals of the Enlightenment, is also found in the relativism of some so-called ‘post-modernist’ philosophers.

Psychologist and cognitive scientist Steven Pinker, writing in his insightful book The Blank Slate, gives an informative summary of the philosophy of the postmodernist trend. He sees in postmodernism “the abandonment of Enlightenment confidence in the achievement of objective human knowledge through reliance upon reason . . .” Pinker notes that sociobiologist E.O. Wilson characterized postmodernist philosophy

“…as the polar antithesis of the Enlightenment; …[whereas] enlightenment thinkers believed that we can know everything, radical postmodernists believe we can know nothing.”

According to Wilson,

“postmodernists challenge the very foundations of science and traditional philosophy. … [They see] reality as a state created by the mind, not perceived by it. In the most extravagant version of this constructivism, there is no ‘real’ reality, no objective truths external to mental activity, only prevailing versions disseminated by ruling social groups.” (p. 40)

In short, here we have an assertion of subjective relativism in matters related to “truth.”

Matt Cherry, writing in the magazine Free Inquiry (Fall 1998), tells us that

“postmodernism questions accepted standards and emphasizes how social context affects beliefs and theories. It therefore tries to “deconstruct” the assumptions underlying truth claims, and it encourages openness to the points of view outside the mainstream.”

He also points out that

postmodernist thinkers go much further than simply stressing the difficulty of getting at the truth. They reject the very notion of “truth” itself. They argue that there is no “objective knowledge” and no “facts,” only personal interpretation, and that “reason” and “science” are no better than any other “myth,” “narrative,” or “magical explanation.”

(Free Inquiry, Fall 1998, p. 20)

So, according to the postmodernist, there is no objective knowledge, only specific culturally conditioned ways of talking about reality and specific culturally-based theories of reality.

Steven Pinker finds the same relativistic philosophy in George Orwell’s famous novel 1984. There the regime presents a philosophy which is equivalent to the postmodernist view. The philosophy of the regime

“…is explained to Winston Smith by the government agent O’Brien. When Winston objects that the Party cannot realize its slogan, “Who controls the past controls the future; who controls the present controls the past,” O’Brien replies:
You believe that reality is something objective, external, existing in its own right. You also believe that the nature of reality is self-evident. When you delude yourself into thinking that you see something, you assume that everyone else sees the same thing as you. But I tell you, Winston, that reality is not external. Reality exists in the human mind, and nowhere else. Not in the individual mind, which can make mistakes, and in any case soon perishes; only in the mind of the Party, which is collective and immortal.

O’Brien admits that for certain purposes, such as navigating the ocean, it is useful to assume that the Earth goes around the sun and that there are stars in distant galaxies. But, he continues, the Party could also use alternative astronomies in which the sun goes around the Earth and the stars are bits of fire a few kilometers away. And though O’Brien does not explain it in this scene, Newspeak is the ultimate “prisonhouse of language,” a “language that thinks man and his ‘world.’ “

(Pinker, p.426)

Those who embrace the current trend in “pop philosophies” – with their confused notions of subjective truth and the value of irrationalism – should be aware of their fellow travelers in this rush to irrationalism.

Philosophical Jewels: The Enlightenment, Reason, and Sciences

There are many ways of stating the value of the Enlightenment outlook, both for society and for philosophy. Immanuel Kant’s way, with its emphasis on the freedom of thought, is one good way:

“For enlightenment of this kind, all that is needed is freedom. And the freedom in question is the most innocuous form of all: freedom to make public use of one’s reason in all matters. But I hear on all sides the cry: Don’t argue! The officer says: Don’t argue, get on parade! The tax-official: Don’t argue, pay! The clergyman: Don’t argue, believe! (Only one ruler in the world says: Argue as much as you like and about whatever you like, but obey!) . . . All this means restrictions on freedom everywhere. But which sort of restriction prevents enlightenment, and which, instead of hindering it, can actually promote it? I reply: The public use of man’s reason must always be free, and it alone can bring about enlightenment among men; . . . . If it is now asked whether we at present live in an enlightened age, the answer is: No, but we do live in an age of enlightenment. As things are at present, we still have a long way to go before men as a whole can be in a position (or can ever be put into a position) of using their own understanding confidently and well in religious matters, without outside guidance. But we do have distinct indications that the way is now being cleared for them to work freely in this direction, and that the obstacles to universal enlightenment, to man’s emergence from his self-incurred immaturity, are gradually becoming fewer.”

Other statements emphasize the scientific, rational character of the perspective of the enlightenment:

“Enlightenment thinkers combined the philosophic tradition of abstract rational thought (Descartes and other philosophers) with the tradition of experimentation or empirical philosophy (from Galileo, Newton, Bacon and others). The result was a new system of human inquiry that attacked the old order and privileges, put emphasis and faith on science, the scientific method and education. . . . The new approach was an empirical and scientific one at the same time that it was philosophical. The world was an object of study, and the Enlightenment thinkers thought that people could understand and control the world by means of reason and empirical research. Social laws could be discovered, and society could be improved by means of rational and empirical inquiry. This form of thought was reformist, and one that challenged the old order. Enlightenment thinkers were generally optimistic in outlook, looking on their system of thought as a way of improving the social world.”

Kant’s view and that of other proponents of the Enlightenment may have been overly optimistic and over-stated, but the positive value of the philosophy implied is beyond question.

The rational, scientific approach — the ideal of the Enlightenment — is our best way of getting at whatever truth may be accessible to human minds and the best way of operating intelligently in the world; furthermore, there are very few good reasons for exempting any area of our lives from a rational, empirical approach.

I advance the following considerations in support of the preceding assertions:

- Education: what is education if not the effort to operate rationally, scientifically, and intelligently in the world?

- Settling disputes: When people disagree the best hope for resolving the dispute is to use neutral, objective, rational deliberation.

- Distinguishing facts from non-facts: When we need to learn the truth about something, to separate fact from fiction (fantasy, lies, myths, legends, false rumors, etc.) our best bets are an empirical investigation and application of rational criteria to discover the truth.

- Accomplishments of science, technology and engineering: Any educated, intelligent person will agree that the great achievements of science and technology utilize rational thinking and the method of science.

- Clear distinction between scientific enterprise and religious/political ideology: A minimal understanding of scientific methodology shows that it differs categorically from all forms of religious/political ideology. One works tentatively, requiring testable hypotheses subject to confirmation or disconfirmation by other investigators; the other starts with certain truths of faith which can never be subjected to question or rational inquiry.

- A common sense point about ordinary, intelligent conduct in the world. To the extent that we achieve intelligent behavior and thought in the world, our thought and behavior will resemble a rational, empirical, scientific approach. (See John Dewey’s thought on this point.) To reject this way of acting in the world is to invite folly, chaos, and eventual social insanity.

Yes, historically religion, Romanticism, and Existentialism, and currently Postmodernism, have raised challenges against science-based, rational philosophy. But those challenges are more in the way of denying that science and reason can say everything that can be said about human experience, and of claiming the superiority of literary-poetic-religious expression of human reality. In reply, I would remind the reader that the scientific-rational approach does not claim a final, complete account of human reality; and it is not in the business of literary, poetic, or religious expression of human reality. It simply attempts to clarify, resolve, and articulate answers to a host of problems faced by the human race; and it does this far better than any other philosophy that has ever attempted the task.

REMARKS ON THE ‘JUST-WAR’ IDEA

Assuming that we have not completely become moral zombies, we should be prepared to question the moral justification for our government’s policy of carrying on foreign wars. Today it is our continuing military presence in Iraq and our incursion into Afghanistan which should concern us. Eight to nine years ago (2003), it was the invasion of Iraq which occupied our thinking about war. At that time, at a meeting of local humanists in Orange County, California, I conducted an impromptu, informal survey which showed that most attendees felt that the United States’ participation in World War II was morally justified, whereas our country’s involvement in the war in Vietnam during the 1960s and 1970s was not justified. The results of my unscientific survey suggested that many of us have a rough idea of what constitutes a “just war,” whether we articulate it or not.

Furthermore, in the past decade we have often heard references to the just-war idea. In the period preceding our attack on Iraq (2002-3), GW Bush administration spokespeople often invoked the idea that war against Iraq would be a morally justifiable war. Much of the accompanying debate on our government’s war policy touched on the idea. For example, following the September 11, 2001 attacks on the New York World Trade Center and the Pentagon, the LA Times published an article entitled “Catholic Church Debates ‘Just War’.” In this article (9/30/01), the writers reported attempts by Roman Catholic theologians and philosophers to evaluate the U.S. war against terrorism on the basis of the just-war criteria. The article summarized the Catholic teaching that war may be declared if the cause is just, if it is led by a legitimate authority and not guided by revenge, if the results do not produce more evil than the good sought, if it is waged as a last resort, if there is a reasonable chance of success, and if the goal is peace.

In a similar vein, an October LA Times article (10/13/2002) entitled “Bishops Dare to Dissent” reported that U.S. Roman Catholic Bishops issued a statement against the Bush administration’s plans to carry out war against Iraq. The writer stated that “Bishop Wilton D. Gregory, president of the U.S. bishops’ conference, writing on Sept. 13, 2002 to Bush on behalf of the bishops …, expressed grave doubts that an American invasion of Iraq could meet the just-war criteria and urged the president to “step back from the brink of war.” He also stated that

“… one of the key tenets of just-war theory is probability of success. The loss of innocent life in war cannot be justified if, after victory, the status quo ante is quite likely to remain unchanged. But will Al Qaeda not remain a threat after victory in Iraq, just as it has remained a threat after victory in Afghanistan?”

Gregory also raised the question of whether force would succeed in thwarting serious threats or would, instead, provoke the very kind of attacks that it was intended to prevent.

We see then that the just-war concept, insofar as it asks whether a nation is justified in going to war, is not merely an abstract, philosophical concept but applies whenever people deliberate about the morality of war. Maybe some of the participants in my informal survey mentioned above had in mind some of the criteria of a just war and found that U.S. participation in WWII was justified, but that the U.S. war in Southeast Asia (Vietnam) was not.

Most people who give any thought to the question of moral justification of war have some ideas about the criteria that justify a nation’s entry into war. Consider some of these (jus ad bellum) criteria:

A nation has a just cause for entry into war when it must:

1. Defend against an attack by an aggressor.
2. Resort to war for survival as a nation and to defend a way of life.
(Here few question a nation’s right to fight a war; a nation must fight or surrender.)

In addition, many argue that a nation is justified in resorting to war in order to
3. stop an aggressor/oppressor from killing, torturing, oppressing innocent victims; or
4. restore a just order (e.g., re-establish democratic, humane form of government).
(Here war is discretionary or optional, and thus a debatable proposition.)

The following tests are often applied to cases in which war is optional or discretionary:

a. War must be the last resort; all attempts to find peaceful remedies have been exhausted.
(In light of the UN and international law, this is a vitally important condition.)

b. Those declaring the need for war have legal, constitutional authority to do so.
(Revolutionary and civil wars can present special problems here.)

c. The ultimate object of the war is a just and lasting peace.

Some accounts include this fourth test:

d. A “cost-benefit analysis” has been done in which these questions have been answered:

• Is there a high probability of success, i.e. high probability of winning the war?

• Do the predicted benefits of waging war outweigh the cost (death, destruction, suffering)?

[This is often posed as a question of “proportionality” of means (the war) to ends (political or national objective). Obviously, this is a very difficult requirement, calling for predictions or projections about future events and consequences.]

Besides the criteria for a just entry into war, the just-war theory also includes discussion of just conduct of war (jus in bello), sometimes referred to as the rules of warfare. These can break down into two general questions:

a) What kinds of military actions are justified in achieving our end?
b) Who or what are legitimate targets of our military actions?

With reference to the first question (What kinds of actions are justified?), the just-war theory categorically rejects the claim (often heard) that once a nation engages in a war, anything its military forces do in that war is justified. On the contrary, there are institutionalized rules of war which spell out legal and moral limitations on how war is conducted. Some of these rules result from international treaties, such as the Hague Peace Conferences (1899, 1907) and the Geneva Conventions, and are spelled out in military documents such as Department of the Army Field Manual FM 27-10, The Law of Land Warfare.

R.B. Brandt, in an essay entitled “Utilitarianism and the Rules of War,” informs us that the preamble to this Manual states that the law of land warfare “is inspired by the desire to diminish the evils of war by:

a. Protecting both combatants and non-combatants from unnecessary suffering,

b. Safeguarding certain fundamental human rights of persons who fall into the hands of the enemy, particularly prisoners of war, the wounded and sick, and civilians; and
c. Facilitating the restoration of peace.”

Other examples of rules of warfare listed in the army manual:

Military personnel are “forbidden to kill or wound an enemy who, having laid down his arms, or having no longer means of defense, has surrendered…. [also] forbidden to employ arms, projectiles, or material calculated to cause unnecessary suffering. [And…] The pillage of a town or place, even when taken by assault, is prohibited.”

An obvious purpose of these “rules of war” is to prevent war atrocities, or at least to minimize them. Some of these rules relate to the treatment of prisoners of war. Some are attempts to limit the violence and destruction of war to the levels necessary for achieving legitimate military objectives. Some are efforts to protect civilians and noncombatants from the violence of war.

Do warring nations and warring factions follow these rules? We’re probably too optimistic if we think that most of the time they do. Certainly fanatics and extremists (religious or secular) do not and, historically, have not. These rules mean nothing to an Islamic fanatic bent on the suicide bombing of a busload of civilians, or the hijacking and crashing of an airliner full of civilians. Did these rules mean anything to the Nazis in WWII, or to the Allies who fire-bombed European cities, or to the Americans who justified, in some way, the dropping of nuclear bombs on Japanese cities? Yet some of us hold that, at least in some cases, these rules have helped to reduce the barbarism and organized murder that we call “war.”

This brings us to the second general question of jus in bello: Who or what are legitimate targets of our military actions? Some discussions of just-war theory refer to the principle of discrimination, which prohibits direct, intentional attacks on noncombatants and nonmilitary targets. Most debates about the morality of war have focused on the principle of discrimination, and many who deny that modern war can be morally justified do so because they find that modern war, by its very nature, violates the principle against the killing of noncombatants. The following points outline the case against a just modern war.

1. At first glance, it seems that this should be an unquestioned moral axiom of war: you don’t attack noncombatants or civilian targets.

2. Historically, the “rules” of warfare limited combat to opposing armies or military forces. Civilians were not supposed to be legitimate targets. (In actual fact, these rules were not always followed, and civilians often paid the cost.)

3. With the advent of modern total war, this restriction was no longer observed. Since the American Civil War, attacks on the means of war production, often in civilian centers, have been regarded as legitimate military targets.

4. Soon modern-war tactics were aimed at breaking the will of the opposing forces, both on the battlefield and at home. Thus, air bombardment of cities, for example, has generally been accepted as part of modern warfare.

5. In this modern-war context: How does one define “noncombatants”? How does one define “non-military targets”? Wm. V. O’Brien tells us that
“…well before the advent of weapons systems that are usually employed in ways that do not discriminate between combatants and noncombatants, military and nonmilitary targets, the wall of separation between combatants and noncombatants had … broken down.”

6. Many, thus, conclude that the principle of discrimination is no longer a “meaningful limit on war.”

7. But some people reject this conclusion, arguing that the principle of discrimination is based on an absolute moral axiom “that evil may never be done in order to produce a good result.” Accordingly, killing noncombatants intentionally is always an inadmissible evil.

8. Others (e.g., O’Brien) reject this moral absoluteness and argue for “a more flexible and variable international-law principle of discrimination,” which attempts to balance a general rule limiting attacks on noncombatants with the needs of modern military operations (which sometimes call for military action that results in civilian deaths).

9. This “flexible, variable” principle will likely involve the concept of double effect in one form or another. According to this notion,
“the unintended killing of noncombatants is allowable in some cases as ‘tolerable, concomitant, unintended effects’ of military action — ‘collateral damage’ in contemporary terms.”

10. But “…this distinction between primary, desired effect and secondary, concomitant, undesired by-product” is very questionable. It is not obvious that the secondary effect (the killing of civilians) can legitimately be seen as merely an unintended “by-product” of one’s primary action when one knows that the action will likely cause the death of civilians.

11. Hence, it is doubtful that the moral principle prohibiting the intentional killing of noncombatants can be reconciled with the tactics of modern warfare. Consider, for example, our tactics in Afghanistan and Pakistan of using remote-controlled aerial drones to fire missiles at targets selected as “insurgents” or “Taliban” by analysts situated thousands of miles away. Thus, some people conclude that one cannot defend modern war, with its air bombardment and remote-controlled missile attacks on cities and villages, as morally justifiable.

In any case, the just-war theory should play a central (sometimes controversial) role in discussions and debates relating to our government’s war policy, provided that our citizens and Congress have not become so lazy and complacent that they won’t even consider the question of the morality of that policy.

Explaining the Universe Calls for a Designer?

An acquaintance (call him “Bob”) tagged me in Facebook with a set of remarks arguing that random chance and physical processes alone could never explain how the universe came about. I considered his remarks, replied to them, and tried to show why (like most scientists and rationally-critical people) I reject this argument. It is not even remotely close to making a good case for an intelligent designer working behind the scenes to bring about the universe.

Below I list Bob’s Facebook remarks, with my criticism in highlighted brackets:

But in reality, the existence of the universe around us is not just the product of chance and time. In fact, chance, working alone, will produce nothing. Imagine that you have a universe which consists of titanium marbles and a box. The box is so constituted that it randomly shakes the marbles every ten seconds. Will this system ever produce anything new? No way. Billions of years later you will still have nothing but titanium in a box.

[This is a bad analogy. A strictly naturalistic picture of the primal physical processes that followed the Big Bang does not lend itself to this rather simplistic analogy of “chance working alone.” Again, I suggest you look at what physicists and scientific cosmologists have to say about this early scenario: how strictly naturalistic, physical, and eventually chemical processes led to the birth of stars, galaxies, planets, and such. It is only people like Paul Davies, a good scientist, who look for indications that the process required some type of intelligent direction; and they have not managed to make a good scientific case for that hypothesis. Also, John Wilkins, in an article on the web (http://www.talkorigins.org/faqs/chance/chance.html), takes up claims like yours, pretty much refutes them as misconceived, and offers a much more interesting, relevant analogy.]

What produces things in this, our marvelous universe, are the initial conditions, which are carefully tuned and very complex.

[You’re making a big assumption, which is either false or completely misleading, when you assert that the “initial conditions … are carefully tuned and very complex.” Most scientists who work in this area reject this assumption. You assume without argument that initial simplicity cannot eventually result in greater complexity.]

After the initial “Big Bang” a specific amount of energy was released, which congealed down to a specific amount of hydrogen gas. Where did the Hydrogen gas come from? The entire structure of the universe was already implicit in that initial explosion of fire and energy. The entire structure of the periodic table. The four basic physical laws. The properties of carbon molecules.

[Again, a very vague, strange notion of the “entire structure of the universe … already implicit in the initial explosion of fire and energy.” What exactly do you mean by “implicit”? And what is your basis (scientific, logical) for this assertion?]

What time plus chance did is massage this initial set of conditions and move it along to its amazing current state of complexity on earth. Hydrogen condensed into stars and produced helium, and heavier atoms, by a process of fusion. The properties of these stars caused them to explode at a certain stage, flinging these new elements into space, to congeal into new stars and planets.

If you change this set of initial conditions only slightly in any one of dozens of ways, it will halt the entire process of evolution, and all you will have is a box of steel balls, inert and lifeless.

[Maybe, although some physicists question this claim; but even if true to some degree, it doesn’t demonstrate anything about “fine-tuning” of the conditions and dynamics. It surely does not demonstrate that some intelligence must have fine-tuned the primordial conditions so that the universe would be the result. To say otherwise (as you do) is to move too fast and too carelessly.]

So what is the true source of Evolution? Not chance. The specific structure and design of the universe, inherent from the beginning. Where did this structure and design come from? Consciousness and intelligence, obviously. Blind chance does not produce intelligent systems, unless it is working on an intelligent system.

[Here’s an invalid inference based on a questionable premise and a vague question. You assert, but surely have not shown, that random chance could never have resulted in the structure and ‘design’ of the universe. And then you compound your fallacy by drawing a completely invalid inference: that consciousness and intelligence are required. But there is an even more basic problem: what is meant by asking for the “true source of evolution”? Nothing in science and rational thought demands that there be a true source of evolution (presumably an external source). Sometimes the best we can say, with some rational, scientific ground, is that some things just happen. Any stories and fantasies dreamt up by theologically inclined people are simply without relevance.]

Why are modern skeptics and atheists so averse to this fairly obvious observation?
[Easy response: because your so-called “observation” is just a body of fallacious thinking and speculation.]

Because the notion of “God” is tied up with primitive, historical religions, which have claimed to speak for God, and which have propounded absurd laws (like the Islamic laws about women, or the Old Testament laws of “justice”). Religions have done terrible, evil things in the name of their gods (murder, torture, wars, human sacrifice) and have opposed science when it contradicted their dogma.

So modern people have quite understandably concluded, “If this be religion, and this be God, then I want no part of it!” But historic religion is not the final word on the mystery of the universe. And if we are to learn more about the mystery, like good scientists, we must be willing to open ourselves to greater possibilities.

[These “greater possibilities” you lay before us are ‘possibilities’ only in the sense that any number of groundless speculations, fantasies, and just-so stories are ‘possibilities.’ We open ourselves to these “greater possibilities” only at peril to reason and sanity.]

If historic religion is not the final word, the answer is not to give up all religion. The answer is to improve religion, throw out the nonsense, and grow closer to the Grand Designer as HE/SHE really is.

[Now you present us with “The Grand Designer”? You recommend that we “grow closer to the Grand Designer”? How, Bob, do you propose that we do this? Are you sure you’re not pushing a form of supernatural religion in which we pray to the Lord in heaven?]

Robert Richert: First Letter to a Christian Friend

How many times have you heard that you shouldn’t discuss sex, religion, or politics in mixed company? I like talking about religion, but sometimes it is difficult to maintain civil discourse in a ‘one on one’ or small group setting. People become emotional and tempers tend to flare. Thus, I thought I should express my views about religion in the form of a letter. Please understand that my viewpoint is complex and to explain it thoroughly might require the length of a book. However, in this letter I’ll try to confine my remarks to a few pages. I invite you to respond or ask questions.

I am a second-generation atheist (Atheist – one who does not believe in the existence of God – a supernatural being). Both of my parents lost interest in religion at an early age. I was raised without religion, but was not raised to be hostile toward it. I’ve studied philosophy and am familiar with most of the arguments for and against the existence of God. I have read many scholarly articles and books on how the Bible came to be, and about the various religions and cultures that contributed to the development of Christianity. In contrast, most of my Christian friends are quite unfamiliar with the counter arguments to the existence of God and the history and development of the religion in which they believe. When the subject comes up and I share my knowledge, I often make my friends feel uncomfortable. I don’t apologize; they should feel uncomfortable!

I think that most people are religious believers primarily because:
1. They are raised in a particular faith and don’t question it in depth.
2. They are persuaded by the first cause (God caused the universe to exist) and design arguments (God is responsible for the complex ‘design’ of the universe, including life). I won’t go into these here (See Prometheus Books).
3. They ‘believe’ (in a benevolent God and life after death, for example) because it makes them feel good. I call this “Religious Hedonism”.

When religious beliefs become firmly rooted by emotional fervor, those beliefs are rendered immune from rational analysis. It is analogous to being in love; nobody wants to hear criticism of someone they care about deeply. This is one reason why, once people become emotionally attached to their religious beliefs, you can’t pry them away from those beliefs with a crowbar.

It upsets me when religious believers claim that their beliefs are true because they have strong, passionate feelings about them. I do not accept that religious faith is a pathway to knowledge. One’s strength of conviction and religious zeal is not in any way a barometer as to the truth of a belief. After all, Muslim fundamentalists are as committed to and passionate about the truth of their ‘faith’ as are Evangelical Christians. Both can’t be right. I think the evidence is clear that religious faith is subjective in nature. I wrote a three-page article called, “A Critique of Religious Faith”, that explains this in more detail.

It may surprise you, but I am not unilaterally opposed to religion. Most religions incorporate supernatural beliefs, but they are more than that. One can reject the supernatural elements of religion, but admire other aspects of it – the sense of community, some of the ceremonies, some of the ethics, the art, etc. Many of my Christian friends do volunteer work and help the community through their church. They express their ethics by setting a good example. This is what I believe churches ought to be doing. These friends are not trying to undermine science education in our public schools, prevent stem cell research, ban abortion and birth control, trash homosexuals, and spread divisiveness and hate! I can live peacefully with the former, but in my opinion, the latter are causing great harm in society.

At one time, religion was at the center of western culture. Before the invention of the printing press, only priests were allowed to be literate. The church was the primary source of ‘education’. Holy Scriptures, interpreted by an authoritarian priesthood, were not only the sole source of ethics, but of knowledge about the cosmos and almost everything else. Medicine men or priests were the healers. Kings were believed to be more than mere mortals; they were part of a divine succession. Whatever government ruled was always, in varying degrees, subservient to the authority of the church. Today in western cultures, secular institutions have replaced almost all of these things. Religion is no longer necessary for people to be moral, prosper, lead a fulfilling life, or understand our place in the cosmos.

I believe that the natural universe (or multi-verse) is all that exists. I don’t see any credible evidence for anything supernatural – be it gods, devils and demons, angels, ghosts, souls, or anything else that is supposed to exist in that nebulous other world. Consider that in centuries past, supernatural agents seemed to be present everywhere. For example, it was widely believed that angry gods caused earthquakes, floods, etc., and that demons caused disease. In Matthew 8:28, Jesus cures two demon-possessed men by casting the demons out of their heads and into a group of pigs! Today, we know that a complex mixture of natural forces determines weather; shifting tectonic plates cause earthquakes; and germs, genes, and chemical imbalances cause disease and mental illness. Unfortunately, it is still widely believed today that only a powerful God could design life with all its complexity. However, Darwin, and the mountains of subsequent evidence accumulated since his day, have shown that a purely natural, purposeless, yet powerful mechanism – evolution by natural selection – is the primary force in ‘designing’ life (see Richard Dawkins’s book, The Blind Watchmaker). Today, we have exotic cosmological theories (Inflation, String, ‘M’, etc.) that explain how our universe came to be, and some hint that we might be part of a larger multi-verse that is eternal. In summary, over the last 500 years, as science has expanded our knowledge about the universe and our place in it, supernaturalism has been in steady retreat. How many supernatural gods have to fall before one concludes that the natural universe is all that exists and our current version of God is unnecessary as a causal agent or active participant in nature? I think that time is now.

Modern brain research challenges dualism – the concept that mind is something separate from the brain. With increasing success, researchers are connecting specific properties of our mind – memory, visualization, personality, etc. – to specific areas of the brain. If part of the brain is altered or damaged, there is a corresponding dysfunction of the mind. For example, as the physical brain deteriorates with Alzheimer’s disease, the mind loses function and dissolves away until there is almost nothing like a ‘person’ left. Most brain researchers today have discarded dualism as a working hypothesis; some don’t even use the word mind at all! There is only the brain, and one of its functions is what we perceive as ‘mind’. Mind, as something separate from the brain, is a relic of our pre-scientific past. In my opinion, the concepts of the soul and life after death are outmoded because they are based upon this false assumption: that some kind of conscious awareness can exist separately from the physical brain. Brain research also challenges the concept of God because God is, in a sense, a bodiless, non-physical mind.

I don’t believe that moral standards are derived from God. All of our ethical systems, whether they are ‘enshrined’ within religious dogma or not, are the result of the human struggle to cope with the hardships of life and survival. Humans evolved as social animals. Like our ape cousins, our survival and prosperity depend upon cooperating as a group, not acting individually. This is the core foundation for all ethics, yours and mine. Over time, ethical systems have evolved, and continue to evolve, and some basic objective moral standards have emerged (honesty is good, murder is bad, the golden rule, etc.). Your ethics and mine are a mixed bag of our heritage, the environment in which we live, and societal factors. Our ethics are probably not far apart, despite our religious differences, because we were raised in the same culture. However, our ethics and religious beliefs would be far different if we had been raised in a Taliban tribe! Obviously, one does not need religion to be a good person, and conversely, being religious does not necessarily make one a moral person.

My secular humanism (see Kurtz, Prometheus Books) means that I have confidence in human potential. I believe that we humans are capable of making ‘heaven’ here on earth! My ethics tell me that since there is no ‘savior’ to come and bail us out of our troubles, we must act to be our own saviors. Conversely, Christianity is predicated upon human failure. According to Christian theology, at some future date humans will muck things up so badly that Jesus will return and make things right. This doesn’t show much confidence in human potential! Also, I think a wise God could think of a better way of saving humanity than by allowing the killing of his own son. I could never subscribe to such a ridiculous theology, for these and other reasons.

I do not believe that the Bible is a reliable source of morality, nor inspired by a god. It is purely human in origin. While some passages and stories contain good moral messages, much of it is barbaric by today’s standards. For example, the Bible mandates that the method of punishment for infractions of its laws is stoning to death. I saw a video of some people in an Islamic country being stoned to death – it was one of the ugliest, most brutal things I’ve seen – and I’ve been in combat!

The Bible demands, just to name a few examples, that children who curse their parents – men who lie with men – girls found not to be virgins on their wedding night – adulterers – stubborn sons – those who worship other gods – must be stoned to death. This is patently absurd! I doubt that you or any other sane person would act to enforce laws based upon most of these ancient Biblical edicts, much less demand an extreme, barbaric form of punishment for such trivial ‘sins’.

The Old Testament is quite bloody and the Old Testament God is harsh and cruel by today’s standards. For example, in many passages, a petty, jealous, angry, or vengeful God orders his followers to kill entire villages of men, women, children, and infants (Numbers 21:25, 21:34, 25:4; Exodus 32:27; Joshua 6, 10:36, 10:40; I Samuel 15:3, etc., etc.) I’ve heard apologists say that these acts are justified because all of these people were ‘sinners’. Children and infants? Give me a break! Also, what kind of just God would torture his subjects for eternity for merely not believing – or for any crime for that matter? Most of the former Christians turned non-believers that I meet say that punishment in hell was a very real and frightening emotional burden, and a significant impediment to their gradual movement away from this religion. Of course, I don’t believe that heaven and hell exist, other than as human invented concepts. In my opinion, compelling religious allegiance by creating guilt, fear and anxiety with the threat of eternal punishment in hell is immoral and contemptible! All of this is inconsistent with our modern concepts of compassion and justice, and the concept of a compassionate and just God.

The New Testament is an improvement over the Old, but still leaves much to be desired. For example, where is the loud and clear condemnation of slavery? The Bible condones slavery. The Biblical Jesus was no ‘saint’, in my opinion. Never mentioned in church sermons are his harsh, cruel, and outright ludicrous sayings. For example, in Luke 14:26, Jesus is quoted as saying, “If anyone comes to me and does not hate his mother and father, his wife and children, his brothers and sisters – yes, even his own life – he cannot be my disciple”. Sorry, my friend, but should you hope that I will someday convert, hating my family and myself is out of the question! The Bible is chock-full of absurdities like this. Those who claim to believe in the inerrancy of the Bible, and still try to function as moral citizens in our society, are forced to cherry-pick the text for guidance: they ignore or gloss over the absurdities or ‘reinterpret’ ambiguous or harsh passages in such a way as to conform to modern morality.

For the past 150 years, Biblical scholars, archeologists, linguists, and other professionals have turned their talents toward understanding how the Bible came to be. Just to name a few examples, most scholars at major universities and seminaries generally agree that:
1. The creation stories in Genesis are variations adapted from earlier stories. For example, Genesis 1 is largely derived from the Babylonian writing called the “Enuma Elish,” and the story of Noah and the flood from the Babylonian “Epic of Gilgamesh.” The Genesis stories are primitive myths that conform to the ‘flat earth’ thinking of the times and cultures in which they were written, but not to our modern scientific view of the cosmos.
2. Jesus was not born in Bethlehem. The authors of Matthew and Luke placed Jesus’ birth there in conformance with their interpretations of Old Testament prophecy. Some scholars believe that Jesus’ birth stories were added to these texts well into the second century after Jesus failed to return within their lifetimes as promised in Matthew 10:5, 10:21, 16:28 – Mark 13:30.
3. The birth, passion, and miracle stories about Jesus are replete with common pagan and other ancient mythical themes, or are variations on Old Testament stories (virgin birth, celestial object heralding birth, born near the winter solstice, three wise ‘Magi’ visiting the birth, raising of the dead (Lazarus), resurrection after three days, etc.; see below).
4. Mark was the first Gospel written, about 68-70 AD; Matthew and Luke followed at least ten years later. Both borrowed from Mark. John, the last Gospel, was written near the end of the first century.
5. The Apostles did not write the Gospels – they were written by anonymous sources long after Jesus’ death. It’s unlikely that any of the Gospel writers knew or met Jesus; none claimed to know him.
6. The Gospel authors had access to an as-yet-undiscovered document that scholars call “Q” (from the German ‘Quelle’, meaning source). This document contained a list of sayings attributed to Jesus.
7. Many, if not most of the sayings attributed to Jesus in the Gospels are derived from earlier sayings and sources, or added by the authors.
8. The authors of the Gospels were Evangelicals and they wrote from an emerging theological perspective. They were not objective historians.

Gerald LaRue, professor emeritus of Biblical Studies at USC, is a world-class scholar and personal acquaintance. He said, “Analyzing the New Testament Jesus story is like peeling an onion. As one non-historical, fictional layer after another is removed, one ends up with the tiny core. Is this the true onion; is this the true Jesus? There is not enough left to grasp a personality” (The Humanist, May/June 1991). Van Harvey, former chairman of Stanford’s religious studies department, said, “So far as the Biblical historian is concerned, there is scarcely a popularly held traditional belief about Jesus that is not regarded with considerable skepticism” (Los Angeles Times, May 1985).

Long before Jesus, the birth and/or death of all Persian Kings were heralded by the appearance of a celestial object such as a comet or star in the sky. Savior gods allegedly born of a virgin were quite common; so was being resurrected after three days. Mithraism predates Christianity by centuries. This religion became one of Christianity’s chief rivals before the time of Constantine. Mithras was a creator god who returned to earth as a savior. The “Acts of Thomas”, “Oracles of Hystaspes”, and “Chronicle of Zugnin” tell the story of Mithras. They tell of a star that fell from the sky at his birth, that shepherds witnessed the birth, and how Zoroastrian priests called Magi followed the star to worship him. These priests had prophesied the coming of a savior and brought golden crowns to the newborn, “King of Kings”. His birth was celebrated on December 24th. Does this all sound familiar? Incidentally, many other (mostly sun) god births were also celebrated around the winter solstice, which by today’s calendars is December 21st – all long before Jesus was born.

For me it is obvious that not just the advancement of science, but also the accumulation of knowledge acquired by modern Biblical scholarship, undermines the conservative, traditional view of Christianity. I don’t think it is possible to be scientifically informed or objective about the Bible and its history and development and also subscribe to a conservative Christian belief system!

The world suffers from overpopulation, diminishing resources, religious and cultural divisions, weapons of mass destruction, and environmental decline. I have confidence that we humans can overcome or mediate most of these problems. However, to do so, we must move beyond looking backward to ancient religious dogmas for answers. Those who cling to conservative religious and authoritarian belief systems – and attempt to shape the modern world by them – are exacerbating the problem. This is not to say that religion has no place in the modern world. Religions that have ‘evolved’ to become more centered in the ‘here and now’; those that have ‘evolved’ to become humanistic, tolerant, inclusive, and not at war with science, can offer emotional and psychological support, and rally people to help overcome our problems. However, I think that our world leaders must strive to marginalize religious extremism within their midst, and minimize religious and cultural barriers between themselves and their neighbors. Our leaders must turn toward humanistic thinking – the application of science, reason, and human cooperation and compassion – to make the world a better place.

Yes, I am an optimistic secular humanist atheist. I believe we humans can make heaven here on earth – all by ourselves!

Really, Mr. Blaise Pascal, YOU CAN’T BE SERIOUS!

(Somewhat of a “tongue-in-cheek” reply to Blaise Pascal’s Wager)

The Christian philosopher Blaise Pascal (1623–1662) argued that rational prudence dictates that everyone should believe in God’s existence, even those who lack a personal faith in God. According to Pascal, the person who opts to believe has nothing to lose, should it turn out there is no God, and everything to gain, should God exist. On the other hand, the person who chooses to disbelieve has nothing to gain, should God not exist, and everything to lose, should God exist. In simple terms: believe and you risk nothing but stand to gain everything; disbelieve and you risk everything and stand to gain nothing.

Shouldn’t all agree, then, that any rational person would certainly opt to believe in God’s existence? Pascal thought the answer obvious. This argument has been called “Pascal’s Wager.”

Critics have identified a number of problems with Pascal’s argument, and I will not rehash all the good responses that have been given. The main one is simply that nobody, including respected theologians, knows what fate awaits any human in the afterlife, supposing it even makes rational sense to speak of the ‘afterlife.’ Pascal simply relied on very questionable points of Christian doctrine. For now, I will focus attention on Pascal’s key assumptions: that God will punish non-believers solely for their lack of belief, and reward believers solely for their belief in his existence. These are philosophically untenable claims which ignore altogether the moral aspect of religious life.

The God of Pascal’s Christian faith is certainly considered to be an infinitely wise deity. Let us ask: How would an infinitely wise deity treat those human creatures that did not believe in him? Would he punish them by eternal damnation simply for their lack of belief, as many Christians claim?

First, this infinitely wise deity remains hidden from the human world, never giving any clear evidence of his existence. After having remained hidden and mysterious, he allegedly condemns all non-believers, among them the empirically-minded, rational humans who operate on the basis of evidence available to them. These humans conclude quite reasonably that there are no grounds for affirming that a supernatural deity exists. On the contrary, the deity allegedly rewards all the credulous, fantasy-minded humans who proclaim that he does exist. Does this sound like the behavior of an infinitely wise being?

Consider our attitude toward a parent who treats his children in an analogous way. Imagine an absentee father who was never present, never let his children know where he might be, and never supported them, but who later appeared and punished those children who had stopped believing in him, while rewarding the credulous ones who never stopped believing despite all evidence to the contrary. Would anyone hold that such a parent was wise and good? Yet Pascal’s wager can be seen as attributing analogous behavior to the deity, a being supposed to be perfectly good and infinitely wise.

Many of us question whether an infinitely wise being would condemn non-believers, as Pascal and some Christians claim. Wouldn’t an infinitely wise being easily understand why some humans would withhold belief in him? After all, as their creator, he would know that these creatures were given brains, the capability to question things and the inclination to seek evidence for doubtful claims. It would be the opposite of wisdom to punish these creatures for using the faculties that they were given, and to reward those lazy creatures who make no use of their faculties. Doesn’t it make rational sense to conjecture that an infinitely wise deity might hold the prudent believer in contempt for being so credulous?

When we play this theologically speculative game, we have as much reason for inferring contempt for the pious believer as Mr. Pascal had for his assumption that believers would be rewarded. Yes, I know this is part of accepted Christian doctrine. However, looking at all this from a philosophically rational perspective, we can say that Pascal’s assumptions are simply false and untenable.

My guess is that an infinitely wise being might even prefer the skeptics and the agnostics. They are certainly more interesting and entertaining company than pious, credulous folk, who don’t have much to say beyond repetitious “hosannas to the Lord.” An infinitely wise being might prefer someone who can give him a good argument or a good game of chess, over some religious type who simply sings his praises. Let’s not bore the deity; after all, eternity is a long time!

Robert Richert: A Critique of Religious Faith

Most Americans believe that religious faith is one of the noblest of human virtues. Indeed, many people claim that religious faith is the cornerstone of their spiritual well-being. Following are three definitions of faith from my Webster’s dictionary: “Belief and trust in and loyalty to God”; “Something that is believed with strong conviction”; and “Firm belief in something for which there is no proof”.

Analysis and criticism of religious faith is a difficult task because of the ambiguity of the word’s usage, the value accorded it in our society, and the passion that it arouses. However, I think this task needs doing and deserves the public’s attention.

One problem I have encountered repeatedly is that people often use different meanings of the word ‘faith’ within the same context. For example, religious believers often begin a discussion by saying that they believe because of faith. However, when pressed, they shift the emphasis toward their strong convictions about their beliefs, as if these were one and the same. Thus, when a skeptic criticizes faith as a justification for belief, the criticism is often construed as a personal attack upon the believer’s character. This shift of emphasis often places the critic rather than the believer in an uncomfortable position. Therefore, skeptics should demand clarification at the outset and illuminate the following important distinction: believing on the basis of faith is not the same thing as having strong feelings about the belief. The former is an argument and the latter is an expression of passion. Conservative Muslims are as committed to and passionate about the truth of their beliefs as are Evangelical Christians. Obviously, one’s strength of conviction and expression of passion is not in any way a barometer of the truth of a belief.

A common claim is that everyone, even atheists, believes in ‘something’ based upon faith. Here are some typical examples that I have heard from religious believers: When we stop at a red light at an unfamiliar intersection, we have ‘faith’ that the light will turn green. We have faith that the sun will rise tomorrow, or that our car will start in the morning. The argument is that if everyone believes some things based upon faith, religious faith is justified. However, according to the Bible (Hebrews 11:1), “Faith is the substance of things hoped for, the evidence of things unseen.” Traditionally, theologians have interpreted the latter phrase to mean that the existence of God cannot be proven by the ordinary rules of evidence and experience, and/or that evidence for his existence is of a mysterious nature. They argue that faith is a special way of knowing, distinct from mere reason and everyday experience. However, the word ‘faith’ used in the mundane examples above is more accurately defined as “confidence gained through experience in the routine of daily life.” This is not religious faith! In fact, the use of the word faith in the above examples stands in direct contradiction to traditional biblical and theological interpretations.

Along with its ambiguous usage, another problem is that there are varying degrees of faith. For example, knowing the possibility of an accident, I might nonetheless maintain a small degree of faith that I will reach my destination safely and decide to drive on the freeway. This is not the same thing as believing I will be safe based totally on faith; it is a matter of weighing probabilities. For some, faith may play a small part in everyday decision-making; for others, none at all. It depends upon how one evaluates such situations. In any case, to passionately believe in God on the sole basis of religious or blind faith is not at all like having a small degree of faith in a decision based primarily on evidence or experience.

Many believers say that their faith is not grounded in a vacuum; they have trenchant personal experiences confirming the truth of their faith. No doubt people have vivid, even life-changing religious experiences. However, it is reasonable to question their interpretation and whether they connect to something beyond the individual mind. People with strong religious convictions tend to construe their religious experiences through the rose-colored glasses of entrenched beliefs. For example, the Virgin Mary appears frequently in Catholic miracle stories and visions, but almost never in Protestant versions. Almost all of the world’s leading religions contain stories of powerful mind-altering personal experiences, and these usually reinforce the existing belief system or a current religious trend. Often, the theological message within one religious experience stands in contradiction to those within other religions. All of this strongly suggests that such occurrences, emotionally moving as they may be, are subjective in nature. In any case, if people claim that these events are evidence that their beliefs are true, they are not basing those beliefs solely upon faith.

We have all heard the expression “Faith moves mountains.” I usually counter this statement by saying, “Yes, but sometimes people motivated by strong faith move those mountains on top of people of different faiths!” We have all heard stories about people motivated by faith accomplishing wonderful things such as building hospitals and serving the poor. However, faith has an ugly dark side. I can’t think of a more timely and poignant example than the September 11th terrorist attacks. The perpetrators, devout Muslims, believed that Allah would reward them in the afterlife. Most certainly, they were motivated by strong religious faith! It should seem crystal clear after 9-11 that having strong religious convictions is no guarantee of good works and ethical conduct. Yes, sometimes faith works for the good, but sometimes it works for the bad. Yet this dark side is seldom acknowledged in our society. Many religious people are reluctant to attribute evil acts to strong religious faith. They offer presumptive and arrogant rationalizations: the terrorists strayed from their ‘true’ faith, or they didn’t believe in the ‘correct’ religion in the first place. Not only do our three major faiths have significant doctrinal differences, each contains many denominations and scriptural interpretations. Who is to say which, if any, are correct, and on what basis? I sincerely doubt that any atheist could be convinced to fly a jetliner into a building because of potential God-given rewards in a heavenly afterlife!

Imagine a criminal defendant saying to the jury, “I have no alibi and no evidence to support my claim of innocence. Just have faith that I didn’t do the crime.” Most people would take this comment with a grain of salt! Imagine a scientist claiming that although he has no evidence for his hypothesis, it is true and must be taken purely on faith. This scientist would be labeled a crackpot. Would a wise consumer buy a used car based solely on his faith in a total stranger’s testimony that the car is in perfect condition? No! Religion is the only major aspect of our culture in which faith is not just acceptable but heralded as virtuous. Although it may be claimed that religious faith is distinguishable from the other forms of faith in the examples above, I don’t see any substantive difference, only a double standard. The elevation of religious faith to a high virtue strikes me as an example of special pleading. If faith submitted as evidence or justification for belief isn’t acceptable in our courts, in science, or when purchasing a car, it shouldn’t be acceptable as a basis for believing in religion.

How can one have evidence for “…things unseen,” meaning things beyond or above reason, experience, and scientific knowledge? By what process does one weigh the truth of one religious faith against another that differs from it? If faith is a personal, intuitive process, why should we believe that this intuition is tapping into anything that is objectively true? I have not heard any cogent responses to these questions. Inadvertently, the first phrase of the biblical definition, “Faith is the substance of things hoped for…”, may provide an answer. It strongly suggests that faith is based upon wishful thinking. Thus, faith isn’t about discovering truth; it’s really a form of religious hedonism: people of faith believe what they desire to be true; they believe because it makes them feel good, even in the absence of supporting evidence or despite contrary evidence. Elevating religious faith to a noble virtue thus provides the socially protective cloak that enables an emotional justification for the rationally unjustifiable. I find this deception reprehensible, not to mention immoral! As for claims about religious faith leading to truth, skeptics must demand more than strong convictions, passionately felt personal experiences, and wishful thinking. Skeptics need solid evidence derived independently of personal bias.

For all of the reasons above, I think that religious faith is not a pathway to knowledge. It is a subjective experience deeply rooted in human emotional wants and needs. As a means to truth, it is not just irrational, but anti-rational. Throughout history, religious thinkers have defended faith by attacking human reason. Martin Luther said repeatedly that reason was the enemy of faith. Even in our scientific age, many religious thinkers argue that human reason is limited and inadequate, and that faith is superior. When believers elevate religious faith to high virtue and use the word ambiguously, whether intentionally or not, they are attempting to insulate themselves from the burden of proof. I think that all beliefs we deem important and hold with strong convictions should be based upon solid, reliable evidence. To the contrary, religious faith is glorified ignorance masquerading as truth.

Religious faith should not be heralded as a noble quality or as a hallmark of human virtue in any educated society.

by Robert Richert

Was the ‘Jesus’ of the New Testament a historical person?

Did the ‘Jesus’ described in the New Testament really exist as a flesh-and-blood person, walking the hills of Galilee, teaching, preaching, healing, and working miracles in Palestine?

I don’t think so.

Here is a summary of my reasons, drawn from a study of a variety of popular and scholarly writings on the subject. Of course, there’s much more to be said.

1) References in the New Testament were written long after the events described and are very problematic. In many instances they’re not consistent with each other and not consistent with known historical facts. For example, the ‘Jesus’ of the Synoptics is very different from ‘Jesus’ of the fourth Gospel (John) and from the ‘Jesus’ of Pauline writings. There is much evidence to suggest the various accounts of ‘Jesus’ were written to serve doctrinal purposes, not as historical accounts of an actual person.

2) References in Jewish writings of the time and those referring to the time of ‘Jesus’ are sketchy and very inconclusive, and do not even make a clear connection to the Jesus of the Gospels. The oldest surviving documents of the time —the Dead Sea Scrolls— do not even mention ‘Jesus’ or any of the episodes described in the Gospels.

3) There are virtually no independent, secular references to the man Jesus, certainly none which can be used as clear evidence that he did exist. References in the writings of Josephus are problematic, probably later interpolations. Those by Tacitus are even more doubtful. The best one can do is point out that other figures in the Jesus stories appear to be historical individuals; e.g. John the Baptist, Pontius Pilate, and maybe Peter and James the Righteous (the brother?).

4) Many aspects of the Jesus story are clearly the stuff of myth and legend, e.g. Paul’s version of the Christ who died to redeem mankind; and the ‘Jesus’ of the fourth Gospel (John) where he is identified with the spiritual Logos, a Greek concept.

5) Therefore, most probably the composite ‘Jesus’ of the synoptic Gospels, of the Gospel John and of Paul’s writings did not really exist. The ‘Jesus’ of the New Testament is more likely the product of developing Christian doctrine of the late 1st and 2nd centuries.

————
Corollary: If there is a historical basis for the figure of ‘Jesus,’ he is likely a Jewish preacher-teacher-healer who existed in the early decades of the first century, lived in the areas of Galilee and Jerusalem, and attracted a following and enough attention to get himself executed by the authorities (likely the Roman authorities, with the complicity of the Jewish Temple authorities).
(Call him “Yeshua.”)
[There are many theories and legends as to who this individual really was and what he actually did and taught.]

—————-
Reflections on the problem:

Suppose we have this situation: All significant references to someone known as “Q” are consistent with each other and consistent with contemporary, known facts. Unless we have reason for thinking otherwise, the reasonable hypothesis is that these various references point to the same individual “Q”; this ‘working hypothesis’ will be stronger when there are living people who knew and interacted with Q. Generally in such cases it is plausible (sometimes easy) to separate facts about Q from fiction/myth/legends/exaggerations/etc. about Q.

None of this applies to the ‘Jesus’ of the New Testament.

But these things apply to most unproblematic figures from the past, even the distant past. We have no reason for doubting that Abe Lincoln, George Washington, and Thomas Jefferson existed, although in each case much fiction and myth can confuse the issue. We know that JFK existed, as did Martin Luther King and Ronald Reagan, although they’re no longer around; many people are still with us who knew them personally and interacted with them. No problem as to existence and identity here. Even in the case of ancient figures such as Plato, Aristotle, and Euripides, or medieval figures like Thomas Aquinas, Duns Scotus, and Martin Luther, or later figures like Baruch Spinoza, Leonardo da Vinci, and Galileo, we don’t have any reason for doubting that they existed and did – more or less – what they’re credited with (or blamed for) doing.

The case of Socrates presents interesting questions, in some ways comparable to questions regarding Jesus, but in significant ways very different. Likely there aren’t any credible reasons for questioning the existence of an individual known as Socrates, even though he did not leave any written works. Generally, we can agree that Socrates is a historical figure who existed in ancient Athens; but we’re mostly limited to the writings of others, primarily Plato (also Xenophon and Aristophanes), for specific information about the man. These writers lived during the lifetime of Socrates (unlike the writers who first inform us about Jesus), and their writings were read by people who had independent knowledge of Socrates; so Plato could not take too many liberties with his characterization of Socrates. Therefore, we can rely on the descriptions given by Plato, Xenophon, and even Aristophanes’ satire as fairly good guides to the character and tendencies of Socrates. Furthermore, there is nothing that suggests a legendary, supernatural figure in all this, contrary to some of what we find with regard to Jesus.

We are presented with problems of identity and existential status when the referenced figure existed long before the direct memory and experience of anyone still living; and when the references to this figure are problematic: e.g., inconsistent with each other, inconsistent with known historical facts, laden with the stuff of myth and legend, and devoid of a significant body of unproblematic references. In such cases, it is virtually impossible to separate fact from fiction; and virtually impossible to expose the historical, factual individual “J”. We have no way of establishing, for a neutral, objective observer, that multiple references to “J” are really pointing to the same individual. We have no clear grounds for denying that most references to “J” are references to a fictional, mythical figure.