Post: LONDON BURNING

England has developed a culture of entitlement which meets the basic material needs (and more) of young people but also emotionally consigns them to dependency on the welfare state. There is little possibility for advancement except via greater and greater government handouts . . . and now the bills are coming due, which actually means fewer and fewer government handouts. Hence, the rage. In the US, there is much greater economic mobility; if you don't significantly screw up your life by the time you turn 21, the chances are pretty good that you are going to move upwards over the course of your life. (Most rich Americans are elderly and did not start out rich.) America, however, is headed in the direction of England, which is one more reason we need to rein in entitlements sooner rather than later.

Post: CAPITALISM AND CONSEQUENCES


It is a brute fact of human nature that a certain percentage of the population will always find ways to screw up their lives—regardless of government efforts to keep them from doing so. But that propensity for self-sabotage will necessarily undermine their measurable equality with their fellow citizens. In a capitalist economy, if two employees with equal qualifications are hired at equal salaries on the same day, but Employee A goes out and gets drunk every night, while Employee B goes home and gets a good night’s sleep, over time the quality of Employee A’s work will likely suffer in comparison with the quality of Employee B’s work. So, too, will his rewards.




Article: TALKING ABOUT GOD

Philosophy Now



March/April 2005
  

Talking About God

by
Mark Goldblatt


What does it mean to say “God is just” or “God is merciful” or “God is loving”? Do such statements mean anything, rationally? In this essay, I’m going to argue that they don’t--not if we follow the epistemological premises of Thomas Aquinas to their logical conclusions. If I’m right, then Thomistic theology ultimately negates itself, and the first sentence of Genesis, In the beginning God created the heaven and the earth, begins and ends rational discourse about God.
So let’s begin, in the beginning, with the first line of Genesis--which was accepted as literally true by Thomas for the very reason that it was the first line of Genesis; it was revealed truth, and thus dependent on faith. Yet he acknowledged the rational possibility that the world had no beginning, that it had existed with God "from eternity." In a previous essay (PN 44, Jan/Feb 04), I contended that the beginning of the world was in fact demonstrable--a position held by Thomas’s contemporary, Bonaventure. If Bonaventure was right, then Thomas was wrong even to acknowledge the possibility that the world had no beginning. 

Article: LIBERTY, LOGIC AND ABORTION

Philosophy Now


June/July 2002
  

Liberty, Logic and Abortion

by
Mark Goldblatt

January 22nd has evolved into a national red-letter day of sorts, the anniversary of the Supreme Court's landmark 1973 Roe v. Wade decision. Commemorations abound, typically sloganeering rallies, and mainstream journals churn out perfunctory retrospectives. Still, abortion remains the most divisive issue in the United States since the abolition of slavery. Meanwhile, the very details of the ruling seem to recede, year by year, deeper into a foam of overwrought rhetoric. Specifically, therefore: Roe guaranteed a woman's right to terminate a pregnancy in the first trimester (in all instances) and in the second trimester (to safeguard her own well-being, broadly defined); only in the third trimester of pregnancy, the Roe decision held, could the rights of the fetus be taken into account and abortion restricted by the state.


What I’m about to argue is that the debate over abortion now continues on three distinct but inter-related levels. The first level, which might charitably be called Popular Arguments, consists of absolutist slogans and unexamined logic; it’s this level of argumentation that often crops up at unpleasant dinner parties and on phone-in talk shows. The second level consists of nuanced speculations on natural law and individual rights; it’s conducted by moral philosophers beneath the radar of mass media. The third level consists of evolving theories of constitutional interpretation; it’s conducted in the courts by legal scholars. To get at the second and third levels of the abortion debate, however, it’s useful to address the popular arguments--if only to demonstrate that the issues involved are necessarily nuanced and evolving.


On the “pro-choice” side of the abortion debate, two popular arguments dominate:


A woman has the right to control her own body. Though the reasoning is much older, the feminist author Susan Brownmiller attributes the precise wording to Patricia Maginnis, founder of an illegal abortion referral service in California in the years prior to Roe. As a defense of abortion rights, however, the argument is either demonstrably false or logically meaningless. Even if we grant the contentious point that the fetus is part of the woman's body, it’s simply untrue that American citizens, male or female, exercise absolute sovereignty over their own bodies. If that were the case, not only would 49 states have to join Nevada in the legalization of prostitution, they would also have to permit the selling of internal organs for transplant--which would in turn warrant the bodily exploitation of poor people by wealthy people. For good reason, therefore, there’s no right to absolute bodily sovereignty. However, if we take Maginnis's words in a weaker sense--specifically, that a woman's right to control her body extends to include the right to terminate a pregnancy--we’re left with a tautology: a woman should have the right to an abortion because a woman has the right to an abortion.


Without access to legal abortions, women will be forced to obtain unsafe, illegal abortions. Let’s, again, grant the hidden premises, namely: 1) if states could outlaw abortions altogether, many would, and 2) women in those states who couldn’t afford to travel would turn to "back alley" alternatives. This is, nevertheless, an attempt to resolve a moral question on practical grounds. The danger is that identical logic could be applied to, say, cockfighting. Like abortion, cockfighting is apt to continue despite sanctions; legalization, thus, would let the government ensure its conduct under more wholesome circumstances. Certainly, practical considerations can influence our decision to enforce a law, or mitigate the penalty when a law is broken; they cannot, however, determine the morality of the law itself.


The "pro-life" camp rests its case on a single popular argument:


Abortion is murder. The hidden premise here is that the unborn life, from its conception, constitutes a legal person. Even among those who profess this, however, few adhere to its literal truth. The equation of abortion with murder logically entails its banning in all circumstances--including to save the mother's life, or to end a pregnancy resulting from rape or incest. Yet according to a 2000 Los Angeles Times poll, 85% of Americans would allow abortions if the mother’s physical health is at risk, 54% if her emotional health is at risk--numbers impossible to reconcile with the 57% who claim that abortion is murder.


Common sense, moreover, urges that we distinguish among the intrinsic values of a two-celled embryo, a second-trimester fetus, and a two-week-old baby. Such distinctions are, indeed, made by most pro-life apologists; the proof is their quick condemnation of violence at abortion clinics. But if those apologists considered abortion literally murder, then such violence would represent a morally justifiable response to infanticide--even as bombing a guards' barracks at a Nazi death camp would be morally justifiable.


Clearly, then, the popular pro-choice and pro-life arguments dissolve under logical scrutiny. Which leads to the second level of the debate, the exchanges among moral philosophers. Without exception, these arguments hinge on a fundamental conflict of rights: the right to life of a potential human being (to omit the word potential is to presuppose what one side wouldn’t admit) versus a liberty-right of a pregnant woman.


The pro-choice side has put forward two notable arguments on this level:


The sick violinist analogy. Two years prior to Roe, Judith Jarvis Thomson published an influential essay called "A Defense of Abortion" in which she developed an analogical case for a woman's right to end a pregnancy. Suppose, Thomson writes, you awakened to find yourself hooked up through intravenous tubes to a desperately ill violinist. The violinist, you’re informed, requires the shared use of your kidneys for nine months--or else he’ll die. Clearly, it would be generous if you agreed to continue such an arrangement. But can the state compel you to save the violinist? Clearly not. Likewise, Thomson reasons, the state cannot compel a woman to continue a pregnancy.


The problem with Thomson's argument, as Ronald Dworkin points out in his book Life’s Dominion, is that it assumes "a pregnant woman has no more moral obligation to a fetus she is carrying . . . than anyone has to a stranger." The analogy fails to take into account the unique bond between fetus and mother. But if the law disregarded that relationship, one consequence would be no natural presumption on the mother’s behalf in custody disputes--so if both parents sought custody of their child, the mother’s chances would be no better than 50-50. Such logic would, at minimum, allow husbands to leverage more lenient terms of separation in exchange for a promise not to seek primary custody.


The pre-sentient fetus, although it may have a “detached” value, has no “derivative” rights. Though skeptical of Thomson's analogical argument, Dworkin himself develops a multi-layered defense of abortion rights in Life’s Dominion. He grants, from the start, what many pro-choice advocates deny: that the unborn life, at whatever stage, possesses an intrinsic value. Even in its embryonic phase, it’s more than a protoplasmic blob, waiting to be surgically disposed of. Human life, according to Dworkin, from conception to expiration, "is sacred just in itself." It is people's sense of that essential sanctity that forms what he calls the "detached" objection to abortion ("detached" since he’s detaching the objection from any thought that the fetus possesses rights of its own). This accounts for the rape and incest exceptions many pro-lifers allow--they don't really think that the fetus is the equivalent of a person, or that abortion is therefore the moral equivalent of murder. But their perception of the sanctity of human life, in every stage, leads them, except in extreme cases, to condemn the practice. Dworkin then distinguishes the "detached" objection, based on human sanctity, from the "derivative" objection, based on the right to life a fetus might derive from its own interest in being born. Dworkin terms the latter objection “scarcely comprehensible” for the pre-sentient fetus, noting that to possess a "right," something must first possess an "interest." And to possess an interest, a thing must be sentient. For example, a mink is sentient; people ascribe different levels of consciousness to minks, but no one disputes that a mink seeks to avoid physical pain. Minimally, it follows, a mink has an interest in avoidance of pain. Whether a mink derives rights from that interest is, again, arguable. Animal rights activists argue it does; fur designers argue it doesn't.


The onset of sentience in fetal development (occurring near the end of the second trimester, with the first hints of a nervous system) is key for Dworkin because a non-sentient thing cannot possess even a minimal "interest" in the avoidance of pain; hence, the pre-sentient fetus cannot derive rights based on non-existent interests.


Dworkin's case is powerful. On the one hand, he addresses the qualms of many pro-life advocates whose sense that abortion is evil admits inexplicable exceptions in instances of rape and incest. On the other hand, his distinction between the pre- and post-sentient fetus parallels Roe’s own distinction between abortions in the first two trimesters and abortions in the third trimester.


As with the popular level of the abortion debate, there is only one significant pro-life counter-argument on the second more nuanced level.


The argument from natural law. The principal objection to Dworkin's analysis is taken from natural law theory. Natural law effectively unites the derivative and detached objections to abortion and, thereby, transcends the logical pitfalls inherent in each individually. The premise underlying natural law is that God is a willing partner in every act of procreation. It’s not requisite to the derivative case, as Dworkin presupposes, to hold that the fetus has interests of its own in its pre-sentient stage; rather, natural law teaches that God has an interest in the fetus. Moreover, according to natural law, it’s from God’s interest in the fetus, signaled by his creation of an immaterial soul, that the fetus derives its right to life. So, too, though Dworkin doesn’t acknowledge it, natural law underlies his "detached" objection to abortion based on people's sense of the essential sanctity (Dworkin's word choice is significant) of human life in every stage. For that sense, however vague, is predicated on the belief that human beings are distinct from, and more valuable than, other living things. The only aspect of human beings that might "sanctify" them, even in Dworkin's limited usage, is a God-created soul. Brute intelligence cannot serve as the criterion for sanctity--or else killing a mentally retarded person would be less evil than killing a university professor. Without an implicit recognition of man's spiritual nature, in other words, Dworkin's invocation of sanctity becomes meaningless.


It is simply the case that many Americans cling to the notion of a third active participant in the process of human creation--namely, God. The readiness of many pro-life advocates to permit an exception to save the mother's life becomes explainable, in this view, in virtually quantitative terms: the mother's life carries both her own and God's interests--from which a full panoply of rights derive; the fetus's life carries God's interests alone, from which only a partial set of rights derive. By two criteria to one, the mother's life wins out. To be sure, a natural law conception of rights was critical to Thomas Jefferson when he wrote that all men were created equal and endowed by their Creator with certain unalienable rights. The singular noun "Creator," notwithstanding the limited Deistic meaning Jefferson himself might have attached to it, is inexplicable without reference to ensoulment. For in what sense would God be counted the Creator of every person except as the Creator of his soul? Two human parents are a person's creators, in the plural, with regard to his material nature. It is God's creative interest in every human soul, according to Jefferson's formulation, that endows people with unalienable rights. And it is the existence of these unalienable rights that the positive statutes of the Constitution were arguably designed to safeguard.


Which leads us, finally, to the third level of the abortion debate, a level more sophisticated than the sloganeering of public rallies and more pragmatic than the rarefied speculations of moral philosophers; it’s the level engaged by legal scholars, consisting of variant readings of precedent and conflicting theories of constitutional interpretation. The issue also narrows from a consideration of abortion per se to a consideration of the Roe v. Wade decision of 1973. Three positions, at minimum, must be considered:


Roe v. Wade was the right decision for the right reason. Justice Harry Blackmun, author of Roe, declared that states couldn’t enact bans on abortion during the first trimester of pregnancy because such laws violated a woman's constitutional right to privacy in the personal matter of procreation. Yet the Constitution itself doesn’t mention a right to privacy. There was, however, a specific precedent for such a right in the 1965 Griswold v. Connecticut decision wherein the Supreme Court overturned laws forbidding the sale of contraceptives to adults--on the grounds that people enjoyed a constitutional right to private decision-making in certain personal matters that no legislation could rescind. The Griswold ruling, itself, was based on the "due process" clause of the Fourteenth Amendment: "Nor shall any State deprive any person of life, liberty, or property, without due process of law." The Court ruled that buying contraceptives, as a private matter which harms no one, is a "liberty" guaranteed under the amendment. The conceptual leap from Griswold to Roe isn’t far--unless, of course, the fetus were considered a "person" with its own set of rights. In that case, Griswold might still stand, but Roe, because of the harm done to the fetus/person, would fall by the wayside. Indeed, if the fetus were considered a person under the Constitution, states would be compelled to ban abortion by the equal protection clause of the Fourteenth Amendment--the fetus itself would qualify for equal protection. The question of whether to permit abortion couldn’t, in that case, be left to individual states any more than the question of whether to permit infanticide. Yet even before Roe, many states permitted abortions in the first two trimesters. Thus, historically speaking, there is no case for the pre-sentient fetus as a fully-protected constitutionally-defined person.


To summarize Blackmun's argument in the Roe decision: since the pre-sentient fetus cannot be defined as a person under the Constitution, and since the right to privacy in the matter of procreation has been affirmed by the Griswold case, it’s unconstitutional for states to prohibit abortions--at least until the fetus develops into a constitutionally-recognizable person.


Roe v. Wade was the right decision for the wrong reason. The feminist lawyer Catharine MacKinnon, though vehemently pro-choice, has criticized the right-to-privacy argument used by Blackmun to decide Roe. MacKinnon reads the Blackmun decision to mean that a state has no compelling interest to interfere in actions occurring in the privacy of the home, and she worries that the state might thereby forfeit the right to rescue a battered wife from her abusive husband. But, as Ronald Dworkin points out, MacKinnon has failed to distinguish among three senses of "privacy." The right to privacy can be construed as an assertion of spatial sovereignty ("I control what happens in my home.") or of confidentiality ("No one has a right to know what happens in my home.") or of freedom in decision-making on private matters ("Certain issues are mine alone to decide.”). The thrust of Blackmun's right-to-privacy argument is only the third sense. A battered wife, thus, would still be entitled to the state’s protection; indeed, her liberty interest, affirmed in the Fourteenth Amendment, guarantees her freedom from physical coercion and virtually obliges the state to take action on her behalf.


Nevertheless, many feminists maintain that women’s abortion rights should rest on the state's burden to promote gender equality--a burden mandated, it’s argued, by the equal protection clause of the Fourteenth Amendment: "No State shall . . . deny to any person within its jurisdiction the equal protection of the laws." The state, according to the standard reading of the clause, cannot enact legislation to curb the liberty of a particular class of citizens--it cannot, for example, make a law restricting the access of black people to bank loans. Banning abortions, the argument goes, would prevent only women from getting them--thus curbing only their liberty; the ban would have no practical effect on men. Therefore, such a ban seems to violate the equal protection clause. But the immediate reply is that banning abortions means men cannot get them either--a liberty their biology renders moot.


There is, however, a deeper, more controversial, version of this argument, associated with MacKinnon and Andrea Dworkin. It begins by observing that the decision to bear a child significantly impacts a woman's social and economic status. According to a 1995 Census Bureau report, women earn only 76 cents for every dollar their male counterparts earn. However, among women aged 27-33 who’ve never had a child, the figure is 98 cents. In short, there exists a distinct correlation between a woman's decision to start a family and her future ability to earn money at a rate comparable to a man. Moreover--and here the argument's more controversial elements emerge--a woman cannot be held fully responsible for her pregnancy because sexual intercourse, in a patriarchal society, is inevitably coercive, an act of aggression by which women's subservient status is maintained. Andrea Dworkin has famously asserted that sex is never wholly consensual--a position caricatured (inaccurately) as the equation of sex with rape. In reality, her argument is more subtle; the true insidiousness of patriarchal culture, she contends, is that it instills an unconscious sense of the rightness of socially-constructed gender roles. Men are conditioned to find pleasure in dominance, whereas women--against their self-interest--are conditioned to enjoy submissiveness. Thus, she reasons, sex is always coercive; a woman’s decision to become pregnant can never be construed as free. The state itself, in a patriarchal society, is complicit in every unwanted pregnancy. And since the state has a responsibility, stemming from the equal justice guarantee under the Fourteenth Amendment, to promote gender equality, it must ensure women legal recourse to end their pregnancies.


The difficulty with Andrea Dworkin's reasoning is that it constitutes less a specific argument than a fully-formed ideology. If you accept the premises, it becomes a powerful pro-choice case. But there's no compelling reason to accept her premises about men and women.


Moving on, there is the argument that abortion rights should be grounded in the First Amendment's non-establishment of religion clause rather than the Fourteenth's due process or equal protection clauses. The argument runs as follows: Since, as we’ve seen, the pre-sentient fetus cannot be deemed a person under the Constitution's definition, the issue of life's beginning must remain a question of personal religious conviction--not statutory determination. Thus, to curtail a woman's access to an abortion is tantamount to the state's prescribing a religious belief. The state cannot, in other words, insist that life begins at conception--individuals may believe that, but they cannot band together, even through the exercise of electoral majority, and thereby deprive others of a liberty rooted in religious dissent.


As we’ll see, however, the principle of non-establishment of religion cuts both ways in assessing the Roe decision.


Roe v. Wade was the wrong decision. Given the Griswold decision as the precedent for a right to privacy, and plausible further support from the equal protection and due process clauses of the Fourteenth Amendment, and the non-establishment of religion clause of the First Amendment, the classification of Roe as an outright constitutional error would seem dubious. Nevertheless, a case can be made that cedes to Roe both legal precedence and interpretive likelihood but argues against the decision on the grounds that it violates the underlying vision of the Constitution. The Constitution, remember, was written "to form a more perfect union"--a perfection that can only be gauged in terms of the government’s ongoing ability to secure certain individual rights based on the equality of every citizen before the law. But which rights? The right to vote? To be sure. To worship freely? Yes, certainly. The right to a high school education? Again, yes. To a job? Probably not. To urinate in public? No. To drive 90 miles per hour on the interstate? No. Though a person may develop a temporary interest in public urination or highway speeding, he never acquires a corresponding right. Evidently, then, the individual rights a government does well to secure are circumscribed occasionally by a concern for collective well-being. The public interest, in other words, can at least occasionally trump the private.


Now consider whether there exists a "right" of reproductive choice--and for the moment, let’s concern ourselves not with the contentious question of abortion but with the largely settled question of procreation. Imagine the case of a mentally retarded couple who want a child. In the first half of the last century, this issue was far from settled; 24 states, between 1911 and 1930, enacted sterilization laws aimed at the mentally retarded. The Supreme Court itself upheld the constitutionality of such laws in 1927--a decision featuring Justice Oliver Wendell Holmes's now infamous remark: "Three generations of imbeciles are enough."


From a utilitarian view, which prioritizes the collective weal, it’s bad for a mentally retarded couple to procreate. It’s indisputable that their offspring are more likely to be mentally retarded than the offspring of a couple in which neither parent is retarded, or in which only one parent is; it’s also indisputable that a mentally retarded child is more likely to require public assistance. How, then, does the private interest of the mentally retarded couple trump the collective interest? Whence, given the likelihood of a burdensome outcome, a mentally retarded couple's right to procreation?


The answer returns us to the second level of the abortion debate, to the theory of natural law and unalienable rights. The equality of persons before the law derives not from a measurable equality, such as I.Q. That equality plainly doesn’t exist. Rather, equality before the law derives from an immeasurable equality--namely, God’s endowment of a soul. Though the collective weal might be served by social programs that pre-sorted individuals according to their unequal endowments, that channeled only the intellectually promising towards higher education and only the physically gifted towards sports, the suggestion is constitutionally repugnant because the collective weal is trumped by the equal endowments of life, liberty and the pursuit of happiness--equal endowments which, in the only coherent reading of Jefferson's words, are themselves rooted not in our physical but in our spiritual natures.


In short, if we understand the Constitution as the system of positive laws by which Jefferson's "unalienable rights" are secured, then the Constitution implicitly compels the states to recognize ensoulment--that is, God's work in the creation of personhood--as the starting point for equality before the law. Even if, as Ronald Dworkin contends, the fetus has no interest in its own survival, and thus derives no right to life from its own interest, nevertheless, God's unique interest in the fetus's existence, signaled by the act of ensoulment, might well be sufficient to secure the fetus's unalienable right to life. The difficulty with the Roe decision now becomes manifest: on the one hand, the states must affirm ensoulment as the actualizing element of legal protection; on the other hand, the states, in the wake of Roe, cannot define the moment of ensoulment before the third trimester.


Let me restate that.


If the Constitution is meant to secure the unalienable rights of life, liberty and the pursuit of happiness, it compels states to recognize ensoulment by God as the basis of human equality. If that’s indeed how the Constitution is understood, then, strangely enough, by denying the possibility of ensoulment at conception, Roe violates the non-establishment of religion clause of the First Amendment.


That would, I think, comprise a knockdown argument against Roe were it not necessarily prefaced by the word if. The linking of the Constitution to the unalienable rights passage in the Declaration is one way to understand the Constitution--perhaps the likeliest way. But it’s not the only way. The Constitution can also be understood as merely a set of practical guidelines--a means to procure civil order rather than a reflection of higher ideals. Or it can be understood as a compromise between pragmatic and idealistic ends. As your understanding of the Constitution's intention changes, so necessarily must your method of interpretation.


Conclusion. So what’s the status of Roe v. Wade in 2002? It may be that the single reed by which the decision hangs is the legal principle of stare decisis--that is, the respect current courts owe prior courts’ rulings. In other words, unless incontrovertible evidence can be marshaled that a past case was decided erroneously, the earlier decision must stand. The defense of Roe on these grounds was made eloquently by the Supreme Court’s majority ruling in the 1992 case of Planned Parenthood v. Casey. It merits quoting at length:

Overruling Roe’s central holding would not only reach an unjustifiable result under stare decisis principles, but would seriously weaken the Court's capacity to exercise the judicial power and to function as the Supreme Court of a Nation dedicated to the rule of law. Where the Court acts to resolve the sort of unique, intensely divisive controversy reflected in Roe, its decision has a dimension not present in normal cases, and is entitled to rare precedential force to counter the inevitable efforts to overturn it and to thwart its implementation. Only the most convincing justification under accepted standards of precedent could suffice to demonstrate that a later decision overruling the first was anything but a surrender to political pressure and an unjustified repudiation of the principle on which the Court staked its authority in the first instance.

The principle of stare decisis, in short, upholds Roe because no knockdown argument against the decision has emerged. There have been telling arguments against Roe--as I’ve endeavored to show. But no knockdown argument. Moreover, it’s fair to speculate that no such knockdown argument can emerge since it would have to establish the full personhood of a pre-sentient fetus. This differentiates the issue of abortion from the issue of slaveholding--an instance in which stare decisis was unable to uphold the legality of an institution--since the full personhood of human beings of African ancestry became, in time, undeniable.


What, then, is the status of the abortion debate? Certainly, the practice of abortion, even at the stage prior to the sentience of the fetus, is an offense against the concept of a rights-endowing God--a concept on which the republic was founded. Perhaps it’s even an offense against God Himself, an offense for which its practitioners may answer in a divinely just hereafter. But it is an offense against no person--at least insofar as the term "person" can be consistently defined. For that reason, the Roe decision, on the basis of stare decisis, must stand.

REVIEW: The Politically Incorrect Guide to American History

The New York Post

THE POLITICALLY INCORRECT GUIDE TO AMERICAN HISTORY
by Thomas Woods
(Regnery: Washington DC, 2004)

reviewed by

Mark Goldblatt
March 20, 2005


It would be possible to write a history of America focusing on the cruelty of the Indians towards European settlers and their descendants, citing broken treaties, kidnappings, and assorted massacres of white women and children. Such a book would be factually correct but would miss the forest for the trees--since the wide angle view cannot avoid the genocidal upheavals visited upon the Indians, intentionally and unintentionally, by whites. On the outskirts of the political left, Howard Zinn is notable for an overriding forest-trees problem; his determination to highlight every instance of ethnic minorities and working poor being exploited by wealthy capitalists blinds him to the long-term trend in American history towards greater social justice.

Thomas Woods should be understood as the far right’s answer to Howard Zinn. The first chapter of Woods’s new book The Politically Incorrect Guide to American History sets the tone. While he doesn’t deny the “injustice and maltreatment” of Indians throughout American history, he notes that during the colonial period Harvard College admitted Indian students, that colonists “could and did receive the death penalty for murdering Indians” and that Indian converts to Christianity “enjoyed considerable autonomy.” The qualifier “considerable” is telling; Woods is cherry picking facts. It’s true, as he asserts, that Indians were quite cognizant of land ownership; for millennia, they’d fought tribal wars over disputed territories, so they knew what selling their land to whites meant. But Woods skirts the fact that such sales were often made under duress. If relations between the groups were hunky-dory, then why, in 1643, did whites form the Confederation of New England “in case of conflict with the Indians”?

Woods’s take on the forty-year run-up to the Civil War has the southern states essentially minding their own business, enduring multiple provocations from the North, until finally they felt compelled to secede. Well, maybe, but the business they were minding was slaveholding. That’s the forest. The trees are that the motives for the North’s actions were not altogether altruistic, that John Brown might’ve received financial support from New England abolitionists, and that the southern states had a plausible constitutional right to secede. Woods misses the idea that wars of narrow self-interest occasionally morph into grand moral causes--as is happening, even now, in the Middle East. If you overlook the moral component of the Civil War, you might as well argue that the U.S. had no quarrel with the Nazis because Germany had been screwed by the Treaty of Versailles after World War One.

Sorry, that’s Chapter Thirteen.

This isn’t to say that Woods has written a worthless book. His analysis of the dubious benefits of antitrust legislation is strong, as is his take on the financial policies leading to the Depression; he does a good job rehabilitating Presidents Harding and Coolidge and taking FDR down a peg. Woods is generally on more solid ground talking about economics than about ethics. Indeed, he never comes to grips with the notion that history and morality are bound together, and that winding up on the side of the angels--the side of Jefferson’s self-evident truths--counts.

Review: THE SCHOOLS OUR CHILDREN DESERVE




Review: THE SCHOOLS OUR CHILDREN DESERVE: MOVING BEYOND TRADITIONAL CLASSROOMS AND "TOUGHER STANDARDS"
by Alfie Kohn
Houghton Mifflin: Boston, 1999.  (344 pages)

Mark Goldblatt*

January 2000

The current back-to-basics movement in education was spawned by the near-universal perception that American students are progressing from grade to grade without knowing what they should know, without being able to do what they should be able to do, and without the initiative or curiosity to remedy either condition. It's a common corollary of this perception that much of the blame lies with progressive methods in pedagogy--methods that prioritize students' creativity over their formal correctness, self-esteem over measurable achievement, community spirit over individual progress--whose introduction in classrooms, in the late 1960's, seemed to coincide with the steady decline in students' performance.

Not so! insists Alfie Kohn in his new book The Schools Our Children Deserve. The author of five previous works on pedagogical reform and left-liberal politics, he puts forward a passionate defense of the progressive approach to education. The main problem with American schools, according to Kohn, is that they aren't progressive enough, that traditional teaching methods still carry too much weight in classrooms, and that parents' expectations of what students should be doing in school have been conditioned by their own miserable educational experiences--thus blinding them to the nature of "true learning." And what is true learning?


The question is answered by Kohn most often in negative terms. True learning is decidedly not what the back-to-basics crowd thinks it is. Their traditional approach to education, which he describes as "an uneasy blend of behaviorist psychology and conservative social philosophy," is "based on the idea that people, like other organisms, do only what they have been reinforced for doing [sic]." Learning, for traditionalists, "is just the acquisition of very specific skills and bits of knowledge." The names he associates with the traditional approach are B.F. Skinner, Edward L. Thorndike and, most recently, E.D. Hirsch--whose Cultural Literacy program exemplifies what Kohn terms "bunch o' facts" pedagogy.


In opposition stands the progressive approach, associated by Kohn with John Dewey and Jean Piaget, which holds "that schools shouldn't be about handing down a collection of static truths to the next generation but about responding to the needs and interests of the students themselves." Students' "needs" and "interests" are vague terms, but Kohn hints at the goals of progressive education when he urges us to see schools not "as places where cultural knowledge is transmitted in order to preserve important institutions" but "as places where a new generation learns the skills and dispositions necessary to evaluate those institutions."


The methodological and teleological opposition between the two approaches lends Kohn's book its point-counterpoint structure. Chapter by chapter, he dissects the mean-spirited, misguided assumptions about learning of traditionalists and then contrasts these with the kind-hearted, benign views of progressives. Kohn bases his case on his own classroom observations, his conversations with teachers and his readings in educational psychology and pedagogic theory.


To gauge the benefits of progressive education, Kohn argues, requires a paradigm shift away from the very notion that education can be gauged. We must abandon our traditional preoccupation with "performance! results! achievement! success!" That is, the acquisition of specific knowledge and particular skills--the traditional signs of productive schooling--should concern us less than "the activity that produces those results." The process of learning is more important than what is actually learned.


To that end, we must recognize the distortions of grade-by-grade curricula. He warns parents to worry if their children spend time drilling "state capitals, the multiplication table, the names of explorers of the New World, or the parts of speech." Drilling, he maintains, is the by-product of standard curricula--which impose time constraints on the learning experience; thinking is a direct casualty. "There should be less time defining words like 'nationalism'," he writes, "and more time discussing what the world would be like if there were no countries."


Furthermore, we must stop imagining the classroom as a place where information is transmitted and begin imagining it as a place where students learn by doing what interests them. This principle is best illustrated by Kohn's discussion of the "Whole Language" versus "direct phonics" controversy in reading and writing instruction. He makes the useful point that the debate is never actually either-or. Children cannot "invent" spellings, as Whole Language encourages, without a basic grasp of phoneme functions, and children taught by direct phonics must eventually move past sounding out every word phonetically--which no able reader does. What the debate boils down to is whether the move from phoneme-recognition to meaning-gathering is best accomplished by quick immersion in rich literary texts (the Whole Language view) or by incremental advances through model texts (the direct phonics view). Kohn comes down squarely, and predictably, on the Whole Language side. But his exposition, here and elsewhere, is marred by pointless, ideological asides--as when he notes that the Christian Coalition supports direct phonics instruction, and that "right wing" journals have published "wildly enthusiastic articles" on the phonics method.

The fact that ideology animates Kohn's critique is a relatively minor objection. Reality is a greater obstacle to taking his program seriously. To begin with, he refuses to view plummeting test scores as a problem--or even as a sign of a problem. He must discount standardized testing because, as he reluctantly concedes, back-to-basics pedagogy is often more effective than progressive pedagogy when it comes to the "temporary retention of facts and low-level skills"--in other words, students taught by traditional methods tend to score higher than their progressively-taught peers. This phenomenon is quickly explained away by noting that "Skills tests are a perfect match for skills instruction, and it's not surprising that the latter often produces the former." The circularity is obvious: The underlying error of the back-to-basics movement, according to Kohn, is that it is geared to produce measurable results; that it succeeds only confirms that such measurements are meaningless.


But here's where experiential reality imposes itself. Would there be a back-to-basics movement in the first place if parents sensed that American schools were doing a good job? If falling test scores were indeed belied by the inquisitiveness, perspicacity and reasoning power of their children, would parents willingly rock the educational boat? Kohn perceives sinister forces behind the push to raise standards--forces that view learning as "memorizing random facts and doing whatever you're told." He explains that corporations regard children not as lifetime learners but as potential workers; since a skilled workforce maximizes profits, major companies are willing to underwrite skills-intensive programs. The "crisis" in American education, he insinuates, is at least partly a public relations conspiracy hatched by greedy capitalists to gussy up their bottom line. He never addresses the objection that even greedy capitalists send their kids to school.


Kohn's next run-in with reality emerges with the question of teacher competence--another question he never addresses. It's a glaring omission since doing away with curricula would empower individual teachers to determine their own classroom agendas. He assumes a world in which every teacher is blessed with the wisdom of Socrates and the patience of Job, in which every teacher is capable of coming up, again and again, with responses pointed enough to push students to the next conceptual level but subtle enough not to provide a final answer. Anecdotes abound--wonderful anecdotes--of classroom breakthroughs; significantly, though, the same instructor is never profiled twice. Kohn provides glimpses of teachers at their most inspiring and supposes that these performances can be replicated time after time, by teacher after teacher, with no script to follow and no standard textbook as a safety net. In Kohn's world, too, all teachers must be equally competent since, besides dropping curricula, he favors "looping"--uniting teachers and students for three grades at a stretch. Pity the poor class that rolls snake-eyes in that crap-shoot.


Kohn's "reality problem" is evident even in his treatment of his opposition. Throughout the book, Kohn knocks down enough straw men to form a picket line around his house, caricaturing antagonists' ideas, putting ridiculously severe words in their mouths. He is especially unfair to Hirsch--whose position is not, as Kohn would have it, that knowing a certain set of facts renders a person automatically "educated" but rather that knowing a certain set of facts is a prerequisite for the kind of active intellectual engagements typical of educated people. Consistently, Kohn sets up false dichotomies--as when he writes "The Old School . . . firmly rejects cooperative learning, in which kids can experience the value of doing things together." Yet even the most traditional schools have, for the last half century, grouped students for science lab experiments or as speech partners or as writing respondents.


The remedy for what ails American education, according to Kohn, is a greater dose of the very thing that seems to be making it sick. If only we would re-define health and sickness, he argues, we would see the wisdom of his program.

*This is the original version of the review; a slightly edited version appeared in Commentary.

ARTICLE: On The Soul

Philosophy Now

On The Soul

by Mark Goldblatt

January/February 2011 
  

"The passage from the physics of the brain to the corresponding facts of consciousness is unthinkable. Granted that a definite thought, and a definite molecular action in the brain, occur simultaneously, we do not possess the intellectual organ, nor apparently any rudiment of the organ, which would enable us to pass by a process of reasoning from the one phenomenon to the other.”
John Tyndall, 1868



The nature of the human soul has been the subject of religious belief and scientific investigation for millennia. We feel as though definitive answers should be at hand since each of us seems to have relevant experiential insights. But certainty remains elusive. The proposition that the soul does not exist, that what’s called the soul is actually no more than a neurological phenomenon, a trick the brain plays on its owner, is altogether plausible. But so too is the converse--the proposition that the soul is indeed an immaterial essence which somehow animates human beings.

We must begin, however, by clarifying the term. The word “soul” can refer to a number of different things. Many of us intuitively think of the voice inside our head as our soul--a definition that is consistent with a materialist perspective. That is, if the soul is in fact merely a neurological phenomenon, merely an activity of the brain, then the voice-inside-our-head definition, perhaps enlarged to include the impulses and sensations which coalesce around the voice, would work. Consciousness, in that case, equals soul. Which is another way of saying that the soul is to the brain what digestion is to the stomach. It’s what the brain does.

If, on the other hand, the soul is an immaterial essence, rather than the outcome of a material process, then the definition becomes more complicated because neurological science has demonstrated that the voice inside our head has a material component. That much is certain. We now know, for instance, that if the brain is grievously injured, the voice inside our head is often irreparably altered. Brain trauma can affect memory formation, language skills, even mood. So it must be the case that the brain itself is directly involved in consciousness. What used to be called the mind-body problem has been solved, at least to that extent. Consciousness, as the word is traditionally understood, cannot be wholly severed from brain activity.

But that doesn’t necessarily mean that there’s no such thing as an immaterial soul. What it means is that the incidentals that make up our inner lives are rooted in the workings of our brains. If you whack me across the skull hard enough to rattle my brain, but not kill me outright, you might well alter what I remember about my life, which friends I’m able to recognize, what words I’m able to form, whether I’m able to count to ten, what food I like to eat, what juice I like to drink, what position I like to play in softball--in sum, the very characteristics that come to mind when I think of myself.

It’s tempting to conclude that, minus those characteristics, I would no longer be me. But in truth, that’s all I would be. The human soul, if it exists in an immaterial form, must be the me-ness of me, the sense of first personhood on which the rest of my conscious experiences hang. It’s the rooting interest each one of us has in himself, in his own existence, stripped of language and memory, stripped of thought and disposition; it’s the unified presence by which I differentiate myself from whatever I encounter. I am not the thing I encounter; I am the thing doing the encountering.

The soul, in other words, is not your consciousness--unless you are a materialist. If you are not a materialist, however, then the soul is what’s underneath your consciousness, the platform upon which your consciousness is constructed. The distinction is critical. Consciousness is the thing that emerges from sense data, the thing that comes to consist of memory and language. But sense data, memory and language have material components; they’re rooted in the workings of the brain. The last half century of neuroscience has established that beyond a reasonable doubt. The stuff of consciousness is definitively brain-based. It relies on physical matter. So if the soul is indeed immaterial, it must be more basic than consciousness. The word “self” gets closer to the point--though I think “me-ness” gets still closer. It’s the gathering principle of sensation, memory and language, the immeasurable, imperceptible, inexplicable filament that draws together sensation, memory and language into consciousness.

Think of consciousness as cotton candy. The soul, in that case, is the cone around which the cotton candy is wound--except, of course, and here the metaphor breaks down, it’s an immaterial cone.

If the soul is indeed immaterial, then it must consist of the me-ness of me, the thing that encounters other things. But the word “encounter” also requires clarification. For example, I’ve got a robotic vacuum cleaner in my apartment that seems to encounter other things. It even adjusts to them. It goes around things that get in its way. Does the vacuum cleaner therefore possess a soul?

Clearly not.

It’s thus necessary to differentiate between encountering something and responding to something. Even a run-of-the-mill computer nowadays, like the one found in a robotic vacuum cleaner, can be programmed to respond to things. But it cannot be programmed to encounter them. My vacuum cleaner doesn’t differentiate between itself and, say, the edge of my sofa--which would be the essence of an “encounter.” It strikes the edge of my sofa and cannot move forward; its sensors detect an inability to move forward and send a signal to the processor inside to reverse direction. It has no interest in reversing direction. If its program were altered, it would keep careening into the edge of my sofa until its battery ran down.

So, too, with computers that play chess--even the ones that play at grandmaster level. Such a computer can be programmed to read the situation of a chess board, run through the billions of possible scenarios that might follow from the next move and calculate which scenario yields the highest numerical probability of taking the adversary’s king. But it cannot be programmed to care whether it wins or loses the game. It has no first-person interest in the outcome of the game. The software that underlies artificial intelligence cannot, for the time being, give rise to that sense of me-ness that is the foundation of human consciousness. I suspect (though of course don’t know for sure) that it will never be able to do so, regardless of how powerful the hardware, or how much more sophisticated the programming.

Here’s a thought experiment: Suppose two rival software companies designed chess programs to compete against one another. Suppose, further, that each program contained instructions that, in the event of its defeat, would slightly jigger the probabilities on which it based its moves, then erase the old scheme under which it operated. The games between the two rivals would take mere moments to complete since the moves would be almost instantaneous. The winner of each game would then have to wait several seconds for the loser to tweak its algorithms, and then the next game would commence.

It would be a deadly dull sport to watch. The point, however, is that the two systems are, in a sense, evolving. They’re operating under conditions that resemble survival-of-the-fittest. Each winner continues on to the next game intact; each loser perishes and leaves behind an offspring to try its own hand at chess survival.
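The arrangement is easy to make concrete. Here is an illustrative Python sketch--no real chess engine, just weight vectors standing in for the rival programs, with every name and number invented purely for the illustration:

```python
import random

# Illustrative sketch (not a real chess engine): each "program" is just a
# weight vector. Whichever vector yields the higher noisy score "wins" the
# game; the loser overwrites itself with a slightly jiggered copy of its
# weights, erasing the old scheme -- the survival-of-the-fittest loop
# described in the thought experiment.

def play(weights_a, weights_b, rng):
    # Stand-in for a chess game: compare noisy weighted sums.
    score_a = sum(w * rng.random() for w in weights_a)
    score_b = sum(w * rng.random() for w in weights_b)
    return "a" if score_a >= score_b else "b"

def mutate(weights, rng):
    # The loser's "offspring": same scheme, probabilities slightly jiggered.
    return [w + rng.uniform(-0.1, 0.1) for w in weights]

rng = random.Random(0)           # seeded for repeatability
a = [1.0, 1.0, 1.0]
b = [1.0, 1.0, 1.0]
mutations = 0
for _ in range(100):
    if play(a, b, rng) == "a":
        b = mutate(b, rng)       # loser perishes; offspring takes its place
    else:
        a = mutate(a, rng)
    mutations += 1
# One replacement per game: after 100 games, exactly 100 offspring have
# been generated, yet nothing in the loop has acquired a stake in winning.
```

However long such a loop runs, it only shuffles numbers; the "caring" the essay asks about appears nowhere in it.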

Now the critical question: At what point would either system develop a rooting interest in whether it wins or loses? At what point would either system care, in even the most rudimentary way, about the outcome of a game?

To be sure, the case of chess-playing computers leaves out the crucial evolutionary element of bio-feedback. What if the software programs making the chess moves were also picking up sense data, encountering sights and sounds as they were playing? Except encountering presupposes a me-ness capable of doing the encountering. In other words, it presupposes a rooting interest. Sights and sounds, in themselves, would surely produce more data than the mere playing out of chess games would--data which could then be stored in the form of binary codes within the memory of each operating system. But the accumulation of data does not address the problem. What interaction of circuit board and binary codes will ever give rise to a self? How does information become will? How does me-ness enter into the system?

Thinking about the peculiarity of consciousness arising in a contraption of electronic circuitry and binary codes puts the problem in especially stark relief. But even if the system consists of flesh and blood, the problem remains unchanged. Sensations like sights and sounds are just electrochemical surges. There’s nothing mystical about them. They course through the body, carrying charges to and from the neuromagnetic cluster that operates in the brain, adding more and more data to the system. So you’ve got electrical signals. You’ve got magnetic fields. You’ve got living matter, organic cells in which the electrical signals and magnetic fields gather. Below that, you’ve got carbon atoms. You’ve got water molecules. In other words, you’ve got goop and soup, stimuli and response. That’s the totality of it, from the materialist perspective.

I ask again: How does it add up to me-ness?

The origin of me-ness is the great mystery of the human condition. Let me confess, therefore, that I have no clue how me-ness can emerge, or even be accounted for, by materialism. That doesn’t mean the explanation doesn’t exist. Only that I cannot imagine it.

If, on the other hand, the materialist perspective is rejected, then at least two possible explanations present themselves. (These are no more than guesses, however. That should be borne in mind.) The first is that me-ness is a kind of transmission, something akin to a radio signal, that emanates from a Source and is picked up by the circuitry of the brain. If the brain circuitry is in good working order, then the transmission is stable enough to form the basis of a continuous, recognizable consciousness. But if the brain is damaged, if the circuitry breaks down, the signal becomes scrambled. The signal is still being picked up, but the circuitry cannot do with it what it’s designed to do. That’s how the incidentals of memory and language are lost. But of course all of what I’ve just described is only a metaphor. To think of me-ness as a radio signal is to think of it as a wave--which is a measurable thing. If me-ness is measurable, then it’s no longer immaterial.

The radio-signal metaphor is a Platonic down-from-on-high explanation. So the alternative would be a more Aristotelian ground-upwards account in which me-ness becomes the end to which matter is directed. Me-ness isn’t something the brain happens to produce; it’s what the brain is designed to do, its formal cause, the reason it exists in the first place--and the brain, in turn, is what the human being is designed to sustain. The flesh and blood of man evolved, in other words, as a means to generate the soul . . . and the material world itself as a means to generate the flesh and blood of man. Me-ness is the immaterial potential that justifies the existence of matter, the Little Bang insinuated into the Big Bang, the why of the what. But, again, all of this is mere speculation. What’s not speculation is that the most dramatic moment in all of our lives is one that none of us can recall, the moment in the womb when the self awakens, without language, without thoughts, when the light switches on, when that sense of me-ness dawns. Regardless of where it comes from, regardless of who or what turns the switch, the miracle of that moment is undeniable.

Again, however, I’m not arguing that the soul must be immaterial, that it cannot be accounted for by the accidental functioning of the brain. My gut instinct tells me that the soul is not a material phenomenon. But I acknowledge that the reverse is possible--that the soul is what the brain does and nothing more. The fact that I cannot imagine how me-ness would ever accidentally and spontaneously arise out of organic matter and physical processes doesn’t mean that it didn’t happen.

Review: FREETHINKERS BY SUSAN JACOBY

The Common Review


FREETHINKERS: A HISTORY
OF AMERICAN SECULARISM
by Susan Jacoby
(Metropolitan Books: New York, 2004)

Mark Goldblatt

winter 2005 

History is written from a point of view--inevitably. The most valuable histories are those which embrace both the ideal of objectivity and the reality of bias, which make no pretense of disinterest but which subordinate what should have happened to what in fact did happen. By that measure, Susan Jacoby’s Freethinkers: A History of American Secularism is a consistently valuable account of the long tradition of religious dissent in the United States.

The axe that Jacoby--an independent scholar and author of six previous non-fiction books--grinds throughout Freethinkers is Christian conservatism. She’s not reluctant to mix in her current political preferences with the politics of her historical narrative; jabs at President Bush and the religious right abound. If present trends continue, she seems to think, America is destined for a return to the Salem Witch Trials. This is tendentious, and slightly silly--the book’s jacket, for example, features praise from Philip Roth, who calls it “as necessary a book as could be published in the fourth year of the ministry of George W. Bush” and from Arthur Miller, who insists that “America’s precious freedom of conscience, her pride for two centuries, [is] now under threat from the political Right as never before.” But Jacoby’s approach does manage to infuse her history with the urgency of a polemic. And the juxtaposition of twenty-first century clashes over the separation of church and state with centuries-old instances of the debate highlights odd but intriguing parallels. There is a sameness to such arguments; indeed, there are only so many ways in which they can be drawn.

Jacoby utilizes “freethinkers” as a catch-all term which includes deists, agnostics and atheists--as well as Jews and liberal Protestants, especially Quakers and Unitarians--on the grounds that these very distinct groups are joined politically by “a conviction that the affairs of human beings should be governed not by faith in the supernatural but by a reliance on reason and evidence adduced from the natural world.” [4] The book traces the struggles of freethinkers to prevent the Christian majority in the United States from installing various aspects of their religious agenda into the law of the land. This is a heroic story; the seventeenth century expulsions of Roger Williams and Anne Hutchinson from the Massachusetts Bay Colony demonstrate that the earliest European settlers were not averse to meting out persecutions which mirrored the ones from which many of them had fled to the New World. That the colonies would eventually unite to form a single secular entity was by no means a given.

Jacoby’s story picks up with Thomas Paine, the freethinker whose January 1776 pamphlet Common Sense helped inspire the American Revolution. But Paine’s reputation crashed after independence. The process began with 1791’s The Rights of Man, written in England, which defended the French Revolution at a time when word of the Terror was beginning to reach America; hence, many of Paine’s former supporters, including George Washington, were put off by Paine’s linkage of the two revolutions.

The tone of The Rights of Man was so anti-monarchical that Paine was forced to flee England for his life. He settled in Paris, where in 1793 he wrote the first part of the even more heretical The Age of Reason, in which, according to Jacoby, he “put forth the astonishing idea that Christianity, like all other religions, was an invention of man rather than God.” [5] But the move to France had made Paine a witness to the horrors of the revolution, and he was soon speaking out against the execution of Louis XVI. Paine was thereafter arrested and spent nine miserable months in prison until James Monroe became minister to France and obtained his release. By the time Paine returned to the United States, at the behest of Thomas Jefferson in 1802, his reputation as an infidel was used by Jefferson’s critics to tar the president. One Philadelphia paper referred to Paine as “a drunken atheist” and a “rebel rascal”--and called Jefferson’s invitation “an insult to the moral sense of the nation.” [61] Publicly ostracized for the remainder of his life, Paine died in 1809 and was buried on his farm in New Rochelle. By then, his reputation was negligible--due in large measure, Jacoby suggests, to his religious skepticism. His remains were actually robbed ten years later and subsequently lost. Not until the Civil War era did Paine’s contribution to America’s independence begin to receive its due. The first full-length biography did not come until 1892. His biographer wrote: “As to his bones, no man knows the place of their rest to this day. His principles rest not.” [65]

The highlight of Jacoby’s book is unquestionably her account of the tumultuous hundred-year period, roughly 1825-1925, which encompasses what she calls “the golden age of freethought.” Coalitions evolved among reform movements, exemplified by the presence of abolitionist Frederick Douglass at the Seneca Falls Women’s Rights Conference of 1848. Such liaisons were often tenuous; when the Fifteenth Amendment, ratified in 1870, granted black men the right to vote, at a time when women still were denied, the suffragist Elizabeth Cady Stanton expressed her outrage in explicitly racist and nativist terms: “Think of Patrick and Sambo . . . and Yung Tung, who do not know the difference between a monarchy and a republic, who cannot read the Declaration of Independence or Webster’s spelling book, making laws for [noted feminists] Lucretia Mott, Ernestine L. Rose and Anna E. Dickinson.” [195]

Throughout the period, the common foe of religious (read: Protestant) conservatism made strange bedfellows of, at various times, abolitionists, feminists, immigrants, anarchists, socialists, communists, Jews and Catholics. With the possible exception of the abolitionists--and even here, Jacoby argues, the Bible was used as often to justify slavery as to condemn it--each group had a clear stake in the separation of church and state; thus, every effort by conservatives to breach the church-state wall was met by an equal and opposite forging of liberal alliances. The wall teetered but held--through the Haymarket Riot of 1886, which was blamed on anarchists and immigrants, through the Palmer Raids of 1919-1921, which targeted communists (and out of which was born the American Civil Liberties Union), and, most famously, through the Scopes Monkey Trial of 1925, which pitted the fundamentalist William Jennings Bryan against the freethinker Clarence Darrow. Two points are worth noting in the latter case: First, Darrow lost the case; even though the verdict was later overturned on a technicality, Scopes was actually convicted of breaking the Tennessee state law which forbade mentioning evolution in public schools. Second, Bryan himself was a populist, a vocal supporter of women’s suffrage and the rights of the common man, whose objection to Darwin’s theory was grounded in the belief that God’s role in the creation of all human beings demanded they be treated with dignity.

Jacoby provides a lively intellectual panorama of the era, replete with brief but incisive portraits of William Lloyd Garrison, Lincoln (of course), Mott, Stanton, Susan B. Anthony, Robert Ingersoll and Darrow--along with cameos by Douglass, the Grimké sisters, Rose, Eugene Debs and Emma Goldman. Of the group, Ingersoll emerges in Jacoby’s telling as the most underrated by posterity; she makes a compelling case for him as a major figure.

Though Ingersoll never held elected office--he lost his only bid for Congress from Illinois in 1860 because of his opposition to the Fugitive Slave Law--he acquired, in the years following the Civil War, a national reputation as a public speaker, known especially for his stirring lectures on the perils of mixing religion and government. On the subject of the Founding Fathers, he remarked, “They knew that to put God in the Constitution was to put man out. They knew that the recognition of a Deity would be seized upon by fanatics and zealots as a pretext for destroying the liberty of thought. . . . They intended to found and frame a government for man, and for man alone.” [184] He also earned the nickname the Great Agnostic, lampooning religious certitudes wherever he encountered them: “The Catholics will give you a through ticket to heaven, and they will attend to your baggage, and keep it too. The Protestants, on the other hand, won’t even tell you what train to get on.” [170] Along with his relentless personal decency, it is the elegance of Ingersoll’s rhetoric--in sharp contrast with the ad hominem invective swirling around him--that stands as his noblest legacy; eulogizing a friend’s child in 1882, he said: “Every cradle asks us ‘Whence?’ and every coffin ‘Whither?’ The poor barbarian, weeping above his dead, can answer these questions just as well as the robed priest of the most authentic creed. . . . The larger and nobler the faith in all that is, and is to be, tells us that death, even at its worst, is only the perfect rest.” [148]

The spirit of Ingersoll hovers over the freethought movements of the early twentieth century, with Darrow eventually emerging as his and Paine’s natural heir, fighting the forces of religious orthodoxy until his death in 1938. Jacoby also provides a useful account of the evolution of the Pledge of Allegiance--written in 1892, at the urging of the National Education Association, by a Christian Socialist named Francis Bellamy and intended as “a straightforward statement of the American public school system’s commitment to assimilation of all immigrants.” [287] Congress crowbarred the phrase “under God” into the pledge in 1954, during the McCarthy era, bowing to pressure from a movement led by the Knights of Columbus; the insertion would surely have appalled Bellamy, a devout proponent of church-state separation.

On the whole, however, Jacoby falters as she turns to the latter half of the twentieth century. Her insistence that the role of secularists in the civil rights movement of the 1950s and ’60s has been intentionally underplayed by the forces of “religious correctness” only serves as a reminder of just how critical organized religion--especially the black church--was to the effort. A discussion of the women’s movement of the late ’60s and early ’70s begins promisingly, but the abortion issue occasions a disquisition on moral relativism, “an honorable concept that has been poisoned by cultural conservatives and insufficiently defended by secularists.” [344] Maybe so, but then she asserts, a scant 13 lines later, “all decent societies do indeed prohibit murder.” Her reasoning here is, to say the least, myopic.

Indeed, Jacoby’s embrace of moral relativism in this instance highlights the perennial dilemma encountered by all those who would insist on an absolute separation of church and state: To what degree is the self-evident truth that all men are created equal--which is the actuating premise of the Constitution--a religious principle? Jefferson’s truth is certainly not “self-evident” in the sense, say, that “a whole is equal to the sum of its parts” is self-evident; indeed, if he’s asserting a purely secular truth, then he’s manifestly wrong since all men are manifestly not created equal physically or intellectually. (The fact that Jefferson himself would not have included dark-skinned men in the equation underscores the point.) History is strewn with the corpses of religious and ethnic minorities whose created equality was by no means self-evident. But if their created equality consists, as Jefferson claims, of a spiritual endowment by a Creator--even in a limited, deistic sense--then the self-evident truth effectively becomes an article of faith, and the wall of separation between church and state was breached from the moment the state came into existence.

It’s unfair to expect Jacoby to resolve this Jeffersonian paradox; it’s a circle that cannot be squared. That predicament should not detract from what she has managed to produce, which is an impassioned, eminently readable history of the secularist tradition in the United States.