Cathedral Round-Up: Checking in with the Bright Minds at Yale Law

Yale Law’s Coat of Arms

Yale Law is the most prestigious law school in the entire US (Harvard Law is probably #2). Its professors, therefore, are some of the US’s top legal scholars, and its students are likely to go on to become important lawyers, judges, and opinion-makers.

If you’re wondering about the coat of arms, it was designed in 1956 as a pun on the original three founders’ names: Seth Staples, (BA, Yale, 1797), Judge David Daggett aka Doget, (BA 1783), and Samuel Hitchcock, (BA, 1809), whose name isn’t really a pun but he’s Welsh and when Welsh people cross the Atlantic, their dragon transforms into a crocodile. (The Welsh dragon has also been transformed into a crocodile on the Jamaican coat of arms.)

(For the sake of Yale’s staple-bearing coat of arms, let us hope that none of the founders were immoral in any way, as Harvard’s were.)

So what have Yale’s luminaries been up to?

Professor Yaffe has a new book on criminal responsibility, titled The Age of Culpability: Children and the Nature of Criminal Responsibility. The blurb from Amazon:

Gideon Yaffe presents a theory of criminal responsibility according to which child criminals deserve leniency not because of their psychological, behavioural, or neural immaturity but because they are denied the vote. He argues that full shares of criminal punishment are deserved only by those who have a full share of say over the law.

The YLS Today article goes into more depth:

He proposes that children are owed lesser punishments because they are denied the right to vote. This conclusion is reached through accounts of the nature of criminal culpability, desert for wrongdoing, strength of legal reasons, and what it is to have a say over the law. The heart of this discussion is the theory of criminal culpability.

To be criminally culpable, Yaffe argues, is for one’s criminal act to manifest a failure to grant sufficient weight to the legal reasons to refrain. The stronger the legal reasons, then, the greater the criminal culpability. Those who lack a say over the law, it is argued, have weaker legal reasons to refrain from crime than those who have a say, according to the book. They are therefore reduced in criminal culpability and deserve lesser punishment for their crimes. Children are owed leniency, then, because of the political meaning of age rather than because of its psychological meaning. This position has implications for criminal justice policy, with respect to, among other things, the interrogation of children suspected of crimes and the enfranchisement of adult felons. …

He holds an A.B. in philosophy from Harvard and a Ph.D. in philosophy from Stanford.

I don’t think you need a degree in philosophy or law to realize that this is absolutely insane.

Even in countries where no one can vote, we still expect the government to try to do a good job of rounding up criminals so their citizens can live in peace, free from the fear of random violence. The notion that “murder is bad” wasn’t established by popular vote in the first place. Call it instinct, human nature, Natural Law, or the 6th Commandment–whatever it is, we all want murderers to be punished.

The point of punishing crime is 1. to deter criminals from committing crime; 2. to get criminals off the street; and 3. to provide a sense of justice to those who have been harmed. These needs do not change depending on whether the person who committed the crime can vote. If I wanted to commit a crime, could I hop the border into Canada, commit it there, and then claim the Canadian courts should be lenient since I am not allowed to vote in Canada? Does the victim of a disenfranchised felon deserve less justice than the victim of someone who still had the right to vote?

Since this makes no sense at all from any sort of public safety or discouraging crime perspective, permit me a cynical theory: the author would like to lower the voting age, let immigrants (legal or not) vote more easily, and end disenfranchisement for felons.

Professor Moyn has a new book on Human Rights: Not Enough: Human Rights in an Unequal World. According to the Amazon blurb:

The age of human rights has been kindest to the rich. Even as state violations of political rights garnered unprecedented attention due to human rights campaigns, a commitment to material equality disappeared. In its place, market fundamentalism has emerged as the dominant force in national and global economies. In this provocative book, Samuel Moyn analyzes how and why we chose to make human rights our highest ideals while simultaneously neglecting the demands of a broader social and economic justice. …

In the wake of two world wars and the collapse of empires, new states tried to take welfare beyond its original European and American homelands and went so far as to challenge inequality on a global scale. But their plans were foiled as a neoliberal faith in markets triumphed instead.

As Yale puts it:

In a tightly-focused tour of the history of distributive ideals, Moyn invites a new and more layered understanding of the nature of human rights in our global present. From their origins in the Jacobin welfare state

Which chopped people’s heads off.

to our current neoliberal moment, Moyn tracks the subtle shifts in how human rights movements understood what, exactly, their high principles entailed.

Like not chopping people’s heads off?

Earlier visionaries imagined those rights as a call for distributive justice—a society which guaranteed a sufficient minimum of the good things in life. And they generally strove, even more boldly, to create a rough equality of circumstances, so that the rich would not tower over the rest.

By chopping their heads off.

Over time, however, these egalitarian ideas gave way. When transnational human rights became famous a few decades ago, they generally focused on civil liberties — or, at most, sufficient provision.

Maybe because executing the kulaks resulted in mass starvation, which seems kind of counter-productive in the sense of minimum sufficient provision for human life.

In our current age of human rights, Moyn comments, the pertinence of fairness beyond some bare minimum has largely been abandoned.

By the way:

From Human Progress

Huh. Why would anyone think that economic freedom and human well-being go hand-in-hand?

The Dramatic Decline in World Poverty, from CATO https://www.cato.org/blog/dramatic-decline-world-poverty

At the risk of getting Pinkerian, the age of “market fundamentalism” has involved massive improvements in human well-being, while every attempt to make society economically equal has caused mass starvation and horrible abuses against humans.

Moyn’s argument that we have abandoned “social justice” is absurd on its face; in the 1950s, the American south was still racially segregated; in the 1980s South Africa was still racially segregated. Today both are integrated and have had black presidents. In 1950, homosexuality was widely illegal; today gay marriage is legal in most Western nations. Even Saudi Arabia has decided to let women drive.

If we want to know why, absurdly, students believe that things have never been worse for racial minorities in America, maybe the answer is the rot starts from the top.

In related news, Yale Law School Clinics Secure Third Nationwide Injunction:

The first ruling dramatically stopped the unconstitutional Muslim ban in January 2017, when students from the Worker and Immigrant Rights Advocacy Clinic (WIRAC) mobilized overnight to ground planes and free travelers who were being unjustly detained. The students’ work, along with co-counsel, secured the first nationwide injunction against the ban, and became the template for an army of lawyers around the country who gathered at airports to provide relief as the chaotic aftermath of the executive order unfolded.

Next came a major ruling in California in November 2017 in which a federal Judge granted a permanent injunction that prohibited the Trump Administration from denying funding to sanctuary cities—a major victory for students in the San Francisco Affirmative Litigation Project (SFALP) …

And on February 13, 2018, WIRAC secured yet another nationwide injunction—this time halting the abrupt termination of the Deferred Action for Childhood Arrivals program (DACA). … The preliminary injunction affirms protections for hundreds of thousands of Dreamers just weeks before the program was set to expire.

And Rule of Law Clinic files Suit over Census Preparations:

The Rule of Law Clinic launched at Yale Law School in the Spring of 2017 and in less than one year has been involved in some of the biggest cases in the country, including working on the travel ban, the transgender military ban, and filing amicus briefs on behalf of the top national security officials in the country, among many other cases. The core goal of the clinic is to maintain U.S. rule of law and human rights commitments in four areas: national security, antidiscrimination, climate change, and democracy promotion.

 

Meanwhile, Amy Chua appears to be the only sane, honest person at Yale Law:

In her new book, Political Tribes: Group Instinct and the Fate of Nations (Penguin, 2018), Amy Chua diagnoses the rising tribalism in America and abroad and prescribes solutions for creating unity amidst group differences.

Chua, who is the John M. Duff, Jr. Professor of Law, begins Political Tribes with a simple observation: “Humans are tribal.” But tribalism, Chua explains, encompasses not only an innate desire for belonging but also a vehement and sometimes violent “instinct to exclude.” Some groups organize for noble purposes, others because of a common enemy. In Chua’s assessment, the United States, in both foreign and domestic policies, has failed to fully understand the importance of these powerful bonds of group identity.

Unlike the students using their one-in-a-million chance at a Yale Law degree to help members of a different tribe for short-term gain, Amy Chua at least understands politics. I might not enjoy Chua’s company if I met her, but I respect her honesty and clear-sightedness.

 

On a final note, Professor Tyler has a new book, also about children and law, Why Children Follow Rules: Legal Socialization and the Development of Legitimacy. (Apparently the publishers decided to stiff the cover artist.) From the Amazon blurb:

Why Children Follow Rules focuses upon legal socialization outlining what is known about the process across three related, but distinct, contexts: the family, the school, and the juvenile justice system. Throughout, Tom Tyler and Rick Trinkner emphasize the degree to which individuals develop their orientations toward law and legal authority upon values connected to responsibility and obligation as opposed to fear of punishment. They argue that authorities can act in ways that internalize legal values and promote supportive attitudes. In particular, consensual legal authority is linked to three issues: how authorities make decisions, how they treat people, and whether they recognize the boundaries of their authority. When individuals experience authority that is fair, respectful, and aware of the limits of power, they are more likely to consent and follow directives.

Despite clear evidence showing the benefits of consensual authority, strong pressures and popular support for the exercise of authority based on dominance and force persist in America’s families, schools, and within the juvenile justice system. As the currently low levels of public trust and confidence in the police, the courts, and the law undermine the effectiveness of our legal system, Tom Tyler and Rick Trinkner point to alternative ways to foster the popular legitimacy of the law in an era of mistrust.

Speaking as a parent… I understand where Tyler is coming from. If I act in a way that doesn’t inspire my children to see me as a fair, god-like arbiter of justice, then they are more likely to see me as an unjust tyrant who should be disobeyed and overthrown.

On the other hand, sometimes things are against the rules for reasons kids don’t understand. One of my kids, when he was little, thought turning the dishwasher off was the funniest thing and would laugh all the way through timeout. Easy solution: I didn’t turn it on when he was in the room, and he eventually forgot about it. Tougher problem: one of the kids thought climbing on the stove to get to the microwave was a good idea. Timeouts didn’t work. Explaining “the stove is hot sometimes” didn’t work. Only force solved this problem.

Some people will accept your authority. Some people can reason their way to “We should cooperate and respect the social contract so we can live in peace.” And some people DON’T CARE no matter what.

So I agree that police, courts, etc., should act justly and not abuse their powers, and I can pull up plenty of examples of cases where they did. But I am afraid this is not a complete framework for dealing with criminals and legal socialization.


Re Nichols: Times the Experts were Wrong, pt 3/3

Welcome to our final post of “Times the Experts were Wrong,” written in preparation for our review of The Death of Expertise: The Campaign Against Established Knowledge and Why it Matters. Professor Nichols, if you ever happen to read this, I hope it gives you some insight into where we, the common people, are coming from. If you don’t happen to read it, it still gives me a baseline before reading your book. (Please see part 1 for a discussion of relevant definitions.)

Part 3: Wars

WWI, Iraq, Vietnam etc.

How many “experts” have lied to convince us to go to war? We were told we had to attack Iraq because they had weapons of mass destruction, but the promised weapons never materialized. Mother Jones (that source of all things pro-Trump) has a timeline:

November 1999: Chalabi-connected Iraqi defector “Curveball”—a convicted sex offender and low-level engineer who became the sole source for much of the case that Saddam had WMD, particularly mobile weapons labs—enters Munich seeking a German visa. German intel officers describe his information as highly suspect. US agents never debrief Curveball or perform background check. Nonetheless, Defense Intelligence Agency (DIA) and CIA will pass raw intel on to senior policymakers. …

11/6/00: Congress doubles funding for Iraqi opposition groups to more than $25 million; $18 million is earmarked for Chalabi’s Iraqi National Congress, which then pays defectors for anti-Iraq tales. …

Jan 2002: The FBI, which favors standard law enforcement interrogation practices, loses debate with CIA Director George Tenet, and Libi is transferred to CIA custody. Libi is then rendered to Egypt. “They duct-taped his mouth, cinched him up and sent him to Cairo,” an FBI agent told reporters. Under torture, Libi invents tale of Al Qaeda operatives receiving chemical weapons training from Iraq. “This is the problem with using the waterboard. They get so desperate that they begin telling you what they think you want to hear,” a CIA source later tells ABC. …

Feb 2002: DIA intelligence summary notes that Libi’s “confession” lacks details and suggests that he is most likely telling interrogators what he thinks will “retain their interest.” …

9/7/02: Bush claims a new UN International Atomic Energy Agency (IAEA) report states Iraq is six months from developing a nuclear weapon. There is no such report. …

9/8/02: Page 1 Times story by Judith Miller and Michael Gordon cites anonymous administration officials saying Saddam has repeatedly tried to acquire aluminum tubes “specially designed” to enrich uranium. …

Tubes “are only really suited for nuclear weapons programs…we don’t want the smoking gun to be a mushroom cloud.”—Rice on CNN …

“We do know, with absolute certainty, that he is using his procurement system to acquire the equipment he needs in order to enrich uranium to build a nuclear weapon.”—Cheney on Meet the Press

Oct 2002: National Intelligence Estimate produced. It warns that Iraq “is reconstituting its nuclear program” and “has now established large-scale, redundant and concealed BW agent production capabilities”—an assessment based largely on Curveball’s statements. But NIE also notes that the State Department has assigned “low confidence” to the notion of “whether in desperation Saddam would share chemical or biological weapons with Al Qaeda.” Cites State Department experts who concluded that “the tubes are not intended for use in Iraq’s nuclear weapons program.” Also says “claims of Iraqi pursuit of natural uranium in Africa” are “highly dubious.” Only six senators bother to read all 92 pages. …

10/4/02: Asked by Sen. Graham to make gist of NIE public, Tenet produces 25-page document titled “Iraq’s Weapons of Mass Destruction Programs.” It says Saddam has them and omits dissenting views contained in the classified NIE. …

2/5/03: In UN speech, Powell says, “Every statement I make today is backed up by sources, solid sources. These are not assertions. What we’re giving you are facts and conclusions based on solid intelligence.” Cites Libi’s claims and Curveball’s “eyewitness” accounts of mobile weapons labs. (German officer who supervised Curveball’s handler will later recall thinking, “Mein Gott!”) Powell also claims that Saddam’s son Qusay has ordered WMD removed from palace complexes; that key WMD files are being driven around Iraq by intelligence agents; that bioweapons warheads have been hidden in palm groves; that a water truck at an Iraqi military installation is a “decontamination vehicle” for chemical weapons; that Iraq has drones it can use for bioweapons attacks; and that WMD experts have been corralled into one of Saddam’s guest houses. All but the last of those claims had been flagged by the State Department’s own intelligence unit as “WEAK.”

I’m not going to quote the whole article, so if you’re fuzzy on the details, go read the whole darn thing.

If you had access to the actual documents from the CIA, DIA, British intelligence, interrogators, etc., you could have figured out that the “experts” were not unanimously behind the idea that Iraq was developing WMDs, but we mere plebes were dependent on what the government, Fox, and CNN told us the “experts” believed.

For the record, I was against the Iraq War from the beginning. I’m not sure what Nichols’s original position was, but in Just War, Not Prevention (2003) Nichols argued:

More to the point, Iraq itself long ago provided ample justifications for the United States and its allies to go to war that have nothing to do with prevention and everything to do with justice. To say that Saddam’s grasping for weapons of mass destruction is the final straw, and that it is utterly intolerable to allow Saddam or anyone like him to gain a nuclear weapon, is true but does not then invalidate every other reason for war by subsuming them under some sort of putative ban on prevention.

The record provides ample evidence of the justice of a war against Saddam Hussein’s regime. Iraq has shown itself to be a serial aggressor… a supreme enemy of human rights that has already used weapons of mass destruction against civilians, a consistent violator of both UN resolutions and the terms of the 1991 cease-fire treaty … a terrorist entity that has attempted to reach beyond its own borders to support and engage in illegal activities that have included the attempted assassination of a former U.S. president; and most important, a state that has relentlessly sought nuclear arms against all international demands that it cease such efforts.

Any one of these would be sufficient cause to remove Saddam and his regime … but taken together they are a brief for what can only be considered a just war. …

Those concerned that the United States is about to revise the international status quo might consider that Western inaction will allow the status quo to be revised in any case, only under the gun of a dictator commanding an arsenal of the most deadly materials on earth. These are the two alternatives, and sadly, there is no third choice.

Professor Nichols, I would like to pause here.

First: you think Trump is bad, yet you supported the President under whom POWs were literally tortured, and you call yourself a military ethicist?

Second: you, an expert, bought into this “WMD” story (invented primarily by “Curveball,” an unreliable source), while I, a mere plebe, knew it was a load of garbage.

Third: I agree Saddam Hussein killed a hell of a lot of people. According to Wikipedia, Human Rights Watch estimates that a quarter of a million Iraqis were killed or “disappeared” in the last 25 years of Ba’th party rule. But the nine years of the Iraq War killed 150,000 to 460,000 people (depending on which survey you trust), and based on estimates from the Iraq Body Count, a further 100,000 have died since then. Meanwhile, instability in Iraq allowed the horrifically violent ISIS to sprout into existence. I Am Syria (I don’t know if they are reliable) estimates that over half a million Syrians have died so far because of the ISIS-fueled civil war rampaging there.

In other words, we unleashed a force that is twice as bad as Saddam in less than half the time–and paid a lovely 2.4 TRILLION dollars to accomplish this humanitarian feat! For that much money you could have just evacuated all of the Kurds and built them their own private islands to live on. You could have handed out $90,000 to every man, woman, and child in Iraq in exchange for “being friends with the US” and still had $150 BILLION left over to invest in things like “cancer treatments for children” and “high-speed rail infrastructure.”

Seriously, you could have spent the entire 2.4 trillion on hookers and blow and we would have still come out ahead.
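If you want to check the back-of-envelope math above, here is a minimal sketch. The $2.4 trillion cost and $90,000-per-person figures come from the text; the population of roughly 25 million is my own assumption for Iraq circa 2003, not a figure from the post.

```python
# Back-of-envelope check of the war-cost arithmetic.
# Assumption: Iraqi population of ~25 million around 2003.
total_cost = 2.4e12           # estimated Iraq War cost, in dollars
population = 25_000_000       # assumed Iraqi population, ~2003
handout_per_person = 90_000   # hypothetical payment per person

spent_on_handouts = population * handout_per_person  # $2.25 trillion
left_over = total_cost - spent_on_handouts           # $150 billion

print(f"Handouts: ${spent_on_handouts:,.0f}")
print(f"Left over: ${left_over:,.0f}")
```

Under that population assumption, the post’s numbers check out: $2.25 trillion in payments with $150 billion to spare.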

Back in 2015, you tried to advise the Republican frontrunners on how to answer questions about the Iraq War:

First, let’s just stipulate that the question is unfair.

It’s asking a group of candidates to re-enact a presidential order given 12 years ago, while Hillary Clinton isn’t even being asked about decisions in which she took part, much less about her husband’s many military actions. …

Instead, Republican candidates should change the debate. Leadership is not about what people would do with perfect information; it’s about what people do when faced with danger and uncertainty. So here’s an answer that every Republican, from Paul to Bush, could give:

“Knowing exactly what we know now, I would not have invaded when we did or in the way we did. But I do not regret that we deposed a dangerous maniac like Saddam Hussein, and I know the world is better for it. What I or George Bush or anyone else would have done with better information is irrelevant now, because the next president has to face the world as it is, not as we would like to imagine it. And that’s all I intend to say about second-guessing a tough foreign-policy decision from 12 years ago, especially since we should have more pressing questions about foreign policy for Hillary Clinton that are a lot more recent than that.”

While I agree that Hillary should have been questioned about her own military decisions, Iraq was a congressionally authorized war that the entire Republican establishment, think tanks, newspapers, and experts like you supported. They did such a convincing job of selling the war that even most of the Democratic establishment got on board, though never quite as enthusiastically.

By contrast, there was never any real Democratic consensus on whether Obama should remove troops or increase troops, on whether Hillary should do this or that in Libya. Obama and Hillary might have hideously bungled things, but there was never enthusiastic, party-wide support for their policies.

This makes it very easy for any Dem to distance themselves from previous Dem policies: “Yeah, looks like that was a big whoopsie. Luckily half our party knew that at the time.”

But for better or worse, the Republicans–especially the Bushes–own the Iraq War.

The big problem here is not that the Republican candidates (aside from Trump and Rand Paul) were too dumb to come up with a good response to the question (though that certainly is a problem). The real problem is that none of them had actually stopped to take a long, serious look at the Iraq War, ask whether it was a good idea, and then apologize.

The Iraq War deeply discredited the Republican party.

Ask yourself: What did Bush conserve? What have I conserved? Surely being a “conservative” means you want to conserve something, so what was it? Iraqi freedom? Certainly not. Mid East stability? Nope. American lives? No. American tax dollars? Definitely not.

The complete failure of the Republicans to do anything good while squandering 2.4 trillion dollars and thousands of American lives is what triggered the creation of the “alt” right and set the stage for someone like Trump–someone willing to make a formal break with past Republican policies on Iraq–to rise to power.

Iraq I, the prequel:

But Iraq wasn’t the first war we were deceived into fighting–remember the previous war in Iraq, the one with the other President Bush? The one where we were motivated to intervene over stories of poor Kuwaiti babies ripped from their incubators by cruel Iraqis?

The Nayirah testimony was a false testimony given before the Congressional Human Rights Caucus on October 10, 1990 by a 15-year-old girl who provided only her first name, Nayirah. The testimony was widely publicized, and was cited numerous times by United States senators and President George H. W. Bush in their rationale to back Kuwait in the Gulf War. In 1992, it was revealed that Nayirah’s last name was al-Ṣabaḥ (Arabic: نيره الصباح‎) and that she was the daughter of Saud Al-Sabah, the Kuwaiti ambassador to the United States. Furthermore, it was revealed that her testimony was organized as part of the Citizens for a Free Kuwait public relations campaign which was run by an American public relations firm Hill & Knowlton for the Kuwaiti government. Following this, al-Sabah’s testimony has come to be regarded as a classic example of modern atrocity propaganda.[1][2]

In her emotional testimony, Nayirah stated that after the Iraqi invasion of Kuwait she had witnessed Iraqi soldiers take babies out of incubators in a Kuwaiti hospital, take the incubators, and leave the babies to die.

Her story was initially corroborated by Amnesty International[3] and testimony from evacuees. Following the liberation of Kuwait, reporters were given access to the country. An ABC report found that “patients, including premature babies, did die, when many of Kuwait’s nurses and doctors… fled” but Iraqi troops “almost certainly had not stolen hospital incubators and left hundreds of Kuwaiti babies to die.”[4][5]

Kuwaiti babies died because Kuwaiti doctors and nurses abandoned them. Maybe the “experts” at the UN and in the US government should vet their sources a little better (like actually find out their last names) before starting wars based on the testimony of children?

Vietnam:

And then there was Vietnam. Cold War “experts” were certain it was very important for us to spend billions of dollars in the 1950s to prop up the French colony in Indochina. When the French gave up, fighting the war somehow became America’s problem. The Cold War doctrine of the “Domino Theory” held that the loss of even one obscure, third-world country to Communism would unleash an unstoppable chain-reaction of global Soviet conquest, and thus the only way to preserve democracy anywhere in the world was to oppose communism wherever it emerged.

Of course, one could not be a Cold War “expert” in 1955, as we had never fought a Cold War before. This bipolar world, led by a nuclear-armed communist faction on one side and a nuclear-armed democratic faction on the other, was entirely new.

On top of the difficulties of functioning within an entirely novel balance of powers (and weapons), almost no one in America spoke Vietnamese (and almost no one in Vietnam spoke English) in 1955. We couldn’t even ask the Vietnamese what they thought. At best, we could play a game of telephone via Vietnamese who spoke French and translators who spoke French and English, but the Vietnamese who had learned the language of their colonizers were not a representative sample of average citizens.

In other words, we had no idea what we were getting into.

I lost family in Vietnam, so maybe I take this a little personally, but I don’t think American soldiers exist just to enrich Halliburton or protect French colonial interests. And you must excuse me, but I think you “experts” grunting for war have an extremely bad track record that involves people in my family getting killed.

While we are at it, what is the expert consensus on Russiagate?

Well, Tablet Mag thinks it’s hogwash:

At the same time, there is a growing consensus among reporters and thinkers on the left and right—especially those who know anything about Russia, the surveillance apparatus, and intelligence bureaucracy—that the Russiagate-collusion theory that was supposed to end Trump’s presidency within six months has sprung more than a few holes. Worse, it has proved to be a cover for U.S. intelligence and law-enforcement bureaucracies to break the law, with what’s left of the press gleefully going along for the ride. Where Watergate was a story about a crime that came to define an entire generation’s oppositional attitude toward politicians and the country’s elite, Russiagate, they argue, has proved itself to be the reverse: It is a device that the American elite is using to define itself against its enemies—the rest of the country.

Yet for its advocates, the questionable veracity of the Russiagate story seems much less important than what has become its real purpose—elite virtue-signaling. Buy into a storyline that turns FBI and CIA bureaucrats and their hand-puppets in the press into heroes while legitimizing the use of a vast surveillance apparatus for partisan purposes, and you’re in. Dissent, and you’re out, or worse—you’re defending Trump.

“Russia done it, all the experts say so” sounds suspiciously like a great many other times “expert opinion” has been manipulated by the government, industry, or media to make it sound like expert consensus exists where it does not.

Let’s look at a couple of worst case scenarios:

  1. Nichols and his ilk are right, but we ignore his warnings, overlook a few dastardly Russian deeds, and don’t go to war with Russia.
  2. Nichols is wrong, but we trust him, blame Russia for things it didn’t do, and go to war with a nuclear superpower.

But let’s look at our final fail:

Failure to predict the fall of the Soviet Union

This is kind of ironic, given that Nichols is a Sovietologist, but one of the continuing questions in Political Science is “Why didn’t political scientists predict the fall of the Soviet Union?”

In retrospect, of course, we can point to the state of the Soviet economy, or glasnost, or growing unrest and dissent among Soviet citizens, but as Foreign Policy puts it:

In the years leading up to 1991, virtually no Western expert, scholar, official, or politician foresaw the impending collapse of the Soviet Union, and with it one-party dictatorship, the state-owned economy, and the Kremlin’s control over its domestic and Eastern European empires. …

Whence such strangely universal shortsightedness? The failure of Western experts to anticipate the Soviet Union’s collapse may in part be attributed to a sort of historical revisionism — call it anti-anti-communism — that tended to exaggerate the Soviet regime’s stability and legitimacy. Yet others who could hardly be considered soft on communism were just as puzzled by its demise. One of the architects of the U.S. strategy in the Cold War, George Kennan, wrote that, in reviewing the entire “history of international affairs in the modern era,” he found it “hard to think of any event more strange and startling, and at first glance inexplicable, than the sudden and total disintegration and disappearance … of the great power known successively as the Russian Empire and then the Soviet Union.”

I don’t think this is Political Science’s fault–even the Soviets don’t seem to have really seen it coming. Some things are just hard to predict.

Sometimes we overestimate our judgment. We leap before we look. We think there’s evidence where there isn’t or that the evidence is much stronger than it is.

And in the cases I’ve selected, maybe I’m the one who’s wrong. Maybe Vietnam was a worthwhile conflict, even if it was terrible for everyone involved. Maybe the Iraq War served a real purpose.

WWI was still a complete disaster. There is no logic where that war makes any sense at all.

When you advocate for war, step back a moment and ask how sure you are. If you were going to be the cannon fodder down on the front lines, would you still be so sure? Or would you be the one suddenly questioning the experts about whether this was really such a good idea?

Professor Nichols, if you have read this, I hope it has given you some food for thought.

Re Nichols: Times the Experts were Wrong, pt 2

Welcome back. In preparation for our review of The Death of Expertise: The Campaign Against Established Knowledge and Why it Matters, I have made a list of “times the experts were wrong.” Professor Nichols, if you ever happen to read this, I hope it gives you some insight into where we, the common people, are coming from. If you don’t happen to read it, it still gives me a baseline before reading your book. (Please see part 1 for a discussion of relevant definitions.)

Part 2: Law, Academia, and Science

Legal Testimony

If you’ve had any contact with the court system, you’re probably familiar with the use of “expert testimony.” Often both sides of a case bring in their own experts who give their expert testimony on the case–by necessity, contradictory testimony. For example, one expert in a patent case may testify that his microscopy data shows one thing, while a second testifies that in fact a proper analysis of his microscopy data actually shows the opposite. The jury is then asked to decide which expert’s analysis is correct.

If it sounds suspicious that both sides in a court case can find an “expert” to testify that their side is correct, that’s because it is. Take, for example, the government’s expert testimony in the trial of Mr. Carlos Simon-Timmerman, [note: link takes you to AVN, a site of questionable work-friendliness] accused of possessing child pornography:

“When trial started,” said Ramos-Vega, “the government presented the Lupe DVD and a few other images from the other DVDs that the government understood were also of child pornography.  The government presented the testimony of a Special Agent of Immigration and Customs Enforcement that deals with child pornography and child exploitation cases.  She testified that Lupe was ‘definitely’ under 18. The government then presented the testimony of a pediatrician who testified that she was 100 percent sure that Lupe was underage.”

The experts, ladies and gents.

After the prosecution rested its case, it was Ramos-Vega’s turn to present witnesses.

“The first witness we called was Lupe,” he said. “She took the stand and despite being very nervous testified so well and explained to the ladies and gentlemen of the jury that she was 19 years old when she performed in the videos for littlelupe.com. She also allowed us to present into evidence copies of her documents showing her date of birth.”

So the Customs Special Agent and the pediatrician were both LYING UNDER OATH about the age of a porn star in order to put an innocent man in prison. There were multiple ways they could have confirmed Lupe’s age (such as checking with her official porn star information on file in the US, because apparently that’s an official thing that exists for exactly this purpose,) or contacting Lupe herself like Mr. Simon-Timmerman’s lawyer did.

Unfortunately, this is hardly the first time trial “experts” have lied:

The Washington Post published a story so horrifying this weekend that it would stop your breath: “The Justice Department and FBI have formally acknowledged that nearly every examiner in an elite FBI forensic unit gave flawed testimony in almost all trials in which they offered evidence against criminal defendants over more than a two-decade period before 2000.”

“Of 28 examiners with the FBI Laboratory’s microscopic hair comparison unit, 26 overstated forensic matches in ways that favored prosecutors in more than 95 percent of the 268 trials reviewed so far.” …

Santae Tribble served 28 years for a murder based on FBI testimony about a single strand of hair. He was exonerated in 2012. It was later revealed that one of the hairs presented at trial came from a dog.

Professor Nichols, you want to know, I assume, why we plebes are so distrustful of experts like you. Put yourself, for a moment, in the shoes of an ordinary person accused of a crime. You don’t have a forensics lab. Your budget for expert witnesses is pretty limited. Your lawyer is likely a public defender.

Do you trust that these experts are always right, even though they are often hired by people who have a lot more money than you do? Do you think there is no way these experts could be biased toward the people paying them, or that the side with more money to throw at experts and labs of its own couldn’t produce more evidence favorable to itself than the other side could?

Now let’s expand our scope: how do you think ordinary people think about climate scientists, medical drug studies, or military intelligence? Unlike drug companies, we commoners don’t get to hire our own experts. Do you think Procter &amp; Gamble never produces research that is biased toward its own interests? Of course it does; that’s why researchers have to disclose any money they’ve received from drug companies.

From the poor man’s perspective, it looks like all research is funded by rich men, and none by poor men. It is sensible to worry, therefore, that the results of this research are inherently biased toward those who already have plenty of status and wealth.

The destruction of expertise: “Studies” Departments

Here is a paper published in a real, peer-reviewed academic journal:

Towards a truer multicultural science education: how whiteness impacts science education, by Paul T. Le, (doctoral candidate from the Department of Integrative and Systems Biology at the University of Colorado) and Cheryl Matias, (associate professor at the School of Education and Human Development, University of Colorado) (h/t Real Peer Review):

The hope for multicultural, culturally competent, and diverse perspectives in science education falls short if theoretical considerations of whiteness are not entertained. [Entertained by whom?] Since whiteness is characterized [by whom?] as a hegemonic racial dominance that has become so natural it is almost invisible, this paper identifies how whiteness operates in science education such that [awkward; “to such an extent that”] it falls short of its goal for cultural diversity. [“Cultural diversity” is not one of science education’s goals] Because literature in science education [Which literature? Do you mean textbooks?] has yet to fully entertain whiteness ideology, this paper offers one of the first theoretical postulations [of what?]. Drawing from the fields of education, legal studies, and sociology, [but not science?] this paper employs critical whiteness studies as both a theoretical lens and an analytic tool to re-interpret how whiteness might impact science education. Doing so allows the field to reconsider benign, routine, or normative practices and protocol that may influence how future scientists of Color experience the field. In sum, we seek to have the field consider the theoretical frames of whiteness and how it [use “whiteness” here instead of “it” because there is no singular object for “it” to refer to in this sentence] might influence how we engage in science education such that [“to such an extent that”] our hope for diversity never fully materializes.

Apologies for the red pen; you might think that someone at the “School of Education” could write a grammatical sentence and the people publishing peer-reviewed journals would employ competent editors, but apparently not.

If these are “experts,” then expertise is dead with a stake through its heart.

But the paper goes on!

The resounding belief that science is universal and objective hides the reality that whiteness has shaped the scientific paradigm.

See, you only think gravity pulls objects toward the earth at a rate of 9.8 m/s^2 because you’re white. When black people drop objects off the Leaning Tower of Pisa, they fall at 10 m/s^2. Science textbooks and educators only teaching the white rate and refusing to teach the black rate is why no black nation has successfully launched a man into space.

Our current discourse believes that science and how we approach experimentation and constructing scientific explanations is unbiased, and on the surface, it may seem justified (Kelly 2014). However, this way of knowing science in the absence of other ways of knowing only furthers whiteness an White supremacy through power and control of science knowledge. As a result, our students of Color are victims of deculturization, and their own worldviews are invalidated, such as described by Ladson-Bilings (1998a).

For example, some Aboriginal people in Australia believe that cancer is caused by curses cast by other people or a spiritual punishment for some misdeed the sufferer committed. Teaching them that cancer is caused by mutated cells that have begun reproducing out of control and can’t be caused by a curse is thus destroying a part of their culture. Since all cultures are equally valuable, we must teach that the Aboriginal theory of cancer-curses and the white theory of failed cellular apoptosis are equally true.

Or Le and Matias are full of shit. Le doesn’t have his PhD, yet, so he isn’t an official expert, but Matias is a professor with a CV full of published, peer-reviewed articles on similar themes.

You might say I’ve cherry-picked a particularly bad article, but give me 10 minutes and I’ll get you 100 more that are just as bad. Here’s one on “the construction of race in contemporary PE curriculum policy.”

Every single degree awarded and paper published on such garbage degrades the entire concept of “experts.” Sure, Nichols is a professor–and so is Matias. As far as our official system for determining expertise goes, Nichols, Matias, and Stephen Hawking are all “experts.”

And this matters, because the opinions of garbage experts get cited in places like the NY Times, and then picked up by other journalists and commentators as though they were some kind of valid proof backing up their points. Take this case, “Extensive Data Shows Punishing Reach of Racism for Black Boys”:

Black boys raised in America, even in the wealthiest families and living in some of the most well-to-do neighborhoods, still earn less in adulthood than white boys with similar backgrounds, according to a sweeping new study that traced the lives of millions of children.

White boys who grow up rich are likely to remain that way. Black boys raised at the top, however, are more likely to become poor than to stay wealthy in their own adult households.

(Oh, look, someone discovered regression to the mean.)

What happens when blue check twitter reports on this piece?

    1. You don’t need an “expert” to tell you that black men might get discriminated against.
    2. How do you become an “expert” in anti-racism? Do you have to pass the implicit bias test? Get a degree in anti-racist studies?
    3. Do you think, for whatever reason, that a guy who gets paid to do anti-racist research might come up with “racism” as an answer to almost any question posed?
    4. “The guy who gets paid to say that racism is the answer said the answer is racism” does not actually prove that racism is the answer, but it is being presented like it does.
    5. Blue check has failed to mention any obvious counters, like:
      a. Mysteriously, this “racism” only affects black men and not black women (this is why we’ve had a black female president but not a black male one, right?)
      b. Regression to the mean is a thing and we can measure it (in short: The further you are from average for your group on any measure [height, intelligence, income, number of Daleks collected, etc.,] the more likely your kids are to be closer to average than you are. [This is why the kids of Nobel prize winners, while pretty smart on average, are much less likely to win Nobels than their parents.] Since on average blacks make a lot less money than whites, any wealthy black family is significantly further from the average black income than a white family with the same amount of money is from the average white income. Therefore at any high income level, we expect black kids to regress harder toward the black mean than white kids raised at the same level. La Griffe du Lion [a statistics expert] has an article that goes into much more depth and math on regression to the mean and its relevance.)
      c. Crime rates. Black men commit more crime than black women or white men, and not only does prison time cut into employment, but most employers don’t want to employ people who’ve committed a crime. This makes it easier for black women to get jobs and build up wealth than black men. (The article itself does mention that “The sons of black families from the top 1 percent had about the same chance of being incarcerated on a given day as the sons of white families earning $36,000,” but yeah, it’s probably just totally irrational discrimination keeping black men out of jobs.)
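The regression effect in point (b) is easy to demonstrate numerically. Here is a toy sketch of my own (not a model from the study or from La Griffe du Lion; the group means, heritability coefficient, and parent value are all invented for illustration) showing why children of equally well-off parents from groups with different population means end up, on average, at different levels:

```python
import random

def expected_child_value(group_mean, parent_value, heritability=0.6):
    """Expected child outcome: the child keeps only a fraction
    (heritability) of the parent's deviation from the group mean,
    i.e. regresses toward that mean."""
    return group_mean + heritability * (parent_value - group_mean)

def simulate_children(group_mean, parent_value, heritability=0.6,
                      noise_sd=10, n=20_000, seed=0):
    """Monte Carlo version: each child is the expectation plus noise."""
    rng = random.Random(seed)
    mu = expected_child_value(group_mean, parent_value, heritability)
    return sum(rng.gauss(mu, noise_sd) for _ in range(n)) / n

# Two groups with different population means; parents equally well-off.
parent = 150
low_mean, high_mean = 60, 100
print(expected_child_value(low_mean, parent))   # 114.0
print(expected_child_value(high_mean, parent))  # 130.0
```

With identical parents at 150, the children from the lower-mean group are pulled further down (toward 60) than the children from the higher-mean group (toward 100) — exactly the asymmetry the blue-check commentary never mentions.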

“Experts” like this get used to trot out a simple, narrative-supporting line that the paper wants to push rather than to give any real or uncomfortable analysis of a complex issue. It’s dishonest reporting and contributes to the notion that “expert” doesn’t mean all that much.


Leaded Gas:

Tetraethyl lead (a lead compound) was added to automobile fuels beginning in the 1920s as an anti-knock agent, allowing higher-compression engines and better fuel economy–that is, more miles per gallon. For half a century, automobiles belched brain-damaging lead into the atmosphere, until the Clean Air Act in the 70s forced gas companies to cut back.

Here’s a good article discussing the leaded gas and crime correlation.

Plenty of people knew lead is poisonous–we’ve known that since at least the time of the Romans–so how did it end up in our gas? Well, those nice scientists over at the auto manufacturers reassured us that lead in gasoline was perfectly safe, and then got themselves on a government panel intended to evaluate the safety of leaded gas and came to the same conclusion. Wired has a thorough history:

But fearing that such [anti-leaded gas] measures would spread, … the manufacturing companies demanded that the federal government take over the investigation and develop its own regulations. U.S. President Calvin Coolidge, a Republican and small-government conservative, moved rapidly in favor of the business interests.

… In May 1925, the U.S. Surgeon General called a national tetraethyl lead conference, to be followed by the formation of an investigative task force to study the problem. That same year, Midgley [the inventor of leaded gas] published his first health analysis of TEL, which acknowledged a minor health risk at most, insisting that the use of lead compounds, “compared with other chemical industries it is neither grave nor inescapable.”

It was obvious in advance that he’d basically written the conclusion of the federal task force. That panel only included selected industry scientists like Midgley. It had no place for Alexander Gettler or Charles Norris [scientists critical of leaded gas] or, in fact, anyone from any city where sales of the gas had been banned, or any agency involved in producing that first critical analysis of tetraethyl lead.

In January 1926, the Public Health Service released its report, which concluded that there was “no danger” posed by adding TEL to gasoline…“no reason to prohibit the sale of leaded gasoline” as long as workers were well protected during the manufacturing process.

The task force did look briefly at risks associated with everyday exposure by drivers, automobile attendants, and gas station operators, and found that it was minimal. The researchers had indeed found lead residues in dusty corners of garages. In addition, all the drivers tested showed trace amounts of lead in their blood. But a low level of lead could be tolerated, the scientists announced. After all, none of the test subjects showed the extreme behaviors and breakdowns associated with places like the looney gas building. And the worker problem could be handled with some protective gear.

I’m not sure how many people were killed globally by leaded gas, but Wired notes:

It was some fifty years later – in 1986 – that the United States formally banned lead as a gasoline additive. By that time, according to some estimates, so much lead had been deposited into soils, streets, building surfaces, that an estimated 68 million children would register toxic levels of lead absorption and some 5,000 American adults would die annually of lead-induced heart disease.

The UN estimates that the elimination of lead in gas and paint has added $2.4 trillion, annually, to the global economy.

Leaded gas is a good example of a case where many experts did know it was poisonous (as did many non-experts,) but this wasn’t the story the public heard.

Pluto

Yes, this one is silly, but I have relatives who keep bringing it up. “Scientists used to say there are 9 planets, but now they say there are only 8! Scientists change what they think all the time!”

Congratulations, astronomers: they think you lost Pluto. Every single time I try to discuss science with these people, they bring up Pluto. Scientific consensus is meaningless in a world where planets just disappear. “Whoops! We miscounted!”

(No one ever really questioned Pluto’s planetary status before it was changed, but a few die-hards refuse to accept the new designation.)

Scientists weren’t actually wrong about Pluto (“planet” is just a category scientists made up and that they decided to redefine to make it more useful,) but the matter confused people and it seemed like scientific consensus was arbitrary and could change unexpectedly.

Unfortunately, normal people who don’t have close contact with science or scientists often struggle to understand exactly what science is and how it advances. They rely, sporadically, on intermediaries like The History Channel or pop science journalists to explain it to them, and these guys like to run headlines like “5 things Albert Einstein got Totally Wrong” (haha that Albert, what a dummy, amirite?)

So when you question why people distrust experts like you, Professor Nichols, consider whether the other “experts” they’ve encountered have been trustworthy or even correct, or if they’ve been liars and shills.

Cathedral Round-Up: Should I read Nichols or Pinker?

Harvard Mag had interesting interviews/reviews of both Tom Nichols’s “Death of Expertise” and Steven Pinker’s “Enlightenment Now”.

From the article about Nichols:

Several years ago, Tom Nichols started writing a book about ignorance and unreason in American public discourse—and then he watched it come to life all around him, in ways starker than he had imagined. A political scientist who has taught for more than a decade in the Harvard Extension School, he had begun noticing what he perceived as a new and accelerating—and dangerous—hostility toward established knowledge. People were no longer merely uninformed, Nichols says, but “aggressively wrong” and unwilling to learn. They actively resisted facts that might alter their preexisting beliefs. They insisted that all opinions, however uninformed, be treated as equally serious. And they rejected professional know-how, he says, with such anger. That shook him.

Skepticism toward intellectual authority is bone-deep in the American character, as much a part of the nation’s origin story as the founders’ Enlightenment principles. Overall, that skepticism is a healthy impulse, Nichols believes. But what he was observing was something else, something malignant and deliberate, a collapse of functional citizenship.

What are people aggressively wrong about, and what does he think is causing the collapse of functional citizenship?

The Death of Expertise resonated deeply with readers. … Readers regularly approach Nichols with stories of their own disregarded expertise: doctors, lawyers, plumbers, electricians who’ve gotten used to being second-guessed by customers and clients and patients who know little or nothing about their work. “So many people over the past year have walked up to me and said, ‘You wrote what I was thinking,’” he says.

Sounds like everyone’s getting mansplained these days.

The Death of Expertise began as a cri de coeur on his now-defunct blog in late 2013. This was during the Edward Snowden revelations, which to Nichols’s eye, and that of other intelligence experts, looked unmistakably like a Russian operation. “I was trying to tell people, ‘Look, trust me, I’m a Russia guy; there’s a Russian hand behind this.’ ” But he found more arguments than takers. “Young people wanted to believe Snowden was a hero.”

I don’t have a particular opinion on Snowden because I haven’t studied the issue, but let’s pretend you were in the USSR and one day a guy in the government spilled a bunch of secrets about how many people Stalin was having shot and how many millions were starving to death in Holodomor (the Ukrainian genocide.) (Suppose also that the media were sufficiently free to allow the stories to spread.)

Immediately you’d have two camps: the “This guy is a capitalist spy sent to discredit our dear leader with a hideous smear campaign” and “This guy is totally legit, the people need to know!”

Do you see why “Snowden is a Russian” sounds like the government desperately trying to cover its ass?

Now let’s suppose the guy who exposed Stalin actually was a capitalist spy. Maybe he really did hate communism and wanted to bring down the USSR. Would it matter? As long as the stuff he said was true, would you want to know anyway? I know that if I found out about Holodomor, I wouldn’t care about the identity of the guy who released the information besides calling him a hero.

I think a lot of Trump supporters feel similarly about Trump. They don’t actually care whether Russia helped Trump or not; they think Trump is helping them, and that’s what they care about.

In other words, it’s not so much “I don’t believe you” as “I have other priorities.”

In December, at a JFK Library event on reality and truth in public discourse, a moderator asked him a version of “How does this end?” … “In the longer term, I’m worried about the end of the republic,” he answered. Immense cynicism among the voting public—incited in part by the White House—combined with “staggering” ignorance, he said, is incredibly dangerous. In that environment, anything is possible. “When people have almost no political literacy, you cannot sustain the practices that sustain a democratic republic.” The next day, sitting in front of his fireplace in Rhode Island, where he lives with his wife, Lynn, and daughter, Hope, he added, “We’re in a very perilous place right now.”

Staggering ignorance about what, I wonder. Given our increased access to information, I suspect that the average person today both knows and can easily find the answers to far more questions than the average person of the 80s, 50s, or 1800s.

I mean, in the 80s, we still had significant numbers of people who believed in: faith healing; televangelists; six-day creationism; “pyramid power”; crop circles; ESP; UFOs; astrology; multiple personality disorder; a global Satanic daycare conspiracy; recovered memories; Freudianism; and the economic viability of the USSR. (People today still believe in the last one.)

On the one hand, I think part of what Nichols is feeling is just the old distrust of experts projected onto the internet. People used to harass their local school boards about teaching ‘evilution’; today they harass each other on Twitter over Benghazi or birtherism or Russia collusion or whatever the latest thing is.

We could, of course, see a general decline in intellectual abilities as the population of the US itself is drawn increasingly from low-IQ backgrounds and low-IQ people (appear to) outbreed the high-IQ ones, but I have yet to see whether this has had time to manifest as a change in the amount of general knowledge people can use and display, especially given our manifestly easier time actually accessing knowledge. I am tempted to think that perhaps the internet forced Nichols outside of his Harvard bubble and he encountered dumb people for the first time in his life.

On the other hand, however, I do feel a definite sense of malaise in America. It’s not about IQ, but how we feel about each other. We don’t seem to like each other very much. We don’t trust each other. Trust in government is low. Trust in each other is low. People have fewer close friends and confidants.

We have material prosperity, yes, despite our economic woes, but there is a spiritual rot.

Both sides are recognizing this, but the left doesn’t understand what is causing it.

They can point at Trump. They can point at angry hordes of Trump voters. “Something has changed,” they say. “The voters don’t trust us anymore.” But they don’t know why.

Here’s what I think happened:

The myth that is “America” got broken.

A country isn’t just a set of laws with a tract of land. It can be that, but if so, it won’t command a lot of sentimental feeling. You don’t die to defend a “set of laws.” A country needs a people.

“People” can be a lot of things. They don’t have to be racially homogenous. “Jews” are a people, and they are not racially homogenous. “Turks” are a people, and they are not genetically homogenous. But fundamentally, people have to see themselves as “a people” with a common culture and identity.

America has two main historical groups: whites and blacks. Before the mass immigration kicked off in 1965, whites were about 88% of the country and blacks were about 10%. Indians, Asians, Hispanics, and everyone else rounded out that last 2%. And say what you will, but whites thought of themselves as the American culture, because they were the majority.

America absorbed newcomers. People came, got married, had children: their children became Americans. The process takes time, but it works.

Today, though, “America” is fractured. It is ethnically fractured–California and Texas, for example, are now majority non-white. There is nothing particularly wrong with the folks who’ve moved in, they just aren’t from one of America’s two main historical ethnic groups. They are their own groups, with their own histories. England is a place with a people and a history; Turkey is a place with a people and a history. They are two different places with different people and different history. It is religiously fractured–far fewer people belong to one of America’s historically prominent religions. It is politically fractured–more people now report being uncomfortable with their child dating a member of the opposite political party than of a different race.

Now we see things like this: After final vote, city will remove racist Pioneer Monument Statue:

As anticipated, the San Francisco Arts Commission voted unanimously Monday to remove the “Early Days” statue from Civic Center’s Pioneer Monument, placing the century-plus old bronze figures in storage until a long-term decision about their fate can be made.

The decision caps off a six-month long debate, after some San Franciscans approached the commission in August 2017 to complain about the statue, which features a pious but patronizing scene of a Spanish missionary helping a beaten Indian to his feet and pointing him toward heaven.

In February the city’s Historic Preservation Commission voted unanimously to recommend removing “Early Days” despite some commissioners expressing reservations about whether the sculpture has additional value as an exposé of 19th century racism.

Your statues are racist. Your history is racist. Your people is racist.

What do they think the reaction to this will look like?

 

But before we get too dark, let’s take a look at Pinker’s latest work, Enlightenment Now:

It is not intuitive that a case needs to be made for “Reason, Science, Humanism, and Progress,” stable values that have long defined our modernity. And most expect any attack on those values to come from the far right: from foes of progressivism, from anti-science religious movements, from closed minds. Yet Steven Pinker argues there is a second, more profound assault on the Enlightenment’s legacy of progress, coming from within intellectual and artistic spheres: a crisis of confidence, as progress’s supporters see so many disasters, setbacks, emergencies, new wars re-opening old wounds, new structures replicating old iniquities, new destructive side-effects of progress’s best intentions. …

Pinker’s volume moves systematically through various metrics that reflect progress, charting improvements across the last half-century-plus in areas from racism, sexism, homophobia, and bullying, to car accidents, oil spills, poverty, leisure, female empowerment, and so on. …

the case Pinker seeks to make is at once so basic and so difficult that a firehose of evidence may be needed—optimism is a hard sell in this historical moment. … Pinker credits the surge in such sentiments since the 1960s to several factors. He points to certain religious trends, because a focus on the afterlife can be in tension with the project of improving this world, or caring deeply about it. He points to nationalism and other movements that subordinate goods of the individual or even goods of all to the goods of a particular group. He points to what he calls neo-Romantic forms of environmentalism, not all environmentalisms but specifically those that subordinate the human species to the ecosystem and seek a green future, not through technological advances, but through renouncing current technology and ways of living. He also points to a broader fascination with narratives of decline …

I like the way Pinker thinks and appreciate his use of actual data to support his points.

To these decades-old causes, one may add the fact that humankind’s flaws have never been so visible as in the twenty-first century. … our failures are more visible than ever through the digital media’s ceaseless and accelerating torrent of grim news and fervent calls to action, which have pushed many to emotional exhaustion. Within the last two years, though not before, numerous students have commented in my classroom that sexism/racism/inequality “is worse today than it’s ever been.” The historian’s answer, “No, it used to be much worse, let me tell you about life before 1950…,” can be disheartening, especially when students’ rage and pain are justified and real. In such situations, Pinker’s vast supply of clear, methodical data may be a better tool to reignite hope than my painful anecdotes of pre-modern life.

Maybe Nichols is on to something about people today being astoundingly ignorant…

Pinker’s celebration of science is no holds barred: he calls it an achievement surpassing the masterworks of art, music, and literature, a source of sublime beauty, health, wealth, and freedom.

I agree with Pinker on science, but Nichols’s worldview may be the one that needs plumbing.

Which book do you want me to read/review?

Cathedral Round-Up #30: HLS’s Bicentennial Class

Harvard Law Bulletin recently released a special issue commemorating HLS’s 200th anniversary:

Invocation

A Memorial to the Enslaved People Who Enabled the Founding of Harvard Law School

On a clear, windy afternoon in early September at the opening of its bicentennial observance, Harvard Law School unveiled a memorial on campus. The plaque, affixed to a large stone, reads:

In honor of the enslaved whose labor created wealth that made possible the founding of Harvard Law School

May we pursue the highest ideals of law and justice in their memory

Harvard Law School was founded in 1817, with a bequest from Isaac Royall Jr. Royall’s wealth was derived from the labor of enslaved people on a sugar plantation he owned on the island of Antigua and on farms he owned in Massachusetts.

“We have placed this memorial here, in the campus cross-roads, at the center of the school, where everyone travels, where it cannot be missed,” said HLS Dean John Manning ’85. …

Harvard University President Drew Faust… also spoke at the unveiling, which followed a lecture focused on the complicated early history of the school.

“How fitting that you should begin your bicentennial,” said Faust, “with this ceremony reminding us that the path toward justice is neither smooth nor straight.” …

Halley, holder of the Royall Professorship of Law, who has spoken frequently about the Royall legacy, read aloud the names of enslaved men, women, and children of the Royall household from records that have survived, “so that we can all share together the shock of the sheer number,” she said, “and a brief shared experience of their loss.”

“These names are the tattered, ruined remains, the accidents of recording and the encrustation of a system that sought to convert human beings into property,” she said. “But they’re our tattered remains.”

This commemorative issue also contains an interview with ImeIme Umana, Harvard Law Review’s 131st president, “How Have Harvard Scholars Shaped the Law?”:

How has legal scholarship changed since the Law Review began publishing more than a century ago?

Scholarship certainly has changed over time, and these pieces, whether or not they acknowledge it to a great extent, are consistent with the changing nature of the legal field in that they bring more voices to the table and more diverse perspectives. If you look back at our older scholarship, you’ll tend to see more traditional, doctrinal, technical pieces. Now, they’re more aspirational, more critical, and have more social commentary in them. It’s a distinction between writing on what the law is and writing on what the law should be, and asking why things are the way they are.

BTW, you can purchase the Harvard Law Review on Amazon.

What kind of scholarship do you find especially meaningful?

I’m really passionate about the state of the criminal legal system and civil rights. The cherry on top within those topics is scholarship that proposes new ways of thinking or challenges the status quo.

One of my favorite articles is [Assistant] Professor Andrew Crespo’s “Systemic Facts” [published in the June 2016 Harvard Law Review], because it does just that. The thesis is that courts are institutionally positioned to bring about systemic change, and that they can use their position to collect facts that they are institutionally privy to. It calls on them to do that such that we might learn more about how the legal system is structured.

I’ve noticed the increased emphasis on criminal law lately, especially bail reform.

The Law Review was founded 130 years ago, and now you are its president. Do you ever get caught up in thinking about the historical implications of running such a well-known and influential publication?

… Looking at it through a historical lens, the diversity of the student body and Law Review editors and authors is especially meaningful, as it makes legal institutions more inclusive, and therefore the law more inclusive. It’s important to keep pushing in that direction and never become complacent. The history is very important.

You are the first black woman who was elected to serve as president of the Law Review. Why do you think it took so long for that to happen?

I’ve thought about it a lot and I just don’t know the answer. My thought is that it just tracks the lack of inclusion of black women in legal institutions, full stop. It’s a function of that. There’s always more we can be doing to be more inclusive. The slowness of milestones like this might have a broader cause than just something specific to the Law Review.

It probably tracks closer to the inclusion of Nigerian women at Harvard than black women. Umana is Nigerian American, and Nigerian Americans score significantly better on the SAT and LSAT than African Americans. (Based on average incomes, Nigerian Americans do better than white Americans, too.) So I’m going to go out on a limb and wager that significant black firsts at HLR are due to the arrival of more Nigerian and Kenyan immigrants, rather than the integration of America’s African American community.

While reading about ImeIme Umana, I noticed that American publications–such as NBC News–describe her as a “native” of Harrisburg, Pennsylvania. By contrast, Financial Nigeria proudly claims her as a “Nigerian American”:

Born to Nigerian immigrant parents originally from Akwa Ibom State in Nigeria, Umana is a resident of Harrisburg, Pennsylvania, United States. Umana graduated with a BA in Joint Concentration in African American Studies and Government from Harvard University in 2014. She is currently working on a Doctor of Law degree (Class of 2018) at the Harvard Law School.

Who is this man? HLS Class of 1926

The issue is full of fascinating older photographs with minimalist captions, because the graphic design team prefers white space over information.

For example, on page 58 is a photo of a collection of students and older men (is that Judge Learned Hand in the first row?) captioned simply 1926 and “Stepping up: by 1925, lawyers could pursue graduate degrees (LL.M.s and S.J.D.s) at HLS.”

<- Seated in the front row is this man. Who is he? Quick perusal of a list of famous Indians reveals only that he isn’t any of them.

There is also an Asian man seated directly behind him whose photo I’ll post below. You might think, in our diversity obsessed age, when we track the first black editor of this and first black female head of that, someone would be curious enough about these men to tell us their stories. Who were they? How did they get to Harvard Law?

After some searching and help from @prius_1995, I think the Indian man is Dr. Kashi Narayan Malaviya, S.J.D. HLS 1926, and the Asian man is Domingo Tiongco Zavalla, LL.M. 1927, from the Philippines. (If you are curious, here are the relevant class lists.)

I haven’t been able to find out much about Dr. Malaviya. Clearly he associated with folks in high places, as indicated by this quote from Hindu Nationalism and the Language of Politics in Late Colonial India:

In Allahabad, during a meeting attended by Uma Nehru, Hriday Nath Kunzru and Dr. Kashi Narayan Malaviya, M. K. Acharya made the link between the politics of the nation and the plight of Hinduism very clear…

Domingo Tiongco Zavalla, LL.M. HLS 1927

(Unfortunately, it appears that he has a more famous relative named Madan Mohan Malaviya, who is coming up in the search results. His great-grandson is single, however, if any of you ladies are looking for a Brahmin husband.)

1926 was during the period when America ruled the Philippines, so it would be sensible for Filipinos to want to learn about the American legal system and become credentialed in it. Domingo Zavalla went on to be a delegate to the Philippines’s Commonwealth Constitutional Convention (This was probably the 1934 Convention: “The Convention drafted the 1935 Constitution, which was the basic law of the Philippines under the American-sponsored Commonwealth of the Philippines and the post-War, sovereign Third Republic.”)

That’s about all I’ve found about Zavalla.

How quickly we fall into obscurity and are forgotten.

Cathedral Round-Up #29: Pinker, Truth, and Liars

Steven Pinker recently gave a short speech at Harvard (where he works) on how hearing certain facts without accompanying leftist counter-arguments causes people to become “infected” with right-wing thoughts:

The Left responded in its usual, thoughtful, reasonable fashion, eg “If you ever doubted that Steven Pinker’s sympathies lie with the alt-right…” The author of the piece also called Pinker a “lying right-wing shitweasel” on twitter.

Of course this is nonsense; as Why Evolution is True has pointed out, Pinker is one of Harvard’s most generous donors to the Democratic party.

The difference between Pinker and the Left is that Pinker is (trying) to be honest. Pinker believes in truth. He believes in believing true things and discussing true things. He believes that just because you believe a true thing doesn’t mean you have to go down this road to believing other, in his opinion untrue, things. You can believe more than one true thing. You can simultaneously believe “Blacks commit more homicide than whites” and believe “Blacks should not be discriminated against.”

By contrast, the Left is not trying to be honest. It is not looking for truth. It just wants to win. The Left does not want people to know that crime stats vary by race, that men and women vary in average interests and aptitudes, that communism is an atrociously bad economic system. Merely saying, “Hey, there are things you can’t say out loud without provoking a very loud controversy from the left,” has provoked… a very loud controversy from the left:

Link to the original conversation

 

The Left is abusing one of its own because merely saying these things out loud is seen as a betrayal of Leftist goals.

 

And yet he was in the right! They were wrong and he was right. And if all others accepted the lie which the Party imposed—if all records told the same tale—then the lie passed into history and became truth. ‘Who controls the past’ ran the Party slogan, ‘controls the future: who controls the present controls the past.’ —George Orwell, 1984

 

Anthropology Friday: Numbers and the Making of Us, part 2

Welcome to part 2 of my review of Caleb Everett’s Numbers and the Making of Us: Counting and the Course of Human Cultures.

I was really excited about this book when I picked it up at the library. It has the word “numbers” on the cover and a subtitle that implies a story about human cultural and cognitive evolution.

Regrettably, what could have been a great book has turned out to be kind of annoying. There’s some fascinating information in here–for example, there’s a really interesting part on pages 249-252–but you have to get through pages 1-248 to get there. (Unfortunately, sometimes authors put their most interesting bits at the end so that people looking to make trouble have gotten bored and wandered off by then.)

I shall try to discuss/quote some of the book’s more interesting bits, and leave aside my differences with the author (who keeps reiterating his position that mathematical ability is entirely dependent on the culture you’re raised in.) Everett nonetheless has a fascinating perspective, having actually spent much of his childhood in a remote Amazonian village belonging to the Piraha, who have no real words for numbers. (His parents were missionaries.)

Which languages contain number words? Which don’t? Everett gives a broad survey:

“…we can reach a few broad conclusions about numbers in speech. First, they are common to nearly all of the world’s languages. … this discussion has shown that number words, across unrelated languages, tend to exhibit striking parallels, since most languages employ a biologically based body-part model evident in their number bases.”

That is, many languages have words that translate essentially to “One, Two, Three, Four, Hand, … Two hands, (10)… Two Feet, (20),” etc., and reflect this in their higher counting systems, which can end up containing a mix of base five, 10, and 20. (The Romans, for example, used both base five and ten in their written system.)
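The Roman mixing of base five and base ten can be made concrete with a short sketch (my own illustration, not from the book): the “sub-base” symbols V (5), L (50), and D (500) sit between the base-10 symbols I, X, C, and M, exactly the one-hand/two-hands pattern Everett describes.

```python
# Illustrative sketch: Roman numerals as a mixed base-5/base-10 system.
# V, L, D are "one hand" units sitting between the powers of ten.
def to_roman(n):
    """Convert a positive integer to Roman numerals (subtractive notation)."""
    symbols = [
        (1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
        (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
        (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I"),
    ]
    out = []
    for value, sym in symbols:
        while n >= value:
            out.append(sym)
            n -= value
    return "".join(out)

print(to_roman(7))     # VII: one "hand" (V) plus two fingers
print(to_roman(1926))  # MCMXXVI
```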

“Third, the linguistic evidence suggests not only that this body-part model has motivated the innovation of numbers throughout the world, but also that this body-part basis of number words stretches back historically as far as the linguistic data can take us. It is evident in reconstruction of ancestral languages, including Proto-Sino-Tibetan, Proto-Niger-Congo, Proto-Austronesian, and Proto-Indo-European, the languages whose descendant tongues are best represented in the world today.”

Note, though, that linguistics does not actually give us a very long time horizon. Proto-Indo-European was spoken about 4-6,000 years ago. Proto-Sino-Tibetan is not as well studied yet as PIE, but also appears to be at most 6,000 years old. Proto-Niger-Congo is probably about 5-6,000 years old. Proto-Austronesian (which, despite its name, is not associated with Australia,) is about 5,000 years old.

These ranges are not a coincidence: languages change as they age, and once they have changed too much, they become impossible to classify into language families. Older languages, like Basque or Ainu, are often simply described as isolates, because we can’t link them to their relatives. Since humanity itself is 200,000-300,000 years old, comparative linguistics only opens a very short window into the past. Various groups–like the Amazonian tribes Everett studies–split off from other groups of humans thousands or hundreds of thousands of years before anyone started speaking Proto-Indo-European. Even agriculture, which began about 10,000-15,000 years ago, is older than these proto-languages (and agriculture seems to have prompted the real development of math.)

I also note these language families are the world’s biggest because they successfully conquered speakers of the world’s other languages. Spanish, Portuguese, and English are now widely spoken in the Americas instead of Cherokee, Mayan, and Nheengatu because Indo-European language speakers conquered the speakers of those languages.

The guy with the better numbers doesn’t always conquer the guy with the worse numbers–the Mongol conquest of China is an obvious counter. But in these cases, the superior number system sticks around, because no one wants to replace good numbers with bad ones.

In general, though, better tech–which requires numbers–tends to conquer worse tech.

Which means that even though our most successful language families all have number words that appear to be about 4-6,000 years old, we shouldn’t assume this was the norm for most people throughout most of history. Current human numeracy may be a very recent phenomenon.

“The invention of number is attainable by the human mind but is attained through our fingers. Linguistic data, both historical and current, suggest that numbers in disparate cultures have arisen independently, on an indeterminate range of occasions, through the realization that hands can be used to name quantities like 5 and 10. … Words, our ultimate implements for abstract symbolization, can thankfully be enlisted to denote quantities. But they are usually enlisted only after people establish a more concrete embodied correspondence between their fingers and quantities.”

Some more on numbers in different languages:

“Rare number bases have been observed, for instance, in the quaternary (base-4) systems of Lainana languages of California, or in the senary (base-6) systems that are found in southern New Guinea. …

Several languages in Melanesia and Polynesia have or once had number systems that vary in accordance with the type of object being counted. In the case of Old High Fijian, for instance, the word for 100 was Bola when people were counting canoes, but Kora when they were counting coconuts. …

some languages in northwest Amazonia base their numbers on kinship relationships. This is true of Daw and Hup, two related languages in the region. Speakers of the former language use fingers complemented with words when counting from 4 to 10. The fingers signify the quantity of items being counted, but words are used to denote whether the quantity is odd or even. If the quantity is even, speakers say it “has a brother”; if it is odd, they state it “has no brother.”
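The Daw/Hup strategy is almost an algorithm: fingers carry the exact quantity, words carry only its parity. A rough sketch of the scheme as Everett describes it (the function name and range are my own framing):

```python
# A rough sketch of the Daw/Hup counting strategy: fingers show the
# exact quantity, while the accompanying words only encode parity.
def daw_hup_count(n):
    """Return (fingers shown, parity phrase) for a quantity from 4 to 10."""
    assert 4 <= n <= 10, "the finger-plus-parity strategy covers 4 through 10"
    parity = "has a brother" if n % 2 == 0 else "has no brother"
    return n, parity

print(daw_hup_count(6))  # (6, 'has a brother')
print(daw_hup_count(7))  # (7, 'has no brother')
```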

What about languages with no or very few words for numbers?

In one recent survey of limited number systems, it was found that more than a dozen languages lack bases altogether, and several do not have words for exact quantities beyond 2 and, in some cases, beyond 1. Of course, such cases represent a minuscule fraction of the world’s languages, the bulk of which have number bases reflecting the body-part model. Furthermore, most of the extreme cases in question are restricted geographically to Amazonia. …

All of the extremely restricted languages, I believe, are used by people who are hunter-gatherers or horticulturalists, eg, the Munduruku. Hunter gatherers typically don’t have a lot of goods to keep track of or trade, fields to measure or taxes to pay, and so don’t need to use a lot of numbers. (Note, however, that the Inuit/Eskimo have a perfectly normal base-20 counting system. Their particularly harsh environment appears to have inspired both technological and cultural adaptations.) But why are Amazonian languages even less numeric than those of other hunter-gatherers from similar environments, like central Africa?

Famously, most of the languages of Australia have somewhat limited number systems, and some linguists previously claimed that most Australian languages lack precise terms for quantities beyond 2…. [however] many languages on that continent actually have native means of describing various quantities in precise ways, and their number words for small quantities can sometimes be combined to represent larger quantities via the additive and even multiplicative usage of bases. …

Of the nearly 200 Australian languages considered in the survey, all have words to denote 1 and 2. In about three-quarters of the languages, however, the highest number is 3 or 4. Still, many of the languages use a word for “two” as a base for other numbers. Several of the languages use a word for “five” as a base, and eight of the languages top out at a word for “ten.”

Everett then digresses into what initially seems like a tangent about grammatical number, but luckily I enjoy comparative linguistics.

In an incredibly comprehensive survey of 1,066 languages, linguist Matthew Dryer recently found that 98 of them are like Karitiana and lack a grammatical means of marking nouns as being plural. So it is not particularly rare to find languages in which numbers do not show plurality. … about 90% of them have a grammatical means through which speakers can convey whether they are talking about one or more than one thing.

Mandarin is a major language that has limited expression of plurals. According to Wikipedia:

The grammar of Standard Chinese shares many features with other varieties of Chinese. The language almost entirely lacks inflection, so that words typically have only one grammatical form. Categories such as number (singular or plural) and verb tense are frequently not expressed by any grammatical means, although there are several particles that serve to express verbal aspect, and to some extent mood.

Some languages, such as modern Arabic and Proto-Indo-European, also have a “dual” category distinct from singular or plural; an extremely small set of languages have a trial category.

Many languages also change their verbs depending on how many nouns are involved; in English we say “He runs; they run;” languages like Latin or Spanish have far more extensive systems.

In sum: the vast majority of languages distinguish between 1 and more than one; a few distinguish between one, two, and many, and a very few distinguish between one, two, three, and many.

From the endnotes:

… some controversial claims of quadral markers, used in restricted contexts, have been made for the Austronesian languages Tangga, Marshallese, and Sursurunga. … As Corbett notes in his comprehensive survey, the forms are probably best not considered quadral markers. In fact, his impressive survey did not uncover any cases of quadral marking in the world’s languages.

Everett tends to bury his point; his intention in this chapter is to marshal support for the idea that humans have an “innate number sense” that allows them to pretty much instantly realize if they are looking at 1, 2, or 3 objects, but does not allow for instant recognition of larger numbers, like 4. He posits a second, much vaguer number sense that lets us distinguish between “big” and “small” amounts of things, eg, 10 looks smaller than 100, even if you can’t count.

He does cite actual neuroscience on this point–he’s not just making it up. Even newborn humans appear to be able to distinguish between 1, 2, and 3 of something, but not larger numbers. They also seem to distinguish between some and a bunch of something. Anumeric peoples, like the Piraha, also appear to only distinguish between 1, 2, and 3 items with good accuracy, though they can tell “a little” “some” and “a lot” apart. Everett also cites data from animal studies that find, similarly, that animals can distinguish 1, 2, and 3, as well as “a little” and “a lot”. (I had been hoping for a discussion of cephalopod intelligence, but unfortunately, no.)

How then, Everett asks, do we wed our specific number sense (1, 2, and 3) with our general number sense (“some” vs “a lot”) to produce ideas like 6, 7, and a googol? He proposes that we have no innate idea of 6, nor ability to count to 10. Rather, we can count because we were taught to (just as some highly trained parrots and chimps can.) It is only the presence of number words in our languages that allows us to count past 3–after all, anumeric people cannot.

But I feel like Everett is railroading us to a particular conclusion. For example, he cites neurology studies that found one part of the brain does math–the intraparietal sulcus (IPS)–but only one part? Surely there’s more than one part of the brain involved in math.

About 5 seconds of Googling got me “Neural Basis of Mathematical Cognition,” which states that:

The IPS turns out to be part of the extensive network of brain areas that support human arithmetic (Figure 1). Like all networks it is distributed, and it is clear that numerical cognition engages perceptual, motor, spatial and mnemonic functions, but the hub areas are the parietal lobes …

(By contrast, I’ve spent over half an hour searching and failing to figure out how high octopuses can count.)

Moreover, I question the idea that the specific and general number senses are actually separate. Rather, I suspect there is only one sense, but it is essentially logarithmic. For example, hearing is logarithmic (or perhaps exponential,) which is why decibels are also logarithmic. Vision is also logarithmic:

The eye senses brightness approximately logarithmically over a moderate range (but more like a power law over a wider range), and stellar magnitude is measured on a logarithmic scale.[14] This magnitude scale was invented by the ancient Greek astronomer Hipparchus in about 150 B.C. He ranked the stars he could see in terms of their brightness, with 1 representing the brightest down to 6 representing the faintest, though now the scale has been extended beyond these limits; an increase in 5 magnitudes corresponds to a decrease in brightness by a factor of 100.[14] Modern researchers have attempted to incorporate such perceptual effects into mathematical models of vision.[15][16]

So many experiments have revealed logarithmic responses to stimuli that someone has formulated a mathematical “law” on the matter:

Fechner’s law states that the subjective sensation is proportional to the logarithm of the stimulus intensity. According to this law, human perceptions of sight and sound work as follows: Perceived loudness/brightness is proportional to logarithm of the actual intensity measured with an accurate nonhuman instrument.[3]

p = k ln(S / S₀)

The relationship between stimulus and perception is logarithmic. This logarithmic relationship means that if a stimulus varies as a geometric progression (i.e., multiplied by a fixed factor), the corresponding perception is altered in an arithmetic progression (i.e., in additive constant amounts). For example, if a stimulus is tripled in strength (i.e., 3 x 1), the corresponding perception may be two times as strong as its original value (i.e., 1 + 1). If the stimulus is again tripled in strength (i.e., 3 x 3 x 3), the corresponding perception will be three times as strong as its original value (i.e., 1 + 1 + 1). Hence, for multiplications in stimulus strength, the strength of perception only adds. The mathematical derivations of the torques on a simple beam balance produce a description that is strictly compatible with Weber’s law.[6][7]

In any logarithmic scale, small quantities–like 1, 2, and 3–are easy to distinguish, while medium quantities–like 101, 102, and 103–get lumped together as “approximately the same.”
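This compression is easy to verify numerically. A quick illustration (mine, not Everett's): under a logarithmic response, the perceived gap between two quantities depends on their ratio, not their difference, so 1 vs. 2 feels huge while 101 vs. 102 feels nearly identical.

```python
import math

# Under a logarithmic response, the perceived gap between two quantities
# depends on their *ratio*, not their arithmetic difference.
def log_gap(a, b):
    """Perceived gap between two quantities under a logarithmic response."""
    return abs(math.log(b) - math.log(a))

print(f"{log_gap(1, 2):.3f}")      # 0.693 -- easily distinguished
print(f"{log_gap(2, 3):.3f}")      # 0.405
print(f"{log_gap(101, 102):.3f}")  # 0.010 -- nearly identical
```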

Of course, this still doesn’t answer the question of how people develop the ability to count past 3, but this is getting long, so we’ll continue our discussion next week.

Cathedral Round-Up #28: They’re not coming for George Washington, that’s just a silly right-wing conspiracy–

Titus Kaphar’s Shadows of Liberty, 2016, at Yale University Art Gallery

Is that… George Washington? With rusty nails pounded into his face?

Holding up “cascading fragments of his slave records.”

Oh. I see.

Carry on, then.

I was going to write about Harvard forbidding its female students from forming female-only safe spaces (College will Debut Plans to Enforce Sanctions Next Semester) in an attempt to shut down all single-gender frats and Finals Clubs, but then Princeton upped the ante with Can Art Amend Princeton’s History of Slavery?

No. Of course not.

Princeton University [has] a new public-art project that confronts the school’s participation in the nation’s early sins. On Monday, the university unveiled Impressions of Liberty, by the African American artist Titus Kaphar. The sculpture is the conceptual core of a campus-wide initiative that begins this fall and aims to reconcile the university’s ties to slavery. The Princeton and Slavery Project’s website has released hundreds of articles and primary documents about slavery and racism at Princeton…

Attaching strips of canvas or other material to the faces of people he disapproves of is apparently one of Kaphar’s shticks.

I’m old enough to remember when George Washington was admired for freeing all of his slaves in an era when most people took slavery for granted. Today he is castigated for not having sprung from the womb with a fully modern set of moral opinions.

Impressions of Liberty, by Titus Kaphar

Impressions of Liberty is Kaphar’s portrait of Samuel Finley–fifth president and one of the original trustees of Princeton (1761-1766)–interwoven with photographs of black actors in historical dress etched in glass.

For generations, slave-owning Christians—including Princeton’s founders—used religious ideas to justify a horrific national practice, [Kaphar] noted; Finley is holding a bible in Impressions of Liberty.

Note the framing: yes, Christians used religion to justify owning slaves. So did Muslims, Jews, Hindus, Buddhists, pagans, and atheists. There’s nothing unique about Christians and slavery aside from the fact that Finley was Christian. No mention is made of pagan Africans who captured and sold each other into slavery, nor of Muslims who raided Africa and Europe in search of slaves. There were Jewish slave merchants and Confederates, as well, for slavery was a near-universal practice justified by people all over the world prior to its abolition by whites in the 1800s. The article mentions none of that; only Christians are singled out for criticism.

The article doesn’t say how much Princeton paid for the sculpture it commissioned to castigate the memory of one of its founders. The work currently stands outside MacLean House, but will soon be moved indoors, to Princeton’s permanent art collection. MacLean House–completed in 1756–is a national landmark that was home to Princeton’s first presidents, including Samuel Finley. It also housed George Washington during the Battle of Princeton.

According to the article:

On the one hand, according to records, Princeton was a bastion of liberty, educating numerous Revolutionary War leaders and in 1783 hosting the Continental Congress… At the same time, Sandweiss found that the institution’s first nine presidents all owned slaves at some point, as did the school’s early trustees. She also discovered that the school enrolled a significant number of anti-abolitionist, Southern students during its early years; an alumni delivered a pro-slavery address at the school’s 1850 commencement ceremony. …

Princeton’s racist history enabled it to provide social and political benefits for alumni—an advantage that students will continue to enjoy well into the future.

While I happen to think that universities have it much too good these days and deserve to be taken down a notch, I find this claim extremely dubious. Harvard and Yale are located in staunchly abolitionist New England and had very few ties to slavery, (Mr. Yale apparently knew a guy who had slaves, and Harvard Law School received some money from a guy who had slaves,) yet these schools are arguably even wealthier and more powerful than closer-to-the-South and more-tied-to-slavery Princeton. Stanford was founded after slavery was outlawed, and yet its students enjoy social and political benefits on par with Princeton’s.

We could argue that the entire area of the Confederacy reaped the economic benefits of slavery, yet today this region is much poorer than the Free States of the North. There isn’t just no correlation between slavery, wealth, and power–there’s actually a negative correlation. Slavery, if it has any effect at all, makes a region poorer and weaker.

Monumental Inversion: George Washington, (Titus Kaphar,) Princeton Art Museum

… Princeton University is spreading the mission across various pieces of art through a show this fall entitled “Making History Visible: Of American Myths And National Heroes.” At the exhibit’s entrance, viewers begin with Kaphar’s piece Monumental Inversion: George Washington—a sculpture of the leader astride his horse, made out of wood, blown glass, and steel. The sculpture depicts the former president’s dueling nature: He’s glorified within a great American equestrian monument but he’s also sitting astride a charred cavity, surrounded by glass on the ground. In juxtaposing Kaphar’s artwork and a George Washington plaster bust, “Making History Visible” forces visitors, hopefully, to see and feel the contradiction in colonial leaders who sought freedom from tyranny but did not extend that ideal to slaves.

I repeat: George Washington freed all of his slaves.

We might question the point of all this. Kaphar is free to make his art, of course. His paintings display quite excellent technical skill, I admit. But why do we, as a society, feel the need to commission and display attacks on our founders? Princeton’s students could just as happily go to class each day without looking at images of Finley’s slaves; unlike Washington, Finley isn’t famous and most students were probably blissfully unaware of his slaveholding until someone decided to stick a sculpture dedicated to it on the lawn.

How do Princeton’s black students feel after walking past a sculpture depicting slaves? Uplifted? Happy? Ready to go to class and concentrate on their lectures? I doubt it. Art may be “powerful” or “open dialogues,” but no one seems to feel better after viewing such pieces.

No, I don’t see how this selective dwelling on the past improves anything.

A world in which images of your founders and heroes are defaced, their corpses judged, and rusty nails driven into their portraits: it’s like a cruel dystopia, Lewis’s That Hideous Strength or 1984. According to Wikipedia:

During and after the October Revolution, widespread destruction of religious and secular imagery took place, as well as the destruction of imagery related to the Imperial family. The Revolution was accompanied by destruction of monuments of past tsars, as well as the destruction of imperial eagles at various locations throughout Russia. According to Christopher Wharton, “In front of a Moscow cathedral, crowds cheered as the enormous statue of Tsar Alexander III was bound with ropes and gradually beaten to the ground. After a considerable amount of time, the statue was decapitated and its remaining parts were broken into rubble”.[40]

The Soviet Union actively destroyed religious sites, including Russian Orthodox churches and Jewish cemeteries, in order to discourage religious practice and curb the activities of religious groups.

You know, they tell us, “No one is attacking George Washington; that’s just a crazy right-wing conspiracy theory,” and then they go and do it.

Incidentally, Georgetown, according to the article, “announced last year that it would grant admissions preference to descendants of slaves whose sale it profited from in the early 1800s.” How do you qualify for that? Do you have to prove that you’re descended from the specific slaves involved, or can you be descended from any American slaves? Because I had ancestors who were enslaved, too, and I’d like to get in on this racket.

In the end, the article answers its titular question:

When Impressions of Liberty is removed from Maclean House in December and enters Princeton’s permanent museum collection, its greatest achievement may lie in the realization that no apology or recompense can ever suffice. …

“No civil-rights project can ever fully redeem anything.”

Cultural Marxists are the Real Capitalists: A Critical Critique of Critical Criminology

Critical Criminology claims that:

  1. The legal system was created by and for the ruling class (cishetero white males) in order to keep the rich rich and the poor and oppressed poor and oppressed.
  2. To this end, crimes the poor commit (such as burglary) are heavily penalized, while crimes the rich commit (such as racism or insider trading) are not.
  3. Many of the “crimes” of the oppressed (like rape, assault, mugging, and mass rioting) shouldn’t be considered crimes at all, but are just desperate attempts at survival.
  4. The “real crimes” are things like racism, sexism, homophobia, etc., which create the oppressive capitalist society that creates common street crime.
  5. When racism, sexism, homophobia, etc. are outlawed, then we can create the perfect socialist state, which will have no crime.

Creationism is more factually solid than Critical Criminology, but Critical Criminology is taught in real universities alongside real theories about how the world works.

But let’s step back a moment. #1 is at least partially true–the rich do have a disproportionate influence on the legal system and the poor are often at its mercy. Corporations and wealthy individuals do use their money and influence to get legislation written and enforced in ways that benefit themselves.

But which crimes, exactly, are the rich interested in prosecuting? Do they care if a drug addict steals wallets down in the ghetto? They don’t live in the ghetto. They use their money to insulate themselves from violent crime by buying houses in nice, gated neighborhoods with private security forces.

It’s the poor who are the primary victims of crime, and it’s the poor who’d like murderers to be arrested. Only someone rich enough not to live with the threat of violent crime could possibly say something as stupendously idiotic and insensitive as “rape and assault aren’t real crimes.”

If critical criminologists are the wealthy, then wouldn’t they, logically, be trying to reshape the legal system to benefit themselves?

Meanwhile, they accuse the wealthy of racism, sexism, homophobia, etc., but these attitudes are actually associated with the poor. Rich whites absolutely pride themselves on being open-minded, tolerant, anti-racist, feminist, etc., and are horrified at all of the racist, sexist, Islamophobic bigotry embodied in low-class Trump voters.

So the crimes these wealthy critical theorists are trying to get outlawed are not things that the rich are doing, but things the rich want the poor to stop doing.

Here I could cite a dozen examples, from Hate Speech laws in Britain being more strongly enforced than rape laws to Hillary Clinton’s “Would bringing down the banks end racism?” speech to Piers Morgan complaining about Islamophobia.

Why are the capitalists so intent on smashing bigotry in all its forms?

Simple: Capitalism wants to make money. Capitalism doesn’t care about oppressing brown people, or women, or gays, or Muslims, or foreigners, or anyone. Capitalism just wants the best possible ratio of worker quality to worker cost. If Mexicans can do the same job as Americans for half the cost, then capitalists want to hire Mexicans and they want Americans to stop trying to pass laws limiting the number of Mexican immigrants who can come work for the capitalists. If Europe is facing a labor crisis because Europeans haven’t made enough new workers to fill the factories and finance the welfare state, then European capitalists must import new workers and they want European workers to stop complaining about the terrorist attacks. Capitalism just wants to hire “the best person for the job” or at least the cheapest person who’ll do an adequate job.

The only odd part is that capitalists are wrapping themselves in the Communist flag while imprisoning people for objecting to the importation of cheap, union-breaking labor. We could accuse them of lying–or gaslighting–except many of them seem to really believe it. Perhaps socialism provides the necessary tool for lying to themselves: “Oh, I am not actually screwing over the poor by advocating on behalf of my own profits.” Most people don’t like to think of themselves as nasty, evil, and self-serving, but they will often project those qualities onto others. (“I’m a nice person; it’s everyone else who’s backstabbing cheaters!”) By casting the concerns of their enemies (middle- and working-class white males who don’t want to lose economic security) onto the cartoonish figure of the evil capitalist, they simultaneously dismiss those concerns and recast themselves as heroic defenders of the “oppressed.”

Wikipedia has an interesting theory on self-deception:

Some evolutionary biologists, such as Robert Trivers, have suggested[6][page needed] that deception plays a significant part in human behavior, and in animal behavior, more generally speaking. One deceives oneself to trust something that is not true as to better convince others of that truth. When a person convinces himself of this untrue thing, they better mask the signs of deception.[7]

This notion is based on the following logic: deception is a fundamental aspect of communication in nature, both between and within species. It has evolved so that one can have an advantage over another. From alarm calls to mimicry, animals use deception to further their survival. Those who are better able to perceive deception are more likely to survive. As a result, self-deception evolved to better mask deception from those who perceive it well, as Trivers puts it: “Hiding the truth from yourself to hide it more deeply from others.” In humans, awareness of the fact that one is acting deceptively often leads to tell-tale signs of deception, such as nostrils flaring, clammy skin, quality and tone of voice, eye movement, or excessive blinking. Therefore, if self-deception enables someone to believe her or his own distortions, they will not present such signs of deception and will therefore appear to be telling the truth.

Cathedral Round-Up #27: Critical Criminology

2005 Riots, Paris, France

Today we’ll be looking at the chapter on “critical criminology” from Criminology: The Core (Instructor’s Third Edition) by Larry Siegel. According to Cengage:

“It’s no mystery why Larry Siegel remains THE best-selling author in Criminal Justice. … Grounded in Siegel’s signature style — cutting-edge theory plus meticulous research — this book covers all sides of an issue without taking a political or theoretical position and provides a broad view of the interdisciplinary nature of the field.”

The book covers 5 different schools of Criminology Theories: Neoclassical Choice theory (e.g., Cesare Beccaria); Biosocial/Psychological Trait theory (Freud, Piaget, Edward O. Wilson); Social Structure/Process theory (Clifford R. Shaw and Edwin Sutherland); Marxist Critical Theory (Marx); and Life Course/Latent Trait Development theory (Sheldon Glueck and James Q. Wilson).

I don’t have time to read the whole book (not if you want this post to go up this month), so we are going to focus on Chapter 8: Critical Criminology.

Major Premise (from the book’s inside cover): “Inequality between social classes (groups) and the social conditions that empower the wealthy and disenfranchise the less fortunate are the root causes of crime. It is the ongoing struggle for power, control, and material well-being that creates crime.” Critical Criminology was founded in 1968 and is unapologetically Marxist.

Each chapter begins with an example crime. The choice for Critical Criminology is fascinating: the November 2005 Muslim riots in southern and western France and the suburbs of Paris.

What sparked the rioting? The immediate cause was the accidental deaths of two Muslim teenagers who were electrocuted as they hid in a power substation to escape a police identity check. Hearing of the deaths, gangs of youths armed with brick and stick roamed the streets of housing estates torching cars and destroying property. …

A majority of France’s Muslim population, estimated at 5 million, live in these poverty-stricken areas. Many residents are angry at the living conditions and believe they are the target of racial discrimination, police brutality, and governmental indifference.

Is this the best the Critical Criminologists have? (Incidentally, according to Wikipedia, the police were responding to a report of a break-in, the rioting lasted for 3 weeks, and some 8,973 cars were burned. 3 people died.) This sounds more like an argument against Muslim immigration than an argument that racism causes crime, because if the French were really so racist, Muslims wouldn’t move there.

But let’s let the Critical Theorists explain themselves:

According to Critical Theorists, crime is a political concept designed to protect the power and position of the upper classes at the expense of the poor. Some of these theorists… would include in a list of “real” crime such acts as violations of human rights due to racism, sexism, and imperialism and other violations of human dignity and physical needs and necessities. Part of the critical agenda, argues Criminologist Robert Bohm, is to make the public aware that these behaviors “are crimes just as much as burglary and robbery.”…

Graph from Wikipedia
See also my post, “No, Hunter Gatherers were not Peaceful Paragons of Gender Egalitarianism.”

“Capitalism,” claims Bohm, “as a mode of production, has always produced a relatively high level of crime and violence.”

Note: Bohm is either a moron or a liar. Pre-industrial economies had far more violent crime than modern, capitalist economies.

Crime rates are much lower in countries with advanced, capitalist economies than in countries with less-developed economies.

Countries with poorly defined or enforced property rights or where property is held in common are not bastions of civility.

In fact, the rise of capitalism in Europe over the past seven hundred years was accompanied by a dramatic decrease in crime.

My biggest complaint about this chapter is the total lack of data cited to support any of the claims. This is not necessarily the author’s fault, as the Critical Criminology field is overtly hostile to actual research:

Critical criminologists rarely use standard social science methodologies to test their views because many believe the traditional approach of measuring research subjects is antihuman and insensitive. Critical thinkers believe that research conducted by mainstream liberal and positivist criminologists is often designed to unmask weak, powerless members of society so they can be better dealt with by the legal system. They are particularly offended by purely empirical studies, such as those designed to show that minority group members have lower IQs than whites or that the inner city is the site of the most serious crime whereas middle-class areas are relatively crime free. Critical scholars are more likely to examine historical trends and patterns…

Back to definitions:

Critical Criminologists reject the notion that law is designed to maintain a tranquil, fair society and that criminals are malevolent people who wish to trample the rights of others. Critical theorists consider acts of racism, sexism, imperialism, unsafe working conditions, inadequate child care, substandard housing, pollution of the environment, and war making as a tool of foreign policy to be ‘true crimes.’ The crimes of the helpless–burglary, robbery, and assault–are more expressions of rage over unjust economic conditions than actual crimes. … Marxist thought serves as the basis for critical theory.

I have now read In Search of Respect: Selling Crack in El Barrio, Gang Leader for a Day, Leeson’s work on pirates, The Pirates’ Own Book, God of the Rodeo, Outlaws on Horseback, No Angel: My Harrowing Undercover Journey to the Inner Circle of the Hells Angels, Donnie Brasco’s The Way of the Wiseguy, and Original Gangster: The Real Life Story of One of America’s Most Notorious Drug Lords, by Frank Lucas. I also have a close personal friend who was homeless for a couple of decades.

I’m not an expert, but I feel like I know something on the subject.

Very few people in modern, capitalist countries are committing crime out of desperation. My friend survived for years by going to soup kitchens and never stole a wallet or held up a convenience store. We have welfare and subsidized housing. There are exceptions, but most violent criminals are not Jean Valjean; it’s not economic desperation that drives people to put meat cleavers into their boss’s skulls or rape children.

But back to the book. On the origins of Critical Criminology:

Mainstream, positivist criminology was criticized as being overtly conservative, pro-government, and antihuman. What emerged was a social conflict theory whose proponents scoffed when their fellow scholars used statistical analyses of computerized data to describe criminal and delinquent behavior. Several influential scholars embraced the idea that the social conflict produced by the unequal distribution of power and wealth was at the root cause of crime. …

Richard Quinney also proclaimed that in contemporary society criminal law represents the interests of those who hold power in society. Where there is conflict between social groups–the wealthy and the poor–those who hold power will create laws that benefit themselves and hold rivals in check. … Crime is a function of power relations and an inevitable result of social conflict.

This is not entirely wrong (if it were entirely wrong, far fewer people would believe it). The wealthy do in fact have a disproportionate say on which laws are passed and how they are enforced. They can afford better lawyers and can often buy their way out of situations the poor are just stuck with.

But again, this is not what drives a man to put a meat cleaver in another man’s skull, nor is it why society nigh-universally condemns unprovoked skull-cleaving. It is not only in the interests of the rich to prevent violent crime–they, after all, use their money to insulate themselves from the worst of it by buying into low-crime, gated communities with private security forces. If anything, the poor, as the disproportionate victims of crime, have the most to gain from strict law enforcement against violent criminals.

My formerly homeless friend was once beaten into a coma and nearly died on his way home to the park bench where he spent his nights.

If crime were all about fighting back against oppression, criminals would only target the rich.

There is one branch of Critical Criminology, Left Realism, which acknowledges that crime is actually really unpleasant for its victims. Since it is relatively sane, we need not worry about it.

With thanks to the Heritage Foundation

Back to the book:

Critical criminologists are also deeply concerned about the current state of the American political system …

While spending is being cut on social programs, it is being raised on military expansion. The rapid buildup of the prison system and passage of draconian criminal laws that threaten civil rights and liberties–the death penalty, three strikes laws, and the Patriot Act–are other elements of the conservative agenda. Critical Criminologists believe that they are responsible for informing the public about the dangers of these developments.

Hold on. I’m going to need a couple more graphs:

Source: Heritage Foundation

The “Three Strikes Laws” were passed in 1994 in reaction to the crack-driven crime epidemic throughout the hearts of America’s cities and appear to have done a pretty good job of preventing black people from being murdered.

Back to the text:

Critical criminologists have turned their attention to the threat competitive capitalism presents to the working class. They believe that in addition to perpetuating male supremacy and racialism, modern global capitalism helps destroy the lives of workers in less-developed countries. For example, capitalists hailed China’s entry into the World Trade Organization in 2001 as a significant economic event. However, critical thinkers point out that the economic boom has significant costs: The average manufacturing wage in China is 20 to 25 cents per hour; in a single year (2001) more than 47,000 workers were killed at work, and 35.2 million Chinese workers were permanently or temporarily disabled.

According to the AFL-CIO, 4,836 workers were killed on the job in the US in 2015. Since there are 320 million Americans, this works out to about a 0.0015% chance of dying on the job.

Since China had about 1.272 billion people in 2001, that works out to about a 0.0037% chance of dying on the Chinese job.

Obviously it’d be great if no one died on the job, but these are not horrific odds. By contrast, China’s Great Leap Forward, when it implemented a communist system, was a horrific disaster that killed between 18 and 55 MILLION people.

(Also, 35,398 Americans died in car/motorcycle accidents in 2014, so you are much more likely to die driving your car to the grocery store than employed in China.)
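The two rates above are easy to double-check; here is a minimal sketch of the arithmetic, using only the figures quoted in the text. (Note that both denominators are whole populations rather than workforces, so these are per-capita rates, not per-worker rates; but since both are computed the same way, they remain comparable.)

```python
# Sanity check of the on-the-job death-rate arithmetic.
# Figures quoted in the text: AFL-CIO for the US (2015),
# the textbook's numbers for China (2001).

us_deaths, us_pop = 4_836, 320_000_000         # US, 2015
cn_deaths, cn_pop = 47_000, 1_272_000_000      # China, 2001

us_rate = us_deaths / us_pop * 100   # percent per year, per capita
cn_rate = cn_deaths / cn_pop * 100

print(f"US:    {us_rate:.4f}%")   # ~0.0015%
print(f"China: {cn_rate:.4f}%")   # ~0.0037%
```

So the Chinese rate works out to roughly 2.4 times the American one, matching the comparison in the text.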

Meanwhile:

According to the World Bank, more than 500 million people were lifted out of extreme poverty as China’s poverty rate fell from 88 percent in 1981 to 6.5 percent in 2012, as measured by the percentage of people living on the equivalent of US$1.90 or less per day in 2011 purchasing price parity terms.[4]

From Human Progress

Now, I admit that capitalism does not always produce good results. Sometimes priceless natural resources get destroyed. Sometimes people’s jobs get outsourced. Often employers decide they’re okay with a level of job-induced harm to their employees’ health that their employees would not be okay with. But on the whole, capitalism produces good results far more often than communism.

But back to the book:

In our advanced technological society, those with economic and political power control the definition of crime and the manner in which the criminal justice system enforces the law. Consequently, the only crimes available to the poor are the severely sanctioned “street crimes”: rape, murder, theft, and mugging.

EvX: Available? How are crimes “available” to anyone? Are crimes like Pokemon, where you have to go to the Pokemon center to get your first starter crime, but if you sleep in, the rich take all of the good crimes like insider trading and you get stuck with some random Pikachu from the back, and it turns out to be a home invasion?

And if the rich are running the whole show, why don’t they make it so none of the laws apply to them? Why don’t they rape and murder poor people at the same rate as the poor rape and murder each other?

Back to the book:

Because private ownership of property is the true measure of success in American society [Source needed] (as opposed to being, say, a worthy person), the state becomes an ally of the wealthy in protecting their property interests. [How?] As a result, theft-related crimes are often punished more severely than are acts of violence, [Source needed] because although the former may be interclass, the latter are typically intraclass.” [Source needed]…

Empirical research confirms that economic downturns are indeed linked to both crime rate increases and government activities such as passing anticrime legislation.

I’ve heard this one before. Scroll back up to that graph of homicide rates over time and note the massive decrease in crime during the Great Depression. By the time the Depression crime drop petered out, crime was at one of its lowest points in the entire 20th century. Even in 2013 (the year the graph ends) crime was higher than it was after the Depression.

To be fair this drop is better explained by the end of Prohibition than by the Depression. But the Depression saw a massive decrease in crime: this theory is bogus.

Let’s finish up with Critical Feminist Criminology:

Critical feminism views gender inequality as stemming from the unequal power of men and women in a capitalist society, which leads to the exploitation of women by fathers and husbands. …

Patriarchy, or male supremacy, has been and continues to be supported by capitalists. This system sustains female oppression at home and in the workplace. …

Critical feminists link criminal behavior patterns to the gender conflict created by the economic and social struggles common in postindustrial societies. … Capitalists control the labor of workers, and men control women both economically and biologically. This ‘double marginality’ explains why females in a capitalist society commit fewer crimes than males.

So, when Capitalism oppresses men, it makes them commit “crime,” but when it oppresses women, it makes them not commit crime. Because capitalism wants to exploit workers by locking them in prisons where they can’t really do much work, but it wants to exploit women by making them do the dishes, because a capitalist system could never see the value of getting people to work for pay. Got it?

The text continues:

Because they are isolated in the family, they have fewer opportunities to engage in elite deviance… Women are also denied access to male-dominated street crimes.

So women are like… Some kid who was locked in his room and so couldn’t even get a cruddy crime-emon?

Seriously, though, do these guys not know that women are allowed to leave the house? Most of them have cars and do things like “drive to work” and “drive to the supermarket.” Yes, it’s true that existing, male-dominated street gangs and the Mafia generally don’t take women, but if women wanted to go out and punch people and steal their wallets, they would. If they wanted to make their own gangs, they would. If someone is actually a violent criminal, their husband saying, “Don’t go outside, make me a sandwich instead,” would not stop them from doing violence. If there’s one trait criminals tend to have in common, it’s that they don’t refrain from crime just because society disapproves of it.

Over in reality, women don’t commit much crime simply because… they aren’t that interested in committing crime.

Critical Criminology is a deep subject and I have only skimmed its surface. I haven’t discussed Mumia Abu-Jamal (the chapter’s other Profile in Crime); restorative justice; the failure of restorative justice in South Africa to prevent horrific, race-motivated farm murders; instrumental vs. structural theorists; etc.

In closing, I’d just like to repeat, in the book’s defense, that the author is laying out the field for us, not advocating on its behalf. The book also has sections critiquing Critical Criminology theory and chapters devoted to sociobiology and developmental theories.

And in potentially related news, between 85 and 93% of French Muslims voted for Hollande, the Socialist candidate, in 2012.