Looks like President Faust is stepping down and Lawrence Bacow is stepping up. Bacow has an S.B. in economics from MIT, a J.D. from Harvard Law, and an M.P.P. and Ph.D. from Harvard’s Kennedy School of Government.
I don’t know much about Bacow, but I’m sure I’ll learn once he takes over writing Faust’s column in Harvard Magazine. Overall he looks like a “safe” (i.e., dull) choice. His work at Tufts involved expanding financial aid (Harvard already has extremely good financial aid, so there’s not much to do there) and diversity initiatives.
Harvard has a couple of other newcomers. Economist Bridget Terry Long will be the new dean of the Harvard Graduate School of Education. Long’s CV is long (no pun intended) and filled with the sorts of awards and committee memberships appropriate to an Ivy League striver, like the National Bureau of Economic Research.
Long’s research focuses on getting more poor and dumb (excuse me, unprepared) students into college. I don’t have time to review her entire corpus, but I read her most recent paper, “Does Remediation Work for All Students? How the Effects of Postsecondary Remedial and Developmental Courses Vary by Level of Academic Preparation.” (Co-author: Angela Boatman.) The paper is fine, if rather oddly written (by my standards).
[Results: placing borderline low-performing students into first-level remedial classes in the University of Tennessee system may be worse than just letting them try their best in regular courses; but really dumb kids actually do benefit from remedial courses. Obvious conclusion that I didn’t see directly stated: the cut-off score for inclusion in remedial classes in the U of Tenn system is too high.]
Long’s research looks fine; I don’t think it’s bad to look at whether a remedial program is actually helping students or whether a financial aid program is working (aside from my conviction that students who can’t do college-level work don’t belong in college). It’s not exactly groundbreaking work, though. Harvard has plenty of folks like Reich and Pinker who are paving new intellectual (and technical) ground; Long’s research seems underwhelming by comparison.
Tomiko Brown-Nagin has been tapped to lead the Radcliffe Institute. From Harvard Mag’s article about her:
Brown-Nagin, who holds a J.D. from Yale Law School and a Ph.D. in history from Duke, is best known for her contributions to the history of the civil-rights movement. Her 2011 book Courage to Dissent: Atlanta and the Long History of the Civil Rights Movement won the Bancroft Prize for U.S. history, and is widely regarded as a definitive text on the legal and social history of civil rights in the United States. Her current book project explores the life of Constance Baker Motley, an African-American lawyer, judge, and politician who was an attorney in Brown v. Board of Education. …
Brown-Nagin is a sophisticated, nuanced thinker on the significance of diversity and representation in democratic institutions. In a recent Columbia Law Review article titled “Identity Matters: The Case of Judge Constance Baker Motley,” she wrote:
“Motley did endorse greater representation of women and racial minorities in the judiciary. Her argument for diversity on the bench did not turn on the view that women and people of color have a different voice or would reach different or better decisions than white men. Motley advocated judicial diversity because, she believed, inclusion reinforced democracy. By affirming openness and fairness, the mere presence of women and racial-minority judges built confidence in government. …”
Radcliffe is a women’s college that Harvard officially absorbed in 1999; the Radcliffe Institute came with it. According to Wikipedia:
The Radcliffe Institute for Advanced Study at Harvard shares transformative ideas across the arts, humanities, sciences, and social sciences. The Institute comprises three programs:
The Radcliffe Institute Fellowship Program annually supports the work of 50 artists and scholars, with an acceptance rate of around 5 percent each year.
The Academic Ventures program is for collaborative research projects and hosts lectures and conferences.
Yale Law is the most prestigious law school in the entire US (Harvard Law is probably #2). YL’s professors, therefore, are some of the US’s top legal scholars; its students are likely to go on to be important lawyers, judges, and opinion-makers.
If you’re wondering about the coat of arms, it was designed in 1956 as a pun on the three original founders’ names: Seth Staples (B.A., Yale, 1797), Judge David Daggett aka Doget (B.A. 1783), and Samuel Hitchcock (B.A. 1809), whose name isn’t really a pun, but he’s Welsh, and when Welsh people cross the Atlantic, their dragon transforms into a crocodile. (The Welsh dragon has also been transformed into a crocodile on the Jamaican coat of arms.)
(For the sake of Yale’s staple-bearing coat of arms, let us hope that none of the founders were immoral in any way, as Harvard’s were.)
Gideon Yaffe presents a theory of criminal responsibility according to which child criminals deserve leniency not because of their psychological, behavioural, or neural immaturity but because they are denied the vote. He argues that full shares of criminal punishment are deserved only by those who have a full share of say over the law.
He proposes that children are owed lesser punishments because they are denied the right to vote. This conclusion is reached through accounts of the nature of criminal culpability, desert for wrongdoing, strength of legal reasons, and what it is to have a say over the law. The heart of this discussion is the theory of criminal culpability.
To be criminally culpable, Yaffe argues, is for one’s criminal act to manifest a failure to grant sufficient weight to the legal reasons to refrain. The stronger the legal reasons, then, the greater the criminal culpability. Those who lack a say over the law, the book argues, have weaker legal reasons to refrain from crime than those who have a say. They are therefore reduced in criminal culpability and deserve lesser punishment for their crimes. Children are owed leniency, then, because of the political meaning of age rather than because of its psychological meaning. This position has implications for criminal justice policy, with respect to, among other things, the interrogation of children suspected of crimes and the enfranchisement of adult felons. …
He holds an A.B. in philosophy from Harvard and a Ph.D. in philosophy from Stanford.
I don’t think you need a degree in philosophy or law to realize that this is absolutely insane.
Even in countries where no one can vote, we still expect the government to try to do a good job of rounding up criminals so their citizens can live in peace, free from the fear of random violence. The notion that “murder is bad” wasn’t established by popular vote in the first place. Call it instinct, human nature, Natural Law, or the 6th Commandment–whatever it is, we all want murderers to be punished.
The point of punishing crime is 1. To deter criminals from committing crime; 2. To get criminals off the street; 3. To provide a sense of justice to those who have been harmed. These needs do not change depending on whether or not the person who committed the crime can vote. Why should I be able to hop the border into Canada, commit a crime there, and then claim the Canadian courts should be lenient because I am not allowed to vote in Canada? Does the victim of a disenfranchised felon deserve less justice than the victim of someone who still had the right to vote?
Since this makes no sense at all from any sort of public safety or discouraging crime perspective, permit me a cynical theory: the author would like to lower the voting age, let immigrants (legal or not) vote more easily, and end disenfranchisement for felons.
The age of human rights has been kindest to the rich. Even as state violations of political rights garnered unprecedented attention due to human rights campaigns, a commitment to material equality disappeared. In its place, market fundamentalism has emerged as the dominant force in national and global economies. In this provocative book, Samuel Moyn analyzes how and why we chose to make human rights our highest ideals while simultaneously neglecting the demands of a broader social and economic justice. …
In the wake of two world wars and the collapse of empires, new states tried to take welfare beyond its original European and American homelands and went so far as to challenge inequality on a global scale. But their plans were foiled as a neoliberal faith in markets triumphed instead.
In a tightly-focused tour of the history of distributive ideals, Moyn invites a new and more layered understanding of the nature of human rights in our global present. From their origins in the Jacobin welfare state
Which chopped people’s heads off.
to our current neoliberal moment, Moyn tracks the subtle shifts in how human rights movements understood what, exactly, their high principles entailed.
Like not chopping people’s heads off?
Earlier visionaries imagined those rights as a call for distributive justice—a society which guaranteed a sufficient minimum of the good things in life. And they generally strove, even more boldly, to create a rough equality of circumstances, so that the rich would not tower over the rest.
By chopping their heads off.
Over time, however, these egalitarian ideas gave way. When transnational human rights became famous a few decades ago, they generally focused on civil liberties—or, at most, sufficient provision.
Maybe because executing the kulaks resulted in mass starvation, which seems kind of counter-productive in the sense of minimum sufficient provision for human life.
In our current age of human rights, Moyn comments, the pertinence of fairness beyond some bare minimum has largely been abandoned.
By the way:
Huh. Why would anyone think that economic freedom and human well-being go hand-in-hand?
At the risk of getting Pinkerian, the age of “market fundamentalism” has involved massive improvements in human well-being, while every attempt to make society economically equal has caused mass starvation and horrible abuses against humans.
Moyn’s argument that we have abandoned “social justice” is absurd on its face; in the 1950s, the American south was still racially segregated; in the 1980s South Africa was still racially segregated. Today both are integrated and have had black presidents. In 1950, homosexuality was widely illegal; today gay marriage is legal in most Western nations. Even Saudi Arabia has decided to let women drive.
If we want to know why, absurdly, students believe that things have never been worse for racial minorities in America, maybe the answer is that the rot starts at the top.
The first ruling dramatically stopped the unconstitutional Muslim ban in January 2017, when students from the Worker and Immigrant Rights Advocacy Clinic (WIRAC) mobilized overnight to ground planes and free travelers who were being unjustly detained. The students’ work, along with co-counsel, secured the first nationwide injunction against the ban, and became the template for an army of lawyers around the country who gathered at airports to provide relief as the chaotic aftermath of the executive order unfolded.
Next came a major ruling in California in November 2017 in which a federal judge granted a permanent injunction that prohibited the Trump Administration from denying funding to sanctuary cities—a major victory for students in the San Francisco Affirmative Litigation Project (SFALP) …
And on February 13, 2018, WIRAC secured yet another nationwide injunction—this time halting the abrupt termination of the Deferred Action for Childhood Arrivals program (DACA). … The preliminary injunction affirms protections for hundreds of thousands of Dreamers just weeks before the program was set to expire.
The Rule of Law Clinic launched at Yale Law School in the Spring of 2017 and in less than one year has been involved in some of the biggest cases in the country, including working on the travel ban, the transgender military ban, and filing amicus briefs on behalf of the top national security officials in the country, among many other cases. The core goal of the clinic is to maintain U.S. rule of law and human rights commitments in four areas: national security, antidiscrimination, climate change, and democracy promotion.
Meanwhile, Amy Chua appears to be the only sane, honest person at Yale Law:
In her new book, Political Tribes: Group Instinct and the Fate of Nations (Penguin, 2018), Amy Chua diagnoses the rising tribalism in America and abroad and prescribes solutions for creating unity amidst group differences.
Chua, who is the John M. Duff, Jr. Professor of Law, begins Political Tribes with a simple observation: “Humans are tribal.” But tribalism, Chua explains, encompasses not only an innate desire for belonging but also a vehement and sometimes violent “instinct to exclude.” Some groups organize for noble purposes, others because of a common enemy. In Chua’s assessment, the United States, in both foreign and domestic policies, has failed to fully understand the importance of these powerful bonds of group identity.
Unlike the students using their one-in-a-million chance at a Yale Law degree to help members of a different tribe for short-term gain, Amy Chua at least understands politics. I might not enjoy Chua’s company if I met her, but I respect her honesty and clear-sightedness.
Why Children Follow Rules focuses upon legal socialization, outlining what is known about the process across three related, but distinct, contexts: the family, the school, and the juvenile justice system. Throughout, Tom Tyler and Rick Trinkner emphasize the degree to which individuals develop their orientations toward law and legal authority based upon values connected to responsibility and obligation as opposed to fear of punishment. They argue that authorities can act in ways that internalize legal values and promote supportive attitudes. In particular, consensual legal authority is linked to three issues: how authorities make decisions, how they treat people, and whether they recognize the boundaries of their authority. When individuals experience authority that is fair, respectful, and aware of the limits of power, they are more likely to consent and follow directives.
Despite clear evidence showing the benefits of consensual authority, strong pressures and popular support for the exercise of authority based on dominance and force persist in America’s families, schools, and within the juvenile justice system. As the currently low levels of public trust and confidence in the police, the courts, and the law undermine the effectiveness of our legal system, Tom Tyler and Rick Trinkner point to alternative ways to foster the popular legitimacy of the law in an era of mistrust.
Speaking as a parent… I understand where Tyler is coming from. If I act in a way that doesn’t inspire my children to see me as a fair, god-like arbitrator of justice, then they are more likely to see me as an unjust tyrant who should be disobeyed and overthrown.
On the other hand, sometimes things are against the rules for reasons kids don’t understand. One of my kids, when he was little, thought turning the dishwasher off was the funniest thing and would laugh all the way through timeout. Easy solution: I didn’t turn it on when he was in the room and he forgot. Tougher problem: one of the kids thought climbing on the stove to get to the microwave was a good idea. Time outs didn’t work. Explaining “the stove is hot sometimes” didn’t work. Only force solved this problem.
Some people will accept your authority. Some people can reason their way to “We should cooperate and respect the social contract so we can live in peace.” And some people DON’T CARE no matter what.
So I agree that police, courts, etc., should act justly and not abuse their powers, and I can pull up plenty of examples of cases where they did. But I am afraid this is not a complete framework for dealing with criminals and legal socialization.
Welcome to our final post of “Times the Experts were Wrong,” written in preparation for our review of The Death of Expertise: The Campaign Against Established Knowledge and Why it Matters. Professor Nichols, if you ever happen to read this, I hope it gives you some insight into where we, the common people, are coming from. If you don’t happen to read it, it still gives me a baseline before reading your book. (Please see part 1 for a discussion of relevant definitions.)
Part 3: Wars (WWI, Iraq, Vietnam, etc.)
How many “experts” have lied to convince us to go to war? We were told we had to attack Iraq because they had weapons of mass destruction, but the promised weapons never materialized. Mother Jones (that source of all things pro-Trump) has a timeline:
November 1999: Chalabi-connected Iraqi defector “Curveball”—a convicted sex offender and low-level engineer who became the sole source for much of the case that Saddam had WMD, particularly mobile weapons labs—enters Munich seeking a German visa. German intel officers describe his information as highly suspect. US agents never debrief Curveball or perform background check. Nonetheless, Defense Intelligence Agency (DIA) and CIA will pass raw intel on to senior policymakers. …
11/6/00: Congress doubles funding for Iraqi opposition groups to more than $25 million; $18 million is earmarked for Chalabi’s Iraqi National Congress, which then pays defectors for anti-Iraq tales. …
Jan 2002: The FBI, which favors standard law enforcement interrogation practices, loses debate with CIA Director George Tenet, and Libi is transferred to CIA custody. Libi is then rendered to Egypt. “They duct-taped his mouth, cinched him up and sent him to Cairo,” an FBI agent told reporters. … Under torture, Libi invents tale of Al Qaeda operatives receiving chemical weapons training from Iraq. “This is the problem with using the waterboard. They get so desperate that they begin telling you what they think you want to hear,” a CIA source later tells ABC. …
Feb 2002: DIA intelligence summary notes that Libi’s “confession” lacks details and suggests that he is most likely telling interrogators what he thinks will “retain their interest.” …
9/7/02: Bush claims a new UN International Atomic Energy Agency (IAEA) report states Iraq is six months from developing a nuclear weapon. There is no such report. …
9/8/02: Page 1 Times story by Judith Miller and Michael Gordon cites anonymous administration officials saying Saddam has repeatedly tried to acquire aluminum tubes “specially designed” to enrich uranium. …
Tubes “are only really suited for nuclear weapons programs…we don’t want the smoking gun to be a mushroom cloud.”—Rice on CNN …
“We do know, with absolute certainty, that he is using his procurement system to acquire the equipment he needs in order to enrich uranium to build a nuclear weapon.”—Cheney on Meet the Press
Oct 2002: National Intelligence Estimate produced. It warns that Iraq “is reconstituting its nuclear program” and “has now established large-scale, redundant and concealed BW agent production capabilities”—an assessment based largely on Curveball’s statements. But NIE also notes that the State Department has assigned “low confidence” to the notion of “whether in desperation Saddam would share chemical or biological weapons with Al Qaeda.” Cites State Department experts who concluded that “the tubes are not intended for use in Iraq’s nuclear weapons program.” Also says “claims of Iraqi pursuit of natural uranium in Africa” are “highly dubious.” Only six senators bother to read all 92 pages. …
10/4/02: Asked by Sen. Graham to make gist of NIE public, Tenet produces 25-page document titled “Iraq’s Weapons of Mass Destruction Programs.” It says Saddam has them and omits dissenting views contained in the classified NIE. …
2/5/03: In UN speech, Powell says, “Every statement I make today is backed up by sources, solid sources. These are not assertions. What we’re giving you are facts and conclusions based on solid intelligence.” Cites Libi’s claims and Curveball’s “eyewitness” accounts of mobile weapons labs. (German officer who supervised Curveball’s handler will later recall thinking, “Mein Gott!”) Powell also claims that Saddam’s son Qusay has ordered WMD removed from palace complexes; that key WMD files are being driven around Iraq by intelligence agents; that bioweapons warheads have been hidden in palm groves; that a water truck at an Iraqi military installation is a “decontamination vehicle” for chemical weapons; that Iraq has drones it can use for bioweapons attacks; and that WMD experts have been corralled into one of Saddam’s guest houses. All but the last of those claims had been flagged by the State Department’s own intelligence unit as “WEAK.”
I’m not going to quote the whole article, so if you’re fuzzy on the details, go read the whole darn thing.
If you had access to the actual documents from the CIA, DIA, British intelligence, interrogators, etc., you could have figured out that the “experts” were not unanimously behind the idea that Iraq was developing WMDs, but we mere plebes were dependent on what the government, Fox, and CNN told us the “experts” believed.
For the record, I was against the Iraq War from the beginning. I’m not sure what Nichols’s original position was, but in Just War, Not Prevention (2003) Nichols argued:
More to the point, Iraq itself long ago provided ample justifications for the United States and its allies to go to war that have nothing to do with prevention and everything to do with justice. To say that Saddam’s grasping for weapons of mass destruction is the final straw, and that it is utterly intolerable to allow Saddam or anyone like to gain a nuclear weapon, is true but does not then invalidate every other reason for war by subsuming them under some sort of putative ban on prevention.
The record provides ample evidence of the justice of a war against Saddam Hussein’s regime. Iraq has shown itself to be a serial aggressor… a supreme enemy of human rights that has already used weapons of mass destruction against civilians, a consistent violator of both UN resolutions and the terms of the 1991 cease-fire treaty … a terrorist entity that has attempted to reach beyond its own borders to support and engage in illegal activities that have included the attempted assassination of a former U.S. president; and most important, a state that has relentlessly sought nuclear arms against all international demands that it cease such efforts.
Any one of these would be sufficient cause to remove Saddam and his regime … but taken together they are a brief for what can only be considered a just war. ..
Those concerned that the United States is about to revise the international status quo might consider that Western inaction will allow the status quo to be revised in any case, only under the gun of a dictator commanding an arsenal of the most deadly materials on earth. These are the two alternatives, and sadly, there is no third choice.
Professor Nichols, I would like to pause here.
First: you think Trump is bad, yet you supported the president under whom POWs were literally tortured, and you call yourself a military ethicist?
Second: you, an expert, bought into this “WMD” story (invented primarily by “Curveball,” an unreliable source), while I, a mere plebe, knew it was a load of garbage.
Third: while I agree Saddam Hussein killed a hell of a lot of people–according to Wikipedia, Human Rights Watch estimates a quarter of a million Iraqis were killed or “disappeared” in the last 25 years of Ba’th party rule–the nine years of the Iraq war killed 150,000 to 460,000 people (depending on which survey you trust), and based on estimates from the Iraq Body Count, a further 100,000 have died since then. Meanwhile, instability in Iraq allowed the horrifically violent ISIS to sprout into existence. I Am Syria (I don’t know if they are reliable) estimates that over half a million Syrians have died so far because of the ISIS-fueled civil war rampaging there.
In other words, we unleashed a force that is twice as bad as Saddam in less than half the time–and paid a lovely 2.4 TRILLION dollars to accomplish this humanitarian feat! For that much money you could have just evacuated all of the Kurds and built them their own private islands to live on. You could have handed out $90,000 to every man, woman, and child in Iraq in exchange for “being friends with the US” and still had $150 BILLION left over to invest in things like “cancer treatments for children” and “high-speed rail infrastructure.”
Seriously, you could have spent the entire 2.4 trillion on hookers and blow and we would have still come out ahead.
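For what it’s worth, the back-of-envelope arithmetic above checks out. Here’s a quick sketch using the post’s own rough figures; the 25-million population number is my assumption, chosen because it makes the $90,000-per-person math come out as stated:

```python
# Sanity check of the Iraq War cost arithmetic above.
# All figures are the post's own rough estimates, not precise data.
total_cost = 2.4e12        # estimated total cost of the war, in dollars
population = 25_000_000    # rough Iraqi population circa 2003 (assumption)
per_person = 90_000        # hypothetical payout per person

payout_total = per_person * population
left_over = total_cost - payout_total

print(f"Total payout: ${payout_total:,.0f}")  # Total payout: $2,250,000,000,000
print(f"Left over:    ${left_over:,.0f}")     # Left over:    $150,000,000,000

# "Twice as bad in less than half the time": rough death rates per year.
saddam_rate = 250_000 / 25      # HRW estimate over 25 years -> 10,000/yr
war_rate_low = 150_000 / 9      # low survey estimate       -> ~16,700/yr
war_rate_high = 460_000 / 9     # high survey estimate      -> ~51,100/yr
```

Even the low-end survey estimate implies a per-year death rate well above the Ba’th-era figure, which is the comparison the paragraph above is gesturing at.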
It’s asking a group of candidates to re-enact a presidential order given 12 years ago, while Hillary Clinton isn’t even being asked about decisions in which she took part, much less about her husband’s many military actions. …
Instead, Republican candidates should change the debate. Leadership is not about what people would do with perfect information; it’s about what people do when faced with danger and uncertainty. So here’s an answer that every Republican, from Paul to Bush, could give:
“Knowing exactly what we know now, I would not have invaded when we did or in the way we did. But I do not regret that we deposed a dangerous maniac like Saddam Hussein, and I know the world is better for it. What I or George Bush or anyone else would have done with better information is irrelevant now, because the next president has to face the world as it is, not as we would like to imagine it. And that’s all I intend to say about second-guessing a tough foreign-policy decision from 12 years ago, especially since we should have more pressing questions about foreign policy for Hillary Clinton that are a lot more recent than that.”
While I agree that Hillary should have been questioned about her own military decisions, Iraq was a congressionally authorized war that the entire Republican establishment (think tanks, newspapers, and experts like you) supported. They did such a convincing job of selling the war that even most of the Democratic establishment got on board, though never quite as enthusiastically.
By contrast, there was never any real Democratic consensus on whether Obama should remove troops or increase troops, on whether Hillary should do this or that in Libya. Obama and Hillary might have hideously bungled things, but there was never enthusiastic, party-wide support for their policies.
This makes it very easy for any Dem to distance themselves from previous Dem policies: “Yeah, looks like that was a big whoopsie. Luckily half our party knew that at the time.”
But for better or worse, the Republicans–especially the Bushes–own the Iraq War.
The big problem here is not that the Republican candidates (aside from Trump and Rand Paul) were too dumb to come up with a good response to the question (though that certainly is a problem). The real problem is that none of them had actually stopped to take a long, serious look at the Iraq War, ask whether it was a good idea, and then apologize.
The Iraq War deeply discredited the Republican party.
Ask yourself: What did Bush conserve? What have I conserved? Surely being a “conservative” means you want to conserve something, so what was it? Iraqi freedom? Certainly not. Mid East stability? Nope. American lives? No. American tax dollars? Definitely not.
The complete failure of the Republicans to do anything good while squandering 2.4 trillion dollars and thousands of American lives is what triggered the creation of the “alt” right and set the stage for someone like Trump–someone willing to make a formal break with past Republican policies on Iraq–to rise to power.
In her emotional testimony, Nayirah stated that after the Iraqi invasion of Kuwait she had witnessed Iraqi soldiers take babies out of incubators in a Kuwaiti hospital, take the incubators, and leave the babies to die.
Her story was initially corroborated by Amnesty International and testimony from evacuees. Following the liberation of Kuwait, reporters were given access to the country. An ABC report found that “patients, including premature babies, did die, when many of Kuwait’s nurses and doctors… fled” but Iraqi troops “almost certainly had not stolen hospital incubators and left hundreds of Kuwaiti babies to die.”
Kuwaiti babies died because Kuwaiti doctors and nurses abandoned them. Maybe the “experts” at the UN and in the US government should vet their sources a little better (like actually find out their last names) before starting wars based on the testimony of children?
And then there was Vietnam. Cold War “experts” were certain it was very important for us to spend billions of dollars in the 1950s to prop up the French colony in Indochina. When the French gave up, fighting the war somehow became America’s problem. The Cold War doctrine of the “Domino Theory” held that the loss of even one obscure, third-world country to Communism would unleash an unstoppable chain-reaction of global Soviet conquest, and thus the only way to preserve democracy anywhere in the world was to oppose communism wherever it emerged.
Of course, one could not be a Cold War “expert” in 1955, as we had never fought a Cold War before. This bipolar world, led by a nuclear-armed communist faction on one side and a nuclear-armed democratic faction on the other, was entirely new.
Atop the difficulties of functioning within an entirely novel balance of powers (and weapons), almost no one in America spoke Vietnamese (and no one in Vietnam spoke English) in 1955. We couldn’t even ask the Vietnamese what they thought. At best, we could play a game of telephone with Vietnamese who spoke French and translators who spoke French and English, but the Vietnamese who had learned the language of their colonizers were not a representative sample of average citizens.
In other words, we had no idea what we were getting into.
I lost family in Vietnam, so maybe I take this a little personally, but I don’t think American soldiers exist just to enrich Halliburton or protect French colonial interests. And you must excuse me, but I think you “experts” grunting for war have an extremely bad track record that involves people in my family getting killed.
While we are at it, what is the expert consensus on Russiagate?
At the same time, there is a growing consensus among reporters and thinkers on the left and right—especially those who know anything about Russia, the surveillance apparatus, and intelligence bureaucracy—that the Russiagate-collusion theory that was supposed to end Trump’s presidency within six months has sprung more than a few holes. Worse, it has proved to be a cover for U.S. intelligence and law-enforcement bureaucracies to break the law, with what’s left of the press gleefully going along for the ride. Where Watergate was a story about a crime that came to define an entire generation’s oppositional attitude toward politicians and the country’s elite, Russiagate, they argue, has proved itself to be the reverse: It is a device that the American elite is using to define itself against its enemies—the rest of the country.
Yet for its advocates, the questionable veracity of the Russiagate story seems much less important than what has become its real purpose—elite virtue-signaling. Buy into a storyline that turns FBI and CIA bureaucrats and their hand-puppets in the press into heroes while legitimizing the use of a vast surveillance apparatus for partisan purposes, and you’re in. Dissent, and you’re out, or worse—you’re defending Trump.
“Russia done it, all the experts say so” sounds suspiciously like a great many other times “expert opinion” has been manipulated by the government, industry, or media to make it sound like expert consensus exists where it does not.
Let’s look at a couple of worst-case scenarios:
Nichols and his ilk are right, but we ignore his warnings, overlook a few dastardly Russian deeds, and don’t go to war with Russia.
Nichols is wrong, but we trust him, blame Russia for things it didn’t do, and go to war with a nuclear superpower.
But let’s look at our final fail:
Failure to predict the fall of the Soviet Union
This is kind of ironic, given that Nichols is a Sovietologist, but one of the continuing questions in Political Science is “Why didn’t political scientists predict the fall of the Soviet Union?”
In retrospect, of course, we can point to the state of the Soviet economy, or glasnost, or growing unrest and dissent among Soviet citizens, but as Foreign Policy puts it:
In the years leading up to 1991, virtually no Western expert, scholar, official, or politician foresaw the impending collapse of the Soviet Union, and with it one-party dictatorship, the state-owned economy, and the Kremlin’s control over its domestic and Eastern European empires. …
Whence such strangely universal shortsightedness? The failure of Western experts to anticipate the Soviet Union’s collapse may in part be attributed to a sort of historical revisionism — call it anti-anti-communism — that tended to exaggerate the Soviet regime’s stability and legitimacy. Yet others who could hardly be considered soft on communism were just as puzzled by its demise. One of the architects of the U.S. strategy in the Cold War, George Kennan, wrote that, in reviewing the entire “history of international affairs in the modern era,” he found it “hard to think of any event more strange and startling, and at first glance inexplicable, than the sudden and total disintegration and disappearance … of the great power known successively as the Russian Empire and then the Soviet Union.”
I don’t think this is Political Science’s fault–even the Soviets don’t seem to have really seen it coming. Some things are just hard to predict.
Sometimes we overestimate our judgment. We leap before we look. We think there’s evidence where there isn’t or that the evidence is much stronger than it is.
And in the cases I’ve selected, maybe I’m the one who’s wrong. Maybe Vietnam was a worthwhile conflict, even if it was terrible for everyone involved. Maybe the Iraq War served a real purpose.
WWI was still a complete disaster. There is no logic where that war makes any sense at all.
When you advocate for war, step back a moment and ask how sure you are. If you were going to be the cannon fodder down on the front lines, would you still be so sure? Or would you be the one suddenly questioning the experts about whether this was really such a good idea?
Professor Nichols, if you have read this, I hope it has given you some food for thought.
Welcome back. In preparation for our review of The Death of Expertise: The Campaign Against Established Knowledge and Why it Matters, I have made a list of “times the experts were wrong.” Professor Nichols, if you ever happen to read this, I hope it gives you some insight into where we, the common people, are coming from. If you don’t happen to read it, it still gives me a baseline before reading your book. (Please see part 1 for a discussion of relevant definitions.)
Part 2: Law, Academia, and Science
If you’ve had any contact with the court system, you’re probably familiar with the use of “expert testimony.” Often both sides of a case bring in their own experts who give their expert testimony on the case–by necessity, contradictory testimony. For example, one expert in a patent case may testify that his microscopy data shows one thing, while a second testifies that in fact a proper analysis of his microscopy data actually shows the opposite. The jury is then asked to decide which expert’s analysis is correct.
If it sounds suspicious that both sides in a court case can find an “expert” to testify that their side is correct, that’s because it is. Take, for example, the government’s expert testimony in the trial of Mr. Carlos Simon-Timmerman, [note: link takes you to AVN, a site of questionable work-friendliness] accused of possessing child pornography:
“When trial started,” said Ramos-Vega, “the government presented the Lupe DVD and a few other images from the other DVDs that the government understood were also of child pornography. The government presented the testimony of a Special Agent of Immigration and Customs Enforcement that deals with child pornography and child exploitation cases. She testified that Lupe was ‘definitely’ under 18. The government then presented the testimony of a pediatrician who testified that she was 100 percent sure that Lupe was underage.”
The experts, ladies and gents.
After the prosecution rested its case, it was Ramos-Vega’s turn to present witnesses.
“The first witness we called was Lupe,” he said. “She took the stand and despite being very nervous testified so well and explained to the ladies and gentlemen of the jury that she was 19 years old when she performed in the videos for littlelupe.com. She also allowed us to present into evidence copies of her documents showing her date of birth.”
So the Customs Special Agent and the pediatrician were both LYING UNDER OATH about the age of a porn star in order to put an innocent man in prison. There were multiple ways they could have confirmed Lupe’s age (such as checking with her official porn star information on file in the US, because apparently that’s an official thing that exists for exactly this purpose,) or contacting Lupe herself like Mr. Simon-Timmerman’s lawyer did.
The Washington Post published a story so horrifying this weekend that it would stop your breath: “The Justice Department and FBI have formally acknowledged that nearly every examiner in an elite FBI forensic unit gave flawed testimony in almost all trials in which they offered evidence against criminal defendants over more than a two-decade period before 2000.” …
“Of 28 examiners with the FBI Laboratory’s microscopic hair comparison unit, 26 overstated forensic matches in ways that favored prosecutors in more than 95 percent of the 268 trials reviewed so far.” …
Santae Tribble served 28 years for a murder based on FBI testimony about a single strand of hair. He was exonerated in 2012. It was later revealed that one of the hairs presented at trial came from a dog.
Professor Nichols, you want to know, I assume, why we plebes are so distrustful of experts like you. Put yourself, for a moment, in the shoes of an ordinary person accused of a crime. You don’t have a forensics lab. Your budget for expert witnesses is pretty limited. Your lawyer is likely a public defender.
Do you trust that these experts are always right, even though they are often hired by people who have a lot more money than you do? Do you think there is no way these experts could be biased toward the people paying them, or that the side with more money to throw at experts and its own labs could produce more evidence favorable to itself than the other?
Now let’s expand our scope: how do you think ordinary people think about climate scientists, medical drug studies, or military intelligence? Unlike drug companies, we commoners don’t get to hire our own experts. Do you think Procter &amp; Gamble never produces research that is biased toward its own interests? Of course it does; that’s why researchers have to disclose any money they’ve received from drug companies.
From the poor man’s perspective, it looks like all research is funded by rich men, and none by poor men. It is sensible to worry, therefore, that the results of this research are inherently biased toward those who already have plenty of status and wealth.
The destruction of expertise: “Studies” Departments
Here is a paper published in a real, peer-reviewed academic journal:
The hope for multicultural, culturally competent, and diverse perspectives in science education falls short if theoretical considerations of whiteness are not entertained. [Entertained by whom?] Since whiteness is characterized [by whom?] as a hegemonic racial dominance that has become so natural it is almost invisible, this paper identifies how whiteness operates in science education such that [awkward; “to such an extent that”] it falls short of its goal for cultural diversity. [“Cultural diversity” is not one of science education’s goals] Because literature in science education [Which literature? Do you mean textbooks?] has yet to fully entertain whiteness ideology, this paper offers one of the first theoretical postulations [of what?]. Drawing from the fields of education, legal studies, and sociology, [but not science?] this paper employs critical whiteness studies as both a theoretical lens and an analytic tool to re-interpret how whiteness might impact science education. Doing so allows the field to reconsider benign, routine, or normative practices and protocol that may influence how future scientists of Color experience the field. In sum, we seek to have the field consider the theoretical frames of whiteness and how it [use “whiteness” here instead of “it” because there is no singular object for “it” to refer to in this sentence] might influence how we engage in science education such that [“to such an extent that”] our hope for diversity never fully materializes.
Apologies for the red pen; you might think that someone at the “School of Education” could write a grammatical sentence and the people publishing peer-reviewed journals would employ competent editors, but apparently not.
If these are “experts,” then expertise is dead with a stake through its heart.
But the paper goes on!
The resounding belief that science is universal and objective hides the reality that whiteness has shaped the scientific paradigm.
See, you only think gravity pulls objects toward the earth at a rate of 9.8 m/s^2 because you’re white. When black people drop objects off the Leaning Tower of Pisa, they fall at 10 m/s^2. Science textbooks and educators only teaching the white rate and refusing to teach the black rate is why no black nation has successfully launched a man into space.
Our current discourse believes that science and how we approach experimentation and constructing scientific explanations is unbiased, and on the surface, it may seem justified (Kelly 2014). However, this way of knowing science in the absence of other ways of knowing only furthers whiteness an White supremacy through power and control of science knowledge. As a result, our students of Color are victims of deculturization, and their own worldviews are invalidated, such as described by Ladson-Bilings (1998a).
For example, some Aboriginal people in Australia believe that cancer is caused by curses cast by other people or a spiritual punishment for some misdeed the sufferer committed. Teaching them that cancer is caused by mutated cells that have begun reproducing out of control and can’t be caused by a curse is thus destroying a part of their culture. Since all cultures are equally valuable, we must teach that the Aboriginal theory of cancer-curses and the white theory of failed cellular apoptosis are equally true.
Or Le and Matias are full of shit. Le doesn’t have his PhD yet, so he isn’t an official expert, but Matias is a professor with a CV full of published, peer-reviewed articles on similar themes.
Every degree awarded and paper published on such garbage degrades the entire concept of “experts.” Sure, Nichols is a professor–and so is Matias. As far as our official system for determining expertise is concerned, Nichols, Matias, and Stephen Hawking are all “experts.”
Black boys raised in America, even in the wealthiest families and living in some of the most well-to-do neighborhoods, still earn less in adulthood than white boys with similar backgrounds, according to a sweeping new study that traced the lives of millions of children.
White boys who grow up rich are likely to remain that way. Black boys raised at the top, however, are more likely to become poor than to stay wealthy in their own adult households.
(Oh, look, someone discovered regression to the mean.)
You don’t need an “expert” to tell you that black men might get discriminated against.
How do you become an “expert” in anti-racism? Do you have to pass the implicit bias test? Get a degree in anti-racist studies?
Do you think, for whatever reason, that a guy who gets paid to do anti-racist research might come up with “racism” as an answer to almost any question posed?
“The guy who gets paid to say that racism is the answer said the answer is racism” does not actually prove that racism is the answer, but it is being presented like it does.
Blue check has failed to mention any obvious counters, like:
a. Mysteriously, this “racism” only affects black men and not black women (this is why we’ve had a black female president but not a black male one, right?)
b. Regression to the mean is a thing and we can measure it (in short: the further you are from average for your group on any measure [height, intelligence, income, number of Daleks collected, etc.], the more likely your kids are to be closer to average than you are. [This is why the kids of Nobel prize winners, while pretty smart on average, are much less likely to win Nobels than their parents.] Since on average blacks make a lot less money than whites, any wealthy black family is significantly further from the average black income than a white family with the same amount of money is from the average white income. Therefore at any high income level, we expect black kids to regress harder toward the black mean than white kids raised at the same level. La Griffe du Lion [a statistics expert] has an article that goes into much more depth and math on regression to the mean and its relevance.)
c. Crime rates. Black men commit more crime than black women or white men, and not only does prison time cut into employment, but most employers don’t want to employ people who’ve committed a crime. This makes it easier for black women to get jobs and build up wealth than black men. (The article itself does mention that “The sons of black families from the top 1 percent had about the same chance of being incarcerated on a given day as the sons of white families earning $36,000,” but yeah, it’s probably just totally irrational discrimination keeping black men out of jobs.)
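The regression-to-the-mean point in (b) can be illustrated with a toy simulation. All numbers here (group means, parental income, the “heritability” weight) are hypothetical, chosen only to make the statistical effect visible; this is a sketch of the mechanism, not a model of the actual study data:

```python
import random

random.seed(0)

def simulate(group_mean, parent_income, heritability=0.5, sd=20_000, n=100_000):
    """Toy model: child outcome = group mean + heritability * (parent's deviation
    from the group mean) + noise. Each generation regresses partway back toward
    its own group's mean."""
    return [group_mean + heritability * (parent_income - group_mean) + random.gauss(0, sd)
            for _ in range(n)]

# Hypothetical group means; both sets of parents are equally wealthy.
white_mean, black_mean = 60_000, 40_000
parent_income = 150_000

white_kids = simulate(white_mean, parent_income)
black_kids = simulate(black_mean, parent_income)

mean = lambda xs: sum(xs) / len(xs)
print(round(mean(white_kids)))  # ~105,000: halfway back toward 60k
print(round(mean(black_kids)))  # ~95,000: same parental income, but regressing toward 40k
```

Even with identical parental incomes and identical noise, the group with the lower mean ends up with lower-earning children on average, purely as an artifact of regression toward different group means.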
“Experts” like this get trotted out to support a simple, narrative-friendly line the paper wants to push, rather than to give any real or uncomfortable analysis of a complex issue. It’s dishonest reporting, and it contributes to the notion that “expert” doesn’t mean all that much.
Tetraethyl lead was added to automobile fuels beginning in the 1920s as an antiknock agent, allowing higher-compression engines and better fuel economy–that is, more miles per gallon. For half a century, automobiles belched brain-damaging lead into the atmosphere, until the Clean Air Act in the 70s forced gas companies to cut back.
Plenty of people knew lead is poisonous–we’ve known that since at least the time of the Romans–so how did it end up in our gas? Well, those nice scientists over at the auto manufacturers reassured us that lead in gasoline was perfectly safe, and then got themselves on a government panel intended to evaluate the safety of leaded gas and came to the same conclusion. Wired has a thorough history:
But fearing that such [anti-leaded gas] measures would spread, … the manufacturing companies demanded that the federal government take over the investigation and develop its own regulations. U.S. President Calvin Coolidge, a Republican and small-government conservative, moved rapidly in favor of the business interests.
… In May 1925, the U.S. Surgeon General called a national tetraethyl lead conference, to be followed by the formation of an investigative task force to study the problem. That same year, Midgley [the inventor of leaded gas] published his first health analysis of TEL, which acknowledged a minor health risk at most, insisting that the use of lead compounds, “compared with other chemical industries it is neither grave nor inescapable.”
It was obvious in advance that he’d basically written the conclusion of the federal task force. That panel included only selected industry scientists like Midgley. It had no place for Alexander Gettler or Charles Norris [scientists critical of leaded gas] or, in fact, anyone from any city where sales of the gas had been banned, or any agency involved in producing that first critical analysis of tetraethyl lead.
In January 1926, the public health service released its report which concluded that there was “no danger” posed by adding TEL to gasoline…”no reason to prohibit the sale of leaded gasoline” as long as workers were well protected during the manufacturing process.
The task force did look briefly at risks associated with everyday exposure by drivers, automobile attendants, and gas station operators, and found that it was minimal. The researchers had indeed found lead residues in dusty corners of garages. In addition, all the drivers tested showed trace amounts of lead in their blood. But a low level of lead could be tolerated, the scientists announced. After all, none of the test subjects showed the extreme behaviors and breakdowns associated with places like the looney gas building. And the worker problem could be handled with some protective gear.
I’m not sure how many people were killed globally by leaded gas, but Wired notes:
It was some fifty years later – in 1986 – that the United States formally banned lead as a gasoline additive. By that time, according to some estimates, so much lead had been deposited into soils, streets, building surfaces, that an estimated 68 million children would register toxic levels of lead absorption and some 5,000 American adults would die annually of lead-induced heart disease.
The UN estimates that the elimination of lead in gas and paint has added $2.4 trillion, annually, to the global economy.
Leaded gas is a good example of a case where many experts did know it was poisonous (as did many non-experts,) but this wasn’t the story the public heard.
Yes, this one is silly, but I have relatives who keep bringing it up. “Scientists used to say there are 9 planets, but now they say there are only 8! Scientists change what they think all the time!”
Congratulations, astronomers: people think you lost Pluto. Every single time I try to discuss science with these people, they bring up Pluto. Scientific consensus is meaningless in a world where planets just disappear. “Whoops! We miscounted!”
(No one ever really questioned Pluto’s planetary status before it was changed, but a few die-hards refuse to accept the new designation.)
Scientists weren’t actually wrong about Pluto (“planet” is just a category scientists made up and that they decided to redefine to make it more useful,) but the matter confused people and it seemed like scientific consensus was arbitrary and could change unexpectedly.
Unfortunately, normal people who don’t have close contact with science or scientists often struggle to understand exactly what science is and how it advances. They rely, sporadically, on intermediaries like The History Channel or pop science journalists to explain it to them, and these guys like to run headlines like “5 Things Albert Einstein Got Totally Wrong” (haha that Albert, what a dummy, amirite?)
So when you question why people distrust experts like you, Professor Nichols, consider whether the other “experts” they’ve encountered have been trustworthy or even correct, or if they’ve been liars and shills.
Nichols devotes a chapter to the subject–expert failures are, he claims, “rare but spectacular when they do happen, like plane crashes.” (I may be paraphrasing slightly.)
How often are the experts wrong? (And how would we measure that?)
For starters, we have to define what “experts” are. Nichols might define experts as, “anyone who has a PhD in a thing or has worked in that field for 10 years,” but the general layman is probably much laxer in his definitions.
Now, Nichols’s argument that “experts” are correct most of the time probably is correct, at least if we use a conservative definition of “expert”. We live in a society that is completely dependent on the collective expertise of thousands if not millions of people, and yet that society keeps running. For example, I do not know how to build a road, but road-building experts do, and our society has thousands of miles of functional roads. They’re not perfect, but they’re a huge improvement over dirt paths. I don’t know how to build a car, but car-building experts do, and so society is full of cars. From houses to skyscrapers, smartphones to weather satellites, electricity to plumbing: most of the time, these complicated systems get built and function perfectly well. Even airplanes, incredibly, don’t fall out of the sky most of the time (and according to Steven Pinker, they’re getting even better at it.)
But these seem like the kind of experts that most people don’t second-guess too often (“I think you should only put three wheels on the car–and make them titanium,”) nor is this the sort of questioning that I think Nichols is really concerned about. Rather, I think Nichols is concerned about people second-guessing experts like himself whose opinions bear not on easily observed, physical objects like cars and roads but on abstract policies like “What should our interest rates be?” or “Should we bomb Syria?”
We might distinguish here between practical experts employed by corporations, whose expertise must be “proven” via production of actual products that people actually use, and academic experts whose products are primarily ideas that people can’t touch, test, or interact with.
For ordinary people, though, we must include another form of experts: writers–of newspapers, magazines, TV programs, textbooks, even some well-respected bloggers. Most people don’t read academic journals nor policy papers. They read Cosmo and watch daytime talk shows, not because they “hate experts” but because this is the level of information they can understand.
In other words, most people probably think Cosmo’s “style expert” and Donald Trump are as much “experts” as Tom Nichols. Trump is a “business expert” who is so expert he not only has a big tower with his name on it, they even let him hire and fire people on TV! Has anyone ever trusted Nichols’s expertise enough to give him a TV show about it?
Trump Tower is something people can touch–the kind of expertise that people trust. Nichols’s expertise is the Soviet Union (now Russia) and how the US should approach the threat of nuclear war and deterrence–not things you can easily build, touch, and test.
Nichols’s idea of “experts” is probably different from the normal person’s idea of “experts.” Nichols probably uses metrics like “How long has this guy been in the field?” and “Which journals has he been published in?” while normal people use metrics like “Did CNN call him an expert?” and “Did I read it in a magazine?” (I have actually witnessed people citing margarine advertisements as “nutrition advice.”)
If anything, I suspect the difference between “normal people’s idea of expert” and “Nichols’s idea of experts” is part of the tension Nichols is feeling, as for the first time, ordinary people like me who would in the past have been limited largely to discussing the latest newspaper headlines with friends can now pull up any academic’s CV and critique it online. “The people,” having been trained on daytime TV and butter ads, can now critique foreign policy advisers…
Let’s sort “people who distrust experts” into three main categories:
Informed dissenters: People who have read a lot on a particular topic and have good reason to believe the expert consensus is wrong, eg, someone involved in nutrition research who began sounding warning bells about the dangers of partially hydrogenated fats in the ’80s.
General contrarians: Other people are wrong. Music has been downhill ever since the Beatles. The schools are failing because teachers are dumb. Evolution isn’t real. Contrarians like to disagree with others and sometimes they’re correct.
Tinfoil hatters: CHEMTRAILS POISON YOU. The Tinfoil hatters don’t think other people are dumb; they think others are actively conspiring against them.
People can fall into more than one category–in fact, being a General Contrarian by nature probably makes it much easier to be an Informed Dissenter. Gregory Cochran, for example, probably falls into both categories. (Scott Alexander, by contrast, is an informed dissenter but not contrarian.)
Tinfoil hatters are deprecated, but even they are sometimes correct. If a Jew in 1930’s Germany had said, “Gee, I think those Germans have it out for us,” they’d have been correct. A white South African today who thinks the black South Africans have it out for them is probably also correct.
So the first question is whether more people actually distrust experts, or if the spread of the internet has caused Nichols to interact with more people who distrust experts. For example, far more people in the 80s were vocally opposed to the entire concept of “evolution” than are today, but they didn’t have the internet to post on. Nichols, a professor at the US Naval War College and the Harvard Extension School, probably doesn’t interact in real life with nearly as many people who are actively hostile to the entire edifice of modern science as the Kansas State Board of Education does, and thus he may have been surprised to finally encounter these people online.
But let’s get on with our point: a few cases where “the experts” have failed:
Artificially created trans (or partially hydrogenated) fats entered the American diet in large quantities in the 1950s. Soon nutrition experts, dieticians, healthcare philanthropists, and the federal government itself were all touting the trans fat mantra: trans fats like margarine or Crisco were healthier and better for you than the animal fats like butter or lard traditionally used in cooking.
Unfortunately, the nutrition experts were wrong. Trans fats are deadly. According to a study published in 1993 by the Harvard School of Public Health, trans fats are probably responsible for about 100,000 deaths a year–or a million every decade. (And that’s not counting the people who had heart attacks and survived because of modern medical care.)
The first people to question the nutritional orthodoxy on trans fats (in any quantity) were probably the General Contrarians: “My grandparents ate lard and my parents ate lard and I grew up eating lard and we turned out just fine! We didn’t have ‘heart attacks’ back in the ’30s.” After a few informed dissenters started publishing studies questioning the nutritional orthodoxy, nutrition’s near-endless well of tinfoil hatters began promoting their findings (if any field is perfect for paranoia about poisons and contaminants, well, it’s food.)
And in this case, the tinfoil hatters were correct: corporations really were promoting the consumption of something they knew by then was killing people, just because it made them money.
If you’re old enough, you remember not only the days of Joe Camel, but also Camel’s ads heavily implying that doctors endorsed smoking. Dentists recommended Viceroys, the filtered cigarettes. Camels were supposed to “calm the nerves” and “aid the digestion.” Physicians recommended “mell-o-wells,” the “health cigar.” Some brands were even supposed to cure coughs and asthma.
Now, these weren’t endorsements from actual doctors–if anything, the desire to give cigarettes a healthy sheen was probably driven by the accumulating evidence that they weren’t healthy–but when my grandmother took up smoking, do you think she was reading medical journals? No, she trusted that nice doctor in that Camel ad.
Chesterfield, though, claimed that actual doctors had confirmed that their cigarettes had no adverse health effects.
In the 70s, the tobacco companies found doctors willing to testify not that tobacco was healthy, but that there was no proof–or not enough data–to accuse it of being unhealthy.
Even when called before Congress in the 90s, tobacco companies kept insisting their products weren’t damaging. If the CEO of Philip Morris isn’t an expert on cigarettes, I don’t know who is.
The CDC estimates that 480,000 Americans die due to cigarettes per year, making them one of our leading killers.
Freudianism, recovered memories, multiple personality disorder, and Satanic Daycares
In retrospect, Freudian Psychoanalysis is so absurd, it’s amazing it ever became a widely-believed, mainstream idea. And yet it was.
In the early 1890s, Freud used a form of treatment based on the one that Breuer had described to him, modified by what he called his “pressure technique” and his newly developed analytic technique of interpretation and reconstruction. According to Freud’s later accounts of this period, as a result of his use of this procedure most of his patients in the mid-1890s reported early childhood sexual abuse. He believed these stories, which he used as the basis for his seduction theory, but then he came to believe that they were fantasies. He explained these at first as having the function of “fending off” memories of infantile masturbation, but in later years he wrote that they represented Oedipal fantasies, stemming from innate drives that are sexual and destructive in nature.
Another version of events focuses on Freud’s proposing that unconscious memories of infantile sexual abuse were at the root of the psychoneuroses in letters to Fliess in October 1895, before he reported that he had actually discovered such abuse among his patients. In the first half of 1896, Freud published three papers, which led to his seduction theory, stating that he had uncovered, in all of his current patients, deeply repressed memories of sexual abuse in early childhood. In these papers, Freud recorded that his patients were not consciously aware of these memories, and must therefore be present as unconscious memories if they were to result in hysterical symptoms or obsessional neurosis. The patients were subjected to considerable pressure to “reproduce” infantile sexual abuse “scenes” that Freud was convinced had been repressed into the unconscious. Patients were generally unconvinced that their experiences of Freud’s clinical procedure indicated actual sexual abuse. He reported that even after a supposed “reproduction” of sexual scenes the patients assured him emphatically of their disbelief.
To sum: Freud became convinced that patients had suffered sexual abuse.
The patients replied emphatically that they had not.
Freud made up a bunch of sexual abuse scenarios.
The patients insisted they remembered nothing of the sort.
Freud decided the memories must just be repressed.
Later, Freud decided the sexual abuse never actually happened, but that the repressed, inverted memories were of children masturbating to the thought of having sex with their parents.
So not only was Freud’s theory derived from nothing–directly contradicted by the patients he supposedly based it on–he took it a step further and actually denied the stories of patients who had been sexually abused as children.
Freud’s techniques may have been kinder than the psychology of the 1800s, which AFAIK involved locking insane people in asylums and stomping them to death, but there remains a cruel perversity to insisting that people have memories of horrible experiences they swear they don’t, and then turning around and saying that horrible things they clearly remember never happened.
Eventually Freudian psychoanalysis and its promise of “recovering repressed memories” morphed into the recovered traumatic memory movement of the 1980s, in which psychologists used hypnosis to convince patients they had been the victims of a vast world-wide Satanic conspiracy and that they had multiple, independent personalities that could only be accessed via hypnosis.
The Satanic Daycare conspiracy hysteria resulted in the actual conviction and imprisonment of real people for crimes like riding broomsticks and sacrificing elephants, despite a total lack of local dead elephants. Judges, lawyers, juries, and prosecutors found the testimony of “expert” doctors and psychologists (and children) convincing enough to put people in prison for running an underground, global network of “Satanic Daycares” that were supposedly raping and killing children. Eventually the hysteria got so bad that the FBI got involved, investigated, and found a big fat nothing. No sacrificial altars. No secret basements full of Satanic paraphernalia and torture devices. No dead elephants or giraffes. No magic brooms. No dead infants.
Insurance companies began investigating the extremely expensive claims of psychologists treating women with “multiple personality disorder” (many of whom had so degenerated while in the psychologists’ care that they had gone from employed, competent people to hospitalized mental patients.) Amazingly, immediately after insurance companies decided the whole business was a scam and stopped paying for the treatment, the patients got better. Several doctors were sued for malpractice and MPD was removed from the official list of psychological conditions, the DSM. (It has been replaced with DID, or dissociative identity disorder.)
(Ironically, people attack psychiatry’s use of medications like Prozac, but if anything, these are the most evidence-based parts of mental care. At least you can collect data on things like “Does Prozac work better than placebo for making people feel better?” unlike Freudian psychoanalysis, which contained so many levels of “repression” and “transference” that there was always a ready excuse for why it wasn’t working–or for why “the patient got worse” was actually exactly what was supposed to happen.)
Between 1839 and 1847, the First Clinic at the Vienna General Hospital had 20,204 births and 1,989 maternal deaths. The Second Clinic, attended by midwives, had 17,791 births and 691 maternal deaths. An MD’s care conferred an extra 6% chance of death. Births at home were even safer, with maternal mortality averaging about 0.5%.
In that period, MDs caused about 1200 extra deaths. …
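Those figures can be checked directly; here is a quick sketch (clinic numbers as given above, with the midwives’ Second Clinic taken as the baseline rate):

```python
# Maternal mortality at the Vienna General Hospital, 1839-1847,
# checking the "extra 6%" and "about 1200 extra deaths" claims
first_births, first_deaths = 20_204, 1_989    # First Clinic (doctors)
second_births, second_deaths = 17_791, 691    # Second Clinic (midwives)

first_rate = first_deaths / first_births      # ~9.8%
second_rate = second_deaths / second_births   # ~3.9%
excess_rate = first_rate - second_rate        # ~6 percentage points

# Deaths in the First Clinic beyond what the midwives' rate predicts
excess_deaths = excess_rate * first_births

print(f"First clinic:  {first_rate:.1%}")
print(f"Second clinic: {second_rate:.1%}")
print(f"Excess deaths: {excess_deaths:.0f}")  # ~1200
```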
We know that wounded men in the Civil War had a better chance of surviving when they managed to hide from Army surgeons. Think how many people succumbed to bloodletting, over the centuries.
Ever wondered why Christian Scientists, who are otherwise quite pro-science, avoid doctors? It’s because their founder, Mary Baker Eddy (born in 1821) was often sick as a child. Her concerned parents dragged her to every doctor they could find, but poor Mary found that she got better when she stopped going to the doctors.
Back in the good old days, Charles II, age 53, had a fit one Sunday evening, while fondling two of his mistresses.
Monday they bled him (cupping and scarifying) of eight ounces of blood. Followed by an antimony emetic, vitriol in peony water, purgative pills, and a clyster. Followed by another clyster after two hours. Then syrup of blackthorn, more antimony, and rock salt. Next, more laxatives, white hellebore root up the nostrils. Powdered cowslip flowers. More purgatives. Then Spanish Fly. They shaved his head and stuck blistering plasters all over it, plastered the soles of his feet with tar and pigeon-dung, then said good-night.
Tuesday. Ten more ounces of blood, a gargle of elm in syrup of mallow, and a julep of black cherry, peony, crushed pearls, and white sugar candy.
Wednesday. Things looked good: only senna pods infused in spring water, along with white wine and nutmeg.
Thursday. More fits. They gave him a spirituous draft made from the skull of a man who had died a violent death. Peruvian bark, repeatedly, interspersed with more human skull. Didn’t work.
Friday. The king was worse. He tells them not to let poor Nelly starve. They try the Oriental Bezoar Stone, and more bleeding. Dies at noon.
Homeopathy has a similar history: old medicines were so often poisonous that even if some of them worked, on average, you were probably better off eating sugar pills (which did nothing) than taking “real” medicines. But since people can’t market “pills with nothing in them,” homeopathy’s strange logic of “diluting medicine makes it stronger” was used to give the pills a veneer of doing something. (Freudian psychotherapy, to the extent that it “helped” anyone, was probably similar. Not that the practitioner himself brought anything to the table, but the idea of “I am having treatment so I will get better” plus the opportunity to talk about your problems probably helped some people.)
Today, “alternative” medical treatments like homeopathy and “faith healing” are less effective than conventional medicine, but for most of the past 2,000 years or so, you’d have been better off distrusting the “experts” (ie doctors) than trusting them.
It was only in the 20th century that doctors (or researchers) developed enough technology like vaccines, antibiotics, the germ theory of disease, nutrition, insulin, trauma care, etc., that doctors began saving more lives than they cost, but the business was still fraught:
Disclaimer: I have had the whole birth trifecta: natural birth without medication, vaginal birth with medication, and c-section. Natural birth was horrifically painful and left me traumatized. The c-section, while medically necessary, was almost as terrible. Recovery from natural (and medicated) birth was almost instant–within minutes I felt better; within days I was back on my feet and regaining mobility. The c-section left me in pain for a month, trying to nurse a new baby and care for my other children while on pain killers that made me feel awful and put me to sleep. Without the pain killers, I could barely sit up and get out of bed.
Medically necessary c-sections save lives, perhaps mine. I support them, but I do NOT support medically unnecessary c-sections.
The “international healthcare community” recommends a c-section rate of 10-15% (maybe 19%.) The US rate is over 30%. Half of our c-sections are unnecessary traumas inflicted on women.
In cases where c-sections are not medically necessary (low-risk pregnancies), c-sections carry more than triple the risk of maternal death (13 per 100,000 for c sections and 3.5 per 100,000 for vaginal births.) Medically necessary c-sections, of course, save more lives than they take.
Given: 1,258,581 c-sections in the US in 2016, if half of those were unnecessary, then I estimate 60 women per year died from unnecessary c-sections. Not the kind of death rate Semmelweis was fighting against when he tried to convince doctors they needed to wash their hands between dissecting corpses and delivering babies. (For his efforts he was branded “a guy who didn’t believe the wisdom of experts” and “crazy,” and was eventually put in an insane asylum and literally stomped to death by the guards. Freudianism looks really good by comparison.)
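The ~60-deaths estimate follows directly from the figures above; a minimal sketch, assuming (as I do here) that half of all c-sections are unnecessary and that the low-risk mortality rates apply to them:

```python
# Excess maternal deaths from unnecessary c-sections,
# using the 2016 US figures quoted in the text
total_csections = 1_258_581
unnecessary = total_csections / 2          # assume half are unnecessary

csection_mortality = 13 / 100_000          # low-risk c-section deaths
vaginal_mortality = 3.5 / 100_000          # low-risk vaginal birth deaths

# Deaths that would not have occurred had those births been vaginal
excess_deaths = unnecessary * (csection_mortality - vaginal_mortality)
print(f"~{excess_deaths:.0f} excess deaths per year")  # ~60
```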
C-sections have other effects besides just death: they are more expensive, can get infected, and delay recovery. (I’ve also seen data linking them to an increased chance of post-partum depression.) For women who want to have more children, a c-section increases the chances of problems during subsequent pregnancies and deliveries.
Why do we do so many c-sections? Because in the event of misfortune, a doctor is more likely to get sued if he didn’t do a c-section (“He could have done more to save the baby’s life but chose to ignore the signs of fetal distress!”) than if he does do one (“We tried everything we could to save mother and baby.”) Note that this is not what’s in the mother’s best interests, but in the doctor’s.
Although I am obviously not a fan of natural childbirth (I favor epidurals), I am sympathetic to the movement’s principal logic: avoiding unnecessary c-sections by avoiding the doctors who give them. These women are anti-experts, and I can’t exactly blame them.
At the intersection of the “natural food” and “natural birth” communities we find the anti-vaxers.
Now, I am unabashedly pro-vaccine (though I reserve the right to criticize any particular vaccine,) but I still understand where the anti-vax crew is coming from. If doctors were wrong about blood-letting, are wrong about many c-sections (or push them on unsuspecting women to protect their own bottom lines), and were just plain wrong for decades about dangerous but lucrative artificial fats that they actively pushed people to eat, who’s to say they’re right about everything else? Maybe some of the other chemicals we’re being injected with are actually harmful.
We can point to (and I do) massive improvements in public health and life expectancies as a result of vaccinations, but (anti-vaxers counter) how do we know these outcomes weren’t caused by other things, like the development of water treatment systems and sewers that ensured people weren’t drinking fecal-contaminated water anymore?
(I am also pro-not drinking contaminated water.)
Like concerns about impurities in one’s food, concerns about vaccinations make a certain instinctual sense: it is kind of creepy to inject people (mostly infants) with a serum composed of, apparently, dead germs and “chemicals.” The idea that exposing yourself to germs will somehow make you healthier is counter-intuitive, and hypodermic needles are a well-publicized disease vector.
So even though I think anti-vaxers are wrong, I don’t think they’re completely irrational.
This is the end of Part 1. We’ll continue with Part 2 on Wed.
Several years ago, Tom Nichols started writing a book about ignorance and unreason in American public discourse—and then he watched it come to life all around him, in ways starker than he had imagined. A political scientist who has taught for more than a decade in the Harvard Extension School, he had begun noticing what he perceived as a new and accelerating—and dangerous—hostility toward established knowledge. People were no longer merely uninformed, Nichols says, but “aggressively wrong” and unwilling to learn. They actively resisted facts that might alter their preexisting beliefs. They insisted that all opinions, however uninformed, be treated as equally serious. And they rejected professional know-how, he says, with such anger. That shook him.
Skepticism toward intellectual authority is bone-deep in the American character, as much a part of the nation’s origin story as the founders’ Enlightenment principles. Overall, that skepticism is a healthy impulse, Nichols believes. But what he was observing was something else, something malignant and deliberate, a collapse of functional citizenship.
What are people aggressively wrong about, and what does he think is causing the collapse of functional citizenship?
The Death of Expertise resonated deeply with readers. … Readers regularly approach Nichols with stories of their own disregarded expertise: doctors, lawyers, plumbers, electricians who’ve gotten used to being second-guessed by customers and clients and patients who know little or nothing about their work. “So many people over the past year have walked up to me and said, ‘You wrote what I was thinking,’” he says.
Sounds like everyone’s getting mansplained these days.
The Death of Expertise began as a cri de coeur on his now-defunct blog in late 2013. This was during the Edward Snowden revelations, which to Nichols’s eye, and that of other intelligence experts, looked unmistakably like a Russian operation. “I was trying to tell people, ‘Look, trust me, I’m a Russia guy; there’s a Russian hand behind this.’ ” But he found more arguments than takers. “Young people wanted to believe Snowden was a hero.”
I don’t have a particular opinion on Snowden because I haven’t studied the issue, but let’s pretend you were in the USSR and one day a guy in the government spilled a bunch of secrets about how many people Stalin was having shot and how many millions were starving to death in Holodomor (the Ukrainian genocide.) (Suppose also that the media were sufficiently free to allow the stories to spread.)
Immediately you’d have two camps: the “This guy is a capitalist spy sent to discredit our dear leader with a hideous smear campaign” and “This guy is totally legit, the people need to know!”
Do you see why “Snowden is a Russian” sounds like the government desperately trying to cover its ass?
Now let’s suppose the guy who exposed Stalin actually was a capitalist spy. Maybe he really did hate communism and wanted to bring down the USSR. Would it matter? As long as the stuff he said was true, would you want to know anyway? I know that if I found out about Holodomor, I wouldn’t care about the identity of the guy who released the information besides calling him a hero.
I think a lot of Trump supporters feel similarly about Trump. They don’t actually care whether Russia helped Trump or not; they think Trump is helping them, and that’s what they care about.
In other words, it’s not so much “I don’t believe you” as “I have other priorities.”
In December, at a JFK Library event on reality and truth in public discourse, a moderator asked him a version of “How does this end?” … “In the longer term, I’m worried about the end of the republic,” he answered. Immense cynicism among the voting public—incited in part by the White House—combined with “staggering” ignorance, he said, is incredibly dangerous. In that environment, anything is possible. “When people have almost no political literacy, you cannot sustain the practices that sustain a democratic republic.” The next day, sitting in front of his fireplace in Rhode Island, where he lives with his wife, Lynn, and daughter, Hope, he added, “We’re in a very perilous place right now.”
Staggering ignorance about what, I wonder. Given our increased access to information, I suspect that the average person today both knows and can easily find the answers to far more questions than the average person of the 80s, 50s, or 1800s.
I mean, in the 80s, we still had significant numbers of people who believed in: faith healing; televangelists; six-day creationism; “pyramid power”; crop circles; ESP; UFOs; astrology; multiple personality disorder; a global Satanic daycare conspiracy; recovered memories; Freudianism; and the economic viability of the USSR. (People today still believe in the last one.)
On the one hand, I think part of what Nichols is feeling is just the old distrust of experts projected onto the internet. People used to harass their local school boards about teaching ‘evilution’; today they harass each other on Twitter over Benghazi or birtherism or Russia collusion or whatever latest thing.
We could, of course, see a general decline in intellectual abilities as the population of the US itself is drawn increasingly from low-IQ backgrounds and low-IQ people (appear to) outbreed the high-IQ ones, but I have yet to see whether this has had time to manifest as a change in the amount of general knowledge people can use and display, especially given our manifestly easier time actually accessing knowledge. I am tempted to think that perhaps the internet forced Nichols outside of his Harvard bubble and he encountered dumb people for the first time in his life.
On the other hand, however, I do feel a definite sense of malaise in America. It’s not about IQ, but how we feel about each other. We don’t seem to like each other very much. We don’t trust each other. Trust in government is low. Trust in each other is low. People have fewer close friends and confidants.
We have material prosperity, yes, despite our economic woes, but there is a spiritual rot.
Both sides are recognizing this, but the left doesn’t understand what is causing it.
They can point at Trump. They can point at angry hordes of Trump voters. “Something has changed,” they say. “The voters don’t trust us anymore.” But they don’t know why.
Here’s what I think happened:
The myth that is “America” got broken.
A country isn’t just a set of laws with a tract of land. It can be that, but if so, it won’t command a lot of sentimental feeling. You don’t die to defend a “set of laws.” A country needs a people.
“People” can be a lot of things. They don’t have to be racially homogenous. “Jews” are a people, and they are not racially homogenous. “Turks” are a people, and they are not genetically homogenous. But fundamentally, people have to see themselves as “a people” with a common culture and identity.
America has two main historical groups: whites and blacks. Before the mass immigration kicked off in 1965, whites were about 88% of the country and blacks were about 10%. Indians, Asians, Hispanics, and everyone else rounded out that last 2%. And say what you will, but whites thought of themselves as the American culture, because they were the majority.
America absorbed newcomers. People came, got married, had children: their children became Americans. The process takes time, but it works.
Today, though, “America” is fractured. It is ethnically fractured–California and Texas, for example, are now majority non-white. There is nothing particularly wrong with the folks who’ve moved in, they just aren’t from one of America’s two main historical ethnic groups. They are their own groups, with their own histories. England is a place with a people and a history; Turkey is a place with a people and a history. They are two different places with different people and different history. It is religiously fractured–far fewer people belong to one of America’s historically prominent religions. It is politically fractured–more people now report being uncomfortable with their child dating a member of the opposite political party than of a different race.
The decision caps off a six-month long debate, after some San Franciscans approached the commission in August 2017 to complain about the statue, which features a pious but patronizing scene of a Spanish missionary helping a beaten Indian to his feet and pointing him toward heaven.
In February the city’s Historic Preservation Commission voted unanimously to recommend removing “Early Days” despite some commissioners expressing reservations about whether the sculpture has additional value as an exposé of 19th century racism.
Your statues are racist. Your history is racist. Your people is racist.
What do they think the reaction to this will look like?
It is not intuitive that a case needs to be made for “Reason, Science, Humanism, and Progress,” stable values that have long defined our modernity. And most expect any attack on those values to come from the far right: from foes of progressivism, from anti-science religious movements, from closed minds. Yet Steven Pinker argues there is a second, more profound assault on the Enlightenment’s legacy of progress, coming from within intellectual and artistic spheres: a crisis of confidence, as progress’s supporters see so many disasters, setbacks, emergencies, new wars re-opening old wounds, new structures replicating old iniquities, new destructive side-effects of progress’s best intentions. …
Pinker’s volume moves systematically through various metrics that reflect progress, charting improvements across the last half-century-plus in areas from racism, sexism, homophobia, and bullying, to car accidents, oil spills, poverty, leisure, female empowerment, and so on. …
the case Pinker seeks to make is at once so basic and so difficult that a firehose of evidence may be needed—optimism is a hard sell in this historical moment. … Pinker credits the surge in such sentiments since the 1960s to several factors. He points to certain religious trends, because a focus on the afterlife can be in tension with the project of improving this world, or caring deeply about it. He points to nationalism and other movements that subordinate goods of the individual or even goods of all to the goods of a particular group. He points to what he calls neo-Romantic forms of environmentalism, not all environmentalisms but specifically those that subordinate the human species to the ecosystem and seek a green future, not through technological advances, but through renouncing current technology and ways of living. He also points to a broader fascination with narratives of decline …
I like the way Pinker thinks and appreciate his use of actual data to support his points.
To these decades-old causes, one may add the fact that humankind’s flaws have never been so visible as in the twenty-first century. … our failures are more visible than ever through the digital media’s ceaseless and accelerating torrent of grim news and fervent calls to action, which have pushed many to emotional exhaustion. Within the last two years, though not before, numerous students have commented in my classroom that sexism/racism/inequality “is worse today than it’s ever been.” The historian’s answer, “No, it used to be much worse, let me tell you about life before 1950…,” can be disheartening, especially when students’ rage and pain are justified and real. In such situations, Pinker’s vast supply of clear, methodical data may be a better tool to reignite hope than my painful anecdotes of pre-modern life.
Maybe Nichols is on to something about people today being astoundingly ignorant…
Pinker’s celebration of science is no holds barred: he calls it an achievement surpassing the masterworks of art, music, and literature, a source of sublime beauty, health, wealth, and freedom.
I agree with Pinker on science, but Nichols’s worldview may be the one that needs plumbing.
A Memorial to the Enslaved People Who Enabled the Founding of Harvard Law School
On a clear, windy afternoon in early September at the opening of its bicentennial observance, Harvard Law School unveiled a memorial on campus. The plaque, affixed to a large stone, reads:
In honor of the enslaved whose labor created wealth that made possible the founding of Harvard Law School
May we pursue the highest ideals of law and justice in their memory
Harvard Law School was founded in 1817, with a bequest from Isaac Royall Jr. Royall’s wealth was derived from the labor of enslaved people on a sugar plantation he owned on the island of Antigua and on farms he owned in Massachusetts.
“We have placed this memorial here, in the campus cross-roads, at the center of the school, where everyone travels, where it cannot be missed,” said HLS Dean John Manning ’85. …
Harvard University President Drew Faust… also spoke at the unveiling, which followed a lecture focused on the complicated early history of the school.
“How fitting that you should begin your bicentennial,” said Faust, “with this ceremony reminding us that the path toward justice is neither smooth nor straight.” …
Halley, holder of the Royall Professorship of Law, who has spoken frequently about the Royall legacy, read aloud the names of enslaved men, women, and children of the Royall household from records that have survived, “so that we can all share together the shock of the sheer number,” she said, “and a brief shared experience of their loss.”
“These names are the tattered, ruined remains, the accidents of recording and the encrustation of a system that sought to convert human beings into property,” she said. “But they’re our tattered remains.”
This commemorative issue also contains an interview with ImeIme Umana, Harvard Law Review’s 131st president, “How Have Harvard Scholars Shaped the Law?”:
How has legal scholarship changed since the Law Review began publishing more than a century ago?
Scholarship certainly has changed over time, and these pieces, whether or not they acknowledge it to a great extent, are consistent with the changing nature of the legal field in that they bring more voices to the table and more diverse perspectives. If you look back at our older scholarship, you’ll tend to see more traditional, doctrinal, technical pieces. Now, they’re more aspirational, more critical, and have more social commentary in them. It’s a distinction between writing on what the law is and writing on what the law should be, and asking why things are the way they are.
What kind of scholarship do you find especially meaningful?
I’m really passionate about the state of the criminal legal system and civil rights. The cherry on top within those topics is scholarship that proposes new ways of thinking or challenges the status quo.
One of my favorite articles is [Assistant] Professor Andrew Crespo’s “Systemic Facts” [published in the June 2016 Harvard Law Review], because it does just that. The thesis is that courts are institutionally positioned to bring about systemic change, and that they can use their position to collect facts that they are institutionally privy to. It calls on them to do that such that we might learn more about how the legal system is structured.
I’ve noticed the increased emphasis on criminal law lately, especially bail reform.
The Law Review was founded 130 years ago, and now you are its president. Do you ever get caught up in thinking about the historical implications of running such a well-known and influential publication?
… Looking at it through a historical lens, the diversity of the student body and Law Review editors and authors is especially meaningful, as it makes legal institutions more inclusive, and therefore the law more inclusive. It’s important to keep pushing in that direction and never become complacent. The history is very important.
You are the first black woman who was elected to serve as president of the Law Review. Why do you think it took so long for that to happen?
I’ve thought about it a lot and I just don’t know the answer. My thought is that it just tracks the lack of inclusion of black women in legal institutions, full stop. It’s a function of that. There’s always more we can be doing to be more inclusive. The slowness of milestones like this might have a broader cause than just something specific to the Law Review.
It probably tracks closer to the inclusion of Nigerian women at Harvard than black women. Umana is Nigerian American, and Nigerian Americans score significantly better on the SAT and LSAT than African Americans. (Based on average incomes, Nigerian Americans do better than white Americans, too.) So I’m going to go out on a limb and wager that significant black firsts at HLR are due to the arrival of more Nigerian and Kenyan immigrants, rather than the integration of America’s African American community.
While reading about ImeIme Umana, I noticed that American publications–such as NBC News–describe her as a “native” of Harrisburg, Pennsylvania. By contrast, Financial Nigeria proudly claims her as a “Nigerian American”:
Born to Nigerian immigrant parents originally from Akwa Ibom State in Nigeria, Umana is a resident of Harrisburg, Pennsylvania, United States. Umana graduated with a BA in Joint Concentration in African American Studies and Government from Harvard University in 2014. She is currently working on a Doctor of Law degree (Class of 2018) at the Harvard Law School.
The issue is full of fascinating older photographs with minimalist captions, because the graphic design team prefers white space over information.
For example, on page 58 is a photo of a collection of students and older men (is that Judge Learned Hand in the first row?) captioned simply 1926 and “Stepping up: by 1925, lawyers could pursue graduate degrees (LL.M.s and S.J.D.s) at HLS.”
Seated in the front row is this man. Who is he? Quick perusal of a list of famous Indians reveals only that he isn’t any of them.
There is also an Asian man seated directly behind him whose photo I’ll post below. You might think, in our diversity obsessed age, when we track the first black editor of this and first black female head of that, someone would be curious enough about these men to tell us their stories. Who were they? How did they get to Harvard Law?
After some searching and help from @prius_1995, I think the Indian man is Dr. Kashi Narayan Malaviya, S.J.D. HLS 1926, and the Asian man is Domingo Tiongco Zavalla, LL.M. 1927, from the Philippines. (If you are curious, here are the relevant class lists.)
In Allahabad, during a meeting attended by Uma Nehru, Hriday Nath Kunzru and Dr. Kashi Narayan Malaviya, M. K. Acharya made the link between the politics of the nation and the plight of Hinduism very clear…
(Unfortunately, it appears that he has a more famous relative named Madan Mohan Malaviya, who is coming up in the search results. His great-grandson is single, however, if any of you ladies are looking for a Brahmin husband.)
Politico recently ran an article titled “What if you could get your own immigrant?” which was so terrible, I don’t even know where to begin. (Even they now realize their headline was atrocious, so they changed it to “Sponsor an immigrant yourself”.)
Politico wants to know: why do only corporations get to sponsor immigrants? Why not individuals? What’s so good about companies that they get special rights that we mere plebian humans don’t? That’s not a terrible question, but then they rip off the mask of decency and show their complete misunderstanding of, well, everything:
Right now, special classes of citizens—mostly corporations (and in practice, big corporations) and family members—can sponsor temporary or permanent migrants, benefiting shareholders mainly, as well as ethnic enclaves.
This system should be wiped away and replaced with a system of citizenship sponsorship for immigrants that we call a Visas Between Individuals Program. Under this new system, all citizens would have the right to sponsor a migrant for economic purposes.
Here’s how the program would work: Imagine a woman named Mary Turner, who lives in Wheeling, West Virginia. She was recently laid off from a chicken-processing plant and makes ends meet by walking and taking care of her neighbors’ pets. Mary could expand her little business by hiring some workers, but no one in the area would accept a wage she can afford. Mary goes online—to a new kind of international gig economy website, a Fiverr for immigrants—and applies to sponsor a migrant. She enters information about what she needs: someone with rudimentary English skills, no criminal record and an affection for animals. She offers a room in her basement, meals and $5 an hour. (Sponsors under this program would be exempt from paying minimum wage.) The website offers Mary some matches—people living in foreign countries who would like to spend some time in the United States and earn some money. After some back and forth, Mary interviews a woman named Sofia who lives in Paraguay.
In no particular order:
1. Mary is not an “individual” in this scenario, she is a small business owner looking to hire employees, so we are right back at square one: a company hiring immigrants. Now, maybe Mary hasn’t filed all of the paperwork to become a proper corporation–in which case she is running tremendous legal risks.
Look, corporations don’t exist because someone needed to split the cost of a big building. They exist to minimize the legal risks to individuals from running a business.
Corporations enjoy what is called “limited liability.” This means that while a corporation can be sued for all it is worth, the corporation’s owners get to keep whatever money they have in their personal bank accounts. If Donald Trump’s hotels get sued for, say, hiring discrimination, they can go bankrupt, go out of business, and get converted into very tall waterslides by a new round of developers, but the money in Donald Trump’s personal wallet is untouchable. (Which is why Trump is still wealthy after numerous bankruptcies.)
If Mary is just an individual and not a corporation, she bears personal liability for anything she or her employees do. For example, if a client’s prize-winning akita chokes on a chew toy and dies while at doggy daycare, she can be personally sued for the full $15,000 her clients paid for the pooch. If Sofia crashes the company car while on the way to a client’s house to pick up a dog, totaling another car in the process and putting a four year old girl in the hospital with crushed femurs and a punctured lung, Mary will be sued for every last penny while Sofia skips bail and hightails it out of the country.
In other words, once your small business is at the point where you are looking to hire employees and wondering how to do payroll taxes, you should be filling out that incorporation paperwork for your own benefit. “What if we let people who haven’t incorporated their small businesses and so face a lot more legal risks personally sponsor immigrants for economic gain?” is not good logic.
2. Dog walking business in West Virginia. Let me repeat that: Dog. Walking. Business. In. West. Virginia.
Yeah, after the chicken processing plant laid off all of its workers, apparently Mary’s neighbors discovered that they had tons of cash lying around just waiting to be spent on luxuries for their pets.
(Clarification for the stupid: normal West Virginians either walk their own dogs or just let them poop in the backyard. Professional dog-walkers are a New York thing, where urbanites assuage their guilt about leaving their surrogate children alone in tiny apartments for 14 hours a day while they file biglaw briefs by hiring other people to actually care for them.)
3. Mary is already barely making ends meet at an extremely low-income job that not many of her neighbors need done with zero barriers to entry, and her idea for making more money is to find someone who can live on even less income than herself? Is Sophia expected to eat in this scenario? Don’t forget that you now have to keep track of payroll taxes and deductions–most businesses hire a payroll service to do this for them, because legal compliance is tricky and doing it incorrectly can get you into very expensive trouble with the IRS.
4. If Sophia can make enough to live on, why would she give Mary any of the money? It’s not like dog walking is a complicated business that requires a professional to handle all of the client information. Sophia can just negotiate with the clients herself and give Mary nothing.
5. Oh, wait, Sophia lives in Mary’s basement and is required to give Mary the money she makes? We have a word for that: SLAVERY.
No, really, that actually happened under slavery. People who didn’t own slaves, or who needed a worker with a particular skill that someone else’s slave happened to have, would hire slaves from their owners. The slaves received a certain amount of wages, most of which went to the owner, but a certain percentage went to the slaves themselves, who could save up for pleasant things like new clothes or freedom.
Here’s a quote from the article:
According to our calculations, a typical family of four could boost its income by $10,000 to 20,000 by hosting migrants. The reason is that migrants to the United States usually increase their wages many times, allowing them to pay as much as $6,000 to hosts for sponsorships (and our average family could sponsor up to four visas, one for each member).
Where exactly are these four extra people sleeping in a household of four? The sofa?
Most slaves who worked for South Carolina College were “hired” on a short-term basis. Hiring out, or hiring, referred to a system in which a hirer would temporarily lease a slave from an owner. In doing so, owners generated revenue from their slaves’ labor without having an investment in the actual work itself. Slaves were more likely to face weekly, monthly, or yearly hiring than being permanently sold. Each year, five to fifteen percent of the slave population was hired for outside work. Conversely, less than four percent of slaves permanently exchanged hands. Hired slaves performed all kinds of labor: women worked domestic jobs such as laundering and wet-nursing, while men labored on roads, canals, and railroads. Others worked in industries such as mining coal, smelting iron, and processing tobacco. Skilled slaves might work as carpenters or blacksmiths. The number of hired slaves and the variety of jobs reflected not only the flexibility of slavery but also the importance of slaves as capital for owners and hirers.
6. You economists should realize that under a scenario like this, with unlimited visa supply, the equilibrium price of visas will drop to the cost of the visa and families will make nothing.
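The undercutting logic here can be sketched in a few lines of Python. This is an illustration with made-up numbers (the $500 cost and $50 undercut step are hypothetical, not from the article), not a real model: with an unlimited number of willing host families, each can win a migrant by charging slightly less than the next, until the fee hits the sponsor’s actual cost and the profit vanishes.

```python
# Bertrand-style undercutting with unlimited visa supply.
# All figures are hypothetical, for illustration only.

COST = 500          # assumed out-of-pocket cost of sponsoring one visa
START_PRICE = 6000  # the fee the article imagines hosts could charge
UNDERCUT = 50       # how much a competing family shaves off to win the migrant

def equilibrium_fee(price=START_PRICE, cost=COST, step=UNDERCUT):
    """Hosts keep undercutting each other as long as the lower fee still covers cost."""
    while price - step >= cost:
        price -= step  # another family offers sponsorship slightly cheaper
    return price

fee = equilibrium_fee()
profit = fee - COST
print(f"fee settles at ${fee}, host profit ${profit}")
# → fee settles at $500, host profit $0
```

The exact numbers don’t matter; as long as anyone who can fill out a form can enter the market, competition grinds the fee down to cost and the promised $6,000 per visa evaporates.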
7. No minimum wage, but only for the immigrants. Sure, let’s just make Americans unemployable.
Look, I understand if you want to do away with the minimum wage for everyone. There are coherent arguments you could make in favor of letting everyone work for whatever wage they can get and letting the market work it out. But this is legally creating two classes of people in which one group is more expensive to hire than the other–which obviates the entire point of having minimum wage laws and just doesn’t work.
8. There used to be a group of Americans who could be hired for slightly below minimum wage for small jobs: teenagers.
Teenagers mowed lawns, babysat, walked dogs, even picked fruit and flipped burgers. We still have teenagers in West Virginia who can walk and groom dogs–even 10-year-olds can probably be convinced to walk dogs for a dollar a dog per hour. Teenagers also have the benefit of having low living expenses because they still live with their parents, and the work experience they acquire in their high school years can translate into a sense of accomplishment, real jobs, and eventually, allow them to pay for real expenses. There is no sensible reason to import people from the third world to do the same job Mary’s 13-year-old neighbor could do equally well, unless you just hate children.
The elimination of jobs teenagers traditionally did (through the influx of low-wage immigrants who end up doing the jobs instead) means that modern teens no longer get that early experience with working, sense of accomplishment, and gradual transition to productive, working life. Instead, they graduate from college with no work experience and start looking for jobs that require 3-5 years of previous experience in the field.
9. The article seems to think that American society is some kind of bottomless money pit that can keep growing if we just put more poor people at the bottom. There’s a technical term for this: pyramid scheme.
“We can get richer if we just find more poor people to exploit” is not a long-term economic policy. It’s more like someone read Marx and thought “Wow, extracting Surplus Value from the proletariat sounds awesome!”
10. You might be thinking, “What if people just want to hire someone to be their personal servant?”
As the article notes, that’s already a thing. If you need a gardener, chef, maid, or live-in nanny, you can already find plenty of hireable people (immigrants included) to do these jobs–and these are not jobs that ordinary, working-class Americans are hiring anyone to do.
11. Enforcement. Say what you will about Google, at least I don’t have to worry about it keeping H1-Bs chained up in its basement, feeding them nothing but table scraps in between coding projects.
I have much less confidence in the sorts of people who think it would be a good idea to have 4 immigrants sleeping in their basements in order to reap their visa fees. In fact, I think these people will strongly resemble the sorts of people who take in foster kids for the fees, adopt orphans to get another pair of working hands, and generally think indentured servitude was a great idea.
And who is going to pay federal agents to comb people’s basements in search of immigrant mistreatment? Me.
However, the article suggests that the primary reason abuses won’t happen is that if people like Sophia don’t like their treatment, they’ll just use their extensive savings to buy an international plane ticket and hop back home.
12. “Sofia, who grew up in a village, has endured hardships that few Americans can imagine.”
A village. A VILLAGE, I TELL YOU. You cannot imagine the horrors of growing up in a clustered human settlement or community, larger than a hamlet but smaller than a town, with a population ranging from a few hundred to a few thousand.
I mean, just look at this Hungarian village:
Overall, I don’t think the author was totally crazy when he thought, “Hey, why do corporations get special rights that individuals don’t? Why let corporations pick immigrants and not ordinary people?” I, too, am uncomfortable with the idea of corporations having special rights. But trying to preserve the part of immigration that is based on “hiring people to do jobs” while doing away with the part where corporations are doing the hiring is missing the point of what corporations are: organizations that we route hiring through. The logic here is thus completely garbled.
But garbled logic aside, there is a much deeper problem. I’ve been saying for a long time that the demand for low-wage immigrants skirts perilously close to the logic behind slavery. “Americans are too good for these icky jobs; let’s import some brown people and make them do it.” This article strips away all pretense of valuing immigrants for their skills, perspectives, or can-do spirit: they are nothing but mobile economic units, cogs in an increasingly post-industrial machine.
Reach out and touch poverty
Your own personal immigrant
Someone to hear your orders
Someone who cares for your kids
Your own personal immigrant
Someone to hear your orders
Someone who’s trapped
And they’re all alone
Flesh and bone
No minimum wage at home
Just like Uber
I’ll make you a believer
Take second best
Put indentured servitude to the test
Things on your chest
You need to confess
You just want a slave
I will deliver your luxury
You know I’m expendable
Reach out and touch poverty
Your own personal immigrant
Reach out and touch poverty
When I was a kid and one of my friends would ask for a bit of food–a spare french fry or nugget, say–I would always say “no” and then give them the food.
In retrospect, I was annoying.
My logic was that I would of course give my friend a french fry–I always gave my friends french fries if they wanted them–and thus the asking was superfluous. If anything, I thought we should pile all of the food up in the middle of the table and then everyone could just take what they wanted.
I don’t think I realized that some people have bigger appetites than others. Or germs.
A couple of years later I had a little job that mostly paid in candy. Since I don’t really eat candy, I became known in school as “the kid with the Skittles” because I tended to give it all away.
Around this time I began writing the first mini-essays (really only a few sentences long) that eventually morphed into this blog on the psychological/spiritual/anthropological meaning of food-sharing. (Food is necessary for life; to give it away to someone else signals that you care enough about their well-being to take a potential hit to your own survival chances, hence the significance of food sharing rituals among people.)
It’s not too surprising that by high school I subscribed to some vague sort of communism.
Note: high school me didn’t know anything about the history of actual communism. I just liked the idea of a political ideology based on sharing.
So I think I get where a lot of young “communists” are probably coming from. I loved my friends and enjoyed sharing with them so wouldn’t everyone be better off if everyone acted like friends and everyone shared?
There were two problems with my logic. The first, of course, is that not everyone is friends. The second is that in the real world, food costs money.
When I was a kid, food was, functionally, free: my parents paid for it. I got the exact same amount of french fries and pizza on my lunch tray as everyone else, whether I was hungry or not. In the real world, I don’t buy more french fries than I want to eat–I save that extra money for things I do want, like books.
So what happens if I want books and you want food? Or you want books and I want food? And you and I aren’t even friends? Or worse, when there isn’t enough food for both of us?
Sharing is great when everything is free and there’s plenty of it, or when there’s a resource that you can only afford if you pitch in with several friends to purchase it. (For example, everyone in the house shares the TV.) In other words, when you’re a kid.
But it scales up really badly.
The best laid schemes o’ mice an’ men
Gang aft a-gley.
Every single country that has ever tried communism ended up a disaster. Tens of millions starved to death in the USSR and China. Millions were murdered in Cambodia. North Korea is still an inescapable hellhole. Communism’s total death toll is estimated around 100 million people.
We didn’t exactly learn much about the USSR in high school (or before). It was one of the players in WWII, vaguely present in the few readings we had time for after the war, but certainly of much less prominence than things like the Vietnam War. It was only in college that I took actual courses that covered the PRC and USSR (and then only because they were relevant to my career aspirations). How much does the average person know about the history of other countries, especially outside of western Europe?
One of my kids accidentally did a report on North Korea (they were trying to do a report on South Korea, but accidentally clicked the wrong country). The material they were given for the report covered North Korean mountains, rivers, cities, language, flag… And mentioned nothing about the country being just about one of the worst places on earth, where people are routinely starved and tortured to death.
Schools make sure to teach about the horrors of the Holocaust and slavery, but they don’t (as far as I know) teach about the horrors of communism.
So I think we could be in for a mess of trouble–because I understand just how appealing the political ideology of “sharing” sounds when you don’t know what it actually means.