Conspiracy Theory Theory

Our ancestors–probably long before they were even human–had to differentiate between living and non-living things. Living things can be eaten (and, importantly, can eat you back); non-living things generally taste bad, can’t be eaten, and won’t try to eat you.

This is a task of such essential importance that I think it is basically an innate ability common to all thinking animals. Rabbits and fish need to distinguish between living and non-living things; both need to know whether the lump over there is a rock or a predator, after all. And we humans don’t have to explain to our children that cats and dogs are alive but tables aren’t. (Indeed, a defect in this ability that causes a person to regard tables as alive, or other people as not, is remarkable–and dangerous–when it happens.)

It is easy to divide most things into living and non-living. Living things move and grow; non-living things do not. Rabbits move. Rocks don’t. (Plants don’t move much, but they do grow. They’re also helpfully color-coded.)

But what about non-living things that nonetheless move and grow, like rivers or clouds? You can’t catch a cloud; you can’t eat it; but it still has a behavior that we can talk about, eg: “The clouds are building up on the horizon,” “The clouds moved in from the east,” “The clouds faded away.” Clouds and stars, sun and moon, rivers and tides all have their particular behaviors, unlike rocks, dirt, and fallen logs.

When it comes to mistakes along the living/non-living boundary, it is clearly better to mistakenly believe that something might be alive than to assume that it isn’t. If I mistake a rock for a lion, I will probably live until tomorrow, but if I mistake a lion for a rock, I very well may not. So we are probably inclined to treat anything that basically moves and behaves like a living thing as a living thing, at least until we have more information about it.

And thus our ancestors, who had no information about how or why the sun moved through the sky, were left to conclude that the sun was either a conscious being that moved because it wanted to, or was at least controlled by such a being. Same for the moon and the stars, the rivers and tides.

Moreover, these beings were clearly more powerful than men, especially ancient men. We cannot catch the sun; we live at the mercy of the wind and the rain. Rivers can sweep us away, and sudden storms dash boats to pieces. We live or die according to their whims.

So ancient man believed these things were sentient, called them “gods” (or devils) and attempted to placate them through sacrifice and prayer.

Centuries of scientific research have gradually uncovered the secrets of the universe. We’ve figured out why the sun appears to move as it does, why clouds form, and that frogs aren’t actually generated by mud. We’ve also figured out that the “influence” of the stars (influenza, in Italian) doesn’t actually cause sickness, though the name persists.

We know better rationally, but the instinct to ascribe personhood to certain inanimate objects still persists: it’s why programs like Thomas the Tank Engine are so popular with children. Trains move, therefore trains are alive and must have feelings and personalities. It’s why I have to remind myself occasionally that Pluto is an icy space rock and doesn’t actually feel sad about being demoted from planet to dwarf planet.

If something acts like a conscious thing and talks like a conscious thing, we’re still liable to treat it like a conscious thing–even if we know it’s not.

Today, the vast implacable forces that rain down on people’s lives are less the weather and more often organizations like the IRS or the local grocery store. These organizations clearly “do” things on purpose, because they were set up with that intention. The grocery store sells groceries. The IRS audits your taxes. Wendy’s posts on Twitter. The US invades other countries.

If organizations act like conscious entities, then it is natural for people to think of them as conscious entities, even though we know they are actually made of hundreds or thousands of individual people (many of whom don’t even like their jobs) executing, with varying degrees of accuracy, the instructions and procedures laid down for them by their bosses and predecessors for how to get things done. The bag boy at the grocery store does not think about lofty matters like “how to get food from the farm to the table”; he merely puts the groceries in the bags, with an eye toward not breaking the eggs and not using too many bags.

Human institutions often become so big that no one has effective control over them anymore. One part has no idea how another part is operating. An organization may forget its original purpose entirely, eg, MTV’s transition away from music videos and The Learning Channel’s away from anything educational.

When this happens, their behavior begins to look erratic. Why would an organization do anything counter to its stated purpose? The real answer is dissatisfying: no one is actually running the show; the entire organization is just a loose network of people, each following the instructions for their own little part without any oversight or ability to affect the whole; and the entire machinery has gone completely out of kilter. Since the organization looks like a conscious thing, people reason, it must be a conscious thing, and it must therefore have reasons for its behavior.

Trying to explain organizations’ behaviors in terms of conscious intent gets us quickly into the realm of conspiracy theories. For example, I am sure you have all heard the claim that, “Cheap cancer cures exist, but doctors don’t want you to know about them because they want to keep you sick for longer so they can sell you more expensive medicines.” Well, this is kind of half-true. The true part is that the medical system is biased toward more expensive medications, but not because doctors make more from them. (If you could prove that you can cure cancer with, say, a mega-dose of Vitamin C, the vitamin companies would be absolutely thrilled to bring “Cancer Bustin’ Vit C” to market.) The not-true part is the idea that this is all being done intentionally.

Doctors can only prescribe medications that have official FDA approval. This keeps patients safe from quackery and keeps doctors safe(r) from the possibility of getting sued if their treatments don’t work or have unexpected side effects.

FDA approval is difficult to get. The process requires long and rigorous medical trials to ensure that medications are safe and effective. Long, rigorous medical trials are expensive.

As a result, pharmaceutical companies only want to spend millions of dollars on medical trials for drugs that they think have the potential to make millions of dollars. Any drug company that spent millions of dollars on trials for cheap treatments it couldn’t sell for millions of dollars would quickly go out of business.
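To make that incentive concrete, here is a back-of-the-envelope sketch. Every number in it (the trial cost, prices, patient counts, years of exclusivity) is hypothetical, invented purely to illustrate the comparison; real figures vary enormously.

```python
# Back-of-the-envelope sketch of the incentive described above.
# All numbers are made up for illustration only.

TRIAL_COST = 500_000_000  # hypothetical cost of a full approval trial

def worth_running_trial(price_per_course, patients_per_year, years_of_exclusivity):
    """Return whether projected revenue would cover the trial cost."""
    revenue = price_per_course * patients_per_year * years_of_exclusivity
    return revenue > TRIAL_COST, revenue

# A cheap, unpatentable treatment (say, a vitamin) at $20 per course:
print(worth_running_trial(20, 100_000, 10))      # (False, 20000000) -> never funded
# An expensive patented drug at $50,000 per course:
print(worth_running_trial(50_000, 100_000, 10))  # (True, 50000000000) -> funded
```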

To sum up:

  1. Doctors can only prescribe FDA-approved treatments
  2. The FDA requires long, rigorous trials to make sure treatments are safe
  3. Long trials are expensive
  4. Drug companies therefore prefer to do expensive trials only on expensive drugs they can actually make money on.

No one designed this system with the intention of keeping cheap medical treatments off the market, because no one designed the system in the first place. It was assembled bit by bit over the course of a hundred years by different people from different organizations with different interests. It is the sum total of thousands (maybe millions) of people’s decisions, most of which made sense at the time.

That said, the system actually does make it harder for patients to get cheap medical treatments. The fact that this consequence is unintended does not make it any less real (or important).

There are, unfortunately, plenty of people who focus only on each particular step in the process, decide that each step is justified, and conclude that the net result must therefore also be justified, without ever looking at that result. This is roughly the opposite of over-ascribing intention to organizations: a failure to acknowledge that the unintended, emergent behaviors of organizations exist and have real consequences. These sorts of people will generally harp on the justification for particular rules and insist that these justifications are so important that they override any greater concern. For example, they will insist that it is vital that drug trials cost millions of dollars in order to protect patients from potential medical side effects, while ignoring the patients who died because drug companies couldn’t afford to develop treatments for their disorders.

But back to conspiracy theories: when organizations act like conscious creatures, it is very natural to think that they actually are conscious, or at least are controlled by conscious, intentional beings. It’s much more satisfying, frankly, than just assuming that they are made up of random people who actually have no idea what they’re doing.

Now that I think about it, this is all very fundamental to the principal ideas underlying this blog: organizations act like conscious creatures and are subject to many of the same biological rules as conscious creatures, but do not possess true consciousness.

Businesses, for example, must make enough money to cover their operating expenses, just as animals must eat enough calories to power their bodies. If one restaurant produces tasty food more efficiently than its competitor, thus making more money, then it will tend to outcompete and replace that competitor. Restaurants that cannot make enough money go out of business quickly.

Similarly, countries must procure enough food/energy to feed their people, or mass starvation will occur. They must also be strong enough to defend themselves against other countries, just as animals have to make sure other animals don’t eat them.

Since these organizations act like conscious creatures, it is a convenient shorthand to talk about them as though they were conscious. We say things like, “The US invaded Vietnam,” even though the US as a whole never decided that it would be a good idea to invade Vietnam and then went and did so. (The president has a major role in US foreign policy, but he doesn’t act alone.)

Most systems/organizations don’t have anyone that’s truly in charge. We can talk about “the American medical system,” but there is no one who runs the American medical system. We can talk about “the media,” but there is no one in charge of the media; no one decided one day that we were switching from paper newspapers to online click-bait. We talk about “society,” but no one is in charge of society.

This is not to say that organizations never have anyone in charge: tons of them do. Small businesses and departments in particular tend to have someone running them and goals they are trying to accomplish. I’m also not saying that conspiracies never happen: of course they do. These are just general observations about the general behavior of organized human groups: they can act like living creatures and are subject to many of the same rules as living creatures, which makes us inclined to think of them as conscious even when they aren’t.


Reverse Psychiatry?

One of the big problems with psychiatric medications is that they tend to stop working over time. Mundanely, they do this as your body processes and excretes the chemicals: they wear off. More annoyingly, brains will actually up- or down-regulate their own activities over time in order to reestablish normalcy. Alcohol, for example, is a depressant, so the brains of alcoholics actually become more active over time in order to achieve a more normal brain state. At this point, if you remove the alcohol, the brain can no longer function, because it is now too active: alcoholics in withdrawal can go into seizures.

If you’re trying to give people medications to make them feel better, like anti-depressants or anti-anxiety drugs, then you have to fight against these two problems: 1. you don’t want the medication to just wear off every evening, leaving the patient in a funk for the rest of the day; and 2. you don’t want the patient to become habituated to the medication, to the point where it no longer works and, if they try to go off of it (perhaps to switch to another medication), things get much worse.

So I was thinking, why not use the rebound effects? Suppose a depressed person took a medication right before bed that, like alcohol, was effectively a downer, but would wear off in 8 hours and leave them in a happier state? And after three months of constant use, maybe their brains would habituate to the medication by producing more of whatever counteracts an unhappy state?

Has anyone studied or tested any drugs that work like this?

There’s an obvious downside here, which is that you’re intentionally trying to make someone who already feels bad feel worse, which is why you’d probably want to couple it with some sort of sleep aid (and that probably wouldn’t work with anything that makes people anxious, so maybe it’s not an effective anxiety treatment). You’d want to keep a very close eye on people when starting such a treatment, of course.

But more generally, has anyone tried to use rebound states and habituation to get the brain where you want it to be, rather than fighting against them? If it worked, we could call it reverse psychiatry.

Spiteful Mutants?

I recently received an inquiry as to my opinion about the “spiteful mutant hypothesis.” After a bit of reading about genetic deletions in rat colonies I realized that the question was probably referring to bioleninism rather than rodents (though both are valid).

Of Mice and Men: Empirical Support for the Population-Based Social Epistasis Amplification Model, by Serraf and Woodley of Menie, is an interesting article. The authors look at a study by Kalbassi et al. (2017) on social structures in mouse populations. Experimenters raised two groups of mice. One group had mice with normal mouse genes; mice in this group were sensitive to mouse social cues and formed normal mouse social hierarchies. The other group had mostly normal mice, but also some mice with genetic mutations that made them less sensitive to social cues. In the second group, the mutant mice were not simply excluded while the rest of the mice went on their merry way, but the entire structure of the group changed:

Among the more striking findings are that the genotypically mixed … litters lacked “a structured social hierarchy” (p. 9) and had lower levels of testosterone (in both Nlgn3y/- and Nlgn3y/+ mice); additionally, Nlgn3y/+ mice from genotypically homogeneous litters showed more interest in “social” as opposed to “non-social cues” (p. 9) than Nlgn3y/+ mice from genotypically mixed litters [the latter did not show a preference for one type of cue over the other, “showing an absence of interest for social cues” (p. 9)].

In other words, in litters where all of the mice are social, they can depend on each other to send and receive social cues, so they do, and so they form social hierarchies. Somehow, this makes the (male) mice have a lot of testosterone. In litters where the mice cannot depend on their companions to consistently send and receive social cues, even the genetically normal mice don’t bother as much, social hierarchies fail to form, and the mice have less testosterone.

A “spiteful mutation” in this context is one that imposes costs not only on the carrier, but on those around them. In this case, by changing the social structure and decreasing the testosterone of the other mice.

It’s a good article (and not long); you should read it before going on.

So what is bioleninism? I’ve seen the term kicked around enough to have a vague sense of it, but let’s be a bit more formal–with thanks to Samir Pandilwar for succinctness:

Developed by Spandrell (alias, Bloody Shovel) it takes the basic Leninist model of building a Party to rule the state out of the dregs of society, and shifts this to the realm of biology, wrong-think biology in particular, building the party out of people who are permanent losers within the social order.

I think the term gets used more generally when people notice that people in positions of power (or striving to make themselves more powerful via leftist politics) are particularly unattractive. In this context, these people are the “spiteful mutants” trying to change the social structure to benefit themselves.

We humans, at least in the west, like to think of ourselves as “individuals” but we aren’t really, not completely. As Aristotle wrote, “Man is a political animal;” we are a social species and most of us can’t survive without society at large–perhaps none of us. Virtually all humans live in a tribe or community of some sort, or in the most isolated cases, have at least occasional trade with others for things they cannot produce themselves.

Our species has been social for its entire existence–even our nearest relatives, the other chimps and gorillas, are social animals, living in troops or families.

We talk a lot about “increasing atomization and individualism” in populations that have transitioned from traditional agricultural (or other) lifestyles to the urban, industrial/post-industrial life of the cities, and this is certainly true in a legal sense, but in a practical sense we are becoming less individual.

A man who lives alone in the mountains must do and provide most things for himself; he produces his own food, is warmed by the efforts of his own ax, and drinks water from his own well. Even his trash is his own responsibility. Meanwhile, people in the city depend on others for so many aspects of their lives: their food is shipped in, their hair is cut by strangers, their houses are cleaned by maids, their water comes from a tap, and even their children may be raised by strangers (often by necessity rather than choice). The man in the mountains is more properly an individual, while the man in the city is inextricably bound together with his fellows.

There isn’t anything objectively wrong with any particular piece of this (fine dining is delicious and hauling water is overrated), but I find the collective effect on people who have come to expect to live this way vaguely unnerving. It’s as though they have shed pieces of themselves and outsourced them to others.

Or as Kaczynski put it:

The industrial-technological system may survive or it may break down. If it survives, it MAY eventually achieve a low level of physical and psychological suffering, but only after passing through a long and very painful period of adjustment and only at the cost of permanently reducing human beings and many other living organisms to engineered products and mere cogs in the social machine. Furthermore, if the system survives, the consequences will be inevitable: There is no way of reforming or modifying the system so as to prevent it from depriving people of dignity and autonomy.

(I have not read the whole of his manifesto, but I keep returning to this point, so eventually I should.)

How different are we from the little bees who cannot live on their own, but each have their role in the buzzing hive? (This is where the spiteful mutant hypothesis comes in.) Bees don’t arrange themselves by talking it over and deciding that this bee would be happy visiting flowers and that bee would be happy guarding the hive. It’s all decided beforehand via bee genetics.

How much “free will” do we really have to choose our human social relations, and how much of it is instinctual? Do we choose whom we love and hate, whom we respect and whom we deem idiots? Did we choose who would invent a billion-dollar company and who would be homeless?

(We don’t really know how far instincts and biology go, of course.)

Any genes that affect how human societies cohere and the social hierarchies we form would likely produce different results if found in different quantities in different groups, just like the genes in the mouse models. Such genes could predispose us to be more or less social, more or less aggressive, or perhaps to value some other elements in our groups.

One of the most under-discussed changes wrought by the modern era is the massive decrease in infant mortality. Our recent ancestors suffered infant mortality rates between 20 and 40 percent (sometimes higher.) Dead children were once a near-universal misery; today, almost all of them live.

Among the dead, of course, were some quantity of carriers of deleterious mutations, such as those predisposing one to walk off a cliff or to be susceptible to malaria. Today, our mutants survive–sometimes even those suffering extreme malfunctions.

This doesn’t imply that we need high disease levels to weed out bad mutations: the Native Americans had nice, low disease levels prior to contact with European and African peoples, and their societies seem to have been perfectly healthy. This low-disease state was probably the default our ancestors all enjoyed prior to the invention of agriculture and dense, urban living. They probably still had high rates of infant mortality by modern standards (I haven’t been able to find numbers, but our relatives the chimps and bonobos have infant mortality rates around 20-30%.)

That all said, I’m not convinced that all this so-called “autistic” behavior (eg, the mouse models) is bad. Humans who are focused on things instead of social relations have gifted us much of modern technology. Would we give up irascible geniuses like Isaac Newton just to be more hierarchical? The folks implicitly criticized in the “bioleninist” model are far more obsessed with social hierarchies (and their place in them) than I am. I do not want to live like them, constantly analyzing every social interaction for whether it contains micro-slights or whether someone has properly acknowledged my exact social status (“That’s Doctor X, you sexist cretin.”)

I want to be left in peace.

Extroverts vs Introverts

[Image: Isaac Newton]

Prior to lockdown, I probably would have objected on some level to the introvert-extrovert dichotomy. After all, it seems greatly oversimplified, given the wide variety of personalities in the world.

But watching people react to lockdown has been very interesting, and I have concluded that some people really do lean toward intro- or extraversion.

The introverts have reported–aside from sensible worries about the virus–feeling better during lockdown. I’ve talked to a couple of people who reported feeling relieved that they didn’t have to go in to jobs they disliked, and others who found being inside for a week surprisingly pleasant.

Most of the people I’ve talked to, however, found quarantine immediately and intensely awful.

Note: This post is not about anyone suffering from real harm during lockdown, like being unable to earn the money they need to eat. This is only about the stress people feel when unable to get together with friends or generally go out among others.

I’ve now heard about a dozen times that depressives should not self-isolate. Being alone is bad, my friends tell me. I’ve always had my doubts about this. What is so bad about having a little time to myself, to browse the internet or read a book? Of course, I live with my family, so I barely get to be alone in the bathroom–maybe we all want what we don’t have. But at least within the level of quarantine we have, not having left the house in weeks, we have been fine.

I think the difference, at least for some people, stems from where we get our happiness and self-worth. If you get most of those emotions from within, or from hobbies that you can easily carry on at home (like reading or growing bonsais), then being cut off from other people can be frustrating, but you’ll be okay. By contrast, if you can’t really produce those emotions for yourself (perhaps due to some glitch), then you are more likely to seek them out in others. If your access to other people is suddenly cut off, then you’re in quite the bind: you can no longer access positive feelings.

Most of the time, extroverts seem happy and introverts seem like depressives, and that may be true for many of them (notably, depressed extroverts may fail to go out, making them look like introverts). But I propose a sub-type of extroverts who find time alone intolerable because they are really quite unhappy inside. By contrast, introverts may not share the effusive, bubbly style of extroverts, but that does not mean they are not feeling positive emotions–they just do not feel the need to convey those emotions to others. (And as for depressed introverts, well, I don’t know whether being around more people would make the situation better or worse.)

It is tempting to criticize people for being unable to generate their own positive feelings, but remember that man is a political–ie, social–animal. Our natural state is to live in bands and troops, same as our cousins the chimps, bonobos, and gorillas. We are supposed to want to be around each other, and it is normal for us to feel great distress if we are alone, which is why solitary confinement is so bad:

Solitary confinement has received severe criticism for having detrimental psychological effects[4] and, to some and in some cases, constituting torture.[5] According to a 2017 review study, “a robust scientific literature has established the negative psychological effects of solitary confinement”, leading to “an emerging consensus among correctional as well as professional, mental health, legal, and human rights organizations to drastically limit the use of solitary confinement.”[6]

Research surrounding the possible psychological and physiological effects of solitary confinement dates back to the 1830s. When the new prison discipline of separate confinement was introduced at the Eastern State Penitentiary in Philadelphia in 1829, commentators attributed the high rates of mental breakdown to the system of isolating prisoners in their cells. … Prison records from the Denmark institute in 1870 to 1920 indicate that staff noticed inmates were exhibiting signs of mental illnesses while in isolation, revealing that the persistent problem has been around for decades.[8]

According to the Journal of the American Academy of Psychiatry and the Law, solitary confinement can cause an array of mental disorders, as well as provoke an already existing mental disorder in a prisoner, causing more trauma and symptoms. …

The American Civil Liberties Union (ACLU) and Human Rights Watch created a report that incorporated the testimony of some juvenile inmates. Many interviews described how their placement in solitary confinement exacerbated the stresses of being in jail or prison. Many spoke of harming themselves with staples, razors, even plastic eating utensils, having hallucinations, losing touch with reality, and having thoughts of or attempting suicide – all this while having very limited access to health care.[10]:29–35 …

As well as severe and damaging psychological effects, solitary confinement manifests physiologically as well. Solitary confinement has been reported to cause hypertension, headaches and migraines, profuse sweating, dizziness, and heart palpitations. Many inmates also experience extreme weight loss due to digestion complications and abdominal pain. Many of these symptoms are due to the intense anxiety and sensory deprivation. Inmates can also experience neck and back pain and muscle stiffness due to long periods of little to no physical activity. These symptoms often worsen with repeated visits to solitary confinement.[11]

Keep in mind that the alternative to solitary is being around a bunch of criminals, people not generally thought to be terribly pleasant companions.

Of course, some people prefer to be alone. Some people prefer to be with others. If you’re having a rough time in quarantine, well, at least you’re not alone (metaphorically, at least).

Stay safe, stay healthy, and see if you can invent some new math while you’re stuck inside.

 

Why people believe wrong things, pt 1/?

In our continuing quest to understand why people believe wrong things (and how to be better judges of information ourselves,) let’s take a look at this article: ‘I brainwashed myself with the internet’: Nearly 45 weeks pregnant, she wanted a “freebirth” with no doctors. Online groups convinced her it would be OK.

Not exactly the catchiest title, but still worth a read.

Long story short, Judith, a first-time mom-to-be, started listening to homebirth and “freebirth” podcasts on her way to work and decided that giving birth at home, with just her husband and maybe a couple of friends (but no doctors, midwives, or other trained medical personnel), sounded like a good idea.

Unsurprisingly, there were complications and the baby was stillborn, a month late.

Birth is hard. Childbirth has historically been one of the major killers of women (and their infants). Modern medical care does not remove all of the risks of childbirth, but you are still significantly less likely to die giving birth in a hospital than alone in your bathroom.

I don’t want to spend this post criticizing Judith (there’s enough of that already out there.) I want to examine what could possibly possess a woman intelligent enough to have a job and drive a car to trust her life and her infant’s to… nothing at all? What made her think this was a good idea?

The article blames three things. First, Judith blames herself (naturally). Second, the author blames “algorithms,” the modern scare-word for “the internet is run on code.” And third, the “freebirth” and similar communities themselves fall under scrutiny.

I think one more person deserves blame: Judith’s husband, who supported his wife’s decision to forgo medical care during childbirth and didn’t intervene on behalf of his child.

1. Let’s start with the algorithms. We’ve seen a lot of scaremongering lately about “algorithms.” Supposedly the dark magic of the internet can lure unsuspecting, innocent people deeper and deeper into the depths of political conspiracies, creepy kids’ videos on YouTube, or straight up flat-Earthers:

With a little help from algorithms that nudged increasingly questionable information and sources her way, Judith had become a part of the internet’s most extreme pregnancy communities. …

Social media has come under fire in recent years for amplifying extreme views and employing algorithms that connect users to these potentially dangerous echo chambers. Although much of this criticism has focused on political extremism, experts and lawmakers have also pointed to extremism fueled by health misinformation as a threat to individuals and the public health at large.

“Things can get a little dicey,” said Kolina Koltai, a researcher at the University of Texas at Austin, who studies the social media behavior of alternative health communities. “Not to demonize all of the groups, but when women start diagnosing and crowdsourcing health-related issues, they can end up getting bad medical advice that can be pretty dangerous.

“We’re in this weird time, like a new digital Wild, Wild West,” Koltai said.

Wow, “algorithms” sound really bad–except that all these algorithms actually do is recommend things similar to things you already like. It’s the same thing that happens on Amazon. If you search for slime, Amazon will show you a bunch of other slime-related products bought by other people who searched for slime. If you listen to your favorite folk band on YouTube, the suggestions bar will be filled with more folk bands listened to by people who also listened to your favorite band. Judith searched for freebirth, so she got recommendations related to freebirth.

A well-functioning algorithm does nothing more than try to recommend more of what you’re already interested in. If it works, you find something you want, like a new song or the perfect slime. If it doesn’t, you’re frustrated. 
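For the curious, here is a minimal sketch of that idea in code. Everything in it is made up for illustration (the toy browsing histories, the topics, the recommend function are all hypothetical); real recommender systems at YouTube, Amazon, or Facebook are vastly more elaborate, but the core logic really is just “people who engaged with X also engaged with Y.”

```python
from collections import Counter

# Hypothetical browsing histories: each user is a set of topics they engaged with.
histories = [
    {"slime", "craft supplies", "glitter"},
    {"slime", "craft supplies"},
    {"folk music", "banjo"},
    {"freebirth", "home birth", "natural parenting"},
    {"freebirth", "natural parenting"},
]

def recommend(topic, histories, top_n=3):
    """Suggest the topics that most often co-occur with the given topic."""
    counts = Counter()
    for history in histories:
        if topic in history:
            counts.update(history - {topic})
    return [t for t, _ in counts.most_common(top_n)]

print(recommend("slime", histories))      # ['craft supplies', 'glitter']
print(recommend("freebirth", histories))  # ['natural parenting', 'home birth']
```

The sketch shows nothing more sinister than counting: search for slime and you get more slime-adjacent suggestions; search for freebirth and you get more freebirth-adjacent ones.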

If you join a bunch of Facebook communities about “natural birth” and “free birth,” you’ll get recommendations for more of the same. But there is nothing that compels people to click on these links, join these communities, and uncritically believe everything they read in them, any more than people are forced to buy “related items” advertised on Amazon. (And, by the way, no one is forcing you to watch weird porn on YouTube. “The algorithm made me do it” is the most pathetic 14-year-old-who-just-got-caught-watching-porn excuse I have ever heard.)

Judith didn’t just click recommended links–she actively sought out communities that would support her decision to ignore her doctor and midwife:

Judith also checked in with a local midwifery collective but ignored the gentle, constant advice that she induce.

Judith found a second opinion on Facebook. …

Searching the hashtag #43weekspregnant led her to a Facebook group called “Ten Month Mamas,” made up of a few hundred women who knew what she was going through. Judith joined.

Maybe, without the algorithms, she wouldn’t have known that there was more out there or thought to search for #43weekspregnant, but since she had just seen her doctor and midwife, she could have just as easily searched for information from doctors or real medical papers on the risks of being 43 weeks pregnant. Sweden did a whole study on this, but canceled it when 6 babies died at 42 weeks, so that information is definitely out there. (Thanks, Swedish friend, for the information.)

There’s been a push recently to blame “algorithms” for all of the bad stuff on the internet, whether it makes sense or not, as part of a wider push to make certain kinds of information more difficult to find. But the problem here is not the algorithms. (I, too, have been pregnant, and I have researched my options and even encountered freebirth advocates, but I didn’t try to give birth unassisted in a yurt because the idea never appealed to me.) The problem was the things Judith wanted.

You can’t change the overall algorithm to stop people like Judith from finding the information they want without breaking the algorithms for everyone. You might be able to build an algorithm that automatically detects certain types of behavior, like trolling or pornography, but “bad advice” is much harder to recognize (otherwise we wouldn’t be having this conversation). Communities like Judith’s would basically have to be shadowbanned or deleted on a case-by-case basis, which is both a lot of work for Facebook and an intrusive level of censorship. Yes, freebirth communities are clearly advocating something dangerous, but what about communities devoted to homebirth? Midwives? People who just really hate c-sections? Drawing a line between what is and isn’t “clearly dangerous” isn’t easy.

Finally, I would just like to note, in response to Mrs. Koltai’s comment, that millions of women (and men) have gotten excellent, life-saving medical advice online, much of it from non-professionals.

2. More credible blame lies, of course, with Judith and her husband. Judith chose her podcasts and sought out communities of like-minded people for support. She admits that she effectively “brainwashed” herself.

Why were these more appealing to Judith than regular medical advice?

Remember the old X-Files tagline: I want to believe?

Judith wanted a healthy pregnancy, uncomplicated labor, and a healthy baby–like almost all pregnant women–and if she’d had that, her freebirth would have gone off without a hitch. (Unfortunately, even the best of pregnancies can go wrong, rapidly and unexpectedly, once labor begins.) I think Judith wanted a healthy baby and uncomplicated labor so strongly that she refused to take seriously any information to the contrary. 

“43+1 today, politely declining hospital induction. … I really feel like this baby wants a home birth too but we are definitely being tested. What would you mamas do?”

This is a big red flag. Obviously fetuses cannot “want” anything. They are fetuses. Even babies do not “want” anything beyond the basics of infant care, like feeding, sleeping, and not being hot or cold. Do breech babies “want” to be born via c-section? Do ectopic pregnancies “want” to be implanted outside the uterus? No. This is magical thinking. You’re deluding yourself into thinking that “the baby wants” what you actually want, when it shouldn’t be about what anyone wants. It should be about what’s healthy.

When we want something to be true so much that we are willing to ignore evidence to the contrary, we experience cognitive dissonance. And we all do it. We all have things that we want to be true. Could our favorite politician really be a scumbag? Could our most cherished political solutions actually make things worse? Could our spouse cheat on us? Could we be not as smart as we think we are?

Sometimes I can feel cognitive dissonance–it’s this uncomfortable sensation in my head when trying to think about specific topics, like “presidents” and “people who are prettier than me.”

One sign that you are lying to yourself is that you have to hide what you are doing or thinking from the people who love you. Judith knew her relatives, aside from her husband, wouldn’t approve. She knew they would be afraid for her health or her baby’s health, but instead of listening, she hid what she was doing.

Unfortunately, merely wanting something to be true doesn’t make it true, in politics or real life, but these birth communities Judith had joined focused on the kind of magical thinking that leads people to believe that they can influence reality just by wishing hard enough:

“Birth is not a medical event but a spontaneous function of biology,” Free Birth Society instructor Yolande Norris-Clark says in the welcome video. It isn’t luck, Norris-Clark, an artist and mother of eight in the Canadian province of New Brunswick, breathily offers, but education, mindset and love of your baby, that hold the keys to successful freebirthing.

Education, love, and mindset have nothing to do with it. There are completely uneducated sharks mindlessly having bunches of healthy babies they’ll never love or care for out in the ocean and deeply loving, educated women with degrees in obstetrics and midwifery who need emergency c-sections due to ruptured uteruses.

Young people, especially, are used to having a fair amount of control over their lives, especially their bodies. They aren’t old enough yet to have been betrayed by hips, backs, or failing memories. They know that if they exercise they can lose weight or gain muscle. If they drink coffee they can stay up all night. They are used to thinking that with enough willpower, they can make their bodies do whatever they want.

Then comes labor. Labor is out of your control. You can no more “willpower” your way out of a bad labor than out of a failing kidney, and anyone who tells you that you can is lying. It can be deeply shocking.

3. The freebirth podcasts sold Judith a story–literally. She paid $300 for the Free Birth Society’s guide to how to have a baby at home, even though the process is literally “Wait until you go into labor. Try to find a comfortable position. Keep doing this until the baby comes out.” It’s a beautiful story, full of candles and yurts and beautiful thoughts and soaring spirits, but it’s still just an extremely expensive story.

The doctor will get sued if you die in his care, so the doctor has some incentive not to let you die. The random lady on the internet who sold you a $300 video about using your dog as a midwife will not, because who the hell trusts a dog to be their midwife?

Beware of people selling you a beautiful story who will not be impacted if you are the one who dies. They have no skin in this game.

These people sold Judith the idea that the magic of wishful thinking would get her a healthy baby and uncomplicated labor, and Judith wanted that healthy baby so much that she bought into it.

The actual claims of the “freebirth” and “home birth” communities bear investigating.

The course paints expectant mothers as warriors — and experts, doctors and midwives as the enemy.

In the video, Norris-Clark warns against induction, calling any procedure to bring on labor an “eviction from the womb.” …

Judith said the podcasts fanned her unease with doctors and medicine into a hot distrust, a common refrain in the freebirth community, in which hospital births are largely spoken about as traumatic experiences — harried medical teams rushing, poking, strapping women down to beds and pumping them full of drugs that confuse the mind, strangle the hormones responsible for love and push them into procedures that they didn’t feel they needed. Terms like “industrial obstetric tyranny” and “rape culture” are often used.

In general, you shouldn’t trust anyone who uses the phrase “rape culture” for anything that doesn’t involve actual rape. Prisons? Rape culture. Epidurals? Not rape culture.

Many people talk about women having “more control” at home than at the hospital. This is true, in a sense: you certainly won’t get an epidural if you don’t want one. It is untrue, though, in an emergency: you have far fewer options at home. If you need a c-section, well, you’re out of luck.

Doctors are not always right. Doctors are human; they make mistakes. Certainly people need places to talk about medical issues, get second opinions, and discuss what they should do if they disagree with their doctors’ opinions. But this kind of emotional language (“enemies”) is ridiculous and should be a big tip-off if you encounter it.

4. Let’s talk about these communities themselves. Obviously some of the people running them are absolute scum, making money off of other women’s suffering and their dead babies, but most of the people in them are good-hearted and well-intentioned. They are people like Judith herself, who really do want healthy babies.

The problem with these communities–and as a mom, I have been in many parenting-related communities and seen these problems first hand–is that they are always structured as “A supportive community for [activity X]” rather than “A supportive community for moms.” Activity X–whether it’s freebirth, breastfeeding, cloth diapering, etc–becomes the focus, not the actual humans involved in them, and saying anything negative about the activity is forbidden.

So if you’re in a breastfeeding support group, you can’t give anti-breastfeeding advice like “Hey, sounds like this is really not working for you, maybe your baby would be better off on formula,” even if that is actually what someone needs to hear.

“43+1 today, politely declining hospital induction. They think I’m crazy,” Judith posted in Ten Month Mamas in January 2019, along with a list of the midwives’ concerns, including the baby’s larger size, her decreasing amniotic fluid and the integrity of her placenta, the organ that carries oxygen and nutrients from mother to baby. “… What would you mamas do?”

The comments rolled in, more than 50 per post.

“Trust your body.”

“Your baby isn’t ready to come out!”

“I would do exactly what you’re doing!”

“Keep going mumma, listen to your baby and your instincts — you got this.”

You only get positive comments on these sorts of posts because any negative comments get deleted and negative commentators get kicked out. Any time you are not getting at least some negative feedback, you have to ask why.

Yes, there is the argument (presented by one of the moderators) that these communities exist to allow discussion of a certain topic, and if you want to discuss things that are not this topic, you can go literally anywhere else. The problem is that this is not how people actually operate. While you don’t want your group or message board to turn into 100% “Why this topic is wrong” posts, any discussion of a topic that doesn’t involve both sides is incomplete. Normal messageboards (take writers’ forums) have dedicated spaces for off-topic conversations, complaining, rants, and debates. Scientists and doctors welcome (at least in the abstract) people who come up with new ways to test, falsify, and disprove theories, because this is how science and medicine advance.

If your advice is within the bounds of REALITY then you do not need to fear reality coming in and saying, “Hey, this is not for everyone. Some people need to do something different.”

If you can’t tell a woman in your community that maybe at 43 weeks she needs to get her ass to the doctor right away before her baby dies, then you have a PROBLEM.

I am not sure if this problem is specific to female-run communities, because I haven’t been in that many all-male ones, but I think it is. I think it is kind of a failure mode of how women prefer to interact, by removing points of view they don’t like from the conversation so they won’t have to interact with them rather than engaging directly.

So let’s try to summarize what went wrong:

  1. Algorithms: not that big a deal
  2. Wishful thinking: huge deal. We all do it, at least sometimes. Watch out for it.
  3. Sociopaths selling a beautiful dream: they want your money.
  4. Emotional language: interferes with logical thinking; big tip-off.
  5. Getting your information from communities that don’t allow for dissent or won’t tell you when you’re doing something dangerous.

Eugenics!

Everyone is talking about eugenics, so I thought I’d dive into the mix.

The first difficulty in discussing eugenics lies in the fact that different people use the word to mean different things. It does no good to use one sense of the term when replying to someone who meant something completely different, so we’re going to have to start with a range of definitions:

  1. Selective breeding to positively influence the distribution of traits in the gene pool
  2. Anything that positively influences the distribution of traits in the gene pool
  3. Purposefully removing specific negative traits from the population
  4. Anything undertaken to increase desirable traits in the population
  5. Valuing one human life over another
  6. Killing off “undesirable” people
  7. Genocide
  8. etc.

Although the dictionary definition is closer to number 1 or 2, most of the time when people use the word “eugenics”, they mean it in the sense of something coercive and unpleasant. When someone decides that they would rather marry and have children with someone they find attractive than someone they find unattractive, they don’t regard themselves as practicing “eugenics,” though they are making a decision about which traits they are passing on to the next generation.

Even aside from disagreements over definitions, people become emotional about eugenics. Many people are incredibly bad at separating moral and factual statements. Much of the opposition to Dawkins’s suggestion that eugenics works is not actually about whether it works so much as moral outrage over the idea of harming innocent people. They hear “eugenics,” and their brains jump to “gas chambers.” In contrast, when people hear of ways to improve traits that don’t involve harming specific people, they tend not to call it “eugenics.”

For example, suppose we found a vitamin that could reliably make people smarter, so the government decided to use tax dollars (personal sacrifice) to provide vitamins to everyone, even the poor. Most people would think this was fine because it’s a net positive.

Yes, this example doesn’t involve genetics, but it would change the distribution of traits in the population and it would make everyone smarter. It’s also something we already do: we put iodine in the salt, vitamin D in the milk, etc.

Now suppose we could use magic CRISPR bots to remove a person’s propensity to develop Alzheimer’s, fix heart disease, decrease their chance of cancer, etc. Maybe the CRISPR bots are delivered in pill form. If these pills fixed problems like trisomy 21, genetic mental retardation, genetic vision and hearing loss*, etc., most people suffering from these conditions would freely choose to take the pills and would consider them a miracle. If they couldn’t afford them, I think most people would support using tax dollars to fund these treatments in the same way that we pay for normal medical care. Access to CRISPR bots to correct severe genetic defects would soon be considered a basic human right.

These treatments would be true eugenics, but likely wouldn’t be controversial because they wouldn’t directly harm anyone (at least until athletes started using them in the same way they currently use steroids).

By contrast, people would object strongly to something like a government board that gets veto power over who you get to marry–or that decrees whom you must marry. This would be a significant personal sacrifice with a very nebulous promise of social benefit. This is the sort of thing people are actually objecting to. 

*There is some pushback against mechanical (surgical, etc) attempts to fix certain disabilities. Some people in the deaf community, for example, do not think there is anything “wrong” with them and have complained that giving people implants to fix deafness is “genocide” against their community. While this does nothing to the genetics of the population, they clearly feel like it falls under the general category of trying to get rid of deaf people, at least as a culture. This objection is fairly rare, however.

The question of whether “eugenics works” depends on both how you define eugenics and “works”. Certainly if I had supervillain-like powers and an island full of captive humans, I could selectively breed them to be taller, shorter, prettier (by my standards,) etc. Could I breed for personality? Absolutely. We’ve bred golden retrievers, after all. I wouldn’t be able to breed for anything I wanted: X-ray vision probably isn’t possible.

But we live in the real world, where I don’t have god-like superpowers. In the real world, it’d be governments doing the eugenics, and I have some serious doubts about the abilities of real-world governments in this regard.

The Germans are trying, though:

Closing EU borders will lead to inbreeding, German finance minister warns:

In an interview with weekly paper Die Zeit, Mr Schäuble rejected the idea Europe could close its borders to immigrants, and said: “Isolation is what would ruin us – it would lead us into inbreeding.”

Taking aim at opponents of Germany’s border policies, he said: “Muslims are an enrichment of our openness and our diversity.”

What’s with all the Knitting?

I received this question right after I finished crocheting all of the giftwrap ribbons into flowers and thought, “Huh, why am I doing this?”

The short answer is that I don’t know.

At the most direct and obvious level, I knit (or crochet, but for the sake of this post, I will be collapsing most yarn-related arts under the term “knitting”) because it’s fun, fast, and easy, and in the end you actually make something.

Knitting is very portable. I love my 3D printer, but I can’t exactly pop it in my purse and take it to the park with me. I skateboard, but I can’t skateboard at the mall or on an airplane (well, I could, but then I’d be having an awkward discussion with security). Sometimes I make chainmail, but that’s full of fiddly little bits that you can’t really balance on your knees.

Knitting is also very cheap. I’d love to learn something like carpentry or glass blowing, but these skills require a lot of time, room and expensive equipment to learn. Learning to knit only requires two pencils (smooth pencils make perfectly passable knitting needles) and a few dollars in yarn. Crochet requires an actual crochet hook, which might set you back a few more dollars, but either way, you can get started for less than $10.

The learning curve is a lot steeper on these skills, too–if I mess up while building a table, I’ve got a ruined table; if I mess up while knitting, I just pull on the yarn and undo the piece.

So that’s why you’ll see people knitting: it’s easy, cheap, and portable.

But this is only a superficial analysis, for Rubik’s cubes are also cheap and portable, and if not exactly “easy” to solve, you can certainly fiddle with them without any training.

But Rubik’s cubes are pointless, outside of the intellectual activity. With knitting, you get an actual item at the end.

To be honest, I think most women find many male-dominated hobbies, like sports or video games, pointless. I’ve seen many football games, and I can tell you the outcome of every single one: one team wins and the other loses. The world keeps on spinning and nothing changes except that some of the players get hurt. Similarly, men will sink hours into a video game and get nothing tangible as a reward.

I’ve said before that men seem to prefer hobbies in which they get to tinker. They like building their own rigs, repairing their own cars, optimizing settings, or trying to figure out the most efficient ways to do things. Women prefer to just get a product straight off the shelf, use it, and get the job done.

(Of course this tinkering does, long term, produce a lot of good things.)

The one sort of exception to this general rule is arts and crafts, where women dominate. A Norwegian study, for example, found that about 30% of Norwegian women between the ages of 18 and 50 had knitted something in the past year, but fewer than 7% of Norwegian men had. Among older Norwegians, the gender divide was much wider: over 60% of women over 60 knitted, but the number of male knitters rounds to zero. So while most women you meet probably don’t knit, even fewer men are likely to pick up a ball of yarn.

I love the arts and crafts store; it’s like a candy shop for adults.

The desire to make little things for the home probably stems from the nesting instinct–a real instinct found most prominently in pregnant women, who are often struck by a sudden urge to make their homes as baby-friendly as possible. This urge is often far in excess of reason, resulting in women compulsively scrubbing the kitchen tile with a toothbrush or rearranging all of the furniture in order to vacuum under the couch. Personally, aside from all of the cleaning, I made things, including a child-sized easel, a train table, and stuffed animals.

So the ultimate cause is probably a mild version of the nesting instinct–a desire to make one’s home warm and comfortable.

Happy New Year and keep cozy, everyone.

How Tall can Humans Grow?

A Twitter friend recently proposed a question:

These skeletons can be divided into two groups: those for whom we have some historical evidence (eg, Goliath, famous literary villain), and those with no evidence except images like this one.

Incidentally, modern man does not average 6 feet tall. The average American man, hailing from a well-fed cohort, is only 5'9" (you think men are taller than they are because they all lie). The global average is a bit smaller, at about 5'7".

Historically, people tended to be a bit shorter, probably due to inconsistent food supplies.

I have often seen it claimed that heights fell when people adopted agriculture, but most hunter-gatherers aren’t especially tall. The Bushmen, for example, are short by modern standards; I suspect that the pre-agricultural human norm was more Bushman than Dinka.

If we roll back time to look at our pre-sapiens ancestors, Homo erectus skeletons are estimated to have been between 4'8" and 6'1", which puts them about as tall as we are, but with a lot of variation (we also have a lot of variation). Neanderthals are estimated at about 5'4" to 5'5"; Homo habilis was shorter, at a mere 4'3". Lucy the Australopithecine, while female, was even shorter, similar to modern chimps.

On net, a few food-related hiccups aside, humans seem to have been evolving to be taller over the past few million years (but our male average still isn’t 6 feet.)

But does this mean humans couldn’t be taller?

The trouble with being unusually tall is that, unlike apatosauruses, we humans aren’t built for it. The tallest confirmed human was Robert Wadlow, at 8 feet, 11 inches. According to acromegalic gigantism specialist John Wass, quoted by The Guardian, it would be difficult for any human to surpass 9 feet for long:

First, high blood pressure in the legs, caused by the sheer volume of blood in the arteries, can burst blood vessels and cause varicose ulcers. An infection of just such an ulcer eventually killed Wadlow.

With modern antibiotics, ulcers are less of an issue now, and most people with acromegalic gigantism eventually die because of complications from heart problems. “Keeping the blood going round such an enormous circulation becomes a huge strain for the heart,” says Wass.

Ancient people, of course, did not have the benefit of antibiotics.

What about Bigfoot?

Well, Bigfoot isn’t real, but Gigantopithecus probably was.

Gigantopithecus … is an extinct genus of ape that existed from two million years to as recently as one hundred thousand years ago, at the same period as Homo erectus would have been dispersed,[2] in what is now Vietnam, China, and Indonesia, placing Gigantopithecus in the same time frame and geographical location as several hominin species.[3][4] The primate fossil record suggests that the species Gigantopithecus blacki was the largest known primate that ever lived, standing up to 3 m (9.8 ft) and weighing as much as 540–600 kg (1,190–1,320 lb),[2][5][6][7] although some argue that it is more likely that they were much smaller, at roughly 1.8–2 m (5.9–6.6 ft) in height and 180–300 kg (400–660 lb) in weight.[8][9][10][11]

They’re related to orangutans; unfortunately it’s difficult to find their remains because the Chinese keep eating them:

Fossilized teeth and bones are often ground into powder and used in some branches of traditional Chinese medicine.[13] Von Koenigswald named the theorized species Gigantopithecus.[8]

Since then, relatively few fossils of Gigantopithecus have been recovered. Aside from the molars recovered in Chinese traditional medicine shops, Liucheng Cave in Liuzhou, China, has produced numerous Gigantopithecus blacki teeth, as well as several jawbones.[14]

Please stop eating fossils. They’re not good for you.

Unfortunately, since we only have teeth and jawbones from this creature, it’s hard to tell exactly how tall it was.

Let’s just estimate, then, a maximum human height around 10 feet. After that, your heart explodes. (Joking. Sort of.)

Let’s start with Goliath.

The Philistines were a real people–one of the “Sea Peoples” who showed up in the Mediterranean during the Bronze Age Collapse:

In 2016, a large Philistine cemetery was discovered near Ashkelon, containing more than 150 dead buried in oval-shaped graves. A 2019 genetic study found that, while all three Ashkelon populations derive most of their ancestry from the local Levantine gene pool, the early Iron Age population was genetically distinct due to a European-related admixture… According to the authors, the admixture was likely due to a "gene flow from a European-related gene pool" during the Bronze to Iron Age transition…[9]

The inscriptions at Medinet Habu consist of images depicting a coalition of Sea Peoples, among them the Peleset, who are said in the accompanying text to have been defeated by Ramesses III during his Year 8 campaign. In about 1175 BC, Egypt was threatened with a massive land and sea invasion by the “Sea Peoples,” a coalition of foreign enemies which included the Tjeker, the Shekelesh, the Deyen, the Weshesh, the Teresh, the Sherden, and the PRST. … A separate relief on one of the bases of the Osirid pillars with an accompanying hieroglyphic text clearly identifying the person depicted as a captive Peleset chief is of a bearded man without headdress.[49] This has led to the interpretation that Ramesses III defeated the Sea Peoples including Philistines and settled their captives in fortresses in southern Canaan; another related theory suggests that Philistines invaded and settled the coastal plain for themselves.[53] The soldiers were quite tall and clean shaven. They wore breastplates and short kilts, and their superior weapons included chariots drawn by two horses. They carried small shields and fought with straight swords and spears.[54]

The name “Goliath” is a real Philistine name.

(More on the Bronze Age Collapse: 1177 B.C.: The Year Civilization Collapsed.)

So there is a decent chance that the Goliath recorded in the Bible was, in fact, a real person.

However, Goliath’s great height may just be… an exaggeration. According to Wikipedia: 

Goliath’s height increased over time: the oldest manuscripts, namely the Dead Sea Scrolls text of Samuel from the late 1st century BCE, the 1st-century CE historian Josephus, and the major Septuagint manuscripts, all give it as “four cubits and a span” (6 feet 9 inches or 2.06 metres)…

It looks like Goliath was tall, but only basketball player tall, not Guinness Book of World Records tall.

Maximinus Thrax:

The shortest guy in the picture, "Maximinus Thrax," was a real person and emperor of Rome from 235–238 AD. 8'6" is at least within the range of heights humans can achieve, and he was, according to the accounts we have, very tall. Unfortunately, we don't know how tall he was–the ancient accounts are considered unreliable, the Roman "foot" is not the same as the modern "foot," and crucially, no one has dug up his skeleton and measured it.

So Maximinus was probably a tall guy, though not 8'6" in modern feet (that would require the Roman foot to equal our modern foot).
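Out of curiosity, here's what the reported figure works out to if we assume a Roman pes of about 29.6 cm. The exact ancient value is itself debated, so this is an illustration, not a measurement.

```python
# Rough conversion of Maximinus Thrax's reported height from Roman to modern
# units. The length of the Roman foot (pes) is itself uncertain; ~29.6 cm is
# a common estimate, so treat this as an illustration, not a measurement.

ROMAN_FOOT_M = 0.296    # assumed length of one Roman pes, in metres
MODERN_FOOT_M = 0.3048

reported_roman_feet = 8 + 6 / 12     # "eight feet six inches," Roman units

height_m = reported_roman_feet * ROMAN_FOOT_M
height_ft = height_m / MODERN_FOOT_M
feet = int(height_ft)
inches = round((height_ft - feet) * 12)

print(f"{height_m:.2f} m, or about {feet} ft {inches} in in modern units")
```

Even converted generously, the ancient "eight feet six inches" still lands around 8'3" in our units, so the weak link is the figure itself rather than the units.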

Og the Rephaim:

Og, King of Bashan, is only known from the Bible, but might have been an actual king. We don't have any chronicles from other countries that mention him (kings often show up in such chronicles because they make war, get defeated, send tribute, sign treaties, etc.), though there is one Agag of the Amalekites who does have a similar name.

Interestingly, there is one Og attested in archaeology, found in a funerary inscription which appears to say that if the deceased is disturbed, “the mighty Og will avenge me.”

The Bible claims that Og’s bed was 13 feet long. Wikipedia offers us an alternative explanation for this mysterious bed: a megalithic tomb:

It is noteworthy that the region north of the river Jabbok, or Bashan, “the land of Rephaim”, contains hundreds of megalithic stone tombs (dolmen) dating from the 5th to 3rd millennia BC. In 1918, Gustav Dalman discovered in the neighborhood of Amman, Jordan (Amman is built on the ancient city of Rabbah of Ammon) a noteworthy dolmen which matched the approximate dimensions of Og’s bed as described in the Bible. Such ancient rock burials are seldom seen west of the Jordan river, and the only other concentration of these megaliths are to be found in the hills of Judah in the vicinity of Hebron, where the giant sons of Anak were said to have lived (Numbers 13:33).[2]

Og might have actually been a very tall person, though it is doubtful he was 13 feet tall. He might have been a fairly normal-sized person who had a very impressive megalithic tomb which came to be known as “Og’s Bed,” inspiring local legends. He also might not have existed at all. Until someone digs up Og’s body and measures it, we can’t say anything for sure.

French Giants

Interestingly, I found two French giants, though neither of them, as far as I know, was found near Valence.

[Image: Bones of the Giant of Castelnau, plus a normal human humerus.]

The Giant of Castelnau is known from three pieces of bone uncovered in 1890. If they are human, they are unusually large, but no research has been done on them since 1894, and even a crack team of Wikipedia editors has failed to uncover anything more recent on the subject.

I'd hold off on judging these until someone within the past century actually sees them and confirms that they didn't come from a cow.

Teutobochus, king of the Teutons, was a giant skeleton unearthed in France in 1613. Unfortunately, he seems to have been a deinotherium–that is, an extinct relative of the elephant.

This is the last of the reasonable skeletons. The rest exist only in graphics like the one at the top of the post and in articles discussing them–in other words, there's more evidence for Paul Bunyan.

So far I've found no sources on the 15-foot Turkish giant. Yes, lots of people claim it exists; no, not one photo of it.

Was a 19’6″ human skeleton found in 1577 A.D. under an overturned oak tree in the Canton of Lucerne? There are no records of it.

Any 23 foot tall skeletons near an unidentified river in Valence, France? Can’t find any.

And what about the 36 foot tall Carthaginian skeletons?

[Image: Apatosaurus dimensions.]

Giraffes, currently the tallest animals on Earth, only reach 19 feet. T. rex stood perhaps 12–20 feet tall, depending on posture. Even the famous Apatosaurus was a mere 30 feet tall (though we don't know how high he could swing his head).

If you're talking about humans who were bigger than an Apatosaurus, you're really going to have to pause and run a basic biology check–and also make sure you aren't holding an Apatosaurus femur.

Humans could be bigger (or smaller) than they currently are, just as dinosaurs came in many different sizes (some, like hummingbirds, are quite small), but different sizes require different anatomy. That's why people with gigantism have heart trouble and tall people die younger: we aren't built for it. Humans aren't designed to handle Apatosaurus-level weights; our hearts aren't designed to pump blood that far. A 36-foot-tall human couldn't be a single individual with gigantism, nor even a whole family or tribe of unusually tall people–they'd have to have evolved that way over millions of years. They'd be their own species, and we'd have actual evidence that their bones exist.
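To put rough numbers on that, here's a quick sketch of simple isometric scaling. The 6-foot, 180-pound reference human is my own round-number assumption, and real animals don't scale perfectly isometrically, so take this as an illustration of the square-cube problem rather than actual physiology.

```python
# Why a 36-foot human can't just be a scaled-up normal human: under simple
# isometric scaling, weight grows with the cube of height, while bone
# cross-section (and roughly heart output) grows only with the square.
# The 6-foot, 180-pound reference human is an assumed round number.

BASE_HEIGHT_FT = 6
BASE_WEIGHT_LB = 180

for giant_height_ft in (9, 13, 19.5, 23, 36):
    scale = giant_height_ft / BASE_HEIGHT_FT
    weight_lb = BASE_WEIGHT_LB * scale ** 3    # mass ~ height^3
    stress = scale ** 3 / scale ** 2           # load per unit bone area ~ height
    print(f"{giant_height_ft:>4} ft: ~{weight_lb:,.0f} lb, "
          f"~{stress:.1f}x the skeletal and cardiac load per unit area")
```

On those assumptions, a 36-foot giant would weigh around 39,000 pounds (roughly Apatosaurus-class) while asking every square inch of bone and heart muscle to carry six times the load a normal human's does. A body like that has to be redesigned, not merely enlarged.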

Incidentally, most of the sources I found discussing these skeletons, including ones using the graphic above, claim that evidence of these giants is being actively hidden or suppressed or destroyed by The Smithsonian, National Geographic, etc., because they would somehow disprove evolution by showing that humans have gotten shorter instead of taller.

This is absurd. Gigantopithecus was taller than any living ape (including humans), but he doesn't disprove evolution. He doesn't even disprove orangutans. A giant human skeleton would simply show that there was once a giant human–not that humans didn't evolve.

Humans can evolve to be shorter–it has happened numerous times. Pygmies are living people who are much shorter than average–adult male Pygmies average only 5 feet 1 inch tall. Pygmoid peoples are just a little taller, and are found in many parts of the world.

Even shorter, though, were Homo floresiensis and Homo luzonensis. The H. floresiensis remains we have so far uncovered stood a mere 3 feet 7 inches, and H. luzonensis was similarly petite. Both of these hominins descended from much taller ancestors.

Evolutionists don't need to hide the existence of giant skeletons, because evolution can't be disproven by the existence of a tall (or short) skeleton. That's just not how it works. The Smithsonian would love to display giant skeletons–if it had any. National Geographic would love to run articles on them. Such sensational relics would sell like hotcakes.

The problem is that no one can actually find any of these skeletons.

 

You’re all watching Sesame Street, now

So. I encountered this "TV" thing while on vacation (they had DirecTV at the hotel, and I needed the kids to stay put while I packed and unpacked).

Now, obviously we watch some TV, mostly Minecraft videos and some educational things, but regular TV is something else.

It’s awful.

My kids actually demanded that we turn it off and maintained this policy throughout the trip (even nixing Cartoon Network).

How do people watch this thing?

I didn’t find the basic content of the programs themselves objectionable. We saw a program featuring amateur music and dance numbers that had plenty of nice performances, for example. However, I find the way these programs are structured very unappealing:

  1. Onscreen clutter: any news program, for example, will have scrolling tickers, waving flags, and other distracting onscreen motion that has nothing to do with the things being discussed.
  2. Frequent camera movement: like the onscreen clutter, constant camera movement and animated transitions between video clips keep changing what's on the screen.
  3. Too many cuts in the footage. This both adds to the visual clutter and makes it harder to keep track of what's going on, because the subject keeps changing.
  4. Ads. Ads, ads, and more ads. They are guilty of all of the above and more.
  5. Many ads have the additional problem of making me feel like advertisers think I am an idiot, which makes me angry.
  6. We saw one ad on Cartoon Network in which kids (teens? I forget) made smoothies out of disgusting things and then drank them. This was not entertaining. It did not make my children want to watch the show being advertised. I have seen many absurd YouTube videos, but this took the cake.
  7. Filler.

I think it was Sesame Street that was first written around the idea that children have very short attention spans and thus a show needs to cut to something new every few minutes. This premise was obviously wrong, as kids will happily play for hours, day after day, with toys that they like. Crayons, bikes, slides, trains, dolls, trees, other kids–the average kid has no problem paying attention.

The difficulty was getting kids to pay attention to TV, which was still pretty new in the 60s and featured mostly black-and-white programs aimed at adults. Getting kids who wanted to go ride their bikes to pay attention to a black-and-white TV was hard. Sesame Street, as an educational project, began with the then-novel idea of using research on children to get them to pay attention so they could learn from the show.

So they pioneered the technique of using frequent visual/narrative switches to constantly ping your “Hey! Pay attention!” reflex.

I don’t know what the technical term for this reflex is, or if it even has one, but you’ve surely noticed it if you’ve ever heard your name randomly spoken at a crowded dinner party. Here you were, conversing with one person, not paying attention to the other conversations around you, when suddenly, ping, you heard your name and your head snapped up. Your brain efficiently filters out all of the noise that you don’t want to listen to, but lets that one word–your name–through all of the gates and filters, up to the conscious level where it demands your attention.

Sudden scene changes, well, they don’t happen in nature. If the lake you are looking at suddenly transforms into a mountain in real life, something has gone very wrong. But things do suddenly move in nature–pouncing lions, fleeing gazelles, occasionally boulders falling down a mountain. Moving things are important, so we pay attention to them.

At least Sesame Street had good intentions. Car advertisers, not so much.

So now television programming and advertisements, in order to keep you from getting bored and wandering away, have been optimized to constantly ping your "pay attention!" reflex. They have hijacked your basic survival instincts to keep you watching, so that you sit through their ads and they can make money selling you things you probably didn't need in the first place (otherwise they wouldn't have needed to work so hard to get you to watch their ads).

And you pay for this thing!

The whole thing is like a scaled down version of an arcade or casino, where the whole point is to get you to enjoy paying for the privilege of being separated from your money.

To be fair, I don’t hate all advertising. Sometimes it is useful. I understand that when I download some silly little free game, it has ads. The ads pay for the game, and since it’s on my tablet, I never have sound on and I can just put it down and ignore the ads. But I also spend very little time playing such games.

I feel like the whole thing is designed to turn your brain to jelly. If you thought for too long, you might realize that this entire storyline is stupid, that you’re wasting your time, that you don’t actually care about this thing on the news, and you’d really rather read a book or go for a walk. Instead the scene changes every few minutes so you never have time to concentrate on how meaningless it all is. (Yes, it’s all Harrison Bergeron, all the time.)

PS: Twitter’s bad for you, too.

Fame is Terrible for People

[Image: Elvis Presley in Jailhouse Rock.]

While researching my post on Music and Sex, I noticed a consistent pattern: fame is terrible for people.

Too many musicians to list have died from drug overdoses or suicide. Elvis died of a drug overdose. John Lennon attracted the attention of a crazy fan who assassinated him. Kurt Cobain killed himself (or, yes, conspiracy theorists note, might have been murdered). Linkin Park's Chester Bennington committed suicide. Alice in Chains' Layne Staley died of a heroin overdose. The list continues.

Far more have seen their personal relationships fail, time after time. The lives of stars are filled with breakups and drama, not just because tabloids care to report on them, but also because of the drugs, wealth, and easy availability of other partners.

At least musicians get something (money, sex) out of their fame, and most went willingly into it (child stars, not so much). But many people today are thrust completely unwillingly into the spotlight and get nothing from it–people caught on camera in awkward incidents, or whose absurd video suddenly went viral for all the wrong reasons, or who caught the notice of an internet mob.

Here we have people like the students from Covington Catholic, or the coffee shop employee who lost her job after not serving a black woman who arrived after the shop had closed, or, for that matter, almost all of the survivors of mass shootings, especially the ones that attract conspiracy theorists.

It seems that fame, like many other goods, is a matter of diminishing returns. Going from zero fame to a little fame is nearly always good. Companies have to advertise products so customers know they exist. Being known as an expert in your field will net you lots of business, recommendations, or just social capital. Being popular in your school or community is generally pleasant.

At this level, increasing fame means increasing numbers of people who know and appreciate your work, while still remaining obscure enough that people who don’t like or care for your creations will simply ignore you.

Beyond a certain level of fame, though, you’ve already gotten the attention of most people who like you, and are now primarily reaching people who aren’t interested or don’t like you. If you become sufficiently famous, your fame alone will drive people who dislike your work to start complaining about how stupid it is that someone who makes such terrible work can be so famous. No one feels compelled to talk about how much they hate a local indie band enjoyed by a few hundred teens, but millions of people vocally hate Marilyn Manson.

Sufficient fame, therefore, attracts more haters than lovers.

This isn’t too big a deal if you’re a rock star, because you at least still have millions of dollars and adoring fans. This is a big deal if you’re just an ordinary person who accidentally became famous and wasn’t prepared in any way to make money or deal with the effects of a sudden onslaught of hate.

Fame wasn't always like this, because media wasn't always like this. There were no million-album recording artists in the 1800s. There were no viral internet videos in the 1950s. Like everything in Texas, fame in our winner-take-all economy is bigger–and thus so are its effects.

I think we need to tread this fame-ground very carefully: recognize when we (or others) are thrusting unprepared people into the spotlight, and withdraw from mobbing tactics. Teenagers, clearly, should not be famous. But more mundane people, like writers who have to post under their real names (or well-known pseudonyms), probably also need to take steps to insulate themselves from the spasms of random mobs of haters. The current trend of writers taking mobs–at least SJW mobs–seriously and trying to appease them is another effect of people having fame thrust upon them without knowing how to deal with it.