Color Wheel Frustration

Remember the color wheel?

When you were a kid, your art teacher probably taught you the standard color wheel: if you have red, blue, and yellow paint, you can combine them to make any color (except black, white, gold, silver, magenta, neon anything…). Okay, almost any color. Red + Blue = Purple, Blue + Yellow = Green, and Red + Yellow = Orange. Mix all three, and you get brown.

But if you’ve ever picked up a standard set of kids’ tempera paints and tried to mix them, you’ve probably noticed that things aren’t quite this simple.

Here are the results of mixing red, blue, and yellow. The green looks pretty good. The orange is still red, and the “purple” is terrible. No, that’s not your monitor messing up. It is actually almost black.

This happens because the red and blue in these kits aren’t actually primary colors. The real subtractive primaries are yellow, cyan, and magenta. Why were we taught that red and blue are primary paint colors in school? I don’t know. I suspect it’s because teachers think little kids understand red and blue but don’t know what “cyan” and “magenta” are (though if you’ve ever discussed dinosaurs with a four-year-old, you know that kids know lots of big words).

Thankfully, if you are cursed with red, yellow, and blue, you can improve your results.

The blues that come in standard kids’ paints tend to be very dark, and the reds are dark compared to the yellow. Yellow is, by nature, very light. If you try to mix equal quantities of these pigments, the dark colors will overwhelm the light ones.

Add white to lighten the blues and reds, then increase the amount of yellow in the orange and red in the purple:

Why bother with the white? Even though you are adding paint, paint is essentially subtractive. Paint works by absorbing most of the light that strikes it and only reflecting a few particular wavelengths. When you mix paints, you don’t increase the range of light reflected, but narrow it: you’re now blocking two paints’ worth of colors. This is why our purple looks almost black.
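This subtractive narrowing is easy to see in a toy model. The sketch below treats each paint as a triple of reflectances (how much red, green, and blue light it bounces back, 0 to 1) and mixes by multiplying, since light must survive both pigments to be reflected. The particular numbers are illustrative guesses at dark kids’-paint pigments, not measured values:

```python
# Toy model: a paint is how much red, green, and blue light it reflects (0-1).
# Mixing paints multiplies reflectances: each pigment blocks its own share.

def mix(p1, p2):
    """Subtractive mix: light must get past both pigments to be reflected."""
    return tuple(a * b for a, b in zip(p1, p2))

# Illustrative reflectances for dark kids'-paint pigments (made-up numbers).
red  = (0.8, 0.1, 0.1)   # reflects mostly red
blue = (0.1, 0.1, 0.6)   # a dark blue

purple = mix(red, blue)
print(tuple(round(v, 2) for v in purple))  # (0.08, 0.01, 0.06) -- near black
```

Almost nothing survives both pigments, which is exactly the near-black “purple” from the kids’ paint set.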

So if you’ve mixed your colors and the result is still too dark, add some white.

The purple is still pretty blah, but purple is hard. We didn’t even invent good, cheap purple paint until the 1800s. (Before then, purple was expensive, which is why it was associated with kings.) Don’t feel bad if you can’t get a good purple; just buy purple paint.

Now let’s talk about brown. Here is the brown I get when I mix all three colors.

Yeah. That’s awful.

I remember being very frustrated as a kid because no matter how I mixed my red, blue, and yellow, I just got disgusting colors that didn’t even deserve the name “brown.”

There is a much easier and better way to make brown: add a touch of black to your orange. Yes, the orange. Brown is actually dark orange.

Here you go: orange + black. See? Isn’t that better? Now we’ve got a color that could grace a tasty bar of chocolate, a friendly dog, or a wooden table.

Why is brown dark orange? That’s a good question. I’m not sure, but I think it has to do with how our eyes physically perceive color.

Dark blue and light blue are both good, recognizable colors.
Dark red and light red are both good colors.
Dark green and light green are both good colors.

Dark yellow isn’t a thing. Try it. Mix yellow+black. Now you’ve got olive, not yellow.

And dark orange, as we’ve discussed, is brown.
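If you’re coloring on a computer, you can check the “brown is dark orange” claim directly: in additive RGB, darkening a color just means scaling every channel down. A tiny sketch (the starting orange is just one typical choice):

```python
# In additive RGB, "darkening" a color means scaling all channels down equally.

def darken(rgb, factor):
    """Return rgb with every channel scaled by factor (0-1)."""
    return tuple(int(c * factor) for c in rgb)

orange = (255, 165, 0)       # a typical orange
print(darken(orange, 0.4))   # (102, 66, 0) -- a chocolate brown
```

No brown pigment involved: scale orange down by more than half and your eye calls the result brown.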

You might have noticed that when people talk about light, instead of paint, they use a different set of primary colors: red, green, and blue. Yes, blue and red really are primary colors, just of light rather than paint. Light is additive: if you put more light in, you get more light out. Mix all of the colors of light together and you get white light, like the sun. The sun makes a lot of light.

The cones in your eyes are optimized to detect particular wavelengths of light. They tell your brain what they’ve detected, and your brain constructs an image that you perceive as color. Our cones are optimized to perceive red, green, and blue light.

Yellow light is made by mixing red and green, so when you perceive yellow, both red and green cones are activating at the same time. Orange is the same story, but with stronger red activation.

A “dark” version of a color is just a version that is emitting/reflecting less light. I suspect that when you see “dark green,” you are activating fewer of your green receptors, but still activating some of them, so your brain gets a clear signal that says “green.” When you see dark red, the same thing happens. But in order to see yellow and orange, you need to activate both receptors. I suspect that when you see dark yellow and dark orange, not enough of both red and green get activated to send a clear picture to your brain. What you end up with is, essentially, a degraded signal: brown.

You can degrade signals in other ways–by just blocking out a lot of the colors, as when you mix all of the paints, for example–but it’s faster and easier to work with orange. (And that’s definitely the technique you’ll use if you’re coloring on a computer.)

Good luck and happy painting.

More on primary colors of light and paint

Great video.

Noise, Noise, Noise

Noise is the foe of any information transmission, yet the total elimination of noise is–counter-intuitively–bad.


Nature is, from our human perspective, inherently noisy, and this noise is inherent to its beauty. When we try to impose an overly simple order, we get something that is, yes, lined up neatly, but also dead.

Nature is not truly noisy, but the mathematics that underlies its order is more complicated than we can easily model (it is only in the past century that we’ve developed the tools necessary to model tomorrow’s weather with any degree of accuracy, for example.) Whether we are drawing trees or clouds, controlled randomness works far better than regular repetition. (If you want to get technical, the math involved tends to involve fractals.)

In digital imaging and audio, dithering is the intentional application of noise to randomize quantization errors. According to Wikipedia:

Dither is routinely used in processing of both digital audio and video data, and is often one of the last stages of mastering audio to a CD.

A common use of dither is converting a greyscale image to black and white, such that the density of black dots in the new image approximates the average grey level in the original.

…[O]ne of the earliest [applications] of dither came in World War II. Airplane bombers used mechanical computers to perform navigation and bomb trajectory calculations. Curiously, these computers (boxes filled with hundreds of gears and cogs) performed more accurately when flying on board the aircraft, and less well on ground. Engineers realized that the vibration from the aircraft reduced the error from sticky moving parts. Instead of moving in short jerks, they moved more continuously. Small vibrating motors were built into the computers, and their vibration was called dither from the Middle English verb “didderen,” meaning “to tremble.” Today, when you tap a mechanical meter to increase its accuracy, you are applying dither, and modern dictionaries define dither as a highly nervous, confused, or agitated state. In minute quantities, dither successfully makes a digitization system a little more analog in the good sense of the word.

— Ken Pohlmann, Principles of Digital Audio[1]

The term dither was published in books on analog computation and hydraulically controlled guns shortly after World War II.[2][3] 

The same principle is why many people play “white noise” sounds while studying or falling asleep (my eldest used to fall asleep to the sound of the shower). As the article notes:

Quantization yields error. If that error is correlated to the signal, the result is potentially cyclical or predictable. In some fields, especially where the receptor is sensitive to such artifacts, cyclical errors yield undesirable artifacts. In these fields introducing dither converts the error to random noise. The field of audio is a primary example of this. The human ear functions much like a Fourier transform, wherein it hears individual frequencies.[8][9] The ear is therefore very sensitive to distortion, or additional frequency content, but far less sensitive to additional random noise at all frequencies such as found in a dithered signal.[10]

In digital audio, like CDs, dithering reduces the distortion caused by quantization (rounding the continuous signal to a fixed set of discrete levels, a necessary part of producing CDs). The same process works in digital photography, as demonstrated in these photos:

from Wikipedia

In art, dithering allows artists to create a wide range of colors out of a limited palette (this is, of course, how the colors in newspaper comics are created).

There are many different ways to dither, and consequently many different dithering algorithms, all created with the intention of using random noise to increase the quality of images, sound, sensor readings, etc.
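As a concrete example, here is a minimal sketch of one classic such algorithm, Floyd–Steinberg error diffusion, working on a plain list-of-lists grayscale “image” (a toy illustration, not a production implementation):

```python
# A tiny Floyd-Steinberg dither: convert a grayscale image (values 0-255)
# to pure black and white, pushing each pixel's quantization error onto
# its not-yet-visited neighbors so that the local density of white dots
# approximates the original grey level.

def dither(img):
    h, w = len(img), len(img[0])
    px = [[float(v) for v in row] for row in img]  # working copy
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            old = px[y][x]
            new = 255 if old >= 128 else 0
            out[y][x] = new
            err = old - new
            # Classic Floyd-Steinberg weights: 7/16 right, 3/16 below-left,
            # 5/16 below, 1/16 below-right.
            if x + 1 < w:
                px[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    px[y + 1][x - 1] += err * 3 / 16
                px[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    px[y + 1][x + 1] += err * 1 / 16
    return out

# A flat mid-grey block comes out as a roughly half-and-half scatter of
# black and white pixels, which reads as grey from a distance.
grey = [[128] * 8 for _ in range(8)]
result = dither(grey)
```

This particular algorithm diffuses the error deterministically rather than injecting literal random noise, but the goal is the same: decorrelate the quantization error from the signal.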

Too much noise is of course a problem, but as photographer Frederik Mork notes:

Don’t forget that noise can also be a creative tool. Especially when paired with black-and-white, high ISO noise can sometimes add a lot of atmosphere to the image. Some images need noise.

Symmetrical faces are supposed to be more attractive than less-symmetrical faces, but this only applies up to a point. Even very beautiful faces are not truly symmetrical, and increasing their symmetry (by mirroring the left side over the right, or the right side over the left) does not improve them:

Nicole Kidman

I don’t know where this image came from originally, but I found it in a YouTube video. There is a literature on this subject, probably primarily related to plastic surgery. The image on the left is Nicole as she normally appears; the image in the center is Nicole with the right side of her face mirrored, and the image on the right is the left side of her face mirrored. Note that the original contains several asymmetries: her hair, her eyebrows, and even the way her necklace lies on her chest.

This post was inspired by a conversation about what it meant to be a “good” musician, and whether one can be a reasonable judge of music outside of one’s own musical tastes. If I love rap but hate techno, can I still recognize which techno songs are considered “good” by some standard other than “techno fans like it”? Is all art subjective, or are some pieces actually better or worse than others?

I am a simple creature, and I do not really understand the depths of what goes into creating music. I can squeak out a few notes on a recorder and Twinkle Twinkle Little Star on the piano, but deeper theories behind things like “harmonic intervals” and “chords” are beyond me. Music to me is an immersive sensory experience (more so since quarantine has induced a state of semi-zen that has quieted the normally very loud meta and meta-meta narrative in my head). Whether something is good or bad I cannot say in any quantitative sense, but to paraphrase the words of Justice Stewart, I know it when I feel it.

I had very little exposure to popular music as a kid (I listened to a lot of “Christian rock”), and only began listening to music seriously and sorting out what I liked and didn’t in grad school, so whatever arguments you have about people forming their musical taste in highschool don’t apply.

I tend to like music that has a lot of distortion–noise, if you will. (I like music that reflect what I feel, and my feelings look like a Jackson Pollock.)

You might have noticed that my current favorite musician is Gary Numan (the guy who sang “Cars” back in 1979). I don’t like most of Numan’s early work (like “Cars”); the sound is too simple. It took a couple of decades for Numan to develop the layers of complexity necessary to create the sort of musical soundscape I enjoy. Compare, for example, his 1979 performance of “Are Friends Electric” to his 2013 performance of the same song. They are the same song, but with slightly different arrangements; I don’t know the technical words to describe them, but I find the 1979 version merely acceptable, while the 2013 version hit me like a brick. (That’s a good thing, in this context.)

Or, heck, let’s compare Cars 1979 with Cars 2009 (with NIN). Okay, so the first thing that stands out to me is that 2009 Gary Numan has gotten laid and is no longer afraid to move around the stage, unlike 1979 Numan. Second, Trent Reznor on the tambourine is hilarious. Third, there is WAY more noise in the second recording–especially since it is live and there is a screaming audience–and this does not detract from the experience: I vastly prefer the 2009 version.

One of the things I love about Numan’s music is that you can do this; you can listen to the same piece from different eras and see how his style has changed and evolved.

To really appreciate his new style, though, I think it’s best to listen to new compositions, rather than covers of his older work, like I am Dust, Ghost Nation, or Crazier (with Rico).

Music, as I understand it, is built from a disturbed mathematical progression of sounds. A sequence of sounds in a regular pattern builds up our expectation of what will come next, and the violation of this sequence creates surprise, which–when done properly–our brains enjoy. If the pattern simply repeated over and over, it would become boring.

I recently enjoyed a documentary on Netflix about ZZ Top, (who knows, maybe they’ll become my favorite band someday). At one point in the documentary the band described the difficulties of their first recording session. They had their song, had their band, had their instruments, but the guy doing the recording just couldn’t capture the right sound. He had microphones all around the recording studio, but just couldn’t get what he wanted. So he proposed that the band loosen the strings on their guitars (or maybe it was just one guitar, forgive me, it’s been a while) to create a slightly out of tune sound. The band’s manager was having nothing of it: the instruments needed to be in tune. Finally the recording studio guy proposed that the manager run out to get them some barbecue, because it was getting on toward dinner time, and conveniently directed the manager to a restaurant across the county line, a good half hour away. With the manager gone for the next hour and fifteen minutes, they untuned the guitars and finally got the sound they were looking for.

(Here’s ZZ Top’s Sharp Dressed Man.)

Is this a “better” sound? It’s a different sound. It’s not the standard sound, and if you’re looking for the standard sound that guitars are supposed to make, this isn’t it. Is it wrong, on technical grounds? Or is it right because it was the sound the band wanted to make?

This whole post was inspired by the claim that Kurt Cobain was, technically, not a good musician. I found this accusation absolutely flabbergasting. Of course, I regard pretty much anyone who can crank out a tune on a guitar as “good”, and anyone who can top the charts as “excellent” by default. If we want to differentiate between top stars, well, I guess we can, but in the immortal words of @dog_rates, they’re all good dogs, Brent.

I’m not a huge Nirvana fan–like I said, I never listened to them as a kid (I didn’t have MTV and couldn’t have told you the difference between Pearl Jam and Oyster Jelly), but I like grunge and Nirvana is part of that.

After some conversation, I realized that my interlocutor and I were using different definitions of “technically,” which is always a sign that you should stop arguing about dumb stuff on the internet and go take a walk, except when you’re quarantined. I meant “technically” as in “actually,” while he meant it in the sense of “the proper way of doing something; by the manual.” His argument was that Kurt Cobain did not play the guitar properly according to the manual for how to play a guitar. Kurt was dropping notes, or not pushing down the strings properly, or otherwise playing lazily and not doing it right. Someone who has taken lessons on “how to play a guitar” will not be able to play like Kurt because he was being sloppy and not playing properly. They will have to learn how to be sloppy.

Naturally, I found this argument baffling. I have no idea how to play the guitar, but my standard for whether a piece of music is good or not is based on how it sounds, not how it is produced. I am listening to Nirvana now and I don’t hear–to my ears–any flaws.

To say that there is a “proper” way to play a guitar that is a standard benchmark against which musicians are judged sounds like some sort of prescriptivist nonsense. It’s like saying that there is a proper way to paint, and that “impressionism” isn’t it:

Monet’s Impression, Sunrise

An outraged critic, Louis Leroy, coined the label “Impressionist.” He looked at Monet’s Impression Sunrise, the artist’s sensory response to a harbor at dawn, painted with sketchy brushstrokes. “Impression!” the journalist snorted. “Wallpaper in its embryonic state is more finished!” Within a year, the name Impressionism was an accepted term in the art world.

If the name was accepted, the art itself was not. “Try to make Monsieur Pissarro understand that trees are not violet; that the sky is not the color of fresh butter…and that no sensible human being could countenance such aberrations…try to explain to Monsieur Renoir that a woman’s torso is not a mass of decomposing flesh with those purplish-green stains,” wrote art critic Albert Wolff after the second Impressionist exhibition.

Although some people appreciated the new paintings, many did not. The critics and the public agreed the Impressionists couldn’t draw and their colors were considered vulgar. Their compositions were strange. Their short, slapdash brushstrokes made their paintings practically illegible. Why didn’t these artists take the time to finish their canvases, viewers wondered?

Indeed, Impressionism broke every rule of the French Academy of Fine Arts, the conservative school that had dominated art training and taste since 1648.

The “proper” way to play a guitar is whatever sounds good, and if it sounds better with dropped notes and imperfectly depressed strings, then that is the proper way to play. This is grunge, and grunge is intentionally high on the distortion.

Whether it sounds good or not is, of course, a matter of opinion, but Smells Like Teen Spirit has over a billion views on YouTube, so I think it’s fair to say that there are a lot of people out there who think this is a very good song. From their perspective, Kurt Cobain is a very talented musician.

There’s a saying that you have to learn the rules in order to know when to break the rules. It applies primarily in art and literature, but I suppose it applies to the rest of life, too. When you are learning to write, you learn the rules of grammar and punctuation. If you become a poet, you know when and how to throw all of that out the window. If you want to be an artist, you need to know how to paint; later you can throw together whatever colors you want. If you want to play music, then you need to learn to play properly–harmonic intervals and all that, I suppose–but when you actually play music, you need to know when to break the rules and how subtle differences in the way the notes are played result in massive differences in the music. Compare, for example, NIN’s Hurt to Johnny Cash’s.

This is a song that gets its power from our subverted expectations; we expect an increase in tempo that never really comes, creating a tension that stretches out across the song, finally breaking at 4:33 (in Trent’s version).

In these two songs, I think Trent’s version is “better” in the technical skills sense, but Cash’s version is better in the absolute punch in the guts sense. This is simply because of who Trent and Cash are; they each bring their own sense of self to the song: Trent the sense of a bitter youth; Cash the sense of an old man composing his epitaph.

Let us end with some Alice in Chains:

I hope you have enjoyed the songs.

The Cost of Escape

While making plans for what looked like a looming corona-pocalypse, I thought back (as I often do) over the many disasters of history and what they must have looked like, before-hand, to the people caught in them.

What did ordinary Poles think on the eve of WWII? We know they did not expect war to arrive so furiously on their doorsteps, because if they had, the entire nation would have converted every scrap of wood and metal they had into boats and poured into the Baltic long before the Germans arrived. One in 5 Poles died in the war, a death rate that makes almost any risk worth taking. Almost no one expected this ahead of time; certainly many expected war, but only the most paranoid imagined tragedy at this scale.

And what did the average Jew expect? Certainly Hitler said some very unpleasant things about them, but again, no one expected cattle cars and gas chambers.

Wealth is tricky. You need money to buy your way out (few people can just walk or kayak their way out of a country,) but it is rarely kept in easily portable gold coins under the mattress. People tend to invest their money into houses or productive enterprises, which are difficult to liquidate quickly. If you realize that you need to get out fast, you can sell your house, but may only get half of its true value. (When Isaac Bacirongo had to flee the DRC in Still a Pygmy, he had to sell his house overnight; he got about half its value.) Even worse are degrees and certifications that you’ve spent years of effort and money to earn that are only good in one country. Having money is better than not having money, but moving money fast is difficult and requires significant losses–and people who’ve put a lot of effort into making money in the first place don’t like taking huge losses on the chance that something might go wrong in the future.

And it only gets worse if you have a family. Pull your kids out of school? Convince your wife’s parents to come with you? Leave your brother and sister behind?

Even if you think, “Things are going to get bad,” well, how bad? Enough to sell everything you own, take massive losses, and take your chances with the sharks?

The time to get out is early, when things are still good, but at this point, there’s no reason to get out. What are you, paranoid? The worse things get, the more obvious it becomes that you need to get out, but the worse things get, the harder it becomes to liquidate your assets and run. In other words, costs–while always high–are lower when risk is low and higher when risk is high.

So who gets out? The paranoid, the prescient, and the peripatetic (that is, those whose lives are already optimized for moving).

I’m sure insurance companies have an extensive literature on the subject.

It’s only in retrospect that we have the luxury of saying, “Boy, things sure did get bad! Here’s exactly when people should have gotten out!” Then we can tsk-tsk the ones who didn’t, the ones who didn’t see the writing on the wall or who weren’t willing to pull up stakes and run. In reality, though, you don’t know what’s going to happen until it happens.

I was worried enough about ebola to buy a big bag of rice and another of beans. There’s no harm in rice and beans, as I see it, and a lot of good if I need them. Thankfully ebola never became a big deal in the West–while it is awful and horrible and makes people basically explode, it is still difficult to catch if you don’t come in contact with the body and have things like modern sewer systems. Whew.

After that false alarm, should I be worried about corona? Certainly it has been a big deal in China, but will it peter out like SARS, and MERS, and ebola? Or will it spread out of control? Again, for me, preparation was not a big deal. I still had some of the rice and beans, after all; I already homeschool my kids. But this is obviously not true for everyone else. Most people have had difficult decisions to make. Few were prescient enough to make really hard ones in the weeks before the government started shutting things down and taking official steps to contain it. To most people, corona simply wasn’t a “real” threat when it was merely overseas; things only got real when the government declared it so. To many people, corona still isn’t a real threat, and won’t be until–or unless–thousands die. Of course, by that point, it’s much too late.

Containment strategies are best implemented early, before you know if the disease is a real threat or not. If they work, then the disease never turns into a problem–and if other countries do the same, then you have the difficulty of not knowing what would have happened had you not tried to contain the disease. You will know how much it cost you, but you don’t know how much you saved. Maybe the disease wouldn’t have been a problem anyway.

Let the disease spread, and if bodies start racking up, then you can implement containment strategies–but it will be too late for thousands of people.

People can calculate normal risks.

Calculating exceptional risks, though, is much harder.

Dwarf Wheat: is it good for us?

A friend recently suggested that dwarf grains might be a key component in the recent explosion of health conditions like obesity and gluten (or other wheat-related) sensitivities.

According to Wikipedia:

The Green Revolution, or Third Agricultural Revolution, is a set of research technology transfer initiatives occurring between 1950 and the late 1960s, that increased agricultural production worldwide, particularly in the developing world, beginning most markedly in the late 1960s.[1] The initiatives resulted in the adoption of new technologies, including high-yielding varieties (HYVs) of cereals, especially dwarf wheats and rices, in association with chemical fertilizers and agro-chemicals, and with controlled water-supply (usually involving irrigation) and new methods of cultivation, including mechanization.

Most people would say that this has been good because we now have a lot fewer people starving to death. We also have a lot more fat people. There’s an obvious link, inasmuch as it is much easier to be fat if there is more food around, but we’re investigating a less obvious link: does the nutritional/other content of new wheat varieties contribute to certain modern health problems?

Continuing with Wikipedia:

The novel technological development of the Green Revolution was the production of novel wheat cultivars. Agronomists bred cultivars of maize, wheat, and rice that are generally referred to as HYVs or “high-yielding varieties”. HYVs have higher nitrogen-absorbing potential than other varieties. Since cereals that absorbed extra nitrogen would typically lodge, or fall over before harvest, semi-dwarfing genes were bred into their genomes. …

Dr. Norman Borlaug, who is usually recognized as the “Father of the Green Revolution”, bred rust-resistant cultivars which have strong and firm stems, preventing them from falling over under extreme weather at high levels of fertilization. … These programs successfully led the harvest to double in these countries.[40]

Plant scientists figured out several parameters related to the high yield and identified the related genes which control the plant height and tiller number.[43] … Stem growth in the mutant background is significantly reduced leading to the dwarf phenotype. Photosynthetic investment in the stem is reduced dramatically as the shorter plants are inherently more stable mechanically. Assimilates become redirected to grain production, amplifying in particular the effect of chemical fertilizers on commercial yield.

HYVs significantly outperform traditional varieties in the presence of adequate irrigation, pesticides, and fertilizers. In the absence of these inputs, traditional varieties may outperform HYVs.

In other words, if you breed a variety of wheat (or rice, or whatever) that takes up nutrients really fast and grows really fast, it tends to get top-heavy and fall over. Your wheat then lies on the ground and gets all soggy and rotten and is impossible to use. But if you make your fast-growing wheat shorter, by crossing it with some short (dwarf) varieties, it doesn’t fall over and it can devote even more of its energy to making nice, fat wheat berries instead of long, thin stems.

(I find it interesting that a lot of this research was done in Mexico. Incidentally, Mexico is also one of the fattest countries–on average–in the world.)

But we are talking about making the plant grow faster than it normally would, via the intake of more than usual levels of nutrients. This requires the use of more fertilizers, as these varieties can’t grow properly otherwise.

I’ve just started researching this, so I’m just reading papers and posting some links/quotes/summaries.

Elevating optimal human nutrition to a central goal of plant breeding and production of plant-based foods:

…  However, deficiencies in certain amino acids, minerals, vitamins and fatty acids in staple crops, and animal diets derived from them, have aggravated the problem of malnutrition and the increasing incidence of certain chronic diseases in nominally well-nourished people (the so-called diseases of civilization). …

The inadequacy of cereal grains as a primary food for humans arises from the fundamentals of plant physiology. … Their carbohydrate, protein and lipid profiles reflect the specific requirements for seed and seedling survival. This nutrient profile, especially after selection during domestication [], is far from optimal for human or animal nutrition. For example, the seeds of most cultivated plants contain much higher concentrations of omega-6 fatty acids than omega-3 fatty acids than is desirable for human nutrition [], with few exceptions such as flax, camelina (Camelina sativa) and walnuts. …

The authors then describe what’s up with the fats: for plants to germinate in colder temperatures, they need more omega-3s, which are more liquid at colder temperatures. Plants in warmer climates don’t need omega-3s, so they have more omega-6s. (Presumably omega-6s are more heat tolerant, making them more stable during high-temperature cooking.)

Flax and walnut have low smokepoints (that is, they start turning to smoke at low temperatures) and so are unsuited to high-temperature cooking. People prefer to cook with oils that can withstand higher temperatures, like peanut, soy, corn, and canola.

I think one of the issues with fast food (and perhaps restaurant food in general) is that it needs to be cooked fast, which means it needs to be cooked at high temperatures, which requires the use of oils with high smokepoints, which are not necessarily the best for human health. The same food cooked more slowly at lower temperatures might be just fine, though.

There is a side issue that while oil smoking is unpleasant and bad, the high-temperature oils that don’t smoke aren’t necessarily any better, because I think they are undergoing other undesirable internal changes to prevent smoking.

Then there’s the downstream matter of the feed cattle and chickens are getting. My impression of cattle raising (from having walked around a cattle ranch a few times) is that most cattle eat naturally growing pasture grass most of the time, because buying feed and shipping it out to them is way too expensive. This grass is not human feed and is not fungible with human feed, because growing food for humans requires more effort (and water) than just letting cows wander around in the grass. Modern crops require a lot of water and fertilizer to grow properly (see the Wikipedia quote above.) This is why I am not convinced by the vegetarian argument that we could produce a lot more food for humans if we stopped producing cows–cattle feed and human feed are not energy/resource equivalent.

However, once the cows are grown, they are generally sent to feedlots to be fattened up before slaughter. Here they are given corn and other grains. The varieties of grains they are fed at this point may influence the nature of the fats they subsequently build:

Modern grain-fed meat and grain-rich diets are particularly abundant in omega-6 fatty acids, and it is thought that a deficiency of omega-3 fatty acids, especially the EPA and DHA found in fish oils, can be linked to many of the inflammatory diseases of the western diet, such as cardiovascular disease and arthritis (). DHA has been recognized as being vitally important for brain function, and a deficiency of this fatty acid has been linked to depression, cognitive disorders, and mental illness ().

Let’s get back to the article about plant breeding. I thought this was interesting:

The biological basis of protein limitation in seed-based foods appears to be the result of evolutionary strategies that plants use to build storage proteins. Seed storage proteins have evolved to store amino nitrogen polymerized in compact forms, i.e. in storage proteins such as zein in maize, gluten in wheat and hordein in barley. As the seed germinates, enzymes hydrolyze the storage proteins and the plant is able to use these stored amino acids as precursors to re-synthesize all of the twenty amino acids needed for de novo protein synthesis.

So if we make plants that absorb more nitrogen, and we dump a lot more nitrogen on them, do we get wheat with more gluten in it?

Another book I read, Nourishing Traditions, which is really a cookbook, claims that our ancestors generally ate their grains already sprouted. This was more accidental than on purpose–grains often sat around in storage, got wet, and sprouted. Sprouting (or germinating) makes the wheat use stored gluten to make amino acids. Between sprouting, fermentation (sourdough bread) and less nitrogen-loving wheat varieties, our ancestors’ breads and porridges may have had less gluten than ours.

Another issue:

In the laboratory of the first author we have taken two different approaches to improving the protein quality of crops. First, we successfully selected a series of high lysine wheat cultivars over a period of twenty years, by standard breeding methods []. …  Surviving embryos consistently had elevated levels of lysine relative to parental populations and the seed produced from these embryos also had increased levels of lysine. The increased nutritional value of these lines, however, carried a cost in terms of lower total yield. A striking result was that grasshoppers, aphids, rats and deer preferentially feasted on the foliage of these high lysine wheats in the field, rather than on neighboring conventional low lysine wheats. The highest lysine wheat had the highest predation and subsequently the lowest yield (D.C. Sands, unpublished field observations). … Thus, we are led to the hypothesis that selection for insect resistance may have inadvertently resulted in the selection for lower nutritional value…

Then the authors talk about peas, of Gregor Mendel fame. Peas come in two varieties: wrinkled and smooth. The smooth, plump ones look nicer (and probably taste sweeter), but they store sugar in a form that we digest more quickly, resulting in faster increases in blood sugar. Those sugars are thus more likely to get stored as fat.

Breeders and buyers are biased toward plump seeds and tubers, in peas and many other crops.

Incidentally, the outside of the wheat grain–the part we discard when producing white flours but keep when making “whole” wheat flour–contains phytates, which interfere with iron absorption, and other irritants designed by the plant to increase the chance of grazers passing the seed out the other end without digestion. (However, the creation of white flours may remove other nutrients.)

It’s getting late, so I’d better wrap up. The authors end by noting that fermentation is another way to potentially increase the nutritional content of foods and suggest a variety of ways scientists could make grains or yeasts that enhance fermentation.

A few more studies:

The nutritional value of crop residue components from several wheat cultivars grown at different fertilizer levels:

Nine wheat cultivars were grown at two test sites in Saskatoon, each at fertilizer levels of 0, 56, 224 kgN ha−1. Proportions of leaf, stem, chaff and grain were obtained for each level. Significant cultivar differences were observed at each site for plant component yields. A significant increase in the proportion of leaf components and a significant decrease in the proportion of the grain components was observed as soil nitrogen levels increased. Crude protein contents of plant components varied significantly with both cultivar and fertilizer level. Significant differences in digestibility in vitro also existed among cultivars. Increasing fertilizer levels significantly improved the digestibility in vitro of the leaf but not of the chaff.

Genetic differences in the copper nutrition of cereals:

Seven wheat genotypes, one of barley and one of oats were compared for their sensitivity to suboptimal supplies of copper, and their ability to recover from copper deficiency when copper was applied at defined stages of growth. Copper deficiency delayed maturity, reduced the straw yield and severely depressed the grain yield in all genotypes. …

Genotypes with relatively higher yield potential were less sensitive to copper deficiency than those with lower yield potential … There was no apparent association between dwarfness and sensitivity to copper deficiency in wheat.

An article suggesting we should eat emmer wheat instead of modern cultivars:

… The production and food-relevant use of domesticated modern-day wheat varieties face increasing challenges such as the decline in crop yield due to adverse fluctuating climatic trends, and a need to improve the nutritional and phytochemical content of the grain, both of which are a result of centuries of crop domestication and advancement of dietary calorie requirements demanding new high-yield dwarf varieties in the last five decades. The focus on improving phenotypic traits such as grain size and grain yield towards calorie-driven macronutrients has inadvertently led to a loss of allelic function and genetic diversity in modern-day wheat, which suffers from poor tolerance to biotic and abiotic stresses, as well as poor nutritional and phytochemical profiles against high-calorie-driven non-communicable chronic diseases (NCDs). The low baseline phytochemical profile of modern-day wheat varieties along with highly mechanized post-harvest processing have resulted in poor health-relevant nutritional qualities in end products against emerging NCDs. …

Ancient wheat, such as emmer with its large genetic diversity, high phytochemical content, and better nutritional and health-relevant bioactive profiles, is a suitable candidate to address these nutritional securities…

There’s a lot of information about emmer wheat nutrition in this article/book.


Bluing Meat

Inspired by a question from Littlefoot, I went out to do a little sleuthing:

I remember an anthropology professor reminiscing about buying food at open air markets somewhere in Africa, where refrigeration is non-existent: the meat is simply out in the heat, and “somehow everyone doesn’t die.” It’s a bit strange to us, because we’re inundated with messages that improper food handling will lead to the growth of horrible bacteria and death (I even refrigerate the eggs and butter, even though our grandmothers never did and the French still don’t), but our ancestors not only managed without refrigeration, sometimes they actually tried to make the meat rot on purpose.

Helpful Twitter user Stefan Beldie explains that traditionally, pheasants were killed, eviscerated, and then hung for 4-10 days, depending on the weather.

The phrase “bluing meat” is extremely rare these days (try googling it), but extra-rare or uncooked meat is still referred to as “blue”:

Temperatures for beef, veal, lamb steaks and roasts:
Extra-rare or Blue (bleu) – very red – 46–49 °C (115–125 °F)
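For the curious, the two temperature scales in the table are related by F = C × 9/5 + 32. A minimal Python sketch (the function name is mine, not from any cooking reference) shows that the quoted Celsius range converts to roughly 115–120 °F, so the Fahrenheit band in the table is a little wider than a strict conversion gives:

```python
def c_to_f(celsius: float) -> float:
    """Convert a temperature from degrees Celsius to degrees Fahrenheit."""
    return celsius * 9 / 5 + 32

# The "blue" doneness range quoted above: 46-49 degrees C
for c in (46, 49):
    print(f"{c} °C = {c_to_f(c):.1f} °F")
# 46 °C is about 114.8 °F and 49 °C is about 120.2 °F
```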

Of course, there is a bit of difference between food that is merely uncooked/barely cooked, and food that has been intentionally allowed to rot.

Here’s the tale of an Inuit (Eskimo) delicacy, walrus meat that has been allowed to decompose in a hole in the ground for a year (though I suspect not much decomposition happens for about half the year up in the Arctic).

Before you judge, remember that cheese is really just rotten vomit.

Have you ever heard the story that early modern Brits used a bunch of spices on their meat to cover up the taste of rot?

It turns out that this is a myth, a tall tale created by people misunderstanding cookbooks that gave instructions for properly rotting meat before eating it:

One of the most pervasive myths about medieval food is that medieval cooks used lots of spices to cover up the taste of rotten meat. This belief is often presented in the popular media as fact, with no cited references. Occasionally though a source is mentioned, and the trail invariably leads to:

The Englishman’s Food: Five Centuries of English Diet
J.C. Drummond, Anne Wilbraham
First published by Jonathan Cape Ltd 1939

Drummond claimed,

… It is not surprising to find that the recipe books of these times give numerous suggestions for making tainted meat edible. Washing with vinegar was an obvious, and one of the commonest procedures. A somewhat startling piece of advice is given in the curious collection of recipes and miscellaneous information published under the title of The Jewell House of Art and Nature by ‘Hugh Platt, of Lincolnes Inne Gentleman’ in 1594. If you had venison that was ‘greene’ you were recommended to ‘cut out all the bones, and bury [it] in a thin olde coarse cloth a yard deep in the ground for 12 or 20 houres’. It would then, he asserted, ‘bee sweet enough to be eaten’.” 

As Daniel Myers notes, washing with vinegar was not done to reduce spoilage, but to tenderize and get rid of the “gamey” taste of some meats. As for burying your meat to make it less spoiled, this is clearly absurd:

The example that Drummond does give is most certainly not for dealing with spoiled meat. He misinterprets the word “greene” to mean spoiled, when in fact it has the exact opposite meaning – unripe. Venison, along with a number of other meats, is traditionally hung to age for two or three days after butchering to help tenderize it and to improve the flavor. With this simple knowledge in mind, Platt’s instructions are clearly a way to take a freshly butchered carcass and speed up the aging process so that it may be eaten sooner.

Similar instructions for rapidly aging poultry can be found in Ménagier de Paris.

Item, to age capons and hens, you should bleed them through their beaks and immediately put them in a pail of very cold water, holding them all the way under, and they will be aged that same day as if they had been killed and hung two days ago.

The goal of these recipes is not to cover up rot, but to speed up the rotting (or “aging”) process.

Myers also notes that the idea of putting spices on rotten meat is also absurd because spices were horribly expensive–often worth their weight in gold. It would be rather like someone looking at gold-leaf-wrapped caviar and concluding the gold was there to distract the peasants from the fact that fish eggs are disgusting. You would have completely misread the dish. In the Medieval case, it would be cheaper to buy fresh meat than to dump spices on it.

Now, to be clear, what I’ve been calling “rot” is really more “aging.” We only think of it as rotting because we are accustomed to throwing everything in the refrigerator as soon as we get it.

As the Omaha World Herald explains:

Three factors affect the tenderness of meat in all animals, whether it be beef cattle or pheasant: background toughness, rigor mortis and aging the meat.

Background toughness results from the amount of collagen (connective tissue) in and between muscle fibers. The amount of collagen, as well as the interconnectivity of the collagen, increases as animals get older, explaining why an old rooster is naturally tougher than a young bird. Rigor mortis is the partial contracting and tightening of muscle fibers in animals after death and results from chemical changes in the muscle cells. Depending on temperature and other factors, rigor mortis typically sets in a few hours after death and maximum muscle contraction is reached 12 to 24 hours after death. Rigor mortis then begins to subside, which is when the aging (tenderization) of the meat begins.

Tenderization results from pH changes in the muscle cells after death that allow naturally occurring proteinase enzymes in cells to become active. These enzymes break down collagen, resulting in more tender meat. In beef cattle, the aging process will continue at a constant rate up to 14 days, as long as the meat is held at a proper and consistent temperature, and then decreases after that. In fowl, the rate of tenderization begins to decline after a few days.

A common misconception is that bacteria-caused rotting is responsible for meat tenderization, and this is why many find the thought of aging game repugnant. … Maintaining a constant, cool temperature is key to preventing bacterial growth when aging meats. The sickness causing E. coli bacteria grows rapidly at temperatures at or above 60 F, but very slowly at 50 F.

It’s a very good article; RTWT.

From Littlefoot we have an article that delves into the technical side of things: Microbiological Changes in the Uneviscerated Bird Hung at 10 degrees C with Particular Reference to the Pheasant:

Several interesting results from this study. They hung up both pheasants and chickens. The pheasants showed very little microbial growth for the first two weeks, whereas the chickens started turning green on day five. This is probably a result of chickens having more bacteria in them to start with, a side effect of the crowded, disease-ridden conditions chickens are typically raised in.

A taste testing panel found that pheasants that had hung for at least three days tasted better than ones that had not, with some panel members preferring birds that had aged considerably longer.

So if you plan on hunting pheasant any time soon, consider letting it age for a few days before eating it–carefully, of course. Don’t give yourself food poisoning.

Subcultures and Week 1 of Skateboarding

Despite my husband’s insistence that I would wipe out and kill myself, I am successfully still alive after one week of skateboarding. I have also reassured him that I am not going to turn into one of “those people”: snowboarders. (Mostly because I am afraid of going downhill fast, and also because I don’t have the time and money to go skiing.)

I find it mildly hilarious that there is a cultural difference between skiers–proper, refined, pinkies-in-the-air denizens of Deer Valley–and snowboarders–potheads, troublemakers, and young people with attitudes. Waterskiing also comes in two-ski and one-ski varieties, but as far as I know, there is no cultural difference between waterskiers who slalom and those who don’t.

In fact, snowboarding used to be banned at most US (and European) ski resorts:

Even though snowboarding was accepted by the mainstream winter sports industry in the 1990s, and is now recognized as a Winter Olympic sport (debuting in 1998), ski areas adopted the sport at a much slower pace than the winter sports public. For many years, animosity existed between skiers and snowboarders, which led to an ongoing skier-vs-snowboarder feud.[9] Early snowboards were banned from the slopes by park officials. In 1985, only seven percent of U.S. ski areas allowed snowboarding,[10] with a similar proportion in Europe. Because of this, snowboarders sought ways to protest such treatment from resorts owners and to a lesser degree, other skiers. Indeed, the snowboarding way of life came about to rebel against skiing. As a result, snowboarders chose to “shock” skiers by snowboarding at ski-only resorts as a protest.

Today, only Alta, Deer Valley, and Mad River Glen maintain the ban; the other resorts have recognized that snowboarders buy lift tickets, too.

Sam Baldrin has a good article on the conflict: Snowboarding vs. Skiing: The dying feud:

However, in those early days, skiing was still very much an elitist sport. Seen as expensive, and catering largely to the more wealthy citizens, resorts weren’t about to let this new, dangerous craze into their exclusive runs. …

But the boarding boom of the 1980s brought with it a very different type of personality to the slopes; droves of teenage skate punks with an accompanying ‘bad ass’ attitude that the average skier didn’t appreciate. This new form of snow sport brought the lawlessness of street skating to the arena of strict slope etiquette. …

And so the war began; on one side, the traditionally upper class, rich kid skiers, who wanted the slopes free of these rude, dangerous, disrespectful hoodlums with their baggy trousers and “trash and thrash” attitude. On the other, a rapidly growing army of young, enthusiastic new snowboarders, scornful of skiing’s conservative yuppie style, pumped full of teenage angst and reveling in the sport’s rebellious image.

How did two activities that are essentially the same–strapping a board or two to your feet and going downhill–develop radically different subcultures? Some sports obviously attract different sorts of people–basketball players are taller than jockeys, for example–but I doubt there’s anything about fine wine or baggy pants that makes one good at one or the other, and both groups have enough money to afford lift tickets at Vail.

In this case–as skiers and snowboarders have grown less antagonistic over the years–I think it’s mostly founder effects. Learning to ski or board is tricky, but people who could already skateboard had an advantage over those who didn’t. And while plenty of serious skiers saw the potential of snowboarding, once it was outlawed, only outlaws rode snowboards.

And who rides skateboards is itself at least partly a founder effect that doesn’t have much to do with skill, like who lives in cities with lots of smooth concrete.

Of course, young or old, yuppie or punk, one demographic variable unites the majority of serious sports enthusiasts: they’re male. Yes, there are a few sports that women dominate, like rhythmic gymnastics, but the vast majority of athletic subcultures, professional, amateur, or merely fan, are dominated by men–and this is not a founder effect.

Some typical men’s hobbies include riding motorcycles, working on car engines, woodworking, building computers, playing Call of Duty, and sports. Some typical women’s hobbies include reading books/book clubs, arts and crafts, baking, playing the Sims, and shopping.

Men tend to get involved in hobbies that demand high levels of skill–technical or athletic–and they tend to enjoy tinkering for its own sake. They love optimizing their rigs, maximizing performance, or just hauling the motorcycle into the living room to do whatever repairs need done. Women, by contrast, tend to prefer their hobbies less DIY (except for art and baking) and more ready-off-the-shelf.

New hobbies are often male dominated because new things tend not to be very refined or have well-established supply chains: you can’t find them ready-on-the-shelf. The early internet, for example, wasn’t available on phones. To get on the early internet you had to figure out for yourself how to get on Usenet, and few enough people joined each year (mostly in September, when they arrived at colleges that had internet access), that the internet maintained a specific culture. Then in 1993, AOL went live and an unending stream of normal people flooded onto the internet, swamping the original culture and changing it forever, in what is known as “Eternal September.”

Ham radio–which I regard as the precursor to the internet–also required technical knowledge and assembling giant antennae; early rocketry (before WWII) was a highly technical hobby, with many parts and fuels built and mixed by hand.

In the cultural realm, watching anime was much trickier in the early 90s, before you could just stream it on YouTube or Netflix. (I got into anime because my best friend was Japanese, and we watched it together.) In those days you had to look in the Yellow Pages to see if any video or comic shops near you carried it. Fan communities devoted to distributing, translating, dubbing, and subtitling anime developed on the internet–active communities, not just passive consumers.

The entry of large numbers of women into a community tends to mark a fundamental change in the nature of the community, not just because they are women, but also because whatever activity or skill it involves has become easy to get into. You no longer need to build anything or have specialized technical knowledge or spend hours working on a project to get in the culture; just buy something off the shelf and you’re there. Normies of both genders show up. The place changes.

Change isn’t always bad. Most of us seem to like that we can access Google Maps on our phones when we’re lost, or that our favorite shows are easy to find on Netflix or Hulu. I appreciate the skateboarding videos on YouTube that have taught me proper board stances, since there’s no one in my neighborhood I can ask.

But this is still change, and for the people who liked their communities the way they were when they were DIY, something they enjoyed may be lost.

(But don’t worry about me; I won’t be invading your skateparks.)

Anyway, skateboarding, week one:

Since my husband’s assertion that I had bought a “murderboard” and was going to “kill myself,” I have been keeping a list of things that have hurt me worse than skateboarding injuries:

Biting my tongue at breakfast
Stepping on a small plastic Pokemon that nearly punctured my foot
Bumping into the table (I still have the bruise)
Whacking my ankle with the scooter while picking it up
The pain in my elbow from using Twitter

I think a lot of people (including my husband) jump on a skateboard once, the skateboard flies out from under them, they crash to the ground, and they decide that skateboarding must be for people with better balance and pain tolerance than they.

But this is like jumping on a bike without training wheels, immediately falling over, and concluding that bike riding must be really hard.

So if you want to skateboard and you don’t want to fall on your butt, try watching a beginner tutorial video first.

A real skateboard is a bit expensive (mine was about $120), which is a fair impediment to figuring out whether you enjoy skateboarding enough to want to put in the effort to learn it. A good compromise might be starting with a Razor Scooter, which is pretty fun to ride but more stable, thanks to the handlebar, or borrowing a skateboard from a neighbor.

After my first couple days of awkward step, skate, step, skate, step, skate, leap off the board, repeat, I got used to keeping my weight on my board foot and swishing the ground with my free foot. In the process I had two falls, but neither of these actually resulted in injuries or even pain. I decided to wear a helmet anyway, just in case.

Little known fact: humans are footed, just as they are handed. If you’re having trouble getting comfortable on your board, it might be because you’re using the wrong foot. When I use my non-dominant foot to practice different stances, I feel terribly clumsy and awkward.

So far everyone who has said anything at all has been very friendly and supportive (obviously I don’t look like a miscreant teenager, but like a mom supervising her kids at the playground), and most people seem to be impressed that I can just stay on the board while gliding across a flat surface.

I was originally going to name my board “murderboard”, but my lack of injuries (other than a small bug that got squashed) has made me reconsider.

I will probably never learn any fancy tricks (because I am not very good at athletic things) but I’ve had a really fun first week and am happy to have a hobby that I can actually discuss with strangers (unlike my blog).

We’ll be discussing legal systems on Friday.

Go Outside

So I bought a skateboard.

I’m not going to turn into a “skater” (I am about as athletic as a rock). I just want something to ride around the neighborhood on and entertain myself while my kids are on the playground.

I started riding this thing because the kids and their friends were all picking out vehicles to ride to the park and even though there was a shortage, no one wanted the pretty pink princess skateboard. I can’t really blame them, but to prove that it is actually rideable, I rode it.

After a couple days of riding around on this thing (which is not a good skateboard, and I don’t know why we own it), I realized that 1. skateboarding is fun and 2. I need a skateboard that doesn’t come to a halt when I put both feet on it.

The new board is (unsurprisingly) way better than the pretty pink princess board. (If you’re getting a skateboard, it seems that you should shell out for a real board.) So I have been outside a bunch this week, rolling around the neighborhood and occasionally wiping out.

And I feel absolutely amazingly good. Not because I’ve avoided the internet (though I admit that I can’t use Twitter and skate at the same time) but because that’s just how fresh air, sunshine, and exercise are. They’re good for you.

My outside adventures actually started a few months ago when I decided to hold a garage sale. This simultaneously forced me outside all day and resulted in a cleaner, less cluttered house (and money in my pocket). Since then I’ve been trying to get out more–to get us to the park or playground if not every day, at least several times a week.

Going outside more has certain additive effects–the kids’ friends who live in the neighborhood know we are likely to be out and so are, in turn, more likely to come out. And if their friends are out, my kids are more likely to get out. Having a plethora of outside toys like bikes and scooters so that everyone has something to ride helps a lot, by the way. (I find most kids in our neighborhood are oddly lacking in this area–you know, if you can afford two cars, you can afford a scooter from Goodwill.)

Sometimes getting out is hard. Sometimes you have to force yourself. Sometimes you have to force the kids, too. And sometimes the outside is a disaster. Sometimes you get stung by a bee, or hit by a stray frisbee, or someone falls in the lake. But keep trying. Start small. “Outside” doesn’t need to be kayaking down the fjords or hiking in the Grand Canyon. It can just be your backyard. Just turn off your phone and get out there.

You don’t even need to have kids to go outside. (They are a convenient excuse for why I’m doing chin ups on the monkeybars, though.) Ride a bike. Plant a garden. Walk.

If a clumsy oaf like me can skateboard to the park, you can get some exercise, too.

Go get some sun. It’s fall and the leaves are beautiful, skittering across the road. Exercise warms you up and the wind cools you back down. And when you step back inside, you’ll feel like you’ve brought the sun with you.

Have a great weekend.

A Screed for the Short

Disclaimer: Comments along the lines of “What are you talking about? Short people are totally treated just like tall people and people date them all the time” will be ignored because they are stupid.

No silliness about beauty being on the inside, or how you, personally, think everyone is beautiful, or you know a short guy who beat the odds and got laid, or worst of all, that a famous rich millionaire is short and popular so therefore normal people can do it, too. Normal people don’t have millions of dollars.

There is a truth, universally acknowledged by short men: women don’t want to date them.


Short men are discriminated against in the dating market, were bullied more by their peers in school, are discriminated against at job interviews, and are generally treated like shit–and when they complain, they get told that discrimination against short people isn’t real.

I don’t talk much about discrimination, but discrimination against unattractive people (and being a short man is an obvious kind of unattractiveness) is obviously real. No one wants to date an ugly person, even if that ugly person is a wonderful person inside. That’s just the cold, ugly truth that every ugly person lives.

I recently read a rather sad story about a young man’s untimely death:

I’m not exactly sure when he died. My father called me with the news on Saturday, November 4, 2017, but Yush was in Italy, which is six hours ahead. I later learned that a blood clot shot up from his leg and blocked his lungs; a pulmonary embolism.

The blood clot was an unfortunate complication of leg-lengthening surgery Yush had pursued because he was short.

The author, Yush’s sister, never seems to understand what was actually motivating him. She focuses on his attempted suicide back in college, a decade beforehand, and on their estrangement due to differing opinions on feminism:

Both Yush and I were motivated by a vision of how we wanted to change the world around us. However, where he applied his vision to the physical world, I applied my observations to social constructs, questioning and challenging the power structures around me. I asked Prasad [her feminist therapist] how Yush went from being my best friend to someone I couldn’t even speak to, especially since I believed that, at heart, we wanted the same things: to be free of societal expectations, and to be treated with respect and dignity regardless of appearance, race, or gender. “The reality is, what patriarchy is meant to do is divide,” Prasad told me. “Men can still be lured by it and think, Oh if I take on these characteristics, I get what I want,” she said.

One of society’s eternal problems is people using a lot of mumbo-jumbo to sound smart.

Your brother doesn’t like feminism because feminism is about helping women, and he’s not a woman. He is a short, brown guy whom most women don’t want to date. He wants a philosophy that helps him.

That’s not “the patriarchy.” That’s your brother being a human being.

Yush’s view of manhood, coupled with unmanaged depression is one that, I think, inflicted pain, created resentment, and exacerbated his insecurities. In 2015, a few months before our estrangement, Yush told me he was pushed out of the company he helped build with men he had thought of as brothers; the betrayal deepened a belief that he was not taken seriously, or treated with the same level of respect as other male entrepreneurs, despite his profound technical knowledge and general brilliance.

Jesus fucking christ, lady. Your brother gets pushed out of the company he founded, and your reaction is to blame it on his view of “manhood” rather than, you know, his co-founders being assholes?

Can you pause for one minute and contemplate the possibility that your brother’s pain was REAL? That some things in life were actually tough for him, and no amount of medication in the world would paper that over? Even women don’t like getting pushed out of things they created or betrayed by their friends.

The author keeps going on about how her brother just needed more treatment for his depression (she doesn’t give us any reason to believe he was depressed at this point, but she thinks he was) instead of realizing that he had actual, real world problems he was trying to solve.

Oh, the chorus cries, but only a crazy person would get surgery to alter their appearance so people will treat them better!

Apparently the chorus has never heard of liposuction, face lifts, breast augmentation, gastric bypass, or any of the myriad surgeries that people get every day to improve their looks. There’s a very high likelihood that the author also accepts sex change operations as perfectly reasonable. More mundanely, people alter their appearances via makeup, nice clothes, haircuts, wigs, and endlessly on–humans care about how they look and try hard to affect how other people treat them by enhancing their appearances.

We are supposed to sympathize with the protagonist in Gattaca, not conclude that he was crazy because he was willing to undergo painful surgery to pursue his dream–even though his dream was much less likely to actually come to fruition (very few people get to be astronauts) than Yush’s was.

You can say that most of these operations are less painful than leg lengthening, but the article makes it clear that Yush sought out an operation that was supposed to be less painful than the standard version–and sex reassignment surgery is pretty darn painful.

Finally the author does talk to a psychologist who counsels short men (she apparently cannot be bothered to actually talk to a short man):

Men she’s counseled, she said, often “feel like they’re at a disadvantage. They feel like they’re not taken as seriously in terms of work environment. They feel like romantic partners don’t see them as being as attractive as they could be if they were taller,” she said.

She can’t even bring herself to just come right out and say that men like her brother are discriminated against! She couches this in a quote, and a weaselly one, at that! Short men don’t just feel like they are at a disadvantage, they are actually at a disadvantage! Your feminism teaches you to see power structures and identify oppression, but you can’t even bring yourself to just directly state in plain English that people discriminated against your brother?

While the decisions he made were his own, I believe that Yush felt that society’s narrow confines of what it means to be a man—especially a brown man in America—offered him little choice.

The problem here is how society treats short men, not manhood in the abstract.

Finally–finally!–nearly 6,000 words in, she admits that her brother was right:

Yush’s observations about power, masculinity, and his standing in the world were not incorrect. Research has shown that tall people are richer and more successful, and Western culture has a long history of trying to emasculate Asian-American men (East Asian men in particular)…

Of course, she still can’t bring herself to admit that a great deal of the discrimination against short, Asian men is done by women, on the dating market.

While Yush and I saw some of the same problems in society, our responses were opposite: I have found a community of people who reject stereotypical gender identities, roles, and behaviors, whereas I think Yush internalized these messages, deepening insecurities that burrowed even further due to unmanaged depression.

Oh, honey. That works. Because. You’re a woman.

Try. Just try. To focus. For a moment. On what your brother wanted. And the options available to him.

Let’s imagine for a moment that instead of the siblings being different genders, they were different races. (Half-siblings.) One sibling is white. The other is half black.

The half-black one is being discriminated against socially, romantically, and economically because of his race, and so decides to bleach his skin. He has a tragic allergic reaction to the skin bleaching cream and dies.

Would the white sibling go online and wonder why their half-black brother didn’t embrace a political philosophy that promotes the needs of white people? Would they proclaim that with just more medication and therapy, their brother would have been okay with people discriminating against him? Would they quote some wishy-washy psychologist about how black people feel like they’re discriminated against?

Or would they scream at the world that discriminated against him?

Maybe Yush did need fixing. Maybe he was stupidly fixated on something that wasn’t actually a problem. Maybe he had a hot wife or girlfriend, tons of friends, and a great job. But I don’t see any reason to declare one cosmetic surgery “crazier” than all the others. I don’t look down on people for wanting to look nicer or have nicer lives.

Fundamentally, most people just want to be happy. They want to be loved.

Before someone objects that being short isn’t the same as being black, blah blah blah, here’s a quote from a different story about a short man, The Awfulness–and Awesomeness–of Being Short:

In the years that have passed since then, I’ve come to two major conclusions about being a short man in Western society:

1. It’s awful.

2. No-one wants to hear you complain about it.

I tend to keep quiet on the subject. I’ve heard many people say to me, “Oh, come on! People don’t treat you any differently because you’re short!” (Every person who has ever said this to me has been at least 5ft 11in.)

But I know the reality of what it means to be a short man in our society. There is as much discrimination about size as there is about gender, race, religion, etc. …

According to Malcolm Gladwell’s book, Blink, it is estimated that an inch of height is worth an extra $789 (£699) a year in salary. This means that a man who is 6ft tall, might earn $7,890 more a year than I would for the same job. Over the course of a 40-year career, that could amount to a difference of $315,600.

When I read that I didn’t even feel surprised. In my heart, I always knew it was true. …

Have you ever walked into a room and felt yourself evaluated and dismissed in a matter of seconds?

Short men know that feeling very well. This is where disparaging terms like “Little Napoleon” come in, and the desire to succeed is dismissed as evidence of “short man syndrome”. If a 6ft 2in guy stands up for himself, it’s described as having self-confidence, but someone my height fighting to be heard is deemed insecure and needy.

In a marketing job I had, I would be talked over in meetings. I’d make a suggestion, which would get ignored, and then a few minutes later, someone else would make the same suggestion. People responded “Oh yes, that’s a good idea” to the second person. …

What about when it comes to dating?

The reality is, as a short man you can expect eight out of 10 women to immediately dismiss you as a potential sexual partner at first sight. The chances are, the remaining two out of 10 will only give you a couple of minutes to make your case before making excuses.

Whenever I say to my female friends that women don’t like dating short men, they almost always say the same thing: “That’s not true. I bet there are lots of women who love short men.”

“Have you ever dated one?” I ask.

“Well, no…” they reply.

“Would you?”

An uncomfortable silence follows.

According to Freakonomics, the bestselling book by Steven Levitt and Stephen Dubner, short men are statistically less likely to receive responses from their online dating profiles than any other demographic group. The fact that I’m averaging one a year on my online dating profile means I’m actually breaking the odds through the sheer force of my amazing personality.

And I’m just going out on a limb: it’s probably worse for short Asian and Indian guys.

Now, I’m focusing on Yush’s death because I’m pissed about it, but it’s just an example of how often people refuse to acknowledge discrimination against short people–and unattractive ones in general.

Yet SJWs never talk about “tall privilege” or “pretty people privilege”.

It’s just kind of sad.

People who cannot find a place for themselves in society have nothing to lose if society burns

Detroit Abandoned Buildings

I have something I’m trying to write, but the words aren’t flowing. Still, it’s a general fact: you preserve what’s yours; you love what’s yours.

When you don’t feel part of society, it stops mattering to you whether society burns. You know the principle: not my circus = not my monkeys.

Which means that you can’t half-ass community. Long term, you can’t have an underclass. You can’t have outcasts. You have to have community, and you can’t force it through some idiotic top-down “team building” exercise, because dammit, that will just make people hate each other even more. Community has to be a real thing that real people actually enjoy being part of, or they will, at best, let it fall apart; at worst they burn it down with you in it.

The Detroit riot of 1967 left 43 dead and 2,000 buildings destroyed; Detroit has yet to recover.

“Xenophobia” is apparently the fancy new word people are using for old-fashioned racism in South Africa:

Prior to 1994, immigrants from elsewhere faced discrimination and even violence in South Africa. After majority rule in 1994, contrary to expectations, the incidence of xenophobia increased.[1] Between 2000 and March 2008, at least 67 people died in what were identified as xenophobic attacks. In May 2008, a series of attacks left 62 people dead; although 21 of those killed were South African citizens. The attacks were motivated by xenophobia.[2] In 2015, another nationwide spike in xenophobic attacks against immigrants in general prompted a number of foreign governments to begin repatriating their citizens.[3] A Pew Research poll conducted in 2018 showed that 62% of South Africans viewed immigrants as a burden on society by taking jobs and social benefits and that 61% of South Africans thought that immigrants were more responsible for crime than other groups.[4] Between 2010 and 2017 the immigrant community in South Africa increased from 2 million people to 4 million people.[4]

Why the hell did anyone think that majority rule by black people in South Africa would result in less racism? Is there something magical about voting that stops people from being racist? No, you idiots. (Not you, my gentle reader. I know you never thought such nonsense; you know that the media has reported on plenty of racist Americans voting in elections.)

According to a 2004 study published by the Southern African Migration Project (SAMP):

“The ANC government – in its attempts to overcome the divides of the past and build new forms of social cohesion … embarked on an aggressive and inclusive nation-building project. One unanticipated by-product of this project has been a growth in intolerance towards outsiders … Violence against foreign citizens and African refugees has become increasingly common and communities are divided by hostility and suspicion.[7]  “

What, being aggressively pro-your-own-group leads to being aggressively anti-other-groups? Who could have figured that one out?

Reminder that Johannesburg used to be a first world city.

Meanwhile, Nigerian TV has some interesting segments. “Shrine” seems to be a euphemism for “human sacrifice cult”:

Maybe some of those South Africans are on to something?

— Oh jeebus, I just read about lobotomies. Changing course, guys:

 Freeman’s name gained popularity despite the widespread criticism of his methods following a lobotomy on President John F. Kennedy’s sister Rosemary Kennedy, which left her with severe mental and physical disability.[2] … Walter Freeman charged just $25 for each procedure that he performed.[8] After four decades Freeman had personally performed as many as 4,000[11][12][13] lobotomy surgeries in 23 states, of which 2,500 used his ice-pick procedure,[14] despite the fact that he had no formal surgical training.[2] … Up to 40% of Freeman’s patients were gay individuals subjected to a lobotomy in an attempt to change their homosexual orientation, leaving most of these perfectly healthy individuals severely disabled for the rest of their life.[15]… His patients often had to be retaught how to eat and use the bathroom. Relapses were common, some never recovered, and about 15%[16] died from the procedure. In 1951, one patient at Iowa’s Cherokee Mental Health Institute died when Freeman suddenly stopped for a photo during the procedure, and the surgical instrument accidentally penetrated too far into the patient’s brain.[17] Freeman wore neither gloves nor a mask during these procedures.[17] He lobotomized 19 minors including a 4-year-old child.[18]

“We went through the top of the head, I think Rosemary was awake. She had a mild tranquilizer. I made a surgical incision in the brain through the skull. It was near the front. It was on both sides. We just made a small incision, no more than an inch.” The instrument Dr. Watts used looked like a butter knife. He swung it up and down to cut brain tissue. “We put an instrument inside”, he said. As Dr. Watts cut, Dr. Freeman asked Rosemary some questions. For example, he asked her to recite the Lord’s Prayer or sing “God Bless America” or count backward. “We made an estimate on how far to cut based on how she responded.” When Rosemary began to become incoherent, they stopped.[23] It quickly became apparent that the procedure had not been successful. Kennedy’s mental capacity diminished to that of a two-year-old child. She could not walk or speak intelligibly and was incontinent.[24]

The procedure’s inventor, Egas Moniz, won a Nobel Prize in Medicine for it.

I don’t trust doctors very much.

A few other random thoughts:

I have no opinion on the Hong Kong protests because I am not from HK or China and don’t speak Chinese and so don’t know enough to have an opinion. I do think, however, that there is a frequent–and understandable–impulse to crave excitement that modern life cannot otherwise supply. We want to be heroes; we want to be like the people in games and movies.

Even Minecraft, a game that starts with you digging dirt blocks with your bare hands, ends with you fighting a dragon. People want that dragon; they want to be heroes, and who cares if it involves burning down someone else’s house? Characters in movies never stop to consider whether their rampages are flipping innocent people’s cars or preventing normal people from getting to their jobs; these mundane considerations pale to nothing when there is an ENEMY to be conquered… but often enough that enemy is just an invention of our own boredom.

Antifa, too, want to play-act being important by killing the enemy. It’s the same impulse that leads normal people to play video games; normal people are just good at distinguishing between games and real life.

Does Special Ed do any good, and other items of interest

A study on the genetic correlates of empathizing and systematizing with different psychiatric conditions found, unsurprisingly, that autism correlates a bit more with systematizing than empathizing, but interestingly also found the genetic correlates of both empathizing and systematizing correlate with schizophrenia:


I’m not really sure how this works, but I’ve never bought into the “autism and schizophrenia are opposites” theory. Too many people seem perfectly capable of getting diagnosed with both conditions at once. Indeed, the stereotypical schizophrenic delusion is filled with science fiction tropes (telepathic communication, UN black helicopters, subdermal tracking devices, perpetual motion machines, etc.) that are far more familiar to systematizers than to empathizers.

The authors note that the anorexia correlation holds even after you control for sex.


One suggestion for dealing with deepfakes and authentication: blockchain:

So let’s say we create an image. How can we set things up so that we can prove when we did it? Well, in modern times it’s actually very easy. We take the image, and compute a cryptographic hash from it (effectively by applying a mathematical operation that derives a number from the bits in the image). Then we take this hash and put it on a blockchain.

The blockchain acts as a permanent ledger. Once we’ve put data on it, it can never be changed, and we can always go back and see what the data was, and when it was added to the blockchain.
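The hashing step the quote describes is easy to try out. Here’s a minimal sketch in Python using the standard library (`image_fingerprint` is just an illustrative name; the ledger part is out of scope):

```python
import hashlib

def image_fingerprint(path: str) -> str:
    """Compute the SHA-256 hash of a file's bytes.

    This hex digest is what you'd record on a ledger; anyone who later
    obtains the image can recompute the hash and confirm it matches.
    """
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large images don't have to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Changing even one byte of the input yields a completely different digest:
print(hashlib.sha256(b"original image bytes").hexdigest())
print(hashlib.sha256(b"original image bytes!").hexdigest())
```

The digest is a fixed-size number derived from the image, so posting it publicly proves you had the image at that time without revealing the image itself.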

Does special ed do any good? Looks like no:

The purpose of the current study was to compare the adulthood outcomes of children who received special education services with those who did not, using one-to-one match propensity score methodology. Our analyses revealed that Hispanic students showed evidence of benefitting from special education, in terms of reporting better physical health and less family conflict, compared to non-Hispanics. Despite this, the majority of results suggest that individuals who were born between 1980 and 1994 and received special education services did not differ on adulthood outcomes across educational attainment, social adjustment, economic self-sufficiency, and physical health, compared to individuals with the same likelihood of receiving services who did not receive services. In other words, children who received special education services did not fare better than children who were equally likely to have received services, but did not receive them.

Note that they compared kids who were equally qualified for special ed but just either did or didn’t receive services, and found no meaningful difference between them (excepting Hispanics.)

The finding that it does something for Hispanics might be a version of the middle-aged-Hispanic-woman-syndrome (that is, if you divide your data into enough categories, by random chance one of them may look significant but really isn’t) but I think it more likely that it’s because this special ed amounts to extra English practice at a critical time in the child’s life, allowing ESL kids to learn faster and adapt to the otherwise English-speaking classroom faster, putting them ahead of peers who learned English more slowly and so fell behind in school.

That special ed accomplishes very little (though it might be more pleasant for some of the folks involved) is rather disheartening, especially given the expense. According to the NEA:

The current average per student cost is $7,552 and the average cost per special education student is an additional $9,369 per student, or $16,921

According to DisabilityScoop, 6.7 million kids are in special ed, for a cost of about 62.8 billion dollars. 
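For what it’s worth, the back-of-the-envelope arithmetic behind that figure checks out, using the NEA and DisabilityScoop numbers above:

```python
extra_cost_per_student = 9_369      # NEA: additional annual cost per special ed student
special_ed_students = 6_700_000     # DisabilityScoop: kids enrolled in special ed

total = extra_cost_per_student * special_ed_students
print(f"${total / 1e9:.1f} billion")  # → $62.8 billion
```

(That’s only the *additional* cost over regular schooling; counting the base $7,552 per student as well would roughly double it.)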

Of course, the study is on kids who went to school in the 80s and 90s, and special ed might have improved since then, but I see no reason to assume it has. That is a LOT of money for no improvement, money that could have gone toward better playgrounds, more art supplies, less crowded classrooms, or just stayed with the taxpayers.

It should come as no surprise that I think the best place for most kids who qualify for special education is at home (since I am homeschooling my own children), where they can get their entire education individually tailored to their exact strengths and weaknesses.

Too many smart (or average!) kids who don’t fit within the school environment–boys who are wiggly or immature, girls who are spacey and distracted–get labeled as “disabled” and then pushed pushed pushed to perform the necessary school-related behaviors, rather than simply put into a different environment where these behaviors become irrelevant and they can focus on learning.

School itself is a fairly recent institution, based largely on the German model. It was not created by scientists who figured out some optimal way to get students to learn and prepare them for life; it’s just a particular system that we happen to have, and some kids are not suited for it.

Unfortunately, most educational interventions, in the long run, don’t do very much. The standard stuff, like teaching a kid to read and write, does a ton. Almost everyone benefits from clear, direct instruction in “what those squiggly lines on the page mean.” And everyone benefits from getting enough sleep, eating plenty of healthy food, etc. Everyone benefits from a loving, peaceful home life (even if it doesn’t show up much on IQ tests.) Everyone benefits from adequate iodine levels and not catching malaria.

Getting beyond these basics, though, is much trickier.