One of the big problems with psychiatric medications is that they tend to stop working over time. Mundanely, they do this as your body processes and excretes the chemicals: they wear off. More annoyingly, brains will actually up- or down-regulate their own activities over time in order to reestablish normalcy. Alcohol, for example, is a depressant, so the brains of alcoholics actually become more active over time in order to achieve a more normal brain state. At this point, if you remove the alcohol, the brain can no longer function, because it is now too active: alcoholics in withdrawal can go into seizures.
If you’re trying to give people medications to make them feel better, like anti-depressants or anti-anxiety drugs, then you have to fight against these two problems: 1. you don’t want the medication to just wear off every evening, leaving the patient in a funk for the rest of the day; and 2. you don’t want the patient to become habituated to the medication, where it not only no longer works, but if they try to go off of it (perhaps to switch to another medication), things get much worse.
So I was thinking, why not use the rebound effects? Suppose a depressed person took a medication right before bed that, like alcohol, was effectively a downer, but would wear off in 8 hours and leave them in a happier state? And after three months of constant use, maybe their brains would habituate to the medication by producing more of whatever counteracts an unhappy state?
Has anyone studied or tested any drugs that work like this?
There’s an obvious downside here, which is that you’re intentionally trying to make someone who already feels bad feel worse, which is why you’d probably want to couple it with some sort of sleep aid (and that probably wouldn’t work with anything that makes people anxious, so maybe it’s not an effective anxiety treatment). You’d want to keep a very close eye on people when starting such a treatment, of course.
But more generally, has anyone tried to use rebound states and habituation to get the brain to where they want it to be, rather than fighting against these? If it worked, we could call it reverse psychiatry.
Jordan B. Peterson, darling of the right, punching bag of the left, has had an amazingly shitty year.
Peterson rocketed to fame after publishing a couple of books and making some fairly anodyne (as far as I can tell) statements about the encroachment of political correctness on college campuses and in Canadian Law.
Fame is bad for people: just look at the lives of movie stars. At this point, Hollywood has probably developed some protocols for dealing with some of the unpleasant parts of being famous–I doubt Johnny Depp reads all of the mail he receives; Lady Gaga probably has someone who manages her online presence, etc–but we know Peterson wasn’t doing this because his daughter is doing his press releases.
Authors don’t expect to become famous, much less reviled.
I should note that I haven’t actually read Peterson’s book (I’m not in the market for self-help), nor have I watched more than a smattering of podcasts/interviews, but I have spent enough time here on the internet to get the general flavor of things. Peterson has always struck me as a basically kind-hearted, well-intentioned person who was trying to help others, not tear them down, so even if I disagreed with this or that specific thing he said, he still seemed like a pretty decent guy.
In exchange for being basically decent and trying to help people, Peterson received an amazing amount of hate. The left reacted to him like a demon casting off its disguise and screaming in hysterical rage.
Most famous people get more love than hate; this level of hate isn’t good for anyone, much less someone who isn’t a sociopath or a murderer.
Despite the hysterics that JBP was going to destroy civilization, he has faded pretty quickly from view. His time in the spotlight ended with a speed that makes all of the hysteria look, in retrospect, absurd. He wasn’t a threat; he was just a guy who published a book and had his fifteen minutes of fame.
The benefit of hindsight makes the lunacy of it all the stranger. I can’t think of a similarly mid-profile leftist (Peterson is way below the fame level of Krugman or Ta-Nehisi Coates) who has received the same level of vitriol. Maybe David Hogg? (But maybe that’s just sampling bias due to the particular things I happen to read.)
Peterson faded from view in part because there isn’t very much for intellectual “right wingers” who aren’t insane and aren’t on TV to do. Books take a long time to write, and hosting a regular podcast gets old. The idea that JBP was part of the “Alt Right” was only ever correct in the vaguest sense of him not being part of the mainstream Republican right, which I wouldn’t really expect him to be since he’s not even an American. He doesn’t seem to be racist, think we should repeal the 19th amendment, or want to invade Poland. The idea that he is some sort of gateway to the Alt Right proper is the kind of fevered nonsense that comes of trying to smash all human existence into a single left-right axis with everything that is not explicitly trying to accelerate leftward labeled as “reactionary.”
But anyway, Peterson’s life since he dropped out of sight has apparently been absolutely awful. According to his daughter, “Dad was put on a low dose of a benzodiazepine a few years ago for anxiety following an extremely severe autoimmune reaction to food.”
This is maddeningly specific and unspecific at the same time. What sort of autoimmune reaction? What sort of food? Is he allergic to shellfish? I am familiar with some of the conditions that might get characterized this way, eg:
In a joint effort, Ye Qian, PhD, professor of dermatology, and Timothy Moran, MD, PhD, assistant professor of pediatrics, found that walnut allergen, in addition to inducing allergic diseases to certain individuals, could also promote autoantibody development in an autoimmune skin disease called pemphigus vulgaris. …
Two major outcomes of a dysfunctional immune system are allergy and autoimmunity. Growing evidence suggests there are some connections between the development of these two abnormalities.
Can autoimmune conditions cause anxiety? Presumably they can cause all sorts of things, especially if we play fast and loose with what we call “autoimmune.” People who are breaking out in hives and feel their throats constricting because they just ate a peanut presumably feel a lot of anxiety. Some people who are sensitive to wheat experience psychiatric symptoms (eg, celiac psychosis) that are caused by some sort of weird bodily reaction to the wheat.
So this is not a crazy thing to claim, but it might be garbled since some people use terms like “autoimmune” very loosely.
BUT, if the anxiety was caused by an autoimmune reaction to food, then the correct response shouldn’t have been psychiatric medication. It should have been treating the autoimmune disorder (and eliminating whatever food was triggering it from the diet). For that you probably need immune-suppressing drugs like infliximab or steroids like prednisone.
Anxiety is unpleasant and benzos can bring it down, especially in an emergency, but if the autoimmune condition is triggering the anxiety, then you really aren’t making it go away. It’s like having the flu: if the flu is causing a fever and you take an aspirin to bring down your fever, well, you still have the flu and you still feel shitty.
Except instead of aspirin, you’re taking something that is much stronger and has a much higher risk of side effects.
So at least from what she’s said (and I admit that this might be a highly compressed or slightly garbled account of things), Peterson shouldn’t have been on benzos at all and had a different medical disorder that effectively went untreated.
According to his daughter, Peterson’s dose was increased when his wife developed cancer. Cancer is understandably extremely stressful and people need help getting through it, though I question the wisdom of giving psychiatric medication for people going through conditions which really ought to make you feel shitty. If your wife is dying and you don’t feel bad, I think there’s something wrong.
At this point, the benzos stopped doing their job (perhaps because of the untreated autoimmune disorder?):
It became apparent that he was suffering from both a physical dependency and a paradoxical reaction to the medication.
This is really interesting, at least from an abstract point of view.
To radically over-simplify the brain, think of it as having two potential directions, up and down. When you up regulate something, you get more of it. When you downregulate, you get less of it. The actual mechanics involved are obviously way more complicated. Sometimes a chemical has an exciting effect, so more of that chemical means more of the effect you want, and sometimes a chemical has a depressing effect, so more of that chemical means less of the effect you want. Brains also have receptors, which have to be present to actually use the chemicals, so it doesn’t matter how many chemicals you have if you don’t have any receptors to receive them.
Anti-anxiety drugs, like alcohol, are designed to depress the brain. Here’s a great video by ChubbyEmu explaining how alcohol dependence works:
I don’t know the exact mechanism of benzos, but the principle is likely the same. As you put in more and more depressants, trying to down-regulate the brain, the brain up-regulates something else to reassert homeostasis. This is how you build up tolerance to drugs and even become dependent on them: the physical architecture of your brain has been modified to deal with them. Take the drugs away, and suddenly the physical architecture of the brain no longer has the right balance of chemicals to receptors that it needs. If you take out a depressant, suddenly your brain is massively up-regulated. If you’ve been chugging alcohol, all of that un-depressed brain activity is liable to explode into a seizure.
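The tolerance-and-rebound dynamic above can be sketched as a toy feedback loop. To be clear, this is a cartoon, not pharmacology: all of the numbers and the linear “adaptation” rule are invented for illustration.

```python
# Toy model of drug tolerance via homeostatic up-regulation.
# All quantities and rates are illustrative, not physiological.

def simulate(days_on_drug=30, days_off=10, drug_effect=1.0, adapt_rate=0.1):
    """Brain 'activity' = baseline + adaptation - depressant effect.

    Each day the brain nudges `adaptation` upward to close the gap
    with baseline; when the drug is removed, that accumulated
    adaptation overshoots and activity rebounds above baseline."""
    baseline = 1.0
    adaptation = 0.0
    history = []
    for day in range(days_on_drug + days_off):
        drug = drug_effect if day < days_on_drug else 0.0
        activity = baseline + adaptation - drug
        # Homeostasis: adapt toward whatever restores baseline activity.
        adaptation += adapt_rate * (baseline - activity)
        history.append(activity)
    return history

h = simulate()
print(f"first day on drug:  {h[0]:.2f}")   # depressed well below baseline
print(f"last day on drug:   {h[29]:.2f}")  # tolerance: back near baseline
print(f"first day off drug: {h[30]:.2f}")  # rebound: overshoot above baseline
```

Run it and the three stages appear in order: the drug works at first, then tolerance builds until activity is nearly normal despite the drug, and then abrupt withdrawal leaves activity far above baseline (the seizure-risk regime).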
In Peterson’s case, when he tried to go off benzos, he developed akathisia, a condition usually described as restlessness but described by people who’ve had it as an absolutely maddening compulsion to move endlessly for hours and hours and hours on end with no rest or stillness, no ability to turn off the racing thoughts in your brain or stop talking like you are a train hurtling 300 miles an hour down the track until you fall asleep, exhausted, only to wake up the next day and do it all over again until you want to put a bullet in your brain.
I am pretty sure that you can recover from this as your brain eventually resets to its original balance, but that takes a very long time and in the meanwhile you are still dependent on the same drug/medication that caused the problem in the first place. (A hospital dealing with a patient going through acute alcohol withdrawal will give the patient alcohol to stop their seizure, for example.)
Here is where it seems that Mikhaila and her dad gave up on “North American” medicine and went off to Russia to detox Peterson cold turkey.
After several failed treatment attempts in North American hospitals, including attempts at tapering and micro-tapering, we had to seek an emergency medical benzodiazepine detox, which we were only able to find in Russia.
I understand where they’re coming from and their frustration, but once you’ve built a tolerance to a drug, there is no safe way to detox without tapering, and tapering is just going to be shitty, because your brain is now designed to use that drug and you can’t get around that until you build new brain architecture.
Unfortunately, just as going cold turkey off an alcohol addiction can cause seizures, so taking Peterson off the benzos seems to have had terrible effects, and he ended up in a COMA. Excuse me, a medically induced coma. I think they usually do that because someone has gone into uncontrollable seizures, but maybe there are other reasons for them:
She and her husband took him to Moscow last month, where he was diagnosed with pneumonia and put into an induced coma for eight days. She said his withdrawal was “horrific,” worse than anything she had ever heard about. She said Russian doctors are not influenced by pharmaceutical companies to treat the side-effects of one drug with more drugs, and that they “have the guts to medically detox someone from benzodiazepines.”
There is just so much horrifying here; Peterson, please do not ever place your life in your daughter’s hands again. She does not understand addiction. Look, I understand your reluctance to try to treat the akathisia with more medications, but that is not a good reason to go to Russia. Peterson could just have refused the prescription for anti-akathisia drugs while still continuing a controlled, tapered detox in a “North American” hospital. The fact that they apparently couldn’t find any doctors in all of “North America” who would sign off on this plan, not even a “naturopath”, is a huge red flag. Of course his withdrawal was “horrific”; that’s why the doctors kept telling you not to do this fucking thing but you had to go drag your dad to some second world country to find doctors willing to gamble with his life.
By the way, a “coma” shouldn’t be “horrific.” By nature, people in comas don’t really do anything. They’re asleep. Something is being left out of this story.
Jordan Peterson has only just come out of an intensive care unit, Mikhaila said. He has neurological damage, and a long way to go to full recovery. He is taking anti-seizure medication and cannot type or walk unaided, but is “on the mend” and his sense of humour has returned.
Aha. Seizures. Looks like I was right. The “horrific” part of this ordeal was most likely her dad going status epilepticus. But let’s all admire the “guts” of Russian doctors to go along with this absolutely insane idea and give her dad permanent brain damage. Great job, Mikhaila.
Everything about this is horrifying. Peterson strikes me as a decent man who wanted to make people’s lives better. Whether his advice was good or not, most of it didn’t sound outright terrible. Hard to go wrong with “clean your room.” He’s been hit with a ton of hate, his wife had cancer, and he was, from the sounds of it, incorrectly put on very strong and dependency forming medications. Getting off the medication became its own hell, so his daughter gave up on “North American” medicine and went for the cold turkey method, which of course caused seizures and brain damage.
I know where people are coming from when they look at conventional medicine and say, “Gosh, that seems wrong.” Yes, putting Peterson on benzos in the first place may have been wrong. Increasing his dosage may have also been wrong. There may have been other wrong decisions in there. But that doesn’t make going off cold turkey the right decision.
There’s this awful place you end up when you have a medical condition that falls just on the edge of mapped medical territory. We are great at treating broken bones. Trauma medical care is amazing. We can transplant organs and save people from heart attacks. Antibiotics and vaccines are also amazing. And we have solved many long-term conditions, like type 1 diabetes.
Autoimmune conditions are much harder to treat and much less well-mapped territory. Sometimes doctors are wrong. Sometimes ordinary people have good ideas that medicine hasn’t recognized yet. Sometimes a specialized diet like eating just meat is exactly what someone needs. And sometimes it isn’t. Sometimes the doctors are right. Finding the correct balance and knowing which information to trust (some peer-reviewed medical papers have turned out to be fraudulent, too) can be hard. I don’t know how to resolve this dilemma besides “Start with accepted medicine. Talk to doctors. Watch ChubbyEmu or something similar. Get a basic lay of the land. Then move on to patient forums. See what patients say. Sometimes patients report side effects as being much more common or severe than medical studies indicate. Sometimes they indicate that certain medications are more effective than reported, etc. Watch out for anyone touting a cure that sounds too good to be true or that could kill you (do NOT, under any circumstances, drink a gallon of soy sauce). Watch out for rabbit holes where the relevant authors only cite each other. Watch out for ‘papers’ that don’t seem to have come from anywhere. Watch out for people trying to sell you something. And just keep learning as much as you can.”
Good luck, try to stay healthy and well. Get your sunshine.
In addition to the reported Neanderthal and Denisovan introgressions, our results support a third introgression in all Asian and Oceanian populations from an archaic population. This population is either related to the Neanderthal-Denisova clade or diverged early from the Denisova lineage.
(Congratulations to the authors, Mondal, Bertranpetit, and Lao.)
Here we report an analysis comparing cultural and genetic data from 13 populations from in and around Northeast Asia spanning 10 different language families/isolates. We construct distance matrices for language (grammar, phonology, lexicon), music (song structure, performance style), and genomes (genome-wide SNPs) and test for correlations among them. … robust correlations emerge between genetic and grammatical distances. Our results suggest that grammatical structure might be one of the strongest cultural indicators of human population history, while also demonstrating differences among cultural and genetic relationships that highlight the complex nature of human cultural and genetic evolution.
I feel like there’s a joke about grammar Nazis in here.
While humans average seven hours, other primates range from just under nine hours (blue-eyed black lemurs) to 17 (owl monkeys). Chimps, our closest living evolutionary relatives, average about nine and a half hours. And although humans doze for less time, a greater proportion is rapid eye movement sleep (REM), the deepest phase, when vivid dreams unfold.
Sleep is pretty much universal in the animal kingdom, but different species vary greatly in their habits. Elephants sleep about two hours out of 24; sloths more than 15. Individual humans vary in their sleep needs, but interestingly, different cultures vary greatly in the timing of their sleep, eg, the Spanish siesta. Our modern notion that people “should” sleep in a solid, 7-9 hour chunk (going so far as to “train” children to do it) is more a result of electricity and industrial work schedules than anything inherent or healthy about human sleep. So if you find yourself stressed out because you keep taking a nap in the afternoon instead of sleeping through the night, take heart: you may be completely normal. (Unless you’re tired because of some illness, of course.)
Within any culture, people also prefer to rest and rise at different times: In most populations, individuals range from night owls to morning larks in a near bell curve distribution. Where someone falls along this continuum often depends on sex (women tend to rise earlier) and age (young adults tend to be night owls, while children and older adults typically go to bed before the wee hours).
Genes matter, too. Recent studies have identified about a dozen genetic variations that predict sleep habits, some of which are located in genes known to influence circadian rhythms.
While this variation can cause conflict today … it may be the vestige of a crucial adaptation. According to the sentinel hypothesis, staggered sleep evolved to ensure that there was always some portion of a group awake and able to detect threats.
So they gave sleep trackers to some Hadza, who must by now think Westerners are very strange, and found that at any particular period of the night, about 40% of people were awake; over 20 nights, there were “only 18 one-minute periods” when everyone was asleep. That doesn’t prove anything, but it does suggest that it’s perfectly normal for some people to be up in the middle of the night–and maybe even useful.
In May, a pair of papers published by separate teams in the journal Cell focused on the NOTCH family of genes, found in all animals and critical to an embryo’s development: They produce the proteins that tell stem cells what to turn into, such as neurons in the brain. The researchers looked at relatives of the NOTCH2 gene that are present today only in humans.
In a distant ancestor 8 million to 14 million years ago, they found, a copying error resulted in an “extra hunk of DNA,” says David Haussler of the University of California, Santa Cruz, a senior author of one of the new studies.
This non-functioning extra piece of NOTCH2 code is still present in chimps and gorillas, but not in orangutans, which went off on their own evolutionary path 14 million years ago.
About 3 million to 4 million years ago, a few million years after our own lineage split from other apes, a second mutation activated the once non-functional code. This human-specific gene, called NOTCH2NL, began producing proteins involved in turning neural stem cells into cortical neurons. NOTCH2NL pumped up the number of neurons in the neocortex, the seat of advanced cognitive function. Over time, this led to bigger, more powerful brains. …
The researchers also found NOTCH2NL in the ancient genomes of our closest evolutionary kin: the Denisovans and the Neanderthals, who had brain volumes similar to our own.
“Genomes that evolve in different geographic locations without intermixing can end up being different from each other,” said Kateryna Makova, Pentz Professor of Biology at Penn State and an author of the paper. “… This variation has a lot of advantages; for example, increased variation in immune genes can provide enhanced protection from diseases. However, variation in geographic origin within the genome could also potentially lead to communication issues between genes, for example between mitochondrial and nuclear genes that work together to regulate mitochondrial function.”
Researchers looked at recently (by evolutionary standards) mixed populations like Puerto Ricans and African Americans, comparing the parts of their DNA that interact with mitochondria to the parts that don’t. Since mitochondria hail from your mother, these populations have different ethnic DNA contributions along their maternal and paternal lines. If all of the DNA were equally compatible with their mitochondria, then we’d expect to see equal contributions to the specifically mitochondria-interacting genes. If some ethnic origins interact better with the mitochondria, then we expect to see more of that DNA in these specific places.
The latter is, in fact, what we find. Puerto Ricans hail more from the Taino Indians along their mtDNA, and have relatively more Taino DNA in the genes that affect their mitochondria–indicating that over the years, individuals with more balanced contributions were selected against in Puerto Rico. (“Selection” is such a sanitized way of saying they died/had fewer children.)
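The expected-versus-observed comparison is just arithmetic, and a toy version makes the logic concrete. The ancestry fractions below are invented for illustration; they are not the paper’s actual estimates.

```python
# Illustrative comparison of genome-wide ancestry vs. ancestry in the
# nuclear genes that interact with mitochondria. All fractions here
# are hypothetical, chosen only to show the direction of the effect.

genome_wide = {"Taino": 0.15, "European": 0.65, "African": 0.20}

# In a population whose mtDNA is mostly Taino, selection for
# mito-nuclear compatibility would enrich Taino ancestry here:
mito_interacting = {"Taino": 0.22, "European": 0.60, "African": 0.18}

for ancestry in genome_wide:
    gw, mt = genome_wide[ancestry], mito_interacting[ancestry]
    flag = "enriched" if mt > gw else "depleted"
    print(f"{ancestry}: genome-wide {gw:.0%}, mito-interacting {mt:.0%} ({flag})")
```

Under the null hypothesis (all ancestries equally compatible), the two dictionaries should match; the enrichment of the mtDNA-matching ancestry in the mito-interacting genes is the signature of selection.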
This indicates that a recently admixed population may have more health issues than its parents, but the issues will work themselves out over time.
Watson parses questions into different keywords and sentence fragments in order to find statistically related phrases. Watson’s main innovation was not in the creation of a new algorithm for this operation but rather its ability to quickly execute hundreds of proven language analysis algorithms simultaneously. The more algorithms that find the same answer independently the more likely Watson is to be correct. Once Watson has a small number of potential solutions, it is able to check against its database to ascertain whether the solution makes sense or not.
That is at least one reason why Watson represents such a significant milestone: Jeopardy! is precisely such a challenging language task. … What is perhaps not evident to many observers is that Watson not only had to master the language in the unexpected and convoluted queries, but for the most part its knowledge was not hand-coded. It obtained that knowledge by actually reading 200 million pages of natural-language documents, including all of Wikipedia… If Watson can understand and respond to questions based on 200 million pages–in three seconds!–there is nothing to stop similar systems from reading the other billions of documents on the Web. Indeed, that effort is now under way.
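The ensemble idea described above (many independent analyzers, with agreement treated as confidence) can be sketched in a few lines. The analyzer functions here are trivial stand-ins, not Watson’s actual components, and the question is made up.

```python
# Minimal sketch of answer-by-ensemble: several independent analyzers
# each propose an answer; the answer with the most independent votes
# wins, and the vote share serves as a crude confidence score.

from collections import Counter

# Stand-in analyzers (hypothetical; real systems would each run a
# different language-analysis strategy over a knowledge base):
def keyword_match(question):   return "Chicago"
def ngram_search(question):    return "Chicago"
def category_lookup(question): return "Springfield"

def answer(question, analyzers):
    votes = Counter(a(question) for a in analyzers)
    best, count = votes.most_common(1)[0]
    return best, count / len(analyzers)

best, conf = answer("Largest city in Illinois?",
                    [keyword_match, ngram_search, category_lookup])
print(f"{best} ({conf:.0%})")  # -> Chicago (67%)
```

The key design point is that the analyzers fail independently: one brittle strategy being wrong gets outvoted, so the ensemble is more reliable than any single algorithm.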
A point about the history of computing that may be petty of me to emphasize:
Babbage’s conception is quite miraculous when you consider the era in which he lived and worked. However, by the mid-twentieth century, his ideas had been lost in the mists of time (although they were subsequently rediscovered). It was von Neumann who conceptualized and articulated the key principles of the computer as we know it today, and the world recognizes this by continuing to refer to the von Neumann machine as the principal model of computation. Keep in mind, though, that the von Neumann machine continually communicates data between its various units and within those units, so it could not be built without Shannon’s theorems and the methods he devised for transmitting and storing reliable digital information. …
You know what? No, it’s not petty.
Amazon lists 57 books about Ada Lovelace aimed at children, 14 about Alan Turing, and ZERO about John von Neumann.
(Some of these results are always irrelevant, but they are roughly correct.)
“EvX,” you may be saying, “Why are you counting children’s books?”
Because children are our future, and the books that get published for children show what society deems important for children to learn–and will have an effect on what adults eventually know.
I don’t want to demean Ada Lovelace’s role in the development of software, but surely von Neumann’s contributions to the field are worth a single book!
*Slides soapbox back under the table*
Anyway, back to Kurzweil, now discussing quantum mechanics:
There are two ways to view the questions we have been considering–the contrasting Western and Eastern perspectives on the nature of consciousness and of reality. In the Western perspective, we start with a physical world that evolves patterns of information. After a few billion years of evolution, the entities in that world have evolved sufficiently to become conscious beings. In the Eastern view, consciousness is the fundamental reality; the physical world only comes into existence through the thoughts of conscious beings. …
The East-West divide on the issue of consciousness has also found expression in opposing schools of thought in the field of subatomic physics. In quantum mechanics, particles exist in what are called probability fields. Any measurement carried out on them by a measuring device causes what is called a collapse of the wave function, meaning that the particle suddenly assumes a particular location. A popular view is that such a measurement constitutes observation by a conscious observer… Thus the particle assumes a particular location … only when it is observed. Basically particles figure that if no one is bothering to look at them, they don’t need to decide where they are. I call this the Buddhist school of quantum mechanics …
Or as Niels Bohr put it, “A physicist is just an atom’s way of looking at itself.” He also claimed that we could describe electrons as exercising free will in choosing their positions, a statement I do not think he meant literally; “We must be clear that when it comes to atoms, language can be used only as in poetry,” as he put it.
Kurzweil explains the Western interpretation of quantum mechanics:
There is another interpretation of quantum mechanics… In this analysis, the field representing a particle is not a probability field, but rather just a function that has different values in different locations. The field, therefore, is fundamentally what the particle is. … The so-called collapse of the wave function, this view holds, is not a collapse at all. … It is just that a measurement device is also made up of particles with fields, and the interaction of the particle field being measured and the particle fields of the measuring device result in a reading of the particle being in a particular location. The field, however, is still present. This is the Western interpretation of quantum mechanics, although it is interesting to note that the more popular view among physicists worldwide is what I have called the Eastern interpretation.
For example, Bohr has the yin-yang symbol on his coat of arms, along with the motto contraria sunt complementa, or contraries are complementary. Oppenheimer was such a fan of the Bhagavad Gita that he read it in Sanskrit and quoted it upon successful completion of the Trinity Test, “If the radiance of a thousand suns were to burst at once into the sky, that would be like the splendor of the mighty one,” and “Now I am become death, the destroyer of worlds.” He credited the Gita as one of the most important books in his life.
Why the appeal of Eastern philosophy? Is it something about physicists and mathematicians? Leibniz, after all, was fond of the I Ching. As Wikipedia says:
Leibniz was perhaps the first major European intellectual to take a close interest in Chinese civilization, which he knew by corresponding with, and reading other works by, European Christian missionaries posted in China. Having read Confucius Sinarum Philosophus on the first year of its publication, he concluded that Europeans could learn much from the Confucian ethical tradition. He mulled over the possibility that the Chinese characters were an unwitting form of his universal characteristic. He noted with fascination how the I Ching hexagrams correspond to the binary numbers from 000000 to 111111, and concluded that this mapping was evidence of major Chinese accomplishments in the sort of philosophical mathematics he admired. Leibniz communicated his ideas of the binary system representing Christianity to the Emperor of China hoping it would convert him. Leibniz may be the only major Western philosopher who attempted to accommodate Confucian ideas to prevailing European beliefs.
Leibniz’s attraction to Chinese philosophy originates from his perception that Chinese philosophy was similar to his own. The historian E.R. Hughes suggests that Leibniz’s ideas of “simple substance” and “pre-established harmony” were directly influenced by Confucianism, pointing to the fact that they were conceived during the period that he was reading Confucius Sinarum Philosophus.
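The hexagram correspondence Leibniz noticed is easy to verify directly: reading a hexagram’s six lines as bits (solid = 1, broken = 0) gives exactly the binary numbers 000000 through 111111, with all 64 hexagrams covering all 64 values. The bottom-line-first bit order here is one conventional choice.

```python
# Leibniz's observation, checked in code: the 64 I Ching hexagrams,
# read as six binary digits (solid line = 1, broken line = 0),
# correspond exactly to the numbers 0..63.

from itertools import product

def hexagram_to_int(lines):
    """lines: six booleans, bottom line first; solid=True, broken=False."""
    return sum(bit << i for i, bit in enumerate(lines))

# Every possible hexagram maps to a distinct value in 0..63:
values = {hexagram_to_int(h) for h in product([False, True], repeat=6)}
assert values == set(range(64))

print(f"{hexagram_to_int([True] * 6):06b}")   # all solid lines  -> 111111
print(f"{hexagram_to_int([False] * 6):06b}")  # all broken lines -> 000000
```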
Perhaps it is just that physicists and mathematicians are naturally curious people, and Eastern philosophy is novel to a Westerner, or perhaps by adopting Eastern ideas, they were able to purge their minds of earlier theories of how the universe works, creating a blank space in which to evaluate new data without being biased by old conceptions–or perhaps it is just something about the way their minds work.
As for quantum, I favor the de Broglie-Bohm interpretation of quantum mechanics, but obviously I am not a physicist and my opinion doesn’t count for much. What do you think?
But back to the book. If you are fond of philosophical ruminations on the nature of consciousness, like “If someone who could only see in black and white read extensively about the color red, could they ever achieve the qualia of actually seeing red?” or “Suppose a man were locked in a room with a perfect Chinese rulebook that told him which Chinese characters to write in response to any set of characters written on notes passed under the door. The responses are in perfect Chinese, but the man himself understands not a word of Chinese,” then you’ll enjoy the discussion. If you already covered all of this back in Philosophy 101, you might find it a bit redundant.
Kurzweil notes that conditions have improved massively over the past century for almost everyone on earth, but people are increasingly anxious:
A primary reason people believe life is getting worse is because our information about the problems of the world has steadily improved. If there is a battle today somewhere on the planet, we experience it almost as if we were there. During World War II, tens of thousands of people might perish in a battle, and if the public could see it at all it was in a grainy newsreel in a movie theater weeks later. During World War I a small elite could read about the progress of the conflict in the newspaper (without pictures). During the nineteenth century there was almost no access to news in a timely fashion for anyone.
As for the future of man, machines, and code, Kurzweil is even more optimistic than Auerswald:
The last invention that biological evolution needed to make–the neocortex–is inevitably leading to the last invention that humanity needs to make–truly intelligent machines–and the design of one is inspiring the other. … by the end of this century we will be able to create computation at the limits of what is possible, based on the laws of physics… We call matter and energy organized in this way “computronium,” which is vastly more powerful pound per pound than the human brain. It will not just be raw computation but will be infused with intelligent algorithms constituting all of human-machine knowledge. Over time we will convert much of the mass and energy in our tiny corner of the galaxy that is suitable for this purpose to computronium. … we will need to spread out to the rest of the galaxy and universe. …
How long will it take for us to spread our intelligence in its nonbiological form throughout the universe? … waking up the universe, and then intelligently deciding its fate by infusing it with our human intelligence in its nonbiological form, is our destiny.
Whew! That is quite the ending–and with that, so shall we end. I hope you enjoyed the book. What did you think of it? Will Humanity 2.0 be good? Bad? Totally different? Or does the Fermi Paradox imply that Kurzweil is wrong? Did you like this shorter Book Club format? And do you have any ideas for our next Book Club pick?
If you aren’t familiar with Ray Kurzweil (you must be new to the internet), he is a computer scientist, inventor, and futurist whose work focuses primarily on artificial intelligence and phrases like “technological singularity.”
Wikipedia really likes him.
The book is part neuroscience, part explanations of how various AI programs work. Kurzweil uses models of how the brain works to enhance his pattern-recognition programs, and evidence from what works in AI programs to build support for theories on how the brain works.
The book delves into questions like “What is consciousness?” and “Could we recognize a sentient machine if we met one?” along with a brief history of computing and AI research.
My core thesis, which I call the Law of Accelerating Returns (LOAR), is that fundamental measures of information technology follow predictable and exponential trajectories…
The quintessential example of the law of accelerating returns is the perfectly smooth, doubly exponential growth of the price/performance of computation, which has held steady for 110 years through two world wars, the Great Depression, the Cold War, the collapse of the Soviet Union, the reemergence of China, the recent financial crisis, … Some people refer to this phenomenon as “Moore’s law,” but… [this] is just one paradigm among many.
Auerswald claims that the advance of “code” (that is, technologies like writing that allow us to encode information) has, for the past 40,000 years or so, supplemented and enhanced human abilities, making our lives better. Auerswald is not afraid of increasing mechanization and robotification of the economy putting people out of jobs because he believes that computers and humans are good at fundamentally different things. Computers, in fact, were invented to do things we are bad at, like decode encryption, not stuff we’re good at, like eating.
The advent of computers, in his view, lets us concentrate on the things we’re good at, while off-loading the stuff we’re bad at to the machines.
Kurzweil’s view is different. While he agrees that computers were originally invented to do things we’re bad at, he also thinks that the computers of the future will be very different from those of the past, because they will be designed to think like humans.
A computer that can think like a human can compete with a human–and since it isn’t limited in its processing power by pelvic widths, it may well out-compete us.
But Kurzweil does not seem worried:
Ultimately we will create an artificial neocortex that has the full range and flexibility of its human counterpart. …
When we augment our own neocortex with a synthetic version, we won’t have to worry about how much additional neocortex can physically fit into our bodies and brains, as most of it will be in the cloud, like most of the computing we use today. I estimated earlier that we have on the order of 300 million pattern recognizers in our biological neocortex. That’s as much as could be squeezed into our skulls even with the evolutionary innovation of a large forehead and with the neocortex taking about 80 percent of the available space. As soon as we start thinking in the cloud, there will be no natural limits–we will be able to use billions or trillions of pattern recognizers, basically whatever we need, and whatever the law of accelerating returns can provide at each point in time. …
Last but not least, we will be able to back up the digital portion of our intelligence. …
That is kind of what I already do with this blog. The downside is that sometimes you people see my incomplete or incorrect thoughts.
On the squishy side, Kurzweil writes of the biological brain:
The story of human intelligence starts with a universe that is capable of encoding information. This was the enabling factor that allowed evolution to take place. …
The story of evolution unfolds with increasing levels of abstraction. Atoms–especially carbon atoms, which can create rich information structures by linking in four different directions–formed increasingly complex molecules. …
A billion years later, a complex molecule called DNA evolved, which could precisely encode lengthy strings of information and generate organisms described by these “programs”. …
The mammalian brain has a distinct aptitude not found in any other class of animal. We are capable of hierarchical thinking, of understanding a structure composed of diverse elements arranged in a pattern, representing that arrangement with a symbol, and then using that symbol as an element in a yet more elaborate configuration. …
Through an unending recursive process we are capable of building ideas that are ever more complex. … Only Homo sapiens have a knowledge base that itself evolves, grows exponentially, and is passed down from one generation to another.
Kurzweil proposes an experiment to demonstrate something of how our brains encode memories: say the alphabet backwards.
If you’re among the few people who’ve memorized it backwards, try singing “Twinkle Twinkle Little Star” backwards.
It’s much more difficult than doing it forwards.
This suggests that our memories are stored sequentially, in order: they can be accessed in the order in which they were laid down, but we are unable to reverse the sequence of a memory.
Funny how that works.
On the neocortex itself:
A critically important observation about the neocortex is the extraordinary uniformity of its fundamental structure. … In 1957 Mountcastle discovered the columnar organization of the neocortex. … [In 1978] he described the remarkably unvarying organization of the neocortex, hypothesizing that it was composed of a single mechanism that was repeated over and over again, and proposing the cortical column as the basic unit. The differences in the heights of certain layers in different regions noted above are simply differences in the amount of interconnectivity that the regions are responsible for dealing with. …
Extensive experimentation has revealed that there are in fact repeating units within each column. It is my contention that the basic unit is a pattern recognizer and that this constitutes the fundamental component of the neocortex.
As I read, Kurzweil’s hierarchical models reminded me of Chomsky’s theories of language–Ray and Noam are both associated with MIT and have probably conversed many times. Kurzweil does get around to discussing Chomsky’s theories and their relationship to his work:
Language is itself highly hierarchical and evolved to take advantage of the hierarchical nature of the neocortex, which in turn reflects the hierarchical nature of reality. The innate ability of humans to learn the hierarchical structures in language that Noam Chomsky wrote about reflects the structure of the neocortex. In a 2002 paper he co-authored, Chomsky cites the attribute of “recursion” as accounting for the unique language faculty of the human species. Recursion, according to Chomsky, is the ability to put together small parts into a larger chunk, and then use that chunk as a part in yet another structure, and to continue this process iteratively. In this way we are able to build the elaborate structure of sentences and paragraphs from a limited set of words. Although Chomsky was not explicitly referring here to brain structure, the capability he is describing is exactly what the neocortex does. …
This sounds good to me, but I am under the impression that Chomsky’s linguistic theories are now considered outdated. Perhaps that is only his theory of universal grammar, though. Any linguistics experts care to weigh in?
The basis to Chomsky’s linguistic theory is rooted in biolinguistics, holding that the principles underlying the structure of language are biologically determined in the human mind and hence genetically transmitted. He therefore argues that all humans share the same underlying linguistic structure, irrespective of sociocultural differences. In adopting this position, Chomsky rejects the radical behaviorist psychology of B. F. Skinner which views the mind as a tabula rasa (“blank slate”) and thus treats language as learned behavior. Accordingly, he argues that language is a unique evolutionary development of the human species and is unlike modes of communication used by any other animal species. Chomsky’s nativist, internalist view of language is consistent with the philosophical school of “rationalism“, and is contrasted with the anti-nativist, externalist view of language, which is consistent with the philosophical school of “empiricism“.
Anyway, back to Kuzweil, who has an interesting bit about love:
Science has recently gotten into the act as well, and we are now able to identify the biochemical changes that occur when someone falls in love. Dopamine is released, producing feelings of happiness and delight. Norepinephrine levels soar, which leads to a racing heart and overall feelings of exhilaration. These chemicals, along with phenylethylamine, produce elevation, high energy levels, focused attention, loss of appetite, and a general craving for the object of one’s desire. … serotonin levels go down, similar to what happens in obsessive-compulsive disorder….
If these biochemical phenomena sound similar to those of the fight-or-flight syndrome, they are, except that we are running toward something or someone; indeed, a cynic might say toward rather than away from danger. The changes are also fully consistent with those of the early phase of addictive behavior. … Studies of ecstatic religious experiences also show the same physical phenomena; it can be said that the person having such an experience is falling in love with God or whatever spiritual connection on which they are focused. …
Religious readers care to weigh in?
Consider two related species of voles: the prairie vole and the montane vole. They are pretty much identical, except that the prairie vole has receptors for oxytocin and vasopressin, whereas the montane vole does not. The prairie vole is noted for lifetime monogamous relationships, while the montane vole resorts almost exclusively to one-night stands.
Learning by species:
A mother rat will build a nest for her young even if she has never seen another rat in her lifetime. Similarly, a spider will spin a web, a caterpillar will create her own cocoon, and a beaver will build a dam, even if no contemporary ever showed them how to accomplish these complex tasks. That is not to say that these are not learned behaviors. It is just that the animals did not learn them in a single lifetime… The evolution of animal behavior does constitute a learning process, but it is learning by the species, not by the individual, and the fruits of this learning process are encoded in DNA.
I think that’s enough for today; what did you think? Did you enjoy the book? Is Kurzweil on the right track with his pattern recognizers? Are non-biological neocortexes on the horizon? Will we soon convert the solar system to computronium?
Let’s continue this discussion next Monday–so if you haven’t read the book yet, you still have a whole week to finish.
The other day I was walking through the garden when I looked down, saw one of these, leapt back, and screamed loudly enough to notify the entire neighborhood:
(The one in my yard was insect free, however.)
After catching my breath, I wondered, “Is that a wasp nest or a beehive?” and crept back for a closer look. Wasp nest. I mentally paged through my knowledge of wasp nests: wasps abandon nests when they fall on the ground. This one was probably empty and safe to step past. I later tossed it onto the compost pile.
The interesting part of this incident wasn’t the nest, but my reaction. I jumped away from the thing before I had even consciously figured out what the nest was. Only once I was safe did I consciously think about the nest.
Gazzaniga discusses a problem faced by brains trying to evolve to be bigger and smarter: how do you get more neurons working without taking up an absurd amount of space connecting each and every neuron to every other neuron?
Imagine a brain with 5 connected neurons: each neuron requires 4 connections to talk to every other neuron. A 5 neuron brain would thus need space for 10 total connections.
The addition of a 6th neuron would require 5 new connections; a 7th neuron requires 6 new connections, etc. A fully connected brain of 100 neurons would require 99 connections per neuron, for a total of 4,950 connections.
Connecting all of your neurons might work fine if you’re a sea squirt, with only 230 or so neurons, but it is going to fail hard if you’re trying to hook up 86 billion. The space required to hook up all of those neurons would be massively larger than any brain you could actually maintain by eating.
So how does an organism evolving to be smarter deal with the connectivity demands of increasing brain size?
Human social lives suggest an answer: on the human scale, one person can, Dunbar estimates, have functional social relationships with about 150 other people, including an understanding of those people’s relationships with each other. 150 people (the “Dunbar number”) is therefore the number of people who can reliably cooperate or form groups without requiring any top-down organization.
So how do humans survive in groups of a thousand, a million, or a billion (eg, China)? How do we build large-scale infrastructure projects requiring the work of thousands of people and used by millions, like interstate highways? By organization–that is, specialization.
In a small tribe of 150 people, almost everyone in the tribe can do most of the jobs necessary for the tribe’s survival, within the obvious limits of biology. Men and women are both primarily occupied with collecting food. Both prepare clothing and shelter; both can cook. There is some specialization of labor–obviously men can carry heavier loads; women can nurse children–but most people are generally competent at most jobs.
In a modern industrial economy, most people are completely incompetent at most jobs. I have a nice garden, but I don’t even know how to turn on a tractor, much less how to care for a cow. The average person does not know how to knit or sew, much less build a house, wire up the electricity and lay the plumbing. We attend school from 5 to 18 or 22 or 30 and end up less competent at surviving in our own societies than a cave man with no school was in his, not because school is terrible but because modern industrial society requires so much specialized knowledge to keep everything running that no one person can truly master even a tenth of it.
Specialization, not just of people but of organizations and institutions, like hospitals devoted to treating the sick, Walmarts devoted to selling goods, and Microsoft devoted to writing and selling computer software and hardware, lets society function without requiring that everyone learn to be a doctor, merchant, and computer expert.
Similarly, brains expand their competence via specialization, not denser neural connections.
The smartest people may boast more neurons than those of average intelligence, but their brains have fewer neural connections…
Neuroscientists in Germany recruited 259 participants, both men and women, to take IQ tests and have their brains imaged…
The research revealed a strong correlation between the number of dendrites in a person’s cerebral cortex and their intelligence. The smartest participants had fewer neural connections in their cerebral cortex.
Fewer neural connections overall allows different parts of the brain to specialize, increasing local competence.
All things are produced more plentifully and easily and of a better quality when one man does one thing that is natural to him and does it at the right time, and leaves other things. –Plato, The Republic
The brains of mice, as Gazzaniga discusses, do not need to be highly specialized, because mice are not very smart and do not do many specialized activities. Human brains, by contrast, are highly specialized, as anyone who has ever had a stroke has discovered. (Henry Harpending of West Hunter, for example, once had a stroke while visiting Germany that knocked out the area of his brain responsible for reading, but since he couldn’t read German in the first place, he didn’t realize anything was wrong until several hours later.)
I read, about a decade ago, that male and female brains have different levels, and patterns, of internal connectivity. (Here and here are articles on the subject.) These differences in connectivity may allow men and women to excel at different skills, and since we humans are a social species that can communicate by talking, this allows us to take cognitive modularity beyond the level of a single brain.
So modularity lets us learn (and do) more things, with the downside that sometimes knowledge is highly localized–that is, we have a lot of knowledge that we seem able to access only under specific circumstances, rather than use generally.
For example, I have long wondered at the phenomenon of people who can definitely do complicated math when asked to, but show no practical number sense in everyday life, like the folks from the Yale Philosophy department who are confused about why African Americans are under-represented in their major, even though Yale has an African American Studies department which attracts a disproportionate percentage of Yale’s African American students. The mathematical certainty that if any major in the whole school attracts more African American students, then other majors must end up with fewer, has been lost on these otherwise bright minds.
Yalies are not the only folks who struggle to use the things they know. When asked to name a book–any book–ordinary people failed. Surely these people have heard of a book at some point in their lives–the Bible is pretty famous, as is Harry Potter. Even if you don’t like books, they were assigned in school, and your parents probably read The Cat in the Hat and Green Eggs and Ham to you when you were a kid. It is not that they do not have the knowledge; it is that they cannot access it.
Teachers complain all the time that students–even very good ones–can memorize all of the information they need for a test, regurgitate it all perfectly, and then turn around and show no practical understanding of the information at all.
Richard Feynman wrote eloquently of his time teaching future science teachers in Brazil:
In regard to education in Brazil, I had a very interesting experience. I was teaching a group of students who would ultimately become teachers, since at that time there were not many opportunities in Brazil for a highly trained person in science. These students had already had many courses, and this was to be their most advanced course in electricity and magnetism – Maxwell’s equations, and so on. …
I discovered a very strange phenomenon: I could ask a question, which the students would answer immediately. But the next time I would ask the question – the same subject, and the same question, as far as I could tell – they couldn’t answer it at all! For instance, one time I was talking about polarized light, and I gave them all some strips of polaroid.
Polaroid passes only light whose electric vector is in a certain direction, so I explained how you could tell which way the light is polarized from whether the polaroid is dark or light.
We first took two strips of polaroid and rotated them until they let the most light through. From doing that we could tell that the two strips were now admitting light polarized in the same direction – what passed through one piece of polaroid could also pass through the other. But then I asked them how one could tell the absolute direction of polarization, for a single piece of polaroid.
They hadn’t any idea.
I knew this took a certain amount of ingenuity, so I gave them a hint: “Look at the light reflected from the bay outside.”
Nobody said anything.
Then I said, “Have you ever heard of Brewster’s Angle?”
“Yes, sir! Brewster’s Angle is the angle at which light reflected from a medium with an index of refraction is completely polarized.”
“And which way is the light polarized when it’s reflected?”
“The light is polarized perpendicular to the plane of reflection, sir.” Even now, I have to think about it; they knew it cold! They even knew the tangent of the angle equals the index!
I said, “Well?”
Still nothing. They had just told me that light reflected from a medium with an index, such as the bay outside, was polarized; they had even told me which way it was polarized.
I said, “Look at the bay outside, through the polaroid. Now turn the polaroid.”
“Ooh, it’s polarized!” they said.
After a lot of investigation, I finally figured out that the students had memorized everything, but they didn’t know what anything meant. When they heard “light that is reflected from a medium with an index,” they didn’t know that it meant a material such as water. They didn’t know that the “direction of the light” is the direction in which you see something when you’re looking at it, and so on. Everything was entirely memorized, yet nothing had been translated into meaningful words. So if I asked, “What is Brewster’s Angle?” I’m going into the computer with the right keywords. But if I say, “Look at the water,” nothing happens – they don’t have anything under “Look at the water”!
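For the record, the relation the students recited–the tangent of Brewster's angle equals the index of refraction–takes two lines to verify numerically. A quick sketch, assuming water's index of refraction is roughly 1.33 (a standard textbook value, not from Feynman's account):

```python
import math

def brewster_angle_degrees(n):
    """Brewster's angle for a medium with index of refraction n.

    Light reflecting off the surface at this angle (measured from the
    normal) is completely polarized, which is why a rotated polaroid
    makes the glare from the bay wink in and out.
    """
    return math.degrees(math.atan(n))

# For water, n ~ 1.33, giving the angle the students "knew cold":
print(round(brewster_angle_degrees(1.33), 1))  # 53.1
```

Knowing how to produce that number on demand was exactly what the students could do; connecting it to the water in the bay was what they couldn't.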
The students here are not dumb, and memorizing things is not bad–memorizing your times tables is very useful–but they have everything lodged in their “memorization module” and nothing in their “practical experience module.” (Note: I am not necessarily suggesting that there exists a literal, physical spot in the brain where memorized and experienced knowledge reside, but that certain brain structures and networks lodge information in ways that make it easier or harder to access.)
People frequently make arguments that don’t make logical sense when you think them all the way through from start to finish, but do make sense if we assume that people are using specific brain modules for quick reasoning and don’t necessarily cross-check their results with each other. For example, when we are angry because someone has done something bad to us, we tend to snap at people who had nothing to do with it. Our brains are in “fight and punish mode” and latch on to the nearest person as the person who most likely committed the offense, even if we consciously know they weren’t involved.
Political discussions are often marred by folks running what ought to be logical arguments through status signaling, emotional, or tribal modules. The desire to see Bad People punished (a reasonable desire if we all lived in the same physical community with each other) interferes with a discussion of whether said punishment is actually useful, effective, or just. For example, a man who has been incorrectly convicted of the rape of a child will have a difficult time getting anyone to listen sympathetically to his case.
In the case of white South African victims of racially-motivated murder, the notion that their ancestors did wrong and therefore they deserve to be punished often overrides sympathy. As BBC notes, these killings tend to be particularly brutal (they often involve torture) and targeted, but the South African government doesn’t care:
According to one leading political activist, Mandla Nyaqela, this is the after-effect of the huge degree of selfishness and brutality which was shown towards the black population under apartheid. …
Virtually every week the press here report the murders of white farmers, though you will not hear much about it in the media outside South Africa. In South Africa you are twice as likely to be murdered if you are a white farmer than if you are a police officer – and the police here have a particularly dangerous life. The killings of farmers are often particularly brutal. …
Ernst Roets’s organisation has published the names of more than 2,000 people who have died over the last two decades. The government has so far been unwilling to make solving and preventing these murders a priority. …
There used to be 60,000 white farmers in South Africa. In 20 years that number has halved.
The Christian Science Monitor reports on the measures ordinary South Africans have to take in what was once a safe country to not become human shishkabobs, which you should pause and read, but is a bit of a tangent from our present discussion. The article ends with a mind-bending statement about a borrowed dog (dogs are also important for security):
My friends tell me the dog is fine around children, but is skittish around men, especially black men. The people at the dog pound told them it had probably been abused. As we walk past house after house, with barking dog after barking dog, I notice Lampo pays no attention. Instead, he’s watching the stream of housekeepers and gardeners heading home from work. They eye the dog nervously back.
Great, I think, I’m walking a racist dog.
Module one: Boy South Africa has a lot of crime. Better get a dog, cover my house with steel bars, and an extensive security system.
Module two: Associating black people with crime is racist, therefore my dog is racist for being wary of people who look like the person who abused it.
And while some people are obviously sympathetic to the plight of murdered people, “Cry me a river White South African Colonizers” is a very common reaction. (Never mind that the people committing crimes in South Africa today never lived under apartheid; they’ve lived in a black-run country for their entire lives.) Logically, white South Africans did not do anything to deserve being killed, and as with the golden goose, killing the people who produce food will just trigger a repeat of Zimbabwe, but the modules of tribalism–“I do not care about these people because they are not mine and I want their stuff”–and punishment–“I read about a horrible thing someone did, so I want to punish everyone who looks like them”–trump logic.
Who dies–and how they die–significantly shapes our engagement with the news. Gun deaths via mass shootings get much more coverage and worry than ordinary homicides, even though ordinary homicides are far more common. Homicides get more coverage and worry than suicides, even though suicides are far more common. The majority of gun deaths are actually suicides, but you’d never know that from listening to our national conversation about guns, simply because we are biased to worry far more about other people killing us than about killing ourselves.
Similarly, the death of one person via volcano receives about the same news coverage as 650 in a flood, 2,000 in a drought, or 40,000 in a famine. As the article notes:
Instead of considering the objective damage caused by natural disasters, networks tend to look for disasters that are “rife with drama”, as one New York Times article put it–hurricanes, tornadoes, forest fires, earthquakes all make for splashy headlines and captivating visuals. Thanks to this selectivity, less “spectacular” but often times more deadly natural disasters tend to get passed over. Food shortages, for example, result in the most casualties and affect the most people per incident, but their onset is more gradual than that of a volcanic explosion or sudden earthquake. … This bias for the spectacular is not only unfair and misleading, but also has the potential to misallocate attention and aid.
There are similar biases by continent, with disasters in Africa receiving less attention than disasters in Europe (this correlates with African disasters being more likely to be the slow-motion famines, epidemics and droughts that kill lots of people, and European disasters being splashier, though perhaps we’d consider famines “splashier” if they happened in Paris instead of Ethiopia.)
From a neuropolitical perspective, I suspect that patterns such as the Big Five personality traits correlating with particular political positions (“openness” with “liberalism,” for example, or “conscientiousness” with “conservativeness”) are caused by patterns of brain activity that lead some people to depend more or less on particular brain modules for processing.
For example, conservatives process more of the world through the areas of their brain that are also used for processing disgust (not one of “the five,” but still an important psychological trait), which increases their fear of pathogens, disease vectors, and generally anything new or from the outside. Disgust can go so far as to process other people’s faces or body language as “disgusting” (eg, trans people) even when there is objectively nothing that presents an actual contamination or pathogenic risk involved.
Similarly, people who feel more guilt in one area of their life often feel guilt in others–eg, “White guilt was significantly associated with bulimia nervosa symptomatology.” The arrow of causation is unclear–guilt about eating might spill over into guilt about existing, or guilt about existing might cause guilt about eating, or people who generally feel guilty about everything could have both. Either way, these people are generally not logically reasoning, “Whites have done bad things, therefore I should starve myself.” (Should veganism be classified as a politically motivated eating disorder?)
I could continue forever–
Restrictions on medical research are biased toward preventing mentally salient incidents like thalidomide babies, but against the invisible cost of children who die from diseases that could have been cured had research not been prevented by regulations.
America has a large Somali community but not a comparable Congolese one (85,000 Somalis vs. 13,000 Congolese, of whom 10,000 hail from the DRC; Somalia has about 14 million people, the DRC about 78.7 million, so it’s not due to there being more Somalis in the world), for no particular reason I’ve been able to discover, other than that President Clinton once disastrously sent a few helicopters to intervene in the eternal Somali civil war and so the government decided that we now have a special obligation to take in Somalis.
–but that’s probably enough.
I have tried here to present a balanced account of different political biases, but I would like to end by noting that modular thinking, while it can lead to stupid decisions, exists for good reasons. If purely logical thinking were superior to modular, we’d probably be better at it. Still, cognitive biases exist and lead to a lot of stupid or sub-optimal results.
I began this post intending to write about testosterone metabolization in autism and possible connections with transgender identity, but realized halfway through that I didn’t actually know whether the autist-trans connection was primarily male-to-female or female-to-male. I had assumed that the relevant population is primarily MtF because both autists and trans people are primarily male, but both groups do have female populations that are large enough to contribute significantly. Here’s a sample of the data I’ve found so far:
A study conducted by a team of British scientists in 2012 found that of a pool of individuals not diagnosed on the autism spectrum, female-to-male (FTM) transgender people have higher rates of autistic features than do male-to-female (MTF) transgender people or cisgender males and females. Another study, which looked at children and adolescents admitted to a gender identity clinic in the Netherlands, found that almost 8 percent of subjects were also diagnosed with ASD.
Note that both of these studies are looking at trans people and assessing whether or not they have autism symptoms, not looking at autists and asking if they have trans symptoms. Given the characterization of autism as “extreme male brain” and that autism is diagnosed in males at about 4x the rate of females, the fact that there is some overlap between “women who think they think like men” and “traits associated with male thought patterns” is not surprising.
If the reported connection between autism and trans identity is just “autistic women feel like men,” that’s pretty non-mysterious and I just wasted an afternoon.
Though the data I have found so far still does not look directly at autists and ask how many of them have trans symptoms, the Wikipedia page devoted to transgender and transsexual computer programmers lists only MtFs and no FtMs. Whether or not this is a pattern throughout the wider autism community, it definitely seems to be a thing among programmers. (Relevant discussion.)
So, returning to the original post:
Autism contains an amusing contradiction: on the one hand, autism is sometimes characterized as “extreme male brain,” and on the other hand, (some) autists (may be) more likely than neurotypicals to self-identify as transwomen–that is, biological men who see themselves as women. This seems contradictory: if autists are more masculine, mentally, than the average male, why don’t they identify as football players, army rangers, or something else equally masculine? For that matter, why isn’t a group with “extreme male brains” regarded as more, well, masculine?
(And if autists have extreme male brains, does that mean football players don’t? Do football players have more feminine brains than autists? Do colorless green ideas sleep furiously? DO WORDS MEAN?)
In favor of the “extreme male brain” hypothesis, we have evidence that testosterone is important for certain brain functions, like spatial recognition. For example, we have articles like this one, Testosterone and the brain:
Gender differences in spatial recognition, and age-related declines in cognition and mood, point towards testosterone as an important modulator of cerebral functions. Testosterone appears to activate a distributed cortical network, the ventral processing stream, during spatial cognition tasks, and addition of testosterone improves spatial cognition in younger and older hypogonadal men. In addition, reduced testosterone is associated with depressive disorders.
(Note that women also suffer depression at higher rates than men.)
So people with more testosterone are better at spatial cognition and other tasks that “autistic” brains typically excel at, and brains with less testosterone tend to be moody and depressed.
But hormones are tricky things. Where do they come from? Where do they go? How do we use them?
According to Wikipedia:
During the second trimester [of pregnancy], androgen level is associated with gender formation. This period affects the feminization or masculinization of the fetus and can be a better predictor of feminine or masculine behaviours, such as sex-typed behaviour, than an adult’s own levels. A mother’s testosterone level during pregnancy is correlated with her daughter’s sex-typical behavior as an adult, and the correlation is even stronger than with the daughter’s own adult testosterone level.
… Early infancy androgen effects are the least understood. In the first weeks of life for male infants, testosterone levels rise. The levels remain in a pubertal range for a few months, but usually reach the barely detectable levels of childhood by 4–6 months of age. The function of this rise in humans is unknown. It has been theorized that brain masculinization is occurring, since no significant changes have been identified in other parts of the body. The male brain is masculinized by the aromatization of testosterone into estrogen, which crosses the blood–brain barrier and enters the male brain, whereas female fetuses have α-fetoprotein, which binds the estrogen so that female brains are not affected.
Let’s re-read that: the male brain is masculinized by the aromatization of testosterone into estrogen.
If that’s not a weird sentence, I don’t know what is.
Burgeoning evidence now documents profound effects of estrogens on learning, memory, and mood as well as neurodevelopmental and neurodegenerative processes. Most data derive from studies in females, but there is mounting recognition that estrogens play important roles in the male brain, where they can be generated from circulating testosterone by local aromatase enzymes or synthesized de novo by neurons and glia. Estrogen-based therapy therefore holds considerable promise for brain disorders that affect both men and women. However, as investigations are beginning to consider the role of estrogens in the male brain more carefully, it emerges that they have different, even opposite, effects as well as similar effects in male and female brains. This review focuses on these differences, including sex dimorphisms in the ability of estradiol to influence synaptic plasticity, neurotransmission, neurodegeneration, and cognition, which, we argue, are due in a large part to sex differences in the organization of the underlying circuitry.
Hypothesis: the way testosterone works in the brain (where we both do math and “feel” male or female) and the way it works in the muscles might be very different.
Do autists actually differ from other people in testosterone (or other hormone) levels?
Compared to controls, significantly more women with ASC [Autism Spectrum Conditions] reported (a) hirsutism, (b) bisexuality or asexuality, (c) irregular menstrual cycle, (d) dysmenorrhea, (e) polycystic ovary syndrome, (f) severe acne, (g) epilepsy, (h) tomboyism, and (i) family history of ovarian, uterine, and prostate cancers, tumors, or growths. Compared to controls, significantly more mothers of ASC children reported (a) severe acne, (b) breast and uterine cancers, tumors, or growths, and (c) family history of ovarian and uterine cancers, tumors, or growths.
Three of the children had exhibited explosive aggression against others (anger, broken objects, violence toward others). Three engaged in self-mutilations, and three demonstrated no aggression and were in a severe state of autistic withdrawal. The appearance of aggression against others was associated with having fewer of the main symptoms of autism (autistic withdrawal, stereotypies, language dysfunctions).
Three of their subjects (they don’t say which, but presumably from the first group,) had abnormally high testosterone levels (including one of the girls in the study.) The other six subjects had normal androgen levels.
This is the first report of an association between abnormally high androgenic activity and aggression in subjects with autism. Although a previously reported study did not find group mean elevations in plasma testosterone in prepubertal autistic subjects (4), it appears here that in certain autistic individuals, especially those in puberty, hyperandrogeny may play a role in aggressive behaviors. Also, there appear to be distinct clinical forms of autism that are based on aggressive behaviors and are not classified in DSM-IV. Our preliminary findings suggest that abnormally high plasma testosterone concentration is associated with aggression against others and having fewer of the main autistic symptoms.
So, some autists do have abnormally high testosterone levels, but those same autists are less autistic, overall, than other autists. More autistic behavior, aggression aside, is associated with normal hormone levels. Probably.
Levels of FT [Fetal Testosterone] were analysed in amniotic fluid and compared with autistic traits, measured using the Quantitative Checklist for Autism in Toddlers (Q-CHAT) in 129 typically developing toddlers aged between 18 and 24 months (mean ± SD 19.25 ± 1.52 months). …
Sex differences were observed in Q-CHAT scores, with boys scoring significantly higher (indicating more autistic traits) than girls. In addition, we confirmed a significant positive relationship between FT levels and autistic traits.
I feel like this is veering into “we found that boys score higher on a test of male traits than girls did” territory, though.
The present study evaluates androgen and estrogen levels in saliva as well as polymorphisms in genes for androgen receptor (AR), 5-alpha reductase (SRD5A2), and estrogen receptor alpha (ESR1) in the Slovak population of prepubertal (under 10 years) and pubertal (over 10 years) children with autism spectrum disorders. The examined prepubertal patients with autism, pubertal patients with autism, and prepubertal patients with Asperger syndrome had significantly increased levels of salivary testosterone (P < 0.05, P < 0.01, and P < 0.05, respectively) in comparison with control subjects. We found a lower number of (CAG)n repeats in the AR gene in boys with Asperger syndrome (P < 0.001). Autistic boys had an increased frequency of the T allele in the SRD5A2 gene in comparison with the control group. The frequencies of T and C alleles in ESR1 gene were comparable in all assessed groups.
Individuals with a lower number of CAG repeats exhibit higher AR gene expression levels and generate more functional AR receptors increasing their sensitivity to testosterone…
Fewer repeats, more sensitivity to androgens. The SRD5A2 gene is also involved in testosterone metabolization, though I’m not sure exactly what the T allele does relative to the other variants.
But just because there’s a lot of something in the blood (or saliva) doesn’t mean the body is using it. Diabetics can have high blood sugar because their bodies lack the necessary insulin to move the sugar from the blood, into their cells. Fewer androgen receptors could mean the body is metabolizing testosterone less effectively, which in turn leaves more of it floating in the blood… Biology is complicated.
Here, we show that male and female hormones differentially regulate the expression of a novel autism candidate gene, retinoic acid-related orphan receptor-alpha (RORA) in a neuronal cell line, SH-SY5Y. In addition, we demonstrate that RORA transcriptionally regulates aromatase, an enzyme that converts testosterone to estrogen. We further show that aromatase protein is significantly reduced in the frontal cortex of autistic subjects relative to sex- and age-matched controls, and is strongly correlated with RORA protein levels in the brain.
If autists are bad at converting testosterone to estrogen, this could leave extra testosterone floating around in their blood… but it doesn’t explain their supposed “extreme male brain.” Here’s another study on the same subject, since it’s confusing:
Comparing the brains of 13 children with and 13 children without autism spectrum disorder, the researchers found a 35 percent decrease in estrogen receptor beta expression as well as a 38 percent reduction in the amount of aromatase, the enzyme that converts testosterone to estrogen.
Levels of estrogen receptor beta proteins, the active molecules that result from gene expression and enable functions like brain protection, were similarly low. There was no discernable change in expression levels of estrogen receptor alpha, which mediates sexual behavior.
The animals in the new studies, called ‘reeler’ mice, have one defective copy of the reelin gene and make about half the amount of reelin compared with controls. …
Reeler mice with one faulty copy serve as a model of one of the most well-established neuro-anatomical abnormalities in autism. Since the mid-1980s, scientists have known that people with autism have fewer Purkinje cells in the cerebellum than normal. These cells integrate information from throughout the cerebellum and relay it to other parts of the brain, particularly the cerebral cortex.
But there’s a twist: both male and female reeler mice have less reelin than control mice, but only the males lose Purkinje cells. …
In one of the studies, the researchers found that five days after birth, reeler mice have higher levels of testosterone in the cerebellum compared with genetically normal males3.
Keller’s team then injected estradiol — a form of the female sex hormone estrogen — into the brains of 5-day-old mice. In the male reeler mice, this treatment increases reelin levels in the cerebellum and partially blocks Purkinje cell loss. Giving more estrogen to female reeler mice has no effect — but females injected with tamoxifen, an estrogen blocker, lose Purkinje cells. …
In another study, the researchers investigated the effects of reelin deficiency and estrogen treatment on cognitive flexibility — the ability to switch strategies to solve a problem4. …
“And we saw indeed that the reeler mice are slower to switch. They tend to persevere in the old strategy,” Keller says. However, male reeler mice treated with estrogen at 5 days old show improved cognitive flexibility as adults, suggesting that the estrogen has a long-term effect.
This still doesn’t explain why autists would self-identify as transgender women (mtf) at higher rates than average, but it does suggest that any who do start hormone therapy might receive benefits completely independent of gender identity.
Let’s stop and step back a moment.
Autism is, unfortunately, badly defined. As the saying goes, if you’ve met one autist, you’ve met one autist. There are probably a variety of different, complicated things going on in the brains of different autists simply because a variety of different, complicated conditions are all being lumped together under a single label. Any mental disability that can include both non-verbal people who can barely dress and feed themselves and require lifetime care and billionaires like Bill Gates is a very badly defined condition.
(Unfortunately, people diagnose autism with questionnaires that include questions like “Is the child pedantic?” which could be equally true of both an autistic child and a child who is merely very smart and has learned more about a particular subject than their peers and so is responding in more detail than the adult is used to.)
The average autistic person is not a programmer. Autism is a disability, and the average diagnosed autist is pretty darn disabled. Among the people who have jobs and friends but nonetheless share some symptoms with formally diagnosed autists, though, programmer and the like appear to be pretty popular professions.
Back in my day, we just called these folks nerds.
Here’s a theory from a completely different direction: People feel the differences between themselves and a group they are supposed to fit into and associate with a lot more strongly than the differences between themselves and a distant group. Growing up, you probably got into more conflicts with your siblings and parents than with random strangers, even though–or perhaps because–your family is nearly identical to you genetically, culturally, and environmentally. “I am nothing like my brother!” a man declares, while simultaneously affirming that there is a great deal in common between himself and members of a race and culture from the other side of the planet. Your coworker, someone specifically selected for the fact that they have similar mental and technical aptitudes and training as yourself, has a distinct list of traits that drive you nuts, from the way he staples papers to the way he pronounces his Ts, while the women of an obscure Afghan tribe of goat herders simply don’t enter your consciousness.
Nerds, somewhat by definition, don’t fit in. You don’t worry much about fitting into a group you’re not part of in the first place–you probably don’t worry much about whether or not you fit in with Melanesian fishermen–but most people work hard at fitting in with their own group.
So if you’re male, but you don’t fit in with other males (say, because you’re a nerd,) and you’re down at the bottom of the highschool totem pole and feel like all of the women you’d like to date are judging you negatively next to the football players, then you might feel, rather strongly, the differences between you and other males. Other males are aggressive, they call you a faggot, they push you out of their spaces and threaten you with violence, and there’s very little you can do to respond besides retreat into your “nerd games.”
By contrast, women are polite to you, not aggressive, and don’t aggressively push you out of their spaces. Your differences with them are much less problematic, so you feel like you “fit in” with them.
(There is probably a similar dynamic at play with American men who are obsessed with anime. It’s not so much that they are truly into Japanese culture–which is mostly about quietly working hard–as they don’t fit in very well with their own culture.) (Note: not intended as a knock on anime, which certainly has some good works.)
And here’s another theory: autists have some interesting difficulties with constructing categories and making inferences from data. They also have trouble going along with the crowd, and may have fewer “mirror neurons” than normal people. So maybe autists just process the categories of “male” and “female” a little differently than everyone else, and in a small subset of autists, this results in trans identity.*
And another: maybe there are certain intersex disorders which result in differences in brain wiring/organization. (Yes, there are real intersex disorders, like Klinefelter’s, in which people have XXY chromosomes instead of XX or XY.) In a small set of cases, these unusually wired brains may be extremely good at doing certain tasks (like programming), resulting in people who are both “autism spectrum” and “trans.” This is actually the theory I’ve been running with for years, though it is not incompatible with the hormonal theories discussed above.
But we are talking small: trans people of any sort are extremely rare, probably on the order of <1/1000. Even if autists were trans at 8 times the rates of non-autists, that’s still only 8/1000 or 1/125. Autists themselves are pretty rare (estimates vary, but the vast majority of people are not autistic at all,) so we are talking about a very small subset of a very small population in the first place. We only notice these correlations at all because the total population has gotten so huge.
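The back-of-the-envelope arithmetic above can be sketched out explicitly. The figures here are illustrative assumptions (a 1/1000 baseline trans rate, ~1.5% autism prevalence, and the hypothetical 8x relative rate from the text), not study data:

```python
# Rough prevalence arithmetic (illustrative figures, not study data).
trans_base_rate = 1 / 1000   # assumed baseline rate of trans identity
relative_risk = 8            # hypothetical 8x rate among autists
autism_rate = 15 / 1000      # assumed autism prevalence of ~1.5%

trans_rate_among_autists = trans_base_rate * relative_risk
share_of_population = autism_rate * trans_rate_among_autists

print(trans_rate_among_autists)        # 0.008, i.e. 1 in 125 autists
print(round(1 / share_of_population))  # roughly one autistic trans person per ~8,000 people
```

Even with a large relative risk, the joint population is tiny, which is the point: you need a very large total population before such overlaps become noticeable at all.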
Sometimes, extremely rare things are random chance.
I was really excited about this book when I picked it up at the library. It has the word “numbers” on the cover and a subtitle that implies a story about human cultural and cognitive evolution.
Regrettably, what could have been a great book has turned out to be kind of annoying. There’s some fascinating information in here–for example, there’s a really interesting part on pages 249-252–but you have to get through pages 1-248 to get there. (Unfortunately, sometimes authors put their most interesting bits at the end so that people looking to make trouble have gotten bored and wandered off by then.)
I shall try to discuss/quote some of the book’s more interesting bits, and leave aside my differences with the author (who keeps reiterating his position that mathematical ability is entirely dependent on the culture you’re raised in.) Everett nonetheless has a fascinating perspective, having actually spent much of his childhood in a remote Amazonian village belonging to the Piraha, who have no real words for numbers. (His parents were missionaries.)
Which languages contain number words? Which don’t? Everett gives a broad survey:
“…we can reach a few broad conclusions about numbers in speech. First, they are common to nearly all of the world’s languages. … this discussion has shown that number words, across unrelated languages, tend to exhibit striking parallels, since most languages employ a biologically based body-part model evident in their number bases.”
That is, many languages have words that translate essentially to “One, Two, Three, Four, Hand, … Two hands, (10)… Two Feet, (20),” etc., and reflect this in their higher counting systems, which can end up containing a mix of base five, 10, and 20. (The Romans, for example, used both base five and ten in their written system.)
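The Romans’ mixed base-5/base-10 structure is easy to see in a small converter (a minimal sketch covering 1-3999, using the standard subtractive forms): each power of ten gets a “one” symbol (I, X, C, M) and a “hand” symbol halfway up (V, L, D).

```python
# Convert an integer to Roman numerals, illustrating the mixed
# base-5/base-10 structure: I/V (1, 5), X/L (10, 50), C/D (100, 500), M (1000).
def to_roman(n: int) -> str:
    values = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
              (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
              (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]
    out = []
    for value, symbol in values:
        while n >= value:
            out.append(symbol)
            n -= value
    return "".join(out)

print(to_roman(7))    # VII: a "hand" (V) plus two ones
print(to_roman(17))   # XVII: a ten, a hand, and two ones
print(to_roman(1994)) # MCMXCIV
```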
“Third, the linguistic evidence suggests not only that this body-part model has motivated the innovation of numbers throughout the world, but also that this body-part basis of number words stretches back historically as far as the linguistic data can take us. It is evident in reconstruction of ancestral languages, including Proto-Sino-Tibetan, Proto-Niger-Congo, Proto-Austronesian, and Proto-Indo-European, the languages whose descendant tongues are best represented in the world today.”
Note, though, that linguistics does not actually give us a very long time horizon. Proto-Indo-European was spoken about 4-6,000 years ago. Proto-Sino-Tibetan is not as well studied yet as PIE, but also appears to be at most 6,000 years old. Proto-Niger-Congo is probably about 5-6,000 years old. Proto-Austronesian (which, despite its name, is not associated with Australia,) is about 5,000 years old.
These ranges are not a coincidence: languages change as they age, and once they have changed too much, they become impossible to classify into language families. Older languages, like Basque or Ainu, are often simply described as isolates, because we can’t link them to their relatives. Since humanity itself is 200,000-300,000 years old, comparative linguistics only opens a very short window into the past. Various groups–like the Amazonian tribes Everett studies–split off from other groups of humans thousands or hundreds of thousands of years before anyone started speaking Proto-Indo-European. Even agriculture, which began about 10,000-15,000 years ago, is older than these proto-languages (and agriculture seems to have prompted the real development of math.)
I also note these language families are the world’s biggest because they successfully conquered speakers of the world’s other languages. Spanish, Portuguese, and English are now widely spoken in the Americas instead of Cherokee, Mayan, and Nheengatu because Indo-European language speakers conquered the speakers of those languages.
The guy with the better numbers doesn’t always conquer the guy with the worse numbers–the Mongol conquest of China is an obvious counter. But in these cases, the superior number system sticks around, because no one wants to replace good numbers with bad ones.
In general, though, better tech–which requires numbers–tends to conquer worse tech.
Which means that even though our most successful language families all have number words that appear to be about 4-6,000 years old, we shouldn’t assume this was the norm for most people throughout most of history. Current human numeracy may be a very recent phenomenon.
“The invention of number is attainable by the human mind but is attained through our fingers. Linguistic data, both historical and current, suggest that numbers in disparate cultures have arisen independently, on an indeterminate range of occasions, through the realization that hands can be used to name quantities like 5 and 10. … Words, our ultimate implements for abstract symbolization, can thankfully be enlisted to denote quantities. But they are usually enlisted only after people establish a more concrete embodied correspondence between their fingers and quantities.”
Some more on numbers in different languages:
“Rare number bases have been observed, for instance, in the quaternary (base-4) systems of Lainana languages of California, or in the senary (base-6) systems that are found in southern New Guinea. …
Several languages in Melanesia and Polynesia have or once had number systems that vary in accordance with the type of object being counted. In the case of Old High Fijian, for instance, the word for 100 was Bola when people were counting canoes, but Kora when they were counting coconuts. …
some languages in northwest Amazonia base their numbers on kinship relationships. This is true of Daw and Hup, two related languages in the region. Speakers of the former language use fingers complemented with words when counting from 4 to 10. The fingers signify the quantity of items being counted, but words are used to denote whether the quantity is odd or even. If the quantity is even, speakers say it “has a brother”; if it is odd, they state it “has no brother.”
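The Daw/Hup-style system can be sketched as a toy model (the representation below is my own illustration of the mechanism described in the quote, not linguistic data–the gloss words are stand-ins):

```python
# Toy model of a Daw/Hup-style count: fingers show the quantity,
# and a parity word marks it even ("has a brother") or odd ("has no brother").
def kin_count(n: int) -> str:
    parity = "has a brother" if n % 2 == 0 else "has no brother"
    return f"{n} fingers, {parity}"

print(kin_count(4))  # 4 fingers, has a brother
print(kin_count(7))  # 7 fingers, has no brother
```

The parity word adds redundancy: if a listener miscounts the fingers by one, the odd/even word catches the error.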
What about languages with no or very few words for numbers?
In one recent survey of limited number systems, it was found that more than a dozen languages lack bases altogether, and several do not have words for exact quantities beyond 2 and, in some cases, beyond 1. Of course, such cases represent a minuscule fraction of the world’s languages, the bulk of which have number bases reflecting the body-part model. Furthermore, most of the extreme cases in question are restricted geographically to Amazonia. …
All of the extremely restricted languages, I believe, are used by people who are hunter-gatherers or horticulturalists, eg, the Munduruku. Hunter-gatherers typically don’t have a lot of goods to keep track of or trade, fields to measure, or taxes to pay, and so don’t need to use a lot of numbers. (Note, however, that the Inuit/Eskimo have a perfectly normal base-20 counting system. Their particularly harsh environment appears to have inspired both technological and cultural adaptations.) But why are Amazonian languages even less numeric than those of other hunter-gatherers from similar environments, like central Africa?
Famously, most of the languages of Australia have somewhat limited number systems, and some linguists previously claimed that most Australian languages lack precise terms for quantities beyond 2…. [however] many languages on that continent actually have native means of describing various quantities in precise ways, and their number words for small quantities can sometimes be combined to represent larger quantities via the additive and even multiplicative usage of bases. …
Of the nearly 200 Australian languages considered in the survey, all have words to denote 1 and 2. In about three-quarters of the languages, however, the highest number is 3 or 4. Still, many of the languages use a word for “two” as a base for other numbers. Several of the languages use a word for “five” as a base, and eight of the languages top out at a word for “ten.”
Everett then digresses into what initially seems like a tangent about grammatical number, but luckily I enjoy comparative linguistics.
In an incredibly comprehensive survey of 1,066 languages, linguist Matthew Dryer recently found that 98 of them are like Karitiana and lack a grammatical means of marking nouns as plural. So it is not particularly rare to find languages in which nouns do not show plurality. The rest–about 90% of them–have a grammatical means through which speakers can convey whether they are talking about one or more than one thing.
Mandarin is a major language that has limited expression of plurals. According to Wikipedia:
Some languages, such as modern Arabic and Proto-Indo-European also have a “dual” category distinct from singular or plural; an extremely small set of languages have a trial category.
Many languages also change their verbs depending on how many nouns are involved; in English we say “He runs; they run;” languages like Latin or Spanish have far more extensive systems.
In sum: the vast majority of languages distinguish between 1 and more than one; a few distinguish between one, two, and many, and a very few distinguish between one, two, three, and many.
From the endnotes:
… some controversial claims of quadral markers, used in restricted contexts, have been made for the Austronesian languages Tangga, Marshallese, and Sursurunga. … As Corbett notes in his comprehensive survey, the forms are probably best not considered quadral markers. In fact, his impressive survey did not uncover any cases of quadral marking in the world’s languages.
Everett tends to bury his point; his intention in this chapter is to marshal support for the idea that humans have an “innate number sense” that allows them to pretty much instantly realize if they are looking at 1, 2, or 3 objects, but does not allow for instant recognition of larger numbers, like 4. He posits a second, much vaguer number sense that lets us distinguish between “big” and “small” amounts of things, eg, 10 looks smaller than 100, even if you can’t count.
He does cite actual neuroscience on this point–he’s not just making it up. Even newborn humans appear to be able to distinguish between 1, 2, and 3 of something, but not larger numbers. They also seem to distinguish between some and a bunch of something. Anumeric peoples, like the Piraha, also appear to only distinguish between 1, 2, and 3 items with good accuracy, though they can tell “a little” “some” and “a lot” apart. Everett also cites data from animal studies that find, similarly, that animals can distinguish 1, 2, and 3, as well as “a little” and “a lot”. (I had been hoping for a discussion of cephalopod intelligence, but unfortunately, no.)
How then, Everett asks, do we wed our specific number sense (1, 2, and 3) with our general number sense (“some” vs “a lot”) to produce ideas like 6, 7, and a googol? He proposes that we have no innate idea of 6, nor ability to count to 10. Rather, we can count because we were taught to (just as some highly trained parrots and chimps can.) It is only the presence of number words in our languages that allows us to count past 3–after all, anumeric people cannot.
But I feel like Everett is railroading us to a particular conclusion. For example, he cites neurology studies that found one part of the brain does math–the intraparietal sulcus (IPS)–but only one part? Surely there’s more than one part of the brain involved in math.
The IPS turns out to be part of the extensive network of brain areas that support human arithmetic (Figure 1). Like all networks it is distributed, and it is clear that numerical cognition engages perceptual, motor, spatial and mnemonic functions, but the hub areas are the parietal lobes …
(By contrast, I’ve spent over half an hour searching and failing to figure out how high octopuses can count.)
Moreover, I question the idea that the specific and general number senses are actually separate. Rather, I suspect there is only one sense, but it is essentially logarithmic. For example, hearing is logarithmic (or perhaps exponential,) which is why decibels are also logarithmic. Vision is also logarithmic:
The eye senses brightness approximately logarithmically over a moderate range (but more like a power law over a wider range), and stellar magnitude is measured on a logarithmic scale. This magnitude scale was invented by the ancient Greek astronomer Hipparchus in about 150 B.C. He ranked the stars he could see in terms of their brightness, with 1 representing the brightest down to 6 representing the faintest, though now the scale has been extended beyond these limits; an increase in 5 magnitudes corresponds to a decrease in brightness by a factor of 100. Modern researchers have attempted to incorporate such perceptual effects into mathematical models of vision.
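The magnitude arithmetic in that quote can be checked directly: since 5 magnitudes are defined as a factor of 100 in brightness, one magnitude corresponds to a factor of 100^(1/5) ≈ 2.512. A quick sketch:

```python
# Brightness ratio implied by a difference in stellar magnitudes.
# By definition, a 5-magnitude difference = a factor of 100 in brightness.
def brightness_ratio(delta_mag: float) -> float:
    return 100 ** (delta_mag / 5)

print(brightness_ratio(1))  # ~2.512 per magnitude step
print(brightness_ratio(5))  # 100.0, matching the definition
```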
So many experiments have revealed logarithmic responses to stimuli that someone has formulated a mathematical “law” on the matter:
Fechner’s law states that the subjective sensation is proportional to the logarithm of the stimulus intensity. According to this law, human perceptions of sight and sound work as follows: Perceived loudness/brightness is proportional to logarithm of the actual intensity measured with an accurate nonhuman instrument.
p = k ln(S/S₀)
The relationship between stimulus and perception is logarithmic. This logarithmic relationship means that if a stimulus varies as a geometric progression (i.e., multiplied by a fixed factor), the corresponding perception is altered in an arithmetic progression (i.e., in additive constant amounts). For example, if a stimulus is tripled in strength (i.e., 3 x 1), the corresponding perception may be two times as strong as its original value (i.e., 1 + 1). If the stimulus is again tripled in strength (i.e., 3 x 3 x 3), the corresponding perception will be three times as strong as its original value (i.e., 1 + 1 + 1). Hence, for multiplications in stimulus strength, the strength of perception only adds. The mathematical derivations of the torques on a simple beam balance produce a description that is strictly compatible with Weber’s law.
In any logarithmic scale, small quantities–like 1, 2, and 3–are easy to distinguish, while medium quantities–like 101, 102, and 103–get lumped together as “approximately the same.”
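This compression is easy to see numerically. Here’s a minimal sketch of Fechner’s law in Python–the constants k and S₀ are illustrative placeholders, not measured values for any actual sense:

```python
import math

# Fechner's law sketch: perceived magnitude p = k * ln(S / S0).
# k and S0 are arbitrary illustrative constants, not real psychophysical data.
def perceived(S, k=1.0, S0=1.0):
    return k * math.log(S / S0)

small = [perceived(s) for s in (1, 2, 3)]
large = [perceived(s) for s in (101, 102, 103)]

# The perceived gap between 1 and 2 is large...
print(small[1] - small[0])  # ~0.69
# ...while 101 and 102 feel "about the same."
print(large[1] - large[0])  # ~0.0099
```

On a logarithmic scale, the step from 1 to 2 is a doubling, while the step from 101 to 102 is a change of less than 1%–which is exactly the “small numbers distinct, big numbers lumped together” pattern.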
Of course, this still doesn’t answer the question of how people develop the ability to count past 3, but this is getting long, so we’ll continue our discussion next week.
So I was thinking about taste (flavor) and disgust (emotion.)
As I mentioned about a month ago, 25% of people are “supertasters,” that is, better at tasting than the other 75% of people. Supertasters experience flavors more intensely than ordinary tasters, resulting in a preference for “bland” food (food with too much flavor is “overwhelming” to them.) They also have a more difficult time getting used to new foods.
One of my work acquaintances of many years–we’ll call her Echo–is obese, constantly on a diet, and constantly eats sweets. She knows she should eat vegetables and tries to do so, but finds them bitter and unpleasant, and so the general outcome is as you expect: she doesn’t eat them.
Since I find most vegetables quite tasty, I find this attitude very strange–but I am willing to admit that I may be the one with unusual attitudes toward food.
Echo is also quite conservative.
This got me thinking about vegetarians vs. people who think vegetarians are crazy. Why (aside from novelty of the idea) should vegetarians be liberals? Why aren’t vegetarians just people who happen to really like vegetables?
What if there were something in preference for vegetables themselves that correlated with political ideology?
Certainly we can theorize that “supertaster” => “vegetables taste bitter” => “dislike of vegetables” => “thinks vegetarians are crazy.” (Some supertasters might think meat tastes bad, but anecdotal evidence doesn’t support this.) See also Wikipedia, where supertasting is clearly associated with responses to plants:
Any evolutionary advantage to supertasting is unclear. In some environments, heightened taste response, particularly to bitterness, would represent an important advantage in avoiding potentially toxic plant alkaloids. In other environments, increased response to bitterness may have limited the range of palatable foods. …
Although individual food preference for supertasters cannot be typified, documented examples for either lessened preference or consumption include:
Mushrooms? Echo was just complaining about mushrooms.
Let’s talk about disgust. Disgust is an important reaction to things that might infect or poison you, triggering reactions from scrunching up your face to vomiting (ie, expelling the poison.) We process disgust in our amygdalas, and some people appear to have bigger or smaller amygdalas than others, with the result that the folks with bigger amygdalas feel more disgust.
Humans also route a variety of social situations through their amygdalas, resulting in the feeling of “disgust” in response to things that are not rotten food, like other people’s sexual behaviors, criminals, or particularly unattractive people. People with larger amygdalas also tend to find more human behaviors disgusting, and this disgust correlates with social conservatism.
To what extent are “taste” and “disgust” independent of each other? I don’t know; perhaps they are intimately linked into a single feedback system, where disgust and taste sensitivity cause each other, or perhaps they are relatively independent, so that a few unlucky people are both super-sensitive to taste and easily disgusted.
People who find other people’s behavior disgusting and off-putting may also be people who find flavors overwhelming, prefer bland or sweet foods over bitter ones, think vegetables are icky, vegetarians are crazy, and struggle to stay on diets.
What’s that, you say, I’ve just constructed a just-so story?
Michael Shin and William McCarthy, researchers from UCLA, have found an association between counties with higher levels of support for the 2012 Republican presidential candidate and higher levels of obesity in those counties.
Looks like the Mormons and Southern blacks are outliers.
(I don’t really like maps like this for displaying data; I would much prefer a simple graph showing orientation on one axis and obesity on the other, with each county as a datapoint.)
(Unsurprisingly, the first 49 hits I got when searching for correlations between political orientation and obesity were almost all about what other people think of fat people, not what fat people think. This is probably because researchers tend to be skinny people who want to fight “fat phobia” but aren’t actually interested in the opinions of fat people.)
Liberals are 28 percent more likely than conservatives to eat fresh fruit daily, and 17 percent more likely to eat toast or a bagel in the morning, while conservatives are 20 percent more likely to skip breakfast.
Ten percent of liberals surveyed indicated they are vegetarians, compared with 3 percent of conservatives.
Liberals are 28 percent more likely than conservatives to enjoy beer, with 60 percent of liberals indicating they like beer.
(See above where Wikipedia noted that supertasters dislike beer.) I will also note that coffee, which supertasters tend to dislike because it is too bitter, is very popular in the ultra-liberal cities of Portland and Seattle, whereas heavily sweetened iced tea is practically the official beverage of the South.
The only remaining question is whether supertasters are conservative. That may take some research.
Update: I have not found, to my disappointment, a simple study that just looks at correlation between ideology and supertasting (or nontasting.) However, I have found a couple of useful items.
Standard tests of disgust sensitivity, a questionnaire developed for this research assessing different types of moral transgressions (nonvisceral, implied-visceral, visceral) with the terms “angry” and “grossed-out,” and a taste sensitivity test of 6-n-propylthiouracil (PROP) were administered to 102 participants. [PROP is commonly used to test for “supertasters.”] Results confirmed past findings that the more sensitive to PROP a participant was the more disgusted they were by visceral, but not moral, disgust elicitors. Importantly, the findings newly revealed that taste sensitivity had no bearing on evaluations of moral transgressions, regardless of their visceral nature, when “angry” was the emotion primed. However, when “grossed-out” was primed for evaluating moral violations, the more intense PROP tasted to a participant the more “grossed-out” they were by all transgressions. Women were generally more disgust sensitive and morally condemning than men, … The present findings support the proposition that moral and visceral disgust do not share a common oral origin, but show that linguistic priming can transform a moral transgression into a viscerally repulsive event and that susceptibility to this priming varies as a function of an individual’s sensitivity to the origins of visceral disgust—bitter taste. [bold mine.]
In other words, supertasters are more easily disgusted, and with verbal priming will transfer that disgust to moral transgressions. (And easily disgusted people tend to be conservatives.)
This is an attempt at a coherent explanation for why left-handedness (and right-handedness) exist in the distributions that they do.
Handedness is a rather exceptional human trait. Most animals don’t have a dominant hand (or foot.) Horses have no dominant hooves; anteaters dig equally well with both paws; dolphins don’t favor one flipper over the other; monkeys don’t fall out of trees if they try to grab a branch with their left hands. Only humans have a really distinct tendency to use one side of their bodies over the other.
And about 90% of us use our right hands, and about 10% our left (Wikipedia claims 10%, but The Lopsided Ape reports 12%), an observation that appears to hold pretty consistently throughout both time and culture, so long as we aren’t dealing with a culture where lefties are forced to write with their right hands.
A simple Punnett-square, one-gene explanation for handedness–a dominant allele for right-handedness and a recessive one for left-handedness, with equal proportions of alleles in society–would result in 75% righties and 25% lefties. Even if the proportions weren’t equal, the offspring of two lefties ought to be 100% left-handed. This is not, however, what we see. The children of two lefties have only a 25% chance or so of being left-handed themselves.
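For concreteness, here’s that simple recessive model spelled out–a sketch only, with made-up allele labels and the equal allele frequencies assumed above:

```python
# Simple recessive model for left-handedness (the model the text rejects):
# dominant allele R (right) and recessive allele l (left), each at 50% frequency.
p_R = 0.5
p_l = 0.5

# Random mating gives Hardy-Weinberg genotype proportions.
p_RR = p_R * p_R        # 0.25
p_Rl = 2 * p_R * p_l    # 0.50
p_ll = p_l * p_l        # 0.25

righties = p_RR + p_Rl  # R is dominant, so Rl people are righties
lefties = p_ll

print(f"righties: {righties:.0%}, lefties: {lefties:.0%}")  # 75%, 25%

# And two ll (left-handed) parents can only pass on l alleles,
# so this model predicts 100% left-handed children -- not the ~25% observed.
```

Both predictions–25% lefties overall and 100% lefty children from lefty couples–are far off the real numbers, which is why a more complicated model is needed.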
So let’s try a more complicated model.
Let’s assume that there are two alleles that code for right-handedness (hereafter “R”). You get one from your mom and one from your dad.
Each of these alleles is accompanied by a second allele that codes either for nothing (hereafter “O”) or for potentially switching the expression of your handedness (hereafter “S”).
Everybody in the world gets two identical R alleles, one from mom and one from dad.
Everyone also gets two S or O alleles, one from mom and one from dad. One of these S or O alleles affects one of your Rs, and the other affects the other R.
Your potential pairs, then, are:
RO/RO, RO/RS, RS/RO, or RS/RS
RO=right handed allele.
RS=50% chance of expressing for right or left dominance; RS/RS thus => 25% chance of both alleles coming out lefty.
So RO/RO, RO/RS, and RS/RO = righties, (but the RO/ROs may have especially dominant right hands; half of the RO/RS guys may have weakly dominant right hands.)
Only RS/RS produces lefties, and of those, only 25% defeat the dominance odds.
This gets us our observed correlation of only 25% of children of left-handed couples being left-handed themselves.
(Please note that this is still a very simplified model; Wikipedia claims that there may be more than 40 alleles involved.)
What of the general population as a whole?
Assuming random mating in a population with equal quantities of RO/RO, RO/RS, RS/RO and RS/RS, we’d end up with 25% of children RS/RS. But if only 25% of RS/RS turn out lefties, only 6.25% of children would be lefties. We’re still missing 4-6% of the population.
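The arithmetic above can be spelled out in a few lines–again just a sketch of the model’s own assumptions (equal O and S frequencies, random mating), not real genetics:

```python
# First RO/RS model: modifier alleles O and S at equal frequency.
p_S = 0.5

# Random mating -> 25% each of RO/RO, RO/RS, RS/RO, RS/RS;
# only RS/RS can produce lefties.
p_RS_RS = p_S * p_S                # 0.25: both modifiers are S

# Each RS independently comes out lefty half the time,
# and you need both to come out lefty.
p_lefty_given_RS_RS = 0.5 * 0.5    # 0.25

lefties = p_RS_RS * p_lefty_given_RS_RS
print(f"population lefties: {lefties:.2%}")  # 6.25%
```

6.25% falls short of the observed 10–12%, which is the gap the next paragraph tries to explain.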
This implies that either: A. Wikipedia has the wrong #s for % of children of lefties who are left-handed; B. about half of lefties are RO/RS (about 1/8th of the RO/RS population); C. RS is found in twice the proportion as RO in the population; or D. my model is wrong.
Dr Chris McManus reported in his book Right Hand, Left Hand on a study he had done based on a review of scientific literature which showed parent handedness for 70,000 children. On average, the chances of two right-handed parents having a left-handed child were around 9% left-handed children, two left-handed parents around 26% and one left and one right-handed parent around 19%. …
More than 50% of left-handers do not know of any other left-hander anywhere in their living family.
This implies B, that about half of lefties are RO/RS. Having one RS combination gives you a 12.5% chance of being left-handed; having two RS combinations gives you a 25% chance.
And that… I think that works. And it means we can refine our theory–we don’t need two R alleles; we only need one. (Obviously it is more likely a whole bunch of alleles that code for a whole system, but since they act together, we can model them as one.) The R allele is then modified by a pair of alleles that comes in either O (do nothing,) or S (switch.)
One S allele gives you a 12.5% chance of being a lefty; two doubles your chances to 25%.
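The refined model’s population-level numbers can be checked the same way (a sketch under the same assumptions: equal O and S frequencies, random mating):

```python
# Refined model: one effective R "system," modified by a pair of O/S alleles
# at equal frequency. One S -> 12.5% chance of left-handedness; two S -> 25%.
p_S = 0.5
p_OO = (1 - p_S) ** 2       # 0.25: no S alleles
p_OS = 2 * p_S * (1 - p_S)  # 0.50: one S allele
p_SS = p_S ** 2             # 0.25: two S alleles

lefties = p_OS * 0.125 + p_SS * 0.25
print(f"population lefties: {lefties:.1%}")  # 12.5%

# How do the lefties split between one-S and two-S carriers?
one_S_lefties = p_OS * 0.125   # 0.0625
two_S_lefties = p_SS * 0.25    # 0.0625
print(one_S_lefties / lefties)  # 0.5 -- about half of lefties carry just one S
```

This lands at 12.5% lefties overall–close to The Lopsided Ape’s 12% figure–with about half of all lefties carrying only one S, matching option B above.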
Interestingly, this model suggests that not only does no gene for “left handedness” exist, but that “left handedness” might not even be the allele’s goal. Despite the rarity of lefties, the S allele is found in 75% of the population (the same % as the O allele). My suspicion is that the S allele is doing something else valuable, like making sure we don’t become too lopsided in our abilities or try to shunt all of our mental functions to one side of our brain.