In addition to the reported Neanderthal and Denisovan introgressions, our results support a third introgression in all Asian and Oceanian populations from an archaic population. This population is either related to the Neanderthal-Denisova clade or diverged early from the Denisova lineage.
(Congratulations to the authors, Mondal, Bertranpetit, and Lao.)
Here we report an analysis comparing cultural and genetic data from 13 populations from in and around Northeast Asia spanning 10 different language families/isolates. We construct distance matrices for language (grammar, phonology, lexicon), music (song structure, performance style), and genomes (genome-wide SNPs) and test for correlations among them. … robust correlations emerge between genetic and grammatical distances. Our results suggest that grammatical structure might be one of the strongest cultural indicators of human population history, while also demonstrating differences among cultural and genetic relationships that highlight the complex nature of human cultural and genetic evolution.
I feel like there’s a joke about grammar Nazis in here.
While humans average seven hours, other primates range from just under nine hours (blue-eyed black lemurs) to 17 (owl monkeys). Chimps, our closest living evolutionary relatives, average about nine and a half hours. And although humans doze for less time, a greater proportion is rapid eye movement sleep (REM), the deepest phase, when vivid dreams unfold.
Sleep is pretty much universal in the animal kingdom, but different species vary greatly in their habits. Elephants sleep about two hours out of 24; sloths more than 15. Individual humans vary in their sleep needs, but interestingly, different cultures vary greatly in the timing of their sleep, eg, the Spanish siesta. Our modern notion that people “should” sleep in a solid, 7-9 hour chunk (going so far as to “train” children to do it) is more a result of electricity and industrial work schedules than anything inherent or healthy about human sleep. So if you find yourself stressed out because you keep taking a nap in the afternoon instead of sleeping through the night, take heart: you may be completely normal. (Unless you’re tired because of some illness, of course.)
Within any culture, people also prefer to rest and rise at different times: In most populations, individuals range from night owls to morning larks in a near bell curve distribution. Where someone falls along this continuum often depends on sex (women tend to rise earlier) and age (young adults tend to be night owls, while children and older adults typically go to bed before the wee hours).
Genes matter, too. Recent studies have identified about a dozen genetic variations that predict sleep habits, some of which are located in genes known to influence circadian rhythms.
While this variation can cause conflict today … it may be the vestige of a crucial adaptation. According to the sentinel hypothesis, staggered sleep evolved to ensure that there was always some portion of a group awake and able to detect threats.
So they gave sleep trackers to some Hadza, who must by now think Westerners are very strange, and found that at any particular period of the night, about 40% of people were awake; over 20 nights, there were “only 18 one-minute periods” when everyone was asleep. That doesn’t prove anything, but it does suggest that it’s perfectly normal for some people to be up in the middle of the night–and maybe even useful.
In May, a pair of papers published by separate teams in the journal Cell focused on the NOTCH family of genes, found in all animals and critical to an embryo’s development: They produce the proteins that tell stem cells what to turn into, such as neurons in the brain. The researchers looked at relatives of the NOTCH2 gene that are present today only in humans.
In a distant ancestor 8 million to 14 million years ago, they found, a copying error resulted in an “extra hunk of DNA,” says David Haussler of the University of California, Santa Cruz, a senior author of one of the new studies.
This non-functioning extra piece of NOTCH2 code is still present in chimps and gorillas, but not in orangutans, which went off on their own evolutionary path 14 million years ago.
About 3 million to 4 million years ago, a few million years after our own lineage split from other apes, a second mutation activated the once non-functional code. This human-specific gene, called NOTCH2NL, began producing proteins involved in turning neural stem cells into cortical neurons. NOTCH2NL pumped up the number of neurons in the neocortex, the seat of advanced cognitive function. Over time, this led to bigger, more powerful brains. …
The researchers also found NOTCH2NL in the ancient genomes of our closest evolutionary kin: the Denisovans and the Neanderthals, who had brain volumes similar to our own.
“Genomes that evolve in different geographic locations without intermixing can end up being different from each other,” said Kateryna Makova, Pentz Professor of Biology at Penn State and an author of the paper. “… This variation has a lot of advantages; for example, increased variation in immune genes can provide enhanced protection from diseases. However, variation in geographic origin within the genome could also potentially lead to communication issues between genes, for example between mitochondrial and nuclear genes that work together to regulate mitochondrial function.”
Researchers looked at recently (by evolutionary standards) mixed populations like Puerto Ricans and African Americans, comparing the parts of their DNA that interact with mitochondria to the parts that don’t. Mitochondria hail from your mother, and these populations have different ethnic DNA contributions along their maternal and paternal lines. If all of the DNA were equally compatible with their mitochondria, then we’d expect to see equal contributions to the specifically mitochondria-interacting genes. If some ethnic origins interact better with the mitochondria, then we expect to see more of that DNA in these specific places.
The latter is, in fact, what we find. Puerto Ricans hail more from the Taino Indians along their mtDNA, and have relatively more Taino DNA in the genes that affect their mitochondria–indicating that over the years, individuals with more balanced contributions were selected against in Puerto Rico. (“Selection” is such a sanitized way of saying they died/had fewer children.)
This indicates that a recently admixed population may have more health issues than its parents, but the issues will work themselves out over time.
Watson parses questions into different keywords and sentence fragments in order to find statistically related phrases. Watson’s main innovation was not in the creation of a new algorithm for this operation but rather its ability to quickly execute hundreds of proven language analysis algorithms simultaneously. The more algorithms that find the same answer independently the more likely Watson is to be correct. Once Watson has a small number of potential solutions, it is able to check against its database to ascertain whether the solution makes sense or not.
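Watson’s real DeepQA pipeline is of course far more elaborate, but the core voting idea described above can be sketched in a few lines. Everything here (the toy algorithms, the agreement-based confidence score) is a hypothetical illustration, not IBM’s implementation:

```python
from collections import Counter

def ensemble_answer(question, algorithms):
    """Run many independent analysis algorithms on the same question and
    favor the answer the most algorithms agree on; confidence rises with
    the number of independent agreements."""
    candidates = [algo(question) for algo in algorithms]
    answer, votes = Counter(candidates).most_common(1)[0]
    return answer, votes / len(algorithms)

# Three toy "algorithms" standing in for Watson's hundreds:
algos = [
    lambda q: "Chicago",   # e.g., a keyword-match heuristic
    lambda q: "Chicago",   # e.g., a statistical phrase model
    lambda q: "Toronto",   # e.g., a weaker geographic lookup
]
answer, confidence = ensemble_answer("Its airport is named for a WWII hero...", algos)
# answer is "Chicago", with confidence 2/3
```

The final step Kurzweil describes, checking a shortlist of candidates against a database, would slot in after the vote, filtering out answers that fail a sanity check.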
That is at least one reason why Watson represents such a significant milestone: Jeopardy! is precisely such a challenging language task. … What is perhaps not evident to many observers is that Watson not only had to master the language in the unexpected and convoluted queries, but for the most part its knowledge was not hand-coded. It obtained that knowledge by actually reading 200 million pages of natural-language documents, including all of Wikipedia… If Watson can understand and respond to questions based on 200 million pages–in three seconds!–there is nothing to stop similar systems from reading the other billions of documents on the Web. Indeed, that effort is now under way.
A point about the history of computing that may be petty of me to emphasize:
Babbage’s conception is quite miraculous when you consider the era in which he lived and worked. However, by the mid-twentieth century, his ideas had been lost in the mists of time (although they were subsequently rediscovered). It was von Neumann who conceptualized and articulated the key principles of the computer as we know it today, and the world recognizes this by continuing to refer to the von Neumann machine as the principal model of computation. Keep in mind, though, that the von Neumann machine continually communicates data between its various units and within those units, so it could not be built without Shannon’s theorems and the methods he devised for transmitting and storing reliable digital information. …
You know what? No, it’s not petty.
Amazon lists 57 books about Ada Lovelace aimed at children, 14 about Alan Turing, and ZERO about John von Neumann.
(Some of these results are always irrelevant, but they are roughly correct.)
“EvX,” you may be saying, “Why are you counting children’s books?”
Because children are our future, and the books that get published for children show what society deems important for children to learn–and will have an effect on what adults eventually know.
I don’t want to demean Ada Lovelace’s role in the development of software, but surely von Neumann’s contributions to the field are worth a single book!
*Slides soapbox back under the table*
Anyway, back to Kurzweil, now discussing quantum mechanics:
There are two ways to view the questions we have been considering–contrasting Western and Eastern perspectives on the nature of consciousness and of reality. In the Western perspective, we start with a physical world that evolves patterns of information. After a few billion years of evolution, the entities in that world have evolved sufficiently to become conscious beings. In the Eastern view, consciousness is the fundamental reality; the physical world only comes into existence through the thoughts of conscious beings. …
The East-West divide on the issue of consciousness has also found expression in opposing schools of thought in the field of subatomic physics. In quantum mechanics, particles exist in what are called probability fields. Any measurement carried out on them by a measuring device causes what is called a collapse of the wave function, meaning that the particle suddenly assumes a particular location. A popular view is that such a measurement constitutes observation by a conscious observer… Thus the particle assumes a particular location … only when it is observed. Basically particles figure that if no one is bothering to look at them, they don’t need to decide where they are. I call this the Buddhist school of quantum mechanics …
Or as Niels Bohr put it, “A physicist is just an atom’s way of looking at itself.” He also claimed that we could describe electrons as exercising free will in choosing their positions, a statement I do not think he meant literally; “We must be clear that when it comes to atoms, language can be used only as in poetry,” as he put it.
Kurzweil explains the Western interpretation of quantum mechanics:
There is another interpretation of quantum mechanics… In this analysis, the field representing a particle is not a probability field, but rather just a function that has different values in different locations. The field, therefore, is fundamentally what the particle is. … The so-called collapse of the wave function, this view holds, is not a collapse at all. … It is just that a measurement device is also made up of particles with fields, and the interaction of the particle field being measured and the particle fields of the measuring device results in a reading of the particle being in a particular location. The field, however, is still present. This is the Western interpretation of quantum mechanics, although it is interesting to note that the more popular view among physicists worldwide is what I have called the Eastern interpretation.
For example, Bohr has the yin-yang symbol on his coat of arms, along with the motto contraria sunt complementa, or contraries are complementary. Oppenheimer was such a fan of the Bhagavad Gita that he read it in Sanskrit and quoted it upon successful completion of the Trinity Test, “If the radiance of a thousand suns were to burst at once into the sky, that would be like the splendor of the mighty one,” and “Now I am become death, the destroyer of worlds.” He credited the Gita as one of the most important books in his life.
Why the appeal of Eastern philosophy? Is it something about physicists and mathematicians? Leibniz, after all, was fond of the I Ching. As Wikipedia says:
Leibniz was perhaps the first major European intellectual to take a close interest in Chinese civilization, which he knew by corresponding with, and reading other works by, European Christian missionaries posted in China. Having read Confucius Sinarum Philosophus in the first year of its publication, he concluded that Europeans could learn much from the Confucian ethical tradition. He mulled over the possibility that the Chinese characters were an unwitting form of his universal characteristic. He noted with fascination how the I Ching hexagrams correspond to the binary numbers from 000000 to 111111, and concluded that this mapping was evidence of major Chinese accomplishments in the sort of philosophical mathematics he admired. Leibniz communicated his ideas of the binary system representing Christianity to the Emperor of China, hoping it would convert him. Leibniz may be the only major Western philosopher who attempted to accommodate Confucian ideas to prevailing European beliefs.
Leibniz’s attraction to Chinese philosophy originates from his perception that Chinese philosophy was similar to his own. The historian E.R. Hughes suggests that Leibniz’s ideas of “simple substance” and “pre-established harmony” were directly influenced by Confucianism, pointing to the fact that they were conceived during the period that he was reading Confucius Sinarum Philosophus.
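The hexagram correspondence Leibniz noticed is easy to make concrete: each hexagram is six lines, each either solid (yang) or broken (yin), so reading the lines as bits yields exactly the binary numbers 000000 through 111111. A minimal sketch (treating the bottom line as the lowest bit is one common convention, not a claim about Leibniz’s own reading):

```python
def hexagram_to_number(lines):
    """Read a hexagram's six lines as a 6-bit binary number:
    solid (yang) = 1, broken (yin) = 0, bottom line = lowest bit."""
    assert len(lines) == 6
    return sum(bit << i for i, bit in enumerate(lines))

# All 64 hexagrams map onto the numbers 0 through 63:
print(hexagram_to_number([0, 0, 0, 0, 0, 0]))  # 0  (all broken lines)
print(hexagram_to_number([1, 1, 1, 1, 1, 1]))  # 63 (all solid lines)
```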
Perhaps it is just that physicists and mathematicians are naturally curious people, and Eastern philosophy is novel to a Westerner, or perhaps by adopting Eastern ideas, they were able to purge their minds of earlier theories of how the universe works, creating a blank space in which to evaluate new data without being biased by old conceptions–or perhaps it is just something about the way their minds work.
As for quantum mechanics, I favor the de Broglie-Bohm interpretation, but obviously I am not a physicist and my opinion doesn’t count for much. What do you think?
But back to the book. If you are fond of philosophical ruminations on the nature of consciousness, like “If someone who could only see in black and white read extensively about the color red, could they ever achieve the qualia of actually seeing the color red?” or “What if a man were locked in a room with a perfect Chinese rulebook that told him which Chinese characters to write in response to any set of characters written on notes passed under the door? The responses would be in perfect Chinese, but the man himself understands not a word of Chinese,” then you’ll enjoy the discussion. If you already covered all of this back in Philosophy 101, you might find it a bit redundant.
Kurzweil notes that conditions have improved massively over the past century for almost everyone on earth, but people are increasingly anxious:
A primary reason people believe life is getting worse is because our information about the problems of the world has steadily improved. If there is a battle today somewhere on the planet, we experience it almost as if we were there. During World War II, tens of thousands of people might perish in a battle, and if the public could see it at all, it was in a grainy newsreel in a movie theater weeks later. During World War I a small elite could read about the progress of the conflict in the newspaper (without pictures). During the nineteenth century there was almost no access to news in a timely fashion for anyone.
As for the future of man, machines, and code, Kurzweil is even more optimistic than Auerswald:
The last invention that biological evolution needed to make–the neocortex–is inevitably leading to the last invention that humanity needs to make–truly intelligent machines–and the design of one is inspiring the other. … by the end of this century we will be able to create computation at the limits of what is possible, based on the laws of physics… We call matter and energy organized in this way “computronium,” which is vastly more powerful pound per pound than the human brain. It will not just be raw computation but will be infused with intelligent algorithms constituting all of human-machine knowledge. Over time we will convert much of the mass and energy in our tiny corner of the galaxy that is suitable for this purpose to computronium. … we will need to spread out to the rest of the galaxy and universe. …
How long will it take for us to spread our intelligence in its nonbiological form throughout the universe? … waking up the universe, and then intelligently deciding its fate by infusing it with our human intelligence in its nonbiological form, is our destiny.
Whew! That is quite the ending–and with that, so shall we end. I hope you enjoyed the book. What did you think of it? Will Humanity 2.0 be good? Bad? Totally different? Or does the Fermi Paradox imply that Kurzweil is wrong? Did you like this shorter Book Club format? And do you have any ideas for our next Book Club pick?
If you aren’t familiar with Ray Kurzweil (you must be new to the internet), he is a computer scientist, inventor, and futurist whose work focuses primarily on artificial intelligence and phrases like “technological singularity.”
Wikipedia really likes him.
The book is part neuroscience, part explanations of how various AI programs work. Kurzweil uses models of how the brain works to enhance his pattern-recognition programs, and evidence from what works in AI programs to build support for theories on how the brain works.
The book delves into questions like “What is consciousness?” and “Could we recognize a sentient machine if we met one?” along with a brief history of computing and AI research.
My core thesis, which I call the Law of Accelerating Returns (LOAR), is that fundamental measures of information technology follow predictable and exponential trajectories…
The quintessential example of the law of accelerating returns is the perfectly smooth, doubly exponential growth of the price/performance of computation, which has held steady for 110 years through two world wars, the Great Depression, the Cold War, the collapse of the Soviet Union, the reemergence of China, the recent financial crisis, … Some people refer to this phenomenon as “Moore’s law,” but… [this] is just one paradigm among many.
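For anyone unfamiliar with the term, “doubly exponential” means the exponent itself grows with time, so the curve pulls away from an ordinary exponential astonishingly fast. A toy comparison (the formulas and numbers here are illustrative only, not Kurzweil’s data):

```python
def exponential(t):
    # ordinary exponential growth: the exponent grows linearly with time
    return 2 ** t

def doubly_exponential(t):
    # doubly exponential growth: the exponent itself grows exponentially
    return 2 ** (2 ** t)

for t in range(1, 6):
    print(t, exponential(t), doubly_exponential(t))
# by t = 5, the exponential has reached 32, while the doubly
# exponential has reached 2**32 (about 4.3 billion)
```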
Auerswald claims that the advance of “code” (that is, technologies like writing that allow us to encode information) has, for the past 40,000 years or so, supplemented and enhanced human abilities, making our lives better. Auerswald is not afraid of increasing mechanization and robotification of the economy putting people out of jobs because he believes that computers and humans are good at fundamentally different things. Computers, in fact, were invented to do things we are bad at, like decode encryption, not stuff we’re good at, like eating.
The advent of computers, in his view, lets us concentrate on the things we’re good at, while off-loading the stuff we’re bad at to the machines.
Kurzweil’s view is different. While he agrees that computers were originally invented to do things we’re bad at, he also thinks that the computers of the future will be very different from those of the past, because they will be designed to think like humans.
A computer that can think like a human can compete with a human–and since it isn’t limited in its processing power by pelvic widths, it may well out-compete us.
But Kurzweil does not seem worried:
Ultimately we will create an artificial neocortex that has the full range and flexibility of its human counterpart. …
When we augment our own neocortex with a synthetic version, we won’t have to worry about how much additional neocortex can physically fit into our bodies and brains, as most of it will be in the cloud, like most of the computing we use today. I estimated earlier that we have on the order of 300 million pattern recognizers in our biological neocortex. That’s as much as could be squeezed into our skulls even with the evolutionary innovation of a large forehead and with the neocortex taking about 80 percent of the available space. As soon as we start thinking in the cloud, there will be no natural limits–we will be able to use billions or trillions of pattern recognizers, basically whatever we need, and whatever the law of accelerating returns can provide at each point in time. …
Last but not least, we will be able to back up the digital portion of our intelligence. …
That is kind of what I already do with this blog. The downside is that sometimes you people see my incomplete or incorrect thoughts.
On the squishy side, Kurzweil writes of the biological brain:
The story of human intelligence starts with a universe that is capable of encoding information. This was the enabling factor that allowed evolution to take place. …
The story of evolution unfolds with increasing levels of abstraction. Atoms–especially carbon atoms, which can create rich information structures by linking in four different directions–formed increasingly complex molecules. …
A billion years later, a complex molecule called DNA evolved, which could precisely encode lengthy strings of information and generate organisms described by these “programs”. …
The mammalian brain has a distinct aptitude not found in any other class of animal. We are capable of hierarchical thinking, of understanding a structure composed of diverse elements arranged in a pattern, representing that arrangement with a symbol, and then using that symbol as an element in a yet more elaborate configuration. …
Through an unending recursive process we are capable of building ideas that are ever more complex. … Only Homo sapiens have a knowledge base that itself evolves, grows exponentially, and is passed down from one generation to another.
Kurzweil proposes an experiment to demonstrate something of how our brains encode memories: say the alphabet backwards.
If you’re among the few people who’ve memorized it backwards, try singing “Twinkle Twinkle Little Star” backwards.
It’s much more difficult than doing it forwards.
This suggests that our memories are sequential and in order. They can be accessed in the order they are remembered. We are unable to reverse the sequence of a memory.
Funny how that works.
On the neocortex itself:
A critically important observation about the neocortex is the extraordinary uniformity of its fundamental structure. … In 1957 Mountcastle discovered the columnar organization of the neocortex. … [In 1978] he described the remarkably unvarying organization of the neocortex, hypothesizing that it was composed of a single mechanism that was repeated over and over again, and proposing the cortical column as the basic unit. The differences in the height of certain layers in different regions noted above are simply differences in the amount of interconnectivity that the regions are responsible for dealing with. …
Extensive experimentation has revealed that there are in fact repeating units within each column. It is my contention that the basic unit is a pattern recognizer and that this constitutes the fundamental component of the neocortex.
As I read, Kurzweil’s hierarchical models reminded me of Chomsky’s theories of language–both Ray and Noam are associated with MIT and have probably conversed many times. Kurzweil does get around to discussing Chomsky’s theories and their relationship to his work:
Language is itself highly hierarchical and evolved to take advantage of the hierarchical nature of the neocortex, which in turn reflects the hierarchical nature of reality. The innate ability of humans to learn the hierarchical structures in language that Noam Chomsky wrote about reflects the structure of the neocortex. In a 2002 paper he co-authored, Chomsky cites the attribute of “recursion” as accounting for the unique language faculty of the human species. Recursion, according to Chomsky, is the ability to put together small parts into a larger chunk, and then use that chunk as a part in yet another structure, and to continue this process iteratively. In this way we are able to build the elaborate structure of sentences and paragraphs from a limited set of words. Although Chomsky was not explicitly referring here to brain structure, the capability he is describing is exactly what the neocortex does. …
This sounds good to me, but I am under the impression that Chomsky’s linguistic theories are now considered outdated. Perhaps that is only his theory of universal grammar, though. Any linguistics experts care to weigh in?
The basis to Chomsky’s linguistic theory is rooted in biolinguistics, holding that the principles underlying the structure of language are biologically determined in the human mind and hence genetically transmitted. He therefore argues that all humans share the same underlying linguistic structure, irrespective of sociocultural differences. In adopting this position, Chomsky rejects the radical behaviorist psychology of B. F. Skinner which views the mind as a tabula rasa (“blank slate”) and thus treats language as learned behavior. Accordingly, he argues that language is a unique evolutionary development of the human species and is unlike modes of communication used by any other animal species. Chomsky’s nativist, internalist view of language is consistent with the philosophical school of “rationalism”, and is contrasted with the anti-nativist, externalist view of language, which is consistent with the philosophical school of “empiricism”.
Anyway, back to Kurzweil, who has an interesting bit about love:
Science has recently gotten into the act as well, and we are now able to identify the biochemical changes that occur when someone falls in love. Dopamine is released, producing feelings of happiness and delight. Norepinephrine levels soar, which leads to a racing heart and overall feelings of exhilaration. These chemicals, along with phenylethylamine, produce elevation, high energy levels, focused attention, loss of appetite, and a general craving for the object of one’s desire. … serotonin levels go down, similar to what happens in obsessive-compulsive disorder….
If these biochemical phenomena sound similar to those of the fight-or-flight syndrome, they are, except that we are running toward something or someone; indeed, a cynic might say toward rather than away from danger. The changes are also fully consistent with those of the early phase of addictive behavior. … Studies of ecstatic religious experiences also show the same physical phenomena; it can be said that the person having such an experience is falling in love with God or whatever spiritual connection on which they are focused. …
Religious readers care to weigh in?
Consider two related species of voles: the prairie vole and the montane vole. They are pretty much identical, except that the prairie vole has receptors for oxytocin and vasopressin, whereas the montane vole does not. The prairie vole is noted for lifetime monogamous relationships, while the montane vole resorts almost exclusively to one-night stands.
Learning by species:
A mother rat will build a nest for her young even if she has never seen another rat in her lifetime. Similarly, a spider will spin a web, a caterpillar will create her own cocoon, and a beaver will build a dam, even if no contemporary ever showed them how to accomplish these complex tasks. That is not to say that these are not learned behaviors. It is just that the animals did not learn them in a single lifetime… The evolution of animal behavior does constitute a learning process, but it is learning by the species, not by the individual, and the fruits of this learning process are encoded in DNA.
I think that’s enough for today; what did you think? Did you enjoy the book? Is Kurzweil on the right track with his pattern recognizers? Are non-biological neocortexes on the horizon? Will we soon convert the solar system to computronium?
Let’s continue this discussion next Monday–so if you haven’t read the book yet, you still have a whole week to finish.
The other day I was walking through the garden when I looked down, saw one of these, leapt back, and screamed loudly enough to notify the entire neighborhood:
(The one in my yard was insect free, however.)
After catching my breath, I wondered, “Is that a wasp nest or a beehive?” and crept back for a closer look. Wasp nest. I mentally paged through my knowledge of wasp nests: wasps abandon nests when they fall on the ground. This one was probably empty and safe to step past. I later tossed it onto the compost pile.
The interesting part of this incident wasn’t the nest, but my reaction. I jumped away from the thing before I had even consciously figured out what the nest was. Only once I was safe did I consciously think about the nest.
Gazzaniga discusses a problem faced by brains trying to evolve to be bigger and smarter: how do you get more neurons working without taking up an absurd amount of space connecting each and every neuron to every other neuron?
Imagine a brain with 5 connected neurons: each neuron requires 4 connections to talk to every other neuron. A 5 neuron brain would thus need space for 10 total connections.
The addition of a 6th neuron would require 5 new connections; a 7th neuron requires 6 new connections, etc. A fully connected brain of 100 neurons would require 99 connections per neuron, for a total of 4,950 connections.
Connecting all of your neurons might work fine if you’re a sea squirt, with only 230 or so neurons, but it is going to fail hard if you’re trying to hook up 86 billion. The space required to hook up all of these neurons would be massively larger than the space you can actually maintain by eating.
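The arithmetic above is the handshake formula: a fully connected network of n neurons needs n*(n-1)/2 links, one per pair, which grows quadratically. A few lines make the scaling problem concrete (the sea-squirt and human neuron counts are the ones cited in the text):

```python
def full_connections(n):
    """Links needed to connect every neuron directly to every other:
    each of n neurons pairs with (n - 1) others, and each pair is
    counted once, giving n * (n - 1) / 2."""
    return n * (n - 1) // 2

print(full_connections(5))               # 10, matching the 5-neuron example
print(full_connections(100))             # 4,950, matching the 100-neuron example
print(full_connections(230))             # ~26,000 for a sea squirt
print(full_connections(86_000_000_000))  # ~3.7 * 10**21 for a human brain
```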
So how does an organism evolving to be smarter deal with the connectivity demands of increasing brain size?
Human social lives suggest an answer: Up on the human scale, one person can, Dunbar estimates, have functional social relationships with about 150 other people, including an understanding of those people’s relationships with each other. 150 people (the “Dunbar number”) is therefore the number of people who can reliably cooperate or form groups without requiring any top-down organization.
So how do humans survive in groups of a thousand, a million, or a billion (eg, China)? How do we build large-scale infrastructure projects requiring the work of thousands of people and used by millions, like interstate highways? By organization–that is, specialization.
In a small tribe of 150 people, almost everyone in the tribe can do most of the jobs necessary for the tribe’s survival, within the obvious limits of biology. Men and women are both primarily occupied with collecting food. Both prepare clothing and shelter; both can cook. There is some specialization of labor–obviously men can carry heavier loads; women can nurse children–but most people are generally competent at most jobs.
In a modern industrial economy, most people are completely incompetent at most jobs. I have a nice garden, but I don’t even know how to turn on a tractor, much less how to care for a cow. The average person does not know how to knit or sew, much less build a house, wire up the electricity and lay the plumbing. We attend school from 5 to 18 or 22 or 30 and end up less competent at surviving in our own societies than a cave man with no school was in his, not because school is terrible but because modern industrial society requires so much specialized knowledge to keep everything running that no one person can truly master even a tenth of it.
Specialization, not just of people but of organizations and institutions, like hospitals devoted to treating the sick, Walmarts devoted to selling goods, and Microsoft devoted to writing and selling computer software and hardware, lets society function without requiring that everyone learn to be a doctor, merchant, and computer expert.
Similarly, brains expand their competence via specialization, not denser neural connections.
The smartest people may boast more neurons than those of average intelligence, but their brains have fewer neural connections…
Neuroscientists in Germany recruited 259 participants, both men and women, to take IQ tests and have their brains imaged…
The research revealed a strong correlation between the number of dendrites in a person’s cerebral cortex and their intelligence. The smartest participants had fewer neural connections in their cerebral cortex.
Fewer neural connections overall allows different parts of the brain to specialize, increasing local competence.
All things are produced more plentifully and easily and of a better quality when one man does one thing that is natural to him and does it at the right time, and leaves other things. –Plato, The Republic
The brains of mice, as Gazzaniga discusses, do not need to be highly specialized, because mice are not very smart and do not do many specialized activities. Human brains, by contrast, are highly specialized, as anyone who has ever had a stroke has discovered. (Henry Harpending of West Hunter, for example, once had a stroke while visiting Germany that knocked out the area of his brain responsible for reading, but since he couldn’t read German in the first place, he didn’t realize anything was wrong until several hours later.)
I read, about a decade ago, that male and female brains have different levels, and patterns, of internal connectivity. (Here and here are articles on the subject.) These differences in connectivity may allow men and women to excel at different skills, and since we humans are a social species that can communicate by talking, this allows us to take cognitive modality beyond the level of a single brain.
So modularity lets us learn (and do) more things, with the downside that sometimes knowledge is highly localized–that is, we have a lot of knowledge that we seem able to access only under specific circumstances, rather than use generally.
For example, I have long wondered at the phenomenon of people who can definitely do complicated math when asked to, but show no practical number sense in everyday life–like the folks from the Yale Philosophy department who are confused about why African Americans are under-represented in their major, even though Yale has an African American Studies department that attracts a disproportionate share of Yale’s African American students. The mathematical certainty that if any major in the whole school attracts more African American students, then other majors must end up with fewer, has been lost on these otherwise bright minds.
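The zero-sum arithmetic here is trivial to make explicit. A minimal sketch, using entirely made-up numbers (the cohort size and the 30% share are hypothetical, not Yale’s actual figures):

```python
# Hypothetical illustration: students are a fixed pool, so any major that
# attracts a disproportionate share of a group necessarily leaves fewer
# members of that group for every other major combined.
total_students_in_group = 100          # hypothetical cohort size
share_drawn_by_one_major = 0.30        # hypothetical disproportionate draw

in_that_major = int(total_students_in_group * share_drawn_by_one_major)
left_for_all_other_majors = total_students_in_group - in_that_major

print(in_that_major)             # students in the one popular major
print(left_for_all_other_majors) # students remaining for everything else
```

Whatever the real numbers, the two quantities must sum to the total: over-representation in one place is under-representation everywhere else.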
Yalies are not the only folks who struggle to use the things they know. When asked to name a book–any book–ordinary people failed. Surely these people have heard of a book at some point in their lives–the Bible is pretty famous, as is Harry Potter. Even if you don’t like books, they were assigned in school, and your parents probably read The Cat in the Hat and Green Eggs and Ham to you when you were a kid. It is not that they do not have the knowledge; it is that they cannot access it.
Teachers complain all the time that students–even very good ones–can memorize all of the information they need for a test, regurgitate it all perfectly, and then turn around and show no practical understanding of the information at all.
Richard Feynman wrote eloquently of his time teaching future science teachers in Brazil:
In regard to education in Brazil, I had a very interesting experience. I was teaching a group of students who would ultimately become teachers, since at that time there were not many opportunities in Brazil for a highly trained person in science. These students had already had many courses, and this was to be their most advanced course in electricity and magnetism – Maxwell’s equations, and so on. …
I discovered a very strange phenomenon: I could ask a question, which the students would answer immediately. But the next time I would ask the question – the same subject, and the same question, as far as I could tell – they couldn’t answer it at all! For instance, one time I was talking about polarized light, and I gave them all some strips of polaroid.
Polaroid passes only light whose electric vector is in a certain direction, so I explained how you could tell which way the light is polarized from whether the polaroid is dark or light.
We first took two strips of polaroid and rotated them until they let the most light through. From doing that we could tell that the two strips were now admitting light polarized in the same direction – what passed through one piece of polaroid could also pass through the other. But then I asked them how one could tell the absolute direction of polarization, for a single piece of polaroid.
They hadn’t any idea.
I knew this took a certain amount of ingenuity, so I gave them a hint: “Look at the light reflected from the bay outside.”
Nobody said anything.
Then I said, “Have you ever heard of Brewster’s Angle?”
“Yes, sir! Brewster’s Angle is the angle at which light reflected from a medium with an index of refraction is completely polarized.”
“And which way is the light polarized when it’s reflected?”
“The light is polarized perpendicular to the plane of reflection, sir.” Even now, I have to think about it; they knew it cold! They even knew the tangent of the angle equals the index!
I said, “Well?”
Still nothing. They had just told me that light reflected from a medium with an index, such as the bay outside, was polarized; they had even told me which way it was polarized.
I said, “Look at the bay outside, through the polaroid. Now turn the polaroid.”
“Ooh, it’s polarized!” they said.
After a lot of investigation, I finally figured out that the students had memorized everything, but they didn’t know what anything meant. When they heard “light that is reflected from a medium with an index,” they didn’t know that it meant a material such as water. They didn’t know that the “direction of the light” is the direction in which you see something when you’re looking at it, and so on. Everything was entirely memorized, yet nothing had been translated into meaningful words. So if I asked, “What is Brewster’s Angle?” I’m going into the computer with the right keywords. But if I say, “Look at the water,” nothing happens – they don’t have anything under “Look at the water”!
The students here are not dumb, and memorizing things is not bad–memorizing your times tables is very useful–but they have everything lodged in their “memorization module” and nothing in their “practical experience module.” (Note: I am not necessarily suggesting that there exists a literal, physical spot in the brain where memorized and experienced knowledge reside, but that certain brain structures and networks lodge information in ways that make it easier or harder to access.)
People frequently make arguments that don’t make logical sense when you think them all the way through from start to finish, but do make sense if we assume that people are using specific brain modules for quick reasoning and don’t necessarily cross-check their results with each other. For example, when we are angry because someone has done something bad to us, we tend to snap at people who had nothing to do with it. Our brains are in “fight and punish mode” and latch on to the nearest person as the person who most likely committed the offense, even if we consciously know they weren’t involved.
Political discussions are often marred by folks running what ought to be logical arguments through status signaling, emotional, or tribal modules. The desire to see Bad People punished (a reasonable desire if we all lived in the same physical community with each other) interferes with a discussion of whether said punishment is actually useful, effective, or just. For example, a man who has been incorrectly convicted of the rape of a child will have a difficult time getting anyone to listen sympathetically to his case.
In the case of white South African victims of racially-motivated murder, the notion that their ancestors did wrong and therefore they deserve to be punished often overrides sympathy. As the BBC notes, these killings tend to be particularly brutal (they often involve torture) and targeted, but the South African government doesn’t care:
According to one leading political activist, Mandla Nyaqela, this is the after-effect of the huge degree of selfishness and brutality which was shown towards the black population under apartheid. …
Virtually every week the press here report the murders of white farmers, though you will not hear much about it in the media outside South Africa. In South Africa you are twice as likely to be murdered if you are a white farmer than if you are a police officer – and the police here have a particularly dangerous life. The killings of farmers are often particularly brutal. …
Ernst Roets’s organisation has published the names of more than 2,000 people who have died over the last two decades. The government has so far been unwilling to make solving and preventing these murders a priority. …
There used to be 60,000 white farmers in South Africa. In 20 years that number has halved.
The Christian Science Monitor reports on the measures ordinary South Africans must take, in what was once a safe country, to avoid becoming human shishkabobs. The article is worth pausing to read, though it is a bit of a tangent from our present discussion. It ends with a mind-bending statement about a borrowed dog (dogs are also important for security):
My friends tell me the dog is fine around children, but is skittish around men, especially black men. The people at the dog pound told them it had probably been abused. As we walk past house after house, with barking dog after barking dog, I notice Lampo pays no attention. Instead, he’s watching the stream of housekeepers and gardeners heading home from work. They eye the dog nervously back.
Great, I think, I’m walking a racist dog.
Module one: Boy South Africa has a lot of crime. Better get a dog, cover my house with steel bars, and an extensive security system.
Module two: Associating black people with crime is racist, therefore my dog is racist for being wary of people who look like the person who abused it.
And while some people are obviously sympathetic to the plight of murdered people, “Cry me a river White South African Colonizers” is a very common reaction. (Never mind that the people committing crimes in South Africa today never lived under apartheid; they’ve lived in a black-run country for their entire lives.) Logically, white South Africans did not do anything to deserve being killed, and, as with killing the golden goose, killing the people who produce food will just trigger a repeat of Zimbabwe, but the modules of tribalism–“I do not care about these people because they are not mine and I want their stuff”–and punishment–“I read about a horrible thing someone did, so I want to punish everyone who looks like them”–trump logic.
Who dies–and how they die–significantly shapes our engagement with the news. Gun deaths via mass shootings get much more coverage and worry than ordinary homicides, even though ordinary homicides are far more common. Homicides get more coverage and worry than suicides, even though suicides are far more common. The majority of gun deaths are actually suicides, but you’d never know that from listening to our national conversation about guns, simply because we are biased to worry far more about other people killing us than about killing ourselves.
Similarly, the death of one person via volcano receives about the same news coverage as 650 in a flood, 2,000 in a drought, or 40,000 in a famine. As the article notes:
Instead of considering the objective damage caused by natural disasters, networks tend to look for disasters that are “rife with drama”, as one New York Times article put it—hurricanes, tornadoes, forest fires, earthquakes all make for splashy headlines and captivating visuals. Thanks to this selectivity, less “spectacular” but often times more deadly natural disasters tend to get passed over. Food shortages, for example, result in the most casualties and affect the most people per incident, but their onset is more gradual than that of a volcanic explosion or sudden earthquake. … This bias for the spectacular is not only unfair and misleading, but also has the potential to misallocate attention and aid.
There are similar biases by continent, with disasters in Africa receiving less attention than disasters in Europe (this correlates with African disasters being more likely to be the slow-motion famines, epidemics and droughts that kill lots of people, and European disasters being splashier, though perhaps we’d consider famines “splashier” if they happened in Paris instead of Ethiopia.)
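The coverage-equivalence figures above imply a steep discount on attention per death. A quick back-of-the-envelope sketch, using only the numbers quoted from the article (the normalization to “volcano-equivalents” is my own framing):

```python
# If one volcano death earns roughly the same coverage as N deaths in
# another disaster type, each such death receives about 1/N the attention.
coverage_equivalent_deaths = {
    "volcano": 1,
    "flood": 650,
    "drought": 2_000,
    "famine": 40_000,
}

for disaster, n in coverage_equivalent_deaths.items():
    attention_per_death = 1 / n  # relative to one volcano death
    print(disaster, attention_per_death)
```

By this rough measure, a famine death draws on the order of one forty-thousandth the coverage of a volcano death.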
From a neuropolitical perspective, I suspect that patterns such as the Big Five personality traits correlating with particular political positions (“openness” with “liberalism,” for example, or “conscientiousness” with “conservativeness,”) are caused by patterns of brain activity that lead some people to depend more or less on particular brain modules for processing.
For example, conservatives process more of the world through the areas of their brain that are also used for processing disgust (not one of “the five,” but still an important psychological trait), which increases their fear of pathogens, disease vectors, and generally anything new or from the outside. Disgust can go so far as to process other people’s faces or body language as “disgusting” (eg, trans people) even when there is objectively no actual contamination or pathogenic risk involved.
Similarly, people who feel more guilt in one area of their life often feel guilt in others–eg, “White guilt was significantly associated with bulimia nervosa symptomatology.” The arrow of causation is unclear–guilt about eating might spill over into guilt about existing, or guilt about existing might cause guilt about eating, or people who generally feel guilty about everything could have both. Either way, these people are generally not logically reasoning, “Whites have done bad things, therefore I should starve myself.” (Should veganism be classified as a politically motivated eating disorder?)
I could continue forever–
Restrictions on medical research are biased toward preventing mentally salient incidents like thalidomide babies, but against the invisible cost of children who die from diseases that could have been cured had research not been prevented by regulations.
America has a large Somali community but not a comparable Congolese one (85,000 Somalis vs. 13,000 Congolese, of whom 10,000 hail from the DRC; Somalia has about 14 million people and the DRC about 78.7 million, so it’s not due to there being more Somalis in the world), for no particular reason I’ve been able to discover, other than that President Clinton once disastrously sent a few helicopters to intervene in the eternal Somali civil war, and so the government decided we now have a special obligation to take in Somalis.
–but that’s probably enough.
I have tried here to present a balanced account of different political biases, but I would like to end by noting that modular thinking, while it can lead to stupid decisions, exists for good reasons. If purely logical thinking were superior to modular, we’d probably be better at it. Still, cognitive biases exist and lead to a lot of stupid or sub-optimal results.
I began this post intending to write about testosterone metabolization in autism and possible connections with transgender identity, but realized halfway through that I didn’t actually know whether the autist-trans connection was primarily male-to-female or female-to-male. I had assumed that the relevant population is primarily MtF because both autists and trans people are primarily male, but both groups do have female populations that are large enough to contribute significantly. Here’s a sample of the data I’ve found so far:
A study conducted by a team of British scientists in 2012 found that of a pool of individuals not diagnosed on the autism spectrum, female-to-male (FTM) transgender people have higher rates of autistic features than do male-to-female (MTF) transgender people or cisgender males and females. Another study, which looked at children and adolescents admitted to a gender identity clinic in the Netherlands, found that almost 8 percent of subjects were also diagnosed with ASD.
Note that both of these studies are looking at trans people and assessing whether or not they have autism symptoms, not looking at autists and asking if they have trans symptoms. Given the characterization of autism as “extreme male brain” and that autism is diagnosed in males at about 4x the rate of females, the fact that there is some overlap between “women who think they think like men” and “traits associated with male thought patterns” is not surprising.
If the reported connection between autism and trans identity is just “autistic women feel like men,” that’s pretty non-mysterious and I just wasted an afternoon.
Though the data I have found so far still does not look directly at autists and ask how many of them have trans symptoms, the Wikipedia page devoted to transgender and transsexual computer programmers lists only MtFs and no FtMs. Whether or not this is a pattern throughout the wider autism community, it definitely seems to be a thing among programmers. (Relevant discussion.)
So, returning to the original post:
Autism contains an amusing contradiction: on the one hand, autism is sometimes characterized as “extreme male brain,” and on the other hand, (some) autists (may be) more likely than neurotypicals to self-identify as transwomen–that is, biological men who see themselves as women. This seems contradictory: if autists are more masculine, mentally, than the average male, why don’t they identify as football players, army rangers, or something else equally masculine? For that matter, why isn’t a group with “extreme male brains” regarded as more, well, masculine?
(And if autists have extreme male brains, does that mean football players don’t? Do football players have more feminine brains than autists? Do colorless green ideas sleep furiously? DO WORDS MEAN?)
In favor of the “extreme male brain” hypothesis, we have evidence that testosterone is important for certain brain functions, like spatial recognition. For example, this article, Testosterone and the brain:
Gender differences in spatial recognition, and age-related declines in cognition and mood, point towards testosterone as an important modulator of cerebral functions. Testosterone appears to activate a distributed cortical network, the ventral processing stream, during spatial cognition tasks, and addition of testosterone improves spatial cognition in younger and older hypogonadal men. In addition, reduced testosterone is associated with depressive disorders.
(Note that women also suffer depression at higher rates than men.)
So people with more testosterone are better at spatial cognition and other tasks that “autistic” brains typically excel at, and brains with less testosterone tend to be moody and depressed.
But hormones are tricky things. Where do they come from? Where do they go? How do we use them?
According to Wikipedia:
During the second trimester [of pregnancy], androgen level is associated with gender formation. This period affects the femininization or masculinization of the fetus and can be a better predictor of feminine or masculine behaviours such as sex typed behaviour than an adult’s own levels. A mother’s testosterone level during pregnancy is correlated with her daughter’s sex-typical behavior as an adult, and the correlation is even stronger than with the daughter’s own adult testosterone level.
… Early infancy androgen effects are the least understood. In the first weeks of life for male infants, testosterone levels rise. The levels remain in a pubertal range for a few months, but usually reach the barely detectable levels of childhood by 4–6 months of age. The function of this rise in humans is unknown. It has been theorized that brain masculinization is occurring since no significant changes have been identified in other parts of the body. The male brain is masculinized by the aromatization of testosterone into estrogen, which crosses the blood–brain barrier and enters the male brain, whereas female fetuses have α-fetoprotein, which binds the estrogen so that female brains are not affected.
Let’s re-read that: the male brain is masculinized by the aromatization of testosterone into estrogen.
If that’s not a weird sentence, I don’t know what is.
Burgeoning evidence now documents profound effects of estrogens on learning, memory, and mood as well as neurodevelopmental and neurodegenerative processes. Most data derive from studies in females, but there is mounting recognition that estrogens play important roles in the male brain, where they can be generated from circulating testosterone by local aromatase enzymes or synthesized de novo by neurons and glia. Estrogen-based therapy therefore holds considerable promise for brain disorders that affect both men and women. However, as investigations are beginning to consider the role of estrogens in the male brain more carefully, it emerges that they have different, even opposite, effects as well as similar effects in male and female brains. This review focuses on these differences, including sex dimorphisms in the ability of estradiol to influence synaptic plasticity, neurotransmission, neurodegeneration, and cognition, which, we argue, are due in a large part to sex differences in the organization of the underlying circuitry.
Hypothesis: the way testosterone works in the brain (where we both do math and “feel” male or female) and the way it works in the muscles might be very different.
Do autists actually differ from other people in testosterone (or other hormone) levels?
Compared to controls, significantly more women with ASC [Autism Spectrum Conditions] reported (a) hirsutism, (b) bisexuality or asexuality, (c) irregular menstrual cycle, (d) dysmenorrhea, (e) polycystic ovary syndrome, (f) severe acne, (g) epilepsy, (h) tomboyism, and (i) family history of ovarian, uterine, and prostate cancers, tumors, or growths. Compared to controls, significantly more mothers of ASC children reported (a) severe acne, (b) breast and uterine cancers, tumors, or growths, and (c) family history of ovarian and uterine cancers, tumors, or growths.
Three of the children had exhibited explosive aggression against others (anger, broken objects, violence toward others). Three engaged in self-mutilations, and three demonstrated no aggression and were in a severe state of autistic withdrawal. The appearance of aggression against others was associated with having fewer of the main symptoms of autism (autistic withdrawal, stereotypies, language dysfunctions).
Three of their subjects (they don’t say which, but presumably from the first group,) had abnormally high testosterone levels (including one of the girls in the study.) The other six subjects had normal androgen levels.
This is the first report of an association between abnormally high androgenic activity and aggression in subjects with autism. Although a previously reported study did not find group mean elevations in plasma testosterone in prepubertal autistic subjects (4), it appears here that in certain autistic individuals, especially those in puberty, hyperandrogeny may play a role in aggressive behaviors. Also, there appear to be distinct clinical forms of autism that are based on aggressive behaviors and are not classified in DSM-IV. Our preliminary findings suggest that abnormally high plasma testosterone concentration is associated with aggression against others and having fewer of the main autistic symptoms.
So, some autists do have abnormally high testosterone levels, but those same autists are less autistic, overall, than other autists. More autistic behavior, aggression aside, is associated with normal hormone levels. Probably.
Levels of FT [Fetal Testosterone] were analysed in amniotic fluid and compared with autistic traits, measured using the Quantitative Checklist for Autism in Toddlers (Q-CHAT) in 129 typically developing toddlers aged between 18 and 24 months (mean ± SD 19.25 ± 1.52 months). …
Sex differences were observed in Q-CHAT scores, with boys scoring significantly higher (indicating more autistic traits) than girls. In addition, we confirmed a significant positive relationship between FT levels and autistic traits.
I feel like this is veering into “we found that boys score higher on a test of male traits than girls did” territory, though.
The present study evaluates androgen and estrogen levels in saliva as well as polymorphisms in genes for androgen receptor (AR), 5-alpha reductase (SRD5A2), and estrogen receptor alpha (ESR1) in the Slovak population of prepubertal (under 10 years) and pubertal (over 10 years) children with autism spectrum disorders. The examined prepubertal patients with autism, pubertal patients with autism, and prepubertal patients with Asperger syndrome had significantly increased levels of salivary testosterone (P < 0.05, P < 0.01, and P < 0.05, respectively) in comparison with control subjects. We found a lower number of (CAG)n repeats in the AR gene in boys with Asperger syndrome (P < 0.001). Autistic boys had an increased frequency of the T allele in the SRD5A2 gene in comparison with the control group. The frequencies of T and C alleles in ESR1 gene were comparable in all assessed groups.
Individuals with a lower number of CAG repeats exhibit higher AR gene expression levels and generate more functional AR receptors increasing their sensitivity to testosterone…
Fewer repeats, more sensitivity to androgens. The SRD5A2 gene is also involved in testosterone metabolization, though I’m not sure exactly what the T allele does relative to the other variants.
But just because there’s a lot of something in the blood (or saliva) doesn’t mean the body is using it. Diabetics can have high blood sugar because their bodies lack the necessary insulin to move the sugar from the blood, into their cells. Fewer androgen receptors could mean the body is metabolizing testosterone less effectively, which in turn leaves more of it floating in the blood… Biology is complicated.
Here, we show that male and female hormones differentially regulate the expression of a novel autism candidate gene, retinoic acid-related orphan receptor-alpha (RORA) in a neuronal cell line, SH-SY5Y. In addition, we demonstrate that RORA transcriptionally regulates aromatase, an enzyme that converts testosterone to estrogen. We further show that aromatase protein is significantly reduced in the frontal cortex of autistic subjects relative to sex- and age-matched controls, and is strongly correlated with RORA protein levels in the brain.
If autists are bad at converting testosterone to estrogen, this could leave extra testosterone floating around in their blood… but it doesn’t explain their supposed “extreme male brain.” Here’s another study on the same subject, since it’s confusing:
Comparing the brains of 13 children with and 13 children without autism spectrum disorder, the researchers found a 35 percent decrease in estrogen receptor beta expression as well as a 38 percent reduction in the amount of aromatase, the enzyme that converts testosterone to estrogen.
Levels of estrogen receptor beta proteins, the active molecules that result from gene expression and enable functions like brain protection, were similarly low. There was no discernable change in expression levels of estrogen receptor alpha, which mediates sexual behavior.
The animals in the new studies, called ‘reeler’ mice, have one defective copy of the reelin gene and make about half the amount of reelin compared with controls. …
Reeler mice with one faulty copy serve as a model of one of the most well-established neuro-anatomical abnormalities in autism. Since the mid-1980s, scientists have known that people with autism have fewer Purkinje cells in the cerebellum than normal. These cells integrate information from throughout the cerebellum and relay it to other parts of the brain, particularly the cerebral cortex.
But there’s a twist: both male and female reeler mice have less reelin than control mice, but only the males lose Purkinje cells. …
In one of the studies, the researchers found that five days after birth, reeler mice have higher levels of testosterone in the cerebellum compared with genetically normal males3.
Keller’s team then injected estradiol — a form of the female sex hormone estrogen — into the brains of 5-day-old mice. In the male reeler mice, this treatment increases reelin levels in the cerebellum and partially blocks Purkinje cell loss. Giving more estrogen to female reeler mice has no effect — but females injected with tamoxifen, an estrogen blocker, lose Purkinje cells. …
In another study, the researchers investigated the effects of reelin deficiency and estrogen treatment on cognitive flexibility — the ability to switch strategies to solve a problem4. …
“And we saw indeed that the reeler mice are slower to switch. They tend to persevere in the old strategy,” Keller says. However, male reeler mice treated with estrogen at 5 days old show improved cognitive flexibility as adults, suggesting that the estrogen has a long-term effect.
This still doesn’t explain why autists would self-identify as transgender women (mtf) at higher rates than average, but it does suggest that any who do start hormone therapy might receive benefits completely independent of gender identity.
Let’s stop and step back a moment.
Autism is, unfortunately, badly defined. As the saying goes, if you’ve met one autist, you’ve met one autist. There are probably a variety of different, complicated things going on in the brains of different autists simply because a variety of different, complicated conditions are all being lumped together under a single label. Any mental disability that can include both non-verbal people who can barely dress and feed themselves and require lifetime care and billionaires like Bill Gates is a very badly defined condition.
(Unfortunately, people diagnose autism with questionnaires that include questions like “Is the child pedantic?” which could be equally true of both an autistic child and a child who is merely very smart and has learned more about a particular subject than their peers and so is responding in more detail than the adult is used to.)
The average autistic person is not a programmer. Autism is a disability, and the average diagnosed autist is pretty darn disabled. Among the people who have jobs and friends but nonetheless share some symptoms with formally diagnosed autists, though, programmer and the like appear to be pretty popular professions.
Back in my day, we just called these folks nerds.
Here’s a theory from a completely different direction: People feel the differences between themselves and a group they are supposed to fit into and associate with a lot more strongly than the differences between themselves and a distant group. Growing up, you probably got into more conflicts with your siblings and parents than with random strangers, even though–or perhaps because–your family is nearly identical to you genetically, culturally, and environmentally. “I am nothing like my brother!” a man declares, while simultaneously affirming that there is a great deal in common between himself and members of a race and culture from the other side of the planet. Your coworker, someone specifically selected for the fact that they have similar mental and technical aptitudes and training as yourself, has a distinct list of traits that drive you nuts, from the way he staples papers to the way he pronounces his Ts, while the women of an obscure Afghan tribe of goat herders simply don’t enter your consciousness.
Nerds, somewhat by definition, don’t fit in. You don’t worry much about fitting into a group you’re not part of in the first place–you probably don’t worry much about whether or not you fit in with Melanesian fishermen–but most people work hard at fitting in with their own group.
So if you’re male, but you don’t fit in with other males (say, because you’re a nerd,) and you’re down at the bottom of the high school totem pole and feel like all of the women you’d like to date are judging you negatively next to the football players, then you might feel, rather strongly, the differences between you and other males. Other males are aggressive, they call you a faggot, they push you out of their spaces and threaten you with violence, and there’s very little you can do to respond besides retreat into your “nerd games.”
By contrast, women are polite to you, not aggressive, and don’t aggressively push you out of their spaces. Your differences with them are much less problematic, so you feel like you “fit in” with them.
(There is probably a similar dynamic at play with American men who are obsessed with anime. It’s not so much that they are truly into Japanese culture–which is mostly about quietly working hard–as they don’t fit in very well with their own culture.) (Note: not intended as a knock on anime, which certainly has some good works.)
And here’s another theory: autists have some interesting difficulties with constructing categories and making inferences from data. They also have trouble going along with the crowd, and may have fewer “mirror neurons” than normal people. So maybe autists just process the categories of “male” and “female” a little differently than everyone else, and in a small subset of autists, this results in trans identity.*
And another: maybe there are certain intersex disorders which result in differences in brain wiring/organization. (Yes, there are real intersex disorders, like Klinefelter’s, in which people have XXY chromosomes instead of XX or XY.) In a small set of cases, these unusually wired brains may be extremely good at doing certain tasks (like programming) resulting in people who are both “autism spectrum” and “trans”. This is actually the theory I’ve been running with for years, though it is not incompatible with the hormonal theories discussed above.
But we are talking small: trans people of any sort are extremely rare, probably on the order of <1/1000. Even if autists were trans at 8 times the rates of non-autists, that’s still only 8/1000 or 1/125. Autists themselves are pretty rare (estimates vary, but the vast majority of people are not autistic at all,) so we are talking about a very small subset of a very small population in the first place. We only notice these correlations at all because the total population has gotten so huge.
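The compounding of rarities here is easy to sanity-check. A back-of-the-envelope sketch (the 1/1000 base rate and 8× elevation are the figures from the text; the ~1.5% autism prevalence and 300-million population are my own illustrative assumptions):

```python
# Rough arithmetic on how small this subset really is.
trans_base = 1 / 1000            # assumed base rate of trans identity (from text)
trans_autist = 8 * trans_base    # hypothetical 8x elevation: 8/1000 = 1/125
autism_prev = 0.015              # assumed autism prevalence (my assumption)

both = autism_prev * trans_autist   # fraction of total population autistic AND trans
print(f"Trans rate among autists: {trans_autist} (= 1/{1 / trans_autist:.0f})")
print(f"Autistic and trans: {both:.5f} of the population")
print(f"In a population of 300 million: ~{both * 300e6:,.0f} people")
```

Even a tiny fraction of a huge population is tens of thousands of people, which is why the correlation is noticeable at all.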
Sometimes, extremely rare things are random chance.
I was really excited about this book when I picked it up at the library. It has the word “numbers” on the cover and a subtitle that implies a story about human cultural and cognitive evolution.
Regrettably, what could have been a great book has turned out to be kind of annoying. There’s some fascinating information in here–for example, there’s a really interesting part on pages 249-252–but you have to get through pages 1-248 to get there. (Unfortunately, sometimes authors put their most interesting bits at the end so that people looking to make trouble have gotten bored and wandered off by then.)
I shall try to discuss/quote some of the book’s more interesting bits, and leave aside my differences with the author (who keeps reiterating his position that mathematical ability is entirely dependent on the culture you’re raised in.) Everett nonetheless has a fascinating perspective, having actually spent much of his childhood in a remote Amazonian village belonging to the Piraha, who have no real words for numbers. (His parents were missionaries.)
Which languages contain number words? Which don’t? Everett gives a broad survey:
“…we can reach a few broad conclusions about numbers in speech. First, they are common to nearly all of the world’s languages. … this discussion has shown that number words, across unrelated languages, tend to exhibit striking parallels, since most languages employ a biologically based body-part model evident in their number bases.”
That is, many languages have words that translate essentially to “One, Two, Three, Four, Hand, … Two hands, (10)… Two Feet, (20),” etc., and reflect this in their higher counting systems, which can end up containing a mix of base five, 10, and 20. (The Romans, for example, used both base five and ten in their written system.)
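The Roman case makes the mixed-base structure concrete, and it is easy to see in code. A minimal sketch (my own illustration, not from the book) converting integers to Roman numerals, where V, L, and D are the base-5 “hand” symbols and I, X, C, and M are the base-10 symbols:

```python
# Roman numerals mix base-5 symbols (V=5, L=50, D=500) with base-10
# symbols (I=1, X=10, C=100, M=1000) -- a fossil of hand counting.
VALUES = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
          (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
          (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]

def to_roman(n: int) -> str:
    """Convert a positive integer to a Roman numeral string."""
    out = []
    for value, symbol in VALUES:
        while n >= value:
            out.append(symbol)
            n -= value
    return "".join(out)

print(to_roman(18))    # XVIII: "two hands" (X), a "hand" (V), three fingers
print(to_roman(2024))  # MMXXIV
```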
“Third, the linguistic evidence suggests not only that this body-part model has motivated the innovation of numbers throughout the world, but also that this body-part basis of number words stretches back historically as far as the linguistic data can take us. It is evident in reconstruction of ancestral languages, including Proto-Sino-Tibetan, Proto-Niger-Congo, Proto-Austronesian, and Proto-Indo-European, the languages whose descendant tongues are best represented in the world today.”
Note, though, that linguistics does not actually give us a very long time horizon. Proto-Indo-European was spoken about 4-6,000 years ago. Proto-Sino-Tibetan is not as well studied yet as PIE, but also appears to be at most 6,000 years old. Proto-Niger-Congo is probably about 5-6,000 years old. Proto-Austronesian (which, despite its name, is not associated with Australia,) is about 5,000 years old.
These ranges are not a coincidence: languages change as they age, and once they have changed too much, they become impossible to classify into language families. Older languages, like Basque or Ainu, are often simply described as isolates, because we can’t link them to their relatives. Since humanity itself is 200,000-300,000 years old, comparative linguistics only opens a very short window into the past. Various groups–like the Amazonian tribes Everett studies–split off from other groups of humans thousands or hundreds of thousands of years before anyone started speaking Proto-Indo-European. Even agriculture, which began about 10,000-15,000 years ago, is older than these proto-languages (and agriculture seems to have prompted the real development of math.)
I also note these language families are the world’s biggest because they successfully conquered speakers of the world’s other languages. Spanish, Portuguese, and English are now widely spoken in the Americas instead of Cherokee, Mayan, and Nheengatu because Indo-European language speakers conquered the speakers of those languages.
The guy with the better numbers doesn’t always conquer the guy with the worse numbers–the Mongol conquest of China is an obvious counter. But in these cases, the superior number system sticks around, because no one wants to replace good numbers with bad ones.
In general, though, better tech–which requires numbers–tends to conquer worse tech.
Which means that even though our most successful language families all have number words that appear to be about 4-6,000 years old, we shouldn’t assume this was the norm for most people throughout most of history. Current human numeracy may be a very recent phenomenon.
“The invention of number is attainable by the human mind but is attained through our fingers. Linguistic data, both historical and current, suggest that numbers in disparate cultures have arisen independently, on an indeterminate range of occasions, through the realization that hands can be used to name quantities like 5 and 10. … Words, our ultimate implements for abstract symbolization, can thankfully be enlisted to denote quantities. But they are usually enlisted only after people establish a more concrete embodied correspondence between their fingers and quantities.”
Some more on numbers in different languages:
“Rare number bases have been observed, for instance, in the quaternary (base-4) systems of Salinan languages of California, or in the senary (base-6) systems that are found in southern New Guinea. …
Several languages in Melanesia and Polynesia have or once had number systems that vary in accordance with the type of object being counted. In the case of Old High Fijian, for instance, the word for 100 was Bola when people were counting canoes, but Kora when they were counting coconuts. …
some languages in northwest Amazonia base their numbers on kinship relationships. This is true of Dâw and Hup, two related languages in the region. Speakers of the former language use fingers complemented with words when counting from 4 to 10. The fingers signify the quantity of items being counted, but words are used to denote whether the quantity is odd or even. If the quantity is even, speakers say it “has a brother,” if it is odd they state it “has no brother.”
What about languages with no or very few words for numbers?
In one recent survey of limited number systems, it was found that more than a dozen languages lack bases altogether, and several do not have words for exact quantities beyond 2 and, in some cases, beyond 1. Of course, such cases represent a minuscule fraction of the world’s languages, the bulk of which have number bases reflecting the body-part model. Furthermore, most of the extreme cases in question are restricted geographically to Amazonia. …
All of the extremely restricted languages, I believe, are used by people who are hunter-gatherers or horticulturalists, eg, the Munduruku. Hunter-gatherers typically don’t have a lot of goods to keep track of or trade, fields to measure or taxes to pay, and so don’t need to use a lot of numbers. (Note, however, that the Inuit/Eskimo have a perfectly normal base-20 counting system. Their particularly harsh environment appears to have inspired both technological and cultural adaptations.) But why are Amazonian languages even less numeric than those of other hunter-gatherers from similar environments, like central Africa?
Famously, most of the languages of Australia have somewhat limited number systems, and some linguists previously claimed that most Australian languages lack precise terms for quantities beyond 2…. [however] many languages on that continent actually have native means of describing various quantities in precise ways, and their number words for small quantities can sometimes be combined to represent larger quantities via the additive and even multiplicative usage of bases. …
Of the nearly 200 Australian languages considered in the survey, all have words to denote 1 and 2. In about three-quarters of the languages, however, the highest number is 3 or 4. Still, many of the languages use a word for “two” as a base for other numbers. Several of the languages use a word for “five” as a base, and eight of the languages top out at a word for “ten.”
Everett then digresses into what initially seems like a tangent about grammatical number, but luckily I enjoy comparative linguistics.
In an incredibly comprehensive survey of 1,066 languages, linguist Matthew Dryer recently found that 98 of them are like Karitiana and lack a grammatical means of marking nouns as plural. So it is not particularly rare to find languages in which nouns do not show plurality. … about 90% of them have a grammatical means through which speakers can convey whether they are talking about one or more than one thing.
Mandarin is a major language that has limited expression of plurals. According to Wikipedia:
Some languages, such as modern Arabic and Proto-Indo-European also have a “dual” category distinct from singular or plural; an extremely small set of languages have a trial category.
Many languages also change their verbs depending on how many nouns are involved; in English we say “He runs; they run;” languages like Latin or Spanish have far more extensive systems.
In sum: the vast majority of languages distinguish between 1 and more than one; a few distinguish between one, two, and many, and a very few distinguish between one, two, three, and many.
From the endnotes:
… some controversial claims of quadral markers, used in restricted contexts, have been made for the Austronesian languages Tangga, Marshallese, and Sursurunga. … As Corbett notes in his comprehensive survey, the forms are probably best not considered quadral markers. In fact, his impressive survey did not uncover any cases of quadral marking in the world’s languages.
Everett tends to bury his point; his intention in this chapter is to marshal support for the idea that humans have an “innate number sense” that allows them to pretty much instantly realize if they are looking at 1, 2, or 3 objects, but does not allow for instant recognition of larger numbers, like 4. He posits a second, much vaguer number sense that lets us distinguish between “big” and “small” amounts of things, eg, 10 looks smaller than 100, even if you can’t count.
He does cite actual neuroscience on this point–he’s not just making it up. Even newborn humans appear to be able to distinguish between 1, 2, and 3 of something, but not larger numbers. They also seem to distinguish between some and a bunch of something. Anumeric peoples, like the Piraha, also appear to only distinguish between 1, 2, and 3 items with good accuracy, though they can tell “a little” “some” and “a lot” apart. Everett also cites data from animal studies that find, similarly, that animals can distinguish 1, 2, and 3, as well as “a little” and “a lot”. (I had been hoping for a discussion of cephalopod intelligence, but unfortunately, no.)
How then, Everett asks, do we wed our specific number sense (1, 2, and 3) with our general number sense (“some” vs “a lot”) to produce ideas like 6, 7, and a googol? He proposes that we have no innate idea of 6, nor ability to count to 10. Rather, we can count because we were taught to (just as some highly trained parrots and chimps can.) It is only the presence of number words in our languages that allows us to count past 3–after all, anumeric people cannot.
But I feel like Everett is railroading us to a particular conclusion. For example, he cites neurology studies that found one part of the brain does math–the intraparietal sulcus (IPS)–but only one part? Surely there’s more than one part of the brain involved in math.
The IPS turns out to be part of the extensive network of brain areas that support human arithmetic (Figure 1). Like all networks it is distributed, and it is clear that numerical cognition engages perceptual, motor, spatial and mnemonic functions, but the hub areas are the parietal lobes …
(By contrast, I’ve spent over half an hour searching and failing to figure out how high octopuses can count.)
Moreover, I question the idea that the specific and general number senses are actually separate. Rather, I suspect there is only one sense, but it is essentially logarithmic. For example, hearing is logarithmic (or perhaps exponential,) which is why decibels are also logarithmic. Vision is also logarithmic:
The eye senses brightness approximately logarithmically over a moderate range (but more like a power law over a wider range), and stellar magnitude is measured on a logarithmic scale. This magnitude scale was invented by the ancient Greek astronomer Hipparchus in about 150 B.C. He ranked the stars he could see in terms of their brightness, with 1 representing the brightest down to 6 representing the faintest, though now the scale has been extended beyond these limits; an increase in 5 magnitudes corresponds to a decrease in brightness by a factor of 100. Modern researchers have attempted to incorporate such perceptual effects into mathematical models of vision.
So many experiments have revealed logarithmic responses to stimuli that someone has formulated a mathematical “law” on the matter:
Fechner’s law states that the subjective sensation is proportional to the logarithm of the stimulus intensity. According to this law, human perceptions of sight and sound work as follows: Perceived loudness/brightness is proportional to logarithm of the actual intensity measured with an accurate nonhuman instrument.
p = k ln(S/S₀)
The relationship between stimulus and perception is logarithmic. This logarithmic relationship means that if a stimulus varies as a geometric progression (i.e., multiplied by a fixed factor), the corresponding perception is altered in an arithmetic progression (i.e., in additive constant amounts). For example, if a stimulus is tripled in strength (i.e., 3 x 1), the corresponding perception may be two times as strong as its original value (i.e., 1 + 1). If the stimulus is again tripled in strength (i.e., 3 x 3 x 3), the corresponding perception will be three times as strong as its original value (i.e., 1 + 1 + 1). Hence, for multiplications in stimulus strength, the strength of perception only adds. The mathematical derivations of the torques on a simple beam balance produce a description that is strictly compatible with Weber’s law.
In any logarithmic scale, small quantities–like 1, 2, and 3–are easy to distinguish, while medium quantities–like 101, 102, and 103–get lumped together as “approximately the same.”
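The lumping-together effect is easy to demonstrate with a few lines of arithmetic (a sketch of the point above, not anything from the book): on a log scale, the perceived “gap” between two quantities depends on their ratio, not their difference.

```python
import math

# Under Fechner's law, the perceived gap between two stimuli is the
# difference of their logarithms, i.e. the log of their ratio.
for a, b in [(1, 2), (2, 3), (101, 102), (102, 103)]:
    gap = math.log(b) - math.log(a)
    print(f"{a} vs {b}: perceived gap = {gap:.4f}")
```

The 1-vs-2 gap (≈0.69) is roughly seventy times the 101-vs-102 gap (≈0.01), which is why small counts feel distinct while 101, 102, and 103 all read as “about a hundred.”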
Of course, this still doesn’t answer the question of how people develop the ability to count past 3, but this is getting long, so we’ll continue our discussion next week.
So I was thinking about taste (flavor) and disgust (emotion.)
As I mentioned about a month ago, 25% of people are “supertasters,” that is, better at tasting than the other 75% of people. Supertasters experience flavors more intensely than ordinary tasters, resulting in a preference for “bland” food (food with too much flavor is “overwhelming” to them.) They also have a more difficult time getting used to new foods.
One of my work acquaintances of many years –we’ll call her Echo–is obese, constantly on a diet, and constantly eats sweets. She knows she should eat vegetables and tries to do so, but finds them bitter and unpleasant, and so the general outcome is as you expect: she doesn’t eat them.
Since I find most vegetables quite tasty, I find this attitude very strange–but I am willing to admit that I may be the one with unusual attitudes toward food.
Echo is also quite conservative.
This got me thinking about vegetarians vs. people who think vegetarians are crazy. Why (aside from novelty of the idea) should vegetarians be liberals? Why aren’t vegetarians just people who happen to really like vegetables?
What if there were something in preference for vegetables themselves that correlated with political ideology?
Certainly we can theorize that “supertaster” => “vegetables taste bitter” => “dislike of vegetables” => “thinks vegetarians are crazy.” Some supertasters might think meat tastes bad, but anecdotal evidence doesn’t support this; see also Wikipedia, where supertasting is clearly associated with responses to plants:
Any evolutionary advantage to supertasting is unclear. In some environments, heightened taste response, particularly to bitterness, would represent an important advantage in avoiding potentially toxic plant alkaloids. In other environments, increased response to bitterness may have limited the range of palatable foods. …
Although individual food preference for supertasters cannot be typified, documented examples for either lessened preference or consumption include:
Mushrooms? Echo was just complaining about mushrooms.
Let’s talk about disgust. Disgust is an important reaction to things that might infect or poison you, triggering reactions from scrunching up your face to vomiting (ie, expelling the poison.) We process disgust in our amygdalas, and some people appear to have bigger or smaller amygdalas than others, with the result that the folks with bigger amygdalas feel more disgust.
Humans also route a variety of social situations through their amygdalas, resulting in the feeling of “disgust” in response to things that are not rotten food, like other people’s sexual behaviors, criminals, or particularly unattractive people. People with larger amygdalas also tend to find more human behaviors disgusting, and this disgust correlates with social conservatism.
To what extent are “taste” and “disgust” independent of each other? I don’t know; perhaps they are intimately linked into a single feedback system, where disgust and taste sensitivity cause each other, or perhaps they are relatively independent, so that a few unlucky people are both super-sensitive to taste and easily disgusted.
People who find other people’s behavior disgusting and off-putting may also be people who find flavors overwhelming, prefer bland or sweet foods over bitter ones, think vegetables are icky, vegetarians are crazy, and struggle to stay on diets.
What’s that, you say, I’ve just constructed a just-so story?
Michael Shin and William McCarthy, researchers from UCLA, have found an association between counties with higher levels of support for the 2012 Republican presidential candidate and higher levels of obesity in those counties.
Looks like the Mormons and Southern blacks are outliers.
(I don’t really like maps like this for displaying data; I would much prefer a simple graph showing orientation on one axis and obesity on the other, with each county as a datapoint.)
(Unsurprisingly, the first 49 hits I got when searching for correlations between political orientation and obesity were almost all about what other people think of fat people, not what fat people think. This is probably because researchers tend to be skinny people who want to fight “fat phobia” but aren’t actually interested in the opinions of fat people.)
Liberals are 28 percent more likely than conservatives to eat fresh fruit daily, and 17 percent more likely to eat toast or a bagel in the morning, while conservatives are 20 percent more likely to skip breakfast.
Ten percent of liberals surveyed indicated they are vegetarians, compared with 3 percent of conservatives.
Liberals are 28 percent more likely than conservatives to enjoy beer, with 60 percent of liberals indicating they like beer.
(See above where Wikipedia noted that supertasters dislike beer.) I will also note that coffee, which supertasters tend to dislike because it is too bitter, is very popular in the ultra-liberal cities of Portland and Seattle, whereas heavily sweetened iced tea is practically the official beverage of the South.
The only remaining question is whether supertasters are conservative. That may take some research.
Update: I have not found, to my disappointment, a simple study that just looks at correlation between ideology and supertasting (or nontasting.) However, I have found a couple of useful items.
Standard tests of disgust sensitivity, a questionnaire developed for this research assessing different types of moral transgressions (nonvisceral, implied-visceral, visceral) with the terms “angry” and “grossed-out,” and a taste sensitivity test of 6-n-propylthiouracil (PROP) were administered to 102 participants. [PROP is commonly used to test for “supertasters.”] Results confirmed past findings that the more sensitive to PROP a participant was the more disgusted they were by visceral, but not moral, disgust elicitors. Importantly, the findings newly revealed that taste sensitivity had no bearing on evaluations of moral transgressions, regardless of their visceral nature, when “angry” was the emotion primed. However, when “grossed-out” was primed for evaluating moral violations, the more intense PROP tasted to a participant the more “grossed-out” they were by all transgressions. Women were generally more disgust sensitive and morally condemning than men, … The present findings support the proposition that moral and visceral disgust do not share a common oral origin, but show that linguistic priming can transform a moral transgression into a viscerally repulsive event and that susceptibility to this priming varies as a function of an individual’s sensitivity to the origins of visceral disgust—bitter taste. [bold mine.]
In other words, supertasters are more easily disgusted, and with verbal priming will transfer that disgust to moral transgressions. (And easily disgusted people tend to be conservatives.)
This is an attempt at a coherent explanation for why left-handedness (and right-handedness) exist in the distributions that they do.
Handedness is a rather exceptional human trait. Most animals don’t have a dominant hand (or foot.) Horses have no dominant hooves; anteaters dig equally well with both paws; dolphins don’t favor one flipper over the other; monkeys don’t fall out of trees if they try to grab a branch with their left hands. Only humans have a really distinct tendency to use one side of their bodies over the other.
And about 90% of us use our right hands and about 10% our left (Wikipedia claims 10%, but The Lopsided Ape reports 12%,) an observation that appears to hold pretty consistently throughout both time and culture, so long as we aren’t dealing with a culture where lefties are forced to write with their right hands.
A simple Mendel-square, one-gene explanation for handedness–a dominant allele for right-handedness and a recessive one for left-handedness, with equal allele frequencies in the population–would result in 75% righties and 25% lefties. Even if the proportions weren’t equal, the offspring of two lefties ought to be 100% left-handed. This is not, however, what we see. The children of two lefties have only a 25% chance or so of being left-handed themselves.
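Both predictions of the simple recessive model can be checked with a few lines of arithmetic (my own sketch, using R for the dominant right allele and l for the recessive left allele):

```python
# Simple recessive model: only ll individuals are left-handed.
# With equal allele frequencies, Hardy-Weinberg gives the lefty fraction.
p = q = 0.5                  # frequencies of R (right) and l (left) alleles
lefty_fraction = q ** 2      # fraction of ll homozygotes
print(f"Predicted lefties in population: {lefty_fraction:.0%}")  # far above ~10%

# Two lefty parents are both ll, so every gamete carries l:
child_genotypes = {m + d for m in "l" for d in "l"}
print(f"Possible children of two lefties: {child_genotypes}")  # all left-handed
```

The model predicts 25% lefties overall and 100% lefty children from lefty couples; observation gives roughly 10% and 25%, so the simple model fails on both counts.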
So let’s try a more complicated model.
Let’s assume that there are two alleles that code for right-handedness. (Hereafter “R”) You get one from your mom and one from your dad.
Each of these alleles is accompanied by a second allele that codes for either nothing (hereafter “O”) or potentially switches the expression of your handedness (hereafter “S”).
Everybody in the world gets two identical R alleles, one from mom and one from dad.
Everyone also gets two S or O alleles, one from mom and one from dad. One of these S or O alleles affects one of your Rs, and the other affects the other R.
Your potential pairs, then, are:
RO/RO, RO/RS, RS/RO, or RS/RS
RO=right handed allele.
RS=50% chance of expressing right or left dominance; RS/RS thus => 25% chance of both alleles coming out lefty.
So RO/RO, RO/RS, and RS/RO = righties, (but the RO/ROs may have especially dominant right hands; half of the RO/RS guys may have weakly dominant right hands.)
Only RS/RS produces lefties, and of those, only 25% defeat the dominance odds.
This gets us our observed correlation of only 25% of children of left-handed couples being left-handed themselves.
(Please note that this is still a very simplified model; Wikipedia claims that there may be more than 40 alleles involved.)
What of the general population as a whole?
Assuming random mating in a population with equal quantities of RO/RO, RO/RS, RS/RO and RS/RS, we’d end up with 25% of children RS/RS. But if only 25% of RS/RS turn out lefties, only 6.25% of children would be lefties. We’re still missing 4-6% of the population.
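The model above can be enumerated directly (a sketch of my own, just mechanizing the arithmetic): four genotype combinations in equal proportion, each RS side coming out lefty with probability 1/2, and a person left-handed only if both sides come out lefty.

```python
# Enumerate the RO/RS model: an RS slot expresses lefty with
# probability 0.5; an RO slot never does; lefty = both slots lefty.
genotypes = ["RO/RO", "RO/RS", "RS/RO", "RS/RS"]

def p_lefty(genotype: str) -> float:
    sides = [0.5 if half == "RS" else 0.0 for half in genotype.split("/")]
    return sides[0] * sides[1]

pop_lefty = sum(p_lefty(g) for g in genotypes) / len(genotypes)
print(f"RS/RS lefty chance: {p_lefty('RS/RS'):.0%}")   # 25%
print(f"Population lefty fraction: {pop_lefty:.2%}")   # 6.25%
```

Only RS/RS can produce lefties at all, so the population rate is 25% of 25%, i.e. the 6.25% figure above.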
This implies that either: A. Wikipedia has the wrong #s for % of children of lefties who are left-handed; B. about half of lefties are RO/RS (about 1/8th of the RO/RS population); C. RS is found in twice the proportion as RO in the population; or D. my model is wrong.
Dr Chris McManus reported in his book Right Hand, Left Hand on a study he had done based on a review of scientific literature which showed parent handedness for 70,000 children. On average, the chances of two right-handed parents having a left-handed child were around 9% left-handed children, two left-handed parents around 26% and one left and one right-handed parent around 19%. …
More than 50% of left-handers do not know of any other left-hander anywhere in their living family.
This implies B, that about half of lefties are RO/RS. Having one RS combination gives you a 12.5% chance of being left-handed; having two RS combinations gives you a 25% chance.
And that… I think that works. And it means we can refine our theory–we don’t need two R alleles; we only need one. (Obviously it is more likely a whole bunch of alleles that code for a whole system, but since they act together, we can model them as one.) The R allele is then modified by a pair of alleles that comes in either O (do nothing,) or S (switch.)
One S allele gives you a 12.5% chance of being a lefty; two doubles your chances to 25%.
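The refined model can be checked against the observed rate (my own arithmetic sketch, assuming the S and O alleles each have frequency 0.5, so Hardy-Weinberg gives the genotype mix):

```python
# Refined model: one R system plus two O/S modifier alleles.
# Per-genotype lefty chances are from the text; genotype frequencies
# follow from assumed equal allele frequencies (0.5 each).
lefty_chance = {"OO": 0.0, "OS": 0.125, "SS": 0.25}
freq = {"OO": 0.25, "OS": 0.50, "SS": 0.25}   # 0.5^2, 2*0.5*0.5, 0.5^2

pop_rate = sum(freq[g] * lefty_chance[g] for g in freq)
print(f"Predicted population lefty rate: {pop_rate:.1%}")      # 12.5%
print(f"S carriers: {freq['OS'] + freq['SS']:.0%} of people")  # 75%
```

The predicted 12.5% sits right in the observed 10-12% range, and S carriers come out to 75% of the population, consistent with the figures discussed below.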
Interestingly, this model suggests that not only does no gene for “left handedness” exist, but that “left handedness” might not even be the allele’s goal. Despite the rarity of lefties, the S allele is found in 75% of the population (the same % as the O allele.) My suspicion is that the S allele is doing something else valuable, like making sure we don’t become too lopsided in our abilities or try to shunt all of our mental functions to one side of our brain.
We’re talking about foods, not whether you prefer Beethoven or Lil’ Wayne.
Certainly there are broad correlations between the foods people enjoy and their ethnicity/social class. If you know whether I chose fried okra, chicken feet, gefilte fish, escargot, or grasshoppers for dinner, you can make a pretty good guess about my background. (Actually, I have eaten all of these things. The grasshoppers were over-salted, but otherwise fine.) The world’s plethora of tasty (and not-so-tasty) cuisines is due primarily to regional variations in what grows well where (not a lot of chili peppers growing in Nunavut, Canada,) and cost (the rich can always afford fancier fare than the poor,) with a side dish of seemingly random cultural taboos like “don’t eat pork” or “don’t eat cows” or “don’t eat grasshoppers.”
But do people vary in their experience of taste? Does intelligence influence how you perceive your meal, driving smarter (or less-smart) people to seek out particular flavor profiles or combinations? Or could there be other psychological or neurological factors at play in people’s eating decisions?
This post was inspired by a meal my husband, an older relative and I shared recently at McDonald’s. It had been a while since we’d last patronized McDonald’s, but older relative likes their burgers, so we went and ordered some new-to-us variety of meat-on-a-bun. As my husband and I sat there, deconstructing the novel taste experience and comparing it to other burgers, the older relative gave us this look of “Jeez, the idiots are discussing the flavor of a burger! Just eat it already!”
As we dined later that evening at my nemesis, Olive Garden, I began wondering whether we actually experienced the food the same way. Perhaps there is something in people that makes them prefer bland, predictable food. Perhaps some people are better at discerning different flavors, and the people who cannot discern them end up with worse food because they can’t tell?
Unfortunately, it appears that not a lot of people have studied whether there is any sort of correlation between IQ and taste (or smell.) There’s a fair amount of research on taste (and smell,) like “do relatives of schizophrenics have impaired senses of smell?” (More on Schizophrenics and their decreased ability to smell) or “can we get fat kids to eat more vegetables?” Oh, and apparently the nature of auditory hallucinations in epileptics varies with IQ (IIRC.) But not much that directly addresses the question.
I did find two references that, somewhat in passing, noted that they found no relationship between taste and IQ, but these weren’t studies designed to test for that. For example, in A Food Study of Monotony, published in 1958 (you know I am really looking for sources when I have to go back to 1958,) researchers restricted the diets of military personnel employed at an army hospital to only 4 menus to see how quickly and badly they’d get bored of the food. They found no correlation between boredom and IQ, but people employed at an army hospital are probably pre-selected for being pretty bright (and having certain personality traits in common, including ability to stand army food.)
Interestingly, three traits did correlate with (or against) boredom:
Fatter people got bored fastest (the authors speculate that they care the most about their food,) while depressed and feminine men (all subjects in the study were men) got bored the least. Depressed people are already disinterested in food, so it is hard to get less-interested, but no explanation was given of what they meant by “femininity” or how this might affect food preferences. (Also, the hypochondriacs got bored quickly.)
Some foods inspire boredom (or even disgust) quickly, while others are virtually immune. Milk and bread, for example, can be eaten every day without complaint (though you might get bored if bread were your only food.) Potted meat, by contrast, gets old fast.
Although self-reported eating practices were not associated with educational level, intelligence, nor various indices of psychopathology, they were related to the demographic variables of gender and age: older participants reported eating more fiber in their diets than did younger ones, and women reported more avoidance of fats from meats than did men.
Self-reported eating habits may not be all that reliable, though.
Participants with autism were significantly less accurate than control participants in identifying sour tastes and marginally less accurate for bitter tastes, but they were not different in identifying sweet and salty stimuli. … Olfactory identification was significantly worse among participants with autism. … True differences exist in taste and olfactory identification in autism. Impairment in taste identification with normal detection thresholds suggests cortical, rather than brainstem dysfunction.
(Another study of the eating habits of autistic kids found that the pickier ones were rated by their parents as more severely impaired than the less picky ones, but then severe food aversions are a form of life impairment. By the way, do not tell the parents of an autistic kid, “oh, he’ll eat when he’s hungry.” They will probably respond politely, but mentally they are stabbing you.)
On brainstem vs. cortical function–it appears that we do some of our basic flavor identification way down in the most instinctual part of the brain, as Facial Expressions in Response to Taste and Smell Stimulation explores. The authors found that pretty much everyone makes the same faces in response to sweet, sour, and bitter flavors–whites and blacks, old people and newborns, retarded people and blind people, even premature infants, blind infants, and infants born missing most of their brains. All of which is another point in favor of my theory that disgust is real. (And if that is not enough science of taste for you, I recommend Place and Taste Aversion Learning, in which animals with brain lesions lost their fear of new foods.)
Genetics obviously plays a role in taste. If you are one of the 14% or so of people who think cilantro tastes like soap (and I sympathize, because cilantro definitely tastes like soap,) then you’ve already discovered this in a very practical way. Genetics also obviously determine whether you continue producing the enzyme for milk digestion after infancy (lactase persistence). According to Why are you a picky eater? Blame genes, brains, and breastmilk:
Researchers at Philadelphia’s Monell Chemical Senses Center, a scientific institute dedicated to the study of smell and taste, have found that this same gene also predicts the strength of sweet-tooth cravings among children. Kids who were more sensitive to bitterness preferred sugary foods and drinks. However, adults with the bitter receptor genes remained picky about bitter foods but did not prefer more sweets, the Monell study found. This suggests that sometimes age and experience can override genetics.
I suspect that there is actually a sound biological, evolutionary reason why kids crave sweets more than grownups, and this desire for sweets is somewhat “turned off” as we age.
Ethnobotanist Gary Paul Nabhan suggests that diet had a key role in human evolution, specifically, that human genetic diversity is predominately a product of regional differences in ancestral diets. Chemical compounds found within animals and plants varied depending on climate. These compounds induced changes in gene expression, which can vary depending on the amount within the particular food and its availability. The Agricultural Age led to further diet-based genetic diversity. Cultivation of foods led to the development of novel plants and animals that were not available in the ancestral environment. …
There are other fascinating examples of gene-diet interaction. Culturally specific recipes, semi-quantitative blending of locally available foods and herbs, and cooking directions needed in order to reduce toxins present in plants, emerged over time through a process of trial-and error and were transmitted through the ages. The effects on genes by foods can be extremely complex given the range of plant-derived compounds available within a given region. The advent of agriculture is suggested to have overridden natural selection by random changes in the environment. The results of human-driven selection can be highly unexpected. …
In sedentary herding societies, drinking water was frequently contaminated by livestock waste. The author suggests that, in order to avoid contaminated water, beverages made with fermented grains or fruit were drunk instead. Thus, alcohol resistance was selected for in populations that herded animals, such as Europeans. By contrast, groups which did not practice herding, such as East Asians and Native Americans, did not need to use alcohol as a water substitute and remain highly sensitive to its effects.
Speaking of genetics:
Indians and Africans are much more likely than Europeans and native South Americans to have an allele that lets them eat a vegetarian diet:
The vegetarian allele evolved in populations that have eaten a plant-based diet over hundreds of generations. The adaptation allows these people to efficiently process omega-3 and omega-6 fatty acids and convert them into compounds essential for early brain development and controlling inflammation. In populations that live on plant-based diets, this genetic variation provided an advantage and was positively selected in those groups.
In Inuit populations of Greenland, the researchers uncovered that a previously identified adaptation is opposite to the one found in long-standing vegetarian populations: While the vegetarian allele has an insertion of 22 bases (a base is a building block of DNA) within the gene, this insertion was found to be deleted in the seafood allele.
Dr. Hirsh, neurological director of the Smell and Taste Research and Treatment Foundation in Chicago, stands by his book, which is based on over 24 years of scientific study and tests of more than 18,000 people’s food choices and personalities.
Byrnes assessed the group using the Arnett Inventory of Sensation Seeking (AISS), a test for the personality trait of sensation-seeking, defined as desiring novel and intense stimulation and presumed to contribute to risk preferences. Those in the group who score above the mean AISS score are considered more open to risks and new experiences, while those scoring below the mean are considered less open to those things.
The subjects were given 25 micromolar doses of capsaicin, the active component of chili peppers, and asked to rate how much they liked a spicy meal as the burn from the capsaicin increased in intensity. Those in the group who fell below the mean AISS rapidly disliked the meal as the burn increased. People who were above the mean AISS had a consistently high liking of the meal even as the burn increased. Those scoring at the mean liked the meal less as the burn increased, but not nearly as rapidly as those below the mean.
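The mean-split design described here is simple enough to sketch. Below is a minimal, hypothetical version of the grouping step; the function name, the sample scores, and the tolerance band for the “at the mean” group are all invented for illustration, since the article doesn’t say how that middle group was defined:

```python
def classify_by_mean(scores, tol=0.25):
    """Label each AISS score relative to the sample mean (hypothetical sketch).

    Scores more than `tol` below the mean are "below", more than `tol`
    above are "above", and anything in between is "near" the mean.
    """
    mean = sum(scores) / len(scores)
    labels = []
    for s in scores:
        if s < mean - tol:
            labels.append("below")
        elif s > mean + tol:
            labels.append("above")
        else:
            labels.append("near")
    return labels

# Invented example: five participants' AISS scores; mean is 3.0.
print(classify_by_mean([1, 2, 3, 4, 5]))
# → ['below', 'below', 'near', 'above', 'above']
```

Each participant’s liking-vs-burn curve would then be averaged within these three groups and compared, which is all the comparison in the study amounts to.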
And then there are the roughly 25% of us who are “supertasters”:
A supertaster is a person who experiences the sense of taste with far greater intensity than average. Women are more likely to be supertasters, as are those from Asia, South America and Africa. The cause of this heightened response is unknown, although it is thought to be related to the presence of the TAS2R38 gene, the ability to taste PROP and PTC, and at least in part, due to an increased number of fungiform papillae.
Perhaps the global distribution of supertasters is related to the distribution of vegetarian-friendly alleles. It’s not surprising that women are more likely to be supertasters, as they have a better sense of smell than men. What may be surprising is that supertasters tend not to be foodies who delight in flavoring their foods with all sorts of new spices, but instead tend toward more restricted, bland diets. Because their sense of taste is essentially on overdrive, flavors that taste “mild” to most people taste “overwhelming” on their tongues. As a result, they tend to prefer a much more subdued palate–which is, of course, perfectly tasty to them.
“During research back in the 1980s, we discovered that people are more reluctant to try new foods of animal origin than those of plant origin,” Pelchat says. “That’s ironic in two ways. As far as taste is concerned, the range of flavors in animal meat isn’t that large compared to plants, so there isn’t as much of a difference. And, of course, people are much more likely to be poisoned by eating plants than by animals, as long as the meat is properly cooked.” …
It’s also possible that reward mechanisms in our brain can drive changes in taste. Pelchat’s team once had test subjects sample tiny bits of unfamiliar food with no substantial nutritional value, and accompanied them with pills that contained either nothing or a potent cocktail of caloric sugar and fat. Subjects had no idea what was in the pills they swallowed. They learned to like the unfamiliar flavors more quickly when they were paired with a big caloric impact—suggesting that body and brain combined can alter tastes more easily when unappetizing foods deliver big benefits.
So trying to get people to adopt new foods while losing weight may not be the best idea.
(For all that people complain about kids’ pickiness, parents are much pickier. Kids will happily eat playdoh and crayons, but one stray chicken heart in your parents’ soup and suddenly it’s “no more eating at your house.”)
Of course, you can’t talk about food without encountering meddlers who are convinced that people should eat whatever they’re convinced is the perfect diet, like these probably well-meaning folks trying to get Latinos to eat fewer snacks:
Latinos are the largest racial and ethnic minority group in the United States and bear a disproportionate burden of obesity related chronic disease. Despite national efforts to improve dietary habits and prevent obesity among Latinos, obesity rates remain high. …
…there is a need for more targeted health promotion and nutrition education efforts on the risks associated with soda and energy-dense food consumption to help improve dietary habits and obesity levels in low-income Latino communities.
Never mind that Latinos are one of the healthiest groups in the country, with longer life expectancies than whites! We’d better make sure they know that their food ways are not approved of!
(Just in case it is not clear already: different people are adapted to and will be healthy on different diets. There is no magical, one-size-fits-all diet.)
4,000 Scottish children aged 3-5 years old were examined to compare the intelligence-dampening effects of fast food consumption versus “from scratch” fare prepared with only fresh ingredients.
Higher fast food consumption by the children was linked with lower intelligence, and this held even after adjustments for wealth and social status were taken into account.
It’d be better if they controlled for parental IQ.
The conclusions of this study confirm previous research showing long-lasting effects on IQ from a child’s diet. An Australian study from the University of Adelaide published in August 2012 showed that toddlers who consume junk food grow less smart as they get older. In that study, 7,000 children were examined at 6 months, 15 months, and 2 years of age to record their diets.
When the children were examined again at age 8, children who were consuming the most unhealthy food had IQs up to 2 points lower than children eating a wholesome diet.
As we were discussing yesterday, I theorize that people have neural feedback loops that reward them for conforming/imitating others/obeying authorities and punish them for disobeying/not conforming.
This leads people to obey authorities or go along with groups even when they know, logically, that they shouldn’t.
There are certainly many situations in which we want people to conform even though they don’t want to, like when my kids have to go to bed or buckle their seatbelts–as I said yesterday, the feedback loop exists because it is useful.
But there are plenty of situations where we don’t want people to conform, like when trying to brainstorm new ideas.
Under what conditions will people disobey authority?
But in person, people may disobey authorities when they have some other social system to fall back on. If disobeying an authority in Society A means I lose social status in Society A, I will be more likely to disobey if I am a member in good standing in Society B.
If I can use my disobedience against Authority A as social leverage to increase my standing in Society B, then I am all the more likely to disobey. A person who can effectively stand up to an authority figure without getting punished must be, our brains reason, a powerful person, an authority in their own right.
Teenagers do this all the time, using their defiance against adults, school, teachers, and society in general to curry higher social status among other teenagers, the people they actually care about impressing.
SJWs do this, too:
I normally consider the president of Princeton an authority figure, and even though I probably disagree with him on far more political matters than these students do, I’d be highly unlikely to be rude to him in real life–especially if I were a student he could get expelled from college.
But if I had an outside audience–Society B–clapping and cheering for me behind the scenes, the urge to obey would be weaker. And if yelling at the President of Princeton could guarantee me high social status, approval, job offers, etc., then there’s a good chance I’d do it.
But then I got to thinking: Are there any circumstances under which these students would have accepted the president’s authority?
Obviously, if the man had a proven track record of competently performing a particular skill the students wished to learn, they might follow his example.
If authority works via neural feedback loops, employing some form of “mirror neurons,” do these systems activate more strongly when the people we are perceiving look more like ourselves (or our internalized notion of people in our “tribe” look like, since mirrors are a recent invention)?
In other words, what would a cross-racial version of the Milgram experiment look like?
Unfortunately, it doesn’t look like anyone has tried it (and to do it properly, it’d need to be a big experiment, involving several “scientists” of different races [so that the study isn’t biased by one “scientist” just being bad at projecting authority] interacting with dozens of students of different races, which would be a rather large undertaking.) I’m also not finding any studies on cross-racial authority (I did find plenty of websites offering practical advice about different groups’ leadership styles,) though I’m sure someone has studied it.
However, I did find cross-racial experiments on empathy, which may involve the same brain systems, and so are suggestive:
Using transcranial magnetic stimulation, we explored sensorimotor empathic brain responses in black and white individuals who exhibited implicit but not explicit ingroup preference and race-specific autonomic reactivity. We found that observing the pain of ingroup models inhibited the onlookers’ corticospinal system as if they were feeling the pain. Both black and white individuals exhibited empathic reactivity also when viewing the pain of stranger, very unfamiliar, violet-hand models. By contrast, no vicarious mapping of the pain of individuals culturally marked as outgroup members on the basis of their skin color was found. Importantly, group-specific lack of empathic reactivity was higher in the onlookers who exhibited stronger implicit racial bias.
Using the event-related potential (ERP) approach, we tracked the time-course of white participants’ empathic reactions to white (own-race) and black (other-race) faces displayed in a painful condition (i.e. with a needle penetrating the skin) and in a nonpainful condition (i.e. with Q-tip touching the skin). In a 280–340 ms time-window, neural responses to the pain of own-race individuals under needle penetration conditions were amplified relative to neural responses to the pain of other-race individuals displayed under analogous conditions.
In this study, we used functional magnetic resonance imaging (fMRI) to investigate how people perceive the actions of in-group and out-group members, and how their biased view in favor of own team members manifests itself in the brain. We divided participants into two teams and had them judge the relative speeds of hand actions performed by an in-group and an out-group member in a competitive situation. Participants judged hand actions performed by in-group members as being faster than those of out-group members, even when the two actions were performed at physically identical speeds. In an additional fMRI experiment, we showed that, contrary to common belief, such skewed impressions arise from a subtle bias in perception and associated brain activity rather than decision-making processes, and that this bias develops rapidly and involuntarily as a consequence of group affiliation. Our findings suggest that the neural mechanisms that underlie human perception are shaped by social context.
None of these studies shows definitively whether in-group vs. out-group biases are an inherent feature of neurological systems, but Avenanti’s finding that people were more empathetic toward a purple-skinned person than toward a member of a racial out-group suggests that some amount of learning is involved in the process–and that rather than comparing people against our in-group, we may be comparing them against our out-group.
At any rate, you may get similar outcomes either way.
In cases where you want to promote group cohesion and obedience, it may be beneficial to sort people by self-identity.
In cases where you want to guard against groupthink, obedience, or conformity, it may be beneficial to mix up the groups. Intellectual diversity is great, but even ethnic diversity may help people resist defaulting to obedience, especially when they know they shouldn’t.
Using data from two panel studies on U.S. firms and an online experiment, we examine investor reactions to increases in board diversity. Contrary to conventional wisdom, we find that appointing female directors has no impact on objective measures of performance, such as ROA, but does result in a systematic decrease in market value.
(Solal argues that investors may perceive the hiring of women–even competent ones–as a sign that the company is pursuing social justice goals instead of money-making goals and dump the stock.)
Additionally, diverse companies may find it difficult to work together toward a common goal–there is a good quantity of evidence that increasing diversity decreases trust and inhibits group cohesion. Eg, from The downside of diversity:
IT HAS BECOME increasingly popular to speak of racial and ethnic diversity as a civic strength. From multicultural festivals to pronouncements from political leaders, the message is the same: our differences make us stronger.
But a massive new study, based on detailed interviews of nearly 30,000 people across America, has concluded just the opposite. Harvard political scientist Robert Putnam — famous for “Bowling Alone,” his 2000 book on declining civic engagement — has found that the greater the diversity in a community, the fewer people vote and the less they volunteer, the less they give to charity and work on community projects. In the most diverse communities, neighbors trust one another about half as much as they do in the most homogenous settings. The study, the largest ever on civic engagement in America, found that virtually all measures of civic health are lower in more diverse settings.
As usual, I suspect there is an optimum level of diversity–depending on a group’s purpose and its members’ preferences–that helps minimize groupthink while still preserving most of the benefits of cohesion.