Invasive Memes

 

Smallpox virus

Do people eventually grow ideologically resistant to dangerous local memes, but remain susceptible to foreign memes, allowing them to spread like invasive species?

And if so, can we find some way to memetically vaccinate ourselves against deadly ideas?

***

Memetics is the study of how ideas (“memes”) spread and evolve, using evolutionary theory and epidemiology as models. A “viral meme” is one that spreads swiftly through society, “infecting” minds as it goes.
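
To make the epidemiological analogy concrete, here is a minimal sketch of an SIR-style ("susceptible, infected, recovered") model applied to meme spread. This is purely illustrative–the parameters are invented, not drawn from any memetics study–but it shows the basic dynamic: an idea "goes viral" when each believer converts more than one new person before losing interest.

```python
# Minimal SIR-style model of meme spread (illustrative parameters only).
# S = susceptible minds, I = "infected" believers, R = resistant/immune.

def simulate(population=1_000_000, beta=0.3, gamma=0.1, days=100):
    s, i, r = population - 1, 1, 0
    for _ in range(days):
        new_infections = beta * s * i / population  # contacts that transmit the meme
        new_recoveries = gamma * i                  # believers who lose interest or grow resistant
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
    return s, i, r

# The meme spreads when beta/gamma (the basic reproduction number, R0) exceeds 1.
s, i, r = simulate()
print(f"susceptible={s:,.0f} infected={i:,.0f} resistant={r:,.0f}")
```
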

Of course, most memes are fairly innocent (e.g. fashion trends) or even beneficial (“wash your hands before eating to prevent disease transmission”), but some ideas, like communism, kill people.

Ideologies consist of a big set of related ideas rather than a single one, so let’s call them memeplexes.

Almost all ideological memeplexes (and religions) sound great on paper–they have to, because that’s how they spread–but they are much more variable in actual practice.

Any idea that causes its believers to suffer is unlikely to persist–at the very least, because its believers die off.

Over time, in places where people have been exposed to ideological memeplexes, their worst aspects become known and people may learn to avoid them; the memeplexes themselves can evolve to be less harmful.

Over in epidemiology, diseases humans have been exposed to for a long time become less virulent as humans become adapted to them. Chickenpox, for example, is a fairly mild disease that kills few people because the virus has been infecting people for as long as people have been around (the ancestral Varicella-Zoster virus evolved approximately 65 million years ago and has been infecting animals ever since). Rather than kill you, chickenpox prefers to enter your nerves and go dormant for decades, reemerging later as shingles, ready to infect new people.

By contrast, smallpox (Variola major and Variola minor) probably evolved from a rodent-infecting virus about 16,000 to 68,000 years ago. That’s a big range, but either way, it’s much more recent than chickenpox. Smallpox made its first major impact on the historical record in Egypt around the third century BC, and thereafter became a recurring plague in Africa and Eurasia. Note that unlike chickenpox, which is old enough to have spread throughout the world with humanity, smallpox emerged long after major population splits occurred–like part of the Asian clade splitting off and heading into the Americas.

By 1400, Europeans had developed some immunity to smallpox (due to those who didn’t have any immunity dying), but when Columbus landed in the New World, folks here had never seen the disease before–and thus had no immunity. Diseases like smallpox and measles ripped through native communities, killing approximately 90% of the New World population.

If we extend this metaphor back to ideas–if people have been exposed to an ideology for a long time, they are more likely to have developed immunity to it or the ideology to have adapted to be relatively less harmful than it initially was. For example, the Protestant Reformation and subsequent Catholic counter-reformation triggered a series of European wars that killed 10 million people, but today Catholics and Protestants manage to live in the same countries without killing each other. New religions are much more likely to lead all of their followers in a mass suicide than old, established religions; countries that have just undergone a political revolution are much more likely to kill off large numbers of their citizens than ones that haven’t.

This is not to say that old ideas are perfect and never harmful–chickenpox still kills people and is not a fun disease–but that any bad aspects are likely to become milder over time as people wise up to bad ideas (certain caveats apply).

But this process only works for ideas that have been around for a long time. What about new ideas?

You can’t stop new ideas. Technology is always changing. The world is changing, and it requires new ideas to operate. When these new ideas arrive, even terrible ones can spread like wildfire because people have no memetic antibodies to resist them. New memes, in short, are like invasive memetic species.

In the late 1960s, 15 million people still caught smallpox every year. In 1980, it was declared officially eradicated–not one case had been seen since 1977, due to a massive, world-wide vaccination campaign.

Humans can acquire immunity to disease in two main ways. The slow way is everyone who isn’t immune dying; everyone left alive happens to have adaptations that let them not die, which they can pass on to their children. As with chickenpox, over generations, the disease becomes less severe because humans become successively more adapted to it.

The fast way is to catch a disease, produce antibodies that recognize and can fight it off, and thereafter enjoy immunity. This, of course, assumes that you survive the disease.

Vaccination works by teaching the body’s immune system to recognize a disease without infecting the body with a full-strength germ, using a weakened or harmless version of the germ instead. Early on, weakened germs from actual smallpox scabs or lesions were used to inoculate people–a risky method, since the germs often weren’t that weak. Later, people discovered that cowpox was similar enough to smallpox that its antibodies could also fight smallpox, but cowpox itself was too adapted to cattle hosts to seriously harm humans. (Today I believe the vaccine uses a different weakened virus, but the principle is the same.)

The good part about memes is that you do not actually have to inject a physical substance into your body in order to learn about them.

Ideologies are very difficult to evaluate in the abstract, because, as mentioned, they are all optimized to sound good on paper. It’s their actual effects we are interested in.

So if we want to learn whether an idea is good or not, it’s probably best not to learn about it by merely reading books written by its advocates. Talk to people in places where the ideas have already been tried and learn from their experiences. If those people tell you this ideology causes mass suffering and they hate it, drop it like a hot potato. If those people are practicing an “impure” version of the ideology, it’s probably an improvement over the original.

For example, “communism” as practiced in China today is quite different from “communism” as practiced there 50 years ago–so much so that the modern system really isn’t communism at all. There was never, to my knowledge, an official changeover from one system to another, just a gradual accretion of improvements. This speaks strongly against communism as an ideology, since no country has managed to be successful by moving toward ideological communist purity, only by moving away from it–though they may still find it useful to retain some of communism’s original ideas.

I think there is a similar dynamic occurring in many Islamic countries. Islam is a relatively old religion that has had time to adapt to local conditions in many different parts of the world. For example, in Morocco, where the climate is more favorable to raising pigs than in other parts of the Islamic world, the taboo against pigs isn’t as strongly observed. The burka is not an Islamic universal, but characteristic of central Asia (the similar niqab is from Yemen). Islamic head coverings vary by culture–such as the kurhars, traditionally worn by unmarried women in Ingushetia, north of the Caucasus, or the caps popular in Xinjiang. Turkey has laws officially restricting burkas in some areas, and Syria discourages even hijabs. Women in Iran did not go heavily veiled prior to the Iranian Revolution. So the insistence on extensive veiling in many Islamic communities (like the territory conquered by ISIS) is not a continuation of old traditions, but the imposition of a new, idealized version of Islam.

Purity is counter to practicality.

Of course, this approach is hampered by the fact that what works in one place, time, and community may not work in a different one. Tilling your fields one way works in Europe, and tilling them a different way works in Papua New Guinea. But extrapolating from what works is at least a good start.

 

 

Neanderthal Skull for 3D Printing

Meet Nandy the Neanderthal. You can download him at Thingiverse.

This is my first creation, Nandy the Neanderthal, based on the La Chapelle-aux-Saints 1 skull and a side view of another specimen. Note that he is based on two different skulls, but still very much a Neanderthal.

Since this is my very first creation and I don’t have a 3D printer yet (I expect to receive one soon and am planning ahead), I am still learning all of the ins and outs of this technology and would appreciate any technical feedback.

Neanderthals evolved around 600,000-800,000 years ago and spread into the Middle East, Europe, and central Asia. They made stone tools, controlled fire, and hunted. They survived in a cold and difficult climate, but likely could make no more than the simplest of clothes. As a result, they may have been, unlike modern humans, hairy.

Cochran and Harpending of West Hunter write in The 10,000 Year Explosion: 

 Chimpanzees have ridges on their finger bones that stem from the way that they clutch their mothers’ fur as infants. Modern humans don’t have these ridges, but Neanderthals do.

Hoffecker, in The Spread of Modern Humans in Europe writes:

Neanderthal sites show no evidence of tools for making tailored clothing. There are only hide scrapers, which might have been used to make blankets or ponchos. This is in contrast to Upper Paleolithic (modern human) sites, which have an abundance of eyed bone needles and bone awls.

Their skulls were, on average, larger than ours, with wide noses, round eye sockets, and an elongated braincase. Their facial features were robust–that is, strong, thick, and heavy.

The La Chapelle-aux-Saints 1 Neanderthal lived to about 40 years old. He had lost most of his teeth years before his death (I gave Nandy some teeth, though), suffered arthritis, and must have been cared for in his old age by the rest of his tribe. At his death he was most likely buried in a pit dug by his family, which preserved his skeleton in nearly complete condition for 60,000 years.

Anatomically modern humans, Homo sapiens, encountered and interbred with Neanderthals around 40,000 years ago. (Neanderthals are also humans–Homo neanderthalensis.) Today, about 1-5% of the DNA in non-Sub-Saharan Africans hails originally from a Neanderthal ancestor. (Melanesians also have DNA from a cousin of the Neanderthals, the Denisovans, and Sub-Saharan Africans may have their own archaic ancestors.)

Unfortunately for Nandy and his relations, the Neanderthals also began to disappear around 40,000 years ago. Perhaps it was the weather, or Homo sapiens outcompeted them, or their enormous skulls just caused too much trouble in childbirth. Whatever happened, the Neanderthals remain a mystery, evidence of the past when we weren’t the only human species in town.

The Endless Ratiocination of the Dysphoric Mind

Begin

My endless inquiries made it impossible for me to achieve anything. Moreover, I get to think about my own thoughts of the situation in which I find myself. I even think that I think of it, and divide myself into an infinite retrogressive sequence of ‘I’s who consider each other. I do not know at which ‘I’ to stop as the actual, and as soon as I stop, there is indeed again an ‘I’ which stops at it. I become confused and feel giddy as if I were looking down into a bottomless abyss, and my ponderings result finally in a terrible headache. –Møller, Adventures of a Danish Student

Møller’s Adventures of a Danish Student was one of Niels Bohr’s favorite books; it reflected his own difficulties with cycles of ratiocination, in which the mind protects itself against conclusions by watching itself think.

I have noticed a tendency on the left, especially among the academic-minded, to split the individual into sets of mental twins–one who is and one who feels that it is; one who does and one who observes the doing.

Take the categories of “biological sex” and “gender.” Sex is defined as the biological condition of “producing small gametes” (male) or “producing large gametes” (female) for the purpose of sexual reproduction. Thus we can talk about male and female strawberry plants, male and female molluscs, male and female chickens, male and female Homo sapiens.

(Indeed, the male-female binary is remarkably common across sexually reproducing plants and animals–it appears that the mathematics of a third sex simply don’t work out, unless you’re a mushroom. How exactly sex is created varies by species, which makes the stability of the sex-binary all the more remarkable.)

And for the first 299,945 years or so of our existence, most people were pretty happy dividing humanity into “men,” “women,” and the occasional “we’re not sure.” People didn’t understand why or how biology works, but it was a functional enough division for people.

In 1955, John Money decided we needed a new term, “gender,” to describe, as Wikipedia puts it, “the range of characteristics pertaining to, and differentiating between, masculinity and femininity.” Masculinity is further defined as “a set of attributes, behaviors, and roles associated with boys and men;” we can define “femininity” similarly.

So if we put these together, we get a circular definition: gender is a range of characteristics of the attributes of males and females. Note that attributes are already characteristics. They cannot further have characteristics that are not already inherent in themselves.

But really, people invoke “gender” to speak of a sense of self, a self that reflexively looks at itself and perceives itself as possessing traits of maleness or femaleness; the thinker who must think of himself as “male” before he can act as a male. After all, you cannot walk without desiring first to move in a direction; how can you think without first knowing what it is you want to think? It is a cognitive splitting of the behavior of the whole person into two separate, distinct entities–an acting body, possessed of biological sex, and a perceiving mind, that merely perceives and “displays” gender.

But the self that looks at itself looking at itself is not real–it cannot be, for there is only one self. You can look at yourself in the mirror, but you cannot stand outside of yourself and be simultaneously yourself; there is only one you. The alternative, a fractured consciousness, is a symptom of mental disorder and treated with chlorpromazine.

Robert Oppenheimer was once diagnosed with schizophrenia–dementia praecox, as they called it then. Whether he had it or simply confused the therapist by talking about wave/particle dualities is another matter.

Then there are the myriad variants of the claim that men and women “perform femininity” or “display masculinity” or “do gender.” They do not claim that people are feminine or act masculine–such conventional phrasing assumes the existence of a unitary self that is, perceives, and acts. Rather, they posit an inner self that possesses no inherent male or female traits, for whom masculinity and femininity are only created via the interaction of their body and external expectations. In this view, women do not buy clothes because they have some inherent desire to go shopping and buy pretty things, but because society has compelled them to do so in order to comply with an external notion of “what it means to be female.” The self who produces large gametes is not the self who shops.

The biological view of human behavior states that most humans engage in a variety of behaviors because similar behaviors contributed to the evolutionary success of our ancestors. We eat because ancestors who didn’t think eating was important died. We jump back when we see something that looks like a spider because ancestors who didn’t were bitten and died. We love cute things with big eyes because they look like babies, and we are descended mostly from people who loved their babies.

Sometimes we do things that we don’t enjoy but rationalize will benefit us, like work for an overbearing boss or wear a burka, but most “masculine” and “feminine” behaviors fall into the category of things people do voluntarily, like “compete at sports” or “gossip with friends.” The fact that more men than women play baseball and more women than men enjoy gossiping with friends has nothing to do with an internal self attempting to perform gender roles and everything to do with the challenges ancestral humans faced in reproducing.

But whence this tendency toward ratiocination? I can criticize it as a physical mistake, but does it reflect an underlying psychological reality? Do some people really perceive themselves as a self separate from themselves, a meta-self watching the first self acting in particular manners?

Here is a study that found that folks with more cognitive flexibility tended to be more socially liberal, though economic conservatism/liberalism didn’t particularly correlate with cognitive flexibility.

I find that if I work hard, I may achieve a state of zen, an inner tranquility in which the endless narrative of thoughts coalesce for a moment and I can just be. Zen is flying down a straight road at 80 miles an hour on a motorcycle; zen is working on a math problem that consumes all of your attention; zen is dancing until you only feel the music. The opposite of zen is lying in bed at 3 AM, staring at the ceiling, thinking of all of your failures, unable to switch off your brain and fall asleep.

Dysphoria is a state of unease. Some people have gender dysphoria; a few report temporal dysphoria. It might be better defined as disconnection, a feeling of being eternally out of place. I feel a certain dysphoria every time I surface from reading some text of anthropology, walk outside, and see cars. What are these metal things? What are these straight, right-angled streets? Everything about modern society strikes me as so artificial and counter to nature that I find it deeply unsettling.

It is curious that dysphoria itself is not discussed more in the psychiatric literature. Certainly a specific form or two receives a great deal of attention, but not the general sense itself.

When things are in place, you feel tranquil and at ease; when things are out of place you are agitated, always aware of the sense of crawling out of your own skin. People will try any number of things to turn off the dysphoria; a schizophrenic friend reports that enough alcohol will make the voices stop, at least for a while. Drink until your brain shuts up.

But this is only when things are out of place. Healthy people seek a balance between division and unity. Division of the self is necessary for self-criticism and improvement; people can say, then, “I did a bad thing, but I am not a bad person, so I will change my behavior and be better.” Metacognition allows people to reflect on their behavior without feeling that their self is fundamentally at threat, but too much metacognition leads to fragmentation and an inability to act.

People ultimately seek a balanced, unified sense of self.

It is said that not everyone has an inner voice, a meta-self commenting on the acting self, and some have more than one:

My previous blogs have observed that some people –women with bulimia nervosa, for example– have frequent multiple simultaneous experiences, but that multiple experience is not frequent in the general population. …

Consider inner speech. Subjects experienced themselves as innerly talking to themselves in 26% of all samples, but there were large individual differences: some subjects never experienced inner speech; other subjects experienced inner speech in as many as 75% of their samples. The median percentage across subjects was 20%.

It’s hard to tell what people really experience, but certainly there is a great deal of variety in people’s internal experiences. Much of thought is not easily describable. Some people hear many voices. Some cannot form mental images:

I think the best way I can describe my aphantasia is to say that I am unaware of anything in my mind except these categories: i) direct sensory input, ii) unheard words that carry thoughts, iii) unheard music, iv) a kind of invisible imagery, which I can best describe as a sensation of pictures that are in a sense too faint to see, v) emotions, and vi) thoughts which seem too fast to exist as words. … I see what is around me, unless my eyes are closed, when all is always black. I hear, taste, smell and so forth, but I don’t have the experience people describe of hearing a tune or a voice in their heads. Curiously, I do frequently have a tune going around in my head; all I am lacking is the direct experience of hearing it.

The quoted author is, despite his lack of internal imagery, quite intelligent, with a PhD in physics.

Some cannot hear themselves think at all.

I would like to know if there is any correlation between metacognition, ratiocination, and political orientations–I have so far found a little on the subject:

We find a relationship between thinking style and political orientation and that these effects are particularly concentrated on social attitudes. We also find it harder to manipulate intuitive and reflective thinking than a number of prominent studies suggest. Priming manipulations used to induce reflection and intuition in published articles repeatedly fail in our studies. We conclude that conservatives—more specifically, social conservatives—tend to be dispositionally less reflective, social liberals tend to be dispositionally more reflective, and that the relationship between reflection and intuition and political attitudes may be more resistant to easy manipulation than existing research would suggest.

And a bit more:

… Berzonsky and Sullivan (1992) cite evidence that individuals higher in reported self-reflection also exhibit more openness to experience, more liberal values, and more general tolerance for exploration. As noted earlier, conservatives tend to be less open to experience, more intolerant of ambiguity, and generally more reliant on self-certainty than liberals. That, coupled with the evidence reported by Berzonsky and Sullivan, strongly suggests conservatives engage in less introspective behaviors.

Following an interesting experiment looking at people’s online dating profiles, the authors conclude:

Results from our data support the hypothesis that individuals identifying themselves as “Ultra Conservative” exhibit less introspection in a written passage with personal content than individuals identifying themselves as “Very Liberal”. Individuals who reported a conservative political orientation often provided more descriptive and explanatory statements in their profile’s “About me and who I’m looking for” section (e.g., “I am 62 years old and live part time in Montana” and “I enjoy hiking, fine restaurants”). In contrast, individuals who reported a liberal political orientation often provided more insightful and introspective statements in their narratives (e.g., “No regrets, that’s what I believe in” and “My philosophy in life is to make complicated things simple”).

The ratiocination of the scientist’s mind can ultimately be stopped by delving into that most blessed of substances, reality (or as close to it as we can get). There is, at base, a fundamentally real thing to delve into, a thing which makes ambiguities disappear. Even a moral dilemma can be resolved with good enough data. We do not need to wander endlessly within our own thoughts; the world is here.

End

 

Denny: the Neanderthal-Denisovan Hybrid

Neanderthal Sites (source: Wikipedia)

Homo sapiens–that is, us, modern humans–are about 200,000-300,000 years old. Our ancestor, Homo heidelbergensis, lived in Africa around 700,000-300,000 years ago.

Around 700,000 years ago, another group of humans split off from the main group. By 400,000 years ago, their descendants, Homo neanderthalensis–Neanderthals–had arrived in Europe, and another band of their descendants, the enigmatic Denisovans, arrived in Asia.

While we have found quite a few Neanderthal remains and archaeological sites with tools, hearths, and other artifacts, we’ve uncovered very few Denisovan remains–a couple of teeth, a finger bone, and part of an arm in Denisova Cave, Russia. (Perhaps a few other remains I am unaware of.)

Yet from these paltry remains scientists have extracted enough DNA to ascertain that not only were Denisovans a distinct species, but also that Melanesians, Papuans, and Aborigines derive about 3-6% of their DNA from Denisovan ancestors. (All non-African populations also have a small amount of Neanderthal DNA, derived from Neanderthal ancestors.)

If Neanderthals and Homo sapiens interbred, and Denisovans and Homo sapiens interbred, did Neanderthals and Denisovans ever mate?

The slightly more complicated family tree, not including Denny

Yes.

The girl, affectionately nicknamed Denny, lived and died about 90,000 years ago in Siberia. The remains of an arm, found in Denisova Cave, reveal that her mother was a Neanderthal, her father a Denisovan.

We don’t yet know what Denisovans looked like, because we don’t have any complete skeletons of them, much less good skulls to examine, so we don’t know what a Neanderthal-Denisovan hybrid like Denny looked like.

But the fact that we can extract so much information from a single bone–or fragment of bone–preserved in a Siberian cave for 90,000 years is amazing.

We are still far from truly understanding what sorts of people our evolutionary cousins were, but we are gaining new insights all the time.

Book Club: How to Create a Mind, pt 2/2

Ray Kurzweil, writer, inventor, thinker

Welcome back to EvX’s Book Club. Today we are finishing Ray Kurzweil’s How to Create a Mind: The Secret of Human Thought Revealed.

Spiders are interesting, but Kurzweil’s focus is computers, like Watson, which trounced the competition on Jeopardy!

I’ll let Wikipedia summarize Watson:

Watson was created as a question answering (QA) computing system that IBM built to apply advanced natural language processing, information retrieval, knowledge representation, automated reasoning, and machine learning technologies to the field of open domain question answering.[2]

The sources of information for Watson include encyclopedias, dictionaries, thesauri, newswire articles, and literary works. Watson also used databases, taxonomies, and ontologies. …

Watson parses questions into different keywords and sentence fragments in order to find statistically related phrases.[22] Watson’s main innovation was not in the creation of a new algorithm for this operation but rather its ability to quickly execute hundreds of proven language analysis algorithms simultaneously.[22][24] The more algorithms that find the same answer independently the more likely Watson is to be correct.[22] Once Watson has a small number of potential solutions, it is able to check against its database to ascertain whether the solution makes sense or not.[22]
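
To illustrate that design principle–many independent algorithms whose agreement raises confidence in an answer–here is a minimal sketch. This is my own hedged illustration of the general ensemble idea, not IBM's actual code or API; all function names and the confidence-combining rule are hypothetical:

```python
from collections import defaultdict

# Sketch of ensemble answer scoring in the spirit of Watson's design:
# each algorithm proposes (answer, confidence) pairs, and answers found
# independently by more algorithms, with higher confidence, score higher.
# The names and weighting rule here are hypothetical, for illustration only.

def combine(candidate_lists):
    scores = defaultdict(float)  # total confidence per answer
    votes = defaultdict(int)     # how many algorithms proposed it
    for candidates in candidate_lists:  # one list per algorithm
        for answer, confidence in candidates:
            scores[answer] += confidence
            votes[answer] += 1
    # Weight total confidence by independent agreement across algorithms.
    return max(scores, key=lambda a: scores[a] * votes[a])

algo_outputs = [
    [("Toronto", 0.4), ("Chicago", 0.3)],
    [("Chicago", 0.6)],
    [("Chicago", 0.5), ("Toronto", 0.2)],
]
print(combine(algo_outputs))  # "Chicago": more algorithms agree, with more total confidence
```
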

Kurzweil opines:

That is at least one reason why Watson represents such a significant milestone: Jeopardy! is precisely such a challenging language task. … What is perhaps not evident to many observers is that Watson not only had to master the language in the unexpected and convoluted queries, but for the most part its knowledge was not hand-coded. It obtained that knowledge by actually reading 200 million pages of natural-language documents, including all of Wikipedia… If Watson can understand and respond to questions based on 200 million pages–in three seconds!–there is nothing to stop similar systems from reading the other billions of documents on the Web. Indeed, that effort is now under way.

A point about the history of computing that may be petty of me to emphasize:

Babbage’s conception is quite miraculous when you consider the era in which he lived and worked. However, by the mid-twentieth century, his ideas had been lost in the mists of time (although they were subsequently rediscovered.) It was von Neumann who conceptualized and articulated the key principles of the computer as we know it today, and the world recognizes this by continuing to refer to the von Neumann machine as the principal model of computation. Keep in mind, though, that the von Neumann machine continually communicates data between its various units and within those units, so it could not be built without Shannon’s theorems and the methods he devised for transmitting and storing reliable digital information. …

You know what? No, it’s not petty.

Amazon lists 57 books about Ada Lovelace aimed at children, 14 about Alan Turing, and ZERO about John von Neumann.

(Some of these search results are, as always, irrelevant, but the counts are roughly correct.)

“EvX,” you may be saying, “Why are you counting children’s books?”

Because children are our future, and the books that get published for children show what society deems important for children to learn–and will have an effect on what adults eventually know.

I don’t want to demean Ada Lovelace’s role in the development of software, but surely von Neumann’s contributions to the field are worth a single book!

*Slides soapbox back under the table*

Anyway, back to Kurzweil, now discussing quantum mechanics:

There are two ways to view the questions we have been considering–converse Western and Eastern perspectives on the nature of consciousness and of reality. In the Western perspective, we start with a physical world that evolves patterns of information. After a few billion years of evolution, the entities in that world have evolved sufficiently to become conscious beings. In the Eastern view, consciousness is the fundamental reality; the physical world only comes into existence through the thoughts of conscious beings. …

The East-West divide on the issue of consciousness has also found expression in opposing schools of thought in the field of subatomic physics. In quantum mechanics, particles exist in what are called probability fields. Any measurement carried out on them by a measuring device causes what is called a collapse of the wave function, meaning that the particle suddenly assumes a particular location. A popular view is that such a measurement constitutes observation by a conscious observer… Thus the particle assumes a particular location … only when it is observed. Basically particles figure that if no one is bothering to look at them, they don’t need to decide where they are. I call this the Buddhist school of quantum mechanics …

Niels Bohr

Or as Niels Bohr put it, “A physicist is just an atom’s way of looking at itself.” He also claimed that we could describe electrons as exercising free will in choosing their positions, a statement I do not think he meant literally; “We must be clear that when it comes to atoms, language can be used only as in poetry,” as he put it.

Kurzweil explains the Western interpretation of quantum mechanics:

There is another interpretation of quantum mechanics… In this analysis, the field representing a particle is not a probability field, but rather just a function that has different values in different locations. The field, therefore, is fundamentally what the particle is. … The so-called collapse of the wave function, this view holds, is not a collapse at all. … It is just that a measurement device is also made up of particles with fields, and the interaction of the particle field being measured and the particle fields of the measuring device result in a reading of the particle being in a particular location. The field, however, is still present. This is the Western interpretation of quantum mechanics, although it is interesting to note that the more popular view among physicists worldwide is what I have called the Eastern interpretation.

Soviet atomic bomb, 1951

For example, Bohr put the yin-yang symbol on his coat of arms, along with the motto contraria sunt complementa, or “contraries are complementary.” Oppenheimer was such a fan of the Bhagavad Gita that he read it in Sanskrit and quoted it upon successful completion of the Trinity Test, “If the radiance of a thousand suns were to burst at once into the sky, that would be like the splendor of the mighty one,” and “Now I am become death, the destroyer of worlds.” He credited the Gita as one of the most important books in his life.

Why the appeal of Eastern philosophy? Is it something about physicists and mathematicians? Leibniz, after all, was fond of the I Ching. As Wikipedia says:

Leibniz was perhaps the first major European intellectual to take a close interest in Chinese civilization, which he knew by corresponding with, and reading other works by, European Christian missionaries posted in China. Having read Confucius Sinarum Philosophus on the first year of its publication,[153] he concluded that Europeans could learn much from the Confucian ethical tradition. He mulled over the possibility that the Chinese characters were an unwitting form of his universal characteristic. He noted with fascination how the I Ching hexagrams correspond to the binary numbers from 000000 to 111111, and concluded that this mapping was evidence of major Chinese accomplishments in the sort of philosophical mathematics he admired.[154] Leibniz communicated his ideas of the binary system representing Christianity to the Emperor of China hoping it would convert him.[84] Leibniz may be the only major Western philosopher who attempted to accommodate Confucian ideas to prevailing European beliefs.[155]

Leibniz’s attraction to Chinese philosophy originates from his perception that Chinese philosophy was similar to his own.[153] The historian E.R. Hughes suggests that Leibniz’s ideas of “simple substance” and “pre-established harmony” were directly influenced by Confucianism, pointing to the fact that they were conceived during the period that he was reading Confucius Sinarum Philosophus.[153]
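
The hexagram-binary correspondence Leibniz noticed is easy to demonstrate: read each of a hexagram’s six lines as a bit and you enumerate exactly the numbers 0 through 63. A small sketch (the encoding convention–broken = 0, solid = 1, bottom line as least significant bit–is my choice for illustration; conventions vary):

```python
# Map a hexagram's six lines to a binary number, as Leibniz observed.
# Encoding choice (mine, for illustration): broken line = 0, solid line = 1,
# reading the bottom line first as the least significant bit.

def hexagram_to_int(lines):
    """lines: six strings, either 'broken' or 'solid', bottom line first."""
    value = 0
    for position, line in enumerate(lines):
        if line == "solid":
            value |= 1 << position
    return value

# Hexagram 1 (Qian, six solid lines) maps to 0b111111 = 63;
# Hexagram 2 (Kun, six broken lines) maps to 0b000000 = 0.
print(hexagram_to_int(["solid"] * 6))   # 63
print(hexagram_to_int(["broken"] * 6))  # 0
```
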

Perhaps it is just that physicists and mathematicians are naturally curious people, and Eastern philosophy is novel to a Westerner, or perhaps by adopting Eastern ideas, they were able to purge their minds of earlier theories of how the universe works, creating a blank space in which to evaluate new data without being biased by old conceptions–or perhaps it is just something about the way their minds work.

As for quantum mechanics, I favor the de Broglie-Bohm interpretation, but obviously I am not a physicist and my opinion doesn’t count for much. What do you think?

But back to the book. If you are fond of philosophical ruminations on the nature of consciousness, like “If someone who could only see in black and white read extensively about the color ‘red,’ could they ever achieve the qualia of actually seeing the color red?” or “If a man were locked in a room with a perfect Chinese rulebook that told him which Chinese characters to write in response to any set of characters written on notes passed under the door, his responses would be in perfect Chinese, yet the man himself understands not a word of Chinese,” then you’ll enjoy the discussion. If you already covered all of this back in Philosophy 101, you might find it a bit redundant.

Kurzweil notes that conditions have improved massively over the past century for almost everyone on earth, but people are increasingly anxious:

A primary reason people believe life is getting worse is because our information about the problems of the world has steadily improved. If there is a battle today somewhere on the planet, we experience it almost as if we were there. During World War II, tens of thousands of people might perish in a battle, and if the public could see it at all it was in a grainy newsreel in a movie theater weeks later. During World War I a small elite could read about the progress of the conflict in the newspaper (without pictures). During the nineteenth century there was almost no access to news in a timely fashion for anyone.

As for the future of man, machines, and code, Kurzweil is even more optimistic than Auerswald:

The last invention that biological evolution needed to make–the neocortex–is inevitably leading to the last invention that humanity needs to make–truly intelligent machines–and the design of one is inspiring the other. … by the end of this century we will be able to create computation at the limits of what is possible, based on the laws of physics… We call matter and energy organized in this way “computronium”, which is vastly more powerful pound per pound than the human brain. It will not just be raw computation but will be infused with intelligent algorithms constituting all of human-machine knowledge. Over time we will convert much of the mass and energy in our tiny corner of the galaxy that is suitable for this purpose to computronium. … we will need to spread out to the rest of the galaxy and universe. …

How long will it take for us to spread our intelligence in its nonbiological form throughout the universe? … waking up the universe, and then intelligently deciding its fate by infusing it with our human intelligence in its nonbiological form, is our destiny.

Whew! That is quite the ending–and with that, we will end as well. I hope you enjoyed the book. What did you think of it? Will Humanity 2.0 be good? Bad? Totally different? Or does the Fermi Paradox imply that Kurzweil is wrong? Did you like this shorter Book Club format? And do you have any ideas for our next Book Club pick?