Book Club: How to Create a Mind, pt 2/2

Ray Kurzweil, writer, inventor, thinker

Welcome back to EvX’s Book Club. Today we are finishing Ray Kurzweil’s How to Create a Mind: The Secret of Human Thought Revealed.

Spiders are interesting, but Kurzweil’s focus is computers, like Watson, which trounced the competition on Jeopardy!

I’ll let Wikipedia summarize Watson:

Watson was created as a question answering (QA) computing system that IBM built to apply advanced natural language processing, information retrieval, knowledge representation, automated reasoning, and machine learning technologies to the field of open domain question answering.[2]

The sources of information for Watson include encyclopedias, dictionaries, thesauri, newswire articles, and literary works. Watson also used databases, taxonomies, and ontologies. …

Watson parses questions into different keywords and sentence fragments in order to find statistically related phrases.[22] Watson’s main innovation was not in the creation of a new algorithm for this operation but rather its ability to quickly execute hundreds of proven language analysis algorithms simultaneously.[22][24] The more algorithms that find the same answer independently the more likely Watson is to be correct.[22] Once Watson has a small number of potential solutions, it is able to check against its database to ascertain whether the solution makes sense or not.[22]
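That last idea–many independent analyzers proposing candidate answers, with confidence rising as more of them agree–can be sketched in a few lines. This is only a toy illustration of ensemble voting, not IBM’s actual DeepQA pipeline, and the analyzers below are made-up stand-ins:

```python
from collections import Counter

def ensemble_answer(question, analyzers):
    """Toy ensemble: each analyzer proposes a candidate answer (or None);
    the answer with the most independent votes wins."""
    votes = Counter()
    for analyzer in analyzers:
        candidate = analyzer(question)
        if candidate is not None:
            votes[candidate] += 1
    if not votes:
        return None, 0.0
    answer, count = votes.most_common(1)[0]
    return answer, count / len(analyzers)   # more independent agreement -> more confidence

# Hypothetical stand-ins for Watson's hundreds of language-analysis algorithms.
analyzers = [
    lambda q: "Toronto" if "Canada" in q else None,
    lambda q: "Toronto" if "largest city" in q else None,
    lambda q: "Ottawa" if "capital" in q else None,
]

print(ensemble_answer("What is the largest city in Canada?", analyzers))
# two of the three analyzers agree, so "Toronto" wins with confidence 2/3
```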

Kurzweil opines:

That is at least one reason why Watson represents such a significant milestone: Jeopardy! is precisely such a challenging language task. … What is perhaps not evident to many observers is that Watson not only had to master the language in the unexpected and convoluted queries, but for the most part its knowledge was not hand-coded. It obtained that knowledge by actually reading 200 million pages of natural-language documents, including all of Wikipedia… If Watson can understand and respond to questions based on 200 million pages–in three seconds!–there is nothing to stop similar systems from reading the other billions of documents on the Web. Indeed, that effort is now under way.

A point about the history of computing that may be petty of me to emphasize:

Babbage’s conception is quite miraculous when you consider the era in which he lived and worked. However, by the mid-twentieth century, his ideas had been lost in the mists of time (although they were subsequently rediscovered.) It was von Neumann who conceptualized and articulated the key principles of the computer as we know it today, and the world recognizes this by continuing to refer to the von Neumann machine as the principal model of computation. Keep in mind, though, that the von Neumann machine continually communicates data between its various units and within those units, so it could not be built without Shannon’s theorems and the methods he devised for transmitting and storing reliable digital information. …

You know what? No, it’s not petty.

Amazon lists 57 books about Ada Lovelace aimed at children, 14 about Alan Turing, and ZERO about John von Neumann.

(A few of these search results are inevitably irrelevant, but the counts are roughly correct.)

“EvX,” you may be saying, “Why are you counting children’s books?”

Because children are our future, and the books that get published for children show what society deems important for children to learn–and will have an effect on what adults eventually know.

I don’t want to demean Ada Lovelace’s role in the development of software, but surely von Neumann’s contributions to the field are worth a single book!

*Slides soapbox back under the table*

Anyway, back to Kurzweil, now discussing quantum mechanics:

There are two ways to view the questions we have been considering–the contrasting Western and Eastern perspectives on the nature of consciousness and of reality. In the Western perspective, we start with a physical world that evolves patterns of information. After a few billion years of evolution, the entities in that world have evolved sufficiently to become conscious beings. In the Eastern view, consciousness is the fundamental reality; the physical world only comes into existence through the thoughts of conscious beings. …

The East-West divide on the issue of consciousness has also found expression in opposing schools of thought in the field of subatomic physics. In quantum mechanics, particles exist in what are called probability fields. Any measurement carried out on them by a measuring device causes what is called a collapse of the wave function, meaning that the particle suddenly assumes a particular location. A popular view is that such a measurement constitutes observation by a conscious observer… Thus the particle assumes a particular location … only when it is observed. Basically particles figure that if no one is bothering to look at them, they don’t need to decide where they are. I call this the Buddhist school of quantum mechanics …

Niels Bohr

Or as Niels Bohr put it, “A physicist is just an atom’s way of looking at itself.” He also claimed that we could describe electrons as exercising free will in choosing their positions, a statement I do not think he meant literally; “We must be clear that when it comes to atoms, language can be used only as in poetry,” as he put it.

Kurzweil explains the Western interpretation of quantum mechanics:

There is another interpretation of quantum mechanics… In this analysis, the field representing a particle is not a probability field, but rather just a function that has different values in different locations. The field, therefore, is fundamentally what the particle is. … The so-called collapse of the wave function, this view holds, is not a collapse at all. … It is just that a measurement device is also made up of particles with fields, and the interaction of the particle field being measured and the particle fields of the measuring device results in a reading of the particle being in a particular location. The field, however, is still present. This is the Western interpretation of quantum mechanics, although it is interesting to note that the more popular view among physicists worldwide is what I have called the Eastern interpretation.
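In standard textbook notation (not the book’s), the “probability field” and its “collapse” can be written as a superposition, with the Born rule giving the probabilities:

```latex
% Standard textbook notation, not Kurzweil's: the particle's state is a weighted
% superposition over possible locations; measurement yields x_i with probability |c_i|^2.
\[
  \lvert \psi \rangle \;=\; \sum_i c_i \,\lvert x_i \rangle,
  \qquad \sum_i \lvert c_i \rvert^{2} = 1,
\]
\[
  P(\text{found at } x_i) \;=\; \lvert c_i \rvert^{2},
  \qquad
  \lvert \psi \rangle \;\xrightarrow{\text{measurement}}\; \lvert x_i \rangle .
\]
```

On the “Eastern” reading Kurzweil describes, that arrow is pulled by a conscious observer; on the “Western” reading just quoted, nothing genuinely collapses–the measuring device’s own particle fields simply interact with the particle’s field.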

Soviet atomic bomb, 1951

For example, Bohr had the yin-yang symbol on his coat of arms, along with the motto contraria sunt complementa, or contraries are complementary. Oppenheimer was such a fan of the Bhagavad Gita that he read it in Sanskrit and quoted it upon successful completion of the Trinity Test, “If the radiance of a thousand suns were to burst at once into the sky, that would be like the splendor of the mighty one,” and “Now I am become death, the destroyer of worlds.” He credited the Gita as one of the most important books in his life.

Why the appeal of Eastern philosophy? Is it something about physicists and mathematicians? Leibniz, after all, was fond of the I Ching. As Wikipedia says:

Leibniz was perhaps the first major European intellectual to take a close interest in Chinese civilization, which he knew by corresponding with, and reading other works by, European Christian missionaries posted in China. Having read Confucius Sinarum Philosophus in the first year of its publication,[153] he concluded that Europeans could learn much from the Confucian ethical tradition. He mulled over the possibility that the Chinese characters were an unwitting form of his universal characteristic. He noted with fascination how the I Ching hexagrams correspond to the binary numbers from 000000 to 111111, and concluded that this mapping was evidence of major Chinese accomplishments in the sort of philosophical mathematics he admired.[154] Leibniz communicated his ideas of the binary system representing Christianity to the Emperor of China hoping it would convert him.[84] Leibniz may be the only major Western philosopher who attempted to accommodate Confucian ideas to prevailing European beliefs.[155]

Leibniz’s attraction to Chinese philosophy originates from his perception that Chinese philosophy was similar to his own.[153] The historian E.R. Hughes suggests that Leibniz’s ideas of “simple substance” and “pre-established harmony” were directly influenced by Confucianism, pointing to the fact that they were conceived during the period that he was reading Confucius Sinarum Philosophus.[153]

Perhaps it is just that physicists and mathematicians are naturally curious people, and Eastern philosophy is novel to a Westerner, or perhaps by adopting Eastern ideas, they were able to purge their minds of earlier theories of how the universe works, creating a blank space in which to evaluate new data without being biased by old conceptions–or perhaps it is just something about the way their minds work.

As for quantum mechanics, I favor the de Broglie-Bohm interpretation, but obviously I am not a physicist and my opinion doesn’t count for much. What do you think?

But back to the book. If you are fond of philosophical ruminations on the nature of consciousness, like “What if someone who could only see in black and white read extensively about the color “red,” could they ever achieve the qualia of actually seeing the color red?” or “What if a man were locked in a room with a perfect Chinese rulebook that told him which Chinese characters to write in response to any set of characters written on notes passed under the door? The responses would be in perfect Chinese, but the man himself understands not a word of Chinese,” then you’ll enjoy the discussion. If you already covered all of this back in Philosophy 101, you might find it a bit redundant.

Kurzweil notes that conditions have improved massively over the past century for almost everyone on earth, but people are increasingly anxious:

A primary reason people believe life is getting worse is because our information about the problems of the world has steadily improved. If there is a battle today somewhere on the planet, we experience it almost as if we were there. During World War II, tens of thousands of people might perish in a battle, and if the public could see it at all it was in a grainy newsreel in a movie theater weeks later. During World War I a small elite could read about the progress of the conflict in the newspaper (without pictures). During the nineteenth century there was almost no access to news in a timely fashion for anyone.

As for the future of man, machines, and code, Kurzweil is even more optimistic than Auerswald:

The last invention that biological evolution needed to make–the neocortex–is inevitably leading to the last invention that humanity needs to make–truly intelligent machines–and the design of one is inspiring the other. … by the end of this century we will be able to create computation at the limits of what is possible, based on the laws of physics… We call matter and energy organized in this way “computronium,” which is vastly more powerful pound per pound than the human brain. It will not just be raw computation but will be infused with intelligent algorithms constituting all of human-machine knowledge. Over time we will convert much of the mass and energy in our tiny corner of the galaxy that is suitable for this purpose to computronium. … we will need to spread out to the rest of the galaxy and universe. …

How long will it take for us to spread our intelligence in its nonbiological form throughout the universe? … waking up the universe, and then intelligently deciding its fate by infusing it with our human intelligence in its nonbiological form, is our destiny.

Whew! That is quite the ending–and with that, we’ll end too. I hope you enjoyed the book. What did you think of it? Will Humanity 2.0 be good? Bad? Totally different? Or does the Fermi Paradox imply that Kurzweil is wrong? Did you like this shorter Book Club format? And do you have any ideas for our next Book Club pick?


Book Club: How to Create a Mind by Ray Kurzweil pt 1/2

Welcome to our discussion of Ray Kurzweil’s How to Create a Mind: The Secret of Human Thought Revealed. This book was requested by one of my fine readers; I hope you have enjoyed it.

If you aren’t familiar with Ray Kurzweil (you must be new to the internet), he is a computer scientist, inventor, and futurist whose work focuses primarily on artificial intelligence and phrases like “technological singularity.”

Wikipedia really likes him.

The book is part neuroscience, part explanations of how various AI programs work. Kurzweil uses models of how the brain works to enhance his pattern-recognition programs, and evidence from what works in AI programs to build support for theories on how the brain works.

The book delves into questions like “What is consciousness?” and “Could we recognize a sentient machine if we met one?” along with a brief history of computing and AI research.

My core thesis, which I call the Law of Accelerating Returns (LOAR), is that fundamental measures of information technology follow predictable and exponential trajectories…

I found this an interesting sequel to Auerswald’s The Code Economy and counterpart to Gazzaniga’s Who’s In Charge? Free Will and the Science of the Brain, which I listened to in audiobook form and therefore cannot quote very easily. Nevertheless, it’s a good book and I recommend it if you want more on brains.

The quintessential example of the law of accelerating returns is the perfectly smooth, doubly exponential growth of the price/performance of computation, which has held steady for 110 years through two world wars, the Great Depression, the Cold War, the collapse of the Soviet Union, the reemergence of China, the recent financial crisis, … Some people refer to this phenomenon as “Moore’s law,” but… [this] is just one paradigm among many.
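“Doubly exponential” just means the exponent itself is growing. In symbols (my notation and placeholder constants, not Kurzweil’s fitted curves):

```latex
% Placeholder constants (a, b, c, tau > 0); illustrative forms only.
\[
  \text{ordinary exponential (Moore's-law-style):}\qquad C(t) \approx a\,2^{t/\tau}
\]
\[
  \text{doubly exponential (the exponent itself grows):}\qquad C(t) \approx a\,e^{\,b\,e^{\,c\,t}}
\]
```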

From Ray Kurzweil

Auerswald claims that the advance of “code” (that is, technologies like writing that allow us to encode information) has, for the past 40,000 years or so, supplemented and enhanced human abilities, making our lives better. Auerswald is not afraid of increasing mechanization and robotification of the economy putting people out of jobs because he believes that computers and humans are good at fundamentally different things. Computers, in fact, were invented to do things we are bad at, like decode encryption, not stuff we’re good at, like eating.

The advent of computers, in his view, lets us concentrate on the things we’re good at, while off-loading the stuff we’re bad at to the machines.

Kurzweil’s view is different. While he agrees that computers were originally invented to do things we’re bad at, he also thinks that the computers of the future will be very different from those of the past, because they will be designed to think like humans.

A computer that can think like a human can compete with a human–and since it isn’t limited in its processing power by pelvic widths, it may well out-compete us.

But Kurzweil does not seem worried:

Ultimately we will create an artificial neocortex that has the full range and flexibility of its human counterpart. …

When we augment our own neocortex with a synthetic version, we won’t have to worry about how much additional neocortex can physically fit into our bodies and brains, as most of it will be in the cloud, like most of the computing we use today. I estimated earlier that we have on the order of 300 million pattern recognizers in our biological neocortex. That’s as much as could be squeezed into our skulls even with the evolutionary innovation of a large forehead and with the neocortex taking about 80 percent of the available space. As soon as we start thinking in the cloud, there will be no natural limits–we will be able to use billions or trillions of pattern recognizers, basically whatever we need, and whatever the law of accelerating returns can provide at each point in time. …

Last but not least, we will be able to back up the digital portion of our intelligence. …

That is kind of what I already do with this blog. The downside is that sometimes you people see my incomplete or incorrect thoughts.

On the squishy side, Kurzweil writes of the biological brain:

The story of human intelligence starts with a universe that is capable of encoding information. This was the enabling factor that allowed evolution to take place. …

The story of evolution unfolds with increasing levels of abstraction. Atoms–especially carbon atoms, which can create rich information structures by linking in four different directions–formed increasingly complex molecules. …

A billion years later, a complex molecule called DNA evolved, which could precisely encode lengthy strings of information and generate organisms described by these “programs”. …

The mammalian brain has a distinct aptitude not found in any other class of animal. We are capable of hierarchical thinking, of understanding a structure composed of diverse elements arranged in a pattern, representing that arrangement with a symbol, and then using that symbol as an element in a yet more elaborate configuration. …

I really want to know if squids or octopuses can engage in symbolic thought.

Through an unending recursive process we are capable of building ideas that are ever more complex. … Only Homo sapiens have a knowledge base that itself evolves, grows exponentially, and is passed down from one generation to another.

Kurzweil proposes an experiment to demonstrate something of how our brains encode memories: say the alphabet backwards.

If you’re among the few people who’ve memorized it backwards, try singing “Twinkle Twinkle Little Star” backwards.

It’s much more difficult than doing it forwards.

This suggests that our memories are sequential and in order. They can be accessed in the order they are remembered. We are unable to reverse the sequence of a memory.

Funny how that works.
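One way to see why forward recall is cheap and backward recall is laborious: if memories are stored as a one-way chain, reversing them means replaying the chain from the start over and over. The sketch below is a deliberately crude stand-in for Kurzweil’s sequential patterns and assumes nothing about the actual neural encoding:

```python
class Memory:
    """A one-way chain: each item only 'knows' what comes next, not what came before."""
    def __init__(self, items):
        self.items = list(items)

    def recall_forward(self):
        # Cheap: just follow the chain in stored order.
        return list(self.items)

    def recall_backward(self):
        # No backward links: the only way to find the previous item
        # is to replay the whole sequence from the beginning each time.
        result = []
        for stop in range(len(self.items) - 1, -1, -1):
            current = None
            for item in self.items[:stop + 1]:   # replay from the start
                current = item
            result.append(current)
        return result

alphabet = Memory("ABCDEFGHIJKLMNOPQRSTUVWXYZ")
print("".join(alphabet.recall_forward()))    # trivial
print("".join(alphabet.recall_backward()))   # laborious: roughly n^2/2 replays
```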

On the neocortex itself:

A critically important observation about the neocortex is the extraordinary uniformity of its fundamental structure. … In 1957 Mountcastle discovered the columnar organization of the neocortex. … [In 1978] he described the remarkably unvarying organization of the neocortex, hypothesizing that it was composed of a single mechanism that was repeated over and over again, and proposing the cortical column as the basic unit. The differences in the height of certain layers in different regions noted above are simply differences in the amount of interconnectivity that the regions are responsible for dealing with. …

extensive experimentation has revealed that there are in fact repeating units within each column. It is my contention that the basic unit is a pattern recognizer and that this constitutes the fundamental component of the neocortex.
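Kurzweil’s repeating unit can be caricatured in a few lines of code: each recognizer fires when enough of its inputs are active, and higher-level recognizers take lower-level ones as inputs. Everything below (the class, the thresholds, the stroke/letter/word hierarchy) is my own toy construction; the book’s actual proposal uses hierarchical hidden Markov models rather than a bare threshold.

```python
class PatternRecognizer:
    """Toy repeating unit: fires when enough of its inputs are active."""
    def __init__(self, name, inputs, threshold):
        self.name = name
        self.inputs = inputs        # raw feature names (str) or child recognizers
        self.threshold = threshold  # fraction of inputs that must be active

    def fires(self, active_features):
        hits = 0
        for inp in self.inputs:
            if isinstance(inp, PatternRecognizer):
                hits += inp.fires(active_features)   # recurse down the hierarchy
            else:
                hits += inp in active_features       # leaf: is the raw feature present?
        return hits / len(self.inputs) >= self.threshold

# Hypothetical hierarchy: strokes -> letters -> a word.
letter_A = PatternRecognizer("A", ["/", "\\", "-"], threshold=0.66)
letter_T = PatternRecognizer("T", ["-", "|"], threshold=1.0)
word_AT = PatternRecognizer("AT", [letter_A, letter_T], threshold=1.0)

print(word_AT.fires({"/", "\\", "-", "|"}))  # True: both letters recognized
print(word_AT.fires({"/", "\\", "-"}))       # False: the vertical stroke of 'T' is missing
```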

As I read, Kurzweil’s hierarchical models reminded me of Chomsky’s theories of language–Ray and Noam are both associated with MIT and have probably conversed many times. Kurzweil does get around to discussing Chomsky’s theories and their relationship to his work:

Language is itself highly hierarchical and evolved to take advantage of the hierarchical nature of the neocortex, which in turn reflects the hierarchical nature of reality. The innate ability of humans to learn the hierarchical structures in language that Noam Chomsky wrote about reflects the structure of the neocortex. In a 2002 paper he co-authored, Chomsky cites the attribute of “recursion” as accounting for the unique language faculty of the human species. Recursion, according to Chomsky, is the ability to put together small parts into a larger chunk, and then use that chunk as a part in yet another structure, and to continue this process iteratively. In this way we are able to build the elaborate structure of sentences and paragraphs from a limited set of words. Although Chomsky was not explicitly referring here to brain structure, the capability he is describing is exactly what the neocortex does. …

This sounds good to me, but I am under the impression that Chomsky’s linguistic theories are now considered outdated. Perhaps that is only his theory of universal grammar, though. Any linguistics experts care to weigh in?

According to Wikipedia:

Within the field of linguistics, McGilvray credits Chomsky with inaugurating the “cognitive revolution“.[175] McGilvray also credits him with establishing the field as a formal, natural science,[176] moving it away from the procedural form of structural linguistics that was dominant during the mid-20th century.[177] As such, some have called him “the father of modern linguistics”.[178][179][180][181]

The basis to Chomsky’s linguistic theory is rooted in biolinguistics, holding that the principles underlying the structure of language are biologically determined in the human mind and hence genetically transmitted.[182] He therefore argues that all humans share the same underlying linguistic structure, irrespective of sociocultural differences.[183] In adopting this position, Chomsky rejects the radical behaviorist psychology of B. F. Skinner which views the mind as a tabula rasa (“blank slate”) and thus treats language as learned behavior.[184] Accordingly, he argues that language is a unique evolutionary development of the human species and is unlike modes of communication used by any other animal species.[185][186] Chomsky’s nativist, internalist view of language is consistent with the philosophical school of “rationalism“, and is contrasted with the anti-nativist, externalist view of language, which is consistent with the philosophical school of “empiricism“.[187][174]
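The “recursion” Kurzweil borrows from Chomsky–using a chunk as a part of a yet larger chunk–is easy to show in code. The toy grammar below is mine, not either author’s:

```python
def noun_phrase(depth):
    """A phrase that contains a smaller phrase of the same kind (Chomsky-style recursion)."""
    if depth == 0:
        return "the mouse"
    return "the cat that chased " + noun_phrase(depth - 1)

print(noun_phrase(1))  # the cat that chased the mouse
print(noun_phrase(3))  # the cat that chased the cat that chased the cat that chased the mouse
```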

Anyway, back to Kurzweil, who has an interesting bit about love:

Science has recently gotten into the act as well, and we are now able to identify the biochemical changes that occur when someone falls in love. Dopamine is released, producing feelings of happiness and delight. Norepinephrine levels soar, which leads to a racing heart and overall feelings of exhilaration. These chemicals, along with phenylethylamine, produce elevation, high energy levels, focused attention, loss of appetite, and a general craving for the object of one’s desire. … serotonin levels go down, similar to what happens in obsessive-compulsive disorder….

If these biochemical phenomena sound similar to those of the fight-or-flight syndrome, they are, except that we are running toward something or someone; indeed, a cynic might say toward rather than away from danger. The changes are also fully consistent with those of the early phase of addictive behavior. … Studies of ecstatic religious experiences also show the same physical phenomena; it can be said that the person having such an experience is falling in love with God or whatever spiritual connection on which they are focused. …

Religious readers care to weigh in?

Consider two related species of voles: the prairie vole and the montane vole. They are pretty much identical, except that the prairie vole has receptors for oxytocin and vasopressin, whereas the montane vole does not. The prairie vole is noted for lifetime monogamous relationships, while the montane vole resorts almost exclusively to one-night stands.

Learning by species:

A mother rat will build a nest for her young even if she has never seen another rat in her lifetime. Similarly, a spider will spin a web, a caterpillar will create her own cocoon, and a beaver will build a dam, even if no contemporary ever showed them how to accomplish these complex tasks. That is not to say that these are not learned behaviors. It is just that the animals did not learn them in a single lifetime… The evolution of animal behavior does constitute a learning process, but it is learning by the species, not by the individual, and the fruits of this learning process are encoded in DNA.

I think that’s enough for today; what did you think? Did you enjoy the book? Is Kurzweil on the right track with his pattern recognizers? Are non-biological neocortexes on the horizon? Will we soon convert the solar system to computronium?

Let’s continue this discussion next Monday–so if you haven’t read the book yet, you still have a whole week to finish.

 

Book Club: The Code Economy: The DNA of Business

“DNA builds products with a purpose. So do people.” –Auerswald, The Code Economy

McDonald’s is the world’s largest restaurant chain by revenue[7], serving over 69 million customers daily in over 100 countries[8] across approximately 36,900 outlets as of 2016.[9] … According to a BBC report published in 2012, McDonald’s is the world’s second-largest private employer (behind Walmart) with 1.9 million employees, 1.5 million of whom work for franchises. …

There are currently a total of 5,669 company-owned locations and 31,230 franchised locations… Notably, McDonald’s has increased shareholder dividends for 25 consecutive years,[18] making it one of the S&P 500 Dividend Aristocrats.[19][20]

According to Fast Food Nation by Eric Schlosser (2001), nearly one in eight workers in the U.S. have at some time been employed by McDonald’s. … Fast Food Nation also states that McDonald’s is the largest private operator of playgrounds in the U.S., as well as the single largest purchaser of beef, pork, potatoes, and apples.  (Wikipedia)

How did a restaurant whose only decent products are french fries and milkshakes come to dominate the global corporate landscape?

IKEA is not only the world’s largest furniture store, but also among the globe’s top 10 retailers of anything and the 25th most beloved corporation. (Disney ranks number one.) Even I feel a strange, heartwarming emotion at the thought of IKEA, which somehow comes across as a sweet and kind multi-national behemoth.

In The Code Economy, Auerswald suggests that the secret to McDonald’s success isn’t (just) the french fries and milkshake machines:

Kroc opened his first McDonald’s restaurant in 1955 in Des Plaines, Illinois. Within five years he had opened two hundred new franchises across the country. [!!!] He pushed his operators obsessively to adhere to a system that reinforced the company motto: “Quality, service, cleanliness, and value.”

h/t @simongerman600

Quoting Kroc’s 1987 autobiography,

“It’s all interrelated–our development of the restaurant, the training, the marketing advice, the product development, the research that has gone into each element of the equipment package. Together with our national advertising and continuing supervisory assistance, it forms an invaluable support system. Individual operators pay 11.5 percent of their gross to the corporation for all of this…”

The process of operating a McDonald’s franchise was engineered to be as cognitively undemanding as possible. …

Kroc created a program that could be broken into subroutines…. Acting like the DNA of the organization, the manual allowed the Speedee Service System to function in a variety of environments without losing essential structure or function.

McDonald’s is big because it figured out how to reproduce.

source: Statista

I’m not sure why IKEA is so big (I don’t think it’s a franchise like McDonald’s), but based on the information posted on their walls, it’s because of their approach to furniture design. First, think of a problem, e.g., People Need Tables. Second, determine a price–IKEA makes some very cheap items and some pricier items, to suit different customers’ needs. Third, use Standard IKEA Wooden Pieces to design a nice-looking table. Fourth, draw the assembly instructions, so that anyone, anywhere, can assemble the furniture themselves–no translation needed.

IKEA furniture is kind of like Legos, in that much of it is made of very similar pieces of wood assembled in different ways. The wooden boards in my table aren’t that different in size and shape from the ones in my dresser nor the ones in my bookshelf, though the items themselves have pretty different dimensions. So on the production side, IKEA lowers costs by producing not actual furniture, but collections of boards. Boards are easy to make–sawmills produce tons of them.

Furniture is heavy, but mostly empty space. By contrast, piles of boards stack very neatly and compactly, saving space both in shipping and when buyers are loading the boxes into their cars. (I am certain that IKEA accounts for common car dimensions in designing and packing their furniture.)

And the assembly instructions allow the buyer to ultimately construct the furniture.

In other words, IKEA has hit upon a successful code that allows them to produce many different designs from a few basic boards and ship them efficiently–keeping costs low and allowing them to thrive.
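The “few standard boards, many designs” idea can be made concrete with a toy bill-of-materials model. The catalog, part names, and quantities below are entirely invented for illustration:

```python
# Invented catalog: a few standard board types, identified by (length_cm, width_cm).
catalog = {
    "long_plank":  (180, 40),
    "short_plank": (80, 40),
    "side_panel":  (80, 30),
}

# Different products are just different bills of materials over the same boards.
products = {
    "bookshelf": {"long_plank": 2, "short_plank": 5},
    "table":     {"long_plank": 1, "side_panel": 4},
    "dresser":   {"short_plank": 6, "side_panel": 2},
}

def boards_to_stock(products):
    """The factory only has to mass-produce the shared parts, not whole furniture."""
    totals = {}
    for bill_of_materials in products.values():
        for board, qty in bill_of_materials.items():
            totals[board] = totals.get(board, 0) + qty
    return totals

print(boards_to_stock(products))
# {'long_plank': 3, 'short_plank': 11, 'side_panel': 6}
```

The factory’s job reduces to stocking a handful of board types; the “design” of each product lives entirely in its parts list and its assembly instructions.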

From Anatomy of an IKEA product:

The company is also looking for ways to maximize warehouse efficiency.

“We have (only) two pallet sizes,” Marston said, referring to the wooden platforms on which goods are placed. “Our warehouses are dimensioned and designed to hold these two pallet sizes. It’s all about efficiencies because that helps keep the price of innovation down.”

In Europe, some IKEA warehouses utilize robots to “pick the goods,” a term of art for grabbing products off very high shelves.

These factories, Marston said, are dark, since no lighting is needed for the robots, and run 24 hours a day, picking and moving goods around.

“You (can) stand on a catwalk,” she said, “and you look out at this huge warehouse with 12 pallets (stacked on top of each other) and this robot’s running back and forth running on electronic eyebeams.”

IKEA’s code and McDonald’s code are very different, but both let the companies produce the core items they sell quickly, cheaply, and efficiently.

Chapter 8 of The Code Economy, “Evolution,” discusses the rise of Toll House cookies, McDonald’s, the difference between natural and artificial objects, and the development of evolutionary theory from Darwin through Watson and Crick and through to Kauffman and Levin’s 1987 paper, “Towards a General Theory of Adaptive Walks on Rugged Landscapes.” (With a brief stop at Erwin Schrödinger along the way.)

The difficulty with evolution is that systems are complicated; successful mutations or even just combinations of existing genes must work synergistically with all of the other genes and systems already operating in the body. A mutation that increases IQ by tweaking neurons in a particular way might have the side effect of causing neurons outside the brain to malfunction horribly; a mutation that protects against sickle-cell anemia when you have one copy of it might just kill you itself if you have two copies.

Auerswald quotes Kauffman and Levin:

“Natural selection does not work as an engineer works… It works like a tinkerer–a tinkerer who does not know exactly what he is going to produce but uses… everything at his disposal to produce some kind of workable object.” This process is progressive, moving from simpler to more complex forms: “Evolution does not produce novelties from scratch. It works on what already exists, either transforming a system to give it new functions or combining several systems to produce a more elaborate one [as] during the passage from unicellular to multicellular forms.”

Further:

The Kauffman and Levin model was as simple as it was powerful. Imagine a genetic code of length N, where each gene might occupy one of two possible “states”–for example, “0” and “1” in a binary computer. The difficulty of the evolutionary problem was tunable with the parameter K, which represented the average number of interactions among genes. The NK model, as it came to be called, was able to reproduce a number of measurable features of evolution in biological systems. Evolution could be represented as a genetic walk on a fitness landscape, in which increasing complexity was now a central parameter.
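Here is a minimal sketch of the model as described above: N binary genes, each gene’s fitness contribution depending on its own state plus K randomly chosen others, and a greedy one-mutation-at-a-time walk uphill. The function names and the lazily generated interaction tables are my own simplification, not Kauffman and Levin’s exact formulation:

```python
import random

def make_nk_landscape(N, K, seed=0):
    """Each gene's contribution depends on its own state plus K randomly chosen others."""
    rng = random.Random(seed)
    neighbors = [rng.sample([j for j in range(N) if j != i], K) for i in range(N)]
    tables = [{} for _ in range(N)]   # lazily filled tables of random contributions

    def fitness(genome):
        total = 0.0
        for i in range(N):
            key = (genome[i],) + tuple(genome[j] for j in neighbors[i])
            if key not in tables[i]:
                tables[i][key] = rng.random()   # same gene configuration -> same value
            total += tables[i][key]
        return total / N

    return fitness

def adaptive_walk(fitness, N, steps=2000, seed=1):
    """Greedy evolution: flip one gene at a time, keep the change only if fitness improves."""
    rng = random.Random(seed)
    genome = [rng.randint(0, 1) for _ in range(N)]
    best = fitness(genome)
    for _ in range(steps):
        i = rng.randrange(N)
        genome[i] ^= 1                 # try a single mutation
        new = fitness(genome)
        if new > best:
            best = new
        else:
            genome[i] ^= 1             # revert: the mutation did not help
    return genome, best

fitness = make_nk_landscape(N=12, K=3)
print(adaptive_walk(fitness, N=12))    # a local optimum on a rugged landscape
```

Raising K makes the genes’ contributions interfere more, the landscape more rugged, and the walk more likely to stall on a mediocre peak–which is exactly the local-optima problem discussed next.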

You may remember my previous post on Local Optima, Diversity, and Patchwork:

Local optima–or optimums, if you prefer–are an illusion created by distance. A man standing on the hilltop at (approximately) X=2 may see land sloping downward all around himself and think that he is at the highest point on the graph. But hand him a telescope, and he discovers that the fellow standing on the hilltop at X=4 is even higher than he is. And hand the fellow at X=4 a telescope, and he’ll discover that X=6 is even higher.

A global optimum is the best possible way of doing something; a local optimum can look like a global optimum because all of the other, similar ways of doing the same thing are worse.

Some notable examples of cultures that were stuck at local optima but were able, with exposure, to jump suddenly to a higher optimum: The “opening of Japan” in the late 1800s resulted in breakneck industrialization and rising standards of living; the Cherokee invented their own alphabet (technically a syllabary) after glimpsing the Roman one, and achieved mass literacy within decades; European mathematics and engineering really took off after the introduction of Hindu-Arabic numerals and the base-ten system.

If we consider each culture its own “landscape” in which people (and corporations) are finding locally optimal solutions to problems, then it becomes immediately obvious that we need both a large number of distinct cultures working out their own solutions to problems and occasional communication and feedback between those cultures so results can transfer. If there is only one, global, culture, then we only get one set of solutions–and they will probably be sub-optimal. If we have many cultures but they don’t interact, we’ll get tons of solutions, and many of them will be sub-optimal. But many cultures developing their own solutions and periodically interacting can develop many solutions and discard sub-optimal ones for better ones.
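That argument can be sketched as a toy simulation: several independent “cultures” hill-climb on the same rugged landscape, and in one variant they periodically hear about the best solution found anywhere. Everything here–the landscape, the group size, the contact schedule–is invented for illustration, and which group wins on a given run depends on the random seed; the point is the mechanism, not the numbers.

```python
import math
import random

def rugged(x):
    """A deliberately bumpy one-dimensional fitness function (invented for illustration)."""
    return math.sin(3 * x) + 0.6 * math.sin(7 * x) + 0.1 * x

def hill_climb(x, rng, steps=200, step_size=0.05):
    """A lone 'culture': small local tweaks, keeping only what helps."""
    for _ in range(steps):
        candidate = x + rng.uniform(-step_size, step_size)
        if rugged(candidate) > rugged(x):
            x = candidate
    return x

rng = random.Random(42)

# Isolated cultures: each climbs its own hill and stays on whatever peak it finds.
isolated = [hill_climb(rng.uniform(0, 10), rng) for _ in range(5)]

# Communicating cultures: between rounds of climbing, everyone hears about the best
# solution found anywhere; the discoverer keeps it, the others explore near it.
communicating = [rng.uniform(0, 10) for _ in range(5)]
for _ in range(4):
    communicating = [hill_climb(x, rng, steps=50) for x in communicating]
    best = max(communicating, key=rugged)
    communicating = [best] + [best + rng.uniform(-0.5, 0.5) for _ in range(4)]

print("best isolated value:     ", round(max(rugged(x) for x in isolated), 3))
print("best communicating value:", round(max(rugged(x) for x in communicating), 3))
```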

On a related note, Gore Burnelli writes: How Nassim Taleb changed my mind about religion:

Life constantly makes us take decisions under conditions of uncertainty. We can’t simply compute every possible outcome, and decide with perfect accuracy what the path forward is. We have to use heuristics. Religion is seen as a record of heuristics that have worked in the past. …

But while every generation faces new circumstances, there are also some common problems that every living being is faced with: survival and reproduction, and these are the most important problems because everything else depends on them. Mess with these, and everything else becomes irrelevant.

This makes religion an evolutionary record of solutions which persisted long enough, by helping those who held them to persist.

This is not saying “All religions are perfect and good and we should follow them,” but it is suggesting, “Traditional religions (and cultures) have figured out ways to solve common problems and we should listen to their ideas.”

From Ray Kurzweil

Back in The Code Economy, Auerswald asks:

Might the same model, derived from evolutionary biology, explain the evolution of technology?

… technology may also be nothing else but the capacity for invariant reproduction. However, in order for more complex forms of technology to be viable over time, technology also must possess a capacity for learning and adaptation.

Evolutionary theory as applied to the advance of code is the focus of the next chapter. Kauffman and Levin’s NK model ends up providing a framework for studying the creation and evolution of code. Learning curves act as the link between biology and economics.

Will the machines become sentient? Or McDonald’s? And which should we worry about?