The World is Written in Beautiful Maths

Eight Suns

This is a timelapse multiple exposure photo of an arctic day, apparently titled “Six Suns” (even though there are 8 in the picture?). Credit to Circosatabolarc for posting the photo on Twitter, where I saw it. Photo taken by Donald MacMillan of the Crocker Land Expedition, 1913-1917.

Attempting to resolve the name-suns discrepancy, I searched for “Six Suns” and found this photo, also taken by Donald MacMillan, from The Peary-MacMillan Arctic Museum, which actually shows six suns.

I hereby dub this photo “Eight Suns.”

A reverse image search turned up one more similar photo, a postcard titled “Midnight Sun and Moon,” taken at Fort McMurray on the Arctic Coast, sometime before 1943.

As you can see, above the arctic circle, the sun’s arc lies so low relative to the horizon that it appears to move horizontally across the sky. If you extended the photograph into a time-lapse movie, taken at the North Pole, you’d see the sun spiral upward from the Spring Equinox until it reaches 23.5 degrees above the horizon–about a quarter of the way to the top–on the Summer Solstice, and then spiral back down until the Fall Equinox, when it slips below the horizon for the rest of the year.
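If you want to check the numbers yourself, here is a rough Python sketch of the sun’s height at the North Pole over the year. At the pole the sun’s elevation is just the solar declination, which I’m approximating with the standard sinusoid; it ignores refraction and the eccentricity of Earth’s orbit, so treat it as ballpark only.

import math

def solar_elevation_at_pole(day_of_year):
    # At the North Pole the sun's elevation equals the solar declination.
    # Standard rough approximation: 23.44 * sin(2*pi*(N - 80)/365.25),
    # where day 80 is roughly the spring equinox. Ignores refraction and
    # orbital eccentricity, so the answer is only approximate.
    return 23.44 * math.sin(2 * math.pi * (day_of_year - 80) / 365.25)

print(round(solar_elevation_at_pole(80), 1))   # spring equinox: ~0 degrees, sun at the horizon
print(round(solar_elevation_at_pole(172), 1))  # summer solstice: ~23.4 degrees, about a quarter of the way up
print(round(solar_elevation_at_pole(266), 1))  # fall equinox: ~-1.4 degrees; the crude formula puts the equinox a couple of days early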

In other news, here’s a graph of size vs speed for three different classes of animals–flying, running, and swimming creatures–all of which show the same shape. “A general scaling law reveals why the largest animals are not the fastest” H/T NatureEcoEvo

I love this graph; it is a beautiful demonstration of the mathematics underlying bodily shape and design, not just for one class of animals, but for all of us. It is a rule that applies to all moving creatures, despite the fact that running, flying, and swimming are such different activities.
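As I read it, the paper models maximum speed as a power law in body mass multiplied by a saturation term for the limited time an animal can accelerate before its anaerobic fuel runs out; that product is what makes the curve hump-shaped. Here is a toy Python sketch of that shape–the parameters below are invented for illustration, not the paper’s fitted values, which also differ between runners, swimmers, and fliers.

import numpy as np
import matplotlib.pyplot as plt

def v_max(mass_kg, a=15.0, b=0.26, c=2.0, d=-0.6):
    # General form used in Hirt et al. (2017): a power law in mass times a
    # saturation term for the limited acceleration time of big animals.
    # a, b, c, d here are made-up illustrative values, not the paper's fits.
    return a * mass_kg**b * (1 - np.exp(-c * mass_kg**d))

masses = np.logspace(-3, 5, 400)  # ~1 gram up to 100 tonnes
plt.plot(masses, v_max(masses))
plt.xscale("log")
plt.xlabel("body mass (kg)")
plt.ylabel("maximum speed (arbitrary units)")
plt.title("Hump-shaped speed vs. size (illustrative parameters only)")
plt.show()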

I assume similar scaling laws apply to mechanical and aggregate systems, as well.


Book Club: Code Economy, Ch. 10: In which I am Confused

Welcome back to EvX’s Book Club. Today we start the third (and final) part of Auerswald’s The Code Economy: The Human Advantage.

Chapter 10: Complementarity discusses bifurcation, a concept Auerswald mentions frequently throughout the book. He has a graph of the process of bifurcation, whereby the development of new code (ie, technology) leads to the creation of a new “platform” on the one hand, and new human work on the other. With each bifurcation, we move away from the corner of the graph marked “simplicity” and “autonomy,” and toward the corner marked “complexity” and “interdependence.” It looks remarkably like a graph I made about energy inputs vs outputs at different complexity levels, based on a memory of a graph I saw in a textbook some years ago.

There are some crucial differences between our two graphs, but I think they are nonetheless related–and possibly trying to express the same thing.

Auerswald argues that as code becomes platform, it doesn’t steal jobs, but becomes the new base upon which people work. The Industrial Revolution eliminated the majority of farm laborers via automation, but simultaneously provided new jobs for them, in factories. Today, the internet is the “platform” where jobs are being created, not in building the internet, but via businesses like Uber that couldn’t exist without the internet.

Auerswald’s graph (not mine) is one of the few places in the book where he comes close to examining the problem of intelligence. It is difficult to see what unintelligent people are going to do in a world that is rapidly becoming more complicated.

On the other hand, people who didn’t have access to all sorts of resources now do, thanks to internet-based platforms–people in the third world, for example, who never bought land-line telephones because their countries couldn’t afford to build the infrastructure to support them, are snapping up mobile phones and smartphones at an extraordinary rate:

And overwhelming majorities in almost every nation surveyed report owning some form of mobile device, even if they are not considered “smartphones.”

And just like Auerswald’s learning curves from the last chapter, technological spread is speeding up. It took the landline telephone 64 years to go from 0% to 40% of the US market. Mobile phones took only 20 years to accomplish the same feat, and smartphones did it in about 10. (source.)

There are now more mobile phones in the developing world than in the first world, and people aren’t just buying these phones to chat. People who can’t afford to open bank accounts now use their smartphones as “mobile wallets”:

According to the GSMA, an industry group for the mobile communications business, there are now 79 mobile money systems globally, mostly in Africa and Asia. Two-thirds of them have been launched since 2009.

To date, the most successful example is M-Pesa, which Vodafone launched in Kenya in 2007. A little over three years later, the service has 13.5 million users, who are expected to send 20 percent of the country’s GDP through the system this year. “We proved at Vodafone that if you get the proposition right, the scale-up is massive,” says Nick Hughes, M-Pesa’s inventor.

But let’s get back to Auerswald. Chapter 10 contains a very interesting description of the development of the Swiss watch industry. Of course, today, most people don’t go out of their way to buy watches, since their smartphones have clocks built into them. Have smartphones put the Swiss out of business? Not quite, says Auerswald:

Switzerland… today produces fewer than 5 percent of the timepieces manufactured for export globally. In 2014, Switzerland exported 29 million watches, as compared to China’s 669 million… But what of value? … Swiss watch exports were worth 24.3 billion in 2014, nearly five times as much as all Chinese watches combined.
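A quick bit of division makes the bifurcation vivid. (The Chinese total below is my inference from “nearly five times as much,” not a figure from the book.)

swiss_value   = 24.3e9           # dollars of Swiss watch exports, 2014 (from the quote)
swiss_units   = 29e6             # Swiss watches exported, 2014
chinese_units = 669e6            # Chinese watches exported, 2014
chinese_value = swiss_value / 5  # ~$4.9 billion, inferred from "nearly five times as much"

print(round(swiss_value / swiss_units))      # ~838 dollars per Swiss watch
print(round(chinese_value / chinese_units))  # ~7 dollars per Chinese watch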

Aside from the previously mentioned bifurcation of human and machine labor, Auerswald suggests that automation bifurcates products into cheap and expensive ones. He claims that movies, visual art services (ie, copying and digitization of art vs. fine art,) and music have also undergone bifurcation, not extinction, due to new technology.

In each instance, disruptive advances in code followed a consistent and predictable pattern: the creation of a new high-volume, low-price option creates a new market for the low-volume, high-price option. Every time this happens, the new value created through improved code forces a bifurcation of markets, and of work.

Detroit

He then discusses a watch-making startup located in Detroit, which I feel completely and totally misses the point of whatever economic lessons we can draw from Detroit.

Detroit is, at least currently, a lesson in how people fail to deal with increasing complexity, much less bifurcation.

Even that word–bifurcation–contains a problem: what happens to the middle? A huge mass of people at the bottom, making and consuming cheap products, and a small class at the top, making and consuming expensive products–well I will honor the demonstrated preferences of everyone involved for stuff, of whatever price, but what about the middle?

Is this how the middle class dies?

But if the poor become rich enough… does it matter?

Because work is fundamentally algorithmic, it is capable of almost limitless diversification through both combinatorial and incremental change. The algorithms of work become, fairly literally, the DNA of the economy. …

As Geoff Moore puts it, “Digital innovation is reengineering our manufacturing-based product-centric economy to improve quality, reduce cost, expand markets, … It is doing so, however, largely at the expense of traditional middle class jobs. This class of work is bifurcating into elite professions that are highly compensated but outside the skillset of the target population and commoditizing workloads for which the wages fall well below the target level.”

It is easy to take the long view and say, “Hey, the agricultural revolution didn’t result in massive unemployment among hunter-gatherers; the bronze and iron ages didn’t result in unemployed flint-knappers starving in the streets, so we’ll probably survive the singularity, too,” and equally easy to take the short view and say, “screw the singularity, I need a job that pays the bills now.”

Auerswald then discusses the possibilities for using big data and mobile/wearable computers to bring down healthcare costs. I am also in the middle of a Big Data reading binge, and my general impression of health care is that there is a ton of data out there (and more being collected every day,) but it is unwieldy and disorganized; doctors are too busy to use most of it, and patients don’t have access to it. If someone could amass, organize, and sort that data in useful ways, some very useful discoveries could be made.

Then we get to the graph that I didn’t understand, “Trends in Nonroutine Task Input, 1960 to 1998,” which is a bad sign for my future employment options in this new economy.

My main question is what is meant by “nonroutine manual” tasks, and since these were the occupations with the biggest effect shown on the graph, why aren’t they mentioned in the abstract?:

We contend that computer capital (1) substitutes for a limited and well-defined set of human activities, those involving routine (repetitive) cognitive and manual tasks; and (2) complements activities involving non-routine problem solving and interactive tasks. …Computerization is associated with declining relative industry demand for routine manual and cognitive tasks and increased relative demand for non-routine cognitive tasks.

Yes, but what about the non-routine manual? What is that, and why did it disappear first? And does this graph account for increased offshoring of manufacturing jobs to China?

If you ask me, it looks like there are three different events recorded in the graph, not just one. First, from 1960 onward, “non-routine manual” jobs plummet. Second, from 1960 through 1970, “routine cognitive” and “routine manual” jobs increase faster than “non-routine analytic” and “non-routine interactive.” Third, from 1980 onward, the routine jobs head downward while the analytic and interactive jobs become more common.

*Downloads the PDF and begins to read* Here’s the explanation of non-routine manual:

Both optical recognition of objects in a visual field and bipedal locomotion across an uneven surface appear to require enormously sophisticated algorithms, the one in optics and the other in mechanics, which are currently poorly understood by cognitive science (Pinker, 1997). These same problems explain the earlier mentioned inability of computers to perform the tasks of long haul truckers.

In this paper we refer to such tasks requiring visual and manual skills as ‘non-routine manual activities.’

This does not resolve the question.

Discussion from the paper:

Trends in routine task input, both cognitive and manual, also follow a striking pattern. During the 1960s, both forms of input increased due to a combination of between- and within-industry shifts. In the 1970s, however, within-industry input of both tasks declined, with the rate of decline accelerating.

As distinct from the other four task measures, we observe steady within- and between-industry shifts against non-routine manual tasks for the entire four decades of our sample. Since our conceptual framework indicates that non-routine manual tasks are largely orthogonal to computerization, we view this pattern as neither supportive nor at odds with our model.

Now, it’s 4 am and the world is swimming a bit, but I think “we aren’t predicting any particular effect on non-routine manual tasks” should have been stated up front in the thesis portion. Sticking it in here feels like ad-hoc explaining away of a discrepancy. “Well, all of the other non-routine tasks went up, but this one didn’t, so, well, it doesn’t count because they’re hard to computerize.”

Anyway, the paper is 62 pages long, including the tables and charts, and I’m not reading it all or second-guessing their math at this hour, but I feel like there is something circular in all of this–“We already know that jobs involving routine labor like manufacturing are down, so we made a model saying they decreased as a percent of jobs because of computers and automation, looked through jobs data, and lo and behold, found that they had decreased. Confusingly, though, we also found that non-routine manual jobs decreased during this time period, even though they don’t lend themselves to automation and computerization.”

I also searched in the document and could find no instance of the words “offshor-,” “China,” “export,” or “outsource.”

Also, the graph Auerswald uses and the corresponding graph in the paper have some significant differences, especially the “routine cognitive” line. Maybe the authors updated their graph with more data, or Auerswald was trying to make the graph clearer. I don’t know.

Whatever is up with this paper, I think we may provisionally accept its data–fewer factory workers, more lawyers–without necessarily accepting its model.

The day after I wrote this, I happened to be reading Davidowitz’s Everybody Lies: Big Data, New Data, and What the Internet Can Tell Us About Who We Really Are, which has a discussion of the best places to raise children.

Talking about Chetty’s data, Davidowitz writes:

The question asked: what is the chance that a person with parents in the bottom 20 percent of the income distribution reaches the top 20 percent of the income distribution? …

So what is it about parts of the United States where there is high income mobility? What makes some places better at leveling the playing field, of allowing a poor kid to have a pretty good life? Areas that spend more on education provide a better chance to poor kids. Places with more religious people and lower crime do better. Places with more black people do worse. Interestingly, this has an effect on not just the black kids but on the white kids living there as well.

Here is Chetty’s map of upward mobility (or the lack thereof) by county. Given how closely it matches a map of “African Americans” + “Native Americans” I have my reservations about the value of Chetty’s research on the bottom end (is anyone really shocked to discover that black kids enjoy little upward mobility?) but it still has some comparative value.

Davidowitz then discusses Chetty’s analysis of where people live the longest:

Interestingly, for the wealthiest Americans, life expectancy is hardly affected by where you live. …

For the poorest Americans, life expectancy varies tremendously…. living in the right place can add five years to a poor person’s life expectancy. …

…religion, environment, and health insurance–do not correlate with longer life spans for the poor. The variable that does matter, according to Chetty and the others who worked on this study? How many rich people live in a city. More rich people in a city means the poor there live longer. Poor people in New York City, for example, live longer than poor people in Detroit.

Davidowitz suggests that maybe this happens because the poor learn better habits from the rich. I suspect the answer is simpler–here are a few possibilities:

1. The rich are effectively stopping the poor from doing self-destructive things, whether positively, eg, funding cultural institutions that poor people go to rather than turning to drugs or crime out of boredom, or negatively, eg, funding police forces that discourage life-shortening crime.

2. The rich fund/support projects that improve general health, like cleaner water systems or better hospitals.

3. The effect is basically just a measurement error that doesn’t account for rich people driving up land prices. The “poor” of New York would be wealthier if they had Detroit rents.

(In general, I think Davidowitz is stronger when looking for correlations in the data than when suggesting explanations for it.)

Now contrast this with Davidowitz’s own study on where top achievers grow up:

I was curious where the most successful Americans come from, so one day I decided to download Wikipedia. …

[After some narrowing for practical reasons] Roughly 2,058 American-born baby boomers were deemed notable enough to warrant a Wikipedia entry. About 30 percent made it through achievements in art or entertainment, 29 percent through sports, 9 percent via politics, and 3 percent in academia or science.

And this is why we are doomed.

The first striking fact I noticed in the data was the enormous geographic variation in the likelihood of becoming a big success …

Roughly one in 1,209 baby boomers born in California reached Wikipedia. Only one in 4,496 baby boomers born in West Virginia did. … Roughly one in 748 baby boomers born in Suffolk County, MA, where Boston is located, made it to Wikipedia. In some counties, the success rate was twenty times lower. …

I closely examined the top counties. It turns out that nearly all of them fit into one of two categories.

First, and this surprised me, many of these counties contained a sizable college town. …

I don’t know why that would surprise anyone. But this was interesting:

Of fewer than 13,000 boomers born in Macon County, Alabama, fifteen made it to Wikipedia–or one in 852. Every single one of them is black. Fourteen of them were from the town of Tuskegee, home of Tuskegee University, a historically black college founded by Booker T. Washington. The list included judges, writers, and scientists. In fact, a black child born in Tuskegee had the same probability of becoming a notable in a field outside of sports as a white child born in some of the highest-scoring, majority-white college towns.

The other factor that correlates with the production of notables?

A big city.

Being born in San Francisco County, Los Angeles County, or New York City all offered among the highest probabilities of making it to Wikipedia. …

Suburban counties, unless they contained major college towns, performed far worse than their urban counterparts.

A third factor that correlates with success is the proportion of immigrants in a county, though I am skeptical of this finding because I’ve never gotten the impression that the southern border of Texas produces a lot of famous people.

Migrant farm laborers aside, though, America’s immigrant population tends to be pretty well selected overall and thus produces lots of high-achievers. (Steve Jobs, for example, was the son of a Syrian immigrant; Thomas Edison was the son of a Canadian refugee.)

The variable that didn’t predict notability:

One I found more than a little surprising was how much money a state spends on education. In states with similar percentages of its residents living in urban areas, education spending did not correlate with rates of producing notable writers, artists, or business leaders.

Of course, this is probably because 1. districts increase spending when students do poorly in school, and 2. rich people in urban areas send their kids to private schools.

BUT:

It is interesting to compare my Wikipedia study to one of Chetty’s team’s studies discussed earlier. Recall that Chetty’s team was trying to figure out what areas are good at allowing people to reach the upper middle class. My study was trying to figure out what areas are good at allowing people to reach fame. The results are strikingly different.

Spending a lot on education helps kids reach the upper middle class. It does little to help them become a notable writer, artist, or business leader. Many of these huge successes hated school. Some dropped out.

Some, like Mark Zuckerberg, went to private school.

New York City, Chetty’s team found, is not a particularly good place to raise a child if you want to ensure he reaches the upper middle class. It is a great place, my study found, if you want to give him a chance at fame.

A couple of methodological notes:

Note that Chetty’s data not only looked at where people were born, but also at mobility–poor people who moved from the Deep South to the Midwest were also more likely to become upper middle class, and poor people who moved from the Midwest to NYC were also more likely to stay poor.

Davidowitz’s data only looks at where people were born; he does not answer whether moving to NYC makes you more likely to become famous. He also doesn’t discuss who is becoming notable–are cities engines that make the children of already successful people even more successful, or are they places where even the poor have a shot at being famous?

I reject Davidowitz’s conclusions (which impute causation where there is only correlation) and substitute my own:

Cities are acceleration platforms for code. Code creates bifurcation. Bifurcation creates winners and losers while obliterating the middle.

This is not necessarily a problem if your alternatives are worse–if your choice is between poverty in NYC or poverty in Detroit, you may be better off in NYC. If your choice is between poverty in Mexico and poverty in California, you may choose California.

But if your choice is between a good chance of being middle class in Salt Lake City versus a high chance of being poor and an extremely small chance of being rich in NYC, you are probably a lot better off packing your bags and heading to Utah.

But if cities are important drivers of innovation (especially in science, to which we owe thanks for things like electricity and refrigerated food shipping,) then Auerswald has already provided us with a potential solution to their runaway effects on the poor: Henry George’s land value tax. As George recounts, one day, while overlooking San Francisco:

I asked a passing teamster, for want of something better to say, what land was worth there. He pointed to some cows grazing so far off that they looked like mice, and said, “I don’t know exactly, but there is a man over there who will sell some land for a thousand dollars an acre.” Like a flash it came over me that there was the reason of advancing poverty with advancing wealth. With the growth of population, land grows in value, and the men who work it must pay more for the privilege.

Alternatively, higher taxes on fortunes like Zuckerberg’s and Bezos’s might accomplish the same thing.

Thermodynamics and Urban Sprawl

Termite Mound

Evolution is just a special case of thermodynamics. Molecules spontaneously arrange themselves to optimally dissipate energy.

Society itself is a thermodynamic system for entropy dissipation. Energy goes in–in the form of food and, recently, fuels like oil–and children and buildings come out.

Government is simply the entire power structure of a region–from the President to your dad, from bandits to your boss. But when people say, “government,” they typically mean the official one written down in laws that lives in white buildings in Washington, DC.

London

When the “government” makes laws that try to change the natural flow of energy or information through society, society responds by routing around the law, just as water flows around a boulder that falls in a stream.

The ban on trade with Britain and France in the early 1800s, for example, did not actually stop people from trading with Britain and France–trade just became re-routed through smuggling operations. It took a great deal of energy–in the form of navies–to suppress piracy and smuggling in the Gulf and Caribbean–chiefly by executing pirates and imprisoning smugglers.

Beehive

When the government decided that companies couldn’t use IQ tests in hiring anymore (because IQ tests have a “disparate impact” on minorities because black people tend to score worse, on average, than whites,) in Griggs vs. Duke Power, companies didn’t start hiring more black folks. They just started using college degrees as a proxy for intelligence, contributing to the soul-crushing debt and degree inflation young people know and love today.

Similarly, when the government tried to stop companies from asking about applicants’ criminal histories–again, because the results were disproportionately bad for minorities–companies didn’t start hiring more blacks. Since not hiring criminals is important to companies, HR departments turned to the next best metric: race. These laws ironically led to fewer blacks being hired, not more.

Where the government has tried to protect the poor by passing tenant’s rights laws, we actually see the opposite: poorer tenants are harmed. By making it harder to evict tenants, the government makes landlords reluctant to take on high-risk (ie, poor) tenants.

The passage of various anti-discrimination and subsidized housing laws (as well as the repeal of various discriminatory laws throughout the mid-20th century) led to the growth of urban ghettos, which in turn triggered the crime wave of the 70s, 80s, and 90s.

Crime and urban decay have made inner cities–some of the most valuable real estate in the country–nigh unlivable, resulting in the “flight” of millions of residents and the collective loss of millions of dollars due to plummeting home values.

Work-arounds are not cheap. They are less efficient–and thus more expensive–than the previous, banned system.

Urban sprawl driven by white flight

Smuggled goods cost more than legally traded goods due to the personal risks smugglers must take. If companies can’t tell who is and isn’t a criminal, the cost of avoiding criminals becomes turning down good employees just because they happen to be black. If companies can’t directly test intelligence, the cost becomes a massive increase in the amount of money being spent on accreditation and devaluation of the signaling power of a degree.

We have dug up literally billions of dollars’ worth of concentrated sunlight in the form of fossil fuels in order to rebuild our nation’s infrastructure around the criminal blights in the centers of our cities, condemning workers to hour-long commutes and inflated prices for homes in neighborhoods with “good schools.”

Note: this is not an argument against laws. Some laws increase efficiency. Some laws make life better.

This is a reminder that everything is subject to thermodynamics. Nothing is free.

Entropy, Life, and Welfare (pt. 3/3)

Communism works so well, soldiers had to push Fidel Castro’s hearse because the Cuban government couldn’t find a working truck

This is Part Three of a series on how incentives affect the distribution of energy/resources throughout a society and the destructive effects of social systems like communism. (Part One and Part Two are here)

But before we criticize these programs too much, let’s understand where they came from:

The Industrial Revolution, which began around 1760 in Britain, created mass economic and social dislocation as millions of workers were forced off their farms and flooded into the cities.

Communism: the world’s single biggest source of murder in the 20th century

The booms and busts of the unregulated (and regulated) industrial economy caused sudden, unpredictable unemployment and, without a social safety net of some kind, starvation. This suffering unleashed Marxism, which soon transformed into an anti-capitalist, anti-Western ideology and tore across the planet, demolishing regimes and killing millions of people.

Reason.com attributes 94 million deaths to communism. The Black Book of Communism places the total between 85 and 100 million people. Historian on the Warpath totals almost 150 million people killed or murdered by communist governments, not including war deaths. (Wikipedia estimates that WWII killed, between battle deaths in Europe and the Pacific, disease, starvation, and genocide, 50-80 million people–and there were communists involved in WWII, also.)

The US and Europe, while not explicitly communist, have adopted many of socialism’s suggestions: Social Security, Welfare, Medicaid, etc., many in direct response to the Great Depression.

These solutions are, at best, stop-gap measures to deal with the massive changes new technologies are still causing. Remember, humans were hunter-gatherers for 190,000 years. We had a long time to get used to being hunter-gatherers. 10,000 years ago, a few of us started farming, and developed whole new cultures. A mere 200 years ago, the Industrial Revolution began spreading through Europe. Today, the “post industrial information economy” (or “robot economy,” as I call it,) is upon us, and we have barely even begun to adapt.

We are in an age that is–out of our 200,000 years of existence–entirely novel and the speed of change is increasing. We have not yet figured out how to cope, how to structure society for the long-term so that we don’t accidentally break it.

We have gotten very good, however, at creative accounting to make it look like we are producing more than we are.

By the mid-1950s, the Industrial Revolution had brought levels of prosperity never before seen in human history to the US (and soon to Europe, Japan, Korea, etc.) But since the ’70s, things seem to have gone off-track.

People fault outsourcing and trade for the death of the great American job market, but technical progress and automation also deserve much of the blame. As the Daily Caller reports:

McDonald’s has announced plans to roll out automated kiosks and mobile pay options at all of its U.S. locations, raising questions about the future of its 1.5 million employees in the country and around the globe.

Roughly 500 restaurants in Florida, New York and California now have the automated ordering stations, and restaurants in Chicago, Boston, San Francisco, Seattle and Washington, D.C., will be outfitted in 2017, according to CNNMoney.

The locations that are seeing the first automated kiosks closely correlate with the fight for a $15 minimum wage. Gov. Andrew Cuomo signed into law a new $15 minimum wage for New York state in 2016, and the University of California has proposed to pay its low-wage employees $15.

There is an obvious trade-off between robots and employees: where wages are low enough, there is little incentive to invest capital in developing and purchasing robots. Where wages are high, there is more incentive to build robots.
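To put the trade-off in concrete terms, here is a toy breakeven calculation in Python. Every number in it–kiosk price, lifespan, hours displaced, overhead–is an assumption I made up for illustration, not a McDonald’s figure; the point is only that raising the wage moves the crossover.

KIOSK_COST        = 60_000.0  # purchase + installation, dollars (assumed)
KIOSK_LIFE_YEARS  = 4         # useful life in years (assumed)
KIOSK_MAINTENANCE = 5_000.0   # maintenance per year (assumed)
HOURS_REPLACED    = 1_500     # cashier-hours displaced per year (assumed)
OVERHEAD          = 1.2       # payroll taxes, training, scheduling slack (assumed)

kiosk_annual = KIOSK_COST / KIOSK_LIFE_YEARS + KIOSK_MAINTENANCE

for wage in (7.25, 10.00, 15.00):
    labor_annual = wage * HOURS_REPLACED * OVERHEAD
    winner = "kiosk" if kiosk_annual < labor_annual else "human"
    print(f"${wage:.2f}/hr: labor ${labor_annual:,.0f}/yr vs kiosk ${kiosk_annual:,.0f}/yr -> {winner} wins")

With these made-up numbers the human wins at $7.25 and $10, and the kiosk wins at $15; change the assumptions and the crossover moves, but the direction of the incentive doesn’t.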

The Robot Economy will continue to replace the low-skilled, low-wage jobs that blue collar workers and young people used to do. No longer will teenagers get summer jobs at McDonald’s. Many if not most of these workers are simply extraneous in the modern economy and cannot be “retrained” to do more information-dependent work. The expansion of the Welfare State, education (also paid for with tax dollars,) and make-work administrative positions can keep these displaced workers fed and maybe even “employed” for the foreseeable future, but they are not a long-term solution, and it is obvious that people in such degraded positions, unable to work, often lose the will to keep going.

But people do not appreciate the recommendation that they should just fuck off and die already. That’s how you get communist revolutions in the first place.

Mass immigration of unskilled labor into a market already shrinking due to automation / technological progress is a terrible idea. This is Basic Econ 101: Supply and Demand. If the supply of labor keeps increasing while the demand for labor keeps decreasing, the cost of labor (wages) will plummet. Likewise, corporations quite explicitly state that they want immigrants–including illegal ones–because they can pay them less.

In an economy with more demand than supply for labor, labor can organize (unions) and advocate on behalf of its common interests, demanding a higher share of profits, health insurance, pensions, cigarette breaks, etc. When the supply of labor outstrips demand, labor cannot advocate on its own behalf, because any uppity worker can simply be replaced by some desperate, unemployed person willing to work for less and not make a fuss.

Note two professions in the US that are essentially protected by union-like organizations: doctors and lawyers. Both professions require years of expensive training at exclusive schools and high scores on difficult tests. Lawyers must also be members of their local Bar Associations, and doctors must endure residency. These requirements keep out the majority of people who would like to join these professions, and ensure high salaries for most who do.

While Residency sounds abjectly awful, the situation for doctors in Britain and Ireland sounds much worse. Slate Star Codex goes into great detail about the problems:

Many of the junior doctors I worked with in Ireland were working a hundred hours a week. It’s hard to describe what working 100 hours a week is like. Saying “it means you work from 7 AM to 9 PM every day including weekends” doesn’t really cut it. Imagine the hobbies you enjoy and the people you love. Now imagine you can’t spend time on any of them, because you are being yelled at as people die all around you for fourteen hours a day, and when you get home you have just enough time to eat dinner, brush your teeth, possibly pay a bill or two, and curl up in a ball before you have to go do it all again, and your next day off is in two weeks.

And this is the best case scenario, where everything is spaced out nice and even. The junior doctors I knew frequently worked thirty-six hour shifts at a time (the European Court of Human Rights has since declined to fine Ireland for this illegal practice). …

The psychological consequences are predictable: after one year, 55% of junior doctors describe themselves as burned out, 30% meet criteria for moderate depression, and 12% report considering suicide.

A lot of American junior doctors are able to bear this by reminding themselves that it’s only temporary. The worst part, internship, is only one year; junior doctorness as a whole only lasts three or four. After that you become a full doctor and a free agent – probably still pretty stressed, but at least making a lot of money and enjoying a modicum of control over your life.

In Britain, this consolation is denied most junior doctors. Everyone works for the government, and the government has a strict hierarchy of ranks, only the top of which – “consultant” – has anything like the freedom and salary that most American doctors enjoy. It can take ten to twenty years for junior doctors in Britain to become consultants, and some never do.

I don’t know about you, but I really don’t want my doctor to be suicidal.

Now, you may notice that Scott doesn’t live in Ireland anymore, and similarly, many British doctors take their credentials and move abroad as quickly as possible. The British medical system would be forced to reform if not for the influx of foreign doctors willing to put up with hell in exchange for not living in the third world.

From the outside, many of these systems, from underfunded pensions to British medicine, look just fine. Indeed, an underfunded pension will operate just fine until the day it runs out of money. Until that day, everyone who claims the pension is in deep trouble looks like Chicken Little, running around claiming that the sky is falling.

There’s a saying in finance: The market can stay irrational longer than you can stay solvent.

BTW, the entire state of California is in deep trouble, from budget problems to insane property tax laws. They already consume far more water than they receive, (and are set for massive forest fires,) but vote for increased population via immigration from Mexico. California’s economy is being propped up by–among other things–masses of cash flowing into Silicon Valley. This is Dot.Com Bubble 2.0, and like the first, it will pop–the only question is when. As Reuters reported last February:

LinkedIn Corp’s (LNKD.N) shares closed down 43.6 percent on Friday, wiping out nearly $11 billion of market value, after the social network for professionals shocked Wall Street with a revenue forecast that fell far short of expectations. …

As of Thursday, LinkedIn shares were trading at 50 times forward 12-month earnings, making it one of the most expensive stocks in the tech sector.

Twitter Inc (TWTR.N) trades at 29.5 times forward earnings, Facebook Inc (FB.O) at 33.8 times and Alphabet Inc (GOOGL.O) at 20.9 times.

Even after the selloff, LinkedIn’s shares may still be overvalued, according to Thomson Reuters StarMine data.

LinkedIn should be trading at $71.79, a 30 percent discount to the stock’s Friday’s low, according to StarMine’s Intrinsic Valuation model, which takes analysts’ five-year estimates and models the growth trajectory over a longer period.

LinkedIn has since been bought out by Microsoft for $26 billion. As Fortune notes, this is absolutely insane, as there is no way Microsoft can make back that much money off of LinkedIn:

Source: Fortune, http://fortune.com/2016/06/13/microsoft-linkedin-overpaid/

“Ebitda” stands for Earnings Before Interest, Tax, Depreciation and Amortisation. There is absolutely no way that LinkedIn, a social network that barely turns a profit, is worth more than Sun, EMC, Compaq, and Time Warner.

Shares normally trade around 20x a company’s previous year’s earnings, though right now the S & P’s P/E ratio is around 25. In 2016, LinkedIn’s P/E ratio has been around 180. (Even crazier, their ratio in 2015 was -1,220, because they lost money.)
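The arithmetic behind why a P/E of 180 is insane: invert the ratio to get the earnings yield, or equivalently count how many years of flat earnings it would take to earn back the share price. (Flat earnings is of course a simplification; growth is the whole bull case, and the question is whether it ever materializes.)

for label, pe in [("historical norm", 20), ("S&P 500, 2016", 25), ("LinkedIn, 2016", 180)]:
    earnings_yield = 1 / pe  # share of the price earned back each year, at flat earnings
    print(f"{label}: P/E {pe} -> earnings yield {earnings_yield:.2%}, ~{pe} years to earn back the price")

# historical norm: 5.00% and ~20 years; S&P 500: 4.00% and ~25 years;
# LinkedIn: 0.56% and ~180 years of flat earnings to repay the price.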

Ever wonder where all of that money from QE is going? It’s turning into Ferraris cruising around San Francisco, and LinkedIn is not the only offender.

But these companies will not maintain fantasy valuations forever.

(While we’re at it: Why the AOL-Time Warner Merger Went so Wrong:

When the deal was announced on Jan. 10, 2000, Stephen M. Case, a co-founder of AOL, said, “This is a historic moment in which new media has truly come of age.” His counterpart at Time Warner, the philosopher chief executive Gerald M. Levin, who was fond of quoting the Bible and Camus, said the Internet had begun to “create unprecedented and instantaneous access to every form of media and to unleash immense possibilities for economic growth, human understanding and creative expression.”

The trail of despair in subsequent years included countless job losses, the decimation of retirement accounts, investigations by the Securities and Exchange Commission and the Justice Department, and countless executive upheavals. Today, the combined values of the companies, which have been separated, is about one-seventh of their worth on the day of the merger.)

So, that was a bit of a long diversion into the sheer artificiality of much of our economy, and how sooner or later, the Piper must be paid.

When I try to talk to liberal friends about the effects of increasing automation and immigration on the incomes of the American working class, their response is that “We just need more regulation.”

In this cheerful fantasy, we can help my friend who cannot afford health insurance by requiring his employer to provide health insurance–when in reality, my friend now cannot find a job that lasts for more than a month because employers just fire him before the health insurance requirement kicks in. In fantasy land, you can protect poor people by making it harder for landlords to evict them, but in the real world, this makes it even harder for the poorest to get long-term housing because no landlord wants to take the chance of getting stuck with them. In fantasy land, immigration doesn’t hurt wages because you can just legislate a higher minimum wage, but the idea that you can legislate a wage that the market does not support is an absurdity worthy only of the USSR. In the real world, your job gets replaced with a robot.

This is not to say that we can’t have some form of welfare or social safety net to deal with the dislocations and difficulties of our new economy. Indeed, some form of social welfare may, in the long run, make the economic system more robust by allowing people to change jobs or weather temporary unemployment without dying. Nor does it mean that any inefficiency is going to break the system. But long-term, using legislation to create a problem and then using more legislation to prevent the market from correcting it increases inefficiency, and you are now spending resources to enforce both laws.

Just like Enron’s “creative accounting,” you cannot keep hiding losses indefinitely.

You can have open-borders capitalism with minimal welfare, in which the most skilled thrive and survive and the least skilled die out. This is more-or-less the system in Singapore (see here for a discussion of how they use personal savings accounts instead of most welfare; a discussion of poverty in Singapore; and Singapore’s migration policies.)

Or you can have a Japanese or Swedish-style welfare state, but no open borders, (because the system will collapse if you let in just anyone who wants free money [hint: everyone.])

But you cannot just smash two different systems together, heap more laws on top of them to try to prevent the market from responding, and expect it to carry on indefinitely producing the same levels of wealth and well-being as it always has.

The laws of thermodynamics are against you.

(Return to Part One and Part Two.)

Entropy, Life, and Welfare (pt 1)

DNA structure

(This is Part 1. Part 2 and Part 3 are here.)

All living things are basically just homeostatic entropy reduction machines. The most basic cell, floating in the ocean, uses energy from sunlight to order its individual molecules, creating, repairing, and building copies of itself, which continue the cycle. As Jeremy England of MIT demonstrates:

From the standpoint of physics, there is one essential difference between living things and inanimate clumps of carbon atoms: The former tend to be much better at capturing energy from their environment and dissipating that energy as heat. Jeremy England … has derived a mathematical formula that he believes explains this capacity. The formula, based on established physics, indicates that when a group of atoms is driven by an external source of energy (like the sun or chemical fuel) and surrounded by a heat bath (like the ocean or atmosphere), it will often gradually restructure itself in order to dissipate increasingly more energy. …

This class of systems includes all living things. England then determined how such systems tend to evolve over time as they increase their irreversibility. “We can show very simply from the formula that the more likely evolutionary outcomes are going to be the ones that absorbed and dissipated more energy from the environment’s external drives on the way to getting there,” he said. …

“This means clumps of atoms surrounded by a bath at some temperature, like the atmosphere or the ocean, should tend over time to arrange themselves to resonate better and better with the sources of mechanical, electromagnetic or chemical work in their environments,” England explained.

Self-replication (or reproduction, in biological terms), the process that drives the evolution of life on Earth, is one such mechanism by which a system might dissipate an increasing amount of energy over time. As England put it, “A great way of dissipating more is to make more copies of yourself.” In a September paper in the Journal of Chemical Physics, he reported the theoretical minimum amount of dissipation that can occur during the self-replication of RNA molecules and bacterial cells, and showed that it is very close to the actual amounts these systems dissipate when replicating.

Energy isn’t just important to plants, animals, and mitochondria. Everything from molecules to sand dunes, cities, and even countries absorbs and dissipates energy. And like living things, cities and countries use energy to grow, construct buildings, roads, water systems, and even sewers to dispose of waste. Just as finding food and not being eaten are an animal’s first priorities, energy policy and not being conquered are vital to a nation’s well-being.

Hunter-gatherer societies are, in most environments, the most energy-efficient–hunter gatherers expend relatively little energy to obtain food and build almost no infrastructure, resulting in a fair amount of time left over for leisure activities like singing, dancing, and visiting with friends.

But as the number of people in a group increases, hunter-gathering cannot scale. Putting in more hours hunting or gathering can only increase the food supply so much before you simply run out.

Horticulture and animal herding require more energy inputs–hoeing the soil, planting, harvesting, building fences, managing large animals–but create enough food output to support more people per square mile than hunter-gathering.

Agriculture requires still more energy, and modern industrial agriculture more energy still, but supports billions of people. Agricultural societies produced history’s first cities–civilizations–and (as far as I know) history’s first major collapses. Where the land is over-fished, over-farmed, or otherwise over-extracted, it stops producing and complex systems dependent on that production collapse.

Senenu, an Egyptian scribe, grinding grain by hand, ca. 1352-1336 B.C

I’ve made a graph to illustrate the relationship between energy input (work put into food production) and energy output (food, which of course translates into more people.) Note how changes in energy sources have driven our major “revolutions”–the first, not in the graph, was the taming and use of fire to cook our food, releasing more nutrients than mere chewing ever could. Switching from jaw power to fire power unlocked the calories necessary to fund the jump in brain size that differentiates humans from our primate cousins, chimps and gorillas.

That said, hunter gatherers (and horticulturalists) still rely primarily on their own power–foot power–to obtain their food.

Scheme of the Roman Hierapolis sawmill, the earliest known machine to incorporate a crank and connecting rod mechanism. Note the use of falling water to perform the work, rather than human muscles.

The Agricultural Revolution harnessed the power of animals–mainly horses and oxen–to drag plows and grind grain. The Industrial Revolution created engines and machines that released the power of falling water, wind, steam, coal, and oil, replacing draft animals with grist mills, tractors, combines, and trains.

Modern industrial societies have achieved their amazing energy outputs–allowing us to put a man on the moon and light up highways at night–via a massive infusion of energy, principally fossil fuels, vital to the production of synthetic fertilizers:

Nitrogen fertilizers are made from ammonia (NH3), which is sometimes injected into the ground directly. The ammonia is produced by the Haber-Bosch process. In this energy-intensive process, natural gas (CH4) supplies the hydrogen, and the nitrogen (N2) is derived from the air. …

Deposits of sodium nitrate (NaNO3) (Chilean saltpeter) are also found in the Atacama desert in Chile and was one of the original (1830) nitrogen-rich fertilizers used. It is still mined for fertilizer.

Actual mountain of corn, because industrial agriculture is just that awesome

Other fertilizers are made of stone, mined from the earth, shipped, and spread on fields, all courtesy of modern industrial equipment, run on gasoline.

Without the constant application of fertilizer, we wouldn’t have these amazing crop yields:

In 2014, average yield in the United States was 171 bushels per acre. (And the world record is an astonishing 503 bushels, set by a farmer in Valdosta, Ga.) Each bushel weighs 56 pounds and each pound of corn yields about 1,566 calories. That means corn averages roughly 15 million calories per acre. (Again, I’m talking about field corn, a.k.a. dent corn, which is dried before processing. Sweet corn and popcorn are different varieties, grown for much more limited uses, and have lower yields.)
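The quoted figure is easy to verify with simple multiplication; the only thing I’ve added is the standard 2,000-calorie day to give a sense of scale.

bushels_per_acre   = 171    # 2014 US average, from the quote above
pounds_per_bushel  = 56
calories_per_pound = 1_566

calories_per_acre = bushels_per_acre * pounds_per_bushel * calories_per_pound
print(calories_per_acre)                   # 14996016, i.e. roughly 15 million, as quoted
print(calories_per_acre // (2_000 * 365))  # 20: about twenty people's worth of 2,000-calorie days for a year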

As anyone who has grown corn will tell you, corn is a nutrient hog; all of those calories aren’t free. Corn must be heavily fertilized or the soil will run out and your farm will be worthless.

We currently have enough energy sources that the specific source–fossil fuels, hydroelectric, wind, solar, even animal–is not particularly important, at least for this discussion. Much more important is how society uses and distributes its resources. For, like all living things, a society that misuses its resources will collapse.

To be continued…Go on to Part 2 and Part 3.

 

Graph of energy input vs. output by economic type.

I have been looking for this graph for some time, failed, and finally re-created it from memory. So warning: this was re-created from memory. A really old memory.

Anyway, this graph shows the relationship between energy inputs (work) and energy output (typically food, but also shelter, children, luxury goods, etc.) for a given variety of human technology/economic organizational structure.

(Note that the graph is not to scale and only a conceptual representation of the idea.)

So for example, in a hunter-gatherer society, inputting more energy by hunting more often will reward people with more food, but only up to a point. As game becomes scarcer, hunters bring home less food, and eventually you eat all of the animals in the area and are actually getting less out of hunting than you’re putting into it.

Even at its maximum efficiency, a hunter-gatherer society simply can’t (in most environments) obtain much food and can’t support many people.

Growing food takes much more energy, but the results support far more people.

Modern industrial societies take a ton of energy to run, but also support billions of people, cities, etc.

Of course, even modern industrial societies still need to be careful about that right-hand side of the curve.
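Since my graph is itself a reconstruction from memory, here is an equally conceptual version in code; the curve shapes and constants are invented to match the description above (rise, peak, then decline from over-extraction, with later modes of production needing more input but supporting far more output). None of it is data.

import numpy as np
import matplotlib.pyplot as plt

effort = np.linspace(0, 10, 400)  # energy input (arbitrary units)

def production_curve(effort, scale, peak_effort):
    # Rise-then-fall shape: output grows with input until over-extraction
    # (overhunting, soil exhaustion) drags it back down past the peak.
    return scale * effort * np.exp(-effort / peak_effort)

plt.plot(effort, production_curve(effort, scale=1.0, peak_effort=1.5), label="hunter-gathering")
plt.plot(effort, production_curve(effort, scale=3.0, peak_effort=3.0), label="agriculture")
plt.plot(effort, production_curve(effort, scale=8.0, peak_effort=5.0), label="industrial")
plt.xlabel("energy input (work)")
plt.ylabel("energy output (food, shelter, children, ...)")
plt.title("Conceptual only: not to scale, not data")
plt.legend()
plt.show()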

Why are Mammals Brown? (pt. 1)

We don’t naturally look like this

Compared to colorful fish, lizards, birds, and even ladybugs, we mammals are downright drab. I see no particular environmental reason for this–plenty of mammals live in areas with trees or grass where green fur or spots might help them blend in, or have such striking patterns–like a zebra–that I hardly think a blue stripe would result in more lion attacks.

I think there are two main reasons mammals are mostly brown, instead of showing the vibrant colors of other species:

1. Some colors are difficult to produce.

Blue, for example. Walk into the forest or a meadow on an average day, and you’ll see a lot of green. Anything not green is likely brown. Outside a garden, there are very few naturally blue or purple plants.

This guy, however, does

It’s no coincidence that early human art uses colors that could be easily produced from the natural environment, like brown, black (charcoal), and yellow. By the Roman era, we could produce purple dye, but it was so hard to obtain from such rare sources (shells) that it was prohibitively expensive for mere mortals, hence why it was called “royal purple.” The European tradition of painting the Virgin Mary’s cloak blue also hails from the days when blue pigments were expensive, and thus a sign of exalted status.

A purple dye cheap enough for average people to buy and wear wasn’t invented until 1856, by William Henry Perkin.

I’m not sure exactly why blue and purple are so hard to produce, but I think it’s because light toward the violet end of the spectrum is higher energy than light toward the red end. As Bulina et al state:

Pigments in nature play important roles ranging from camouflage coloration and sunscreen to visual reception and participation in biochemical pathways. Considering the spectral diversity of pigment-based coloration in animals one can conclude that blue pigments occur relatively rare (as a rule blue coloration results from light diffraction or scattering rather than the presence of a blue pigment). At least partially this fact is explained by an inevitably more complex structure of blue pigments compared to yellow-reds. To appear blue a compound must contain an extended and usually highly polarized system of the conjugated π-electrons.

Okay… So, because blue and purple are more energetic, they require molecules that have more double bonds and are less common in nature. (Why double bonds are less common is a matter I’ll leave for a chemistry discussion.)

You’re probably used to thinking of color as an inherent property of the objects around you–that a green leaf is green, or a red bucket is red, in the same way that the leaf and bucket have a particular mass and are made of their particular atoms.

Low energy to the left, high to the right

But turn off the lights, and suddenly color goes away. (Mass doesn’t.)

The colors we see are created by light “bouncing” (really, being absorbed and then re-emitted) off objects. Within the visible spectrum, red light requires the least energy to produce (because it has the longest wavelength,) and violet takes the most energy.
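You can put numbers on that with the textbook relation E = hc/wavelength; the constants are standard and the wavelengths are just the conventional red and violet ends of the visible range.

h  = 6.626e-34   # Planck constant, J*s
c  = 2.998e8     # speed of light, m/s
eV = 1.602e-19   # joules per electron-volt

for name, wavelength_nm in [("red, ~700 nm", 700), ("violet, ~400 nm", 400)]:
    energy = h * c / (wavelength_nm * 1e-9) / eV
    print(f"{name}: {energy:.2f} eV per photon")
# red:    ~1.77 eV
# violet: ~3.10 eV, roughly 75 percent more energy per photon than red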

But nature, being creative, has come up with an alternative way to produce blues and purples that doesn’t depend on electron energy levels: structure.

Unless you are a color scientist you are probably accustomed to dealing with chemical colors. For example, if you take a handful of blue pigment powder, mix it with water, paint it onto a chair, let it dry, then scrape it off the chair, and grind it back into powder, you expect it to remain blue at all stages in the process (except if you get a bit of chair mixed in with it.)

Blue Morpho butterfly

By contrast, if you scraped the scales off a blue morpho butterfly’s wings, you’d just end up with a pile of grey dust and a sad butterfly. By themselves, blue morpho scales are not “blue,” even under regular light. Rather, their scales are arranged so that light bounces between them, like light bouncing from molecule to molecule in the air. Or as Ask Nature puts it:

Many types of butterflies use light-interacting structures on their wing scales to produce color. The cuticle on the scales of these butterflies’ wings is composed of nano- and microscale, transparent, chitin-and-air layered structures. Rather than absorb and reflect certain light wavelengths as pigments and dyes do, these multi scale structures cause light that hits the surface of the wing to diffract and interfere.

The same process is at work in the peacock’s plumage and bluebird’s blue:

Male eastern bluebird

Soft condensed matter physics has been particularly useful in understanding the production of the amorphous nanostructures that imbue the feathers of certain bird species with intensely vibrant hues. The blue color of the male Eastern bluebird (Sialia sialis), for example, is produced by the selective scattering of blue light from a complex nanostructure of b-keratin channels and air pockets in the hairlike branches called feather barbs that give the quill its lift. The size of the air pockets determines the wavelengths that are selectively amplified.

When the bluebird’s feathers are developing, feather barb cells known as medullary keratinocytes expand to their boxy final shape and deposit solid keratin around the periphery of the cell—essentially turning the walled-in cells into soups of ß-keratin suspended in cytoplasm. Next, b-keratin filaments free in the cytoplasm start to bind to each other to form larger bundles. As these filaments become less water-soluble, they begin to come out of solution—a process known as phase separation—ultimately forming solid bars that surround twisted channels of cytoplasm. These nanoscale channels of keratin remain in place after the cytoplasm dries out and the cell dies, resulting in the nanostructures observed in the feathers of mature adults.

“The bluebird doesn’t lay down a squiggly architecture and then put the array of the protein molecules on top of it,” Prum explains. “It lets phase separation, the same process that would occur in oil and vinegar unmixing, create this spatial structure itself.”

The point at which the phase separation halts determines the color each feather produces.
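A crude way to see how pocket size picks the color: treat the keratin and air as a neat stack of thin layers and ask which wavelength reflects constructively, i.e. where 2 x (n_keratin x d_keratin + n_air x d_air) = m x wavelength. Real feather nanostructure is quasi-ordered rather than a tidy stack, and the thicknesses and refractive index below are my ballpark assumptions, but the scaling is the point.

n_keratin, d_keratin = 1.56, 100e-9  # refractive index (ballpark) and layer thickness (assumed)
n_air,     d_air     = 1.00, 80e-9   # air-pocket "layer" thickness (assumed)

optical_period = n_keratin * d_keratin + n_air * d_air

for m in (1, 2):
    # Bragg condition for a periodic two-layer stack at normal incidence:
    # constructive reflection where 2 * optical_period = m * wavelength.
    wavelength_nm = 2 * optical_period / m * 1e9
    print(f"order m={m}: strongest reflection near {wavelength_nm:.0f} nm")
# m=1 lands near 472 nm -- blue -- for these made-up layer sizes; shrink or grow
# the air pockets and the reflected color moves, with no pigment anywhere.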

Decades old Pollia fruit retains its structural brilliance

This kind of structural color works great if your medium is scales, feathers, carapaces, berries, or even CDs, but just doesn’t work with hair, which we mammals have. Unlike the carefully hooked together structure of a feather or the details of a butterfly’s scales, hair moves. It shakes. It would have to be essentially solid to create structural color, and it’s not.

So for the most part, bright colors like green, blue, and purple are expensive, energy-wise, to produce chemically, and mammals don’t have the option birds, fish, lizards, and insects have of producing them structurally.

To be continued…