I received this question right after I finished crocheting all of the giftwrap ribbons into flowers and thought, “Huh, why am I doing this?”
The short answer is that I don’t know.
At the most direct and obvious level, I knit (or crochet, but for the sake of this post, I will be collapsing most yarn-related arts under the term “knitting”) because it’s fun, fast, and easy, and in the end you actually make something.
Knitting is very portable. I love my 3D printer, but I can’t exactly pop it in my purse and take it to the park with me. I skateboard, but I can’t skateboard at the mall or on an airplane (well, I could, but then I’d be having an awkward discussion with security). Sometimes I make chainmail, but that’s full of fiddly little bits that you can’t really balance on your knees.
Knitting is also very cheap. I’d love to learn something like carpentry or glass blowing, but these skills require a lot of time, room and expensive equipment to learn. Learning to knit only requires two pencils (smooth pencils make perfectly passable knitting needles) and a few dollars in yarn. Crochet requires an actual crochet hook, which might set you back a few more dollars, but either way, you can get started for less than $10.
The learning curve for these skills is a lot steeper, too–if I mess up while building a table, I’ve got a ruined table; if I mess up while knitting, I just pull on the yarn and undo the piece.
So that’s why you’ll see people knitting: it’s easy, cheap, and portable.
But this is only a superficial analysis, for Rubik’s cubes are also cheap and portable, and if not exactly “easy” to solve, you can certainly fiddle with them without any training.
But Rubik’s cubes are pointless, outside of the intellectual activity. With knitting, you get an actual item at the end.
To be honest, I think most women find many male-dominated hobbies, like sports or video games, pointless. I’ve seen many football games, and I can tell you the outcome of every single one: one team wins and the other loses. The world keeps on spinning and nothing changes except that some of the players get hurt. Similarly, men will sink hours into a video game and get nothing tangible as a reward.
I’ve said before that men seem to prefer hobbies in which they get to tinker. They like building their own rigs, repairing their own cars, optimizing settings, or trying to figure out the most efficient ways to do things. Women prefer to just get a product straight off the shelf, use it, and get the job done.
(Of course this tinkering does, long term, produce a lot of good things.)
The one sort of exception to this general rule is arts and crafts, where women dominate. A Norwegian study, for example, found that about 30% of Norwegian women between the ages of 18 and 50 had knitted something in the past year, but less than 7% of Norwegian men. Among older Norwegians, the gender divide was much wider–over 60% of women over 60 knitted, but the number of male knitters rounds to zero. So while most women you meet probably don’t knit, even fewer men are likely to pick up a ball of yarn.
I love the arts and crafts store; it’s like a candy shop for adults.
The desire to make little things for the home probably stems from the nesting instinct–a real instinct found most prominently in pregnant women, who are often struck by a sudden urge to make their homes as baby-friendly as possible. This urge is often far in excess of reason, resulting in women compulsively scrubbing the kitchen tile with a toothbrush or rearranging all of the furniture in order to vacuum under the couch. Personally, aside from all of the cleaning, I made things, including a child-sized easel, train table, and stuffed animals.
So the ultimate cause is probably a mild version of the nesting instinct–a desire to make one’s home warm and comfortable.
This is the time of year when posts and articles start popping up claiming that Jesus is just a rehash of the old Persian god Mithras (or Mithra, Mitras or some other spelling), lining up all sorts of improbable coincidences like “Mithras was born from a rock, and rocks can’t have sex, so clearly that’s the same as a virgin birth.”
There’s a much simpler and more sensible origin for Jesus: Judaism.
I know this is a bold thesis, but I think Judaism has several things going for it as the ultimate origin of Christianity, so hear me out.
Judaism: Has a tradition that a messiah will come.
Judaism: Has a holiday at the end of December. (It’s called Hanukkah.)
Judaism: Also has a spring holiday that coincides with Easter.
Judaism: Is literally the religion that Christianity sprang from.
The early Christian writer Hippolytus of Rome provides the first justification for situating Jesus’s birth on December 25th: because it is nine months after his conception, believed to coincide with the date of his death. This belief probably comes from an actual Jewish belief about prophets, albeit slightly mangled. For example, Moses is believed to have died on his own birthday (at 120 years old).
Furthermore, Hanukkah–also known as the “Feast of Dedication”–is celebrated on the 25th of the Jewish month of Kislev. It seems likely that when early Christians started using the Roman calendar, they translated the holiday directly to the 25th of December.
But wait, I hear you saying, doesn’t Christmas coincide with the Roman festival of Saturnalia?
It turns out that Saturnalia was celebrated on December 17th, not December 25th. The holiday was later extended to last until December 23rd, which still falls two days short of December 25th.
Since people often denigrate Hanukkah as just “the Jewish Christmas,” let’s go back and review what the holiday is actually about.
In Hebrew, Hanukkah (also spelled “Chanukah”–it’s a transliteration of a non-Indo European word written in a non-Latin alphabet, so there’s no one proper spelling) means “dedication.” The Feast of Dedication officially marks when the Maccabees reconquered Jerusalem (from the Seleucids, Syrian Greeks) and re-instated traditional Jewish temple services in the Temple, which the invaders had been using for sacrifices to Zeus.
The Feast of Dedication is actually mentioned in the New Testament, in John 10:
22 Now it was the Feast of Dedication in Jerusalem, and it was winter. 23 And Jesus walked in the temple, in Solomon’s porch. 24 Then the Jews surrounded Him and said to Him, “How long do You keep us in doubt? If You are the Christ, tell us plainly.”
One of the sacred objects in the Temple was the 7-armed menorah, famously depicted on the Arch of Titus. (The Hanukkah menorahs lit in people’s homes have 9 arms.) According to the Bible (Exodus 25), the plan for the menorah was revealed to Moses as part of the overall plan for the tabernacle in which the Ark of the Covenant resided:
31 Make a lampstand of pure gold. Hammer out its base and shaft, and make its flowerlike cups, buds and blossoms of one piece with them. 32 Six branches are to extend from the sides of the lampstand—three on one side and three on the other. 33 Three cups shaped like almond flowers with buds and blossoms are to be on one branch, three on the next branch, and the same for all six branches extending from the lampstand. …
39 A talent of pure gold is to be used for the lampstand and all these accessories. 40 See that you make them according to the pattern shown you on the mountain.
Once the Temple was built in Jerusalem, the menorah was placed inside, and it was this same menorah that the Maccabees were scrambling to find enough oil to light during their (re)dedication celebration. (Candles had yet to be invented.)
The menorah was looted by the Romans in 70 AD, after Titus conquered Jerusalem, and then probably carried off by the Vandals when they sacked Rome in 455. At this point, it disappears from history–and yet the sacred temple light lives on. You’ve probably seen it in a modern church, as altar lamps still hang in Catholic, Episcopal, and Anglican churches.
In Orthodoxy, what other traditions in Christianity call the altar we call the Holy Table, and the space beyond the ikon screen is called the altar. Among items upon an Orthodox Holy Table will be a cloth ikon of Christ containing a relic, the gospels, a special ‘box’ we call a tabernacle which will contain the reserved sacrament for the sick, and candles. In the Russian tradition the number of candles we use reflect the Jewish Menorah, a seven branched candlestick as expressed in Exodus.
Looking around the synagogue you will see the eastern wall, where the aron ha-kodesh (the holy ark) is located. The ark is the repository for the Torah scrolls when they are not in use. It also serves as the focus for one’s prayers. Above the ark is located the ner tamid–the eternal light — recalling the eternal light in the Temple (Exodus 27:20–21).
In each case, the sacred fire symbolizes the presence of God.
Looking back, deeper into the Bible, we find other instances where fire symbolized God’s presence:
The menorah itself, with multiple “branches” covered in buds and blossoms, is reminiscent of a flowering tree or bush, like the burning bush encountered by Moses.
When the Israelites walked through the desert, they were led by a pillar of cloud by day and of fire by night.
When Moses ascended Mt. Sinai to receive the 10 Commandments, God was again likened to fire (Exodus 24):
12 And the Lord said unto Moses, Come up to me into the mount, and be there: and I will give thee tables of stone, and a law, and commandments which I have written; that thou mayest teach them. …
15 And Moses went up into the mount, and a cloud covered the mount. …
17 And the sight of the glory of the Lord was like devouring fire on the top of the mount in the eyes of the children of Israel.
And in the story of Abram who became Abraham (Genesis 15):
9 He said, “Bring me a three-year-old female calf, a three-year-old female goat, a three-year-old ram, a dove, and a young pigeon.” 10 He took all of these animals, split them in half, and laid the halves facing each other, but he didn’t split the birds. 11 When vultures swooped down on the carcasses, Abram waved them off. 12 After the sun set, Abram slept deeply. A terrifying and deep darkness settled over him. …
17 After the sun had set and darkness had deepened, a smoking vessel with a fiery flame passed between the split-open animals. 18 That day the Lord cut a covenant with Abram:
And in the New Testament, Acts, Ch 2:
1 And when the day of Pentecost was fully come, they were all with one accord in one place. 2 And suddenly there came a sound from heaven as of a rushing mighty wind, and it filled all the house where they were sitting. 3 And there appeared unto them cloven tongues like as of fire, and it sat upon each of them. 4 And they were all filled with the Holy Ghost, and began to speak with other tongues, as the Spirit gave them utterance.
“Pentecost” is actually a Jewish holiday. It is partly a harvest festival and partly a celebration of when God gave Moses the books of the Torah (the first five books of the Bible).
I regard the Jewish holiday calendar as cyclical, with many layers of meaning built into each holiday. Pentecost, (aka Shavuot), is both an early harvest festival and a Torah festival. Sukkot is a fall harvest festival similar to Thanksgiving, but it also celebrates the time the Israelites spent wandering in the desert during the Exodus (with overtones of a Jewish wedding). Judaism is an old religion, and meaning has built up over time as people have lived their lives in different ways.
The fact that two different religions celebrate similar holidays on similar dates is not, a priori, a sign that they copied each other. I think it very likely that people of all sorts, from all over the world, have placed important holidays on dates like “the solstice” and “the harvest” because these are easy dates to keep track of. You know when the harvest is in; you know when the days are short or long. Other days, well, those are a little trickier to keep track of. Festivals that take place at the same time of year took on similar elements because those elements were common to the times–Sukkot and Thanksgiving both involve lots of food because they are harvest festivals, not because they are copying each other. Winter solstice celebrations involve fire because people light fires to keep themselves warm during cold winter months.
Christmas/Chanukah similarly show many layers of meaning. At the most basic, we have a solstice celebration: the temples and hearths need cleaning and the sacred fires are kindled at the start of winter. We have the historical observance of an actual, historical event–the victory of the Maccabees over the Seleucid Empire, as related in 1 Maccabees:
52 Early in the morning on the twenty-fifth day of the ninth month, which is the month of Chislev, in the one hundred forty-eighth year, 53 they rose and offered sacrifice, as the law directs, on the new altar of burnt offering that they had built. 54 At the very season and on the very day that the Gentiles had profaned it, it was dedicated with songs and harps and lutes and cymbals.… 56 So they celebrated the dedication of the altar for eight days, and joyfully offered burnt offerings; they offered a sacrifice of well-being and a thanksgiving offering.
Interestingly, the Seleucid Empire’s control of the Temple symbolically dies here on the same day it was born.
Hanukkah is also, according to the account given in 2 Maccabees, a delayed Sukkot festival (Sukkot is normally 8 days long in the diaspora). Sukkot, the festival of tabernacles, was probably delayed either due to the Seleucids banning traditional Jewish holidays, as the Maccabees complained, or due to the war raging in the country at the time. A fourth reason for Hanukkah is given in 2 Maccabees: the celebration of a similar miracle performed by Nehemiah during the rebuilding of the Temple a few hundred years before.
What does it mean for early Christian authors to assert that Jesus was born on Chanukah, died on Passover, and the Holy Spirit descended on the Apostles on Shavuot (Pentecost)? Not only does this situate Jesus firmly within the Jewish liturgical year, it is a specific claim about who Jesus is.
For Jews, God’s presence in the world is the Torah, hence why the eternal light burns near the Torah scrolls in synagogues. In churches, this is of course the Eucharist.
The transition from Word to Eucharist is eloquently expressed by John:
1 In the beginning was the Word, and the Word was with God, and the Word was God. …
5 And the light shineth in darkness; and the darkness comprehended it not. …
14 And the Word was made flesh, and dwelt among us, (and we beheld his glory, the glory as of the only begotten of the Father,) full of grace and truth.
For Christians, Jesus is the presence of God in the world symbolized by the menorah’s flame.
What does it mean for modern authors to assert that Jesus was Mithras? It is a claim that the New Testament is a bunch of malarkey and Jesus was, rather than an historical personage, a plagiarised pagan deity.
But let’s take a closer look at Mithras and the claimed parallels. According to Wikipedia:
Mithraism, also known as the Mithraic mysteries, was a Roman mystery religion centered on the god Mithras. The religion was inspired by Iranian worship of the Zoroastrian god Mithra, though the Greek Mithras was linked to a new and distinctive imagery, and the level of continuity between Persian and Greco-Roman practice is debated. The mysteries were popular among the Roman military from about the 1st to the 4th century CE.
Certainly Mithraism was popular in the area and some of its iconography is similar to later Christian paintings and statues. Christians may have borrowed stylistic motifs from Greek and Roman art, since there were no iconographic representations of God in traditional Judaism.
Mithra has the following in common with the Jesus character:
Mithra was born on December 25th of the virgin Anahita.
The babe was wrapped in swaddling clothes, placed in a manger and attended by shepherds.
He was considered a great traveling teacher and master.
He had 12 companions or “disciples.”
He performed miracles.
As the “great bull of the Sun,” Mithra sacrificed himself for world peace.
He ascended to heaven.
Mithra was viewed as the Good Shepherd, the “Way, the Truth and the Light,” the Redeemer, the Savior, the Messiah.
Mithra is omniscient, as he “hears all, sees all, knows all: none can deceive him.”
He was identified with both the Lion and the Lamb.
His sacred day was Sunday, “the Lord’s Day,” hundreds of years before the appearance of Christ.
His religion had a eucharist or “Lord’s Supper.”
Mithra “sets his marks on the foreheads of his soldiers.”
Mithraism emphasized baptism.
There are two problems with such lists. First, reducing any religion to bullet points tends to render it almost unrecognizable, and second, many of these points are just plain wrong.
Here’s a comparison of Judaism and Sikhism, for example:
Both religions stress the importance of wearing hats.
Sikh and Jewish prayers both assert the existence of one God.
Both started in Asia.
Sikhs have gurus, who are teachers. Judaism has rabbis, who are also teachers.
Sikhs and Jews both worship in dedicated religious buildings.
Both forbid religious iconography.
Both teach that God is formless and omnipotent.
Judaism has “10 commandments”. Sikhism has “10 Gurus”.
Sikhs have a ritual bathing ceremony called “Amrit Sanchar.” Jews have a ritual bathing ceremony that takes place in a ritual bathing pool, the mikvah.
Both have sacred texts.
There you have it. Judaism and Sikhism have so much in common, they must be copying each other. I’m sure if you met a Jew and a Sikh in person, you’d be hard pressed to tell them apart.
1. The first claim, that Mithras was born on December 25th, is basically wrong. Zoroastrians celebrate Mithras’s birth on the solstice, or December 21st. In the 4th century AD, in some parts of the Roman Empire, the festival’s date was shifted to December 25th, probably due to issues with the calendar (leap years).
The claims that everyone was celebrating Mithras’s birthday on December 25th are extrapolated from Roman celebrations of the solstice (Sol Invictus). Since some people equated Mithras and the sun, the logic goes, therefore everyone who celebrated the solstice was actually celebrating Mithras.
2. Second, Mithras is most commonly depicted as born not from a virgin, but from a rock. The statues are very clear on this point. Only in a few isolated traditions is Mithras given a human mother; these were not the dominant traditions in the area. Even if we are generously metaphorical and allow the claim that Mithras was born in a cave, rather than directly from a rock, Jesus was not born in a cave. Jesus was born in a stable, where animals are kept, or possibly even the lower floor of a home where animals were kept during the winter.
3. Pretty much all babies were wrapped in swaddling clothes. I’ve swaddled my own babies. Oh no, I’ve produced gods.
4. I’ve found nothing confirming that Mithras was laid in a manger. He looks too big in most of his “birth from a rock” statues to be laid in anything, anyway, because he emerged fairly grown up.
5. Mithras is generally depicted attended by two torch bearers, Cautes and Cautopates. They symbolize sunrise and sunset, as Mithras is a solar deity. Occasionally, they are depicted holding shepherds’ crooks instead of torches.
The presence of shepherds at the nativity isn’t really an important theological point in Christianity. Neither is the fact that Cautes and Cautopates occasionally hold shepherds’ crooks in statues. These symbols are not meaningfully similar.
6. Empty claim: Anyone can walk around and teach things.
7. The claim that Mithras had 12 companions or disciples is taken from depictions of Mithras alongside the Zodiac. I don’t think anyone was claiming that Mithras was literally accompanied by Pisces and Cancer.
8. Empty: Pretty much every religious leader/saint/prophet has performed “miracles.”
9. Mithras did not sacrifice himself as a bull. He killed a bull. The bull-killing scene is the most common depiction of Mithras, so it’s hard to figure out how someone could get this wrong. Let’s let Wikipedia describe the scene:
In every mithraeum the centrepiece was a representation of Mithras killing a sacred bull, an act called the tauroctony.[a] … The centre-piece is Mithras clothed in Anatolian costume and wearing a Phrygian cap; who is kneeling on the exhausted bull, holding it by the nostrils with his left hand, and stabbing it with his right. … A scorpion seizes the bull’s genitals. A raven is flying around or is sitting on the bull. … The two torch-bearers are on either side, dressed like Mithras, Cautes with his torch pointing up and Cautopates with his torch pointing down. Sometimes Cautes and Cautopates carry shepherds’ crooks instead of torches.
The event takes place in a cavern, into which Mithras has carried the bull, after having hunted it, ridden it and overwhelmed its strength. Sometimes the cavern is surrounded by a circle, on which the twelve signs of the zodiac appear. Outside the cavern, top left, is Sol the sun, with his flaming crown, often driving a quadriga. A ray of light often reaches down to touch Mithras. At the top right is Luna, with her crescent moon, who may be depicted driving a biga.
In some depictions, the central tauroctony is framed by a series of subsidiary scenes to the left, top and right, illustrating events in the Mithras narrative; Mithras being born from the rock, the water miracle, the hunting and riding of the bull, meeting Sol who kneels to him, shaking hands with Sol and sharing a meal of bull-parts with him, and ascending to the heavens in a chariot.… On the back side was another, more elaborate feasting scene.
If you read that and conclude, “Yup, sounds just like Jesus, shepherds, and apostles,” I’m not sure what to say.
Since Mithras worship was part of a mystery cult, we have very few records of what actually went on in there. We have archaeological remains, of course, which show mostly feasting. This is the claimed “eucharist.” These were big feasts which left a fair amount of trash behind. Calling any ritual feast a “eucharist” or “last supper” is certainly a stretch–we might as well note that I have a supper every evening. (And besides, there is a much closer parallel to the ceremony with the bread and wine found in Judaism.)
The presence of many cherry pits in the Mithraic garbage indicates that much of that feasting took place in summer, when cherries are ripe–around the time of the summer solstice. If Mithras inspired so much Christian ritual, then why doesn’t Jesus have a summer celebration?
As for what Mithras was called, we have very few actual written texts on the religion–unlike Christians, Mithras’s worshipers did not go around telling people what they believed. There is one inscription on a wall that reads “et nos servasti . . . sanguine fuso” which translates to “And you have saved us… in the shed blood.” This is probably a reference to the killing of the bull, which was celebrated with feasting, rather than a Christ-like sacrifice.
There is one reasonably complete surviving text that might have been part of a “Mithraic liturgy,” and it bears no relation to anything that goes on in a Christian church service:
At this level (lines 537–585), the revelation-seeker is supposed to breathe deeply and feel himself lifted up, as if in midair, hearing and seeing nothing of mortal beings on earth. He is promised to see instead the divine order of the “visible gods” rising and setting. Ritual silence is prescribed, followed by another sequence of hissing, popping, and thirteen magic words: “Then you will see the gods looking graciously upon you and no longer rushing at you, but rather going about in their own order of affairs.” After a shocking crash of thunder, another admonition of silence, and a magic incantation, the disk of the sun is to open and issue five-pointed stars. The eyes are to be closed for the following prayer. …
Next to come forth are the seven Pole-Lords, wearing linen loincloths and with faces of bulls. They have seven gold diadems, and are also to be hailed individually by name. These have powers of thunder, lightning, and earthquakes, as well as the capacity to grant physical health, good eyesight and hearing, and calmness (lines 673–692). The two groups of seven, female and male, are both depicted in an Egyptian manner and represent the “region of the fixed stars.”
This might of course be some other mystery cult; there just aren’t any other texts that survived with enough complete sentences to actually read them. We do have a lot of statues similar to these pole-lords, but with the faces of lions instead of bulls.
These statues are wild and we have no idea what they were for or what they meant. They might be related to a particular demon in Zoroastrianism, or they might represent the “lion degree” of initiation into the cult’s rites, or something else entirely. At any rate, we don’t find these statues in modern churches, and their importance within the Mithraic mysteries has not been transferred to anything in Christianity.
Any claim that Mithras was “called this” or “compared to that” or given particular attributes is on shaky ground given the lack of written records. Nothing in the normal Mithraic iconography looks like the peaceful good shepherd of Christianity. His association with Lions and Lambs is part of his general association with the zodiac, which contains both Leo the Lion and Aries the Ram–as well as Cancer the Crab and Scorpio the Scorpion. Why is Jesus not associated with scorpions, if Mithras was?
Mithras did ascend into Heaven, because he is a solar deity associated with the sun, moon, and stars, and heaven is where the stars are. Zeus lives in a kind of “heaven,” too, as do Odin and Thor.
I hope I do not need to keep refuting individual points. I’ve found nothing about baptism or Sundays associated with Mithras, and as for the marks on the foreheads of Mithraic initiates, Christians do not normally go around with marks on their heads, except on Ash Wednesday as a sign of mourning.
And the son/sun pun doesn’t even work in Latin. (Or Greek.)
My personal opinion is that the Mithras Cult operated rather like a modern Masonic Lodge or even a Rotary Club: a men’s club that periodically feasted together. (Mithraic cults didn’t accept female members.)
There are plenty of obvious pagan practices that survive in Christianity–the Christmas tree, for example. And the influence of pagan Greek philosophers like Plato on early Christianity is a subject worth whole volumes. We could even ask how much of Judaism owes its origins to Zoroastrianism, especially parts developed during the Babylonian exile. But at this point, I think it’s safe to say that people promoting the “Jesus was Mithras” story are either completely ignorant of Mithraism or purposefully trying to denigrate Christianity.
Have a merry Christmas, Chanukah, Mithras Day, or whatever you celebrate.
Pick up a research paper on battery technology, fuel cells, energy storage technologies or any of the advanced materials science used in these fields, and you will likely find somewhere in the introductory paragraphs a throwaway line about its application to the storage of renewable energy. Energy storage makes sense for enabling a transition away from fossil fuels to more intermittent sources like wind and solar, and the storage problem presents a meaningful challenge for chemists and materials scientists… Or does it?
In late 2011, the Red Cross launched a multimillion-dollar project to transform the desperately poor area, which was hit hard by the earthquake that struck Haiti the year before. The main focus of the project — called LAMIKA, an acronym in Creole for “A Better Life in My Neighborhood” — was building hundreds of permanent homes. …
The Red Cross says it has provided homes to more than 130,000 people. But the actual number of permanent homes the group has built in all of Haiti: six. …
In statements, the Red Cross cited the challenges all groups have faced in post-quake Haiti, including the country’s dysfunctional land title system.
“Like many humanitarian organizations responding in Haiti, the American Red Cross met complications in relation to government coordination delays, disputes over land ownership, delays at Haitian customs, challenges finding qualified staff who were in short supply and high demand, and the cholera outbreak, among other challenges,” the charity said.
… While the group won’t provide a breakdown of its projects, the Red Cross said it has done more than 100. The projects include repairing 4,000 homes, giving several thousand families temporary shelters, donating $44 million for food after the earthquake, and helping fund the construction of a hospital.
Analysing population genomic data from killer whale ecotypes, which we estimate have globally radiated within less than 250,000 years, we show that genetic structuring including the segregation of potentially functional alleles is associated with socially inherited ecological niche. Reconstruction of ancestral demographic history revealed bottlenecks during founder events, likely promoting ecological divergence and genetic drift resulting in a wide range of genome-wide differentiation between pairs of allopatric and sympatric ecotypes. Functional enrichment analyses provided evidence for regional genomic divergence associated with habitat, dietary preferences and post-zygotic reproductive isolation. Our findings are consistent with expansion of small founder groups into novel niches by an initial plastic behavioural response, perpetuated by social learning imposing an altered natural selection regime. The study constitutes an important step towards an understanding of the complex interaction between demographic history, culture, ecological adaptation and evolution at the genomic level.
African Pygmies practicing a mobile hunter-gatherer lifestyle are phenotypically and genetically diverged from other anatomically modern humans, and they likely experienced strong selective pressures due to their unique lifestyle in the Central African rainforest. To identify genomic targets of adaptation, we sequenced the genomes of four Biaka Pygmies from the Central African Republic and jointly analyzed these data with the genome sequences of three Baka Pygmies from Cameroon and nine Yoruba famers. … Our two best-fit models both suggest ancient divergence between the ancestors of the farmers and Pygmies, 90,000 or 150,000 years ago. … We found that genes and gene sets involved in muscle development, bone synthesis, immunity, reproduction, cell signaling and development, and energy metabolism are likely to be targets of positive natural selection in Western African Pygmies or their recent ancestors.
NY Times, blaming people for their own deaths:
If only people knew how much the past sucked, they’d die less.
From 2015, but still a good example of the absurd and arbitrary nature of the state we live in: Texas police hit Organic farm with Massive SWAT Raid:
Members of the local police raiding party had a search warrant for marijuana plants, which they failed to find at the Garden of Eden farm. But farm owners and residents who live on the property told a Dallas-Ft. Worth NBC station that the real reason for the law enforcement exercise appears to have been code enforcement. The police seized “17 blackberry bushes, 15 okra plants, 14 tomatillo plants … native grasses and sunflowers,” after holding residents inside at gunpoint for at least a half-hour, property owner Shellie Smith said in a statement. The raid lasted about 10 hours, she said.
In preparation for the holidays, I have been 3D printing (and crocheting) like a madman. Right now I am printing what will eventually be an 8-inch tall Eevee, a Pokemon.
But is 3D printing worth the cost?
My very rough, back of the envelope calculation after about a year of printing is that if you have enough things you want to print, then 3D printing is easily worthwhile.
A spool of PLA printer filament + shipping runs about $25. You can get more expensive filaments, or you can get the cheaper ones and paint them. I paint. (I’ve run the numbers on recycling waste filament/making your own, but filament is cheap enough that it isn’t worth it unless you’re running a really big operation.)
I usually print toys or educational items like Neanderthal skulls for my kids. I’m not sure exactly how many prints you can get off a single spool of filament–it depends on what you’re printing, how much infill you use, how many supports you need, etc–but I haven’t run out of any of my spools, yet.
Here is a nice set of hominid skulls (reproductions) being sold for $368 (on sale! Normally $472!). It’s not clear exactly what the dimensions on these are, but let’s assume they’re full size. Given how much I’ve printed so far without running out, I could probably print at least two skulls per spool, for a printing cost of <$75.
That works out to almost $100 of savings per spool of home-printed skulls.
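For what it’s worth, the back-of-envelope arithmetic can be sketched out. Note that the set size below is my own guess (the post doesn’t say how many skulls are in the set), so treat the exact figures as illustrative:

```python
# Rough print-vs-buy math for the skull set. All inputs are the post's
# estimates except skulls_in_set, which is an ASSUMED value.
spool_cost = 25.0        # PLA filament + shipping, per spool
skulls_per_spool = 2     # conservative estimate from a year of printing
set_price = 368.0        # sale price of the commercial skull set
skulls_in_set = 6        # assumed set size, for illustration only

spools_needed = skulls_in_set / skulls_per_spool   # 3 spools
print_cost = spools_needed * spool_cost            # $75 in filament
savings_per_spool = (set_price - print_cost) / spools_needed

print(f"filament cost: ${print_cost:.0f}")             # $75
print(f"savings per spool: ${savings_per_spool:.0f}")  # ~$98, i.e. "almost $100"
```

Change the assumed set size and the per-spool savings shift accordingly, but the conclusion survives: as long as the commercial price is several times the filament cost, printing wins.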
Of course, that’s because someone is massively over-charging for skulls, and besides, how many skulls does the average person want hanging around their home?
Here is a Funko Pop Eevee, similar to the one I am printing right now, for $11. The material on these is probably a little nicer than PLA (which is pretty rigid), but mine will be twice as tall and less than half the price.
If toys are too silly for your tastes (perhaps because you don’t have kids), rest assured that you can print many tools and technical items, and the cost is still quite low compared to buying them.
The sheer variety of 3D printers available on Amazon these days is overwhelming–Resin printer for $250? Tronxy for $240? I know nothing about these brands because I don’t own one, but they have good reviews. While the highest quality printers still cost over $1000, I think it is safe to say that you can get a good printer for $500 or less, and if you are the sort of person who would otherwise buy (or sell) the things you can print with it, you can easily make back the cost.
The biggest expense in 3D printing is, surprisingly, not the machine or the filament, but your time. Printers are fiddly machines and there’s a fair amount of troubleshooting. (I recommend getting one with the auto-bed-leveling feature.) Getting prints to stick well enough that they don’t come off the plate while printing, but not so well that you can’t get them off after printing, is tricky.
The printer needs supervision while printing because sometimes prints come off the plate anyway, filament starts going everywhere, and the machine has to be shut down. And sometimes the nozzle gets clogged and needs cleaning.
So it’s probably best to get a 3D printer only if that sounds like the sort of thing you’d enjoy.
Here’s a side view of Eevee so you can see its tail.
Inspired by a question from Littlefoot, I went out to do a little sleuthing:
The process is something like: you kill a pheasant, bring it home, and string it up. Each day you go outside and tug on a feather to see how hard it is to pull it out. You keep doing this until eventually the feather comes out with little resistance. The meat can then be cooked.
I remember an anthropology professor reminiscing about buying food at open air markets somewhere in Africa, where refrigeration is non-existent and the meat is simply left out in the heat and “somehow everyone doesn’t die.” It’s a bit strange to us, because we’re inundated with messages that improper food handling will lead to the growth of horrible bacteria and death (I even refrigerate the eggs and butter, even though our grandmothers never did and the French still don’t), but our ancestors not only managed without refrigeration, sometimes they actually tried to make the meat rot on purpose.
Helpful Twitter user Stefan Beldie explains that traditionally, pheasants were killed, eviscerated, and then hung for 4-10 days, depending on the weather.
(Table: temperatures for beef, veal, and lamb steaks and roasts, starting at “Extra-rare or Blue (bleu)”–meat that is barely cooked at all.)
Of course, there is a bit of difference between food that is merely uncooked/barely cooked, and food that has been intentionally allowed to rot.
Here’s the tale of an Inuit (Eskimo) delicacy, walrus meat that has been allowed to decompose in a hole in the ground for a year (though I suspect not much decomposition happens for about half the year up in the arctic).
Before you judge, remember that cheese is really just rotten vomit.
Have you ever heard the story that early modern Brits used a bunch of spices on their meat to cover up the taste of rot?
One of the most pervasive myths about medieval food is that medieval cooks used lots of spices to cover up the taste of rotten meat. This belief is often presented in the popular media as fact, with no cited references. Occasionally though a source is mentioned, and the trail invariably leads to:
The Englishman’s Food: Five Centuries of English Diet
J.C. Drummond, Anne Wilbraham
First published by Jonathan Cape Ltd 1939
… It is not surprising to find that the recipe books of these times give numerous suggestions for making tainted meat edible. Washing with vinegar was an obvious, and one of the commonest procedures. A somewhat startling piece of advice is given in the curious collection of recipes and miscellaneous information published under the title of The Jewell House of Art and Nature by ‘Hugh Platt, of Lincolnes Inne Gentleman’ in 1594. If you had venison that was ‘greene’ you were recommended to ‘cut out all the bones, and bury [it] in a thin olde coarse cloth a yard deep in the ground for 12 or 20 houres’. It would then, he asserted, ‘bee sweet enough to be eaten’.”
As Daniel Myers notes, washing with vinegar was not done to reduce spoilage, but to tenderize and get rid of the “gamey” taste of some meats. As for burying your meat to make it less spoiled, this is clearly absurd:
The example that Drummond does give is most certainly not for dealing with spoiled meat. He misinterprets the word “greene” to mean spoiled, when in fact it has the exact opposite meaning – unripe. Venison, along with a number of other meats, is traditionally hung to age for two or three days after butchering to help tenderize it and to improve the flavor. With this simple knowledge in mind, Platt’s instructions are clearly a way to take a freshly butchered carcass and speed up the aging process so that it may be eaten sooner.
Similar instructions for rapidly aging poultry can be found in Ménagier de Paris.
Item, to age capons and hens, you should bleed them through their beaks and immediately put them in a pail of very cold water, holding them all the way under, and they will be aged that same day as if they had been killed and hung two days ago.
The goal of these recipes is not to cover up rot, but to speed up the rotting (or “aging”) process.
Myers also notes that the idea of putting spices on rotten meat is also absurd because spices were horribly expensive–often worth their weight in gold. It would be rather like someone looking at gold-leaf-wrapped caviar and concluding the gold was there to distract the peasants from the fact that fish eggs are disgusting. You would have completely misread the dish. In the Medieval case, it would be cheaper to buy fresh meat than to dump spices on it.
Now, to be clear, what I’ve been calling “rot” is really more “aging.” We only think of it as rotting because we are accustomed to throwing everything in the refrigerator as soon as we get it.
Three factors affect the tenderness of meat in all animals, whether it be beef cattle or pheasant: background toughness, rigor mortis and aging the meat.
Background toughness results from the amount of collagen (connective tissue) in and between muscle fibers. The amount of collagen, as well as the interconnectivity of the collagen, increases as animals get older, explaining why an old rooster is naturally tougher than a young bird. Rigor mortis is the partial contracting and tightening of muscle fibers in animals after death and results from chemical changes in the muscle cells. Depending on temperature and other factors, rigor mortis typically sets in a few hours after death and maximum muscle contraction is reached 12 to 24 hours after death. Rigor mortis then begins to subside, which is when the aging (tenderization) of the meat begins.
Tenderization results from pH changes in the muscle cells after death that allow naturally occurring proteinase enzymes in cells to become active. These enzymes break down collagen, resulting in more tender meat. In beef cattle, the aging process will continue at a constant rate up to 14 days, as long as the meat is held at a proper and consistent temperature, and then decreases after that. In fowl, the rate of tenderization begins to decline after a few days.
A common misconception is that bacteria-caused rotting is responsible for meat tenderization, and this is why many find the thought of aging game repugnant. … Maintaining a constant, cool temperature is key to preventing bacterial growth when aging meats. The sickness causing E. coli bacteria grows rapidly at temperatures at or above 60 F, but very slowly at 50 F.
Several interesting results from this study. They hung up both pheasants and chickens. The pheasants showed very little microbial growth in the first two weeks, whereas the chickens started turning green on day five. This is probably a result of chickens having more bacteria in them to start with, a side effect of the crowded, disease-ridden conditions chickens are typically raised in.
A taste testing panel found that pheasants that had hung for at least three days tasted better than ones that had not, with some panel members preferring birds that had aged considerably longer.
So if you plan on hunting pheasant any time soon, consider letting it age for a few days before eating it–carefully, of course. Don’t give yourself food poisoning.
This is a good example of a common misconception: that physical space per person matters.
Things that actually matter:
1. Water per person
2. Farmland per person
3. Cost of housing near city centers
4. Commuting time to city centers
Thing is, while we still eat food, our economy has been, since the late 1800s, something we describe as “industrial” (and now “post-industrial”). This means that the vast majority of people have to live in cities instead of farms, because industries are in cities.
Don’t get your political information from anyone who doesn’t know we live in an industrial (post-industrial) economy, folks.
One of the side effects of living in an industrial/post-industrial economy is that, by necessity, you end up with uneven population densities. We don’t plop cities down on farmland (not if we want to eat), and we don’t try to grow potatoes in city medians.
So a pure measure of “density” is meaningless.
In an agrarian economy, land is the most important resource. In an industrial/post-industrial economy, proximity to industry is itself a kind of resource. People have to actually be able to get to their jobs. This is why in places like Silicon Valley, where housing is artificially restricted, the price of housing skyrockets. You can probably find some super cheap (relatively speaking) land a mere hundred miles away from SF, but people can’t commute that far, so they bid up the prices on what housing there is.
Of course it would be great if people could just build more housing in CA, but that’s a separate issue–regardless, if people could just move to one of those less populated areas, they would.
(By the way, South Africa is also a modern, industrial economy, which is why the idea of taking people’s farms and redistributing them to the masses is absurd from an economic point of view. South Africa is not an agrarian society, and very few people there actually want to be farmers. The goal is not economic growth, but simply to hurt the farmers.)
Many of our other resources are similarly “invisible”–that is, difficult to quantify easily on a map. Where does your water come from? Rain? Rivers? Aquifer?
How much water can your community use before you run out?
Water feels infinite because it just pours out of the faucet, but it isn’t. Each area has so much water it can obtain easily, a little more that can be obtained with effort, and after that, you’re looking at very large energy expenditures for more.
Groundwater Depletion in the United States (1900–2008). A natural consequence of groundwater withdrawals is the removal of water from subsurface storage, but the overall rates and magnitude of groundwater depletion in the United States are not well characterized. This study evaluates long-term cumulative depletion volumes in 40 separate aquifers or areas and one land use category in the United States, bringing together information from the literature and from new analyses. Depletion is directly calculated using calibrated groundwater models, analytical approaches, or volumetric budget analyses for multiple aquifer systems. Estimated groundwater depletion in the United States during 1900–2008 totals approximately 1,000 cubic kilometers (km3). Furthermore, the rate of groundwater depletion has increased markedly since about 1950, with maximum rates occurring during the most recent period (2000–2008) when the depletion rate averaged almost 25 km3 per year (compared to 9.2 km3 per year averaged over the 1900–2008 timeframe).
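As a quick sanity check of the figures in that abstract (nothing here beyond the numbers quoted above):

```python
# Reproducing the rates from the quoted USGS figures.
total_depletion_km3 = 1000.0       # cumulative depletion, 1900-2008
period_years = 2008 - 1900         # 108 years

avg_rate = total_depletion_km3 / period_years   # ~9.3 km3/yr (study reports 9.2)
recent_rate = 25.0                              # km3/yr averaged over 2000-2008

# The recent rate is nearly triple the century-long average:
acceleration = recent_rate / avg_rate
print(f"average: {avg_rate:.1f} km3/yr, recent/average: {acceleration:.1f}x")
```

In other words, roughly a tenth of the century’s total depletion happened in the final eight years.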
We’re not just “full”; we’re eating our seed corn. When the aquifers run out, well, the farms are just fucked.
There are some ways to prevent total aquifer collapse, like planting crops that require less water. We’re not totally doomed. But the idea that we can keep our present lifestyles/consumption levels while continuously expanding the population is nonsense.
Eventually something has to give. Someone has to scale back their consumption. Maybe it’s no more almonds. Maybe it’s less meat. Maybe it’s longer commutes or smaller houses.
No matter how you slice it, resources aren’t infinite and you can’t feed cities on deserts.
One more thought:
This is all technical, addressing the question of “How do we measure whether we are really full or not?”
No one has addressed the question of whether being “full” or not is even important.
You could look at my house and say, “Hey, your house isn’t full! There’s plenty of room for two more people in your living room,” and I can say “Excuse me? Who are you and why are you looking in my windows?”
This is my house, and it’s not my responsibility to justify to some stranger why I want X number of people living here and not Y number of people.
If I want to live alone, that’s my business. I am not obligated to take a roommate. If I want my sister and her husband and five kids to move in here with my husband and kids and their dogs, too, that’s also my business (well, and theirs.)
It is not a stranger’s.
Just because we can cram a lot of people into Nevada does not mean anyone is obligated to do so.
The diamond engagement ring isn’t “trad” by any means–while rings are ancient, the custom of giving one’s beloved a diamond was invented by the De Beers corporation a mere 80 years ago.
Indeed, the entire modern wedding is mostly a marketing gimmick–I guarantee your dirt-poor farming ancestors in the 1800s didn’t spring for a bachelor party (and shotgun marriages were more common than Camelot weddings)–but an insightful Twitter commentator whose name I have regrettably forgotten brings up an intriguing possibility: have diamond rings become so popular because they are an effective, hard-to-fake signal of future marital fidelity, thus taking the place of a traditional legal action, the “breach of promise to marry”:
A breach of promise to marry, or simply, “breach of a promise,” occurs when a person promises to marry another, and then backs out of their agreement. In about half of all U.S. states, a promise to marry is considered to be legally enforceable, so long as the promise or agreement fulfills all the basic requirements of a valid contract.
According to this theory, as legal enforcement of punishments for breaking marriage contracts fell by the wayside, people found new ways to insure their relationships: by spending a huge hunk of cash on a non-refundable diamond.
This is a really nice theory. It just has one problem: the amount of money spent on a diamond is a really poor predictor of marital quality. In fact, researchers have found the opposite:
In this paper, we estimate the relationship between wedding spending (including spending on engagement rings and wedding ceremonies) and the duration of marriages. To do so, we carried out an online survey of over 3,000 ever married persons residing in the United States. Overall, we find little evidence that expensive weddings and the duration of marriages are positively related. On the contrary, in multivariate analysis, we find evidence that relatively high spending on the engagement ring is inversely associated with marriage duration among male respondents. Relatively high spending on the wedding is inversely associated with marriage duration among female respondents, and relatively low spending on the wedding is positively associated with duration among male and female respondents.
People who spend more on diamonds (and weddings) get divorced faster, but it appears there is a sweet spot for rings between $500 and $2000. Not having a ring at all might spell trouble, for going below $500 also increases your chance of divorce–but not nearly as much as spending over $2000.
The sweet spot for the overall wedding is… below $1000. This is a little concerning when you consider that, according to PBS, the average couple spends about $30,000 on their wedding.
These findings may have an immediate cause: debt is bad for marriage, and blowing $30,000 on a wedding is not a good way to kick off your life together. There may also be a more fundamental cause: people who are impulsive and bad at financial planning may also be bad at managing other parts of their lives and generally make bad spouses.
There is one bright spot in this study:
Additionally, we find that having high wedding attendance and having a honeymoon (regardless of how much it cost) are generally positively associated with marriage duration.
This is probably because these are activities you do with people you actually like, and the sorts of people who have lots of relationships and like doing things with their friends are good at relationships.
So skip the wedding and just invite all of your friends to a big party in Tahiti.
(If you’re wondering, we spent about $1500 on our wedding and I hand made the rings, and we are now the most successfully and longest-married couple in my entire extended family.)
How did we all get bamboozled? The process by which diamond rings became the engagement staple is really something:
The concept of an engagement ring had existed since medieval times, but it had never been widely adopted. And before World War II, only 10% of engagement rings contained diamonds. …
Creating the Narrative:
The agency wanted to make it look like diamonds were everywhere, and they started by using celebrities in the media. “The big ones sell the little ones,” said Dorothy Dignam, a publicist for De Beers at N.W. Ayer. N.W. Ayer’s publicists wrote newspaper columns and magazine stories about celebrity proposals with diamond rings and the type, size, and worth of their diamonds. Fashion designers talked about the new diamond trend on radio shows.
N.W. Ayer used traditional marketing tools like newspapers and radio in the first half of the twentieth century in a way that kind of reminds me of inbound marketing today: In addition to overt advertisements, they created entertaining and educational content — ideas, stories, fashion, and trends that supported their brand and product, but wasn’t explicitly about it. According to The Atlantic, N.W. Ayer wrote: “There was no direct sale to be made. There was no brand name to be impressed on the public mind. There was simply an idea — the eternal emotional value surrounding the diamond.” Their story was about the people who gave diamonds or were given diamonds, and how happy and loved those diamonds made them feel.
People didn’t realize this was marketing. It just felt like “culture,” and to those who grew up with media saturated with “diamonds=love,” it already felt “traditional” by the time they were ready to marry.
Remember this–there’s a lot more “marketing” going on than just the explicit ads on TV.
In honor of family, Thanksgiving, and the discovery that my husband is about as Finnish as Elizabeth Warren is Cherokee, today’s post is on Finnish DNA. (No, I did not just “finish” the field of genetics.)
Finland is one of the few European countries that doesn’t speak an Indo-European language. (Well, technically a lot of them speak Swedish, but obviously that’s because of their long contact with Sweden.) Both Finnish and the Sami language hail from the appropriately named Finno-Ugric family, itself a branch of the larger Uralic family, which spreads across the northern edge of Asia.
While there is one cave that might have housed pre-ice age people in Finland, solid evidence of human occupation doesn’t start until about 9,000 BC (11,000 YA), when the ice sheets retreated. These early Finns were hunter-gatherers (and fishers–one of the world’s oldest fishing nets, from 8300 BC, was found in Finland). For a thousand years or so, the Baltic Sea was more of a Baltic Lake (called Ancylus Lake), due to some complex geologic processes involving uplift in Sweden that we don’t need to explore, but it seems the lake had some pretty good fishing.
Pottery shows up around 5300 BC, with the “Comb Ceramic Culture” or “Pit-Comb Ware.” According to Wikipedia:
The Xinglongwa are from northern China/inner Mongolia.
This distribution is a pretty decent match to the distribution of Finno-Ugric and Uralic languages before the march of Indo-European (Hungarian arrived in Hungary well after the IE invasion), so it’s good evidence that the language and pottery went together. Pottery usually indicates the arrival of agricultural peoples (who need pots to store things in), but in this case, the Comb Ceramic people were primarily nomadic hunter-gatherers/fishers/herders, much like modern people in the far north.
While I usually assume that the arrival of a new toolkit heralds the arrival of a new group of people, the general lifestyle continuity between hunter-gatherers with baskets and hunter-gatherers with pots suggests that they could have been the same people. DNA or more information about their overall cultures would tell the story with more certainty.
Oddly, one variety of pit-comb ware is known as “asbestos ware”, because the locals incorporated asbestos into their pots. The point of asbestos pots, aside from aesthetics (the fibers could make large, thin-walled vessels), was probably to accommodate the high temperatures needed for metalworking.
The Corded Ware people–aka the Yamnaya, aka the Indo-Europeans–showed up around 3,000 BC. They seem to have brought agriculture with them, though Mesopotamian grains didn’t take terribly well to the Finnish weather.
Bronze arrived around 2,000 BC (or perhaps a little later), having spread from the Altai mountains–a route similar to the earlier spread of Comb Ware pottery. (Wikipedia speculates that these bronze artifacts mark the arrival of the Finno-Ugric languages.) Iron arrived around 500 BC.
Since Finland is a difficult place to raise crops, people have gone back and forth between agriculture, hunting, fishing, herding, gathering, etc over the years. For example, around 200 BC, the “hair temperature” pottery disappeared as people transitioned away from agriculture, to a more nomadic, reindeer-herding lifestyle.
A new genetic study carried out at the University of Helsinki and the University of Turku demonstrates that, at the end of the Iron Age, Finland was inhabited by separate and differing populations, all of them influencing the gene pool of modern Finns.
(Gotta love how Science Daily trumpets this as “diverse origins”.)
The authors, Oversti et al, actually title their paper “Human mitochondrial DNA lineages in Iron-Age Fennoscandia suggest incipient admixture and eastern introduction of farming-related maternal ancestry” :
Here we report 103 complete ancient mitochondrial genomes from human remains dated to AD 300–1800, and explore mtDNA diversity associated with hunter-gatherers and Neolithic farmers. The results indicate largely unadmixed mtDNA pools of differing ancestries from Iron-Age on, suggesting a rather late genetic shift from hunter-gatherers towards farmers in North-East Europe. …
… aDNA has recently been recovered from c. 1500 year-old bones from Levänluhta in western central Finland18,19. Genomic data from these samples show a Siberian ancestry component still prominently present today, particularly in the indigenous Saami people, and to a lesser extent in modern Finns.
The authors have an interesting observation about a line running through Finland:
Within Finland, an unusually strong genetic border bisects the population along a northwest to southeast axis24,26,27, and is interpreted to reflect an ancient boundary between hunter-gatherer and farmer populations28. The expanse of agriculture north-east of this border was probably limited by environmental factors, especially the length of the growing season.
I thought this part was really neat:
A total of 95 unique complete-mitogenome haplotypes were observed among the 103 complete sequences retrieved: three haplotypes were shared between sampling sites and five within a site. In the latter cases, the placement of the skeletal samples suggests that the shared haplotypes have been carried by different individuals, who may have been maternally related: identical haplotypes (haplogroup U5a2a1e) were obtained from remains of a c. 5-year-old child (grave 18, TU666) and an older woman (grave 7, TU655) from Hollola.
Obviously the death of a child is not neat, but that we can identify relatives in an ancient graveyard is. I have relatives who are all buried near each other, and if some future archaeologist dug them up and realized “Oh, hey, here we have a family,” I think that’d be nice.
The authors discovered something interesting about the direction of the introduction of agriculture.
If you look at a map of Finland, you might guess that agriculture came from the south west, because those are the areas where agriculture is practiced in modern Finland. You’d certainly be correct about the south, but it looks like agriculture was actually introduced from the east–it seems these early farmers didn’t fare well in eastern Finland, and eventually migrated to the west. Alternatively, they may have just failed/given up, and more farmers arrived later from the west and succeeded–but if so, they were related to the first group of farmers.
Overall, the authors found evidence of three different groups in the ancient graveyards: at the oldest site, a Saami-like population (found further south than modern Saami populations); a non-Saami group of hunter-gatherers; and Neolithic farmers.
The non-Saami hunter-gatherers had high rates of haplogroup U4, which is rare in modern Finns (Saami included). According to the article:
Instead, in contemporary populations, U4 exists in high frequencies in Volga-Ural region (up to 24% in Komi-Zyryans)36 and with lower frequencies around the Baltic Sea, such as in Latvians and Tver Karelians (both around 8%)37. Taking into account that U4 have been prevalent in neighboring areas among Scandinavian10,39,40,41,42,43 and Baltic hunter-gatherers12,13,44, Baltic Comb Ceramics Culture12,13,14 and in Siberia during the Early metal period11, we might be observing ancestries belonging to an earlier layer of ancient inhabitants of the region.
Anyway, it’s an interesting article, so if you’re interested in Finland or polar peoples generally, I hope you give it a read.
These skeletons can be divided into two groups: those for whom we have some historical evidence (e.g., Goliath, famous literary villain), and those with no evidence except images like this one.
Incidentally, modern man does not average 6 feet tall. The average American man, hailing from a well-fed cohort, is only 5′9″ (you think men are taller than they are because they all lie). The global average is a bit smaller, at about 5′7″.
Historically, people tended to be a bit shorter, probably due to inconsistent food supplies.
I have often seen it claimed that heights fell when people adopted agriculture, but most hunter-gatherers aren’t especially tall. The Bushmen, for example, are short by modern standards; I suspect that the pre-agricultural human norm was more Bushman than Dinka.
If we roll back time to look at our pre-sapiens ancestors, Homo erectus skeletons are estimated to have been between 4′8″ and 6′1″, which puts them about as tall as we are, but with a lot of variation (we also have a lot of variation). Neanderthals are estimated at about 5′4″–5′5″; Homo habilis was shorter, at a mere 4′3″. Lucy the Australopithecine, while female, was even shorter, similar to modern chimps.
On net, a few food-related hiccups aside, humans seem to have been evolving to be taller over the past few million years (but our male average still isn’t 6 feet.)
But does this mean humans couldn’t be taller?
The trouble with being unusually tall is that, unlike apatosauruses, we humans aren’t built for it. The tallest confirmed human was Robert Wadlow, at 8 feet, 11 inches. According to acromegalic gigantism specialist John Wass, quoted by The Guardian, it would be difficult for any human to surpass 9 feet for long:
First, high blood pressure in the legs, caused by the sheer volume of blood in the arteries, can burst blood vessels and cause varicose ulcers. An infection of just such an ulcer eventually killed Wadlow.
With modern antibiotics, ulcers are less of an issue now, and most people with acromegalic gigantism eventually die because of complications from heart problems. “Keeping the blood going round such an enormous circulation becomes a huge strain for the heart,” says Wass.
Ancient people, of course, did not have the benefit of antibiotics.
What about Bigfoot?
Well, Bigfoot isn’t real, but Gigantopithecus probably was.
Gigantopithecus … is an extinct genus of ape that existed from two million years to as recently as one hundred thousand years ago, at the same period as Homo erectus would have been dispersed, in what is now Vietnam, China and Indonesia, placing Gigantopithecus in the same time frame and geographical location as several hominin species. The primate fossil record suggests that the species Gigantopithecus blacki was the largest known primate species that ever lived, standing up to 3 m (9.8 ft) and weighing as much as 540–600 kg (1,190–1,320 lb), although some argue that it is more likely that they were much smaller, at roughly 1.8–2 m (5.9–6.6 ft) in height and 180–300 kg (400–660 lb) in weight.
They’re related to orangutans; unfortunately it’s difficult to find their remains because the Chinese keep eating them:
Fossilized teeth and bones are often ground into powder and used in some branches of traditional Chinese medicine. Von Koenigswald named the theorized species Gigantopithecus.
Since then, relatively few fossils of Gigantopithecus have been recovered. Aside from the molars recovered in Chinese traditional medicine shops, Liucheng Cave in Liuzhou, China, has produced numerous Gigantopithecus blacki teeth, as well as several jawbones.
Please stop eating fossils. They’re not good for you.
Unfortunately, since we only have teeth and jawbones from this creature, it’s hard to tell exactly how tall it was.
Let’s just estimate, then, a maximum human height around 10 feet. After that, your heart explodes. (Joking. Sort of.)
Let’s start with Goliath.
The Philistines were a real people–one of the “Sea Peoples” who showed up in the Mediterranean during the Bronze Age Collapse:
In 2016, a large Philistine cemetery was discovered near Ashkelon, containing more than 150 dead buried in oval-shaped graves. A 2019 genetic study found that, while all three Ashkelon populations derive most of their ancestry from the local Levantine gene pool, the early Iron Age population was genetically distinct due to a European-related admixture … According to the authors, the admixture was likely due to a “gene flow from a European-related gene pool” during the Bronze to Iron Age transition…
The inscriptions at Medinet Habu consist of images depicting a coalition of Sea Peoples, among them the Peleset, who are said in the accompanying text to have been defeated by Ramesses III during his Year 8 campaign. In about 1175 BC, Egypt was threatened with a massive land and sea invasion by the “Sea Peoples,” a coalition of foreign enemies which included the Tjeker, the Shekelesh, the Deyen, the Weshesh, the Teresh, the Sherden, and the PRST. … A separate relief on one of the bases of the Osirid pillars with an accompanying hieroglyphic text clearly identifying the person depicted as a captive Peleset chief is of a bearded man without headdress. This has led to the interpretation that Ramesses III defeated the Sea Peoples including Philistines and settled their captives in fortresses in southern Canaan; another related theory suggests that Philistines invaded and settled the coastal plain for themselves. The soldiers were quite tall and clean shaven. They wore breastplates and short kilts, and their superior weapons included chariots drawn by two horses. They carried small shields and fought with straight swords and spears.
Goliath’s height increased over time: the oldest manuscripts, namely the Dead Sea Scrolls text of Samuel from the late 1st century BCE, the 1st-century CE historian Josephus, and the major Septuagint manuscripts, all give it as “four cubits and a span” (6 feet 9 inches or 2.06 metres)…
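As a sanity check on that figure, the arithmetic works out. (Assumptions, not values from the text: a common ancient cubit of roughly 18 inches and a span of roughly 9 inches.)

```python
# Convert Goliath's reported height, "four cubits and a span," to modern units.
# Assumed approximations: 1 cubit ≈ 18 inches, 1 span ≈ 9 inches.
CUBIT_IN = 18
SPAN_IN = 9

total_inches = 4 * CUBIT_IN + 1 * SPAN_IN   # 81 inches
feet, inches = divmod(total_inches, 12)
meters = total_inches * 0.0254              # 1 inch = 0.0254 m exactly

print(f"{feet} ft {inches} in")   # 6 ft 9 in
print(f"{meters:.2f} m")          # 2.06 m
```

Which matches the manuscripts’ “6 feet 9 inches or 2.06 metres”–tall, but not supernatural.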
It looks like Goliath was tall, but only basketball player tall, not Guinness Book of World Records tall.
The shortest guy in the picture, “Maximinus Thrax,” was a real person and emperor of Rome from 235 to 238 AD. 8′6″ is at least within the range of heights humans can achieve, and he was, according to the accounts we have, very tall. Unfortunately, we don’t know how tall he was–the ancient accounts are considered unreliable, the Roman “foot” is shorter than the modern “foot,” and crucially, no one has dug up his skeleton and measured it.
So Maximinus was probably a tall guy, though not 8′6″ (that figure would only hold if the Roman foot equaled our modern foot).
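How much does the shorter Roman foot change the claim? A rough conversion, assuming the commonly cited estimate of about 0.296 m for the Roman pes (an assumption, not a figure from the text):

```python
# Convert the reported 8'6" from Roman feet to modern units.
# Assumed: Roman pes ≈ 0.296 m (a commonly cited estimate); modern foot = 0.3048 m.
ROMAN_FOOT_M = 0.296
MODERN_FOOT_M = 0.3048

height_roman_feet = 8.5                       # the reported 8'6"
height_m = height_roman_feet * ROMAN_FOOT_M   # height in meters
height_modern_feet = height_m / MODERN_FOOT_M

print(f"{height_m:.2f} m ≈ {height_modern_feet:.2f} modern feet")
```

That comes to about 2.52 m, or roughly 8′3″ in modern feet–still implausibly tall, which is one more reason to distrust the ancient accounts.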
Og the Rephaim:
Og, King of Bashan, is only known from the Bible, but might have been an actual king. We don’t have any chronicles from other countries that mention him (kings often show up in such chronicles because they make war, get defeated, send tribute, sign treaties, etc.), though there is one Agag of the Amalekites who does have a similar name.
Interestingly, there is one Og attested in archaeology, found in a funerary inscription which appears to say that if the deceased is disturbed, “the mighty Og will avenge me.”
The Bible claims that Og’s bed was 13 feet long. Wikipedia offers us an alternative explanation for this mysterious bed: a megalithic tomb:
It is noteworthy that the region north of the river Jabbok, or Bashan, “the land of Rephaim”, contains hundreds of megalithic stone tombs (dolmen) dating from the 5th to 3rd millennia BC. In 1918, Gustav Dalman discovered in the neighborhood of Amman, Jordan (Amman is built on the ancient city of Rabbah of Ammon) a noteworthy dolmen which matched the approximate dimensions of Og’s bed as described in the Bible. Such ancient rock burials are seldom seen west of the Jordan river, and the only other concentration of these megaliths are to be found in the hills of Judah in the vicinity of Hebron, where the giant sons of Anak were said to have lived (Numbers 13:33).
Og might have actually been a very tall person, though it is doubtful he was 13 feet tall. He might have been a fairly normal-sized person who had a very impressive megalithic tomb which came to be known as “Og’s Bed,” inspiring local legends. He also might not have existed at all. Until someone digs up Og’s body and measures it, we can’t say anything for sure.
Interestingly, I found two French giants, though neither of them, as far as I know, was found near Valence.
The Giant of Castelnau is known from three pieces of bone uncovered in 1890. If they are human, they are unusually large, but no research has been done on them since 1894, and even a crack team of Wikipedia editors has failed to uncover anything more recent on the subject.
I’d hold off judgment on these until someone within the past century actually sees them and confirms that they didn’t come from a cow.
Teutobochus, king of the Teutons, was a “giant” found in France in 1613. Unfortunately, he seems to have been a deinotherium–that is, an extinct relative of the elephant.
This is the last of the reasonable skeletons. The rest exist only in graphics like the one at the top of the post and articles discussing them–in other words, there’s more evidence for Paul Bunyan.
So far I’ve found no sources on the 15 foot Turkish giant. Yes, there are lots of people claiming it exists. No, there is not one photo of it.
Was a 19’6″ human skeleton found in 1577 A.D. under an overturned oak tree in the Canton of Lucerne? There are no records of it.
Any 23 foot tall skeletons near an unidentified river in Valence, France? Can’t find any.
And what about the 36 foot tall Carthaginian skeletons?
Giraffes, currently the tallest animals on earth, only reach 19 feet. T. rex stood 12–20 feet tall. Even the famous Apatosaurus was a mere 30 feet tall (though we don’t know how high it could swing its head).
If you’re talking about humans who were bigger than an Apatosaurus, you’re really going to have to pause and take a biology check–and also check to make sure you aren’t holding an Apatosaurus femur.
Humans could be bigger (or smaller) than they currently are, just as dinosaurs came in many different sizes (some of their descendants, like hummingbirds, are quite small), but different sizes require different anatomy. That’s why people with gigantism have heart trouble and very tall people die younger: we aren’t built for it. Humans aren’t designed to handle Apatosaurus-level weights; our hearts aren’t designed to pump blood that far. A 36 foot tall human couldn’t be a single individual with gigantism, nor even a whole family or tribe of unusually tall people–they’d have to have evolved that way over millions of years. They’d be their own species, and we’d have actual evidence that their bones exist.
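The anatomy problem can be made concrete with the square-cube law: if you scale a body up geometrically, its mass grows with the cube of its height, while bone cross-section (and heart output) grow only with the square, so the stress on the skeleton rises linearly with the scale factor. A crude sketch, using an assumed 6-foot, 180-pound baseline and ignoring all real anatomy:

```python
# Square-cube law: scale a 6-foot, 180-lb human up geometrically.
# Mass scales with height^3; supporting bone cross-section with height^2,
# so relative stress on bone (mass / area) grows linearly with the scale factor.
BASE_HEIGHT_FT = 6
BASE_WEIGHT_LB = 180

for height_ft in (10, 19.5, 36):
    k = height_ft / BASE_HEIGHT_FT      # linear scale factor
    weight = BASE_WEIGHT_LB * k**3      # mass grows as k^3
    bone_stress = k                     # relative skeletal stress grows as k
    print(f"{height_ft:>5} ft: ~{weight:,.0f} lb, "
          f"{bone_stress:.1f}x the skeletal stress")
```

Under this crude scaling, a 36-foot human would weigh nearly 39,000 pounds and put six times the stress on every square inch of bone–which is exactly why big animals are built differently, not just bigger.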
Incidentally, most of the sources I found discussing these skeletons, including ones using the graphic above, claim that evidence of these giants is being actively hidden or suppressed or destroyed by The Smithsonian, National Geographic, etc., because they would somehow disprove evolution by showing that humans have gotten shorter instead of taller.
This is absurd. Gigantopithecus was taller than any living ape (including humans), but he doesn’t disprove evolution. He doesn’t even disprove orangutans. A giant human skeleton would simply show that there was once a giant human–not that humans didn’t evolve.
Humans can evolve to be shorter–it has happened numerous times. Pygmies are living people who are much shorter than average–adult male Pygmies average only 5 feet 1 inch tall. Pygmoid peoples are just a little taller, and are found in many parts of the world.
Even shorter, though, were Homo floresiensis and Homo luzonensis. The remains we have so far uncovered of H. floresiensis stood a mere 3 feet, 7 inches, and luzonensis was similarly petite. Both of these hominins descended from much taller ancestors.
Evolutionists don’t need to hide the existence of giant skeletons, because evolution can’t be disproven by the existence of a tall (or short) skeleton. That’s just not how it works. The Smithsonian would love to display giant skeletons–if it had any. National Geographic would love to run articles on them. Such sensational relics would sell like hotcakes.
The problem is that no one can actually find any of these skeletons.
Man in his natural state, upon reaching adulthood, is struck with the urge: the urge to travel, to struggle, to conquer, and ultimately triumph (or die trying).
Migration is a goal of the young.
To be young is to struggle: against nature, against society, against himself, against the elements, against hunger, against failure.
To throw himself against the mountains, against the storms. To track and kill his own food. To survive against bears, monsters, enemies. To forge a path in the wilderness, chop down trees, build his own home.
Lion, wolf, or elephant, the young male is unlikely to stay in the pack of his birth. He must leave his mother’s side and forage in the wilderness until he has the strength to lead the pack or found his own.
Some organisms are motile throughout their lives, but others are adapted to move or be moved at precise, limited phases of their life cycles. This is commonly called the dispersive phase of the life cycle. The strategies of organisms’ entire life cycles often are predicated on the nature and circumstances of their dispersive phases. …
Due to population density, dispersal may relieve pressure for resources in an ecosystem, and competition for these resources may be a selection factor for dispersal mechanisms.
Dispersal of organisms is a critical process for understanding both geographic isolation in evolution through gene flow and the broad patterns of current geographic distributions (biogeography).
A distinction is often made between natal dispersal where an individual (often a juvenile) moves away from the place it was born, and breeding dispersal where an individual (often an adult) moves away from one breeding location to breed elsewhere.
Modern man, in modern cities, is deprived of struggle. The land is already cleared. The houses are already built. The food arrives pre-killed in the grocery store. The map has already been drawn and your GPS tells you where to go.
We have made ourselves a paradise and find it wanting.
Like a rooster told not to crow, modern man flings himself into ersatz struggles: video games, online flame wars, antifa larping. We turn to empty screeching to make ourselves feel like we’re doing something good.
But for those who have just arrived, getting to the city alone is a success.