Maybe the Uncanny Valley has nothing to do with avoiding sick/dead people, maybe nothing to do with anything specifically human-oriented at all, but with plain-ol’ conceptual category violations? Suppose you are trying to divide some class of reality into two discrete categories, like “plants” and “animals” or “poetry” and “prose”. Edge cases that don’t fit neatly into either category may be problematic, annoying, or otherwise troubling. Your brain tries to cram something into Category A, then a new data point comes along, and you switch to cramming it into Category B. Then more data and back to A. Then back to B. This might happen even at a subconscious level, flicking back and forth between two categories you normally assign instinctively, like human and non-human, forcing you to devote brain power to something that’s normally automatic. This is probably stressful for the brain.
In some cases, edge cases may be inconsequential and people may simply ignore them; in other cases, though, group membership matters. People seem particularly keen on arguing over someone's inclusion in various human groups, hence accusations that people are "posers" or are otherwise claiming membership they may not deserve.
Some people may prefer discrete categories more strongly than others, and so be more bothered by edge cases; other people may be more mentally flexible, or capable of dealing with a third category labeled "edge cases". It's also possible that some people do not bother with discrete categories at all.
It would be interesting to test people's preference for discrete categories, and then see whether this correlates with disgust at humanoid robots or with any particular political identities.
It would also be interesting to see if there are ways to equip people with different conceptual paradigms for dealing with data that better accommodate edge cases; a "Core vs. Periphery" approach, for example, may work better in some cases than discrete categories.