Jeff Cooper

Discrimination In Our Time?

17 Nov 2010

Originally written for Interpretation and Argument, 76-101 K, Fall 2010, Taught by David Haeselin


The potential for the acceptance or rejection of post-human beings in our society

It's in our nature, as humans, to segregate things. While this particular characteristic has helped us to survive as a species – separating good from bad, healthy from poisonous, friend from foe – the intrinsic fear of the unknown that allows us to do this so reliably also often hinders our acceptance of new ideas and ways of life. We are naturally hostile towards new cultures and peoples; the slave trade, the Crusades, and the brutalities of European empires in Africa and Asia are all prime examples. But how would we, as a species and as a culture, react to an entity foreign to all of us – indeed, not even “human” in the sense that we know the word – yet just as intelligent as any of us, or more so?

Some would argue that this will never happen. The “singularity” – a word that exists in some nether-region between science and science fiction and denotes the point at which humans eclipse themselves by creating an artificial intelligence superior to their own – might indeed never arrive. But both our current trajectory and our current reality as a technologically advanced civilization seem to suggest that it will, some day, in some shape or form, happen. Some, the more outgoing among us, suggest that we will willingly adopt and embrace the new entities as equals or even as superiors. Others posit that our fear of the unknown will prevail and that we will – or perhaps, for whatever reason, must – reject them as equals and do whatever it takes to subjugate them. Finally, some argue that we as humans will slowly transition into some post-human being, absorbing new technologies and ways of thinking until we have surpassed what we know today as “humanity.” These three groups, labeled here the adopters, the rejectors, and the absorbers, represent three very different outlooks on the issue of post-human beings.

The first group, the “adopters,” suggests that we will – or at least should – accept post-human entities as a full part of our society. This group is more optimistic and, in some sense, a progressive movement: progressives throughout history have tended to fight for the social status of previously shunned or neglected groups, though often as a byproduct of a larger goal. Those who predict adoption tend to see “post-humans” not as some entirely new intelligence but instead as exactly what the name implies: the thing that comes after humanity. In “How to Be Inhuman,” Ronald Bailey argues that we have already begun transforming ourselves into post-humans and that we have willingly embraced the changes we made. He notes that “from childhood on, we are constantly exhorted to improve ourselves by taking more classes, participating in more job training, and reading good books,” and argues that this is in fact modification (Bailey). These practices, he posits, change the structure of the brain and allow it to perform better than it previously could: intentional manipulation of the self at a biological level. Yet we are, in his words, “exhorted” to do these things. Society has, according to Bailey, actively embraced the idea of modification as long as it seems like a natural phenomenon. Erik Davis, the author of “Don't Look Back,” seems to agree, though he takes a slightly different approach: the “modification” that makes us post-human isn't necessarily conditioning and the slow pushing of our biological boundaries, but instead the science fiction-esque technology which has become essentially an extension of our bodies. He argues that science fiction has become less a genre of fantasy and closer to “a basic mode of being in...our world” (Davis). His basic premise is that the fantastical – the fictional – is less an escape into another world than a glimpse of what our future might look like.
The idea of a direct link from our brains to information isn't a fantasy, he seems to argue, so much as an oversimplification. A modern smartphone can always be on your person and give you very rapid access to most of human knowledge: is this not a nearly direct link between knowledge and the mind? Davis argues, if somewhat indirectly, that our modern pervasive technology gives us a super-human and therefore post-human perspective on the world. We don't just adopt or embrace this; we actively pursue new and better ways of doing it. Davis and Bailey both argue that we are in the process of adopting post-human traits. While they disagree on the specifics of how we're doing it or what the traits actually are, they agree completely that we are only helping ourselves and advancing our species by doing so. They are firm believers in the adoption-acceptance paradigm of dealing with post-humanity.

Yet others seem more hesitant. These people, the “rejectors,” resist post-humanity because it means, by definition, losing what we know today as “humanity.” In “The Political Control of Biotechnology,” Francis Fukuyama expresses such concerns. As early as his introduction, he states that “certain technologies...deserve to be banned outright” (Fukuyama 183). His basic argument throughout that work – which deals with what he sees as the need for regulation of many technologies – is that the potential for destruction and chaos outweighs the potential for positive progress. “Science by itself cannot establish the ends to which it is put,” he writes. “Science can...uncover the physics of semiconductors but also the physics of atomic bombs” (Fukuyama 185). His argument is not that scientific development should be halted, but that it should be strictly monitored and regulated so that only carefully controlled, “good” results come out. This is intrinsically opposed to the idea of post-human modification: humans are capable of both good and bad, so any modification worth noting will extend their capacity for both good and bad simply by extending their capabilities overall. Fukuyama's argument is that technology and science cannot distinguish right from wrong and therefore should be limited. By this logic, post-humans – human-like beings or otherwise – should not be allowed to exist because of their potential for causing even greater chaos than current humans are capable of. Thomas Pynchon suggests that, even if for potentially the wrong reasons, many people will agree with Fukuyama and reject any semblance of post-humanity. Pynchon argues that many will adopt a “Luddite” attitude towards post-human advances: when they feel obsolete, they will rebel, smashing anything and everything in the way of regaining superiority.
Pynchon argues that this mentality, which ultimately boils down to a desire to remain the highest, most advanced form of being at all costs, is natural. Indeed, it may very well be a manifestation of our intrinsic fear of the unknown. Both authors argue that people should and will reject technological advances that would remove them from their firm position of power, even if those advances would place them on a higher intellectual pedestal at the cost of greater dependence and danger. The rejectors and the adopters have a fundamental disagreement on this point: whereas the adopters would relinquish their absolute intellectual dominance to pursue greater intellectual ability, the rejectors would rather maintain their monopoly on superiority even at the expense of superior knowledge.

Finally, many argue that post-humanity is simply the next phase of humanity. They argue that we will, as a species and as a culture, absorb new technologies until we are unrecognizable to our present selves. Every author mentioned so far, whether he supports adoption or rejection of post-humanity, has agreed somewhat with the statement that our modified, technologically augmented reality is in some ways post-human. Authors like Nicholas Carr argue that our future may be existence as some synthesis of humanity and information technology. In “Is Google Making Us Stupid?,” Carr suggests that our immediate access to nearly all information has shifted the way we process information from depth-based to breadth-based: knowing a lot about something is no longer as important or useful as knowing about a lot of somethings, because we can always look up details. The human part of our brains is used to connect relevant data together, but the actual storage of that information is relegated to a cloud of silicon. He considers the desire of Sergey Brin and Larry Page, the founders of Google, to turn their search engine into “an artificial intelligence, a HAL-like machine that might be connected directly to our brains” (Carr). He presents his views on why this could potentially be a good or bad thing, but ultimately concludes that something along those lines seems inevitable in our future as a species. His article, which catalogs how digital media is changing the way we process information, hints that as we create new ways to augment our brains with data, we are also changing our brains to augment that information. If this cycle is extrapolated, it is easy to see how our descendants could be considered “post-human”: streams of information would be as much a part of them as their limbs. Vernor Vinge, the author of the seminal “On the Coming Technological Singularity,” calls this slide into post-humanity “Intelligence Amplification (IA)” (Vinge).
He describes IA as “something that is proceeding very naturally, in most cases not even recognized by its developers for what it is” (Vinge). This progression is happening now, as evidenced by Vinge and the other authors cited, and Vinge asserts that its continuation is inevitable. N. Katherine Hayles makes this point as well in “How We Became Posthuman.” Beyond the claim embedded in her title, she offers the simple but powerful idea that “the posthuman need not be antihuman” (Hayles 289). The thought that humanity will slip into post-humanity without even noticing is a rather comforting alternative to completely embracing or rejecting a new post-human entity.

The idea of a post-human being, whether it is a descendant of humans or something created by humans, is unsettling. Some argue that we should plunge in and embrace whatever that post-human thing is, allowing it to help us on our quest for enlightenment even at the expense of our own monopoly on knowledge, while others argue that losing our place as the most powerful or smartest beings we know of would spell certain doom for our species. Yet a third group proposes a more abstract idea: that those “post-humans” are just that, the next logical step in humanity, and that we won't get a choice of whether to embrace or reject our future. Who is right is unknowable until some post-humanity exists, and even then rejectors and adopters will argue over the merits of intellectual dominance versus intellectual depth, respectively. If the adopters are right, we may see a peaceful coexistence or a complete takeover of humanity. If the rejectors win out, we'll either stay the dominant intelligence forever or be violently overthrown. If the absorbers are right, we may never even notice that we've become post-human, and we'll wonder “what if...” forevermore. But by the time this happens, it may be too late.

Works Cited

Carr, Nicholas. "Is Google Making Us Stupid?" The Atlantic July 2008. Web. 27 Oct. 2010.

Davis, Erik. "Don't Look Back." Techgnosis. 1999. Web. 27 Oct. 2010.

Fukuyama, Francis. Our Posthuman Future: Consequences of the Biotechnology Revolution. New York: Farrar, Straus and Giroux, 2002. 181-94. Print.

Hayles, N. Katherine. "Conclusion." How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago, IL: University of Chicago, 1999. 282-91. Print.

Bailey, Ronald. "How to Be Inhuman." Reason Magazine. 21 Sept. 2005. Web. 27 Oct. 2010.

Pynchon, Thomas. "Is It OK to Be a Luddite?" The Modern Word. 28 Oct. 1984. Web. 27 Oct. 2010.

Vinge, Vernor. "On the Coming Technological Singularity." 1993. Web.
