Tag Archives: personhood

Obligatory Planet of the Apes post


I just taught a class on biotechnology and animals, and am now being pummeled by a flurry of Planet of the Apes-related posts. As usual, such posts are a Rorschach-like template for the blogger’s political leanings, so I figured I may as well do the same. I haven’t seen the movie, and, thanks to our separation-anxiety doggie, probably won’t until it’s on Netflix, but I do have some thoughts, and I’ll channel them through this interesting piece on “Creating Non-Human People” from Oxford’s Practical Ethics blog. The trope of “super-intelligent, violent, most likely malicious animals taking over the world” is Hollywood summer entertainment, but the interesting issues here actually concern the ethics of enhancement, personhood, and species integrity.

One’s views of biotechnology will be heavily influenced by one’s views on science, and on whether the critique of ‘playing God’ is a useful one. (I don’t think it is, for various reasons, but mostly because we’ve been playing God in the dark for 10,000 years, and the double helix let us turn the lights on. One’s view on this issue will also color a range of related issues; hence, for example, environmentalism’s uneasy relationship with science.)

That said, I think there are good reasons to proceed with a great deal of caution. The ethics of animal cloning, and of genetic manipulation more generally, raise a number of significant welfare concerns. The irony is that the lay bioethical position has turned a blind eye to all manner of grotesque genetic manipulation of nonhuman animals, while anything resembling human chimerism is verboten. In other words, the perceived ethical problem with creating cognitively ‘enhanced’ nonhuman animals is that they would then be more likely to qualify for personhood and, as such, increased moral protection. This is ironic because, as Rollin notes, this kind of Cartesianism is its own undoing: if it’s wrong to test on species that are sufficiently ‘like us’, the practice collapses on itself, since their being like us is the very reason we do the testing in the first place.

An empathic and nonzero civilization. . .but for whom?

This video by Cambridge’s Simon Baron-Cohen does a good job introducing the relation between empathy, pathology, and social trust (and see here for a good RSA Animate on Jeremy Rifkin’s Empathic Civilization). Baron-Cohen has done a lot of interesting work on empathy and the male/female brain, on empathy and autism/Asperger’s, and on measuring empathy. I was immediately struck, however, by the way he chose to define empathy: “the drive to identify (cognitive) and appropriately respond to (affective) another person’s feelings.” Further into the talk, some of the research he draws on implies that “persons” and “objects” are the only relevant categories under discussion. I guess this is what makes me an ‘animal rights activist’ (as Wikipedia’s definition of empathy puts it), because I think the natural extension of Baron-Cohen’s argument–that questions about empathy have right and wrong answers, and that one of the jobs of psychology is to figure out how to get more people to answer ‘correctly’–is far more radical than even he may acknowledge.

What distinguishes empathy from sympathy, compassion, and pity? This is a difficult question to answer concretely, but links like this have me thinking that the reason empathy might be so commonly perceived as ‘person-oriented’ rather than ‘sentient-or-semi-sentient-being-oriented’ is that empathy, unlike the other terms, involves literally feeling the other’s mental state (this is where the much-hyped ‘mirror neurons’ come in). It could follow, I suppose, that this requires a certain similarity with the other’s mental state, such that empathy would work best with other members of our species. Keeping in mind that this might be a semantic quibble, I don’t buy this argument. I could as much “feel” my dog’s pain when he slipped a vertebra last year as I could my wife’s when she tore her ACL.

To return to the radical implications of a high-empathy society: I strongly believe that such a society would treat nonhuman animals in a fundamentally different way than we do today, and that such a shift would entail a range of social, political, and economic reforms with far-reaching consequences. It’s easy to speak of expanding the domain of the nonzero (as against the zero-sum), and I’m all for this kind of policy (indeed, only a fool or an IR realist would be against it!), but introducing nonhuman animals into the moral calculus with anything less than a high discount rate will change the game in a basic way. And it should, because the level of structural violence against nonhuman animals in the world today is ignored only because of a conditioned moral blindness that would wither in the face of an empathic civilization.

So how to go about this? There are many possible routes, but when it comes to empathizing with nonhuman animals, I think one of the strongest is the priming of our moral sensibilities through art (sometimes called the sympathetic or aesthetic education), which, as Nussbaum and others have argued, is marvelously fecund. Others argue that fostering nonzero relationships tends to result in increased empathy, and this makes sense too, as long as the in-group/out-group distinction doesn’t stop at the species line. A range of other options exist, of course, all the way from the study of pathology by psychologists like Baron-Cohen to essentially sociobiological proposals that we engineer aggression out of our gene pool. The bioethics of the latter are troubling, obviously, but such proposals do reflect a trend toward a revived sociobiology in the guise of neuroscience. This trend takes many forms, though, and each needs to be addressed on its own merits.

If nothing else, Baron-Cohen’s research goes a long way toward explaining why I was the only male in my Animals and Public Policy class. This needs to change, but it seems the change can only go so far if he is right about the ‘male brain’.

Animalism and philosophy

The recent piece “The animal you are” by UCL philosophy professor Paul Snowdon was most striking to me for what it left out; for a piece on animality, there sure was a lot of focus on one particular animal. None of the arguments for or against “animalism” (the idea that the human animal is the same thing as the person, or self) even began to engage with nonhuman animal cognition, let alone with the people calling for nonhuman animal personhood for great apes and/or cetaceans.

Setting aside whether ‘person’ is the right word for chimps and dolphins, who clearly have at least some level of self-consciousness and use of reason (these are the criteria listed by Locke and repeated by Snowdon), I think any discussion of mind/body dualism has to seriously engage with the similarities and differences between human and nonhuman animal minds (the Sapolsky video in my first blog post is a good example of this). Snowdon writes that “if we are prepared to allow there might be entities which merit being described as persons who are not human – say God, or angels, or Martians, or robots, – then animalism should not rule them out.” It’s disturbing to me that hypothetical and probably fictional characters are presented to play the role of potential nonhuman persons, while actual, existing animals aren’t granted even a mention in passing. (I’m reminded here of the common line in popular bioethics where human genetic chimeras are an abomination, but you may do whatever the heck you want with other animals; or of the fetishization, so common in Japan and elsewhere, of robot intelligence and of drafting declarations of the rights of robots, while, ironically, the slaughter of cetaceans, actually existing sentient life, continues unchecked.)

I enjoyed reading this piece, and my comments here aren’t getting into the merits of any of the substantive questions raised, but still: for a piece called ‘the animal you are’, I was expecting more animals. I need to learn more philosophy of mind, if only to unmask some anthropocentric shibboleths.