There have been lots of interesting pieces recently on humanism, morality, and the more-than-human world. My first response to Joel Marks' supposed rejection of morality was to side with Andrew Sullivan, who has been covering many of the relevant issues lately. My second was to realize that these issues are bound up with our discussions of secular humanism and its discontents. I agree that something like 'secular humanism 2.0' would be an improvement over the current anthropocentric and self-defeating myopia, and I don't think we need to agree on the primacy of one moral vocabulary to get there.
To briefly recap Joel Marks' "Confessions of an Ex-Moralist" from last week's NYT Stone piece: Marks transitions from being a deontologist who fought for the inherent rights of (especially) food animals to a pragmatic/utilitarian who, less sure of the external moral validity of his core deontological beliefs, "now focus[es] on conveying information" about the conditions in industrial animal agriculture facilities. I don't always agree with Coyne, but in this case I do: this looks to me like a distinction without a difference. But that's also because I've come to terms with the fact that most of us exist along a multidimensional plane balancing the poles of the above chart. The best we can do, in my view, is to maintain equilibrium; if history has shown us anything, it's that single-moral-foundation graspings at utopia always tend toward dystopia instead.
As for humanism and its discontents, well, that's a big one. As I noted a few posts ago, the recent Rise of the Planet of the Apes film is acting like a confirmation-bias-y Rorschach test. To take two examples: this post from Salon argues that the human-nonhuman divide remains very large, while Sue Savage-Rumbaugh's responses to the film (as recorded in this episode of On Point) mistake CGI ape intelligence for the considerably less dazzling real thing. My position is somewhere between these poles, but I'm making the connection here just to point out that our position on the role of Homo sapiens in a "post-Darwin" world is very likely to dictate, or at least inform, our morals, or our ethics, if you'd rather call them that.
I just taught a class on biotechnology and animals, and am now being pummeled by a flurry of Planet of the Apes-related posts. As usual, such posts are a Rorschach-like template for the blogger's political leanings, so I figured I may as well do the same. I haven't seen the movie, and, thanks to our separation-anxiety doggie, probably won't until it's on Netflix, but I do have some thoughts, and I'll channel them through this interesting piece on "Creating Non-Human People" from Oxford's Practical Ethics blog. The trope of super-intelligent, violent, most likely malicious animals taking over the world is Hollywood summer entertainment, but the interesting issues here actually concern the ethics of enhancement, personhood, and species integrity.
One's views of biotechnology will be influenced, in large part, by one's views on science and on whether the critique of 'playing God' is a useful one. (I don't think it is, for various reasons, but mostly because we've been playing God in the dark for 10,000 years, and the double helix let us turn the lights on. One's view on this issue will also color a range of related issues; hence, for example, environmentalism's uneasy relationship with science.)
That said, I think there are good reasons to proceed with a lot of caution. Animal cloning, and genetic manipulation more generally, raise a number of significant welfare concerns. The irony is that the lay bioethical position has turned a blind eye to all manner of grotesque genetic manipulation of nonhuman animals, while anything resembling human chimerism is verboten. In other words, the ethical problem with creating cognitively 'enhanced' nonhuman animals is that they would then be more likely to qualify for personhood and, as such, for increased moral protection. Ironic because, as Rollin notes, this kind of Cartesianism is its own undoing: it's wrong to test on species that are sufficiently 'like us', yet the reason we do the testing in the first place is that they're like us.