Category Archives: neurophilosophy

The moral brain conference

I went to this conference at NYU a few weeks ago, and was thoroughly fascinated all the way through. It was a merger of two conferences – the first on ‘The Significance of Neuroscience for Morality’ and the second on ‘moral enhancement’ – and part one, in particular, was mostly new terrain for me. It was also the first time I used my new iPad/Bluetooth keyboard/Evernote combo, which worked really well – and all of my notes are here. Hughes and Dvorsky (from the Institute for Ethics and Emerging Technologies, which I follow on Reader) were also posting updates here, here, here, here, here, and here.

I just sat and listened, absorbing the approximately 25 hours of talks. My general impression is that neuroscientists sure do like fMRIs; I actually learned a good deal about the different parts of the brain and the different chemicals that affect our moral (and other) behavior. It was also interesting to see Knobe, Greene, and Haidt in person.

Topically, discussions were all over the place – see the links above – but focused on: experimental studies of the effects of serotonin, etc. on empathy and related behaviors, whether it makes sense to talk about a ‘morality pill’ (probably not), and what we’re talking about when we’re talking about moral enhancement.

My only real gripe is that the conference was so strictly anthropocentric. As usual, I saw lots of room for fascinating engagement with the nonhuman animal mind – we could, for example, use fMRI studies of neurotypical humans to assess emotional and maybe even moral states in other primates. Instead, the only discussion of other animals was as ‘animal models’, with a few very minor exceptions. It’s my own fault for not asking a question, though. Hopefully animal studies folks can bone up on this literature and hold an overlapping conference of their own!

Paths to caring, ctd.

Peter Singer and Agata Sagan’s recent Opinionator piece Are We Ready for a ‘Morality Pill’? raises important issues, but is insufficiently nuanced (they have another piece on robot rights, which follows logically from Singer’s version of consequentialist utilitarianism). If and when–probably just when, really–we become able to tinker with our brain chemistry to alter our ability for compassion and empathy, these kinds of questions will be unavoidable. In the meantime, though, it seems odd that we don’t focus instead on those tools which can demonstrably improve both how we care for others and who counts as an other; the short film No Robot provides a good example.

An empathic and nonzero civilization. . .but for whom?

This video by Cambridge’s Simon Baron-Cohen does a good job introducing the relation between empathy, pathology, and social trust (and see here for a good RSA Animate on Jeremy Rifkin’s Empathic Civilization). Baron-Cohen has done a lot of interesting work on empathy and the male/female brain, empathy and autism/Asperger’s, and on measuring empathy. I was immediately struck, however, by the way he chose to define empathy: “the drive to identify (cognitive) and appropriately respond to (affective) another person’s feelings.” Further into the talk, some of the research he draws on implies that “persons” and “objects” are the only relevant categories under discussion. I guess this is what makes me an ‘animal rights activist’ (as Wikipedia’s definition of empathy puts it), because I think the natural extension of Baron-Cohen’s argument–that questions about empathy have right and wrong answers, and that one of the jobs of psychology is to figure out how to get more people to answer ‘correctly’–is far more radical than even he may acknowledge.

What distinguishes empathy from sympathy, compassion, and pity? This is a difficult question to answer concretely, but links like this have me thinking that the reason empathy might be so commonly perceived as ‘person-oriented’ rather than ‘sentient-or-semi-sentient-being-oriented’ is that empathy, unlike the other terms, involves literally feeling the other’s mental state (this is where the much-hyped ‘mirror neurons’ come in). It could follow, I suppose, that this requires a certain level of similarity with the other’s mental state, such that it would work best with other members of our species. Keeping in mind that this might be a semantic quibble, I don’t buy that argument. I could as much “feel” my dog’s pain when he slipped a vertebra last year as I could my wife’s when she tore her ACL.

To return to the radical implications of a high-empathy society: I strongly believe that such a society would treat nonhuman animals in a fundamentally different way than we do today, and that such a shift would entail a range of social, political, and economic reforms with far-reaching consequences. It’s easy to speak of expanding the domain of the nonzero (as against the zero-sum)–and I’m all for this kind of policy…indeed, only a fool or an IR realist would be against it!–but introducing nonhuman animals into the moral calculus with anything less than a high discount rate will change the game in a basic way. And it should, because the level of structural violence against nonhuman animals in the world today is only ignored because of a conditioned moral blindness that would wither in the face of an empathic civilization.

So how to go about this? There are many possible routes, but when it comes to empathizing with nonhuman animals, I think one of the strongest is the priming of our moral sensibilities through art (sometimes called the sympathetic or aesthetic education), which is marvelously fecund, as Nussbaum and others have argued. Others argue that fostering nonzero relationships tends to result in increased empathy, and this makes sense too, as long as the in-group/out-group distinction doesn’t stop at the species line. A range of other options exist, of course, all the way from the work on pathology by psychologists like Baron-Cohen to essentially sociobiological proposals that we engineer aggression out of our gene pool. The bioethics of the latter are troubling, obviously, but they do reflect a trend toward revived sociobiology in the guise of neuroscience. This trend takes many forms, though, and each needs to be addressed on its own merits.

If nothing else, Baron-Cohen’s research goes a long way in explaining why I was the only male in my Animals and Public Policy class. This needs to change, but it seems the change can only go so far if he is right about the ‘male brain’.

Animalism and philosophy

(Image source) The recent piece “The animal you are” by UCL philosophy professor Paul Snowdon was most striking to me for what it left out; for a piece on animality, there sure was a lot of focus on one particular animal. None of the arguments for or against “animalism” (the idea that the human animal is the same thing as the person, or self) even began to engage with nonhuman animal cognition, let alone the people calling for nonhuman animal personhood for great apes and/or cetaceans.

Setting aside whether ‘person’ is the right word for chimps and dolphins, who clearly have at least some level of self-consciousness and use of reason (these are the criteria listed by Locke and repeated by Snowdon), I think any discussion of mind/body dualism has to seriously engage with the similarities and differences between human and nonhuman animal minds (the Sapolsky video in my first blog post is a good example of this). Snowdon writes that “if we are prepared to allow there might be entities which merit being described as persons who are not human – say God, or angels, or Martians, or robots, – then animalism should not rule them out.” It’s disturbing to me that hypothetical and probably fictional characters are presented to play the role of potential nonhuman persons, when actual, existing animals aren’t even granted a mention in passing. (I’m reminded here of the common line in popular bioethics where human genetic chimeras are an abomination–but hey, do whatever the heck you want with other animals–or of the fetishization, so common in Japan and elsewhere, of robot intelligence and of drafting declarations of the rights of robots, even as the slaughter of actually existing sentient cetaceans continues unchecked.)

I enjoyed reading this piece, and my comments here aren’t getting into the merits of any of the substantive questions raised, but still: for a piece called ‘the animal you are’, I was expecting more animals. I need to learn more philosophy of mind, if only to unmask some anthropocentric shibboleths.

The fraught necessity of speaking for the animal other

This review of Jason Hribal’s Fear of the Animal Planet: The Hidden History of Animal Resistance by ‘renegade historian’ Thaddeus Russell caught my attention–as any Reason piece about animals inevitably does. I haven’t read Hribal’s book, so am only going off Russell’s critique here. My first impression is that this article isn’t really about nonhuman animals at all; Russell is using Hribal’s politicized animal as an intentionally farcical springboard for his subaltern critique of the New Left’s tendency to speak for–and thus define and appropriate–marginalized groups.

Indeed, Hribal’s attribution of political consciousness to nonhuman animals is problematic, to put it mildly. But, unsurprisingly for a libertarian column, Russell’s critique overlooks the fundamental challenge of expanding the moral circle beyond the species line. By using the case of nonhumans to bolster his critique of subaltern ‘history from below’, he draws an arbitrary speciesist line below which nonhuman animals can neither speak for themselves nor have another speak for them. Setting aside the equivocations from various camps about ‘what animals want’, this analysis may well work for humans–indeed, it draws on many of the same arguments as William Easterly’s “white man’s burden” critique of humanitarian aid. But it doesn’t work at all for nonhuman animals, even granting that Hribal’s politicization of nonhuman animal agency is itself problematic.

To me, the world is made up of beings with interests. Part of the work of the humanities is to prime our empathy. Part of the work of the social sciences is to foster cooperative nonzero relationships both within and across species lines. And part of the work of science is to reveal the type and degree of human and nonhuman animal preferences. But as this recent SciAm blog post on why animals play points out, we don’t have all the answers.

So Russell is right to be skeptical of speaking for the other–but in the case of nonhuman animals, we have little choice but to try.

What animals want: animal emotion and animal happiness

I’ve been reading a lot about popular neuroscience and related fields recently, and I keep coming back to the question of ‘what animals want’. This question has many variations, each with their own ramifications. Two broad umbrella categories come to mind: 1) the neuro-hyphenators and 2) the animal advocates. These are, of course, overlapping caricatures, but the two approaches have important differences, and I think they both perform an essential role.

The proliferating neuro-hyphenated disciplines approach the question by focusing on what nonhuman animals can want. Studies of animal happiness that focus solely on stress hormones fit this mold. But there’s a problem with this approach, as this SciAm guest blogger identifies: neuro-reductionism in assessing nonhuman animals’ mental states is bound to paint a picture that is incomplete at best and, more likely, reactionary at worst. (An example here would be livestock industry-funded “welfare” studies that justify existing practices…how coincidental!) Whether applied to humans or nonhumans, the idea that our motivations and mental states are reducible to nothing more than the interaction of oxytocin, dopamine, and so on strikes me as just as unlikely to get to the root of the more-than-human condition as it is to get to the root of the human condition.

If nothing else, the above picture tells us that something more is going on. One of the reasons I chose my StumbleUpon handle, surlyotter, is that animal happiness may be as elusive as human happiness, but it’s no less real. This approach to revealing animals’ preferences–whether through Jonathan Balcombe’s recent Exultant Ark, Marc Bekoff’s Wild Justice, or Dale Peterson’s The Moral Lives of Animals–is of a different type than that of the neuro-schools. But as long as neuroscience can only paint a reductive picture of nonhuman animal life–that is, until we can, as last week’s New Scientist put it, learn to speak dolphin–such works play a crucial role in helping us understand the more-than-human world.