the planet you can save, maybe

Recently Barath wrote to me:

Peter Singer’s ‘The Life You Can Save’ argument came to mind listening to last week’s C-Realm episode.  This was the question of whether we each have an obligation to do as much as we can to save the lives of others and if so (a) why limit it to just human life (given Singer’s anti-speciesist thinking) and (b) is Singer’s narrow formulation right?  Specifically, Singer argues that we should contribute to feed the hungry, etc., but I wonder if ecological restoration projects that have very long but big payoffs are actually better, but harder to quantify.  That is, how does one reason about such ethical questions once they depend upon unknowable or hard-to-quantify evolving scientific understanding?

These are really good questions.  To begin to answer them, here’s Singer’s argument from The Life You Can Save website:

If we could easily save the life of a child, we would. For example, if we saw a child in danger of drowning in a shallow pond, and all we had to do to save the child was wade into the pond, and pull him out, we would do so. The fact that we would get wet, or ruin a good suit, doesn’t really count when it comes to saving a child’s life.

UNICEF estimates that about 19,000 children die every day from preventable, poverty-related causes. Yet, at the same time almost a billion people live very comfortable lives, with money to spare for many things that are not at all necessary. (When did you last spend money on something to drink, when drinkable water was available for nothing?)

This is a slightly less formal version of an argument he made in the 1970s in his (in)famous “Famine, Affluence, and Morality”, and which was also formulated (independently, I believe) by Louis C.K.  The upshot is that affluent people ought to devote more—a lot more—of their resources and effort to helping those in direst poverty.

It makes sense to ask, when presented with this argument, whether it can be generalized beyond specifically human harms and benefits.  What about harms and benefits to the greater environment?  In fact, this kind of generalization is what I took Kris de Decker to have done when I tried to reconstruct his argument for bottled water consumption.  But as Barath points out, Singer holds anti-speciesist commitments which appear to broaden the scope of the conclusion.  The principle at work in Singer’s original argument, for example, is:

If it is in our power to prevent something bad from happening, without thereby sacrificing anything of comparable moral importance, we ought, morally, to do it.

What kind of things count as the “something bad” we should be worried about preventing?  Human suffering, certainly.  But anti-speciesism tells us that we can’t simply neglect the moral significance of nonhumans.  Are we thereby also obligated to prevent environmental harms?

The answer to this is pretty long, actually.  First, it’s true that Singer’s against speciesism, but speciesism is just the idea that species membership alone justifies differential treatment.  So it’s consistent to be against speciesism but still hold that some species are more important than others, morally speaking, if the reason isn’t simply species membership.  And in fact this is what Singer holds.  Singer’s variety of utilitarianism is based on interest- or preference-satisfaction.

Detour into moral theory

Technically, Singer’s famous argument is not utilitarian, and its soundness doesn’t depend on accepting utilitarianism.  But it’s close to utilitarianism in a crucial respect, and I’m going to ignore the differences in what follows.  (Pedants and/or ethicists be damned.)

So.  Utilitarianism can be thought of as a conjunction of two ideas.  First, that rightness consists in maximizing the good.  Second, that the good is happiness.  (There are many, many variations on these ideas but the family of utilitarian theories generally adheres to them.)  But now we need to know what happiness is.  The classical utilitarians—James Mill, Jeremy Bentham, and (arguably) John Stuart Mill—held that happiness is pleasure.  Hence they’re known as hedonistic utilitarians. For hedonistic utilitarianism, all pain and pleasure count alike, no matter what kind of being you are.  Hence Bentham’s famous plea on behalf of nonhuman animals: “the question is not, Can they reason? nor, Can they talk? but, Can they suffer?”

But there are widely acknowledged difficulties with the idea that happiness is pleasure, and later utilitarian writers substituted different conceptions of happiness.  Singer opts for happiness as interest- or preference-satisfaction: roughly, getting what you like, or what’s good for you given the kind of being you are.

Because there is a spectrum of animal complexity, different species will have different interests.  Some animals merely have interests in staying alive and avoiding pain.  Human beings have many interests on top of that, and those interests are influenced by our individual makeups, our cultural setting, our level of education, our past struggles, and so on.  Thus a human who is badly off (say, living a life of grinding poverty) is, according to Singer, much worse off than a nonhuman animal (even an intelligent one like a pig) in analogously impoverished circumstances, because the human has many more interests, and most of those interests are more serious than the pig’s.  And death for a human is worse than death for other animals, since human beings have interests in their life plans, in their family’s well-being, etc.

So although Singer takes all interests equally, some interests are more serious than others, and some beings will have a greater number of interests frustrated by adverse conditions.  It turns out, then, that humans are—in a sense—more important than other species, although not every human interest trumps other animals’ interests.  Singer thinks e.g. vegetarianism is obligatory because no human interest in pleasure can outweigh an animal’s interest in staying alive.  But IIRC he is ok with some restricted kinds of medical testing on nonhuman animals, due to the importance of medical science.

The planet you can save?

Singer himself probably would not extend the argument from “the life you can save” to include the environment, broadly construed.  That’s because that argument depends on comparing outcomes as to their relative goodness/badness, and the way Singer assesses goodness/badness is in terms of interest-satisfaction.  Only a few animals (the sentient ones) have morally relevant interests in his sense, plants have none, rocks have none, ecosystems (indeed anything above the level of an individual organism) have none.  To the extent that ecological properties figure into his argument, they will figure indirectly as things conducive to good human lives.

That said, we could ask a couple of questions.  First, what kind of position would we get if we took Singer’s argument seriously, but jettisoned his conception of the good?  E.g. we could take up a conception of the good which is not only non-anthropocentric but fully ecocentric.  (The resulting position would probably be something like what I think of as Derrick Jensen’s: radical action to destroy civilization.)  Second, what happens if we stay with Singer’s view but amend it to take into account future people?

This is getting toward question (b), about whether even the narrow formulation of the argument is correct, given future human interests.  Tim Mulgan (Ethics for a Broken World) is someone who takes utilitarianism seriously, but who thinks that most ethicists haven’t yet learned to take future people into account.  When you do, he thinks, you realize that future persons stand to us in (almost) exactly the same way that today’s global poor do.  One group is distant in time, the other is distant in space, but exactly the same principles of justice apply.  So, Mulgan would say, Singer’s insights haven’t been pressed far enough, and once we see they apply to future people we find that we are behaving grossly immorally.  We ought to stop taking resources which future people need, we ought to take radical action to stop our destruction of future people’s climate, and we ought to live much, much more modestly, devoting our nonessential time and effort to making things right by the future.

But this conclusion is arrived at by entirely anthropocentric means—the only things considered morally significant are people, and all other goods are instrumental to the welfare of people.  So we can make a case for taking the environment seriously, indeed for radical preservation of ecological systems, purely on anthropocentric utilitarian grounds, just by treating future people as equally important.  And if we, like Singer, extend moral consideration to some nonhuman animals, then the case for ecological preservation becomes even stronger.

(Interestingly enough, these ideas have played out between two utilitarians I know (call them ‘P’ and ‘T’).  After taking a flight to a conference in Europe, P mentioned to T that he’d bought carbon offsets.  T responded, “Why would you ever buy carbon offsets when you could donate that money to poverty relief?”)

Action and uncertainty

But now there is the question of how to evaluate actual proposed courses of action when the outcomes are uncertain.  The standard utilitarian answer is to do an expected utility calculation: multiply the value of an outcome by its probability of occurring, and, for evaluating actions, sum the expected utility of each action’s possible outcomes.  Then go with the action that comes out on top.  Of course, this is going to be difficult even for short-timeframe decisions, and there’s idealization involved in assigning numerical values to outcomes, but your meat-and-potatoes utilitarian will say that that’s the ideal to aim for.
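The calculation itself is simple enough to sketch.  Here is a toy version in Python; the action names, probabilities, and utility numbers are invented purely for illustration (no utilitarian thinks the real numbers come this cheap):

```python
def expected_utility(outcomes):
    """Sum of (probability * utility) over an action's possible outcomes."""
    return sum(p * u for p, u in outcomes)

# Hypothetical action A: direct poverty relief -- a fairly certain, modest payoff.
poverty_relief = [(0.9, 100.0), (0.1, 0.0)]

# Hypothetical action B: ecological restoration -- uncertain, big long-term payoff,
# with some chance of wasted effort.
restoration = [(0.3, 500.0), (0.7, -20.0)]

# The utilitarian rule: pick whichever action has the higher expected utility.
actions = {"poverty relief": poverty_relief, "restoration": restoration}
best = max(actions, key=lambda name: expected_utility(actions[name]))
print(best, expected_utility(actions[best]))  # restoration 136.0
```

Notice how sensitive the ranking is to the made-up inputs: nudge the restoration payoff’s probability from 0.3 down to 0.15 and the ordering flips, which is exactly the fragility the next paragraph worries about.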

This answer becomes much less helpful when the far future is concerned, since it’s so hard to predict, and it becomes deeply complicated when there is uncertainty not only about future outcomes, but about which model for estimating future outcomes we should use in the first place. (Or do we combine models, and if so, how should we do that? etc.)  I certainly don’t have an answer to that, but because I’m not a utilitarian, I don’t really feel the need to have one.  I would guess that some philosophers who work on climate change have made proposals, but I’m not actually very familiar with that literature.  And there might be something helpful in the literature on evidence-based policy, though I’m not sure.
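For what it’s worth, one naive way to combine models (a sketch of one proposal, not a settled method) is to treat model uncertainty the same way as outcome uncertainty: weight each model’s expected-utility estimate by your credence that the model itself is right, then sum.  All numbers below are invented:

```python
def model_averaged_eu(estimates):
    """estimates: list of (credence_in_model, expected_utility_under_that_model)."""
    return sum(credence * eu for credence, eu in estimates)

# Invented numbers: two models disagree about an action's long-run payoff.
# Model 1 (credence 0.6) says the action is worth 40; model 2 (credence 0.4) says -10.
estimates = [(0.6, 40.0), (0.4, -10.0)]
combined = model_averaged_eu(estimates)
```

Of course this just pushes the problem back a step: now we need defensible credences over models, which is no easier than the original question.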

A shorter answer to all this might be that, no matter what ethical theory you’re working with, ethical reasoning always happens by conjoining normative premises about what one ought to do with descriptive premises about empirical fact.  When those descriptive premises become highly uncertain, then one’s reasoning about what to do is concomitantly uncertain.  But how much of a problem that is depends on your ethical theory to begin with.  Utilitarians will insist there is always a right thing to do; virtue ethicists (for example) not so much.  But I think this whole discussion is illustrative of a real problem for utilitarianism, given uncertainty about the future: all of an action’s consequences for happiness matter.  Thus utilitarianism might tell us to help the global poor (as Singer thinks), but it might also tell us to let them eat cake.  Everything depends on the empirical facts about which policy will yield the most happiness over time, but in many cases we just don’t have access to those facts.

For my part, I think that very broad principles for decision-making under uncertainty, such as the precautionary principle, go a long way, and needn’t rely on utilitarian justification.  But that’s another conversation, and anyhow I have never really achieved equilibrium in my own ethical convictions.
