S.F. Jude Terror wrote:
Ok, so in your opinion, where is the line? If the Phoenix was going to kill 100 people? 1000? 10000? At what point does it become necessary to infringe upon the rights of another to protect others?
Short answer: before seven billion. Long answer...
I think that's an incredibly complicated issue. Some utilitarians, for example, would say 'whatever leads to the maximum total utility,' while others would say 'whatever leads to the highest average utility' (the two come apart: a huge population of lives barely worth living can have more total utility, but far lower average utility, than a small population of very happy people). There are other positions, too. I'm not even, strictly speaking, a utilitarian, and I think this is an open question. That's why I call myself a consequentialist but don't advocate any particular version of consequentialism. There is clearly still a lot of work to be done in this area.
If it were a simple question of lives saved, then, all else being equal, two lives would be enough to justify taking one, if that really is your only choice (if you can save all three, obviously that would be preferable). If there are two bombs, miles apart, one in a city with a population of 1 and another in a city with a population of 2, and there's only enough time to defuse one bomb, obviously I'm going for the city with 2 people in it.
However, there's more than just lives saved to consider.
Humans can't be trusted to run these calculations most of the time, and people want to live in societies where they have certain privileges, so we end up with notions like 'rights' and 'laws' as a crude approximation of the resulting compromise. But we're still justifying those things on consequentialist grounds.
If I say, 'Why is this illegal?' or 'Why should I have this right?' all you can do, really, is point to the consequences of not having it and say, 'Do you want that to happen?'
So, if you say to a consequentialist, 'Should we get rid of laws and rights, then, since these are really just useful fictions?' the consequentialist is going to say, 'No, because that would lead to basically bedlam. The consequences would be horrible and that's exactly what I'm trying to avoid.'
In this case, the consequences of killing someone to save billions of people are that it sets a worrying precedent and makes people feel less secure. That's not to be waved away as unimportant, but neither is it some all-important deontological principle that must hold in all situations. We have these rights only because they probably do more good on the whole and we can't think of anything better. The moment that changes, they should go.
The consequence of not killing her, OTOH, is an unacceptable risk of billions of innocent deaths. Assuming that killing her doesn't make it worse, of course.
If doing the right thing leads to the deaths of billions of innocent people, in what sense was that the right thing? Because God says so? Because we have fundamental basic rights? Because humans are intrinsically valuable? But there is no God, rights are a recent human invention that most humans throughout history have never experienced, and value is extrinsic, not intrinsic. Worse, even if all of these things were the case, it still wouldn't provide a basis for morality. Suppose God commanded that children be beaten? Suppose one race of people had the 'right' to beat their children, as has been the case in most cultures? Suppose turnips turned out to have more intrinsic value than children? In none of these cases would that permit harming innocent children. The only answer to the question that begins this paragraph that makes sense to me is 'Because it would lead to even worse consequences if we didn't.'
So whatever it is that we're worried about has to be worse than the deaths of billions of innocent people. Bad as violating someone's rights is, it's not that bad.