Tuesday, February 14, 2023

Reply to Bentham's Substack

This is the second article I will be posting in an exchange with Bentham's Substack. I will be replying to his opening article, seen here. Let's get started right away.


Argument 1


Almost all of the considerations Matthew adduces in his blog article against deontology are paradoxes or otherwise strange/unintuitive results, and many are parasitic on intuitions that are not independent of each other. Such things were only one part of my case against utilitarianism on my blog, alongside the epistemic, theoretical, explanatory, and practical arguments I raised. So, I suppose the first thing to point out is that a case which rests almost solely on the same sort of intuitions, in regard to matters where we have reason to distrust our intuitions (especially about bizarre hypotheticals), is, if not an inherently weak case, one we have immediate antecedent reason to be skeptical of, and one which I doubt can have the "overwhelming force" Matthew claims.

Anyways, first intuition pump.
Suppose a person puts a bomb in a park for malevolent reasons. Then, they realize that they’ve done something immoral, and they decide to take the bomb out of the park. However, while they’re in the process, they realize that there are two other bombs planted by other people. They can either defuse their own bomb, or the other two bombs. Each bomb will kill one person.

It seems very obvious that they shouldn’t defuse their own bomb—they should instead defuse the two others. But this is troubling—on the deontologist’s account, this is hard to make sense of. When choosing between defusing their own bomb or two others, they are directly making a choice between making things such that they violate two people’s rights or violate another person’s rights.

This is one where I'm confused about what the force is supposed to be. You have a duty to defuse any of the bombs you are able to defuse, and I don't even think that's for consequentialist reasons. Suppose there is a bomb in your vicinity, say one of the ones that was planted; that fact confers an obligation on you to defuse it, and to ignore it would be immoral, as it violates your greatest obligation: respect for humanity as such. If you are not able to defuse all the bombs, and yours goes off, then that's just to say you aren't responsible, since you did your moral duty by defusing the other bombs. Obviously, you are responsible insofar as you planted the bomb, but that's just to say that the separate act of planting it was immoral; with respect to the act of defusing the other bombs, you are not responsible for your failure to defuse yours, given that you couldn't defuse all three.



Argument 2

Suppose you’re deciding whether or not to kill one person to prevent two killings. The deontologists hold that you shouldn’t. However, it can be shown that a third party should hope that you do. To illustrate this, suppose that a third party is deciding between you killing one person to prevent the two killings, or you simply joining the killing and killing one indiscriminately. Surely, they should prefer you kill one to prevent two killings to you killing one indiscriminately.

Thus, if you killed one indiscriminately, that would be no worse than killing one to prevent two killings, from the standpoint of a third party. But a third party should prefer you killing one indiscriminately to two other people each killing one indiscriminately. Therefore, by transitivity, they should prefer you kill one to prevent two killings to the two killings happening—thus they should prefer you kill one to prevent two. To see this let’s call you killing one indiscriminately YKOI, you killing one to prevent two killings YKOTPTK, and the two killings happening TKH.
YKOTPTK < YKOI < TKH, where X < Y means X is preferable to Y. Thus, the deontologist should want you to do the wrong thing sometimes—a perfectly moral third party should hope you do the wrong thing.


(Worth noting that it is absolutely permissible, even on deontology, to kill 1 to save 2 if it is self-defense, or if you are protecting the 2 from a killer; but we'll suppose henceforth that the 1 is innocent.)

There are a couple of ways to interpret what is being said here, so it'll perhaps be helpful to disambiguate them. Insofar as this argument is attempting to pick out an incoherence between how an agent should act and what they should prefer to happen, given the truth of deontology: well, as a deontologist, I'm not committed to a position about what people "should hope" or "should prefer". Deontology is, exclusively, a theory about what makes actions right or wrong; my position, qua deontologist, is silent on the matter of which events we should prefer to occur from the third person. Perhaps one way in which it might be relevant is as follows: having a preference for the "killing 1 to prevent 2 killings" scenario is more conducive to agents cultivating respect for humanity (rational nature) as such, and therefore more virtuous dispositions. But if that's the case, it's unclear what the incoherence or tension in the position is supposed to be. Refusing to kill the one and preferring that 1 be killed rather than 2 are both unified under the maxim of respect for humanity as such. At worst, it is a slightly weird entailment, but slightly weird entailments are hardly unique to deontology.

Another way of understanding it is this: when we calculate the total value of each state of affairs from the third person, the state of affairs involving one person being killed to prevent two killings is all-things-considered better than the state of affairs wherein the two killings happen. So, for this reason, you ought to prefer the killing-to-prevent-two to happen. Yet the deontologist thinks you shouldn't kill one to prevent two killings. If this is all that's being said, then what's being said doesn't seem to be much more than a restatement of deontological commitments. There is no "paradox", just different questions being asked: "is the action right?" and "is the state of affairs more valuable?". Deontologists do not think an action's rightness depends on the total value of the states of affairs, but on other considerations, so it isn't the least bit surprising that the right-making features of actions and the total value of states of affairs are judged differently, and are not reducible to each other.

Additionally, I think what little intuitive force this has going for it is counterbalanced, by my lights at least, when we look at intuitive right-making features of actions that seem to be over and above the total value of states of affairs: the intentions and character-traits the agent has, the maxims the agent acted on, whether the act involves violating autonomy/self-determination or humanity, and other things we take to be important.


Argument 3

Imagine one thinks that it’s wrong to flip the switch in the trolley problem. While we’ll first apply this scenario to the trolley problem, it generalizes to wider deontological commitments. The question is, suppose one accidentally flips the switch. Should they flip it back?

It seems that the answer is obviously not. After all, now there’s a trolley barreling toward one person. If you flip the switch it will kill five people. In this case, you should obviously not flip the switch.

However, there’s a very plausible principle that says that if an action is wrong to do, then if you do it, you should undo it. Deontologists thus have to reject this principle. They have to think that actions are wrong, but you shouldn’t undo them. More precisely, the principle says the following.
Reversal Principle: if an action is wrong to do, then if you do it, and you have the option to undo it before it has had any effect on anyone, you should undo it.
This problem can also apply to the footbridge case. Suppose you push the guy. Should you pull him back up? No—if you come across a person who is going to stop a train from killing five, you obviously shouldn’t preemptively save him by lifting him up, costing five lives.
A few things can be said here. I feel myself under no initial pressure to accept the reversal principle. There seems to be no reason to think that such a principle is a necessary truth that governs all morally salient actions. The only necessary universal moral truth I'm committed to is that an action is wrong iff it violates the categorical imperatives, and there seems to be no reason to think reversal actions cannot fall under that category. At best, what looks plausible to me is the following principle:
Weak Reversal Principle: if an action is wrong to do, then if you do it, and you have the option to undo it before it has had any effect on anyone, that fact is some defeasible reason to think you should undo it.
I think this principle is clearly far more intuitive than the strong one, and that any intuitive plausibility the strong principle has going for it is parasitic on the intuitive plausibility of the weak one. Suppose undoing some wrong action would involve violating someone's autonomy, committing murder, or cultivating really terrible character-traits and dispositions; considerations like these can clearly outweigh the fact that the act you are undoing is a wrong act when deliberating about whether the reversal act is right to do. Ironically, the only reason I can think of to deny this would be if one were a consequentialist, which I am not.

To answer the cases then: I think you should probably flip the trolley switch in the first place, so there isn't an immoral act to undo. No, you shouldn't save the fat man. No, you shouldn't take the organs back from the 5 to save the one. The fact that you are reversing a wrong action is some defeasible reason to do it, and in many cases, all else equal, you should undo wrong actions; but I reject that this fact is overriding in these cases.

Another point worth making is that, depending on how reversal is construed, there either seems to be a tu quoque objection here, or we should be skeptical of our intuitions in these cases. Does reversal mean the action is already done, so you go back and undo the effects of your action? Then it looks like a consequentialist should be committed to thinking there are cases where you shouldn't reverse wrong actions too. Suppose you harvested one person's organs to save 5, and then you realize one of the 5 you saved is a depraved serial killer who will kill, rape, and torture many more victims thanks to your action. So, all-things-considered, the action was wrong on consequentialism. Suppose you can undo it, but the act of doing so has massive causal ramifications, inscrutable to you, which would eventually lead to the birth of Super Hitler, who will initiate a global nuclear holocaust. Strange scenario, but there you have it: a case where undoing a wrong action would be wrong on consequentialism. So, by the reversal principle, consequentialism is absurd! Another way of interpreting reversal is undoing the action so that it doesn't even happen, perhaps by way of a time machine. But I simply don't put much stock in our intuitions about abstract scenarios like what we should do given a time machine, which may not even be possible.


Argument 4


This one is Huemer's paradox of deontology, and is the first argument that I feel the force of. Read Huemer's paper for more context. It starts by putting forth two principles.

Individuation Independence: Whether some behavior is morally permissible cannot depend upon whether that behavior constitutes a single action or more than one action.
And;
Two Wrongs: If it is wrong to do A, and it is wrong to do B given that one does A, then it is wrong to do both A and B.
Now, here is the case which generates the paradox;
Torture Transfer: Mary works at a prison where prisoners are being unjustly tortured. She finds two prisoners, A and B, each strapped to a device that inflicts pain on them by passing an electric current through their bodies. Mary cannot stop the torture completely; however, there are two dials, each connected to both of the machines, used to control the electric current and hence the level of pain inflicted on the prisoners. Oddly enough, the first dial functions like this: if it is turned up, prisoner A’s electric current will be increased, but this will cause prisoner B’s current to be reduced by twice as much. The second dial has the opposite effect: if turned up, it will increase B’s torture level while lowering A’s torture level by twice as much. Knowing all this, Mary turns the first dial, immediately followed by the second, bringing about a net reduction in both prisoners’ suffering. Did Mary act wrongly?
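(To make the arithmetic of the case explicit, on the natural reading: if each dial is turned up by one unit, the first turn changes A's pain level by +1 and B's by −2, and the second changes B's by +1 and A's by −2. The combined effect is −1 for A and −1 for B, which is why the pair of turns yields a net reduction for both prisoners, even though each turn, taken alone, inflicts a non-consensual harm on one of the two.)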
There is a general issue that I think is shared by several of these 'paradoxes', including the previous one. The paradox only arises if we assume principles for judging the moral features of actions that seem plausible at first blush, but which I'm simply not committed to and which are not part of my normative theory. In general, if you propose a principle that is supposed to be a necessary truth regarding the moral features of all actions, and you have no independent argument for it apart from brute intuition, my response is, naturally, to practice a high degree of skepticism that the principle is, at least universally, true.

With that noted, I'm skeptical of these principles, the 'Two Wrongs' principle in particular. There seems to be no strong reason to think that, if two or more actions are individually wrong, it thereby follows that a conjunction which contains both actions will also be wrong. Just as a set of bricks can have various properties that each brick lacks (a different weight, size, etc.), it doesn't seem completely absurd that a conjunction of the same actions could have right-making properties that those actions taken alone lack, and that this could be one such example. There'd need to be a further argument given to think ethical action is a case where the properties of the parts necessarily transfer across a conjunction.

But let's say you really, really want to keep these principles. Then, given those principles, deontology commits you to thinking Mary did act wrongly. This is out of accord with folk intuition. But perhaps we can put pressure on the intuition by:

1) Pointing out that the scenario is extremely bizarre; it involves two dials that can increase and decrease suffering levels on a whim, but which also work in weird ways that prevent you from reducing one prisoner's torture without increasing another's. This is not a scenario like anything anyone has encountered, or most likely ever will encounter. Furthermore, accurate reasoning about such off-the-wall hypotheticals, which have no connection to our ordinary lives, is highly unlikely to have survival or reproductive benefits, so we wouldn't expect natural selection to select for true beliefs about these hypotheticals.
2) Focusing on the kinds of beliefs that lead us to judge Mary's action as right, and putting pressure on those. For example, the actions Mary commits are ones that involve violating consent, though the result is good for both parties. But plausibly, we shouldn't force someone to stop smoking, even if it's good for them. We shouldn't promote paternalism for the good of everyone. So we shouldn't, generally, violate the consent of a rationally informed agent to bring about that which is good for them. This seems especially true in cases where violating consent involves causing significant pain.

TL;DR: this argument is strong but not insuperable. If you are a deontologist and your intuition of the truth of the principles is stronger than the intuition that Mary acted wrongly, reject that she acted wrongly; if your intuition that she acted wrongly is stronger, reject at least one of the principles.


Argument 5

The argument is roughly as follows—every time a person drives their car or moves, they affect whether a vast number of people exist. If you talk to someone, that delays when they next have sex by a bit, which changes the identity of the future person. Thus, each of us causes millions of future people’s identities to change.

This means that each of us causes lots of extra murderers to be born, and prevents many from being born. While the consequences balance out in expectation, every time we drive, we are both causing and preventing many murders. On deontology, an action that has a 50% chance of causing an extra death, a 50% chance of preventing an extra death, and gives one trivial benefit is wrong—but this is what happens every time one drives.

One way of pressing the argument is to imagine the following. Each time you flip a coin, there’s a 50% chance it will save someone’s life, a 50% chance it will kill someone, and it will certainly give you five dollars. This seems analogous to going to the store, for trivial benefits—you might cause a death, you might save someone, and you definitely get a trivial reward.

I just don't get this one. Not only do I think the entire notion that this is an objection to deontology is confused; I think it's, if anything, an argument against consequentialism.

I think actions that involve directly violating certain rights are wrong, and intentionally producing outcomes that lead to people being harmed and having their rights violated is also wrong, ceteris paribus. What I completely reject is that it is wrong to cause rights violations and harms as a long-term, unintended, and unforeseen outcome. I don't think you're responsible in those cases. I reject the consequentialist assumption that long-term, unforeseen, and unintended identity-affecting outcomes are as morally relevant to the rightness or wrongness of actions as directly intended or foreseen outcomes, as should any deontologist worth their salt. In fact, I go so far as to reject those kinds of consequences as having any bearing whatsoever on what makes an action right or wrong. Only other things do, such as known consequences, prima facie duties, the maxims incorporated into the action, the character-based dispositions leading to the act, etc. The argument thus fails to get off the ground.

The paper Matthew cites does respond to something close to this objection, by way of responding to Lenman, who also draws a distinction between foreseen and unforeseen consequences in his paper. I think the response leaves much to be desired.

What does it mean for some outcome to be foreseeable (to an ideally conscientious agent)? We can't say that some outcome is foreseeable just in case you were (in principle) in a position to know that the outcome would follow from your choice of that act. There is certainly no plausibility in the suggestion that the potential for some action to bring about some consequence can provide a reason against performance of that action only if the agent is (in principle) in a position to know that the consequence will surely obtain if the action is performed. Any such view would permit reckless endangerment. 

I take 'foreseeable' to mean that the agent can reasonably expect a given outcome to occur, given their intentions, background evidence, and beliefs (and other mental states) which are directly accessible to them. If there is nothing accessible to the agent which would permit the agent to know that the outcome is something that will obtain as a result of the act, then the agent cannot be responsible for the outcome which obtains. It's just begging the question to call this reckless endangerment: it is only reckless endangerment if the agent is blameworthy for what occurs, but nothing like this has been shown. It seems to me utterly implausible that the mere potential for some act to generate bad consequences confers reasons on the agent, if such reasons are not accessible to the agent.

So, to the long-term consequences of ordinary acts: on my view the agent isn't blameworthy, for 3 reasons.

1) It wasn't intended
2) No one was directly caused to be harmed or used as mere means in the process of the act. 
3) The agent performing the act did not foresee any particular bad outcome that was to occur as a result of the act.

This also refutes Matthew's example. Your doing some course of action is neither the direct nor the proximal cause of an innocent family dying in a car accident, nor was it intended or known that such an outcome would occur. Further, and importantly, your act could still have been done and, had some asshole not been drinking and driving, or had the driver not been distracted or speeding, the death wouldn't have happened. On the other hand, your flipping the coin and its landing on the wrong side does directly cause someone to be killed, and you knew it would. The act, by stipulation, couldn't have been done without leading to the death. Finally, it is false that in ordinary actions the agent's epistemic situation is that there is a 50% chance some particular bad outcome obtains. Instead, the particular long-term identity-affecting outcomes of ordinary actions are completely inscrutable to the agent. No given outcome is expected.
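(To make the disanalogy concrete with Matthew's own numbers: in the coin case the agent's evidence fixes an explicit distribution over identified outcomes, a 0.5 chance of a death and a 0.5 chance of a life saved, so the expected effect on lives is 0.5·(−1) + 0.5·(+1) = 0, and both possible outcomes are foreseen. In the ordinary-action case, no such distribution over any particular death or rescue is accessible to the agent at all; the "balancing out" is, as I read him, a fact about the world's causal lottery, not about anything represented in the agent's epistemic situation.)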

The paralysis argument against deontology is a failure, it looks to me. But I think a decent argument can be salvaged when it is properly understood as an argument against consequentialism. Consequentialists should factor these unforeseeable possibilities into their deliberation about what action to do, since on consequentialism the distinction between foreseeable and unforeseeable consequences is not morally relevant. So consequentialists should not risk the unforeseen consequences caused by ordinary actions. Matthew hints at a response by saying that 'the consequences balance out in expectation'. However, this fails for the same reason Shelly Kagan's "canceling out" response to the epistemic objection fails; see my previous article for that.

Argument 6

1. If deontology is true, then you shouldn’t kill one person to prevent two killings.
2. If you shouldn’t kill one person to prevent two killings, then, all else equal, you should prevent another person from killing one person to prevent two killings.
3. All else equal, you should not prevent another person from killing one person to prevent two killings.
Therefore, deontology is false.
There is a general problem with this argument. Any non-question-begging reason (that is, a reason that does not assume the falsity of deontology) you can give to think 3 is true is, by the deontologist's lights, a reason to think 2 is false. Indeed, the arguments Matthew gives in favor of 3 are all, as a deontologist, ones I take to be pretty good reasons to think 2 is false. So it's hard to see how this argument is particularly effective against deontology: just reject 2 for the same reasons Matthew gives to accept 3.

My response is therefore to reject 2, because of the reasons Matthew gives, and for roughly the same reasons that I rejected the strong reversal principle: I think there is a defeasible reason to prevent wrong actions, it just isn't universally overriding. This should be sufficient to preserve our intuitions and avoid this argument. But let's quickly look at Matthew's justification for 2.
The idea here is pretty simple. It seems really obvious that you should prevent people from doing wrong things if you can at no personal cost. In fact, as this paper, which started the entire idea of this worry for deontology in my mind, notes, this produces a strange result when it comes to deterrence. Presumably, if we think that killing one to save five is wrong, we’ll think that it’s a good thing that laws against murder prevent that. But if we think that third parties have no reason to prevent killing one to save five, then deterrence is not a reason to ban deontic rights violations with good outcomes.

If you have no reason to prevent organ harvesting, then it isn’t wrong. One should prevent wrongdoing, if all else is held equal.

The reasons he gives are:

1. "It's obvious", but this is ineffective, I don't think it's obvious. You have, at most, a defeasible reason, but I think we have good reasons to believe it is overturned in this case. 
2. It creates problems for deterrence. But this is also ineffective. I reject that deterrence is the reason we ban killing 1 to save 5, and I reject that legality is morality applied. Laws, in the minimal sense, should exist to guarantee people's freedoms and rights. So it's obvious that we should ban murder, even in cases where 5 are saved. Also, consequentialism holds that the act of killing 1 to save 5 is actually good, but presumably we shouldn't deter good actions by making them illegal. If that's a bad argument against consequentialism, which I think it is, then it's also a bad argument against deontology.


Argument 7

Take the following example of theft. Suppose that there are 1,000 aliens, each of which has a stone. They can all steal the stone of their neighbor to decrease their suffering very slightly any number of times. The stone produces very minimal benefits—the primary benefit comes from stealing the stone.

The aliens are in unimaginable agony—experiencing, each second, more suffering than has existed in human history. Each time they steal a stone, their suffering decreases only slightly, so they have to steal it 100^100 times in order to drop their suffering to zero. It’s very obvious, in this case, that all of the aliens should steal the stones 100^100 times. If they all do that, rather than being in unimaginable agony, they won’t be badly off at all.

The following seem true.

1. If deontology is true, it is wrong to steal for the sake of minimal personal benefits.
2. If it is wrong to steal for the sake of minimal personal benefits, it is wrong to steal repeatedly where each theft considered individually is for the sake of minimal personal benefits.
3. In the alien case, it is not wrong to steal repeatedly where each theft considered individually is for the sake of minimal personal benefits.

Therefore, deontology is false.

This argument rests on roughly the same intuitions as Huemer's paradox. So, as an argument which forms part of a cumulative case in addition to Huemer's argument, it doesn't really add more weight, it seems to me. I'll just briefly make 3 points here:

1. I'm skeptical that, if an action is wrong, it thereby follows that a conjunction which contains the same action in repetition will also be wrong, for roughly the same reasons I am skeptical of the Two Wrongs principle.
2. This hypothetical scenario is the most abstract and bizarre yet, so our intuitions here probably don't count for much.
3. There may be a tu quoque concern here. Suppose you randomly select someone from the global population and give them a medical needle jab. The action is wrong: you caused them minor pain and didn't benefit them in any way. But you keep doing this a billion times, and doing so statistically guarantees that at least 1 person will benefit from a jab, having horrifically painful diseases prevented, ones many orders of magnitude worse than the pain caused by all the jabs combined. Each individual act is wrong, and you are not in a position to know of any of the acts, on an individual basis, that it isn't wrong (indeed, statistically, each probably is wrong), yet the conjunctive act is right on consequentialism. So, if Matthew accepts the principle that if an action is wrong, the same action in repetition must also be wrong, which he needs for 2 to be motivated, it looks like his own position is under some heat.

Argument 8

We have lots of scientific evidence that judgments favoring rights are caused by emotion, while careful reasoning makes people more utilitarian. Paxton et al 2014 show that more careful reflection leads to being more utilitarian.

People with damaged vmpcs (a brain region responsible for generating emotions) were more utilitarian (Koenigs et al 2007), proving that emotion is responsible for non-utilitarian judgements. The largest study on the topic (Patil et al 2021) finds that better and more careful reasoning results in more utilitarian judgements across a wide range of studies.

Ok, so this is one I have a lot less to say about; I'm not all that familiar with the research on the topic. It is case by case, I'd imagine. A general hurdle for a lot of these studies is that there are relevant differences between the cases presented, and a lot of the results might be accounted for by people reacting emotionally to those relevant differences. It could be that the same people use different parts of the brain when making judgements about different scenarios, utilitarians and deontologists alike, rather than it just being the case that utilitarians are cold rational thinkers and deontologists emotional thinkers.

I also have my doubts that the positive correlation between damage to the pre-frontal cortex and utilitarian judgement says much in utilitarianism's favor. It's not implausible that emotional ability allows agents to better empathize, and is an important part of moral reasoning.

But at the end of the day, if the studies do overwhelmingly show that utilitarians are generally more rational than deontologists in their judgements, so be it; I'll concede that this is a point in utilitarianism's favor, albeit a non-decisive one.

Argument 9 

(1) Deontic constraint (for reductio): Protagonist acts wrongly in One Killing to Prevent Five, and ought instead to bring about the world of Five Killings.

(2) If an agent can bring about W1 or W2, and it would be wrong for them to bring about W1 (but not W2), then W2 ≻ W1. (key premise)

(3) Five Killings ≻ One Killing to Prevent Five. (from 1, 2)

(4) One Killing to Prevent Five ≻≻ Failed Prevention. (premise)

(5) Failed Prevention ⪰ Six Killings. (premise)

(6) Five Killings ≻ One Killing to Prevent Five ≻≻ Failed Prevention ⪰ Six Killings. (3 - 5, transitivity)

(7) It is not the case that Five Killings ≻≻ Six Killings. (definition of ‘≻≻’)

Contradiction. (6, 7, transitivity)
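As I understand the derivation, the contradiction is supposed to fall out as follows: since the chain in (6) contains a '≻≻' link, the chaining principles are meant to yield Five Killings ≻≻ Six Killings — if A ≻ B, B ≻≻ C, and C ⪰ D, then A ≻≻ D. But (7) denies exactly this, since Five Killings and Six Killings differ by only a single killing.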

I reject (2). I reject that the rightness/wrongness of an act is a function of the world-states that the action brought about or could have brought about. Rather, it is a function of the reasons/principles which governed the act. (2) seems pretty much tantamount to a denial of deontology.

(I) Denying (2)—e.g. by claiming that we should prefer wrong actions to be performed—would rob deontic verdicts of their normative authority.

But I don't think denying (2) robs them of their normative authority. At the very least, I'd need to see an argument for that claim for this argument to move me (or any non-consequentialist, really). I also argued, in the previous article, wherefrom deontic verdicts derive their normative authority: never did I appeal to the total goodness of world-states, but only to the unconditional value of humanity. Until the charge of loss of normative authority is motivated, I have nothing further to say here.

Argument 10

Deontology holds that there are constraints on what you should do. But this produces a very strange result—it ends up forcing the conclusion that sometimes putting perfect people in charge of decisions makes things worse. Suppose that a person while unconscious sleepwalks and is about to kill someone and harvest their organs to save five people. Then they wake up and have a choice of whether to actually harvest their organs or not.

Given that harvesting organs makes things better—it causes one death instead of five—it would be bad that they woke up. But this is strange—it seems like putting people who always do the right things in charge of a situation can’t make things worse.
This is another one where I just don't feel much force. I already believed that there are cases where an agent doing the right thing can make things overall worse (refusing to kill 1 to save 5 being an example). So, given this, I'm not sure what is added by pointing out that giving a morally perfect agent control can make things worse. It's just entailed by both my normative theory and my pre-theoretic moral judgements. Not a surprising result.

Perhaps you might think it is a strange result, but I also think there is a similar result for utilitarianism which seems equally, if not more, strange. There seem to be situations where utilitarianism is self-effacing. Humans are very bad at judging even some of the immediate consequences of their actions, let alone the long-term consequences. So it seems very plausible that humans acting like, or being, deontologists and following a set of deontological rules would be overall better, at least in some cases, than leaving humans to their own devices to figure out what course of action will maximize well-being/minimize suffering. In which case, making utilitarianism publicly accepted would be bad on utilitarianism. But that's an odd result: surely it can't be wrong for everyone to hold to, and attempt to act in accord with, the correct moral theory.

Argument 11

  1. If deontology is true, it’s impermissible to kill one person to harvest their organs and save five.
  2. If it’s impermissible to kill one person to harvest their organs and save five, it’s impermissible flip the switch in the trolley problem.
  3. It’s not impermissible to flip the switch in the trollye problem.
Therefore, deontology is false.


I reject 2, but let's examine Matthew's argument for it.

Suppose that flipping the switch was less wrong than pushing the guy off the bridge. In this case, we should expect that, if one is given the choice between the two actions, they ought to flip the switch. After all, it’s better to do less wrong things rather than more wrong things.
Thus, the argument is as follows.
  1. If flipping the switch is significantly less wrong than pushing the man, then if given the choice between the two options, even if flipping the switch is made less choiceworthy, one ought to flip the switch.
  2. If given the choice between the two options, even if flipping the switch is made less choiceworthy, one ought not flip the switch.

Therefore, flipping the switch is not significantly less wrong than pushing the man.

I'm not sure what 1 means. What factors are we considering when we say an action is made "less choiceworthy"? This is vague to me. Matthew claims in the article that this premise is obvious, but, as we've seen, he says a lot of things are obvious that I think are false. So I'm unsure whether to accept 1.

I probably reject 2, depending on what is meant, but let's let Matthew disambiguate:

Imagine the following scenario. There’s a train that will hit five people unless you do something. There’s a man standing above, on a bridge. There’s a track that you can, by flipping a switch, redirect the train onto, which leads up to the man on the bridge above the track. However, the train moves very slowly, so if you do that, the man will be very slowly and painfully crushed.

However, you have another option. You can, by pushing the man, cause him to die painlessly as he hits the tracks, and he’ll stop the five people. Which should you do?

Now, the deontologist is in a bit of a pickle. On the one hand, they think that you should flip the switch in general to bring about one death while saving five, but you shouldn’t push a man to save five. But in this case, it seems obvious that, given that the man would be far, far better off, it’s much better to push him than it is to flip the switch.
So, I lean towards thinking pushing the fat man is immoral and pulling the trolley lever is not. The main reason is that in the footbridge case you are using the fat man as a means, while in the trolley case you're not using the person on the track as a means. What happens in each situation if the man isn't there? In the trolley case, you can still pull the lever and save the five. With the fat man, you can't. So you're using the fat man as a means to an end, but not the guy standing on the other track. Another important reason has to do with the Doctrine of Double Effect: in the trolley case, your intent isn't (necessarily, at least) to kill someone; the goal is to switch the track to save the five. This is what Anscombe highlights as a problem.

With this in mind, what to think of the scenario? It depends. If the agent is deliberating between two ways to kill the fat man, then it's immoral to do either, since in both cases he'd be intending to kill the fat man and use him as a mere means. If, on the other hand, the agent sees the switch and pulls it with the goal of saving the five, that's permissible. Now, in fairness, I can see how this is intuitively a weird result, so I do see it as having some intuitive pull. However, I don't think it's all that strong when we consider the morally salient differences between the two actions qua actions, as opposed to the two states of affairs qua states of affairs. All in all, I reject 2 (in both arguments).


Argument 12

It doesn’t really matter if you or someone else violates someone’s rights. From, to quote Sidgwick, the “point of view of the universe,” it seems unimportant whether it’s you violating rights.

Indeed, it seems very clear that in the organ harvesting case, it’s more important that five don’t die than it is that you don’t get your hands dirty. This seems like an unassailable moral datum. But morality should pick up on the things that really matter.

This intuition is completely at home with my view. I grant that whether you or someone else acts wrongly and violates someone's rights is not better or worse for the world. Where I disagree is that morality "should pick up on the things that really matter" in the sense Matthew means. Morality is not axiology. It tells us what actions we should and shouldn't do, not what is truly valuable from the point of view of the universe, if there even is such a thing, which I doubt. But even if there is, why should we expect moral judgements about the rightness or wrongness of human actions to track it? It seems we should expect this only if we are, antecedently, consequentialists, which I'm not.

Argument 13 & 14

Moderate deontology holds that rights are valuable, but not infinitely so. While a radical deontologist would be against killing one person to save the world, a moderate deontologist would favor killing one to save the world. Extreme deontology is extremely crazy. An extreme deontologist friend of mine has even held that one shouldn’t steal a penny from Jeff Bezos to prevent infinite Auschwitzes. This is deeply counterintuitive.
Depending on what is meant, I'm probably an 'extreme deontologist', if what that amounts to is denying threshold deontology and accepting that some acts, such as murder, torture, and rape, are categorically forbidden. Though I'm not as extreme as most others whom we'd call 'extreme deontologists': yes, you should steal from Bezos; yes, you should lie to the Nazi. In any case, I'm extreme enough that the explosion problem doesn't apply to me. And since I'm not a 'moderate deontologist', argument 14 fails to apply to me as well. I'll leave this as an avenue for Matthew to attack my view on the grounds of 'craziness'. Moving on.


Argument 15

There are various explanations of our deontological intuitions as being produced by various biases. There are lots of biases that we should expect to make us more likely to believe something like deontology even if it is false.

  1. Status quo bias. This describes people’s tendency to prefer things as they are, in fact, going. For example, if you ask people whether they think a hypothetical person should invest in A or B, their answer will depend on whether you tell them that they’re actually investing in A or B. But this explains pretty much all of our deontological intuitions. All of them are intuitions about non-interference—about keeping things as they are. These are thus prime candidates for things that can be chalked up to status quo bias.
  2. Loss Aversion. This describes a tendency for people to regard a loss of some good as more significant than just losing out on a gain. Losing five dollars is seen as worse than not gaining an extra five dollars that one would have otherwise. But people being averse to causing extra losses, combined with the idea that various losses in, for example, Bridge are incorporated into their deliberation, means that they will be likely to oppose pushing the person.
  3. Existence Bias. People treat the mere fact that something exists as evidence that it is good. But this intersects with status quo bias and explains why we don’t want to change things.


I'm willing to concede this is some evidence for utilitarianism over deontology. Nonetheless, a few points are worth making here.

For one, I have my doubts that these biases (at least completely) explain all, or even most, deontological intuitions. Intuitions about the overriding wrongness of violating autonomy/self-determination, of torture, etc., even in cases where the outcomes are overall worse, as well as the intuition that the wrongness of, say, promise-breaking is over and above the badness of the outcomes, don't seem to be explained by any of these biases.

For two, this seems to cut both ways to some extent. Utilitarians seem to ignore many seemingly salient factors in ethical decision-making; they judge actions only by the state of affairs brought about. In general, all humans are prone to bias, and it is doubtful that utilitarianism truly solves this, rather than merely being a framework for cloaking many of our pre-existing moral biases under the guise of mathematical calculation and certainty.

For three, most of the people we would expect to be most reflective about the subject, and thus least prone to bias, as well as most knowledgeable, e.g. professional normative ethicists, are non-consequentialists, with more being deontologists than adherents of any other particular moral theory, and with consequentialism the least popular of the main three (and only a subset of consequentialists are hedonic act utilitarians, Matthew's view). (Though I think the divide between consequentialism and non-consequentialism is much more significant than the divide between deontology and virtue ethics, which I take to collapse in many ways.) Of course ethicists are prone to bias too, as all humans are, but I think this consideration is enough to greatly offset the force of Matthew's point.


Conclusion


Most of Matthew's arguments have little to no force, or simply fail (as in the case of the paralysis argument), or apply equally well to consequentialism, and they rely on the same sorts of intuitions about extremely bizarre hypotheticals, which we shouldn't expect to be reliable. The best arguments and considerations raised here are, at most, some evidence against non-consequentialism, but they are not nearly as devastating as Matthew claims. By contrast, my article provided direct arguments for Kantian constructivism's parsimony and explanatory power, as well as serious epistemic and practical concerns for consequentialism; and the part of my case that did rest on intuition focused on ordinary cases rather than abstract reasoning about strange hypotheticals. I conclude that we should reject consequentialism and embrace deontology based on the arguments provided so far.

