The status of intuition here is going to be a major methodological disagreement I have with Matthew that I'll push back on.
Matthew says:
The intuitions I appeal to are generally some combination of very obvious and the types of intuitions that are most reliable—see Huemer’s article for an elaboration on which intuitions are most reliable.
For two, there's pessimistic induction. Humans have a pretty bad track record with intuitions, and with proper reasoning, especially in the moral domain. Some examples: astrology, various religious beliefs, genocide, slavery, eye-for-an-eye justice, factory farming (Matthew agrees with this one), etc.
Finally, evolution selects for beliefs that benefit survival, reproduction, and ordinary social cooperation. Correct beliefs about strange hypotheticals do not yield any of these benefits, so there is no evolutionary reason to expect we would have true beliefs about them.
Given that what the theoretical virtues are is contested, and I’ve given a parsimonious view of morality—utilitarianism—it’s unclear what else would be part of an opening statement. Any argument with premises appeals to intuitions—why should we accept a premise beyond seemings?
For these reasons the lion's share of Matthew's case, from the very start, strikes me as, at best, incredibly weak defeasible evidence against deontology.
Argument 1
You have a duty to defuse any of the bombs you are able to defuse, and I don't even think that's for consequentialist reasons. Suppose there is a bomb in your vicinity, say one of the ones that was planted. That fact confers an obligation on you to defuse it; to ignore it would be immoral, as it violates your greatest obligation, respect for humanity as such.
His response is as follows:
But on deontology, you should regard your own rights violations as more significant, in your decision making, than others'. After all, you shouldn't violate one person's rights to prevent ten identical rights violations. Thus, the verdict about this case stands.
If you defuse your bomb, you will, all in all, be responsible for only an omission, and it's an omission when there's some other action that you took instead that was praiseworthy. If you defuse two other bombs, you will be responsible for planting a bomb and killing someone. The second is clearly worse on this account.
Arguments 2 & 9
TT first says "Well, as a deontologist, I'm not committed to a position about what people "should hope" or "should prefer"." Perhaps qua deontology he is not committed to a position about what we should prefer, but if there are reasons to think that one should prefer one takes non-deontological actions—which there are, as I argued—then the deontologist has to think that perfectly moral beings should think "oh darn, this person's doing the right thing—I really hope they stop doing the right thing." This is a very strange consequence of a theory.
Another way of understanding it is this: when we calculate the total value of each state of affairs from the third person, the state of affairs involving one person being killed to prevent two deaths is all-things-considered better than the state of affairs wherein two killings happen. So, for this reason, you ought to prefer that the one killing happen. Yet the deontologist thinks you shouldn't kill one to prevent two killings. If this is all that's being said, then what's being said doesn't seem to be much more than a restatement of deontological commitments.
Aside from the rather strange locution, this mistakes the argument. There’s no reason why a third party has to prefer a better state of affairs—they could, in theory, want people to be deontologists. If deontology requires saying that you shouldn’t harvest organs, but you should really hope that other people harvest organs, all the worse for deontology.
Could a deontological Protagonist prefer One Killing to Prevent Five over Five Killings, whilst still maintaining that it would be wrong for her to kill? Such a combination of attitudes seems of questionable coherence. For consider the other emotional states and attitudes that go along with all-things-considered preferences. In regarding One Killing to Prevent Five as preferable, it seems that Protagonist would also need to hope that she chooses to realize this state of affairs, and subsequently feel regret and disappointment if she does not. This seems incompatible with regarding that choice as truly wrong (at least in any sense that matters, implying authoritative and decisive normative reasons to avoid so acting).

Our concept of mattering seems intimately connected with preferability or what's worth caring about. So even if deontic constraints could be coherently combined with utilitarian preferences, the upshot would seem to be that deontic constraints don't really matter. Sure, the deontologist may maintain that there is an "obligation" not to kill. But this would seem a merely verbal victory if it turns out that we shouldn't really want agents to fulfill such obligations, and that what's truly preferable is to kill one to save five. Put another way: if we're all agreed that maximizing happiness is what we should most want and care about, then any residual disagreements about "obligation" would seem no more threatening to the utilitarian than residual disagreements about what's "honourable" (when we all agree that we've no reason to care about "honour" as such).

So, if deontic constraints are to truly matter, we cannot generally prefer that they be violated.
To make the point more explicit, it seems like I can run something like a reverse of the argument here that would be equally uncompelling, nigh question-begging against the utilitarian.
"Our concept of rightness and wrongness seems to be intimately connected with the intentions and maxims that one incorporates into one's action. Sure, the consequentialist may maintain that it is "worse" to not kill one to save five. But this would seem a merely verbal victory if it turns out that there are right-making and wrong-making features of actions over and above the states of affairs brought about by them, and what is truly right is abiding by Kantian duties."
Argument 3
Matthew says:
Well perhaps you feel under no rational pressure to accept it, but this has nothing to do with whether there are good reasons to accept it. This is a sweeping principle that has deep intuitive appeal—of course you should undo wrong things. They’re wrong, so you should undo them.
It's also supported by an appeal to a wide range of cases—if you set a bomb in a park and you can remove it at no cost, you should, because setting bombs in parks is wrong. If the principle were false, there should be at least one counterexample, which TT didn't produce.
The weak reversal principle is phrased in a misleading way. Of course if you do a wrong thing, undoing it shouldn’t necessarily be done. If, for example, the only way to get rid of the bombs you planted earlier is to massacre 700,000 Egyptian women, you shouldn’t do it. But if you can undo it at no cost, you obviously should.
But then this wouldn't be holding all else equal, which is what's stipulated by the reversal principle.
Reversal Principle: if an action is wrong to do, then if you do it, and you have the option to undo it before it has had any effect on anyone, you should undo it.
Suppose you harvested one person's organs to save 5, and then you realize one of the 5 you saved is a depraved serial killer who will kill, rape, and torture many more victims thanks to your action. So, all things considered, the action was wrong on consequentialism. Suppose you can undo it, but the act of doing so has massive causal ramifications, inscrutable to you, which would eventually lead to the birth of Super Hitler, who will initiate a global nuclear holocaust. Strange scenario, but there you have it: a case where undoing a wrong action would be wrong on consequentialism.
He replies;
But this plainly rests on an equation of subjective and objective wrongness. The principle only applies in cases of objective wrongness, where it’s a thing you really shouldn’t have done, not just a thing you thought at the time that you shouldn’t have done. Thus, of course you should sometimes undo subjectively wrong things, if you later discover that they aren’t wrong.
Argument 4
TT says they only seem plausible at first blush, but I don't think this is true. These really strike me, when I reflect, as basic features of rationality, far more reliable than the emotional reaction I feel to a case like organ harvesting. They're also more trustworthy because they apply to more cases—if they were false, they'd have clear counterexamples.
Matthew replies here:
I assume he meant two wrongs, because his comments only make sense when applied to that. Obviously things can have emergent properties, but it doesn't seem like, if two actions are wrong explicitly conditional on the existence of the other, it can be wrong to do them both.
I also gave reasons to distrust our intuition about the torture transfer, given that we want to accept both principles. Matthew responds as follows:
TT objects to the scenario (which involves two dials that can each reduce another's torture by some amount by causing someone else half as much torture) as strange. But it's no stranger than the trolley problem—in the real world, fat men don't stop trains.
But we can stipulate that neither of them is given the opportunity to consent. In that case, it's extremely obvious that you should decrease both of their tortures.
Argument 5
But this principle doesn’t solve it, because it’s not long-term, unintended, and unforeseen. After this argument is presented, it’s no longer unforeseen—now we all know that we are constantly causing tons of rights violations.
TT next tries to refute this by saying that it’s not foreseen in that “The agent performing the act did not foresee any particular bad outcome that was to occur as a result of the act.” But this is totally irrelevant—in the coinflip case, even if you don’t foresee any particular bad act, if you know your flip has a 50% chance of killing one and a 50% chance of saving someone, it still seems prohibited by deontology. So we’re back to square one.
But these also apply to the coinflip scenario. No one is being used as a mere means—it's just a long-term side effect, and the intent is to get money.
In any case, I think Matthew, as a consequentialist, is not appreciating as relevant the difference between how we assess a particular moral action and how we assess whether one has a duty to prevent something. These are two sides of the same coin for the consequentialist, but radically different for the non-consequentialist.
When we say you have a positive reason to prevent some action, I think the only thing that matters (assuming all else is held equal and you aren't violating duties) is the consequences: you prevent something because you don't want some undesirable state of affairs to obtain. Whereas when we are assessing moral actions like killing one to prevent five, we are looking at a bunch of things; most importantly, we ask whether it violates the formulas of the CI, and apart from that, our assessment is very intention-based and character-based. So it's really no surprise that we might judge some action as wrong for non-consequentialist reasons, while also holding that we shouldn't prevent it, for consequentialist reasons. I think once this is made clear, the weirdness, which was Matthew's only gripe, is dissolved.
I'm not sure if I quite understand what's being said, but I'd imagine that most of the jabs are objectively wrong, though many are maybe subjectively right—it depends on the details. But if each of them were wrong, then the conjunction would obviously be wrong.
But I don't think that most ethicists are very good judges of this. Very few of them have read the serious arguments—from Huemer, Chappell, and so on—against deontology. Very few of them realize the utterly bizarre and appalling implications of deontology. And Parfit was on our side, and was sufficiently brilliant to offset the rest of the ethicists put together.
TT says he doesn’t really feel the force of this—this strikes me as bizarre. It really is strange to think that it can be a terrible thing that perfectly good people are put in charge of various things. If they always do the right thing, it’s very strange that putting them in control of something is unfortunate.
It is counterintuitive when put like that, but when we look at the specifics of the case in question it's much less counterintuitive. We are looking at a case where a morally perfect person loses control of their actions and, in this state, will kill 1 to save 5. If they are not put in control, then 1 is killed to save 5, which is a better state of affairs, since fewer people die. If they are put back in control, however, they will not violate the CI, and so will not kill the 1 to save 5, because they are a perfect Kantian agent. But that's a worse state of affairs, because more die. This is totally expected, though, since wrongness is not judged by the states of affairs brought about, but by the maxims which govern an action. It makes things worse qua state of affairs that they are put in charge and do not kill the one, but killing the one when given control would have been wrong for reasons that don't have to do with the resulting state of affairs, which shouldn't be surprising, as it's something deontologists were already committed to.
He also says:
All moral theories will be self-effacing for some people—if Jon would kill himself if convinced of any moral theory because it would make him sad, one shouldn’t promulgate that moral theory.
I argue that there's not a big gulf between bridge and trolley, where you can either flip the switch to kill one and save five or push the man to kill one and save five. I do this by sketching out a scenario where a man is standing on top of a bridge and you can either flip a switch to redirect the train up to him, killing him more painfully, or push him, killing him less painfully. It seems you should push him—this would be better for him and worse for no one. But this shows, as I argue, that there isn't a big difference between the two acts and, if, as most agree, flipping the switch is fine, so too is pushing the man. TT ends up biting the bullet and saying that you should flip the switch, not push him. But this is absurd—it makes him die more painfully and benefits no one. Any moral system with a formulaic set of rules that makes people worse off for no reason will be false. TT doesn't think this is very unintuitive—I think that the principle that you shouldn't harm one person to benefit no one is very plausible, so I disagree. But he doesn't agree that it's a bit weird and is at least some reason to reject deontology.
I am not saying we should "harm someone to benefit no one." That may be the result in terms of world-states, but I am a non-consequentialist. The reason I think one is wrong and the other is not is that one is directly and intentionally harming someone and the other isn't. Pushing the man to save five is you killing him; pulling the switch to save five is not you directly and intentionally killing him, though it has the side effect of killing him. The conflation of these two just is consequentialism, so it's no surprise Matthew doesn't see the difference as relevant. That said, I think that pulling the lever being permissible while pushing the fat man is not is still a bit of a weird, unintuitive result, but it's not a sufficient reason to reject deontology.
Argument 12
This is, once again, a difference of intuitions. Still though, I think it's worth saying something about the intuitions that motivate consequentialists. The consequentialist picture seems attractive because it picks up on what really matters. Not just in an axiological sense—it doesn't seem genuinely important whether it's you or someone else who violates the rights of another. This seems unattractive about deontology—an analogy that Richard has given, which I'll co-opt, is that of norms of honor. In an honor society, there may be intuitions about acts being dishonorable. But this seems problematic—it doesn't seem like those things really matter. Morality is left impotent, inert, and unimportant if it's not about what's really important.
Arguments 13 & 14
This is in regard to the paradoxes of moderate deontology; I am not a moderate deontologist, so they don't apply to me. Matthew starts by saying that absolute deontology is insane, so it is false. Thus, he thinks you shouldn't kill one person to prevent an infinite number of people from being tortured in increasingly horrible ways. But this verdict is, on its face, absurd. So, his view is false. If we imagine that you committing homicide was the only way to prevent everyone from being tortured in the worst ways imaginable for all of eternity, it's very obvious you should do that.
Additionally, absolute deontology has issues with risk. Huemer explains this well. If you say that one reason—namely, your reasons not to violate rights—is infinitely more important than your reasons to promote the good, then you should never do anything, because anything risks violating rights, and that’s infinitely more important than anything else.
But I don't think this is effective against me. I only think that specific actions which involve directly violating rights in extreme ways, like murder, torture, and rape, are absolutely wrong. I don't think actions which merely risk leading to rights violations as an indirect outcome are absolutely wrong. As for the examples Huemer talks about: I don't think promise-breaking is absolutely wrong, and I don't think potentially imprisoning innocents for crimes they did not commit is absolutely wrong, although I do think killing them is, which is, in part, why I'm against the death penalty. I don't think that's a huge bullet to bite.