Thursday, March 9, 2023

Final Reply to Bentham's Substack

This will be my final response in the cross-blog Deontology vs. Consequentialism debate I am having with Bentham's Substack. Needless to say, read all previous articles before reading this one.

Intuitions


The status of intuitions marks a major methodological disagreement between Matthew and me, and it's one I'll push back on.

Matthew says:
The intuitions I appeal to are generally some combination of very obvious and the types of intuitions that are most reliable—see Huemer’s article for an elaboration on which intuitions are most reliable.
Many of the intuitions Matthew appeals to are intuitions about what we should do in very abstract scenarios, disconnected from ordinary decision-making and reasoning and utterly unlike anything a person is likely to encounter, or about broad principles which we have no way of knowing are universally applicable. Why should we expect these intuitions to be reliable? Further, why should we expect the correct moral theory, which is supposed to guide our actual day-to-day ethical decisions, to track our intuitions about bizarre hypotheticals? I don't think we have good reason to expect either of these things. In fact, I think we have reason to expect our intuitions not to be reliable here.

For one, epistemic humility: recognizing how limited our brains and our general cognitive situation are. We have only a meager perspective and experience; we have no way of directly accessing the truth conditions for what should be done in bizarre hypotheticals (like aliens experiencing nigh-infinite agony and stealing stones to lower it), or for whether broad principles of moral action are universally and necessarily true (like the reversal principle, individuation independence, etc.), at least through any kind of causal process or a priori deduction.

For two, pessimistic induction. Humans have a pretty bad track record with intuitions and proper reasoning, especially in the moral domain. Some examples: astrology, various religious beliefs, genocide, slavery, eye for an eye, factory farming (Matthew agrees with this one), etc.

Finally, evolution selects for beliefs that benefit survival, reproduction, and ordinary social cooperation; correct beliefs about strange hypotheticals yield none of these benefits, so there is no evolutionary reason to expect we would have true beliefs about them.

Given that what the theoretical virtues are is contested, and I’ve given a parsimonious view of morality—utilitarianism—it’s unclear what else would be part of an opening statement. Any argument with premises appeals to intuitions—why should we accept a premise beyond seemings?
I've disputed that utilitarianism is parsimonious in my first article. I have no problem with appealing to intuitions to a limited extent; what I have a problem with is a case that rests only on brute intuition, and not just any intuitions but a subset of a subset of them: our moral intuitions regarding off-the-wall hypotheticals. Also, I believe a good deal of the intuitions Matthew appeals to are theory-laden, that is, not independent of prior consequentialist theoretical commitments, which we'll get to later.

For these reasons the lion's share of Matthew's case, from the very start, strikes me as, at best, incredibly weak defeasible evidence against deontology. 


Argument 1

Matthew responds to this excerpt in my original article:
You have a duty to defuse any of the bombs you are able to defuse, and I don't even think that's for consequentialist reasons. Suppose there is a bomb in your vicinity, say one of the ones that was planted; that fact confers an obligation on you to defuse it. To ignore it would be immoral, as it violates your greatest obligation: respect for humanity as such.

His response is as follows:

But on deontology, you should regard your own rights violations as more significant, in your decision making, than others. After all, you shouldn’t violate one person’s rights to prevent ten identical rights violations. Thus, the verdict about this case stands.

Your rights violations are only more morally relevant when evaluating your direct actions of violating rights, not when you are picking between actions which will prevent rights violations. On consequentialist assumptions there isn't a relevant difference, but as a non-consequentialist I think there is.

When we judge actions on my view, we're asking whether you are acting on a maxim that can be rationally universally endorsed, whether you are using someone as a mere means, and so on. The act of defusing the two bombs rather than one does not violate either categorical imperative (CI) formula, so it cannot be wrong. And since the act cannot be wrong, and you know defusing as many bombs as you can will save more people, it's completely consistent with deontology that you have strong pro tanto reason to defuse the two rather than yours.

If you defuse your bomb, you will, all in all, be responsible for only an omission, and it’s an omission when there’s some other action that you took instead that was praiseworthy. If you defuse two other bombs, you will be responsible for planting a bomb and killing someone. The second is clearly worse on this account.
You'd be responsible for planting the bomb regardless; the bomb going off doesn't make you more responsible. If you choose to defuse your bomb, you'd still be responsible for causing a state of affairs where you had to defuse it, preventing you from also defusing the other two. What I take this to mean is that the separate act of planting the bomb was immoral, and the act of defusing the bombs should be judged separately. You aren't blameworthy for defusing as many bombs as you could; in fact, I think that's the right thing to do.


Arguments 2 & 9

This concerns the "paradox" that deontologists should hope people do the wrong thing.

Matthew writes:
TT first says “Well, as a deontologist, I'm not committed to a position about what people "should hope" or "should prefer".” Perhaps qua deontology he is not committed to a position about what we should prefer, but if there are reasons to think that one should prefer one takes non-deontological actions—which there are, as I argued—then the deontologist has to think that perfectly moral beings should think “oh darn, this person’s doing the right thing—I really hope they stop doing the right thing.” This is a very strange consequence of a theory.
I take this to be shifting to the second interpretation which I address: you're not showing an incoherence internal to deontology, but rather an alleged incoherence between deontological commands and what we otherwise have good reason to think we should prefer. I do happen to prefer the scenario where one is killed to prevent five to the scenario where five are killed, though I don't think it's an objective fact of the matter that I should prefer that. I'm not a realist about these preferences; as a constructivist, I just think there are binding norms which constrain actions.

Matthew then replies to this passage of mine:
Another way of understanding it is that we calculate the total value of each state of affairs from the third person. The state of affairs involving one person being killed to prevent two deaths is all-things-considered better than the state of affairs wherein the two killings happen. So, for this reason you ought to prefer the killing to save 5 to happen. Yet the deontologist thinks you shouldn't kill one to prevent two killings. If this is all that's being said, then what's being said doesn't seem to be much more than a restatement of deontological commitments.
Stating:
Aside from the rather strange locution, this mistakes the argument. There’s no reason why a third party has to prefer a better state of affairs—they could, in theory, want people to be deontologists. If deontology requires saying that you shouldn’t harvest organs, but you should really hope that other people harvest organs, all the worse for deontology.
They could do that, but then they'd be judging it as an action, not as an event or state of affairs. If the state of affairs is better, they would judge that they should prefer that it happen, but that doesn't mean we'd judge the action as right. When we judge states of affairs as good or bad, we do something like adding up the total ultima facie goods and evils instantiated in the world. When we judge actions as right or wrong, on the other hand, we look at the maxims or reasons which governed the act, whether it abides by duties of non-maleficence and beneficence, the character traits cultivated, whether it accords with constitutive norms of rational agency, and so on.

We can also think of clear examples here. A doomsday device is a bad doomsday device if it conks out and fails to perform its explosive function; it is a better state of affairs that the explosion doesn't happen and devastate humanity, but that doesn't change our judgement of the doomsday device qua doomsday device. Similarly, we make a distinction between judging actions qua actions and judging actions qua resulting states of affairs. I think the intuition here basically dissolves when the difference is made clear. Indeed, it seems to be an intuition that relies on ignoring this distinction and importing consequentialist assumptions I don't accept.

Matthew goes on to quote an excerpt from Richard Chappell:

Could a deontological Protagonist prefer One Killing to Prevent Five over Five Killings, whilst still maintaining that it would be wrong for her to kill? Such a combination of attitudes seems of questionable coherence. For consider the other emotional states and attitudes that go along with all-things-considered preferences. In regarding One Killing to Prevent Five as preferable, it seems that Protagonist would also need to hope that she chooses to realize this state of affairs, and subsequently feel regret and disappointment if she does not. This seems incompatible with regarding that choice as truly wrong (at least in any sense that matters, implying authoritative and decisive normative reasons to avoid so acting).

Our concept of mattering seems intimately connected with preferability or what’s worth caring about. So even if deontic constraints could be coherently combined with utilitarian preferences, the upshot would seem to be that deontic constraints don’t really matter. Sure, the deontologist may maintain that there is an “obligation” not to kill. But this would seem a merely verbal victory if it turns out that we shouldn’t really want agents to fulfill such obligations, and that what’s truly preferable is to kill one to save five. Put another way: if we’re all agreed that maximizing happiness is what we should most want and care about, then any residual disagreements about “obligation” would seem no more threatening to the utilitarian than residual disagreements about what’s “honourable” (when we all agree that we’ve no reason to care about “honour” as such).

So, if deontic constraints are to truly matter, we cannot generally prefer that they be violated.
But I don't have much more to add that I haven't said above. I think the point of a moral theory is to tell agents what they should and should not do, not to tell you what matters more. It seems like we should only expect obligations to track what 'ultimately matters more' or what is 'preferable' from the third person if we're antecedently consequentialists.

To make the point more explicit, it seems like I can run something like a reverse of the argument here that would be equally uncompelling, nigh question-begging, against the utilitarian.


"Our concept of rightness and wrongness seems to be intimately connected with the intentions and maxims that one incorporates into one's action. Sure, the consequentialist may maintain that it is "worse" not to kill one to save five. But this would seem a merely verbal victory if it turns out that there are right-making and wrong-making features of actions over and above the states of affairs brought about by them, and what is truly right is abiding by Kantian duties."


Argument 3
 

This is the argument that deontologists shouldn't reverse wrong actions, which Matthew thinks is weird. I rejected the reversal principle, which stated that if an action is wrong and you have done it, you ought to undo it.

Matthew says:
Well perhaps you feel under no rational pressure to accept it, but this has nothing to do with whether there are good reasons to accept it. This is a sweeping principle that has deep intuitive appeal—of course you should undo wrong things. They’re wrong, so you should undo them.
But Matthew has offered no convincing reason to accept the strong reversal principle, aside from an appeal to an intuition which I do not share and which I think collapses on further inspection.
It’s also supported by an appeal to a wide range of cases—if you set a bomb in a park and you can remove it at no cost, you should, because setting bombs in parks is wrong. If the principle is implausible, there should be one counterexample, which TT didn’t produce.
Showing that it applies to one case doesn't come close to showing it applies to all cases. Even if I didn't, or was unable to, provide a counterexample, that wouldn't make Matthew's claim justified. Recall the dialectical context: Matthew is mounting an argument against deontology, and I'm telling him that, as a deontologist, I don't accept a principle the argument relies on. It is therefore Matthew's job to motivate the principle, not mine to provide counterexamples.

In any case, I take the following to be an obvious counterexample: taking back out of the 5 people the organs you removed from one person, killing them, in order to save the 1 before he dies. It's wrong to kill the one person and extract their organs to save the 5, because it's murder, and it's wrong to go back and kill the 5 to save the 1 before it's too late, because that's also murder. I see nothing strange about this.

Matthew says:
The weak reversal principle is phrased in a misleading way. Of course if you do a wrong thing, undoing it shouldn’t necessarily be done. If, for example, the only way to get rid of the bombs you planted earlier is to massacre 700,000 Egyptian women, you shouldn’t do it. But if you can undo it at no cost, you obviously should.
But then this wouldn’t be holding all else equal which is what’s stipulated by the reversal principle.
This was not made clear in Matthew's original article; the reversal principle did not include a ceteris paribus clause. It stated:
Reversal Principle: if an action is wrong to do, then if you do it, and you have the option to undo it before it has had any effect on anyone, you should undo it. 

With such a clause, the principle is, I think, relatively trivial. But I would then reject that all else is held equal in these cases. Taking organs out of 5 people is obviously murder, so you shouldn't do it. In the fat man case, you shouldn't push the fat man, because that is directly murdering someone and using them as a mere means. You aren't blameworthy for the deaths of the 5, as that's just you not doing an impermissible action. If you save the fat man, however, you are performing an action which entails the death of the 5, and that's bad.

I also presented a tu quoque scenario where it seems consequentialism also says we shouldn't prevent wrong actions:
Suppose you harvested 1 person's organs to save 5, then you realize one of the 5 you saved is a depraved serial killer who will kill, rape, and torture many more victims thanks to your action. So, all-things-considered, the action was wrong on consequentialism. Suppose you can undo it, but the act of doing so has massive causal ramifications, inscrutable to you, which would eventually lead to the birth of Super Hitler, who will initiate a global nuclear holocaust. Strange scenario, but there you have it: a case where undoing a wrong action would be wrong on consequentialism.

He replies:

But this plainly rests on an equation of subjective and objective wrongness. The principle only applies in cases of objective wrongness, where it’s a thing you really shouldn’t have done, not just a thing you thought at the time that you shouldn’t have done. Thus, of course you should sometimes undo subjectively wrong things, if you later discover that they aren’t wrong.
This response confuses me. Surely Matthew, as a consequentialist, thinks only objective wrongness matters. In fact, that's a big part of his entire motivation for consequentialism: it "picks up on the things that really matter from the point of view of the universe" (Arguments 2, 9, and 12). People's subjective states don't matter; what matters is objective ultima facie goodness and badness. If an action is objectively wrong, it really ought not to have been done, tout court. So, if it is objectively wrong to save the 5 by killing the one, and objectively wrong to perform an action to undo its effects (both because of long-term ultimate outcomes), then the initial action was wrong, and it is also wrong to reverse it.

Argument 4

We are now on Huemer's paradox of deontology. If you want to know what that is, read the previous articles. I started by questioning one of the principles the paradox relies on. 

Matthew says:
TT says they only seem plausible at first blush, but I don’t think this is true. These really strike me, when I reflect, as basic features of rationality, far more reliable than the emotional reaction I feel to a case like organ harvesting. They’re also more trustworthy because they apply to more cases—if they were false, they’d have clear counterexamples.
It is fine that you have the intuition that these principles are true, but it seems really bizarre to me to call them "basic features of rationality"; one isn't committed to a contradiction or a logical mistake by rejecting them. They are not constitutive of rationality as such, the way inference rules, laws of logic, or basic epistemic norms are. In any case, I expressed reasons why skepticism of the "Individuation Independence" principle seems warranted.

Matthew replies here:
I assume he meant two wrong, because his comments only make sense when applied to that. Obviously things can have emergent properties, but it doesn’t seem like if two actions are wrong, explicitly conditional on the existence of the other, then it can be wrong to do them both.
Maybe I'm misunderstanding, but I thought "Individuation Independence" is the principle my skepticism applies to. I'm skeptical that whether an action is wrong cannot depend on its being one action or more. A conjunctive action may be right despite each individual conjunct action it contains being morally wrong. It seems that conjunctions of actions can have properties that differ from those of each conjunct. An easy example: at a football game, it's true of each individual person that if they stand up they'll see better, but it's false that if they all stand up they'll all be able to see better. I see no principled reason to think the same cannot be true of the moral features of actions.

Although, perhaps we can cast doubt on the Two Wrongs principle in a similar way. If you stand up, you'll be able to see better. This is also true conditional on Billy, James, and Sandra standing up on another bleacher, not obstructing your sight. However, it may be that if you all stand up, this will cause everyone else to follow, so you won't be able to see the game if all four actions of standing up are performed. Or another example: the probability that it is snowing outside, conditional on your friend wearing a blue shirt, is the same as the probability that it's snowing taken alone, but the conjunction of its snowing and your friend wearing a blue shirt has a lower probability than either conjunct. Something similar might be true of moral actions: a conjunctive action might be right despite each action being wrong conditional on the other, and the torture transfer may be one such example that Huemer discovered. It is, after all, not clear that we should take the torture transfer scenario to be a counterexample to deontology rather than a counterexample to the principles.
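To make the probability analogy concrete, here is a minimal Python sketch; the 0.3 and 0.5 figures are my own illustrative assumptions, not anything from the debate:

```python
# Hypothetical, independent events: snow and the friend's blue shirt.
p_snow = 0.3    # assumed P(it is snowing)
p_shirt = 0.5   # assumed P(friend wears a blue shirt)

# Probability of the conjunction under independence.
p_both = p_snow * p_shirt

# Conditioning on the independent shirt event leaves P(snow) unchanged...
p_snow_given_shirt = p_both / p_shirt   # = P(snow ∧ shirt) / P(shirt)
assert p_snow_given_shirt == p_snow

# ...yet the conjunction is strictly less probable than "snow" taken alone,
# so the conjunction has a property (lower probability) that conditioning
# on each conjunct alone would never reveal.
assert p_both < p_snow
print(p_snow_given_shirt, p_both)
```

The point of the sketch is only structural: a fact about a conjunction need not be recoverable from facts about each conjunct conditional on the other.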

I also gave reasons to distrust our intuition about the torture transfer, given that we want to accept both principles. Matthew responds:
TT objects to the scenario, which involves two dials that can reduce another’s torture by some amount by causing someone else half as much torture, as strange. But it’s no stranger than the trolley problem—in the real world, fat men don’t stop trains.
The fat man scenario itself may be kinda strange, but the intuition is about murdering one person to save a greater number of people, which is something that can easily be understood and is not wholly unlike situations we might actually encounter. The torture transfer scenario, by contrast, resembles no ordinary decision we should expect to encounter.
But in this case, we can stipulate that neither of them are given the opportunity to consent. In this case, it’s extremely obvious that you should decrease both of their tortures.
We'd still ask whether they would have consented had they been asked. If the answer is no, you're violating consent to bring more suffering upon them. It may be for the good of both in the long run, but we ordinarily think such actions are wrong. This, in conjunction with the previous point, I think lessens the force of the intuition.

Argument 5

This would be the paralysis argument against deontology, which I continue to think is the weakest of the arguments in Matthew's first post. Matthew tries to rescue it here; I will give it a thorough treatment and hopefully, finally, put this argument to bed.

I mentioned that, as deontologists, we shouldn't take consequences which are unforeseen, long-term, and indirect to be morally relevant. Matthew says:
But this principle doesn’t solve it, because it’s not long-term, unintended, and unforeseen. After this argument is presented, it’s no longer unforeseen—now we all know that we are constantly causing tons of rights violations.
Matthew misunderstands what is meant by "foreseen" here; I have in mind something much stronger. What I mean is that the agent knows some particular bad state of affairs (such as a serial killer being born) will occur as a result of the action they choose. But no such thing is known. After learning the argument, what you know is, at best, that there are some things that might happen as a far-off, long-term result which would not have happened had your act not been performed. But you have no idea what will happen, other than a vague idea that some of it would most likely involve rights violations, which you also have no reason to believe wouldn't occur absent your act. Nor do you know how your action is relevant to the causal chain; the causal relations your action bears to other events further down the line are completely inscrutable to you. By contrast, you know very well the causal effects of, say, stabbing someone in the stomach.

Matthew does kinda respond to this:
TT next tries to refute this by saying that it’s not foreseen in that “The agent performing the act did not foresee any particular bad outcome that was to occur as a result of the act.” But this is totally irrelevant—in the coinflip case, even if you don’t foresee any particular bad act, if you know your flip has a 50% chance of killing one and a 50% chance of saving someone, it still seems prohibited by deontology. So we’re back to square one. 

Note that I do not think lack of foresight on its own is enough to completely remove responsibility. Still, there are relevant differences here; Matthew is completely wrong to say that it is irrelevant. If you flip the coin, you are knowingly and intentionally causing an outcome that has a 50% chance of killing someone. When you perform ordinary actions, you are not knowingly causing an outcome with a 50% chance of killing someone; you have almost no idea what the probability is that your action will play a causal role in someone's death, and you certainly aren't intending that causal role. Nor do you have any reason to believe that your inaction wouldn't also lead to deaths down the line. So, for these reasons alone, it's clear the coin analogy is flawed: you are more responsible for flipping the coin than for performing ordinary actions which lead to long-term identity-affecting consequences.

I mentioned, as an asymmetry, that no one is being used as a mere means in the process of performing ordinary actions. Matthew says:

But these also apply to the coinflip scenario. No one is being used as a mere means—it’s just a long side effect, and the intent is to get money

But this is extremely misleading. Your intent is to get more money by performing an action which you know has a 50% chance of killing someone; that's the part that's wrong. You are directly and knowingly risking someone's life for the sake of monetary gain, so you are treating that person's life as a mere means to an end. In contrast, there is no one you are treating as a mere means, no one whose life you are intentionally risking for personal benefit, when you perform everyday actions. Your actions may lead to deaths, but bringing about such an outcome is no part of your intention, nor is there any direct causal link between your act and those deaths.

Indeed, I think a very potent disanalogy here is how eminently indirect, causally insignificant, and distant the kinds of ordinary actions we engage in are with respect to long-term rights violations. I briefly mentioned this in my article, and I am very confident that this, in conjunction with the fact that these outcomes are unintended and unforeseen, defuses the paralysis argument. Normally, we judge agents as blameworthy for causing some outcome because their action is causally sufficient to bring about the effect, or is at least a proximal cause (e.g., pushing the fat man is causally sufficient to kill him; hiring a hitman is a direct proximal cause of the killing of your target). But your ordinary actions are not even close to causally sufficient to bring about these identity-affecting long-term outcomes. It is your action in conjunction with countless other actions and environmental conditions, as well as later conditions and events closer and more directly causally effectual to the occurrence of the event, all of which, had they been even slightly different, would have meant the event didn't occur even conditional on your doing the act in question. This tremendous causal distance between your action and the event, and the vast number of prior, simultaneous, and intermediate causes that were also necessary to bring about the effect, is, I believe, something that dissolves your personal responsibility on my view.

Let's consider an example. Suppose that if you hadn't parked your car at t2-t3, a woman wouldn't have had to look for another parking spot, which leads to her having a different child, who becomes an alcoholic and tragically gets into an accident killing a family. It's clear that parking your car is not sufficient to bring about that outcome. It also required that all the other parking spots were taken; that she was delayed by stop signs and red lights at particular times; that she didn't wake up earlier; future actions, such as particular phone conversations with her friends afterwards, or her choices to go out to the movies or to restaurants at particular times; the way the child is later raised by both parents; the influence of peer pressure and other environmental influences on the child's psychology; the child's later choices as an adult not to seek help for alcohol addiction; and a bunch of other things I could list which would still be only a meager fraction of the total causal influences. There are just so many events, each of which is necessary and which are only jointly sufficient to produce the outcome, many of them obviously more morally pertinent than your action (such as the man's personal choices), that it is very hard to see how your particular action should be taken to be relevantly responsible for the event. Not only is it a far-off, distant, and indirect influence, but you could quite clearly have still parked at t2-t3, and had even one of these other influences not obtained, the event in question would not have occurred. This is true of roughly all ordinary actions and their causal relations to long-term identity-changing events we might think of.

With that out of the way, let's again look at the coin analogy. In this analogy you have a 50% chance of killing someone, so if the coin lands on, we'll say, heads, someone will certainly die. But this is very clearly disanalogous to ordinary actions: landing on heads is sufficient to cause someone's death, while ordinary actions such as talking to someone or parking a car will only cause the given event conditional on the existence of countless other causes, and your action isn't even proximal, or remotely direct.

For these reasons, deontologists should not be the least bit moved by the argument that deontologists ought not move. Unlike consequentialists, we shouldn't worry about consequences that are unforeseen, indirect, and vastly causally insufficient and distant from our actions.

Argument 6


This is the paradox regarding preventing wrong actions.

Matthew doesn't provide anything novel here to respond to that I haven't already addressed in the sections on Arguments 2 & 3, aside from repeatedly insisting that my position is absurd, weird, and obviously false, which I don't find terribly convincing. I'll remind Matthew that the tagline of his previous response article to me was: “Despite what J.L. Mackie would have you believe, calling something your opponent believes in “weird” is not, by itself, an argument.”

In any case, I think Matthew, as a consequentialist, is not appreciating as relevant the difference between how we assess a particular moral action and how we assess whether one has a duty to prevent something. These are two sides of the same coin for the consequentialist, but radically different for the non-consequentialist.

When we say you have a positive reason to prevent some action, I think the only thing that matters (assuming all else is held equal and you aren't violating duties) is the consequences: you prevent something because you don't want some undesirable state of affairs to obtain. Whereas when we assess moral actions like killing one to prevent five, we look at a bunch of things; most importantly we ask whether the action violates the formulas of the CI, and beyond that our assessment is heavily intention-based and character-based. So it's really no surprise that we might judge some action as wrong for non-consequentialist reasons while also holding that we shouldn't prevent it, for consequentialist reasons. Once this is made clear, the weirdness, which was Matthew's only gripe, dissolves.

Argument 7


This is regarding the bizarre scenario about aliens stealing stones to reduce their agony. I pointed out that our intuitions don't count for much here. Matthew demurs, but I will just refer Matthew and other readers to my first section on intuitions for why I think he seriously overestimates the reliability of intuitions about strange scenarios.

I also offered a tu quoque, which Matthew responds to:
I’m not sure if I quite understand what’s being said, but I’d imagine that most of the jabs are objectively wrong, but many are maybe subjectively right—depends on the details. But if each of them were wrong, then the conjunct would obviously be wrong.
All of the jabs taken individually are subjectively wrong, as each is such that it is statistically improbable, say 1 in 100,000,000, that it will be in any way beneficial, and the vast majority are objectively wrong. But suppose we have no way of knowing whether someone has the virus, so the only practical way to prevent it is to inject everyone. Each individual act of selecting someone from the population and jabbing them, taken alone, probably caused pain and did no good, and so is probably wrong on consequentialism; but performing 1 billion jabs on random members of the populace statistically guarantees that the disease is prevented in someone and good is done, so the act of doing it a billion times is right (both objectively and subjectively).
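The arithmetic behind that "statistical guarantee" can be checked directly. This short Python sketch just uses the scenario's stipulated 1-in-100,000,000 per-jab benefit figure:

```python
# Per-jab probability of doing any good (the scenario's stipulated figure).
p_benefit = 1 / 100_000_000
n_jabs = 1_000_000_000  # one billion jabs

# Probability that at least one jab prevents the disease in someone:
# the complement of every single jab failing to help.
p_at_least_one = 1 - (1 - p_benefit) ** n_jabs

print(p_at_least_one)  # ≈ 0.99995: a near-guarantee, even though each jab
                       # taken alone is almost certainly pure harm
```

So each act is overwhelmingly likely to be useless, while the billion-act conjunction succeeds with near-certainty, which is exactly the gap the tu quoque trades on.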


Arguments 8 & 15


This is in regards to the research that utilitarians are less prone to emotion and bias than deontologists. I really don't have more to add here. While this may be some evidence, I am skeptical that the data is overwhelming, for reasons I have already expressed, and also because this kind of psychological research is quite speculative: I highly doubt we can show, in any decisive way, that deontological judgements are caused by, rather than merely correlated with, non-rational emotional attitudes. Biases shaped by culture and society also explain our moral judgements in general, so to the extent that this is taken to cut against deontology, it should also cut against ordinary moral judgements, e.g. that rape is bad. That said, I'm not well-read enough on the research to mount a strong response, and I don't think much turns on my conceding this point in any case. I think the considerations favoring deontology outweigh it.

Matthew even admits that the fact that the majority of professional ethicists, who we'd expect to be the most reflective, knowledgeable, and least prone to bias, are non-consequentialists is also evidence against his view. He tries to attenuate it in an unconvincing way;
But I don’t think that most ethicists are very good judges of this. Very few of them have read the serious arguments—from Huemer, Chappell, and so on—against deontology. Very few of them realize the utterly bizarre and appalling implications of deontology. And Parfit was on our side, and was sufficiently brilliant to offset the rest of ethicists put together.
Aside from this being pure speculation, it cuts both ways: I doubt most utilitarian philosophers have surveyed the literature to examine every single argument deontologists have made against utilitarianism or for deontology. I also disagree that the arguments Matthew has brought forth should be all that compelling, let alone completely devastating, to deontologists, for reasons I have discussed. And Huemer is obviously familiar with his own arguments, yet Huemer himself is a moderate deontologist in spite of them.

As far as Parfit goes, I could just as easily claim that Kant, who is widely regarded as a genius and remains one of the most influential philosophers to this day, not just in ethics but in metaphysics, epistemology, and aesthetics, is on my side, so I win! But that would be silly. One guy does not supersede the entire field.


Argument 10


This is with regards to the claimed entailment of deontology that putting a morally perfect person in charge may make things worse all things considered.

Matthew says;
TT says he doesn’t really feel the force of this—this strikes me as bizarre. It really is strange to think that it can be a terrible thing that perfectly good people are put in charge of various things. If they always do the right thing, it’s very strange that putting them in control of something is unfortunate.

It is counterintuitive when put like that, but when we look at the specifics of the case in question it's much less so. We are looking at a case where a morally perfect person has lost control of their actions and, in that state, will kill one to save five. If they are not put back in control, one is killed to save five, which is a better state of affairs, since fewer people die. If they are put back in control, however, they will not violate the CI and so will not kill the one to save five, because they are a perfect Kantian agent. That is a worse state of affairs, because more people die. But this is totally expected, since wrongness is not judged by the states of affairs brought about, but by the maxims which govern an action. Putting them in charge makes things worse qua state of affairs, yet killing the one would have been wrong for reasons that have nothing to do with the resulting state of affairs. This shouldn't be surprising, since it's something deontologists were already committed to.

We can see that this argument is not really independent of the preference one (argument 2): the work is being done by the exact same intuitions, tied to consequentialist commitments, which tacitly place the judging of actions and the judging of states of affairs under the same umbrella.

I also pointed out that there is a similar result, just as weird, for utilitarianism: it implies that there are cases where we shouldn't act like utilitarians, or make utilitarianism the publicly accepted morality. Matthew doesn't say much in response; he says it's a non sequitur but doesn't explain why. If Matthew thinks it's a strange result that putting good people in charge can make things worse, I'm not sure why he doesn't also find it strange that making people believe, and try to act in accord with, the correct ethic can be bad and wrong.

He also says;
All moral theories will be self-effacing for some people—if Jon would kill himself if convinced of any moral theory because it would make him sad, one shouldn’t promulgate that moral theory.

But I don't understand what's being said here. If Jon isn't convinced that he shouldn't violate humanity, and rape and murder people, he'll kill himself? I think he should be convinced of those things. It's a problem with Jon's moral character that he'll kill himself if he doesn't think rape is okay; no one but him is blameworthy for that.


Argument 11


I'll just quote Matthew here.
I argue that there’s not a big gulf between bridge and trolley, where you can either flip the switch to kill one and save five or push the man to kill one and save five. I do this by sketching out a scenario where a man is standing on top of a bridge and you can either flip a switch to redirect the train up to him killing him more painfully or push him, killing him less painfully. It seems you should push him—this would be better for him and worse for no one. But this shows, as I argue, that there isn’t a big difference between the two acts and, if as most agree flipping the switch is fine, so too is pushing the man.

TT ends up biting the bullet and saying that you should flip the switch not push him. But this is absurd—it makes him die more painfully and benefits no one. Any moral system with a formulaic set of rules that make people worse off for no reason will be false. TT doesn’t think this is very unintuitive—I think that the principle that you shouldn’t harm one person to benefit no one is very plausible, so I disagree. But he doesn’t agree that it’s a bit weird and is at least some reason to reject deontology
There is a recurring theme here: Matthew construes the position in a way that seems absurd, but when we add more nuance and dig into what was actually being said, we discover that it's less unintuitive and reduces to a standard disagreement between deontologists and consequentialists.

I am not saying we should "harm someone to benefit no one". That may be the result in terms of world-states, but I am a non-consequentialist. The reason I think one act is wrong and the other is not is that one directly and intentionally harms someone and the other doesn't. Pushing the man to save five is you killing him; pulling the switch to save five is not you directly and intentionally killing him, though it has the side-effect of killing him. Conflating these two just is consequentialism, so it's no surprise Matthew doesn't see the difference as relevant. That said, I grant that pulling the lever while refusing to push the fat man is still a somewhat weird, unintuitive result; it's just not a sufficient reason to reject deontology.

Argument 12

This is in regards to the argument that morality should pick up on what really matters from the point of view of the universe. I contested this and pointed out that it seems ambiguous between a moral and an axiological reading.

Matthew says;
This is, once again, a difference of intuitions. Still though, I think it’s worth saying something about the intuitions that motivate consequentialists. The consequentialist picture seems attractive because it picks up on what really matters. Not just in an axiological sense—it doesn’t seem genuinely important whether it’s you or someone else who violates the rights of another. This seems unattractive about deontology—an analogy that Richard has given that I’ll coopt is that of norms of honor.

In an honor society, there may be intuitions about acts being dishonorable. But this seems problematic—it doesn’t seem like those things really matter. Morality is left impotent, inert, and unimportant if it’s not about what’s really important.


But I don't think consequentialism picks up on what matters in any sense other than an axiological one, and I don't think this argument shows that it does. I agree that, in terms of the axiological states of the world and independent of the perspective of any rational agent, it doesn't matter whether you or someone else violates rights. But from the standpoint of a rational agent, you have overriding normative reason never to violate rational nature by treating rational beings as mere means, in virtue of the constitutive norms binding rational agents. Once again, it's not surprising that what you should do on deontology is not a function of what is best in terms of states of affairs/world-states. The charge that this leaves morality inert remains nothing more than a question-begging assertion unless it is justified.

Arguments 13 & 14

This is in regards to the paradoxes of moderate deontology; I am not a moderate deontologist, so they don't apply to me. Matthew starts by saying that absolute deontology is insane, so it is false.
Thus, he thinks you shouldn’t kill one person to prevent an infinite number of people from being tortured in increasingly horrible ways. But this verdict is, on its face, absurd. So, his view is false. If we imagine you committing homicide was the only way to prevent everyone from being tortured in the worst ways imaginable for all of eternity, it’s very obvious you should do that.
This is the standard counterexample to absolute deontology. It's not really obvious to me that you should murder the innocent person. The reason is that you aren't to blame for the torture if you don't commit the murder. You're not doing the torturing; the responsibility lies with the perpetrator(s) of the torture, and all you're doing is declining to murder. You can't be blameworthy for an outcome you could do nothing about without violating a moral imperative. On the other hand, if you murder someone, you will be blameworthy for murder.

There is also a point to be made that the scenario is very strange and does not remotely reflect anything anyone is likely to encounter in our world or in possible worlds close to ours. It's hard to conceive of a scenario where literally infinite torture is wholly contingent on your choice to murder someone and there is no possible way to prevent it aside from committing the murder. Such a scenario, I contend, would have to be very contrived, abstract, and considered in a vacuum, so we should doubt the reliability of our intuitions regarding it. In scenarios which are similar but closer to home, such as the CIA torturing to prevent terrorist attacks, or organ harvesting, I take it that the deontic verdicts are clearly correct.

Matthew also puts forward a risk argument:

Additionally, absolute deontology has issues with risk. Huemer explains this well. If you say that one reason—namely, your reasons not to violate rights—is infinitely more important than your reasons to promote the good, then you should never do anything, because anything risks violating rights, and that’s infinitely more important than anything else.

But I don't think this is effective against me. I only think specific actions which involve directly violating rights in extreme ways, like murder, torture, and rape, are absolutely wrong. I don't think actions which merely risk leading to rights violations as an indirect outcome are absolutely wrong. As for the examples Huemer discusses: I don't think promise-breaking is absolutely wrong, and I don't think risking the false imprisonment of innocents for crimes they did not commit is absolutely wrong, although I do think killing them is, which is, in part, why I'm against the death penalty. I don't think that's a huge bullet to bite.

Conclusion

I continue to think Matthew's case against deontology is weak. It relies mostly on intuitions that are either parasitic on consequentialist commitments, which ignore subtle deontic concerns and conflate the goodness of world-states with the rightness of actions, or concern extremely strange scenarios in conjunction with broad moral principles, for which we shouldn't put much stock in our intuitions. The paralysis argument is an utter failure. The research associating deontological beliefs with emotion and bias rather than rational thinking is, I am willing to concede, some evidence, though not insurmountable; there are reasons for skepticism, and it seems to be roughly counterbalanced by the fact that most ethicists are non-consequentialists. So I must conclude that Matthew has failed to provide anything like the conclusive proof of the falsity of deontology that he likes to claim.

This is the end of the exchange between myself and Bentham's Substack on consequentialism vs deontology. Thank you to Matthew for the challenge and the invitation, 'twas interesting.  Till next time, ciao!