Tuesday, February 21, 2023

Reply to Bentham's Substack 2: Electric Boogaloo

This will be another installment in the epic debate between me and Bentham's Substack. He has posted his response to my original article. You should probably read both for context before reading this. Now, I offer my rejoinder to his rejoinder.


Kantian Constructivism 

Matthew says:
There is a lot of literature on Kantian constructivism. One worry for this is that it requires denying moral realism, which I take to be a mark against it.
To each their own! I think that's a mark in favor of it!
But another problem is David Enoch’s shmagency objection. The basic idea is this: the Kantian Constructivist says that you can, in theory, not respect other people as ends, but you, by doing so, revoke your status as an agent—to be a true agent, you have to not violate the rational will. But then the question arises: why should I care about being an agent?

This is an objection worth taking seriously, probably one of the strongest objections to constructivism or constitutivist views of action. I'll list a few reasons why I find it unconvincing.

For one, I may not be able to offer you further reasons to be an agent rather than a schmagent, but nonetheless we are rational agents with sets of values and aims. Certain evaluative truths are entailed internal to this standpoint, and whether you like it or not, you can't opt out of it. Enoch's response is that even if you can't opt out of the game of agency, this does not solve the problem, because there's still the further question of whether we have normative reason not to break the rules constitutive of it. However, I would retort that breaking those rules is being inconsistent, given that you are a rational agent, for standard Kantian reasons I've previously argued. Perhaps, then, it might be asked: "what is the further reason to avoid being inconsistent?" I don't think there is one, nor that there could be one, for that is just asking for a reason to be reasonable.

For two, either the question is asked internal to the standpoint of an agent, in which case constitutivism has a straightforward answer by appealing to agency's constitutive norm(s), or the question is external, in which case it is unintelligible, as what it is to be a reason for action is embedded in, and only makes any sense for, agents. "Reason to be a schmagent" is thus conceptually confused. See (Silverstein 2015).

For three, even if the question is intelligible and the unavoidability of agency doesn't help, the claim that the absence of reasons to be an agent shows that constitutivist views cannot ground normativity remains, at best, unmotivated. Just because there is a set of normative reasons, which we'll call n1-n99, which are grounded in constitutive features of agency, doesn't mean that these constitutive features must in turn be grounded in further normative reasons; it may be that n1-n99 are the only normative reasons there are. See (Paakunainen 2018), which also gives intuitive analogies here.

With regard to the Normative Question, Matthew says:

There is an answer to 1 on moral realism—the reason to be moral is because morality is about what you have most reason to do it

But this doesn't answer 1. Why think such-and-such moral fact is what the agent has most reason to do? If the agent doesn't already think that's what they have most reason to do, what reasons do you give them? If you say it is a primitive necessary truth, then you're falling into the problem I stated before: far from answering the normative question, you have merely located the necessity where you wanted to find it.

I don’t think 2 is quite right—Jeffrey Dahmer should be moral even if he would need come to the belief that his action is justified and makes sense.

Ah, but this isn't what 2 says. What it says is that, on the condition that the agent knows what justifies their action being required, they come to form the belief that the action is justified and makes sense. The reasons given have to be motivating by the agent's lights. If they aren't, then why should they accept them?

For question 3, Matthew says:

This assumes that the reason to be moral is about what happens to you. I think that dying for morality is probably worse for you, but it’s better for others. Several others dying is worse than death for you.
It is not about reducing morality to what is best for you. I'm far from an egoist. It is about what matters most, by your own lights, and what matters most seems to be your practical identity, which is "a description under which you see yourself as having value and your ends as worth undertaking". So, circumstances where you must give up your practical identity explain why you might be obligated to sacrifice your life.

Next:
In terms of answering the questions of why all and only the things that actually matter matter, the answer would be grounded in facts about the things. There isn’t some extra further non-natural add-on—instead, the non-natural facts just serve to pick out which of the natural facts matter.

But why accord those natural facts value, rather than some other cluster of natural facts? For any natural fact you give, it seems the normative question will come back: "Do I have a reason to act in accord with, or regard as valuable, these natural facts?". Merely positing it as a brute or necessary truth doesn't answer anyone who doesn't already accept it; it just, once again, locates the necessity where desired. Whereas, on Kantianism, the source of normativity is rational willing/autonomy/deliberation as such, which every agent has, and which is just one's capacity for setting one's ends and taking them to be worthy of pursuit, and for deciding which desires or external facts are better to act on. You ought to accord rational nature unconditional value, because rational nature is the source which generates all your reasons for action, the condition for your valuing anything at all and for any worth being bestowed on your ends. It is motivated by a transcendental argument.
If we ask why pain is bad, what answer could be more satisfying than “because of what pain is like.”

You can intelligibly know the phenomenal content of pain and fail to see it as a reason to avoid it. There's no incoherence here. In fact, plausibly, many people rationally find pain desirable, as it allows them to grow emotionally and physically.

But I think that there are issues for the Kantian constructivist account of normativity. It seems unable to ground non-moral axiology. It seems that tornadoes are bad and puppies are good.

That's correct. But I don't see this as a problem, because it's not what Kantian Constructivism, as a theory, is trying to do. It's a theory of right and wrong action, not of non-moral axiology, and it is consistent with various views one might take on non-moral axiology. It is similarly not a problem that the theory of general relativity doesn't explain the economic calculation problem.

But this doesn’t seem like a good answer. It seems like the reason that pain is bad isn’t just that you’d care about it if you thought hard. It seems that the fact of the badness of pain grounds the fact that you wouldn’t like it.

I don't think so. There is nothing about pain that has normativity built into it, at least as far as I can tell, and as has been argued. If one is fully aware of what pain is like and is just not motivated to avoid it, what is the irrationality here? There's no contradiction, no practical inconsistency, no means-end incoherence. So, what's irrational? 

In regards to the Kantian argument, Matthew says;

I disagree with this because I accept moral realism. If our reasons to have ends is just based on our desires then none of these obviously irrational desires would be irrational.

Matthew proceeds to list off a set of 'obviously irrational desires'. But I reject that they are irrational, much less obviously so, for reasons I explained in my first article. Matthew gives some responses, which I don't think are satisfactory:

Most people think that if you set yourself on fire for no reason beyond the fact that you want to that that is irrational.
But what most people think doesn't matter. What matters is whether the agent in question is actually being irrational, not whether other people think they are. The reason most people think they are irrational is that they are imposing their own perspective, whereby pain is undesirable, a perspective which the agent in question does not share. I fail to see any reason they should be taken to be irrational rather than simply out of accord with the instrumental reasons ordinary persons are motivated by. Perhaps the idea is that well-being and avoiding pain are necessarily good, but, once again, the agent in these scenarios doesn't accept that, so there is no inconsistency entailed from their perspective.

Further (and this is something I neglected to mention in my article), I think people's intuitions here can be explained by evolution. The belief that avoiding pain and pursuing pleasure is necessarily good has tremendous survival and reproductive benefits, since sex is pleasurable, things that can harm or kill us are painful, etc. So we'd expect evolution to select us to have these beliefs even if they're false.
There are lots of strange desires. No one has the intuition that two aliens engaging in weird pleasurable activity where they put their tentacles in each others ears are wrong.

I do not deny this. But my point is that we shouldn't trust our intuition that they are irrational, rather than just really strange to us, because we can't actually show an incoherence from the agent's standpoint without positing further propositions that we think are obvious from our own reflection on our phenomenology, but which the agent, for all we know, completely rejects from their own reflection. We also have no access to the agent's internal constitution, or even any idea of what it would be like to be constituted like said agent.

I’m not sure what’s being missed. Hedonism—which I don’t have to defend for the purposes of this article, just general utilitarianism—says that pleasure is good which makes it rational to pursue. What’s unexplained? There’s no deeper account of why pleasure is good just like all fundamental things bottom out at some point.
What's not explained is what makes it irrational for an agent to not desire pleasure.
The first premise talked about what you do, in fact, value. But none of that means you make things valuable; it just means you care about things. Your desires don’t make things really worth caring about.
I don't think your desires alone make them valuable either; your choosing them with your rational will makes them valuable, which involves deliberating between desires and picking out which one is a better reason for action. I think that does make things valuable, because I don't think there is anything further to value aside from what is entailed from the practical point of view of an agent who values things. I reject that there are external value properties, and have given reasons why. We'll get into more of that later. For now I'll just say that, even if they did exist, I think they succumb to a regress of normative questions, as discussed before.
No, this just means you can value things. Even if you can value things and somehow make them valuable with your will, this wouldn’t mean you are an end in and of yourself.

It means all value is conditional on rational will. It is the source of all value and the only thing that has value which is unconditional. Sounds like an end in and of itself to me. 

Just asking whether something respects the ends of rational creatures seems undefined. Indeed, I don’t think there’s a fact about whether you use someone as a mere means.

We use someone as a mere means if we treat them as an instrumental tool in a way that fails to respect them and their ends. If you kidnap someone for ransom, you are treating them as merely instrumental for your aims, disregarding their consent, life projects, self-determination, autonomy, etc. Here are a couple of articles which explain the ideas I have in mind here.
One other worry is that this seems unable to ground morality. After all, it can’t explain axiology, and it also can’t explain the wrongness of harm to some severely mentally disabled, who are not rational beings

I've addressed the point about axiology. It's consistent with numerous views, just like consequentialism is (desire theory, hedonism, etc.). The second is something Matthew is correct to point out as a difficulty, as the value of non-rational beings is not straightforwardly entailed from Kantianism in the way that the value of rational agents is. That is not to say there are no resources on the Kantian view for affording moral value to non-rational beings, e.g. disabled people, animals, infants, etc. It is plausible that we have an indirect duty to treat beings who are sufficiently similar to rational agents with kindness; otherwise we foster bad dispositions for respecting rational nature. Additionally, a precautionary principle here is plausible, since they are so similar to rational beings. Cognitively disabled people have modal status (Kagan 2019) and potential status, since their faculties tend towards rational nature. Korsgaard also has a Kantian and Aristotelian-style argument for taking non-rational creatures such as animals to be ends in themselves (Korsgaard 2018). There's more that can be said, but that's all I'll say here.

But reasons internalism is false. The reason why ridiculous things don’t matter is because there’s nothing about them that’s worth pursuing, not that you personally don’t care much about them.

It is, I think, rather unclear what is meant by "worth pursuing" here. But let us suppose it were true that counting blades of grass was "not truly worth pursuing" and "objectively a waste of time", however that's cashed out. Even if that were true, who cares? I still wanna count blades of grass! This is one reason why reasons externalism strikes me as implausible: it's hard to give an account of what an external reason independent of any valuer's point of view is, and even if it could be understood, such reasons don't really do anything, it seems to me. They'd just be things that you can incorporate into your set of potential reasons for action, which may or may not move you to act or correct your preferences or dispositions, exactly like any other natural, non-normative property.
They count in favor of what they’re reasons for. It’s unclear what about them is supposed to be incoherent. When I say “you shouldn’t torture babies for fun,” or “you have good reason not to set yourself on fire even if you want to” that seems coherent, even though it’s not appealing to any person’s desires.
"But what does 'counts in favor of' mean here if it's non-goal-indexed?" is the question I'd imagine Lance would ask. It seems unhelpful to introduce further concepts which will be understood in an anti-realist or instrumentalist manner.
I think a lot of this is irrelevant. If we think that as rational people reflecting we’d end up concluding that rights don’t matter
But, for Kantian reasons, I don't think we'd end up concluding that rights don't matter.
I think the suggestion that Mackie and Olson just call moral realism weird is uncharitable. They pick out the problematic features of the kind of normativity in question, which fit poorly with our background understanding of the world, are completely unlike anything else we know, and would require a special faculty to access that doesn't fit with our ordinary ways of knowing everything else. Further, what is a counts-in-favor-of relation? The internalist has a straightforward account. Yet, if it's not some motivational fact or anything else about an agent's, or set of agents', psychology, and is instead a feature that is 'built into' the world, what could the truth-maker for such a relation obtaining even be? In virtue of what do certain natural properties instantiate this relation while others don't? It's unclear.

Matthew cites his article on moral realism to offset the concerns of ontological parsimony and explanatory impotence. I'll just make two notes here. The first is that I think most of Matthew's arguments are not going to be compelling for those who aren't realists antecedently; I've already explained why I find the 'phenomenal introspection' and 'irrational desires' arguments to be highly dubious. The second is that a lot of the general advantages of realism, e.g. binding reasons on all rational agents, are things you get on a constructivist view like mine for free, without the extra ontological items posited by realism. Moral convergence is equally expected under Kantian Constructivism as well.
I think consequentialism gives good accounts of that, as we see when we get to the specific objections. Additionally, no explanation was given at all for why Kantian deontology explains any of that.
It's straightforwardly entailed from Kantian deontology that all humans have absolute value and dignity, and it is in virtue of our rational natures that we are ends in ourselves.
This is a whole can of worms, but I addressed it in my moral realism article.
I myself am not convinced that evolutionary debunking is an insuperable problem for moral realism, but I think it is a difficulty which my view handles better than realist views. As for Matthew's article, I'll just once again plug my friend's response, which addresses Matthew on evolutionary debunking.

Overall, I think Matthew has failed to refute the arguments and considerations I laid out favoring Kantian Constructivism over rival views.


Epistemic Objection 


Next we'll be looking at Matthew's response to, what I take to be, the most powerful objection to consequentialism.
For one, as I showed in my opening statement, unpredictability results in the conclusion that deontologists shouldn’t act at all.
I responded to this in my previous article.
Second, I think the expected value response to this works—it’s true that it’s hard to know the exact consequences, but we can make estimates.

This is the best response to the objection, which is why I pre-empted it in my first article. Let's see Matthew's response. 

But it’s also likely to prevent very bad things. All we can do is rely on our best judgment—many of these huge ripple effects fizzle out.
This just seems like, at best, a reason to be agnostic about whether the action is good; it doesn't justify your belief either way. But if you're agnostic about it, how do you pick one course of action rather than another when deliberating about what you should do? As I argued before, it's extremely implausible, and indeed I've even directly argued that it is astronomically improbable, that the known consequences are sufficient to break the tie on expected utility.

To make the point clearer, here's an analogy I liked from Lenman. Suppose it's D-Day and you are a military leader who has to choose between two plans, plan A and plan B. The plan you choose will have tremendous consequences for the war, for civilians, and for the soldiers on the battlefield. Let's suppose you know that if you go with plan B, a dog will break her leg, but if you go with plan A she won't. The unknowns of going with plan A or B are such that they otherwise cancel each other out. Does Matthew mean to seriously suggest that the knowledge of the dog breaking her leg is a sufficient reason to choose plan A? Keep in mind, if you make the wrong choice the consequences are many magnitudes greater in significance than the dog breaking her leg. Perhaps it is some reason, but it is, quite clearly, proportionally swamped by the other consequences. So, you should have basically no clue which plan to pick. This is similar to the consequentialist's epistemic position when deliberating about which actions one should do, since all and only consequences are salient in determining whether an action is right or wrong.
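To put rough numbers on the swamping point (these figures are purely illustrative and mine, not anything Lenman or Matthew is committed to): even if we grant, as the expected-value response does, that the unknown long-term consequences are symmetric noise, the known difference is so tiny relative to the spread of the unknowns that the probability that the favored plan is actually better barely exceeds a coin flip.

```python
from math import erf, sqrt

# Toy model with made-up numbers: each plan's unknown long-term consequences are
# modeled as independent, zero-mean normal noise with a huge spread, while the
# dog's broken leg is a tiny known utility gap in favor of plan A.
known_gap = 1.0           # foreseeable utility difference between the plans
unknown_spread = 10_000   # std. dev. of the consequences we cannot foresee

def prob_plan_a_better(gap: float, spread: float) -> float:
    """P(plan A's total utility exceeds plan B's), assuming each plan's unknown
    consequences are independent N(0, spread**2) and A has a known head start of gap."""
    # A - B = gap + (noise_A - noise_B), where noise_A - noise_B ~ N(0, 2 * spread**2)
    z = gap / (spread * sqrt(2))
    return 0.5 * (1 + erf(z / sqrt(2)))

print(prob_plan_a_better(known_gap, unknown_spread))  # ~0.50003: barely better than a coin flip
```

And that is the best case for the expected-value reply; if, as I argue below, there is no principled way to assign those probabilities in the first place, even this sliver of an edge is unearned.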

All this, by the way, very generously assumes the choice is between two options. If there are more, you have more options across which to partition the expected-value space, and the expected utility you get from choosing one option relative to all the others goes down even further.
Imagine a game way more complicated than chess, where it was hard to know which moves were best. Even if it is hard to know, it’s not impossible, so you could still make moves with the highest expected value.

My answer is: if your epistemic situation with respect to the game is analogous to ours with respect to the long-term, identity-affecting effects of our actions, then yes, you should be in doubt about which move you should make.
Truth teller then argues that we have no a priori reason to expect this to cancel out. This is true, but we can just evaluate the expected consequences, taking into account uncertainty. No part of this reply requires accepting the strong principle of indifference.
If Matthew is not using a principle of indifference, he owes us an explanation of how he partitions and distributes the probability of the set of long-term, astronomical, identity-affecting outcomes and their expected utility; otherwise, and again, we should be clueless about what to do. He hasn't offered this.
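To spell out why the partition matters (again, my own toy illustration, not anything from Matthew's reply): the principle of indifference is notoriously partition-relative, so "treat the long-term outcomes as cancelling out" already presupposes one particular way of carving up the outcome space.

```python
# Two equally "natural" indifference partitions over the same unknown: how many
# extra murderers will my trip to the store eventually cause? (Hypothetical
# carvings; the point is only that indifference is partition-relative.)
coarse = {"zero extra": 0.5, "at least one extra": 0.5}   # indifference over 2 cells
fine = {n: 0.1 for n in range(10)}                        # indifference over 0..9 extra

p_coarse = coarse["at least one extra"]                   # 0.5
p_fine = sum(p for n, p in fine.items() if n >= 1)        # 0.9

print(p_coarse, p_fine)  # same proposition, two different probabilities
```

Until some partition is privileged, the "expected consequences" we are told to evaluate are not even well defined.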
Truth teller’s argument is more radical than he realizes. It results in the conclusion that we can’t calculate consequences, so consequences shouldn’t factor into our considerations at all. But this is absurd—if you could cure depression, for example, that would be good because it would have good consequences

It doesn't entail that no consequences factor in, just unforeseeable, long-term and indirect consequences.
Suppose you know that your action of going outside today will cause the extinction of all life on earth five years from now in an indirect way. On accounts that reject the significance of these consequences, these are, quite literally, no reason not to go outside.
If you know, then you've foreseen that some particular outcome will follow from your performing the act of going outside. I think that's sufficient to provide a reason not to go outside. What wouldn't be sufficient is a consequence that is both unforeseen and indirect.

I conclude that Matthew has failed to refute, or even truly take seriously, the epistemic objection.
Demandingness Objection

Next, Matthew's responses to the demandingness objection:

First, utilitarianism is intended as a theory of right action not as a theory of moral character.  
But what makes one a good person, on any ethical theory, should be a function of the rightness and wrongness of the actions they perform, just as what makes one a good mechanic should be a function of the efficiency and success of the car-fixing actions they perform. So, it's hard to see how this distinction is supposed to help. If Matthew denies this, then first, I'm not sure what else is supposed to determine the value of moral character, and second, it seems to rob utilitarianism of normative authority. Why care whether the actions I perform maximize utility if I'll still be a good person regardless?

Virtually no humans always do the utility maximizing thing--it would require too great of a psychological cost to do so. Thus, it makes sense to have the standard for being a good person be well short of perfection.

On utilitarianism, what is the rightness of an act or omission determined by? Only the net utility produced. Therefore, whether the weight of psychological costs is enough to make it right to abstain from donating to charity, donating your kidney, etc., and to purchase luxuries for yourself instead, is going to be determined by the net utility. But donating will generate more utility on net even when we factor in the psychological costs to you. It is not just that you are falling short of perfection on utilitarianism; you are blameworthy for failing to do the right action. Matthew needs to give a principled basis for why, by the lights of utilitarian judgements, you wouldn't be blameworthy. But it's hard to see how that can be done.

Most of Matthew's responses to the demandingness objection concede the demandingness of utilitarianism, arguing that it does not provide sufficient reason to think utilitarianism is false. After all, the correct ethical theory may well be demanding! This is fine; I think some of Matthew's responses here are reasonable. However, I never intended this objection to be a knockdown argument. My point is merely that our ordinary moral practice and beliefs are not nearly as demanding as utilitarianism entails, which is better explained by, and antecedently more expected on, non-consequentialist views than utilitarianism. There is also the implication that utilitarianism wouldn't be a particularly helpful guide for humans, as, realistically, no one truly follows its demands. Not even Matthew does; he could have created a refugee fundraiser site rather than a blog dedicated to arguing for utilitarianism! From this point I'll only address the objections of most interest.

(Sobel, 2007) rightly points out that allegedly non-demanding moralities do demand a great deal of people. They merely demand a lot of the people who would have been helped by the allegedly demanding action. If morality doesn’t demand that a person gives up their kidney to save the life of another, then it demands the other person dies so the first doesn’t have to give up a kidney.
It seems to me not correct to say that you are demanding that the other person dies. All we are saying is that one is not morally blameworthy for refusing to give their kidney. "We are not demanding anyone give Smith a kidney and we don't demand Smith dies" is not a contradiction. What would be is if we said "We demand that Smith does not get the kidney", but as far as I can tell, my position doesn't entail that. A further argument would need to be given that it does. Further, it's actually very plausible that people have rights to bodily autonomy and self-governance which can outweigh others' rights to life; in fact, I don't think a better case could be given where the majority of intuitions more clearly favor me.
Fourth, the drowning child analogy from (Singer, 1972) can be employed against the demandingness objection.
I've pre-empted this in my original article; I'll get to Matthew's responses later.
Kagan rightly notes that ordinary morality is very demanding in its prohibitions, claiming that we are morally required not to kill other people, even for great personal gain. However, Kagan argues a distinction between doing and allowing and intending and foreseeing cannot be drawn successfully, meaning that there’s no coherent account of why morality is demanding on what we can’t do, but not on what we must do.
It's clear that in cases of doing you are more responsible than in cases of merely allowing. If you rape someone, you're a rapist, but you're not a rapist if you only allow a rape to happen. Probably, what Matthew means to say is that the distinction is not significant enough for there to be a difference with respect to our moral obligations in the particular cases in which utilitarianism demands more of us, but this seems exactly like the point that's in dispute, so it would need to be motivated. Perhaps Kagan gives arguments in his book; I've not read it. I'll let Matthew share them with us in his last reply.
Eighth, our intuitions about such cases are debunkable. (Braddock, 2013) argues that our beliefs about demandingness were primarily formed by unreliable social pressures. Thus, the reason we think morality can’t be overly demanding is because of norms about our society, rather than truth tracking reasons.
This is probably true. But for one, my point with demandingness is not just that it is unintuitive; it's that it implies a moral practice that is completely unlike ours, one that is impractical, nigh unlivable for humans. For two, these same sorts of debunking considerations apply, mutatis mutandis, to pretty much all of our moral intuitions. It is, after all, a fact that our moral judgements in general are highly sensitive and plastic in the face of various non-truth-tracking cultural/social pressures (Gold et al. 2014; Miller 1994; Jafaar et al. 2004; Pryor et al. 2019). More fundamentally, the content of our moral judgements is shaped by evolutionary pressures, which raises a similar debunking concern (Street 2006; Joyce 2005). The task before Matthew is to sustain a local debunking argument regarding our moral demandingness intuitions while avoiding opening up a Pandora's box of global genealogical debunking arguments about our moral intuitions. My own view is that this cannot be done, and that none of Matthew's own responses to evolutionary debunking arguments work (I've already linked an article above for roughly the reasons why).

Further, the examples Matthew himself gives as evidence for unreliability are not super compelling.
It’s generally recognized that people have an obligation to pay taxes, despite that producing far less well-being than saving the life of people in other countries. 

But why think that this is an intuition about a moral duty people have, rather than its just being a civic duty one recognizes as part and parcel of being a citizen of a country? In the same way, we don't think putting up with shitty customers at Walmart is a moral duty, but it might be your duty as an employee at Walmart.

As (Kagan, 1989) points out, morality does often demand we save others, such as our children or children we find drowning in a shallow pond. This is because social pressures result in caring about people in one’s own society, who are in front of them, rather than far away people, and especially about one’s own children.

To some extent, that's right. But again, there is something that seems right about this intuition, something that even Matthew surely must admit. We think responsibility is scaled by 1) how much control you exert or have over the situation and 2) your vision of, or awareness of, the situation. The less control you have, and the less attentive you are to it, the less responsible you are. If you hear on the radio that a tsunami has hit a distant country, you're less responsible for not hauling over there and saving whom you can than if a tsunami happens in your vicinity and you fail to save people you could save by extending your arm.

Tenth, scalar consequentialism, which I’m inclined to accept, says that rightness and goodness of actions come in degrees—there isn’t just one action that you’re obligated to take, instead, you just have actions ranked from most reason to least reason. Scalar utilitarianism gives a natural and intuitive way to defuse this. Sure, maybe doing the most right thing is incredibly demanding — but it’s pretty obvious that giving 90% of your money to charity is more right than only giving 80%. Thus, by admitting of degrees, demandingness worries dissipate in an instant.
This response is one of the more interesting ones, but again, it is hard to see how it helps. You'd still always have most reason to do the utility-maximizing action. If I am deliberating among a set of options and there is one option I have most reason to do, then if I don't do it, surely I'd be blameworthy for failing to do what I have most reason to do.

It looks like what Matthew has in mind is that there is no particular action that you ought to do, just actions which are better (more reason) and worse (less reason) to do. But for one, I'm not sure what a reason is if it's not something which tells you (or counts in favor of) what you ought to do rather than something else. For two, this leads to absurdities: suppose you're in a room with two buttons and you can only press one; B1 maximizes utility for a billion people, B2 maximizes utility for one person. Obviously, you ought to press B1 and not B2, especially if we think maximizing utility is the only thing that makes ethical decisions good. Yet, if there is no particular right action you ought to do, that's false. Both buttons increase utility; B2 just does it much, much less.
But this is clearly irrelevant. Suppose you could save a child by going into the pond and pressing a button to save them far away. You still ought to do it.

Observing them is relevant in the sense that you are directly there, you are fully capable of acting in the now, and the situation confers reasons on you which you directly choose to respond to or ignore. Sure, if you magically know for a fact that a kid is drowning far away and you have a magic button, you should press it, since all the relevant features are shared! But when you perform ordinary actions you aren't thinking, "Heh, gonna buy expensive khakis even though I could use the money on charity because fuck starving children in Africa"; were that the case, you'd be doing something wrong.
It’s totally unclear why this is the case! In the drowning child case, maybe you deem the child worthy of saving—just like a person who doesn’t donate to a children’s hospital deems the children worthy of saving—but you just prefer not to save them and spend your money on other things.
If you deem them worthy of saving, in the sense that you see them as an end in themselves, then it's irrational not to save them. If you don't save them because you'll get your pants wet, that means you don't actually find them worth saving, at least in the relevant sense.

Matthew's responses to the Demandingness objection are a bit better, but I conclude that he fails to completely stave off the objection. 

Unintuitiveness Objection


Last, we look at Matthew's responses to the unintuitiveness of consequentialism. 

He gives a series of cases involving raping people where one’s pleasures allegedly outweighs the pain of the victim—a non-hedonist, or a desert adjusted hedonist, can just say that only some pleasures count, and ones from rape don’t.
I didn't make this explicit, but the raping-coma-patients and gang-rape cases don't only apply to hedonism. Even if you're a non-hedonist, you can think pleasures are good, and the known consequences of raping coma patients are more pleasure and no pain caused. You can also think there are other goods contributing to well-being, such as desire satisfaction: the gang rape satisfies the desires of more people, the coma patient doesn't have any active desires which are being violated, active desires count for more, etc. I'm not convinced that non-hedonic consequentialisms have a straightforward escape hatch here, but regardless, I was attacking Matthew's view. Desert-adjusted hedonism strikes me as implausible for other reasons. It falls apart really fast when we realize there is no principled basis for valuing some pleasures over others.
He gives the example of gang rape, but the badness of gang rape scales with the harm caused, also, this isn’t relevant to consequentialism, just hedonism.
But I addressed this: the harm might plausibly be outweighed by the good outcomes.

In regard to the case where you sacrifice your life to save your friends:
On scalar utilitarianism, there aren’t facts about what you’re permitted to do, you just wouldn’t have most reason to do—that wouldn’t be the best thing you could do. But this seems plausible! It doesn’t seem like you have most reason to save your friend.
1. This seems to imply that scalar utilitarianism isn't action-guiding; why would I adopt it as an ethical theory if I want to know which actions I can and can't do? That seems like the bare minimum of what I'd want an ethical system to do.
2. I still have no idea what it means to say you have most reason to do something if it doesn't imply that you ought to do it.
3. If an ethical theory entails that there is no fact about whether you are permitted to, say, torture and abuse children for sadistic pleasure, I think that is evidence the ethical theory is a failure.
If we say that you have equal reason to do them or some such, then if both you and your friend starts with 1,000 units of torture and can either reduce your torture by 1,000 or the others by 500, both would be fine. But taking the action that reduces the other’s suffering by 500 makes everyone worse off.
This is really vague, because I don't know what 1,000 or 500 units of torture are. I would say what I would normally say: it would be supererogatory for you to reduce your friend's pain (it is a selfless, other-regarding act), but it would also be good for you to eliminate your own torture; you're not obligated to reduce your friend's. I fail to see how this is unintuitive. You're not making everyone worse off; you're reducing your friend's torture.

In regard to the case where a thief saves a grandma's life while trying to steal her purse:
This stops being unintuitive when we distinguish objective and subjective wrongness. Objective wrongness is about what one would do if they were omniscient, what a perfectly knowing third party would prefer, what they have most actual reason to do. The utilitarian judgment here about objective wrongness is obvious. But the utilitarian can agree that this action was wrong based on what the person knew at the time—it just coincidentally turned out for the best.
But whether a given action is right on consequentialism tout court does not depend on the subjective states of the agent, but only on what is objectively right (what objectively produces the best state of affairs). As a non-consequentialist, I take into account intentions and other subjective states of the agent when analyzing what makes an action right, but consequentialists don't think that whether an agent subjectively acted with the goal of maximizing utility is right-making or wrong-making; what matters is whether they actually maximized utility.

In regard to organ harvesting, Matthew offers the following case to put pressure on our intuitions:
Imagine an uncontrollable epidemic afflicts humanity. It is highly contagious and eventually every single human will be affected. It causes people to fall unconscious. Five out of six people never recover and die within days. One in six people mounts an effective immune response. They recover over several days and lead normal lives. Doctors can test people on the second day, while still unconscious, and determine whether they have mounted an effective antibody response or whether they are destined to die. There is no treatment. Except one. Doctors can extract all the blood from the one in six people who do mount an effective antibody response on day 2, while they are still unconscious, and extract the antibodies. There will be enough antibodies to save 5 of those who don’t mount responses, though the extraction procedure will kill the donor. The 5 will go on to lead a normal life and the antibody protection will cover them for life.
But I still have the intuition that this is wrong, assuming the donor doesn't consent, though much less strongly than in the kidnapping doctor case, because in that case he is directly kidnapping them off the street and murdering them, as opposed to its being a patient already in his care who is already unconscious due to an affliction, where the doctor isn't directly murdering them but rather performing an extraction procedure that will result in their death. So even if I didn't share the intuition, this does nothing to save consequentialism from the unintuitive answer given to the kidnapping doctor case.
It sounds bad when phrased that way, but any agent who follows basic rational axioms will be modellable as maximizing some utility function. If we realize that lots of things matter, then when making decisions, you add up the things that matter on all sides, and do what produces the best things overall. This isn’t unintuitive at all. Indeed, it seems much stranger to think that you should sometimes do what makes things worse.
Not much to say here other than that I do not share Matthew's intuitions whatsoever. I reject the assumption that moral value is stackable in the same way extrinsic values are, and that human intrinsic dignity and autonomy should be respected only up to the point where respecting them would make things slightly worse off (such as when enough rats get injected with heroin). I think most people, on reflection, and certainly most ethicists, agree with me here.

In regard to the general ways I listed in which utilitarianism appears unintuitive:
But consequentialism can—just have the social function of welfare include equality, for example. Additionally, utilitarianism supports a much more egalitarian global distribution than exists currently.

Utilitarianism is not principally egalitarian, though, which I take to be a problem because egalitarianism is one of its main motivations. Sure, you can define consequentialism in such a way that it is, but Matthew and I would both agree that such a view is implausible for other reasons (you should torture someone infinitely to reduce global inequality, etc.).
These tend to be wrong, but they tend to produce bad outcomes. It seems that breaking promises is bad because it makes things worse—its badness doesn’t just float free. Any high-level theory is going to sound weird when applied to concrete cases
I take this to be Matthew agreeing that this is a case where his theory fails to track our intuitions and how we actually diagnose actions such as promise-breaking as wrong. We explicitly don't think it's wrong in virtue of the bad outcomes; it's wrong because you aren't respecting a prior commitment you made, you deceived the person, you're saying that what they want and what they believe isn't important, etc. In fact, we'd agree that even in cases where these sorts of actions do not lead to bad outcomes, you still did something wrong.
In the next section, Truth teller responds to various objections that are not, I believe, relevant to the dispute—none of them are arguments I’ve made for consequentialism.
But you did. The entire reason I included them is because of this video.

That is all. I conclude that Matthew has failed, at least by my lights, to refute the unintuitiveness objection.

Tuesday, February 14, 2023

Reply to Bentham's Substack

This is the second article I will be posting in an exchange with Bentham's Substack. I will be replying to his opening article, seen here. Let's get started right away.


Argument 1


Almost all of the considerations Matthew adduces in his blog article against deontology are paradoxes or strange/unintuitive results in general. Many are parasitic on intuitions that are not independent of each other. Such things were only one part of my case against utilitarianism on my blog, alongside the epistemic, theoretical, explanatory, and practical arguments I raised. So, I suppose the first thing to point out is that a case which rests almost solely on the same intuitions, regarding matters where, perhaps, we have reason to distrust our intuitions, especially about bizarre hypotheticals, is, if not an inherently weak case, one we have immediate antecedent reason to be skeptical of, and one which I doubt can have the "overwhelming force" Matthew claims.

Anyways, first intuition pump.
Suppose a person puts a bomb in a park for malevolent reasons. Then, they realize that they’ve done something immoral, and they decide to take the bombs out of the park. However, while they’re in the process, they realize that there are two other bombs planted by other people. They can either defuse their own bomb, or the other two bombs. Each bomb will kill one person.

It seems very obvious that they shouldn’t defuse their own bomb—they should instead defuse the two others. But this is troubling—on the deontologist’s account, this is hard to make sense of. When choosing between defusing their own bomb or two others, they are directly making a choice between making things such that they violate two people’s rights or violate another person’s rights.

This is one where I'm confused about what the force is supposed to be. You have a duty to defuse any of the bombs you are able to defuse, and I don't even think that's for consequentialist reasons. Suppose there is a bomb in your vicinity, say one of the ones that was planted; that fact confers an obligation on you to defuse it, and to ignore it would be immoral, as it violates your greatest obligation: respect for humanity as such. If you are not able to defuse all the bombs, and yours goes off, then that's just to say you aren't responsible, since you did your moral duty by defusing the other bombs. Obviously, you are responsible insofar as you planted the bomb, but that's just to say the separate act of planting it was immoral; in respect of the act of defusing the other bombs, you are not responsible for your failure to defuse yours, given that you couldn't defuse all three.



Argument 2

Suppose you’re deciding whether or not to kill one person to prevent two killings. The deontologists hold that you shouldn’t. However, it can be shown that a third party should hope that you do. To illustrate this, suppose that a third party is deciding between you killing one person to prevent the two killings, or you simply joining the killing and killing one indiscriminately. Surely, they should prefer you kill one to prevent two killings to you killing one indiscriminately.

Thus, if you killed one indiscriminately, that would be no worse than killing one to prevent two killings, from the standpoint of a third party. But a third party should prefer you killing one indiscriminately to two other people each killing one indiscriminately. Therefore, by transitivity, they should prefer you kill one to prevent two killings to the two killings happening—thus they should prefer you kill one to prevent two. To see this let’s call you killing one indiscriminately YKOI, you killing one to prevent two killings YKOTPTK, and the two killings happening TKH.
YKOTPTK < YKOI < TKH. < represents being preferable. Thus, the deontologist should want you to do the wrong thing sometimes—a perfectly moral third party should hope you do the wrong thing.


(Worth noting that it is absolutely permissible, even on deontology, to kill 1 to save 2 if it is in self-defense or to protect them from a killer, but we'll suppose henceforth that the 1 is innocent.)

There are a couple of ways to interpret what is being said here, so it'll perhaps be helpful to disambiguate them. Insofar as this argument is attempting to pick out an incoherence between how an agent should act and what they should prefer to happen, given the truth of deontology: as a deontologist, I'm not committed to a position about what people "should hope" or "should prefer". Deontology is, exclusively, a theory about what makes actions right or wrong; my position, qua deontologist, is silent on the matter of which events we should prefer to occur from the third-person standpoint. Perhaps one way in which it might be relevant is as follows: having a preference for the "killing 1 to prevent 2 killings" scenario is more conducive to agents cultivating respect for humanity (rational nature) as such, and therefore more virtuous dispositions. But if that's the case, it's unclear what the incoherence or tension in the position is supposed to be. Not killing to save 2 and preferring that 1 be killed rather than 2 are both unified under the maxim of respect for humanity as such. At worst, it is a slightly weird entailment, but slightly weird entailments are hardly unique to deontology.

Another way of understanding it is that when we calculate the total value of each state of affairs from the third person, the state of affairs involving one person being killed to prevent two killings is all-things-considered better than the state of affairs wherein the two killings happen. So, for this reason, you ought to prefer that the killing to prevent the two killings happen. Yet the deontologist thinks you shouldn't kill one to prevent two killings. If this is all that's being said, then what's being said doesn't seem to be much more than a restatement of deontological commitments. There is no "paradox", just different questions being asked: "is the action right?" and "is the state of affairs more valuable?". Deontologists do not think an action's rightness depends on the total value of the states of affairs but on other considerations, so it isn't the least bit surprising that the right-making features of actions and the total value of states of affairs are judged differently and are not reducible to each other.

Additionally, I think what little intuitive force this has going for it is counterbalanced, by my lights at least, when we then look at intuitive right-making features of actions that seem to be over and above the total goodness of states of affairs, such as the intentions and character traits the agent has, the maxims the agent acted on, and whether the act involves violating autonomy/self-determination, humanity, or other things we take to be important.


Argument 3

Imagine one thinks that it’s wrong to flip the switch in the trolley problem. While we’ll first apply this scenario to the trolley problem, it generalizes to wider deontological commitments. The question is, suppose one accidentally flips the switch. Should they flip it back?

It seems that the answer is obviously not. After all, now there’s a trolley barreling toward one person. If you flip the switch it will kill five people. In this case, you should obviously not flip the switch.

However, there’s a very plausible principle that says that if an action is wrong to do, then if you do it, you should undo it. Deontologists thus have to reject this principle. They have to think that actions are wrong, but you shouldn’t undo them. More precisely, the principle says the following.
Reversal Principle: if an action is wrong to do, then if you do it, and you have the option to undo it before it has had any effect on anyone, you should undo it.
This problem can also apply to the footbridge case. Suppose you push the guy. Should you pull him back up? No—if you come across a person who is going to stop a train from killing five, you obviously shouldn’t preemptively save him by lifting him up, costing five lives.
A few things can be said here. I feel myself under no initial pressure to accept the reversal principle. There seems to be no reason to think that such a principle is a necessary truth that governs all morally salient actions. The only necessary universal moral truth I'm committed to is that an action is wrong iff it violates the categorical imperatives, and there seems to be no reason to think reversal actions cannot fall under that category. At best, what looks plausible to me is the following principle:
Weak Reversal Principle: if an action is wrong to do, then if you do it, and you have the option to undo it before it has had any effect on anyone, that fact is some defeasible reason to think you should undo it.
I think this principle is clearly far more intuitive than the strong one, and that any intuitive plausibility the strong principle has going for it is parasitic on the intuitive plausibility of the weak one. Suppose it were the case that undoing some wrong action would involve violating someone's autonomy, murder, or cultivating really terrible character traits and dispositions; it seems like considerations like these can clearly outweigh the fact that the act you are undoing is a wrong act when deliberating about whether the reversal act is right to do. Ironically, the only reason I can think of to deny this would be if one were a consequentialist, which I am not.

To answer the cases, then: I think you probably should flip the switch in the trolley problem in the first place, so there isn't an immoral act to undo. No, you shouldn't save the fat man. No, you shouldn't remove the 5 people's organs to save the one. The fact that you are reversing a wrong action is some defeasible reason to do it, and in many cases, all else equal, you should undo wrong actions, but I reject that this fact is overriding in these cases.

Another point worth making is that, depending on how reversal is construed, there either seems to be a tu quoque objection here, or we should be skeptical of our intuitions in these cases. Does reversal mean the action is already done, so you go back and undo the effects of your action? Then it looks like a consequentialist should be committed to thinking there are cases where you shouldn't reverse wrong actions too. Suppose you harvested 1 person's organs to save 5, and then you realize one of the 5 you saved is a depraved serial killer who will kill, rape, and torture many more victims thanks to your action. So, all things considered, the action was wrong on consequentialism. Suppose you can undo it, but the act of doing so has massive causal ramifications, inscrutable to you, which would eventually lead to the birth of Super Hitler, who will initiate a global nuclear holocaust. Strange scenario, but there you have it: a case where undoing a wrong action would be wrong on consequentialism. So, by the reversal principle, consequentialism is absurd! Another way of interpreting reversal is undoing the action so that it doesn't even happen, perhaps by way of a time machine. But I simply don't put much stock in our intuitions about abstract scenarios like what we should do given a time machine, which may not even be possible.


Argument 4


This one is Huemer's paradox of deontology, and is the first argument that I feel the force of. Read Huemer's paper for more context. It starts by putting forth two principles.

Individuation Independence: Whether some behavior is morally permissible cannot depend upon whether that behavior constitutes a single action or more than one action.
And;
Two Wrongs: If it is wrong to do A, and it is wrong to do B given that one does A, then it is wrong to do both A and B.
Now, here is the case which generates the paradox;
Torture Transfer: Mary works at a prison where prisoners are being unjustly tortured. She finds two prisoners, A and B, each strapped to a device that inflicts pain on them by passing an electric current through their bodies. Mary cannot stop the torture completely; however, there are two dials, each connected to both of the machines, used to control the electric current and hence the level of pain inflicted on the prisoners. Oddly enough, the first dial functions like this: if it is turned up, prisoner A’s electric current will be increased, but this will cause prisoner B’s current to be reduced by twice as much. The second dial has the opposite effect: if turned up, it will increase B’s torture level while lowering A’s torture level by twice as much. Knowing all this, Mary turns the first dial, immediately followed by the second, bringing about a net reduction in both prisoners’ suffering. Did Mary act wrongly?
There is a general issue that I think is shared by some of these 'paradoxes', and it is a problem shared by the previous 'paradox' as well. It only arises if we assume principles for judging the moral features of actions that seem plausible at first blush but are ones I'm simply not committed to and that are not part of my normative theory. In general, if you propose a principle that is supposed to be a necessary truth regarding the moral features of all actions, and you have no independent argument for it apart from brute intuition, my response is, naturally, to practice a high degree of skepticism that the principle is, at least universally, true.

With that noted, I'm skeptical of the 'Individuation Independence' principle. There seems to be no strong reason to think that if two or more actions are individually wrong, it thereby follows that a conjunction which contains both actions will also be wrong. Just as a set of bricks can have various properties that each brick lacks (different weights, sizes, etc.), it doesn't seem completely absurd that a conjunction of the same actions could have right-making properties that those actions taken alone lack (and that this could be one such example). There'd need to be a further argument given to think ethical action is a case where the properties of the parts necessarily transfer across a conjunction.

But let's say you really, really want to keep these principles. Then, given those principles, deontology commits you to thinking Mary did act wrongly. This is out of accord with folk intuition. But perhaps we can put pressure on the intuitions by:

1) Pointing out that the scenario is extremely bizarre; it involves having two dials that can increase and decrease suffering levels on a whim, but the dials also work in weird ways that prevent you from reducing one prisoner's torture without increasing the other's. This is not a scenario anything like what anyone has encountered or most likely ever will encounter. Furthermore, accurate reasoning about such off-the-wall hypotheticals that have no connection to our ordinary lives is highly unlikely to have survival or reproductive benefits, so we wouldn't expect natural selection to select for true beliefs about these hypotheticals.
2) Focusing on the kinds of beliefs that lead us to judge Mary's action as right, and putting pressure on those. For example, the actions Mary commits are ones that involve violating consent, but the result is good for both parties. But plausibly, we shouldn't force someone to stop smoking, even if it's good for them. We shouldn't promote paternalism for the good of everyone. So we shouldn't, generally, violate a rationally informed agent's consent in order to bring about what is good for them. This seems especially true in cases where violating consent involves causing significant pain.

TL;DR. This argument is strong but not insuperable. If you are a deontologist and your intuition that the principles are true is stronger than the intuition that Mary acted permissibly, accept that she acted wrongly; if your intuition that she acted permissibly is stronger, reject at least one of the principles.


Argument 5

The argument is roughly as follows—every time a person drives their car or moves, they affect whether a vast number of people exist. If you talk to someone, that delays when they next have sex by a bit, which changes the identity of the future person. Thus, each of us causes millions of future people’s identities to change.

This means that each of us causes lots of extra murderers to be born, and prevents many from being born. While the consequences balance out in expectation, every time we drive, we are both causing and preventing many murders. On deontology, an action that has a 50% chance of causing an extra death, a 50% chance of preventing an extra death, and gives one trivial benefit is wrong—but this is what happens every time one drives.

One way of pressing the argument is to imagine the following. Each time you flip a coin, there’s a 50% chance it will save someone’s life, a 50% chance it will kill someone, and it will certainly give you five dollars. This seems analogous to going to the store, for trivial benefits—you might cause a death, you might save someone, and you definitely get a trivial reward.

I just don't get this one. Not only do I think the entire notion that this is an objection to deontology is confused; I think it's, if anything, an argument against consequentialism.

I think actions that involve directly violating certain rights are wrong, and intentionally producing outcomes that lead to people being harmed and having their rights violated is also wrong, ceteris paribus. What I completely reject is that it is wrong to cause rights violations and harms as a long-term, unintended, and unforeseen outcome. I don't think you're responsible in those cases. I reject the consequentialist assumption that long-term, unforeseen, and unintended identity-affecting outcomes are just as morally relevant to the rightness or wrongness of actions as directly intended or foreseen outcomes, as should any deontologist worth their salt. In fact, I go so far as to reject those kinds of consequences as having any bearing whatsoever on what makes an action right or wrong. Only other things do, such as known consequences, prima facie duties, the maxims incorporated into the action, the character-based dispositions leading to the act, etc. The argument thus fails to get off the ground.

The paper Matthew cites does respond to something close enough to this objection, by way of responding to Lenman, who also draws a distinction between foreseen and unforeseen consequences in his paper. I think the response leaves much to be desired.

What does it mean for some outcome to be foreseeable (to an ideally conscientious agent)? We can't say that some outcome is foreseeable just in case you were (in principle) in a position to know that the outcome would follow from your choice of that act. There is certainly no plausibility in the suggestion that the potential for some action to bring about some consequence can provide a reason against performance of that action only if the agent is (in principle) in a position to know that the consequence will surely obtain if the action is performed. Any such view would permit reckless endangerment. 

I take 'foreseeable' to mean that the agent can reasonably expect a given outcome to occur, given their intentions, background evidence, and beliefs (and other mental states) which are directly accessible to them. If there is nothing accessible to the agent which would permit the agent to know that the outcome is something that will obtain as a result of the act, then the agent cannot be responsible for the outcome which obtains. It's just begging the question to call this reckless endangerment. It is only reckless endangerment if the agent is blameworthy for what occurs, but nothing like this has been shown. It seems to me utterly implausible that the mere potential for some act to generate bad consequences confers reasons on the agent, if such reasons are not accessible to the agent.

So, as for the long-term consequences of ordinary acts: on my view, the agent isn't blameworthy, for three reasons.

1) It wasn't intended
2) No one was directly caused to be harmed or used as mere means in the process of the act. 
3) The agent performing the act did not foresee any particular bad outcome that was to occur as a result of the act.

This also refutes Matthew's example. Your doing some course of action is neither the direct nor the proximate cause of an innocent family dying in a car accident, nor was it intended or known that such an outcome would occur. Further, and importantly, your act could still have been done and, if some asshole hadn't drunk and driven, or if the driver hadn't been distracted, or if they hadn't been speeding, etc., the death wouldn't have happened. On the other hand, your action of flipping the coin and its landing on the wrong side does directly cause someone to be killed, and you knew it might. The act, by stipulation, couldn't have been done without risking the death. Finally, it is false that in ordinary actions the agent's epistemic situation is that there is a 50% chance some particular bad outcome obtains. Instead, the particular long-term identity-affecting outcomes of ordinary actions are completely inscrutable to the agent. No given outcome is expected.

The paralysis argument against deontology looks to me like a failure. But I think a decent argument can be salvaged when it is properly understood as an argument against consequentialism. Consequentialists should factor in these unforeseeable possibilities when deliberating about what they should do, since on consequentialism the distinction between foreseeable and unforeseeable consequences is not morally relevant. So consequentialists should not risk the unforeseen consequences caused by ordinary actions. Matthew hints at a response by saying that 'the consequences balance out in expectation'. However, this fails for the same reason Shelly Kagan's "canceling out" response to the epistemic objection fails; see my previous article for that.

Argument 6

1. If deontology is true, then you shouldn’t kill one person to prevent two killings.
2. If you shouldn’t kill one person to prevent two killings, then, all else equal, you should prevent another person from killing one person to prevent two killings.
3. All else equal, you should not prevent another person from killing one person to prevent two killings
Therefore, deontology is false
There is a general problem with this argument. Any non-question-begging reason (that is, any reason that doesn't assume the falsity of deontology) you can give to think 3 is true is, by the deontologist's lights, a reason to think 2 is false. Indeed, all the arguments Matthew gives in favor of 3 are ones I, as a deontologist, take to be pretty good reasons to think 2 is false. So it's hard to see how this argument is particularly effective against deontology: just reject 2 for the same reasons Matthew gives to accept 3.

My response is therefore to reject 2, because of the reasons Matthew gives and roughly the same reasons I reject the strong reversal principle: I think there is a defeasible reason to prevent wrong actions, it just isn't universally overriding. This should be sufficient to preserve our intuitions and avoid this argument. But let's quickly look at Matthew's justification for 2.
The idea here is pretty simple. It seems really obvious that you should prevent people from doing wrong things if you can at no personal cost. In fact, as this paper, which started the entire idea of this worry for deontology in my mind, notes, this produces a strange result when it comes to deterrence. Presumably, if we think that killing one to save five is wrong, we’ll think that it’s a good thing that laws against murder prevent that. But if we think that third parties have no reason to prevent killing one to save five, then deterrence is not a reason to ban deontic rights violations with good outcomes.

If you have no reason to prevent organ harvesting, then it isn’t wrong. One should prevent wrongdoing, if all else is held equal.

The reasons he gives are;

1. "It's obvious", but this is ineffective, I don't think it's obvious. You have, at most, a defeasible reason, but I think we have good reasons to believe it is overturned in this case. 
2. It creates problems for deterrence. But this is also ineffective. I reject that deterrence is the reason we ban killing 1 to save 5, and I reject that legality is morality applied. Laws, in the minimal sense, should exist to guarantee people's freedoms and rights. So it's obvious that we should ban murder, even in cases where 5 are saved. Also, consequentialism holds that the act of killing 1 to save 5 is actually good, but presumably we shouldn't deter good actions by making them illegal. If that's a bad argument against consequentialism, which I think it is, then it's also a bad argument against deontology.


Argument 7

Take the following example of theft. Suppose that there are 1,000 aliens, each of which has a stone. They can all steal the stone of their neighbor to decrease their suffering very slightly any number of times. The stone produces very minimal benefits—the primary benefit comes from stealing the stone.

The aliens are in unimaginable agony—experiencing, each second, more suffering than has existed in human history. Each time they steal a stone, their suffering decreases only slightly, so they have to steal it 100^100 times in order to drop their suffering to zero. It’s very obvious, in this case, that all of the aliens should steal the stones 100^100 times. If they all do that, rather than being in unimaginable agony, they won’t be badly off at all.

The following seem true.

1. If deontology is true, it is wrong to steal for the sake of minimal personal benefits.
2. If it is wrong to steal for the sake of minimal personal benefits, it is wrong to steal repeatedly where each theft considered individually is for the sake of minimal personal benefits.
3. In the alien case, it is not wrong to steal repeatedly where each theft considered individually is for the sake of minimal personal benefits.

Therefore, deontology is false.

This argument rests on roughly the same intuitions as Huemer's paradox. So, as an argument forming part of a cumulative case alongside Huemer's argument, it doesn't really add much weight, it seems to me. I'll just briefly make three points here:

1. I'm skeptical that, if an action is wrong, it thereby follows that a conjunction which contains the same action in repetition will also be wrong, for roughly the same reasons I am skeptical of the Individuation Independence principle.
2. This hypothetical scenario is the most abstract and bizarre yet, so our intuitions here probably don't count for much.
3. There may be a tu quoque concern here. Suppose you randomly select someone from the global population and give them a medical needle jab. The action is wrong: you caused them minor pain and didn't benefit them in any way. But suppose you keep doing this a billion times, and by doing so you statistically all but guarantee that at least one person will benefit from their jab, having horrifically painful diseases prevented that are many orders of magnitude worse than the pain caused by all the jabs combined. Each individual act is wrong, and you are not in a position to know of any of the acts, on an individual basis, that it wouldn't be wrong (indeed, statistically, each probably is wrong), yet the conjunctive act is right on consequentialism. So, if Matthew accepts the principle that if an action is wrong, the same action in repetition must also be wrong, which he needs for 2 to be motivated, it looks like his own position is under some heat. (See the sketch below for the arithmetic behind the 'statistically all but guarantee' claim.)
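To make the 'statistically all but guarantee' claim a bit more concrete, here is a minimal back-of-the-envelope sketch; the particular probability and number of jabs are purely illustrative assumptions on my part, not figures from Matthew or from any study. If each jab independently has some small probability $p$ of preventing a horrific disease, then over $n$ jabs the probability that at least one person benefits is

$$\Pr(\text{at least one benefit}) = 1 - (1 - p)^{n} \approx 1 - e^{-np}.$$

With, say, $p = 10^{-6}$ and $n = 10^{9}$, this comes out to about $1 - e^{-1000}$, which is certainty for all practical purposes, even though each individual jab, taken alone, is overwhelmingly likely to be a small harm with no benefit.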

Argument 8

We have lots of scientific evidence that judgments favoring rights are caused by emotion, while careful reasoning makes people more utilitarian. Paxton et al 2014 show that more careful reflection leads to being more utilitarian.

People with damaged vmpc’s (a brain region responsible for generating emotions) were more utilitarian (Koenigs et al 2007), proving that emotion is responsible for non-utilitarian judgements. The largest study on the topic by (Patil et al 2021) finds that better and more careful reasoning results in more utilitarian judgements across a wide range of studies.

Ok, so this is one I have a lot less to say about; I'm not all that familiar with the research on the topic. It is case by case, I'd imagine. A general hurdle for a lot of these studies is that there are relevant differences between the cases presented, and a lot of the results might be accounted for by people reacting emotionally to those relevant differences. It could be that the same people use different parts of the brain when making judgements about different scenarios, utilitarians and deontologists alike, rather than it simply being the case that utilitarians are cold rational thinkers and deontologists emotional thinkers.

I also have my doubts that the positive correlation between damage to the prefrontal cortex and utilitarian judgement says much in utilitarianism's favor. It's not implausible that emotional capacity allows agents to better empathize, and is an important part of moral reasoning.

But at the end of the day, if the studies do overwhelmingly show that utilitarians are generally more rational than deontologists in their judgements, so be it; I'll concede that this is a point in utilitarianism's favor, albeit a non-decisive one.

Argument 9 

(1) Deontic constraint (for reductio): Protagonist acts wrongly in One Killing to Prevent Five, and ought instead to bring about the world of Five Killings.

(2) If an agent can bring about W1 or W2, and it would be wrong for them to bring about W1 (but not W2), then W2 ≻ W1. (key premise)

(3) Five Killings ≻ One Killing to Prevent Five. (from 1, 2)

(4) One Killing to Prevent Five ≻≻ Failed Prevention. (premise)

(5) Failed Prevention ⪰ Six Killings. (premise)

(6) Five Killings ≻ One Killing to Prevent Five ≻≻ Failed Prevention ⪰ Six Killings. (3 - 5, transitivity)

(7) It is not the case that Five Killings ≻≻ Six Killings. (definition of ‘≻≻’)

Contradiction (6, 7, transitivity).

I reject (2). I reject that the rightness or wrongness of an act is a function of the world-states the action brought about or could have brought about. Rather, it is a function of the reasons and principles which governed the act. (2) seems pretty much tantamount to a denial of deontology.

(I) Denying (2)—e.g. by claiming that we should prefer wrong actions to be performed—would rob deontic verdicts of their normative authority.

But I don't think denying (2) robs them of their normative authority. At the very least, I'd need to see an argument for that claim for this argument to move me (or any non-consequentialist, really). I also argued, in the previous article, wherefrom deontic verdicts derive their normative authority. Never did I appeal to the total goodness of world-states, but only to the unconditional value of humanity. Until the charge of loss of normative authority is motivated, I have nothing further to say here.

Argument 10

Deontology holds that there are constraints on what you should do. But this produces a very strange result—it ends up forcing the conclusion that sometimes putting perfect people in charge of decisions makes things worse. Suppose that a person while unconscious sleepwalks and is about to kill someone and harvest their organs to save five people. Then they wake up and have a choice of whether to actually harvest their organs or not.

Given that harvesting organs makes things better—it causes one death instead of five—it would be bad that they woke up. But this is strange—it seems like putting people who always do the right things in charge of a situation can’t make things worse.
This is another one I just don't feel much of the force of. I already believed that there are cases where an agent doing the right thing makes things overall worse (the choice of whether to kill 1 to save 5 being an example). So, given this, I'm not sure what pointing out that giving a morally perfect agent control can make things worse adds. It's just entailed by both my normative theory and my pre-theoretic moral judgements. Not a surprising result.

Perhaps you might think it is a strange result, but I also think there is a similar result for utilitarianism which seems equally, if not more, strange. There seem to be situations where utilitarianism is self-effacing. Humans are very bad at judging even some of the immediate consequences of their actions, let alone the long-term consequences. So it seems very plausible that humans acting like, or being, deontologists and following a set of deontological rules would be overall better, at least in some cases, than leaving humans to their own devices to figure out what course of action will maximize well-being and minimize suffering. In which case, making utilitarianism publicly accepted would be bad on utilitarianism. But that's an odd result; surely it can't be wrong for everyone to hold to, and attempt to act in accord with, the correct moral theory.

Argument 11 
  1. If deontology is true, it’s impermissible to kill one person to harvest their organs and save five.
  2. If it’s impermissible to kill one person to harvest their organs and save five, it’s impermissible to flip the switch in the trolley problem.
  3. It’s not impermissible to flip the switch in the trolley problem.
Therefore, deontology is false.


I reject 2, but let's examine Matthew's argument for it.

Suppose that flipping the switch was less wrong than pushing the guy off the bridge. In this case, we should expect that, if one is given the choice between the two actions, they ought to flip the switch. After all, it’s better to do less wrong things rather than more wrong things.
Thus, the argument is as follows.
  1. If flipping the switch is significantly less wrong than pushing the man, then if given the choice between the two options, even if flipping the switch is made less choiceworthy, one ought to flip the switch.
  2. If given the choice between the two options, even if flipping the switch is made less choiceworthy, one ought not flip the switch.

         Therefore, flipping the switch is not significantly less wrong than pushing the man

I'm not sure what 1 means. What factors are we considering when we say an action is made "less choiceworthy"? This is vague to me. Matthew claims in the article that this premise is obvious, but, as we've seen, he says a lot of things are obvious that I think are false. So I'm unsure whether to accept 1.

I probably reject 2 depending on what is meant, but let's let Matthew disambiguate; 

Imagine the following scenario. There’s a train that will hit five people unless you do something. There’s a man standing above, on a bridge. There’s a track that you can, by flipping a switch, redirect the train onto, which leads up to the man on the bridge above the track. However, the train moves very slowly, so if you do that, the man will be very slowly and painfully crushed.

However, you have another option. You can, by pushing the man, cause him to die painlessly as he hits the tracks, and he’ll stop the five people. Which should you do?

Now, the deontologist is in a bit of a pickle. On the one hand, they think that you should flip the switch in general to bring about one death while saving five, but you shouldn’t push a man to save five. But in this case, it seems obvious that, given that the man would be far, far better off, it’s much better to push him than it is to flip the switch.
So, I lean towards thinking pushing the fat man is immoral and pulling the trolley lever is not. The main reason is that in the footbridge case you are using the fat man as a means, while in the trolley case you're not using the person on the track as a means. What happens in each situation if the man isn't there? In the trolley case, you can still pull the lever and save the five. In the footbridge case, you can't. So you're using the fat man as a means to an end, but not the guy standing on the other track. Another important reason has to do with the Doctrine of Double Effect. In the trolley case, your intent isn't (necessarily, at least) to kill someone; the goal is to switch the track to save the five. This is what Anscombe highlights as a problem.

With this in mind, what should we think of the scenario? It highly depends. If the agent is deliberating between two ways to kill the fat man, then it's immoral to do either, since in both cases he'd be intending to kill the fat man and use him as a mere means. If, on the other hand, the agent sees the switch and pulls it with the goal of saving the five, that's good. Now, in fairness, I can see how this is intuitively a weird result, so I do see this as having some intuitive pull. However, I don't think it's all that strong when we consider the morally salient differences between the two actions qua actions, as opposed to the two states of affairs qua states of affairs. All in all, I reject premise 2 of both arguments.


Argument 12

It doesn’t really matter if you or someone else violates someone’s rights. From, to quote Sidgwick, the “point of view of the universe,” it seems unimportant whether it’s you violating rights.

Indeed, it seems very clear that in the organ harvesting case, it’s more important that five don’t die than it is that you don’t get your hands dirty. This seems like an unassailable moral datum. But morality should pick up on the things that really matter.

This intuition is completely at home with my view. I grant that whether you or someone else acts wrongly and violates someone's rights is not better or worse for the world. Where I disagree is with the claim that morality "should pick up on the things that really matter" in the sense Matthew means. Morality is not axiology. It tells us what actions we should and shouldn't do, not what is truly valuable from the point of view of the universe, if there even is such a thing, which I doubt. But even if there is, why should we expect moral judgements about the rightness or wrongness of human actions to track it? It seems we should only expect this if we are antecedently consequentialists, which I'm not.

Argument 13 & 14

Moderate deontology holds that rights are valuable, but not infinitely so. While a radical deontologist would be against killing one person to save the world, a moderate deontologist would favor killing one to save the world. Extreme deontology is extremely crazy. An extreme deontologist friend of mine has even held that one shouldn’t steal a penny from Jeff Bezos to prevent infinite Auschwitz’s. This is deeply counterintuitive.
Depending on what is meant, I'm probably an 'extreme deontologist', if what that amounts to is denying threshold deontology and accepting that some acts, such as murder, torture, and rape, are categorically forbidden. Though I'm not as extreme as most others whom we'd call 'extreme deontologists'. Yes, you should steal from Bezos. Yes, you should lie to the Nazi. In any case, I'm extreme enough that the explosion problem doesn't apply to me. Since I'm not a 'moderate deontologist', argument 14 fails to apply to me as well. I'll leave this as an avenue for Matthew to attack my view on the grounds of 'craziness'. Moving on.


Argument 15

There are various explanations of our deontological intuitions being produced by various biases. There are lots of biases that we should expect to make us more likely to believe something like deontology even if it is false.

  1. Status quo bias. This describes people’s tendency to prefer things as they are, in fact, going. For example, if you ask people whether they think a hypothetical person should invest in A or B, their answer will depend on whether you tell them that they’re actually investing in A or B. But this explains pretty much all of our deontological intuitions. All of them are intuitions about non-interference—about keeping things as they are. These are thus prime candidates for things that can be chalked up to status quo bias.
  2. Loss Aversion. This describes a tendency for people to regard a loss as more significant than just losing out on a gain. Losing five dollars is seen as worse than not gaining an extra five dollars that one would have otherwise. But people being averse to causing extra losses, combined with the idea that various losses in, for example, Bridge are incorporated into their deliberation, means that they will be likely to oppose pushing the person.
  3. Existence Bias. People treat the mere fact that something exists as evidence that it is good. But this intersects with status quo bias and explains why we don’t want to change things.


I'm willing to concede this is some evidence for utilitarianism over deontology. Nonetheless a few points are worth making here.

For one, I have my doubts that these biases (at least completely) explain all, or even most, deontological intuitions. Intuitions about the overriding wrongness of violating autonomy or self-determination, of torture, etc., even in cases where refraining leads to overall worse outcomes, as well as the intuition that the wrongness of, say, promise-breaking is over and above the badness of the outcomes, don't seem to be explained by any of these biases.

For two, this seems to cut both ways to some extent. Utilitarians seem to ignore many seemingly salient factors in ethical decision-making; they judge actions only by the states of affairs they bring about. In general, all humans are prone to bias, and it is doubtful that utilitarianism truly solves this, rather than merely being a framework for cloaking many of our pre-existing moral biases under the guise of mathematical calculation and certainty.

For three, most of the people whom we would expect to be most reflective about the subject, and thus least prone to bias, as well as most knowledgeable (e.g. professional normative ethicists), are non-consequentialists, with more being deontologists than adherents of any other particular moral theory, and with consequentialism being the least popular of the main three (although I think the divide between consequentialism and non-consequentialism is much more significant than the divide between deontology and virtue ethics, which I take to collapse in many ways). Only a subset of those consequentialists will be hedonic act utilitarians (Matthew's view). Of course professional ethicists will be prone to bias too, as all humans are, but I think this consideration is enough to greatly offset the force of Matthew's point.


Conclusion


Most of Matthew's arguments seemingly have little to no force, or simply fail (as in the case of the paralysis argument), or apply equally to consequentialism, and they rely on the same sorts of intuitions about extremely bizarre hypotheticals which we shouldn't expect to be reliable. The best arguments and considerations raised here are, at most, some evidence against non-consequentialism, but they are not nearly as devastating as Matthew claims. By contrast, my article provided direct arguments for Kantian constructivism's parsimony and explanatory power, as well as serious epistemic and practical concerns for consequentialism, and the part of my case that did rest on intuition focused on ordinary cases rather than abstract reasoning about strange hypotheticals. I conclude that we should reject consequentialism and embrace deontology based on the arguments provided so far.