Tuesday, February 21, 2023

Reply to Bentham's Substack 2: Electric Boogaloo

This will be another installment in the epic debate between myself and Bentham's Substack. He has posted his response to my original article. You should probably read both for context before reading this. Now, I offer my rejoinder to his rejoinder. 


Kantian Constructivism 

Matthew says; 
There is a lot of literature on Kantian constructivism. One worry for this is that it requires denying moral realism, which I take to be a mark against it.
To each their own! I think that's a mark in favor of it!
But another problem is David Enoch’s shmagency objection. The basic idea is this: the Kantian Constructivist says that you can, in theory, not respect other people as ends, but you, by doing so, revoke your status as an agent—to be a true agent, you have to not violate the rational will. But then the question arises: why should I care about being an agent?

This is an objection worth taking seriously, probably one of the strongest objections to constructivism or constitutivist views of action. I'll list a few reasons why I find it unconvincing.

For one, I may not be able to offer you further reasons to be an agent rather than a schmagent, but nonetheless we are rational agents with sets of values and aims. Certain evaluative truths are entailed internal to this standpoint, and whether you like it or not, you can't opt out of it. Enoch's response is that even if you can't opt out of the game of agency, this does not solve the problem, because there is still the further question of whether we have normative reason not to break the rules constitutive of it. However, I would retort that breaking those rules is being inconsistent, given that you are a rational agent, for standard Kantian reasons I've previously argued. Perhaps then it might be asked: "what is the further reason to avoid being inconsistent?" I don't think there is one, nor that there could be one, for that is just asking for a reason to be reasonable.

For two, the question is either asked internal to the standpoint of an agent, in which case constitutivism has a straightforward answer by appealing to agency's constitutive norm(s), or the question is external, in which case it is unintelligible: what it is to be a reason for action is embedded in, and only makes sense within, the standpoint of agents, so a "reason to be a schmagent" is conceptually confused. See (Silverstein 2015).

For three, even if the question is intelligible and the unavoidability of agency doesn't help, the claim that the absence of reasons to be an agent shows that constitutivist views cannot ground normativity remains, at best, unmotivated. Just because there is a set of normative reasons, call them n1-n99, which are grounded in constitutive features of agency, it doesn't follow that these constitutive features must in turn be grounded in further normative reasons; it may be that n1-n99 are the only normative reasons there are. See (Paakunainen 2018), which also gives intuitive analogies here.

With regards to the Normative Question, Matthew says:

There is an answer to 1 on moral realism—the reason to be moral is because morality is about what you have most reason to do

But this doesn't answer 1. Why think such-and-such moral fact is what the agent has most reason to do? If the agent doesn't already think that's what they have most reason to do, what reasons do you give them? If you say it is a primitive necessary truth, then you're falling into the problem I stated before: far from answering the normative question, you have merely located the necessity where you wanted to find it.

I don’t think 2 is quite right—Jeffrey Dahmer should be moral even if he would need to come to the belief that his action is justified and makes sense.

Ah, but this isn't what 2 says. What it says is that, on the condition that the agent knows what justifies their action being required, they come to form the belief that the action is justified and makes sense. The reasons given have to be motivating by the agent's lights. If they aren't, then why should they accept them?

For question 3, Matthew says:

This assumes that the reason to be moral is about what happens to you. I think that dying for morality is probably worse for you, but it’s better for others. Several others dying is worse than death for you.
It is not about reducing morality to what is best for you. I'm far from an egoist. It is about what matters most, by your own lights, and what matters most seems to be your practical identity, which "is a description under which you see yourself as having value and your ends as worth-undertaking". So, circumstances where you must give up your practical identity explain why you might be obligated to sacrifice your life.

Next:
In terms of answering the questions of why all and only the things that actually matter matter, the answer would be grounded in facts about the things. There isn’t some extra further non-natural add-on—instead, the non-natural facts just serve to pick out which of the natural facts matter.

But why accord those natural facts value, rather than some other cluster of natural facts? For any natural fact you give, it seems the normative question will come back: "Do I have a reason to act in accord with, or regard as valuable, these natural facts?" Merely positing it as a brute or necessary truth doesn't answer anyone who doesn't already accept it; it just, once again, locates the necessity where desired. Whereas on Kantianism the source of normativity is rational willing/autonomy/deliberation as such, which every agent has, and which is just one's capacity for setting one's ends and taking them to be worthy of pursuit, and for deciding which desires or external facts are better to act on. You ought to accord rational nature unconditional value because rational nature is the source which generates all your reasons for action, the condition for your valuing anything at all and bestowing worth on your ends. It is motivated by a transcendental argument.
If we ask why pain is bad, what answer could be more satisfying than “because of what pain is like.”

You can intelligibly know the phenomenal content of pain and fail to see it as a reason to avoid it. There's no incoherence here. In fact, plausibly, many people rationally find pain desirable, as it allows them to grow emotionally and physically.

But I think that there are issues for the Kantian constructivist account of normativity. It seems unable to ground non-moral axiology. It seems that tornadoes are bad and puppies are good.

That's correct. But I don't see this as a problem, because it's not what Kantian Constructivism, as a theory, is trying to do. It's a theory of right and wrong actions, not of non-moral axiology, and it is consistent with various views one might take on non-moral axiology. It is similarly not a problem that the theory of general relativity doesn't explain the economic calculation problem.

But this doesn’t seem like a good answer. It seems like the reason that pain is bad isn’t just that you’d care about it if you thought hard. It seems that the fact of the badness of pain grounds the fact that you wouldn’t like it.

I don't think so. There is nothing about pain that has normativity built into it, at least as far as I can tell, and as has been argued. If one is fully aware of what pain is like and is just not motivated to avoid it, what is the irrationality here? There's no contradiction, no practical inconsistency, no means-end incoherence. So, what's irrational? 

In regards to the Kantian argument, Matthew says:

I disagree with this because I accept moral realism. If our reasons to have ends are just based on our desires, then none of these obviously irrational desires would be irrational.

Matthew proceeds to list off a set of 'obviously irrational desires'. But I reject that they are irrational, much less obviously so, for reasons I explained in my first article. Matthew gives some responses, which I don't think are satisfactory:

Most people think that if you set yourself on fire for no reason beyond the fact that you want to that that is irrational.
But what most people think doesn't matter. What matters is whether the agent in question is actually being irrational, not whether other people think they are. The reason most people think they are irrational is that they are imposing their own perspective, whereby pain is undesirable, a perspective which the agent in question does not share. I fail to see any reason they should be taken to be irrational rather than simply out of accord with the instrumental reasons ordinary persons are motivated by. Perhaps the idea is that well-being and avoiding pain are necessarily good, but, once again, the agent in these scenarios doesn't accept that, so there is no inconsistency entailed from their perspective.

Further, and this is something I neglected to mention in my article, I think people's intuitions here can be explained by evolution. The belief that avoiding pain and pursuing pleasure is necessarily good has tremendous survival and reproductive benefits, since sex is pleasurable, things that can harm or kill us are painful, etc. So we'd expect evolution to select us to have these beliefs even if they're false.
There are lots of strange desires. No one has the intuition that two aliens engaging in weird pleasurable activity where they put their tentacles in each other's ears are wrong.

I do not deny this. But my point is that we shouldn't trust our intuition that they are irrational, rather than just really strange to us, because we can't actually show an incoherence from the agent's standpoint without positing further propositions that we think are obvious from our own reflection on our phenomenology, but which the agent, for all we know, completely rejects from their own reflection. We also have no access to the agent's internal constitution, or even any idea of what it would be like to be constituted like said agent.

I’m not sure what’s being missed. Hedonism—which I don’t have to defend for the purposes of this article, just general utilitarianism—says that pleasure is good which makes it rational to pursue. What’s unexplained? There’s no deeper account of why pleasure is good just like all fundamental things bottom out at some point.
What's not explained is what makes it irrational for an agent to not desire pleasure.
The first premise talked about what you do, in fact, value. But none of that means you make things valuable; it just means you care about things. Your desires don’t make things really worth caring about.
I don't think your desires alone make them valuable either; your choosing them with your rational will makes them valuable, which involves deliberating between desires and picking out which one is a better reason for action. I think that does make things valuable, because I don't think there is anything further to value aside from what is entailed from the practical point of view of an agent who values things. I reject that there are external value properties, and have given reasons why. We'll get into more of that later. For now I'll just say: even if they did exist, I think they succumb to a regress of normative questions, as discussed before.
No, this just means you can value things. Even if you can value things and somehow make them valuable with your will, this wouldn’t mean you are an end in and of yourself.

It means all value is conditional on the rational will. It is the source of all value and the only thing whose value is unconditional. Sounds like an end in and of itself to me.

Just asking whether something respects the ends of rational creatures seems undefined. Indeed, I don’t think there’s a fact about whether you use someone as a mere means.

We use someone as a mere means if we treat them as an instrumental tool in a way that fails to respect them and their ends. If you kidnap someone for ransom, you are treating them as merely instrumental to your aims, disregarding their consent, life projects, self-determination, autonomy, etc. Here are a couple of articles which explain the ideas I have in mind here.
One other worry is that this seems unable to ground morality. After all, it can’t explain axiology, and it also can’t explain the wrongness of harm to some severely mentally disabled, who are not rational beings.

I've addressed the point about axiology. It's consistent with numerous views, just like consequentialism is (desire theory, hedonism, etc.). The second is something Matthew is correct to point out as a difficulty, as the value of non-rational agents is not straightforwardly entailed from Kantianism the way the value of rational agents is. That is not to say there are no resources on the Kantian view for affording moral value to non-rational agents, e.g. disabled people, animals, infants, etc. It is plausible that we have an indirect duty to treat beings who are sufficiently similar to rational agents with kindness, since otherwise we foster bad dispositions for respecting rational nature. Additionally, a precautionary principle is plausible here, since they are so similar to rational beings. Cognitively disabled people have modal status (Kagan 2019) and potential status, since their faculties tend towards rational nature. Korsgaard also has a Kantian and Aristotelian-style argument for taking non-rational creatures such as animals to be ends in themselves (Korsgaard 2018). There's more that can be said, but that's all I'll say here.

But reasons internalism is false. The reason why ridiculous things don’t matter is because there’s nothing about them that’s worth pursuing, not that you personally don’t care much about them.

It is, I think, rather unclear what is meant by "worth pursuing" here. But let us suppose it were true that counting blades of grass was "not truly worth pursuing" and "objectively a waste of time", however that's cashed out. Even if that were true, who cares? I still wanna count blades of grass! This is one reason why reasons externalism strikes me as implausible: it's hard to give an account of what an external reason independent of any valuer's point of view is, and even if it could be understood, such reasons don't really do anything, it seems to me. They'd just be things that you can incorporate into your set of potential reasons for action, which may or may not move you to act or correct your preferences or dispositions, exactly like any other natural, non-normative property.
They count in favor of what they’re reasons for. It’s unclear what about them is supposed to be incoherent. When I say “you shouldn’t torture babies for fun,” or “you have good reason not to set yourself on fire even if you want to” that seems coherent, even though it’s not appealing to any person’s desires.
"But what does 'counts in favor of' mean here if it's non-goal-indexed?" is the question I'd imagine Lance would ask. It seems unhelpful to introduce further concepts which will be understood in an anti-realist or instrumentalist manner.
I think a lot of this is irrelevant. If we think that as rational people reflecting we’d end up concluding that rights don’t matter
But I don't think we'd end up concluding that rights don't matter for Kantian reasons.
I think the idea that Mackie and Olson just call moral realism weird is uncharitable. They pick out the problematic features of the kind of normativity in question, features which fit poorly with our background understanding of the world, are completely unlike anything else we know, and would require a special faculty to access that doesn't fit with our ordinary ways of knowing everything else. Further, what is a counts-in-favor-of relation? The internalist has a straightforward account. Yet if it's not some motivational fact, or anything else about an agent's or set of agents' psychology, and is instead a feature 'built into' the world, what could the truth-maker for such a relation obtaining even be? In virtue of what do certain natural properties instantiate this relation while others don't? It's unclear.

Matthew cites his article on moral realism to offset the concerns of ontological parsimony and explanatory impotence. I'll just make two notes here. The first is that I think most of Matthew's arguments are not gonna be compelling for those who aren't antecedently realists; I've already explained why I find the 'phenomenal introspection' and 'irrational desires' arguments to be highly dubious. The second is that a lot of the general advantages of realism, e.g. binding reasons on all rational agents, is something you get on a constructivist view like mine for free, without the extra ontological items posited by realism. Moral convergence is equally expected under Kantian Constructivism as well.
I think consequentialism gives good accounts of that, as we see when we get to the specific objections. Additionally, no explanation was given at all for why Kantian deontology explains any of that.
It's straightforwardly entailed by Kantian deontology that all humans have absolute value and dignity, and that it is in virtue of our rational natures that we are ends in ourselves.
This is a whole can of worms, but I addressed it in my moral realism article.
I myself am not convinced that evolutionary debunking is an insuperable problem for moral realism, but I think it is a difficulty which my view handles better than realist views. As for Matthew's article, I'll just once again plug my friend's response, which addresses Matthew on evolutionary debunking.

Overall, I think Matthew has failed to refute the arguments and considerations I laid out that favor Kantian Constructivism over rival views.


Epistemic Objection 


Next we'll be looking at Matthew's response to, what I take to be, the most powerful objection to consequentialism.
For one, as I showed in my opening statement, unpredictability results in the conclusion that deontologists shouldn’t act at all.
I responded to this in my previous article.
Second, I think the expected value response to this works—it’s true that it’s hard to know the exact consequences, but we can make estimates.

This is the best response to the objection, which is why I pre-empted it in my first article. Let's see Matthew's response. 

But it’s also likely to prevent very bad things. All we can do is rely on our best judgment—many of these huge ripple effects fizzle out.
This just seems like, at best, a reason to be agnostic about whether the action is good; it doesn't justify your belief either way. But if you're agnostic about it, how do you pick one course of action rather than another when deliberating about what you should do? As I argued before, it's extremely implausible, and indeed I've even directly argued that it is astronomically improbable, that the known consequences are sufficient to break the tie on expected utility.

To make the point more clear, here's an analogy I liked from Lenman. Suppose it's D-Day and you are a military leader who has to choose between two plans, plan A and plan B. The plan you choose will have tremendous consequences for the war, civilians, and the soldiers on the battlefield. Let's suppose you know that if you go with plan B, a dog will break her leg, but if you go with plan A she won't. The unknowns of going with plan A or B are such that they otherwise cancel each other out. Does Matthew mean to seriously suggest that the knowledge of the dog breaking her leg is a sufficient reason to choose plan A? Keep in mind, if you make the wrong choice, the consequences are many magnitudes greater in significance than the dog breaking her leg. Perhaps it is some reason, but it is, quite clearly, proportionally swamped by the other consequences. So you should have basically no clue which plan to pick. This is similar to the consequentialist's epistemic position when deliberating about which actions one should do, since all and only consequences are salient in determining whether an action is right or wrong.

All this, by the way, really generously assumes the choice is between two options. If there are more, you have more options across which to partition the expected value space, and the expected utility you get from choosing one option relative to all the others goes down even further.
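To put numbers on the swamping point, here is a toy Monte Carlo sketch (my own illustration, with made-up figures, not anything from Lenman or Matthew). It models the unknown long-term consequences of each plan as draws from the same huge-variance distribution, which is itself a principle-of-indifference assumption, and the dog's broken leg as a one-unit known difference in plan A's favor:

```python
import random

# Toy model: the one known difference (the dog's leg) versus the
# unknown long-term ripple effects of each plan. All numbers are
# stipulated for illustration only.
TRIALS = 100_000
KNOWN_DIFFERENCE = 1.0       # plan A spares the dog's leg: +1 utility
UNKNOWN_SPREAD = 1_000_000   # scale of the war's unknowable consequences

a_turns_out_better = 0
for _ in range(TRIALS):
    # Principle-of-indifference assumption: both plans' unknowns are
    # drawn from the same zero-mean, huge-variance distribution.
    unknown_a = random.gauss(0, UNKNOWN_SPREAD)
    unknown_b = random.gauss(0, UNKNOWN_SPREAD)
    if unknown_a + KNOWN_DIFFERENCE > unknown_b:
        a_turns_out_better += 1

print(f"P(plan A actually turns out better) ≈ {a_turns_out_better / TRIALS:.3f}")
# Prints roughly 0.500: expected value formally favors plan A, but the
# known difference tells you almost nothing about which plan is better.
```

The expected value calculation still mechanically says "pick plan A", but the simulation makes vivid how little that verdict is worth: the known consequence is pure noise relative to the unknowns.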
Imagine a game way more complicated than chess, where it was hard to know which moves were best. Even if it is hard to know, it’s not impossible, so you could still make moves with the highest expected value.

My answer is: if your epistemic situation with respect to the game is analogous to ours with respect to the long-term identity-affecting effects of our actions, then yes, you should be in doubt about which move you should make.
Truth teller then argues that we have no a priori reason to expect this to cancel out. This is true, but we can just evaluate the expected consequences, taking into account uncertainty. No part of this reply requires accepting the strong principle of indifference.
If Matthew is not using a principle of indifference, he owes us an explanation of how he partitions and distributes the probabilities of the set of long-term astronomical identity-affecting outcomes and their expected utility; otherwise, and again, we should be clueless about what to do. He hasn't offered this.
Truth teller’s argument is more radical than he realizes. It results in the conclusion that we can’t calculate consequences, so consequences shouldn’t factor into our considerations at all. But this is absurd—if you could cure depression, for example, that would be good because it would have good consequences

It doesn't entail that no consequences factor in, just the unforeseeable, long-term, and indirect ones.
Suppose you know that your action of going outside today will cause the extinction of all life on earth five years from now in an indirect way. On accounts that reject the significance of these consequences, these are, quite literally, no reason not to go outside.
If you know, then you've foreseen that some particular outcome will follow from your performing the act of going outside. I think that's sufficient to provide a reason not to go outside. What wouldn't be is if the consequence were both unforeseen and indirect.

I conclude that Matthew has failed to refute, or even truly take seriously the epistemic objection.
Demandingness Objection

Next, Matthew's responses to Demandingness:

First, utilitarianism is intended as a theory of right action not as a theory of moral character.  
But what makes one a good person, on any ethical theory, should be a function of the rightness and wrongness of the actions they perform, just as what makes a good mechanic should be a function of the efficiency and success of the car-fixing actions they perform. So it's hard to see how this distinction is supposed to help. If Matthew denies this, then first, I'm not sure what else is supposed to determine the value of moral character, and second, it seems to rob utilitarianism of normative authority. Why care if the actions I perform maximize utility, if I'll still be a good person regardless?

Virtually no humans always do the utility maximizing thing--it would require too great of a psychological cost to do so. Thus, it makes sense to have the standard for being a good person be well short of perfection.

On utilitarianism, what is the rightness of an act or omission determined by? Only the net utility produced. Therefore, whether the weight of psychological costs is enough to make it right to abstain from donating to charity, donating your kidney, etc., and to purchase luxuries for yourself instead, is going to be determined by the net utility. But donating will generate more utility on net even when we factor in the psychological costs to you. It is not just that you are falling short of perfection on utilitarianism; you are blameworthy for failing to do the right action. Matthew needs to give a principled basis for why, by the lights of utilitarian judgements, you wouldn't be blameworthy. But it's hard to see how that can be done.

Most of Matthew's responses to the demandingness objection concede the demandingness of utilitarianism, arguing that it does not provide sufficient reason to think utilitarianism is false. After all, the correct ethical theory may well be demanding! This is fine; I think some of Matthew's responses here are reasonable. However, I never intended this objection to be a knockdown argument, merely to show that our ordinary moral practice and beliefs are not nearly as demanding as utilitarianism entails, which is better explained by, and antecedently more expected on, non-consequentialist views than utilitarianism. There is also the implication that utilitarianism wouldn't be a particularly helpful guide for humans, as, realistically, no one truly follows its demands. Not even Matthew does; he could have created a refugee fundraiser site rather than a blog dedicated to arguing for utilitarianism! From this point on I'll only address the objections of most interest.

(Sobel, 2007) rightly points out that allegedly non-demanding moralities do demand a great deal of people. They merely demand a lot of the people who would have been helped by the allegedly demanding action. If morality doesn’t demand that a person gives up their kidney to save the life of another, then it demands the other person dies so the first doesn’t have to give up a kidney.
It seems to me not correct to say that you are demanding the other person dies. All we are saying is that one is not morally blameworthy for refusing to give their kidney. "We are not demanding anyone give Smith a kidney, and we don't demand Smith dies" is not a contradiction. What would be a contradiction is if we said "We demand that Smith does not get the kidney", but as far as I can tell, my position doesn't entail that; a further argument would need to be given that it does. Further, it's actually very plausible that people have rights to bodily autonomy and self-governance which can outweigh others' rights to life. In fact, I don't think a better case could be given where the majority of intuitions more clearly favor me.
Fourth, the drowning child analogy from (Singer, 1972) can be employed against the demandingness objection.
I've pre-empted this in my original article; I'll get to Matthew's responses later.
Kagan rightly notes that ordinary morality is very demanding in its prohibitions, claiming that we are morally required not to kill other people, even for great personal gain. However, Kagan argues a distinction between doing and allowing and intending and foreseeing cannot be drawn successfully, meaning that there’s no coherent account of why morality is demanding on what we can’t do, but not on what we must do.
It's clear that in cases where you're doing, you are more responsible than when you are just allowing. If you rape someone, you're a rapist, but you're not a rapist if you only allow a rape to happen. Probably, what Matthew means to say is that the distinction is not significant enough for there to be a difference with respect to our moral obligations in the particular cases in which utilitarianism demands more of us, but this seems exactly like the point that's in dispute, so it would need to be motivated. Perhaps Kagan gives arguments in his book; I've not read it. I'll let Matthew share them with us in his last reply.
Eighth, our intuitions about such cases are debunkable. (Braddock, 2013) argues that our beliefs about demandingness were primarily formed by unreliable social pressures. Thus, the reason we think morality can’t be overly demanding is because of norms about our society, rather than truth tracking reasons.
This is probably true. But for one, my point with demandingness is not just that it is unintuitive; it's that it implies a moral practice that is completely unlike ours, one that is impractical, nigh unlivable for humans. For two, these same sorts of debunking considerations apply, mutatis mutandis, to pretty much all of our moral intuitions. It is, after all, a fact that our moral judgements in general are highly sensitive and plastic in the face of various non-truth-tracking cultural/social pressures (Gold et al. 2014) (Miller 1994) (Jafaar et al. 2004) (Pryor et al. 2019). More fundamentally, the content of our moral judgements is shaped by evolutionary pressures, which raises a similar debunking concern (Street 2006) (Joyce 2005). The task before Matthew is to sustain a local debunking argument regarding our demandingness intuitions while avoiding opening a Pandora's box of global genealogical debunking arguments about our moral intuitions. My own view is that this cannot be done, and that none of Matthew's own responses to EDAs work (I've already linked an article above for roughly the reasons why).

Further, the examples Matthew himself gives as evidence for unreliability are not super compelling.
It’s generally recognized that people have an obligation to pay taxes, despite that producing far less well-being than saving the life of people in other countries. 

But why think that this is an intuition about a moral duty people have, rather than about a civic duty one recognizes as part and parcel of being a citizen of a country? In the same way, we don't think putting up with shitty customers at Walmart is a moral duty, but it might be your duty as an employee of Walmart.

As (Kagan, 1989) points out, morality does often demand we save others, such as our children or children we find drowning in a shallow pond. This is because social pressures result in caring about people in one’s own society, who are in front of them, rather than far away people, and especially about one’s own children.

To some extent, that's right. But again, there is something that seems right about this intuition, something that even Matthew surely must admit. We think responsibility is scaled by 1) how much control you exert or have over the situation and 2) your vision of or awareness of the situation. The less control you have, and the less aware of it you are, the less responsible you are. If you hear on the radio that a tsunami has hit a distant country, you're less responsible for not hauling over there and saving who you can than if a tsunami happens in your vicinity and you fail to save people you could save by extending your arm.

Tenth, scalar consequentialism, which I’m inclined to accept, says that rightness and goodness of actions come in degrees—there isn’t just one action that you’re obligated to take, instead, you just have actions ranked from most reason to least reason. Scalar utilitarianism gives a natural and intuitive way to defuse this. Sure, maybe doing the most right thing is incredibly demanding — but it’s pretty obvious that giving 90% of your money to charity is more right than only giving 80%. Thus, by admitting of degrees, demandingness worries dissipate in an instant.
This response is one of the more interesting ones, but again, it is hard to see how it helps. You'd still always have most reason to do the most utility-maximizing action. And if I am deliberating among a set of options and there is one option I have most reason to take, then surely, if I don't take it, I'd be blameworthy for failing to do what I have most reason to do.

It looks like what Matthew has in mind is that there is no particular action that you ought to do, just actions which are better (more reason) and worse (less reason) to do. But for one, I'm not sure what a reason is if it's not something which tells you (or counts in favor of) which action you ought to perform rather than another. For two, this leads to absurdities. Suppose you're in a room with two buttons and you can only press one: B1 maximizes utility for a billion people, B2 maximizes utility for one person. Obviously, you ought to press B1 and not B2, especially if we think maximizing utility is the only thing that makes ethical decisions good. Yet if there is no particular right action you ought to do, that's false. Both buttons increase utility; B2 just does it much, much less.
But this is clearly irrelevant. Suppose you could save a child by going into the pond and pressing a button to save them far away. You still ought to do it.

Observing them is relevant in the sense that you are directly there, you are fully capable of acting in the now, and the situation confers reasons on you which you directly choose to respond to or ignore. Sure, if you magically know for a fact that a kid is drowning far away and you have a magic button, you should press it, since all the relevant features are shared! But when you perform ordinary actions you aren't thinking "Heh, gonna buy expensive khakis even though I could use the money on charity, because fuck starving children in Africa"; were that the case, you'd be doing something wrong.
It’s totally unclear why this is the case! In the drowning child case, maybe you deem the child worthy of saving—just like a person who doesn’t donate to a children’s hospital deems the children worthy of saving—but you just prefer not to save them and spend your money on other things.
If you deem them worthy of saving, in the sense that you see them as an end in themselves, then it's irrational not to save them. If you don't save them because you'll get your pants wet, that means you actually don't find them worth saving, at least in the relevant sense.

Matthew's responses to the Demandingness objection are a bit better, but I conclude that he fails to completely stave off the objection. 

Unintuitiveness Objection


Last, we look at Matthew's responses to the unintuitiveness of consequentialism. 

He gives a series of cases involving raping people where one’s pleasures allegedly outweighs the pain of the victim—a non-hedonist, or a desert adjusted hedonist, can just say that only some pleasures count, and ones from rape don’t.
I didn't make this explicit, but the raping-coma-patients/gang-rape cases don't only apply to hedonism. Even if you're a non-hedonist, you can think pleasures are good, and the known consequences of raping coma patients are more pleasure and no pain caused. You can also think there are other goods contributing to well-being, such as desire-satisfaction: the gang rape satisfies the desires of more people, the coma patient doesn't have any active desires which are being violated, active desires count for more, etc. I'm not convinced that non-hedonic consequentialisms have a straightforward escape hatch here, but regardless, I was attacking Matthew's view. Desert-adjusted hedonism strikes me as implausible for other reasons; it falls apart really fast when we realize there is no principled basis for valuing some pleasures over others.
He gives the example of gang rape, but the badness of gang rape scales with the harm caused, also, this isn’t relevant to consequentialism, just hedonism.
But I addressed this: the harm might plausibly be outweighed by the good outcomes.

In regards to the case where you sacrifice your life to save your friends:
On scalar utilitarianism, there aren’t facts about what you’re permitted to do, you just wouldn’t have most reason to do—that wouldn’t be the best thing you could do. But this seems plausible! It doesn’t seem like you have most reason to save your friend.
1. This seems to imply that scalar utilitarianism isn't action-guiding. Why would I adopt it as an ethical theory if I want to know which actions I can and can't do? That seems like the bare minimum of what I'd want an ethical system to do.
2. I still have no idea what it means to say you have most reason to do something, if it doesn't imply that you ought to do it.
3. If an ethical theory entails that there is no fact about whether you are permitted to, say, torture and abuse children for sadistic pleasure, I think that is evidence the ethical theory is a failure.
If we say that you have equal reason to do them or some such, then if both you and your friend start with 1,000 units of torture and can either reduce your torture by 1,000 or the other's by 500, both would be fine. But taking the action that reduces the other’s suffering by 500 makes everyone worse off.
This is really vague, because I don't know what 1,000 or 500 units of torture amount to. But I would say what I would normally say: it would be supererogatory for you to reduce your friend's pain (it is a selfless, other-regarding act), but it would also be good for you to eliminate your own torture; you're not obligated to reduce your friend's. I fail to see how this is unintuitive. You're not making everyone worse off; you're reducing your friend's torture.
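For readers trying to follow the numbers, this is how I read the payoff structure of Matthew's case; a toy enumeration with his stipulated units, nothing more:

```python
START = 1000  # each agent begins with 1,000 units of torture (stipulated)

def torture_left(my_choice: str, their_choice: str) -> int:
    """Units of torture left for 'me' after both agents act."""
    left = START
    if my_choice == "self":
        left -= 1000   # I eliminate my own torture entirely
    if their_choice == "other":
        left -= 500    # the other agent reduces my torture by 500
    return max(left, 0)

for mine in ("self", "other"):
    for theirs in ("self", "other"):
        print(f"I pick {mine:5}, friend picks {theirs:5}: "
              f"I'm left with {torture_left(mine, theirs):4} units")
# If both pick "other", each is left with 500 units; if both pick
# "self", each is left with 0. That asymmetry is Matthew's objection.
```

On my view, the "self" option remains permissible and the "other" option supererogatory, so nothing in this table strikes me as a problem; I include it only so the units are concrete.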

In regards to the case where a thief saves grandma's life while trying to steal her purse:
This stops being unintuitive when we distinguish objective and subjective wrongness. Objective wrongness is about what one would do if they were omniscient, what a perfectly knowing third party would prefer, what they have most actual reason to do. The utilitarian judgment here about objective wrongness is obvious. But the utilitarian can agree that this action was wrong based on what the person knew at the time—it just coincidentally turned out for the best.
But whether a given action is right on consequentialism tout court does not depend on the subjective states of the agent, only on what is objectively right (what objectively produces the best state of affairs). As a non-consequentialist, I take into account intentions and other subjective states of the agent when analyzing what makes an action right, but consequentialists don't think that whether an agent subjectively acted with the goal of maximizing utility is right-making or wrong-making; what matters is whether they actually maximized utility.

In regards to organ-harvesting Matthew offers the following case to put pressure on our intuitions. 
Imagine an uncontrollable epidemic afflicts humanity. It is highly contagious and eventually every single human will be affected. It causes people to fall unconscious. Five out of six people never recover and die within days. One in six people mounts an effective immune response. They recover over several days and lead normal lives. Doctors can test people on the second day, while still unconscious, and determine whether they have mounted an effective antibody response or whether they are destined to die. There is no treatment. Except one. Doctors can extract all the blood from the one in six people who do mount an effective antibody response on day 2, while they are still unconscious, and extract the antibodies. There will be enough antibodies to save 5 of those who don’t mount responses, though the extraction procedure will kill the donor. The 5 will go on to lead a normal life and the antibody protection will cover them for life.
But I still have the intuition that this is wrong, assuming the donor doesn't consent, though much less strongly than in the kidnapping doctor case, because in that case the doctor is directly kidnapping someone off the street and murdering them, as opposed to a patient already in his care who is already unconscious due to an affliction, where the doctor isn't directly murdering them but rather performing an extraction procedure that will result in their death. So even if I didn't share the intuition, this does nothing to save consequentialism from the unintuitive answer given to the kidnapping doctor case.
It sounds bad when phrased that way, but any agent who follows basic rational axioms will be modellable as maximizing some utility function. If we realize that lots of things matter, then when making decisions, you add up the things that matter on all sides, and do what produces the best things overall. This isn’t unintuitive at all. Indeed, it seems much stranger to think that you should sometimes do what makes things worse.
Not much to say here other than that I do not share Matthew's intuitions whatsoever. I reject the assumption that moral value is stackable in the same way extrinsic values are, and that human intrinsic dignity and autonomy should be respected only up to the point where respecting them will make things slightly worse off (such as enough rats getting injected with heroin). I think most people, on reflection, and certainly most ethicists, agree with me here.

In regards to the general ways I listed in which utilitarianism appears unintuitive:
But consequentialism can—just have the social welfare function include equality, for example. Additionally, utilitarianism supports a much more egalitarian global distribution than exists currently.

Utilitarianism is not principally egalitarian, though, which I take to be a problem, because egalitarianism is one of its main motivations. Sure, you can define consequentialism in such a way that it is, but Matthew and I would both agree that such a view is implausible for other reasons (you should torture someone infinitely to reduce global inequality, etc.).
These tend to be wrong, but they tend to produce bad outcomes. It seems that breaking promises is bad because it makes things worse—its badness doesn’t just float free. Any high-level theory is going to sound weird when applied to concrete cases
I take this to be Matthew agreeing that this is a case where his theory fails to track our intuitions and how we actually diagnose actions such as promise-breaking as wrong. We explicitly don't think it's wrong in virtue of the bad outcomes; it's wrong because you aren't respecting a prior commitment you made, you deceived them, you're signaling that what they want and what they believe isn't important, etc. In fact, we'd agree that even in cases where these sorts of actions do not lead to bad outcomes, you still did something wrong.
In the next section, Truth teller responds to various objections that are not, I believe, relevant to the dispute—none of them are arguments I’ve made for consequentialism.
But you did. The entire reason I included them is because of this video.

That is all. I conclude that Matthew has failed, at least by my lights, to refute the unintuitiveness objection.
